Incredibly easy to install & use
Ship Features 100x Faster.
Real Python. No DSLs.
No DSLs, no Spark or Flink jobs. Plain old Python & Pandas mean zero learning curve for you.
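Because pipeline logic is plain Python over pandas DataFrames, a transformation is just an ordinary function. A minimal sketch of the kind of code involved (generic pandas only, not the Fennel API; all names are illustrative):

```python
import pandas as pd

def enrich_orders(orders: pd.DataFrame) -> pd.DataFrame:
    # Plain pandas: derive per-user order count and average amount.
    agg = orders.groupby("user_id").agg(
        order_count=("order_id", "count"),
        avg_amount=("amount", "mean"),
    )
    return agg.reset_index()

orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "user_id": ["u1", "u1", "u2"],
    "amount": [10.0, 30.0, 5.0],
})
features = enrich_orders(orders)
```

The same function runs unchanged in tests, backfills, and live pipelines, which is where the "zero learning curve" claim comes from.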
Pipelines backfill automatically on declaration - no more pesky one-off scripts.
Fennel brings up & manages everything needed - zero dependencies on your prod infra.
Feature repository for reuse
Write standardized features once, share & reuse across all your use cases.
Best-in-class data quality tooling
No more feature or data bugs
Catch typing bugs at compile time and at the data source, thanks to strong typing.
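The idea behind catching type bugs at the data source can be sketched in plain Python: declare the schema once with type hints, then reject mistyped rows the moment they arrive, instead of deep inside a pipeline. This is a generic illustration of the technique, not Fennel's actual validation code:

```python
from dataclasses import dataclass
import pandas as pd

@dataclass
class UserEvent:
    # Schema declared once, with ordinary Python type hints.
    user_id: str
    amount: float

def validate_schema(df: pd.DataFrame, schema) -> None:
    # Reject mistyped data at the source rather than mid-pipeline.
    for col, typ in schema.__annotations__.items():
        if col not in df.columns:
            raise TypeError(f"missing column: {col}")
        if not df[col].map(lambda v: isinstance(v, typ)).all():
            raise TypeError(f"column {col!r} is not {typ.__name__}")

good = pd.DataFrame({"user_id": ["u1"], "amount": [9.5]})
bad = pd.DataFrame({"user_id": ["u1"], "amount": ["9.5"]})  # str, not float
validate_schema(good, UserEvent)  # passes silently
```

Calling `validate_schema(bad, UserEvent)` raises a `TypeError` immediately, pinpointing the offending column.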
Immutability & Versioning
Immutable & versioned features eliminate online / offline skew caused by definition changes.
Prevent unforced errors by writing unit tests across batch & realtime pipelines.
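Since a pipeline step is just a Python function over DataFrames, it can be unit-tested with ordinary tooling. A sketch with a hypothetical step (the function and column names are illustrative, not from Fennel):

```python
import pandas as pd

def count_events(events: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical pipeline step: events per user.
    return events.groupby("user_id").size().rename("n_events").reset_index()

def test_count_events():
    # Same assertion style works for batch and realtime pipelines,
    # since both run the same Python function.
    events = pd.DataFrame({"user_id": ["a", "a", "b"]})
    out = count_events(events).set_index("user_id")
    assert int(out.loc["a", "n_events"]) == 2
    assert int(out.loc["b", "n_events"]) == 1

test_count_events()  # runs under pytest or directly
```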
Compile Time Validation
Strict end-to-end lineage validation at compile time to prevent runtime errors.
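Lineage validation means checking, before anything is deployed, that every input a pipeline reads is actually declared somewhere upstream. A toy sketch of that check (not Fennel's implementation; names are illustrative):

```python
def validate_lineage(pipelines: dict, datasets: set) -> None:
    # Fail at definition ("compile") time if a pipeline reads an
    # input that is neither a declared dataset nor another pipeline.
    for name, inputs in pipelines.items():
        missing = [i for i in inputs if i not in datasets and i not in pipelines]
        if missing:
            raise ValueError(f"pipeline {name!r} reads undeclared inputs: {missing}")

datasets = {"raw_orders", "raw_users"}
pipelines = {"user_orders": ["raw_orders", "raw_users"]}
validate_lineage(pipelines, datasets)  # ok: all inputs are declared
```

A typo like `"raw_order"` would be rejected here, at declaration time, rather than surfacing as a runtime failure.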
Specify expected data distributions, get alerted when things go wrong.
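The core of a distribution expectation is simple: declare bounds on a statistic, and alert when observed data drifts outside them. A minimal sketch in plain pandas (a generic illustration, not Fennel's expectations API):

```python
import pandas as pd

def mean_within(series: pd.Series, lo: float, hi: float) -> bool:
    # Returns False (i.e. "alert") when the observed mean
    # drifts outside the declared bounds.
    mean = series.mean()
    return lo <= mean <= hi

amounts = pd.Series([9.0, 10.0, 11.0])
assert mean_within(amounts, lo=5.0, hi=15.0)            # healthy
assert not mean_within(pd.Series([100.0, 120.0]), 5.0, 15.0)  # would alert
```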
Online / Offline Skew
Single definition of feature across both offline and online scenarios.
Temporally correct streaming joins
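A temporally correct join gives each left-hand row the latest right-hand value known *as of that row's timestamp*, never a value from the future. The semantics can be sketched with `pandas.merge_asof` (illustrative data; column names are assumptions):

```python
import pandas as pd

clicks = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 10:00", "2024-01-01 12:00"]),
    "user_id": ["u1", "u1"],
})
balances = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 09:00", "2024-01-01 11:00"]),
    "user_id": ["u1", "u1"],
    "balance": [100, 250],
})

# direction="backward": each click sees only the balance known at click
# time, never a future value -- the property that prevents label leakage
# when the same join is replayed to build training data.
joined = pd.merge_asof(clicks, balances, on="ts", by="user_id",
                       direction="backward")
print(joined["balance"].tolist())  # [100, 250]
```

The 10:00 click picks up the 09:00 balance (100), and the 12:00 click the 11:00 balance (250); a naive join on `user_id` alone would leak the later value into the earlier row.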
No more stale data
Sub-second Feature Freshness
Single-digit ms response
Ultra-low Latency Serving
How it works
Read & Write Path Separation
The right abstraction for realtime feature engineering
Bring your Data
Use built-in connectors to effortlessly bring all your data to Fennel.
Derive Data via Streaming Pipelines
Query via the REST API
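Serving-time reads go over a plain HTTP POST. A sketch of assembling such a request with only the standard library; the URL, endpoint path, and payload field names here are hypothetical, not Fennel's documented API:

```python
import json
import urllib.request

def build_extract_request(features, rows):
    # Assemble a feature-extraction POST; endpoint and field
    # names are illustrative placeholders.
    body = json.dumps({"features": features, "inputs": rows}).encode()
    return urllib.request.Request(
        "https://app.example.com/api/v1/extract",  # hypothetical URL
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_extract_request(["user.order_count"], [{"user_id": "u1"}])
# urllib.request.urlopen(req)  # would perform the actual call
```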
The primary language of our backend, relying heavily on Tokio's async runtime.
Handles all inflow data. All streaming jobs read and write to Kafka.
Handles all at-rest data, with some also offloaded to Redis.
Used as the dataframe interface between user-written code and the server.
Used alongside Protobufs to write services and exchange data.
For maintaining the lifecycle of all running services.
Used for provisioning Fennel infrastructure as code.
Used as a central metadata store, with the exception of customer data.