The fastest way to ship realtime ML features

Fennel is a fully-managed, realtime feature engineering platform that combines the simplicity of Python with the performance of Rust.

The complete platform for realtime feature engineering at any scale

Fennel removes the engineering friction from deploying realtime data pipelines, enabling ML teams to iterate on features at scale.

Python-Native Authoring

Define features using Python, Pandas and other familiar libraries instead of specialized DSLs.

Powerful Data Connectors

Prebuilt data connectors to ingest data from batch, streaming, and realtime sources like Postgres, S3, Snowflake, BigQuery, and more.

Fully Managed Infra Ops

Zero operational/management concerns with fully managed infrastructure deployed in our cloud or your VPC.

Native Testing & CI/CD

Out-of-the-box unit tests, integration tests, and CI/CD to rapidly iterate and deploy features to production with confidence.

Feature Versioning & Lineage

Versioned, immutable features with full lineage tracking to maintain multiple feature versions, roll out bug fixes, and track PII data.

Drift Monitoring & Data Quality

Typed schemas, data expectations, drift monitoring, and more to prevent, catch, and alert on data quality issues.
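The idea behind typed schemas and data expectations can be sketched in plain Pandas (an illustrative sketch, not Fennel's API; the column names, dtypes, and thresholds are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "user_id": [1, 2, 3],
    "age": [25, 31, 47],
})

# Typed schema: every column must have the declared dtype.
schema = {"user_id": "int64", "age": "int64"}
for col, dtype in schema.items():
    assert df[col].dtype == dtype, f"{col} is {df[col].dtype}, expected {dtype}"

# Data expectation: values must fall in a sane range; in a real
# pipeline, violating rows would trigger an alert instead of an assert.
bad_rows = df[(df["age"] < 0) | (df["age"] > 130)]
assert bad_rows.empty, f"{len(bad_rows)} rows violate the age expectation"
```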

Founded by a team of tech veterans who built ML infrastructure serving billions of people

Founded by the creators of ML infrastructure at FAANG

Backed by top VCs

Backed by unicorn founders

Realtime Optimized

Compute & serve features in realtime

Second-Level Feature Freshness

Fast Rust-based computation engine that updates features within seconds of new data being available.

Zero Online-Offline Skew

Consistent feature definitions, immutable and versioned features, and point-in-time correct batch training data eliminate the potential for skew.

Blazing Fast Online Serving

Custom-built KV store reduces RAM usage by keeping much of the data on SSD while still delivering single-digit millisecond lookups for realtime serving.

Affordable Realtime Performance

Rust engine optimized for low and predictable memory overhead compared to JVM systems, efficient compute, out-of-the-box support for spot instances, and more to keep your costs down.

Integrates with your favorite ML tools

Built for scale with enterprise-grade privacy and reliability

Securely deploy inside your VPC

Private & Secure Deployments

Fully managed infrastructure deployed in your VPC ensures full privacy and security because data never leaves your cloud.

Reliably handle internet scale

Scale to Billions

Architecture designed to serve billions of requests per day with low-latency online serving and enterprise-grade reliability.

Enable GDPR Compliance

Team-level access control and complete lineage tracking for all of your features help you meet GDPR requirements.

FAQ

I don't have any realtime features, only offline features. Can I still use Fennel?

Absolutely! Fennel can be used for both realtime and offline features (after all, batch processing is a special case of realtime stream processing).

How does Fennel eliminate online-offline skew for realtime features?

Fennel uses the same feature definitions to power both streaming/realtime feature computation and point-in-time correct training data generation in batch mode. Further, all features are versioned and immutable, which eliminates any potential skew from differing definition versions.
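To make "point-in-time correct" concrete: each training example may only see feature values as they were at the example's timestamp, never future values. A minimal Pandas sketch of that semantics (illustrative only, not Fennel's implementation; the data is made up):

```python
import pandas as pd

# Feature values as they changed over time.
features = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01", "2024-01-10", "2024-01-20"]),
    "user_id": [1, 1, 1],
    "txn_count": [3, 7, 12],
})

# Training labels, each stamped with the moment the prediction was made.
labels = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-05", "2024-01-15", "2024-01-25"]),
    "user_id": [1, 1, 1],
    "label": [0, 1, 0],
})

# merge_asof picks, for each label, the latest feature value at or
# before the label's timestamp -- never a future value (no leakage).
train = pd.merge_asof(labels, features, on="ts", by="user_id")
print(train["txn_count"].tolist())  # [3, 7, 12]
```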

How does Fennel make writing realtime features easier?

With Fennel, you can write realtime features declaratively using familiar tools like Pandas – without having to tune Spark Streaming or Flink jobs. Fennel takes care of computing and serving correct values in both online/offline modes. In addition, Fennel handles all the infra ops – from container provisioning to online feature serving and CI/CD.

As a result, writing realtime features becomes as easy as writing a Pandas transformation.
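For a sense of what that looks like, a derived feature can be an ordinary Pandas transformation (a hypothetical per-user spend feature, not Fennel-specific code):

```python
import pandas as pd

txns = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2],
    "amount": [20.0, 35.0, 5.0, 12.0, 8.0],
})

# Per-user aggregate features, expressed as a plain groupby.
features = txns.groupby("user_id")["amount"].agg(
    total_spend="sum", avg_spend="mean"
).reset_index()
print(features)
```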

How does serving work when my production services don't run Python?

Fennel is designed to be as extensible and developer-friendly as possible. Fennel’s Python features are hosted as REST endpoints and hence can be accessed from any language.
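As a self-contained illustration of that serving model — a stand-in endpoint, not Fennel's actual API, URL scheme, or response shape — any HTTP client in any language can fetch features over REST. Python is used here only because it is this page's language; the client half could equally be curl, Java, or Go:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Stand-in for a feature-serving endpoint: returns fake feature values.
class FeatureHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        resp = json.dumps({"user_id": body["user_id"], "txn_count_7d": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(resp)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), FeatureHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: a plain HTTP POST, portable to any language.
url = f"http://127.0.0.1:{server.server_port}/features"
req = Request(url, data=json.dumps({"user_id": 1}).encode(),
              headers={"Content-Type": "application/json"})
features = json.loads(urlopen(req).read())
print(features)  # {'user_id': 1, 'txn_count_7d': 42}
server.shutdown()
```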

What is the Fennel deployment model?

You can either deploy the platform in the Fennel Cloud or deploy it inside your cloud. In either case, we manage the infrastructure resulting in zero operational overhead for you.

What is the installation/provisioning process?

Deployments in Fennel's cloud are completely hands-off, and provisioning only takes about 1-2 minutes.

Deployments in your cloud require administrative access to an AWS sub-account to provision a cluster. Once Fennel has access, the provisioning process takes about 30 minutes.

Either way, once provisioning is complete, you can pip install a Python client and get going. Fennel is completely ops/management free after that.

How does Fennel respect the safety and privacy of my data?

Whether you deploy Fennel in our cloud or yours, protections are in place to ensure that data is secure – including encryption of all data in transit and at rest.

If you want an even higher level of protection, Fennel can be deployed inside your cloud, in which case your data will never leave your cloud and will be subject to the same security policies as the rest of your infrastructure.

I’m concerned about the costs of my current feature engineering platform. Can Fennel help me with that?

A very large part of feature engineering platform cloud costs comes from RAM — both for computation (e.g., in Spark Streaming) and serving (e.g., in Redis). For computation, Fennel uses a Rust-based computation engine that keeps a low and predictable memory overhead compared to JVM systems. For serving, Fennel uses a custom-built key-value store that reduces RAM usage by keeping much of the data on SSD while still delivering blazing-fast lookups.

All these cost savings are directly passed to you.

Learn more about realtime ML

High-quality technical content on all things machine learning.

Explore the blog

Experience modern feature engineering.