Sync

Sends the local dataset and featureset definitions to the server for verification, storage and processing.

Parameters

datasets:List[Dataset]

Default Value: []

List of dataset objects to be synced.

featuresets:List[Featureset]

Default Value: []

List of featureset objects to be synced.

preview:bool

Default Value: False

If set to True, the server only provides a preview of what would happen if the sync were done, without changing any state.

Note

Since preview's main goal is to check the validity of old & new definitions, it only works with the real client/server; the mock client/server ignores it.

tier:Optional[str]

Default Value: None

Selector to optionally sync only a subset of sources, pipelines and extractors - those with matching values. Rules of selection:

  • If tier is None, all objects are selected
  • If tier is not None, an object is selected if its own selector is None, is the same as tier, or is of the form ~x for some tier x other than tier
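
For instance (a hypothetical addition, not part of the example below), a source tagged with a ~ selector is picked for every tier except the named one:

# hypothetical: selected when syncing with tier="silver" or tier="bronze",
# but skipped when syncing with tier="gold"
@source(webhook.endpoint("endpoint3"), tier="~gold")
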
from fennel.datasets import dataset, field
from fennel.sources import source, Webhook
from fennel.featuresets import featureset, feature, extractor

webhook = Webhook(name="some_webhook")

@source(webhook.endpoint("endpoint1"), tier="bronze")
@source(webhook.endpoint("endpoint2"), tier="silver")
@dataset
class Transaction:
    txid: int = field(key=True)
    amount: int
    timestamp: datetime

@featureset
class TransactionFeatures:
    txid: int = feature(id=1)
    amount: int = feature(id=2).extract(field=Transaction.amount, default=0)
    amount_is_high: bool = feature(id=3)

    @extractor(tier="bronze")
    def some_fn(cls, ts, amount: pd.Series):
        return amount.apply(lambda x: x > 100)

client.sync(
    datasets=[Transaction],
    featuresets=[TransactionFeatures],
    preview=False,  # default is also False, so didn't need to include this
    tier="silver",
)
Silver source and no extractor are synced

python

Log

Method to push data into Fennel datasets via webhook endpoints.

Parameters

webhook:str

The name of the webhook source containing the endpoint to which the data should be logged.

endpoint:str

The name of the webhook endpoint to which the data should be logged.

df:pd.Dataframe

The dataframe containing all the data that must be logged. The column of the dataframe must have the right names & types to be compatible with schemas of datasets attached to the webhook endpoint.

batch_size:int

Default Value: 1000

To prevent sending too much data in one go, the Fennel client divides the dataframe into chunks of batch_size rows each and sends the chunks one by one.

Note that the Fennel server provides an atomicity guarantee for each log call - either the whole payload of that call is accepted or none of it is. However, breaking a dataframe into chunks can lead to a situation where some chunks have been ingested but others weren't.
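
For example, a minimal sketch (assuming df is a pre-built pandas dataframe with many rows) that logs in chunks of 500 rows:

client.log(
    "some_webhook",
    "some_endpoint",
    df=df,           # assumed dataframe matching the attached dataset's schema
    batch_size=500,  # each request carries at most 500 rows
)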

Errors

Invalid webhook endpoint:

Fennel will throw an error (equivalent to 404) if no endpoint with the given specification exists.

Schema mismatch errors:

There is no explicit schema tied to a webhook endpoint - the schema comes from the datasets attached to it. As a result, the log call itself doesn't check for schema mismatch but later runtime errors may be generated async if the logged data doesn't match the schema of the attached datasets.

You may want to keep an eye on the 'Errors' tab of Fennel console after initiating any data sync.

from fennel.datasets import dataset, field
from fennel.sources import source, Webhook

# first define & sync a dataset that sources from a webhook
webhook = Webhook(name="some_webhook")

@source(webhook.endpoint("some_endpoint"))
@dataset
class Transaction:
    uid: int = field(key=True)
    amount: int
    timestamp: datetime

client.sync(datasets=[Transaction])

# log some rows to the webhook
client.log(
    "some_webhook",
    "some_endpoint",
    df=pd.DataFrame(
        data=[
            [1, 10, "2021-01-01T00:00:00"],
            [2, 20, "2021-02-01T00:00:00"],
        ],
        columns=["uid", "amount", "timestamp"],
    ),
)
Logging data to webhook via client

python

Extract

Method to query the latest value of features (typically for online inference).

Parameters

inputs:List[Union[Feature, str]]

List of features to be used as inputs to extract. Features should be provided either as Feature objects or strings representing fully qualified feature names.

outputs:List[Union[Featureset, Feature, str]]

List of features that need to be extracted. Features should be provided either as Feature objects, or Featureset objects (in which case all features under that featureset are extracted) or strings representing fully qualified feature names.

input_dataframe:pd.Dataframe

A pandas dataframe object that contains the values of all features in the inputs list. Each row of the dataframe can be thought of as one entity for which features need to be extracted.

log:bool

Default Value: False

Boolean which indicates if the extracted features should also be logged (for log-and-wait approach to training data generation).

workflow:str

Default Value: 'default'

The name of the workflow associated with the feature extraction. Only relevant when log is set to True, in which case, features associated with the same workflow are collected together. Useful if you want to separate logged features between, say, login fraud and transaction fraud.

sampling_rate:float

Default Value: 1.0

The rate at which feature data should be sampled before logging. Only relevant when log is set to True.

Returns

type:Union[pd.Dataframe, pd.Series]

Returns the extracted features as dataframe with one column for each feature in outputs. If a single output feature is requested, features are returned as a single pd.Series. Note that input features aren't returned back unless they are also present in the outputs

Errors

Unknown features:

Fennel will throw an error (equivalent to 404) if any of the input or output features doesn't exist.

Resolution error:

An error is raised when there is absolutely no way to go from the input features to the output features via any sequence of intermediate extractors.

Schema mismatch errors:

Fennel raises a run-time error if any extractor returns a value of the feature that doesn't match its stated type.

Authorization error:

Fennel checks that the passed token has sufficient permissions for each of the features/extractors - including any intermediate ones that need to be computed in order to resolve the path from the input features to the output features.

from fennel.featuresets import featureset, feature, extractor
from fennel.lib.schema import inputs, outputs

@featureset
class Numbers:
    num: int = feature(id=1)
    is_even: bool = feature(id=2)
    is_odd: bool = feature(id=3)

    @extractor
    @inputs(num)
    @outputs(is_even, is_odd)
    def my_extractor(cls, ts, nums: pd.Series):
        is_even = nums.apply(lambda x: x % 2 == 0)
        is_odd = is_even.apply(lambda x: not x)
        return pd.DataFrame({"is_even": is_even, "is_odd": is_odd})

client.sync(featuresets=[Numbers])
feature_df = client.extract(
    inputs=[Numbers.num],
    outputs=[Numbers.is_even, Numbers.is_odd],
    input_dataframe=pd.DataFrame({"Numbers.num": [1, 2, 3, 4]}),
)
pd.testing.assert_frame_equal(
    feature_df,
    pd.DataFrame(
        {
            "Numbers.is_even": [False, True, False, True],
            "Numbers.is_odd": [True, False, True, False],
        }
    ),
)
Extracting two features

python
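
When a single output feature is requested, the same call returns a pd.Series instead of a dataframe - a minimal sketch assuming the Numbers featureset defined above:

is_even = client.extract(
    inputs=[Numbers.num],
    outputs=[Numbers.is_even],  # single output => returned as pd.Series
    input_dataframe=pd.DataFrame({"Numbers.num": [1, 2, 3, 4]}),
)
assert isinstance(is_even, pd.Series)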

Extract Historical

Method to query the historical values of features. Typically used for training data generation or batch inference.

Parameters

inputs:List[Union[Feature, str]]

List of features to be used as inputs to extract. Features should be provided either as Feature objects or strings representing fully qualified feature names.

outputs:List[Union[Featureset, Feature, str]]

List of features that need to be extracted. Features should be provided either as Feature objects, or Featureset objects (in which case all features under that featureset are extracted) or strings representing fully qualified feature names.

format:"pandas" | "csv" | "json" | "parquet"

Default Value: pandas

The format of the input data

input_dataframe:pd.Dataframe

A pandas dataframe object that contains the values of all features in the inputs list. Each row of the dataframe can be thought of as one entity for which features need to be extracted.

Only relevant when format is "pandas".

input_s3:Optional[sources.S3]

Sending large volumes of the input data over the wire is often infeasible. In such cases, input data can be written to S3 and the location of the file is sent as input_s3 via S3.bucket() function of S3 connector.

This parameter makes sense only when format isn't "pandas".

When using this option, please ensure that Fennel's data connector IAM role has the ability to execute read & list operations on this bucket - talk to Fennel support if you need help.

timestamp_column:str

The name of the column containing the timestamps as of which the feature values must be computed.

output_s3:Optional[sources.S3]

Specifies the location & other details about the s3 path where the values of all the output features should be written. Similar to input_s3, this is provided via S3.bucket() function of S3 connector.

If this isn't provided, Fennel writes the results of all requests to a fixed default bucket - you can see its details from the return value of extract_historical or via Fennel Console.

When using this option, please ensure that Fennel's data connector IAM role has write permissions on this bucket - talk to Fennel support if you need help.

feature_to_column_map:Optional[Dict[Feature, str]]

Default Value: None

When reading input data from s3, sometimes the column names in s3 don't match one-to-one with the names of the input features. In such cases, a dictionary mapping features to column names can be provided.

This should be set up only when input_s3 is provided.

Returns

type:Dict[str, Any]

Immediately returns a dictionary containing the following information:

  • request_id - a random uuid assigned to this request. Fennel can be polled about the status of this request using the request_id
  • output s3 bucket - the s3 bucket where results will be written
  • output s3 path prefix - the prefix of the output s3 bucket
  • completion rate - progress of the request as a fraction between 0 and 1
  • failure rate - fraction of the input rows (between 0-1) where an error was encountered and output features couldn't be computed
  • status - the overall status of this request

A completion rate of 1.0 indicates that all processing has been completed. A completion rate of 1.0 and failure rate of 0 means that all processing has been completed successfully.
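
For illustration only, the returned dictionary might look roughly like the following; the exact key names and status strings are assumptions, not part of the documented contract:

{
    "request_id": "bf5dfe5d-0040-4405-a224-b82c7a5bf085",
    "output_bucket": "<default-or-provided-bucket>",   # hypothetical key name
    "output_prefix": "<request-specific-prefix>",      # hypothetical key name
    "completion_rate": 0.76,
    "failure_rate": 0.0,
    "status": "IN_PROGRESS",                           # hypothetical status string
}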

Errors

Unknown features:

Fennel will throw an error (equivalent to 404) if any of the input or output features doesn't exist.

Resolution error:

An error is raised when there is absolutely no way to go from the input features to the output features via any sequence of intermediate extractors.

Schema mismatch errors:

Fennel raises a run-time error and may register failure on a subset of rows if any extractor returns a value of the feature that doesn't match its stated type.

Authorization error:

Fennel checks that the passed token has sufficient permissions for each of the features/extractors - including any intermediate ones that need to be computed in order to resolve the path from the input features to the output features.


response = client.extract_historical(
    inputs=[Numbers.num],
    outputs=[Numbers.is_even, Numbers.is_odd],
    format="pandas",
    input_dataframe=pd.DataFrame(
        {
            "Numbers.num": [1, 2, 3, 4],
            "timestamp": [datetime.utcnow() - HOUR * i for i in range(4)],
        }
    ),
    timestamp_column="timestamp",
)
print(response)
Example with `format='pandas'` & default s3 output

python

from fennel.sources import S3

s3 = S3(
    name="extract_hist_input",
    aws_access_key_id="<ACCESS KEY HERE>",
    aws_secret_access_key="<SECRET KEY HERE>",
)
s3_input_connection = s3.bucket("bucket", prefix="data/user_features")
s3_output_connection = s3.bucket("bucket", prefix="output")

response = client.extract_historical(
    inputs=[Numbers.num],
    outputs=[Numbers.is_even, Numbers.is_odd],
    format="csv",
    timestamp_column="timestamp",
    input_s3=s3_input_connection,
    output_s3=s3_output_connection,
)
Example specifying input and output s3 buckets

python

Extract Historical Progress

Method to monitor the progress of a run of extract_historical query.

Parameters

request_id:str

The unique request ID returned by the extract_historical operation that needs to be tracked.

Returns

type:Dict[str, Any]

Immediately returns a dictionary containing the following information:

  • request_id - a random uuid assigned to this request. Fennel can be polled about the status of this request using the request_id
  • output s3 bucket - the s3 bucket where results will be written
  • output s3 path prefix - the prefix of the output s3 bucket
  • completion rate - progress of the request as a fraction between 0 and 1
  • failure rate - fraction of the input rows (between 0-1) where an error was encountered and output features couldn't be computed
  • status - the overall status of this request

A completion rate of 1.0 indicates that all processing has been completed. A completion rate of 1.0 and failure rate of 0 means that all processing has been completed successfully.


request_id = "bf5dfe5d-0040-4405-a224-b82c7a5bf085"
response = client.extract_historical_progress(request_id)
print(response)
Checking progress of a prior extract historical request

python
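
A sketch of polling until the request finishes; the dictionary key name used here is illustrative and may differ from the actual response:

import time

# hypothetical polling loop over the returned progress dictionary
while True:
    status = client.extract_historical_progress(request_id)
    if status.get("completion_rate", 0.0) >= 1.0:  # assumed key name
        break
    time.sleep(30)  # poll every 30 seconds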

Extract Historical Cancel Request

Method to cancel a previously issued extract_historical request.

Parameters

request_id:str

The unique request ID returned by the extract_historical operation that needs to be canceled.

Returns

type:Dict[str, Any]

Marks the request for cancellation and immediately returns a dictionary containing the following information:

  • request_id - a random uuid assigned to this request. Fennel can be polled about the status of this request using the request_id
  • output s3 bucket - the s3 bucket where results will be written
  • output s3 path prefix - the prefix of the output s3 bucket
  • completion rate - progress of the request as a fraction between 0 and 1
  • failure rate - fraction of the input rows (between 0-1) where an error was encountered and output features couldn't be computed
  • status - the overall status of this request

A completion rate of 1.0 indicates that all processing has been completed. A completion rate of 1.0 and failure rate of 0 means that all processing has been completed successfully.


request_id = "bf5dfe5d-0040-4405-a224-b82c7a5bf085"
response = client.extract_historical_cancel_request(request_id)
print(response)
Canceling extract historical request with given ID

python

Lookup

Method to lookup rows of keyed datasets.

Parameters

dataset_name:str

The name of the dataset to be looked up.

keys:List[Dict[str, Any]]

List of dict where each dict contains the value of the key fields for one row for which data needs to be looked up.

fields:List[str]

The list of field names in the dataset to be looked up.

timestamps:List[Union[int, str, datetime]]

Default Value: None

Timestamps (one per row) as of which the lookup should be done. If not set, the lookups are done as of now (i.e. the latest value for each key).

If set, the length of this list should match the number of elements in keys.

The timestamp itself can be passed either as a datetime, as a str (e.g. anything parseable by pd.to_datetime), or as an int denoting seconds/milliseconds/microseconds since epoch.
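
For instance, the same instant could be passed in any of these forms (a sketch, not exhaustive):

ts = datetime(2021, 1, 1, 0, 0, 0)   # datetime object
ts = "2021-01-01T00:00:00"           # string parseable by pd.to_datetime
ts = 1609459200                      # int seconds since epoch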

from fennel.datasets import dataset, field
from fennel.sources import source, Webhook

# first define & sync a dataset that sources from a webhook
webhook = Webhook(name="some_webhook")

@source(webhook.endpoint("some_endpoint"))
@dataset
class Transaction:
    uid: int = field(key=True)
    amount: int
    timestamp: datetime

client.sync(datasets=[Transaction])

# log some rows to the webhook
client.log(
    "some_webhook",
    "some_endpoint",
    pd.DataFrame(
        data=[
            [1, 10, "2021-01-01T00:00:00"],
            [2, 20, "2021-02-01T00:00:00"],
        ],
        columns=["uid", "amount", "timestamp"],
    ),
)

# now do a lookup to verify that the rows were logged
keys = [{"uid": 1}, {"uid": 2}, {"uid": 3}]
ts = [
    datetime(2021, 1, 1, 0, 0, 0),
    datetime(2021, 2, 1, 0, 0, 0),
    datetime(2021, 3, 1, 0, 0, 0),
]
response, found = client.lookup(
    "Transaction",
    keys=keys,
    fields=["uid", "amount"],
    timestamps=ts,
)
Example of doing lookup on dataset

python

POST /api/v1/extract

Extract

API to extract a set of output features given known values of some input features.

Headers

Content-Type:"application/json"

All Fennel REST APIs expect a content-type of application/json.

Authorization:Bearer {str}

Fennel uses bearer token for authorization. Pass along a valid token that has permissions to log data to the webhook.

Body Parameters:

inputs:str

List of fully qualified names of input features. Example name: Featureset.feature

outputs:str

List of fully qualified names of output features. Example name: Featureset.feature. Can also contain name of a featureset in which case all features in the featureset are returned.

data:json

JSON representing the dataframe of input feature values. The json can either be an array of json objects, each representing a row; or it can be a single json object where each key maps to a list of values representing a column.

Strings of json are also accepted.

log:bool

If true, the extracted features are also logged (often to serve as future training data).

workflow:string

Default Value: default

The name of the workflow with which features should be logged (only relevant when log is set to true).

sampling_rate:float

Float between 0-1 describing the sample rate to be used for logging features (only relevant when log is set to true).

Returns

The response dataframe is returned as column oriented json.

url = "{}/api/v1/extract".format(SERVER)
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <API-TOKEN>",
}
data = {"UserFeatures.userid": [1, 2, 3]}
req = {
    "outputs": ["UserFeatures"],
    "inputs": ["UserFeatures.userid"],
    "data": data,
    "log": True,
    "workflow": "test",
}

response = requests.post(url, headers=headers, json=req)
assert response.status_code == requests.codes.OK, response.json()
With column oriented data

python
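
Since the response dataframe is returned as column oriented json, it can be loaded back into a pandas dataframe - a minimal sketch assuming the request above succeeded:

df = pd.DataFrame(response.json())  # one column per requested output feature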

url = "{}/api/v1/extract".format(SERVER)
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <API-TOKEN>",
}
data = [
    {"UserFeatures.userid": 1},
    {"UserFeatures.userid": 2},
    {"UserFeatures.userid": 3},
]
req = {
    "outputs": ["UserFeatures"],
    "inputs": ["UserFeatures.userid"],
    "data": data,
    "log": True,
    "workflow": "test",
}

response = requests.post(url, headers=headers, json=req)
assert response.status_code == requests.codes.OK, response.json()
With row oriented data

python

POST /api/v1/log

Log

Method to push data into Fennel datasets via webhook endpoints via REST API.

Headers

Content-Type:"application/json"

All Fennel REST APIs expect a content-type of application/json.

Authorization:Bearer {str}

Fennel uses bearer token for authorization. Pass along a valid token that has permissions to log data to the webhook.

Body Parameters

webhook:str

The name of the webhook source containing the endpoint to which the data should be logged.

endpoint:str

The name of the webhook endpoint to which the data should be logged.

data:json

The data to be logged to the webhook. This json string could either be:

  • Row major where it's a json array of rows with each row written as a json object.

  • Column major where it's a dictionary from column name to values of that column as a json array.

url = "{}/api/v1/log".format(SERVER)
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <API-TOKEN>",
}
data = [
    {
        "user_id": 1,
        "name": "John",
        "age": 20,
        "country": "Russia",
        "timestamp": "2020-01-01",
    },
    {
        "user_id": 2,
        "name": "Monica",
        "age": 24,
        "country": "Chile",
        "timestamp": "2021-03-01",
    },
    {
        "user_id": 3,
        "name": "Bob",
        "age": 32,
        "country": "USA",
        "timestamp": "2020-01-01",
    },
]
req = {
    "webhook": "fennel_webhook",
    "endpoint": "UserInfo",
    "data": data,
}
response = requests.post(url, headers=headers, json=req)
assert response.status_code == requests.codes.OK, response.json()

python
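
The same three rows could equivalently be sent column major - a sketch reusing the url, headers and webhook/endpoint names from the request above:

data = {
    "user_id": [1, 2, 3],
    "name": ["John", "Monica", "Bob"],
    "age": [20, 24, 32],
    "country": ["Russia", "Chile", "USA"],
    "timestamp": ["2020-01-01", "2021-03-01", "2020-01-01"],
}
req = {"webhook": "fennel_webhook", "endpoint": "UserInfo", "data": data}
response = requests.post(url, headers=headers, json=req)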

Core Types

Fennel supports the following data types, expressed as native Python type hints.

int

Implemented as signed 8 byte integer (int64)

float

Implemented as signed 8 byte float with double precision

bool

Implemented as standard 1 byte boolean

str

Arbitrary sequence of utf-8 characters. Like most programming languages, str doesn't support arbitrary binary bytes though.

List[T]

List of elements of any other valid type T. Unlike Python lists, all elements must have the same type.

dict[T]

Map from str to data of any valid type T.

Fennel does not support dictionaries with arbitrary types for keys - please reach out to Fennel support if you have use cases requiring that.

Optional[T]

Same as Python Optional - permits either None or values of type T.

Embedding[int]

Denotes a list of floats of the given fixed length, i.e. Embedding[32] describes a list of 32 floats. This is the same as List[float] but enforces the list length, which is important for dot products and other similar operations on embeddings. Embedding types also use fewer bits for storage (by default 16).

datetime

Describes a timestamp, implemented as microseconds since Unix epoch (so minimum granularity is microseconds). Can be natively parsed from multiple formats though internally is stored as 8-byte signed integer describing timestamp as microseconds from epoch in UTC.

struct {k1: T1, k2: T2, ...}

Describes the equivalent of a struct or dataclass - a container containing a fixed set of fields of fixed types.

Note

Fennel uses a strong type system and post data-ingestion, data doesn't auto-coerce across types. For instance, it will be a compile or runtime error if something was expected to be of type float but received an int instead.

# imports for data types
from typing import List, Optional, Dict
from datetime import datetime
from fennel.lib.schema import struct

# imports for datasets
from fennel.datasets import dataset, field
from fennel.lib.metadata import meta

@struct  # like dataclass but verifies that fields have valid Fennel types
class Address:
    street: str
    city: str
    state: str
    zip_code: Optional[str]

@meta(owner="[email protected]")
@dataset
class Student:
    id: int = field(key=True)
    name: str
    grades: Dict[str, float]
    honors: bool
    classes: List[str]
    address: Address  # Address is now a valid Fennel type
    signup_time: datetime

python
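
A sketch of using the Embedding type as well - this assumes Embedding is importable from fennel.lib.schema alongside struct; the dataset itself is hypothetical:

from fennel.lib.schema import Embedding  # assumed import path

@meta(owner="[email protected]")
@dataset
class Document:
    doc_id: int = field(key=True)
    title: str
    title_embedding: Embedding[32]  # fixed-length list of 32 floats
    updated_at: datetime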

Type Restrictions

Fennel type restrictions allow you to put additional constraints on base types and restrict the set of valid values in some form.

regex:regex("<pattern>")

Restriction on the base type of str. Permits only the strings matching the given regex pattern.

between:between(T, low, high)

Restriction on the base type of int or float. Permits only the numbers between low and high (both inclusive by default). The left or right bound can be made exclusive by setting strict_min or strict_max to True respectively.

oneof:oneof(T, [values...])

Restricts a type T to only accept one of the given values as valid values. oneof can be thought of as a more general version of enum.

For the restriction to be valid, all the values must themselves be of type T.

Type Restriction Composition

These restricted types act as regular types -- they can be mixed/matched to form complex composite types. For instance, the following are all valid Fennel types:

  • list[regex('^[0-9]{5}$')] - a list of strings, each matching the US zip code pattern
  • oneof(Optional[int], [None, 0, 1]) - a nullable type that only takes 0 or 1 as valid values
Note

Data belonging to the restricted types is still stored & transmitted (e.g. in json encoding) as a regular base type. It's just that Fennel will reject data of base type that doesn't meet the restriction.

# imports for data types
from datetime import datetime
from fennel.lib.schema import oneof, between, regex

# imports for datasets
from fennel.datasets import dataset, field
from fennel.lib.metadata import meta
from fennel.sources import source, Webhook

webhook = Webhook(name="fennel_webhook")

@meta(owner="[email protected]")
@source(webhook.endpoint("UserInfoDataset"))
@dataset
class UserInfoDataset:
    user_id: int = field(key=True)
    name: str
    age: between(int, 0, 100, strict_min=True)
    gender: oneof(str, ["male", "female", "non-binary"])
    email: regex(r"[^@]+@[^@]+\.[^@]+")
    timestamp: datetime

python

Duration

Fennel lets you express durations in an easy to read natural language as described below:

Symbol   Unit
y        Year
w        Week
d        Day
h        Hour
m        Minute
s        Second

There is no shortcut for month because months vary a lot in duration - some are 28 days, some are 30 days and some are 31 days. A common convention in ML is to use 4 weeks to describe a month.

Note

A year is hardcoded to be exactly 365 days and doesn't take into account variance like leap years.

1"7h" -> 7 hours
2"12d" -> 12 days
3"2y" -> 2 years
4"3h 20m 4s" -> 3 hours 20 minutes and 4 seconds
5"2y 4w" -> 2 years and 4 weeks

text

Assign

Operator to add a new column to a dataset - the added column is neither a key column nor a timestamp column.

Parameters

name:str

The name of the new column to be added - must not conflict with any existing name on the dataset.

dtype:Type

The data type of the new column to be added - must be a valid Fennel supported data type.

func:Callable[pd.Dataframe, pd.Series[T]]

The function, which when given a subset of the dataset as a dataframe, returns the value of the new column for each row in the dataframe.

Fennel verifies at runtime that the returned series matches the declared dtype.

Returns

Dataset

Returns a dataset with one additional column of the given name and type same as dtype. This additional column is neither a key column nor the timestamp column.

Errors

Invalid series at runtime:

Runtime error if the value returned from the lambda isn't a pandas Series of the declared type and the same length as the input dataframe.

from fennel.datasets import dataset, field, pipeline, Dataset
from fennel.lib.schema import inputs
from fennel.sources import source, Webhook

webhook = Webhook(name="webhook")

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    uid: int = field(key=True)
    amount: int
    timestamp: datetime

@dataset
class WithSquare:
    uid: int = field(key=True)
    amount: int
    amount_sq: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def my_pipeline(cls, ds: Dataset):
        return ds.assign("amount_sq", int, lambda df: df["amount"] ** 2)
Adding new column 'amount_sq' of type int

python

from fennel.datasets import dataset, field, pipeline, Dataset
from fennel.lib.schema import inputs
from fennel.sources import source, Webhook

webhook = Webhook(name="webhook")

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    uid: int = field(key=True)
    amount: int
    timestamp: datetime

@dataset
class WithHalf:
    uid: int = field(key=True)
    amount: int
    amount_sq: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def my_pipeline(cls, ds: Dataset):
        return ds.assign(
            "amount_sq", int, lambda df: df["amount"] * 0.5
        )
Runtime error: returns float, not int

python

Filter

Operator to selectively filter out rows from a dataset.

Parameters

func:Callable[pd.Dataframe, pd.Series[bool]]

The actual filter function - takes a pandas dataframe containing a batch of rows from the input dataset and is expected to return a series of booleans of the same length. Only rows corresponding to True are retained in the output dataset.

Returns

Dataset

Returns a dataset with the same schema as the input dataset, just with some rows potentially filtered out.

Errors

Invalid series at runtime:

Runtime error if the value returned from the lambda isn't a pandas Series of bools of the same length as the input dataframe.

from fennel.datasets import dataset, field, pipeline, Dataset
from fennel.lib.schema import inputs
from fennel.sources import source, Webhook

webhook = Webhook(name="webhook")

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    city: str
    signup_time: datetime

@dataset
class Filtered:
    uid: int = field(key=True)
    city: str
    signup_time: datetime

    @pipeline(version=1)
    @inputs(User)
    def my_pipeline(cls, user: Dataset):
        return user.filter(lambda df: df["city"] != "London")
Filtering out rows where city is London

python

from fennel.datasets import dataset, field, pipeline, Dataset
from fennel.lib.schema import inputs
from fennel.sources import source, Webhook

webhook = Webhook(name="webhook")

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    city: str
    signup_time: datetime

@dataset
class Filtered:
    uid: int = field(key=True)
    city: str
    signup_time: datetime

    @pipeline(version=1)
    @inputs(User)
    def my_pipeline(cls, user: Dataset):
        return user.filter(lambda df: df["city"] + "London")
Runtime Error: Lambda returns str, not bool

python

Drop

Operator to drop one or more non-key non-timestamp columns from a dataset.

Parameters

columns:List[str]

List of columns in the incoming dataset that should be dropped. This can be passed either as unpacked *args or as a Python list.

Returns

Dataset

Returns a dataset with the same schema as the input dataset but with some columns (as specified by columns) removed.

Errors

Dropping key/timestamp columns:

Sync error on removing any key columns or the timestamp column.

Dropping non-existent columns:

Sync error on removing any column that doesn't exist in the input dataset.

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    city: str
    country: str
    weight: float
    height: float
    gender: str
    timestamp: datetime

@dataset
class Dropped:
    uid: int = field(key=True)
    gender: str
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.drop("height", "weight").drop(
            columns=["city", "country"]
        )
Can pass names via *args or kwarg columns

python

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    city: str
    timestamp: datetime

@dataset
class Dropped:
    city: str
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.drop("uid")
Can not drop key or timestamp columns

python

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    city: str
    timestamp: datetime

@dataset
class Dropped:
    uid: int = field(key=True)
    city: str
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.drop("random")
Can not drop a non-existent column

python

Select

Operator to select some columns from a dataset.

Parameters

columns:List[str]

List of columns in the incoming dataset that should be selected into the output dataset. This can be passed either as unpacked *args or as kwarg set to a Python list.

Returns

Dataset

Returns a dataset containing only the selected columns. Timestamp field is automatically included whether explicitly provided in the select or not.

Errors

Not selecting all key columns:

Select, like most other operators, can not change the key or timestamp columns. As a result, not selecting all the key columns is a sync error.

Selecting non-existent column:

Sync error to select a column that is not present in the input dataset.

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    weight: float
    height: float
    city: str
    country: str
    gender: str
    timestamp: datetime

@dataset
class Selected:
    uid: int = field(key=True)
    weight: float
    height: float
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.select("uid", "height", "weight")
Selecting uid, height & weight columns

python

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    city: str
    timestamp: datetime

@dataset
class Selected:
    city: str
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.select("height", "weight")
Did not select key uid

python

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    city: str
    timestamp: datetime

@dataset
class Selected:
    uid: int = field(key=True)
    city: str
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.select("uid", "random")
Selecting non-existent column

python

Dropnull

Operator to drop rows containing null values (aka None in Python speak) in the given columns.

Parameters

columns:Optional[List[str]]

List of columns in the incoming dataset that should be checked for presence of None values - if any such column has None for a row, the row will be filtered out from the output dataset. This can be passed either as unpacked *args or as a Python list.

If no arguments are given, columns will be all columns with the type Optional[T] in the dataset.

Returns

Dataset

Returns a dataset with the same name & number of columns as the input dataset but with the type of some columns modified from Optional[T] -> T.

Errors

Dropnull on non-optional columns:

Sync error to pass a column without an optional type.

Dropnull on non-existent columns:

Sync error to pass a column that doesn't exist in the input dataset.

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    dob: str
    city: Optional[str]
    country: Optional[str]
    gender: Optional[str]
    timestamp: datetime

@dataset
class Derived:
    uid: int = field(key=True)
    dob: str
    city: str
    country: str
    gender: Optional[str]
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.dropnull("city", "country")
Dropnull on city & country, but not gender

python

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    dob: str
    city: Optional[str]
    country: Optional[str]
    gender: Optional[str]
    timestamp: datetime

@dataset
class Derived:
    uid: int = field(key=True)
    city: str
    country: str
    gender: str
    dob: str
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.dropnull()
Applies to all optional columns if none is given explicitly

python

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    city: Optional[str]
    timestamp: datetime

@dataset
class Derived:
    uid: int = field(key=True)
    city: str
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.dropnull("random")
Dropnull on a non-existent column

python

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    city: str
    timestamp: datetime

@dataset
class Derived:
    uid: int = field(key=True)
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.dropnull("city")
Dropnull on a non-optional column

python

Transform

Catch all operator to add/remove/update columns.

Parameters

func:Callable[pd.Dataframe, pd.Dataframe]

The transform function that takes a pandas dataframe containing a batch of rows from the input dataset and returns an output dataframe of the same length, though potentially with different set of columns.

schema:Optional[Dict[str, Type]]

The expected schema of the output dataset. If not specified, the schema of the input dataset is used.

Returns

Dataset

Returns a dataset with the schema as specified in schema and rows as transformed by the transform function.

Errors

Output dataframe doesn't match the schema:

Runtime error if the dataframe returned by the transform function doesn't match the provided schema.

Modifying key/timestamp columns:

Sync error if transform tries to modify key/timestamp columns.

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    uid: int = field(key=True)
    amount: int
    timestamp: datetime

@dataset
class WithSquare:
    uid: int = field(key=True)
    amount: int
    amount_sq: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def pipeline(cls, ds: Dataset):
        schema = ds.schema()
        schema["amount_sq"] = int
        return ds.transform(
            lambda df: df.assign(amount_sq=df["amount"] ** 2), schema
        )  # noqa
Adding column amount_sq

python

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    uid: int = field(key=True)
    amount: int
    timestamp: datetime

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df["user"] = df["uid"]
    df.drop(columns=["uid"], inplace=True)
    return df

@dataset
class Derived:
    user: int = field(key=True)
    amount: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def pipeline(cls, ds: Dataset):
        schema = {"user": int, "amount": int, "timestamp": datetime}
        return ds.transform(transform, schema)
Modifying key or timestamp columns

python

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    uid: int = field(key=True)
    amount: int
    timestamp: datetime

@dataset
class WithHalf:
    uid: int = field(key=True)
    amount: int
    amount_sq: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def pipeline(cls, ds: Dataset):
        schema = ds.schema()
        schema["amount_sq"] = int
        return ds.transform(
            lambda df: df.assign(amount_sq=str(df["amount"])), schema
        )  # noqa
Runtime error: amount_sq is of type int, not str

python

Explode

Operator to explode lists in a single row to form multiple rows, analogous to the explode function in Pandas.

Only applicable to keyless datasets.

Parameters

columns:List[str]

The list of columns to explode. This list can be passed either as unpacked *args or kwarg columns mapping to an explicit list.

All the columns should be of type List[T] for some T in the input dataset and after explosion, they get converted to a column of type Optional[T].

Returns

Dataset

Returns a dataset with the same number & name of columns as the input dataset but with the type of exploded columns modified from List[T] to Optional[T].

Empty lists are converted to None values (hence the output types need to be Optional[T]).

Errors

Exploding keyed datasets:

Sync error to apply explode on an input dataset with key columns.

Exploding non-list columns:

Sync error to explode using a column that is not of the type List[T].

Exploding non-existent columns:

Sync error to explode using a column that is not present in the input dataset.

Unequal size lists in multi-column explode:

For a given row, all the columns getting exploded must have lists of the same length, otherwise a runtime error is raised. Note that the lists can be of different lengths across rows.

@source(webhook.endpoint("Orders"))
@dataset
class Orders:
    uid: int
    skus: List[int]
    prices: List[float]
    timestamp: datetime

@dataset
class Derived:
    uid: int
    sku: Optional[int]
    price: Optional[float]
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Orders)
    def pipeline(cls, ds: Dataset):
        return ds.explode("skus", "prices").rename(
            {"skus": "sku", "prices": "price"}
        )
Exploding skus and prices together

python

@source(webhook.endpoint("Orders"))
@dataset
class Orders:
    uid: int
    price: float
    timestamp: datetime

@dataset
class Derived:
    uid: int
    price: float
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Orders)
    def pipeline(cls, ds: Dataset):
        return ds.explode("price")
Exploding a non-list column

python

@source(webhook.endpoint("Orders"))
@dataset
class Orders:
    uid: int
    price: List[float]
    timestamp: datetime

@dataset
class Derived:
    uid: int
    price: float
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Orders)
    def pipeline(cls, ds: Dataset):
        return ds.explode("price", "random")
Exploding a non-existent column

python

Join

Operator to join two datasets. The right hand side dataset must have one or more key columns and the join operation is performed on these columns.

Parameters

dataset:Dataset

The right hand side dataset to join this dataset with. RHS dataset must be a keyed dataset and must also be an input to the pipeline (vs being an intermediary dataset derived within a pipeline itself).

how:"inner" | "left"

Required kwarg indicating whether the join should be an inner join (how="inner") or a left-outer join (how="left"). With "left", the output dataset may have a row even if there is no matching row on the right side.

on:Optional[List[str]]

Default Value: None

Kwarg that specifies the list of fields along which join should happen. If present, both left and right side datasets must have fields with these names and matching data types. This list must be identical to the names of all key columns of the right hand side.

If this isn't set, left_on and right_on must be set instead.

left_on:Optional[List[str]]

Default Value: None

Kwarg that specifies the list of fields from the left side dataset that should be used for joining. If this kwarg is set, right_on must also be set. Note that right_on must be identical to the names of all the key columns of the right side.

right_on:Optional[List[str]]

Default Value: None

Kwarg that specifies the list of fields from the right side dataset that should be used for joining. If this kwarg is setup, left_on must also be set. The length of left_on and right_on must be the same and corresponding fields on both sides must have the same data types.

within:Tuple[Duration, Duration]

Default Value: ("forever", "0s")

Optional kwarg specifying the time window relative to the left side timestamp within which the join should be performed. For a tuple (d1, d2), this can be seen as adding another condition to the join like WHERE left_time - d1 < right_time AND right_time < left_time + d2

  • The first value in the tuple represents how far back in time should a join happen. The term "forever" means that we can go infinitely back in time when searching for an event to join from the left-hand side data.
  • The second value in the tuple represents how far ahead in time we can go to perform a join. This is useful in cases when the corresponding RHS data of the join can come later. The default value for this parameter is ("forever", "0s") which means that we can go infinitely back in time and the RHS data should be available for the event time of the LHS data.

Returns

Dataset

Returns a dataset representing the joined dataset having the same keys & timestamp columns as the LHS dataset.

The output dataset has all the columns from the left dataset and all non-key non-timestamp columns from the right dataset.

If the join was of type inner, the type of a joined RHS column of type T stays T but if the join was of type left, the type in the output dataset becomes Optional[T] if it was T on the RHS side.

Errors

Join with non-key dataset on the right side:

Sync error to do a join with a dataset that doesn't have key columns.

Join with intermediate dataset:

Sync error to do a join with a dataset that is not an input to the pipeline but instead is an intermediate dataset derived during the pipeline itself.

Post-join column name conflict:

Sync error if join will result in a dataset having two columns of the same name. A common way to work-around this is to rename columns via the rename operator before the join.

Mismatch in columns to be joined:

Sync error if the number/type of the join columns on the left and right side don't match.

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    uid: int
    merchant: int
    amount: int
    timestamp: datetime

@source(webhook.endpoint("MerchantCategory"))
@dataset
class MerchantCategory:
    merchant: int = field(key=True)  # won't show up in joined dataset
    category: str
    updated_at: datetime  # won't show up in joined dataset

@dataset
class WithCategory:
    uid: int
    merchant: int
    amount: int
    timestamp: datetime
    category: str

    @pipeline(version=1)
    @inputs(Transaction, MerchantCategory)
    def pipeline(cls, tx: Dataset, merchant_category: Dataset):
        return tx.join(merchant_category, on=["merchant"], how="inner")
Inner join on 'merchant'

python
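
A hypothetical variant of the pipeline body above using within to only match MerchantCategory rows seen at most one day before each transaction (a sketch, not part of the original example):

return tx.join(
    merchant_category,
    on=["merchant"],
    how="inner",
    within=("1d", "0s"),  # only RHS rows from the last day, none from the future
)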

Groupby

Operator to group rows of incoming datasets to be processed by the next operator.

Technically, groupby isn't a standalone operator by itself since its output isn't a valid dataset. Instead, it becomes a valid operator when followed by first, aggregate, or window operators.

Parameters

keys:List[str]

List of keys in the incoming dataset along which the rows should be grouped. This can be passed as unpacked *args or a Python list.

Errors

Grouping by non-existent columns:

Sync error if trying to group by columns that don't exist in the input dataset.

Grouping by timestamp column:

Sync error if trying to do a groupby via the timestamp column of the input dataset.

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    uid: int
    category: str
    timestamp: datetime

@dataset
class FirstInCategory:
    category: str = field(key=True)
    uid: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def pipeline(cls, transactions: Dataset):
        return transactions.groupby("category").first()
Groupby category before using first

python

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    uid: int
    category: str
    timestamp: datetime

@dataset
class FirstInCategory:
    category: str = field(key=True)
    uid: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def pipeline(cls, transactions: Dataset):
        return transactions.groupby("non_existent_column").first()
Groupby using a non-existent column

python

Aggregate

Operator to do continuous window aggregations. Aggregate operator must always be preceded by a groupby operator.

Parameters

aggregates:List[Aggregation]

Positional argument specifying the list of aggregations to apply on the grouped dataset. This list can be passed either as unpacked *args or as an explicit list as the first positional argument.

See aggregations for the full list of aggregate functions.

Returns

Dataset

Returns a dataset where all columns passed to groupby become the key columns, the timestamp column stays as it is and one column is created for each aggregation.

The type of each aggregated column depends on the aggregate and the type of the corresponding column in the input dataset.

Note

Aggregate is the terminal operator - no other operator can follow it and no other datasets can be derived from the dataset containing this pipeline.

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    uid: int
    amount: int
    timestamp: datetime

@dataset
class Aggregated:
    uid: int = field(key=True)
    total: int
    count_1d: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def pipeline(cls, ds: Dataset):
        return ds.groupby("uid").aggregate(
            Count(window="1d", into_field="count_1d"),
            Sum(of="amount", window="forever", into_field="total"),
        )
Aggregate count & sum of transactions in rolling windows

python

First

Operator to find the first element of a group by the row timestamp. First operator must always be preceded by a groupby operator.

Parameters

The first operator does not take any parameters.

Returns

The returned dataset's fields are the same as the input dataset, with the grouping fields as the keys.

For each group formed by grouping, one row is chosen having the lowest value in the timestamp field. In case of ties, the first seen row wins.

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    uid: int
    amount: int
    timestamp: datetime

@dataset
class FirstOnly:
    uid: int = field(key=True)
    amount: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def pipeline(cls, ds: Dataset):
        return ds.groupby("uid").first()
Dataset with just the first transaction of each user

python

Dedup

Operator to dedup keyless datasets (e.g. event streams).

Parameters

by:Optional[List[str]]

Default Value: None

The list of columns to use for identifying duplicates. If not specified, all the columns are used for identifying duplicates.

Two rows of the input dataset are considered duplicates if and only if they have the same values for the timestamp column and all the by columns.

Returns

Dataset

Returns a keyless dataset having the same schema as the input dataset but with some duplicated rows filtered out.

Errors

Dedup on dataset with key columns:

Sync error to apply dedup on a keyed dataset.

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    txid: int
    uid: int
    amount: int
    timestamp: datetime

@dataset
class Deduped:
    txid: int
    uid: int
    amount: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def pipeline(cls, ds: Dataset):
        return ds.dedup(by="txid")
Dedup using txid and timestamp

python

@source(webhook.endpoint("Transaction"))
@dataset
class Transaction:
    txid: int
    uid: int
    amount: int
    timestamp: datetime

@dataset
class Deduped:
    txid: int
    uid: int
    amount: int
    timestamp: datetime

    @pipeline(version=1)
    @inputs(Transaction)
    def pipeline(cls, ds: Dataset):
        return ds.dedup()
Dedup using all the fields

python

Rename

Operator to rename columns of a dataset.

Parameters

columns:Dict[str, str]

Dictionary mapping from old column names to their new names.

All columns should still have distinct and valid names post renaming.

Returns

Dataset

Returns a dataset with the same schema as the input dataset, just with the columns renamed.

Errors

Renaming non-existent column:

Sync error if there is no existing column with name matching each of the keys in the rename dictionary.

Conflicting column names post-rename:

Sync error if after renaming, there will be two columns in the dataset having the same name.

@source(webhook.endpoint("User"))
@dataset
class User:
    uid: int = field(key=True)
    weight: float
    height: float
    timestamp: datetime

@dataset
class Derived:
    uid: int = field(key=True)
    weight_lb: float
    height_in: float
    timestamp: datetime

    @pipeline(version=1)
    @inputs(User)
    def pipeline(cls, user: Dataset):
        return user.rename(
            {"weight": "weight_lb", "height": "height_in"}
        )

python

Kafka

Data connector to any data store that speaks the Kafka protocol (e.g. Native Kafka, MSK, Redpanda etc.)

Cluster Parameters

name:str

A name to identify the source. This name should be unique across ALL sources.

bootstrap_servers:str

This is a list of the addresses of the Kafka brokers in a "bootstrap" Kafka cluster that a Kafka client connects to initially to bootstrap itself and discover the rest of the brokers in the cluster.

Addresses are written as host & port pairs and can be specified either as a single server (e.g. localhost:9092) or a comma separated list of several servers (e.g. localhost:9092,another.host:9092).

security_protocol:"PLAINTEXT" | "SASL_PLAINTEXT" | "SASL_SSL"

Protocol used to communicate with the brokers.

sasl_mechanism:Optional[str]

SASL mechanism (e.g. SCRAM-SHA-256, PLAIN) to use for authentication.

sasl_plain_username:Optional[str]

SASL username.

sasl_plain_password:Optional[str]

SASL password.

Topic Parameters

topic:str

The name of the kafka topic that needs to be sourced into the dataset.

format:"json" | Avro

Default Value: json

The format of the data in the Kafka topic. Both "json" and Avro are supported.

Errors

Connectivity problems:

Fennel server tries to connect with the Kafka broker during the sync operation itself to validate connectivity - as a result, incorrect URL/Username/Password etc will be caught at sync time itself as an error.

Note: Mock client can not talk to any external data source and hence is unable to do this validation at sync time.

Schema mismatch errors:

Schema validity of data in Kafka can only be checked at runtime. Any rows that can not be parsed are rejected. Please keep an eye on the 'Errors' tab of Fennel console after initiating any data sync.

from fennel.sources import source, Kafka
from fennel.datasets import dataset, field

kafka = Kafka(
    name="my_kafka",
    bootstrap_servers="localhost:9092",  # could come via os env var too
    security_protocol="SASL_PLAINTEXT",
    sasl_mechanism="PLAIN",
    sasl_plain_username=os.environ["KAFKA_USERNAME"],
    sasl_plain_password=os.environ["KAFKA_PASSWORD"],
)

@source(kafka.topic("user", format="json"))
@dataset
class SomeDataset:
    uid: int = field(key=True)
    email: str
    timestamp: datetime
Sourcing json data from kafka to a dataset

python

Kinesis

Data connector to ingest data from AWS Kinesis.

Parameters for Defining Source

name:str

A name to identify the source. The name should be unique across all Fennel sources.

role_arn:str

The arn of the role that Fennel should use to access the Kinesis stream. The role must already exist and Fennel's principal must have been given the permission to assume this role (see below for details or talk to Fennel support if you need help).

Stream Parameters

stream_arn:str

The arn of the Kinesis stream. The corresponding role_arn must have appropriate permissions for this stream. Providing a stream that either doesn't exist or can not be read using the given role_arn will result in an error during the sync operation.

init_position:str | datetime | float | int

The initial position in the stream from which Fennel should start ingestion. See Kinesis ShardIteratorType for more context. Allowed values are:

  • "latest" - start from the latest data (starting a few minutes after sync)
  • "trim_horizon"- start from the oldest data that hasn't been trimmed/expired yet.
  • datetime - start from the position denoted by this timestamp (i.e. equivalent to AT_TIMESTAMP in Kinesis vocabulary).

If choosing the datetime option, the timestamp can be specified as a datetime object, as an int representing seconds since the epoch, as a float representing {seconds}.{microseconds} since the epoch, or as an ISO-8601 formatted str.

Note that this timestamp is the time attached with the Kinesis message itself at the time of production, not any timestamp field inside the message.
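For illustration, the following are equivalent ways of expressing the same init_position (a minimal sketch; the particular instant is arbitrary):

from datetime import datetime, timezone

# All four values denote the same instant: 5th Jan 2023, 00:00:00 UTC
init_position = datetime(2023, 1, 5, tzinfo=timezone.utc)  # datetime object
init_position = 1672876800                                 # int: seconds since epoch
init_position = 1672876800.0                               # float: {seconds}.{microseconds}
init_position = "2023-01-05T00:00:00Z"                     # ISO-8601 formatted str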

format:"json" | Avro

The format of the data in the Kinesis stream. Most common value is "json" though Fennel also supports Avro.

Errors

Connectivity problems:

Fennel server tries to connect with Kinesis during the sync operation itself to validate connectivity - as a result, incorrect stream/role ARNs or insufficient permissions will be caught at sync time itself as an error.

Note: Mock client can not talk to any external data source and hence is unable to do this validation at sync time.

Schema mismatch errors:

Schema validity of data can only be checked at runtime. Any rows that can not be parsed are rejected. Please keep an eye on the 'Errors' tab of Fennel console after initiating any data sync.

1from fennel.sources import source, Kinesis
2from fennel.datasets import dataset, field
3
4kinesis = Kinesis(
5    name="my_kinesis",
6    role_arn=os.environ["KINESIS_ROLE_ARN"],
7)
8
9stream = kinesis.stream(
10    stream_arn=os.environ["KINESIS_ORDERS_STREAM_ARN"],
11    init_position=datetime(2023, 1, 5),  # Start ingesting from Jan 5, 2023
12    format="json",
13)
14
15@source(stream)
16@dataset
17class Orders:
18    uid: int
19    order_id: str
20    amount: float
21    timestamp: datetime
Using explicit timestamp as init position

python

1from fennel.sources import source, Kinesis
2from fennel.datasets import dataset, field
3
4kinesis = Kinesis(
5    name="my_kinesis",
6    role_arn=os.environ["KINESIS_ROLE_ARN"],
7)
8
9stream = kinesis.stream(
10    stream_arn=os.environ["KINESIS_ORDERS_STREAM_ARN"],
11    init_position="latest",
12    format="json",
13)
14
15@source(stream)
16@dataset
17class Orders:
18    uid: int
19    order_id: str
20    amount: float
21    timestamp: datetime
Using latest as init position

python

Managing Kinesis Access

Fennel creates a special role with name prefixed by FennelDataAccessRole- in your AWS account for role-based access. The role corresponding to the role_arn passed to the Kinesis source should have the following trust policy, allowing this special Fennel role to assume the Kinesis role.

See Trust Policy

Specify the exact role_arn in the form arn:aws:iam::<fennel-data-plane-account-id>:role/<FennelDataAccessRole-...> without any wildcards.

1{
2    "Version": "2012-10-17",
3    "Statement": [
4        {
5            "Sid": "",
6            "Effect": "Allow",
7            "Principal": {
8                "AWS": [
9                    "<role_arn>"
10                ]
11            },
12            "Action": "sts:AssumeRole"
13        }
14    ]
15}

JSON

Also attach the following permission policy. Add more streams to the Resource field if more than one stream needs to be consumed via this role. Here, account-id refers to your account where the stream lives.

1{
2  "Version": "2012-10-17",
3  "Statement": [
4    {
5      "Sid": "AllowKinesisAccess",
6      "Effect": "Allow",
7      "Action": [
8        "kinesis:DescribeStream",
9        "kinesis:DescribeStreamSummary",
10        "kinesis:DescribeStreamConsumer",
11        "kinesis:RegisterStreamConsumer",
12        "kinesis:ListShards",
13        "kinesis:GetShardIterator",
14        "kinesis:SubscribeToShard",
15        "kinesis:GetRecords"
16      ],
17      "Resource": [
18        "arn:aws:kinesis:<region>:<account-id>:stream/<stream-name>",
19        "arn:aws:kinesis:<region>:<account-id>:stream/<stream-name>/*"
20      ]
21    }
22  ]
23}

JSON

Webhook

A push-based data connector that makes it convenient to send arbitrary JSON data to Fennel. Data can be pushed to a webhook endpoint either via the REST API or via the Python SDK.

Source Parameters

name:str

A name to identify the source. This name should be unique across all Fennel sources.

Connector Parameters

endpoint:str

The endpoint for the given webhook to which the data will be sent.

A single webhook could be visualized as a single Kafka cluster with each endpoint being somewhat analogous to a topic. A single webhook source can have as many endpoints as required.

Multiple datasets could be reading from the same webhook endpoint - in which case, they all get the exact same data.
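For instance, a minimal sketch (with hypothetical dataset names) of two datasets reading the exact same endpoint and therefore receiving identical rows:

from fennel.sources import source, Webhook
from fennel.datasets import dataset
from datetime import datetime

webhook = Webhook(name="prod_webhook")

@source(webhook.endpoint("Activity"))
@dataset
class RawActivity:
    uid: int
    action: str
    timestamp: datetime

@source(webhook.endpoint("Activity"))  # same endpoint, same data
@dataset
class ActivityMirror:
    uid: int
    action: str
    timestamp: datetime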

Errors

Schema mismatch errors:

Schema validity of data can only be checked at runtime. Any rows that can not be parsed are rejected. Please keep an eye on the 'Errors' tab of Fennel console after initiating any data sync.

Note

Unlike all other sources, Webhook does work with the mock client. As a result, it's very effective for quick prototyping and unit testing.

1from fennel.sources import source, Webhook
2from fennel.datasets import dataset, field
3
4webhook = Webhook(name="prod_webhook")
5
6@source(webhook.endpoint("User"))
7@dataset
8class User:
9    uid: int = field(key=True)
10    email: str
11    timestamp: datetime
12
13@source(webhook.endpoint("Transaction"))
14@dataset
15class Transaction:
16    txid: int
17    uid: int
18    amount: float
19    timestamp: datetime
Two datasets sourcing from endpoints of the same webhook

python

1df = pd.DataFrame(
2    {
3        "uid": [1, 2, 3],
4        "email": ["[email protected]", "[email protected]", "[email protected]"],
5        "timestamp": [datetime.now(), datetime.now(), datetime.now()],
6    }
7)
8client.log("prod_webhook", "User", df)
Pushing data into webhook via Python SDK

python

1import requests
2
3url = "{}/api/v1/log".format(os.environ["FENNEL_SERVER_URL"])
4headers = {"Content-Type": "application/json"}
5data = [
6    {
7        "uid": 1,
8        "email": "[email protected]",
9        "timestamp": 1614556800,
10    },
11    {
12        "uid": 2,
13        "email": "[email protected]",
14        "timestamp": 1614556800,
15    },
16]
17req = {
18    "webhook": "prod_webhook",
19    "endpoint": "User",
20    "data": data,
21}
22requests.post(url, headers=headers, json=req)
Pushing data into webhook via REST API

python

MySQL

Data connector to MySQL databases.

Database Parameters

name:str

A name to identify the source. The name should be unique across all Fennel sources.

host:str

The hostname of the database.

port:Optional[str]

Default Value: 3306

The port to connect to.

db_name:str

The name of the MySQL database to establish a connection with.

username:str

The username which should be used to access the database. This username should have access to the database db_name.

password:str

The password associated with the username.

jdbc_params:Optional[str]

Default Value: None

Additional properties to pass to the JDBC URL string when connecting to the database formatted as key=value pairs separated by the symbol &. For instance: key1=value1&key2=value2.

Error

If you see a 'Cannot create a PoolableConnectionFactory' error, try setting jdbc_params to enabledTLSProtocols=TLSv1.2

Table Parameters

table:str

The name of the table within the database that should be ingested.

cursor:str

The name of the field in the table that acts as the cursor for ingestion, i.e. a field that is approximately monotonic and only goes up with time.

Fennel issues queries of the form select * from table where {cursor} >= {last_cursor - disorder} to get data it hasn't seen before. Auto-increment IDs or timestamps corresponding to modified_at (rather than created_at, unless rows never change after creation) are good contenders.

Note that this field doesn't even need to be a part of the Fennel dataset.
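For instance, a minimal sketch (assuming a hypothetical auto-increment column seq_id in the MySQL table, and a mysql connector object defined as in the example below) where the cursor column isn't declared on the dataset at all:

@source(mysql.table("user", cursor="seq_id"), every="1m")
@dataset
class User:
    uid: int = field(key=True)
    email: str
    updated_at: datetime  # seq_id drives ingestion but isn't a dataset field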

Warning

It is recommended to put an index on the cursor field so that Fennel ingestion queries don't create too much load on your MySQL database.

Errors

Connectivity Issues:

Fennel tries to test the connection with your MySQL during sync itself so any connectivity issue (e.g. wrong host name, username, password etc.) is flagged as an error during sync with the real Fennel servers.

Note: Mock client can not talk to any external data source and hence is unable to do this validation at sync time.

Schema mismatch errors:

Schema validity of data in MySQL is checked at runtime. Any rows that can not be parsed are rejected. Please keep an eye on the 'Errors' tab of Fennel console after initiating any data sync.

1from fennel.sources import source, MySQL
2from fennel.datasets import dataset, field
3
4mysql = MySQL(
5    name="my_mysql",
6    host="my-favourite-mysql.us-west-2.rds.amazonaws.com",
7    port=3306,  # could be omitted, defaults to 3306
8    db_name=os.environ["DB_NAME"],
9    username=os.environ["MYSQL_USERNAME"],
10    password=os.environ["MYSQL_PASSWORD"],
11    jdbc_params="enabledTLSProtocols=TLSv1.2",
12)
13
14@source(mysql.table("user", cursor="updated_at"), every="1m")
15@dataset
16class User:
17    uid: int = field(key=True)
18    email: str
19    created_at: datetime
20    updated_at: datetime = field(timestamp=True)
Sourcing a dataset from a MySQL table

python

Postgres

Data connector to Postgres databases.

Database Parameters

name:str

A name to identify the source. The name should be unique across all Fennel sources.

host:str

The hostname of the database.

port:Optional[str]

Default Value: 5432

The port to connect to.

db_name:str

The name of the Postgres database to establish a connection with.

username:str

The username which should be used to access the database. This username should have access to the database db_name.

password:str

The password associated with the username.

jdbc_params:Optional[str]

Default Value: None

Additional properties to pass to the JDBC URL string when connecting to the database formatted as key=value pairs separated by the symbol &. For instance: key1=value1&key2=value2.

Error

If you see a 'Cannot create a PoolableConnectionFactory' error, try setting jdbc_params to enabledTLSProtocols=TLSv1.2

Table Parameters

table:str

The name of the table within the database that should be ingested.

cursor:str

The name of the field in the table that acts as the cursor for ingestion, i.e. a field that is approximately monotonic and only goes up with time.

Fennel issues queries of the form select * from table where {cursor} >= {last_cursor - disorder} to get data it hasn't seen before. Auto-increment IDs or timestamps corresponding to modified_at (rather than created_at, unless rows never change after creation) are good contenders.

Note that this field doesn't even need to be a part of the Fennel dataset.

Warning

It is recommended to put an index on the cursor field so that Fennel ingestion queries don't create too much load on your Postgres database.

Errors

Connectivity Issues:

Fennel tries to test the connection with your Postgres during sync itself so any connectivity issue (e.g. wrong host name, username, password etc.) is flagged as an error during sync with the real Fennel servers.

Note: Mock client can not talk to any external data source and hence is unable to do this validation at sync time.

Schema mismatch errors:

Schema validity of data in Postgres is checked at runtime. Any rows that can not be parsed are rejected. Please keep an eye on the 'Errors' tab of Fennel console after initiating any data sync.

1from fennel.sources import source, Postgres
2from fennel.datasets import dataset, field
3
4postgres = Postgres(
5    name="my_postgres",
6    host="my-favourite-pg.us-west-2.rds.amazonaws.com",
7    port=5432,  # could be omitted, defaults to 5432
8    db_name=os.environ["DB_NAME"],
9    username=os.environ["POSTGRES_USERNAME"],
10    password=os.environ["POSTGRES_PASSWORD"],
11    jdbc_params="enabledTLSProtocols=TLSv1.2",
12)
13
14@source(postgres.table("user", cursor="updated_at"), every="1m")
15@dataset
16class User:
17    uid: int = field(key=True)
18    email: str
19    created_at: datetime
20    updated_at: datetime = field(timestamp=True)
Sourcing a dataset from a Postgres table

python

Snowflake

Data connector to Snowflake databases.

Database Parameters

name:str

A name to identify the source. The name should be unique across all Fennel sources.

account:str

Snowflake account identifier. This is the first part of the URL used to access Snowflake. For example, if the URL is https://<account>.snowflakecomputing.com, then the account is <account>.

This is usually of the form <ORG_ID>-<ACCOUNT_ID>. Refer to the Snowflake documentation to find the account identifier.

role:str

The role that should be used by Fennel to access Snowflake.

warehouse:str

The warehouse that should be used to access Snowflake.

db_name:str

The name of the database where the relevant data resides.

src_schema:str

The schema where the required data table(s) resides.

username:str

The username which should be used to access Snowflake. This username should have required permissions to assume the provided role.

password:str

The password associated with the username.

Table Parameters

table:str

The name of the table within the database that should be ingested.

cursor:str

The name of the field in the table that acts as the cursor for ingestion, i.e. a field that is approximately monotonic and only goes up with time.

Fennel issues queries of the form select * from table where {cursor} >= {last_cursor - disorder} to get data it hasn't seen before. Auto-increment IDs or timestamps corresponding to modified_at (rather than created_at, unless rows never change after creation) are good contenders.

Note that this field doesn't even need to be a part of the Fennel dataset.

Errors

Connectivity Issues:

Fennel tries to test the connection with your Snowflake during sync itself so any connectivity issue (e.g. wrong account identifier, username, password etc.) is flagged as an error during sync with the real Fennel servers.

Note: Mock client can not talk to any external data source and hence is unable to do this validation at sync time.

Schema mismatch errors:

Schema validity of data in Snowflake is checked at runtime. Any rows that can not be parsed are rejected. Please keep an eye on the 'Errors' tab of Fennel console after initiating any data sync.

1from fennel.sources import source, Snowflake
2from fennel.datasets import dataset, field
3
4snowflake = Snowflake(
5    name="my_snowflake",
6    account="VPECCVJ-MUB03765",
7    warehouse="TEST",
8    db_name=os.environ["DB_NAME"],
9    src_schema="PUBLIC",
10    role="ACCOUNTADMIN",
11    username=os.environ["SNOWFLAKE_USERNAME"],
12    password=os.environ["SNOWFLAKE_PASSWORD"],
13)
14
15@source(snowflake.table("User", cursor="timestamp"))
16@dataset
17class UserClick:
18    uid: int
19    ad_id: int
20    timestamp: datetime

python

S3

Data connector to source data from S3.

Account Parameters

name:str

A name to identify the source. The name should be unique across all Fennel sources.

aws_access_key_id:Optional[str]

Default Value: None

AWS Access Key ID. This field is not required if role-based access is used or if the bucket is public.

aws_secret_access_key:Optional[str]

Default Value: None

AWS Secret Access Key. This field is not required if role-based access is used or if the bucket is public.

Bucket Parameters

bucket:str

The name of the S3 bucket where the data files exist.

prefix:Optional[str]

Default Value: None

The prefix of the bucket (as relative path within bucket) where the data files exist. For instance, some-folder/ or A/B/C are all valid prefixes. Prefix can not have any wildcard characters.

Exactly one of prefix or path must be provided.

path:Optional[str]

Default Value: None

A / delimited path (relative to the bucket) describing the objects to be ingested. The valid path parts are:

  • static string of alphanumeric characters, underscores, hyphens or dots.
  • * wild card - this must be the entire path part: */* is valid but foo*/ is not.
  • string with a strftime format specifier (e.g. yyyymmdd=%Y%m%d)

If you have a large volume of data or objects and your bucket is time partitioned, it's highly recommended to include details of time partitioning in your path instead of providing * - Fennel can use this information to optimize the ingestion.

For example, if your bucket has the structure orders/{country}/date={date}/store={store}/{file}.json, provide the path orders/*/date=%Y%m%d/*/*

Exactly one of prefix or path must be provided.

Warning

Path is currently only available in beta - please request Fennel support to enable this.

format:str

Default Value: csv

The format of the files you'd like to ingest. Valid values are "csv", "parquet", "json", "delta" or "hudi".

delimiter:Optional[str]

Default Value: ,

The character delimiting individual cells in the CSV data - only relevant when format is CSV, otherwise it's ignored.

The default value is "," can be overridden by any other 1-character string. For example, to use tab-delimited data enter "\t".

Errors

Connectivity or authentication errors:

Fennel servers try to do some lightweight operations on the bucket during the sync operation - all connectivity or authentication-related errors should be caught during the sync itself.

Note: Mock client can not talk to any external data source and hence is unable to do this validation at sync time.

Schema mismatch errors:

Schema validity of data in S3 can only be checked at runtime. Any rows that can not be parsed are rejected. Please keep an eye on the 'Errors' tab of Fennel console after initiating any data sync.

Enabling IAM Access

Fennel creates a role with name prefixed by FennelDataAccessRole- in your AWS account for role-based access. In order to use IAM access for S3, please ensure that this role has permission to read and list files on the buckets of interest.

With that ready, simply don't specify aws_access_key_id and aws_secret_access_key, and Fennel will automatically fall back to IAM-based access.
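For example, a minimal sketch of an S3 source relying purely on role-based (IAM) access:

from fennel.sources import source, S3
from fennel.datasets import dataset, field
from datetime import datetime

s3 = S3(name="my_s3")  # no aws_access_key_id / aws_secret_access_key => IAM access

@source(s3.bucket("datalake", prefix="user"), every="1h")
@dataset
class User:
    uid: int = field(key=True)
    email: str
    timestamp: datetime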

Note

Fennel uses the file_last_modified property exported by S3 to track what data has been seen so far and hence a cursor field doesn't need to be specified.

1from fennel.sources import source, S3
2from fennel.datasets import dataset, field
3
4s3 = S3(
5    name="mys3",
6    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
7    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
8)
9
10@source(s3.bucket("datalake", prefix="user"), every="1h")
11@dataset
12class User:
13    uid: int = field(key=True)
14    email: str
15    timestamp: datetime
S3 ingestion via prefix

python

1from fennel.sources import source, S3
2from fennel.datasets import dataset, field
3
4s3 = S3(
5    name="my_s3",
6    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
7    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
8)
9
10bucket = s3.bucket("data", path="user/*/date-%Y-%m-%d/*", format="parquet")
11
12@source(bucket, every="1h")
13@dataset
14class User:
15    uid: int = field(key=True)
16    email: str
17    timestamp: datetime
S3 ingestion via path

python

Deltalake

Data connector to read data from tables in deltalake living in S3.

The Deltalake connector is implemented via the S3 connector - just the format parameter needs to be set to 'delta'.

Warning

Fennel doesn't support reading delta tables from HDFS or any other non-S3 storage.

1from fennel.sources import source, S3
2from fennel.datasets import dataset, field
3
4s3 = S3(
5    name="mys3",
6    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
7    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
8)
9
10@source(s3.bucket("deltalake", prefix="user", format="delta"), every="1h")
11@dataset
12class User:
13    uid: int = field(key=True)
14    email: str
15    timestamp: datetime
Sourcing delta tables into Fennel datasets

python

Hudi

Data connector to read data from Apache Hudi tables in S3.

The Hudi connector is implemented via the S3 connector - just the format parameter needs to be set to 'hudi'.

Warning

Fennel doesn't support reading hudi tables from HDFS or any other non-S3 storage.

1from fennel.sources import source, S3
2from fennel.datasets import dataset, field
3
4s3 = S3(
5    name="mys3",
6    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
7    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
8)
9
10@source(s3.bucket("deltalake", prefix="user", format="hudi"), every="1h")
11@dataset
12class User:
13    uid: int = field(key=True)
14    email: str
15    timestamp: datetime
Sourcing hudi tables into Fennel datasets

python

Avro Registry

Several Fennel sources work with the Avro format. When using Avro, it's common to keep the schemas in a centralized schema registry instead of including the schema with each message.

Fennel supports integration with Avro schema registries.

Parameters

registry:Literal["confluent"]

String denoting the provider of the registry. As of right now, Fennel only supports the "confluent" Avro registry, though more schema registries may be added over time.

url:str

The URL where the schema registry is hosted.

username:Optional[str]

Username to access the schema registry (assuming the registry requires authentication). If a username is provided, the corresponding password must also be provided.

Assuming authentication is needed, either a username/password pair or a token must be provided, but not both.

password:Optional[str]

The password associated with the username.

token:Optional[str]

Token to be used for authentication with the schema registry. Only one of username/password or token must be provided.
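For example, a minimal sketch of a token-authenticated registry (the environment variable name is an assumption):

import os
from fennel.sources import Avro

avro = Avro(
    registry="confluent",
    url=os.environ["SCHEMA_REGISTRY_URL"],
    token=os.environ["SCHEMA_REGISTRY_TOKEN"],  # instead of username/password
)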

1from fennel.sources import source, Kafka, Avro
2from fennel.datasets import dataset, field
3
4kafka = Kafka(
5    name="my_kafka",
6    bootstrap_servers="localhost:9092",  # could come via os env var too
7    security_protocol="SASL_PLAINTEXT",
8    sasl_mechanism="PLAIN",
9    sasl_plain_username=os.environ["KAFKA_USERNAME"],
10    sasl_plain_password=os.environ["KAFKA_PASSWORD"],
11)
12
13avro = Avro(
14    registry="confluent",
15    url=os.environ["SCHEMA_REGISTRY_URL"],
16    username=os.environ["SCHEMA_REGISTRY_USERNAME"],
17    password=os.environ["SCHEMA_REGISTRY_PASSWORD"],
18)
19
20@source(kafka.topic("user", format=avro))
21@dataset
22class SomeDataset:
23    uid: int = field(key=True)
24    email: str
25    timestamp: datetime
Using Avro registry with Kafka

python

Source Decorator

All Fennel sources are wrapped in the @source decorator applied on top of the datasets. This decorator specifies a bunch of options to configure the ingestion mechanism that apply to most data sources.

Parameters

every:Duration

Default Value: "1h"

The frequency with which the ingestion should be carried out. Streaming sources like Kafka, Kinesis, Webhook ignore it since they do continuous polling.

Note that some Fennel sources make multiple round-trips of limited size in a single iteration so as to not overload the system - every only applies across full iterations of ingestion.

since:Optional[datetime]

Default Value: None

When since is set, the source only admits those rows where the value corresponding to the timestamp column of the dataset is >= since.

Fennel reads as little data as possible given this constraint - for instance, when reading parquet files, the filter is pushed all the way down. However, in several cases, it's still necessary to read all the data before rejecting rows that are older than since.

disorder:Duration

Specifies how out of order can data from this source arrive.

Analogous to MaxOutOfOrderness in Flink, this provides Fennel a guarantee that if some row with timestamp t has arrived, no other row with timestamp < t-disorder can ever arrive. And if such rows do arrive, Fennel has the liberty of discarding them and not including them in the computation.

cdc:"append" | "native" | "debezium"

Specifies how valid change data should be constructed from the ingested data.

"append" means that data should be interpreted as sequence of append operations with no deletes and no updates. All SQL sources only support append CDC of as right now.

"native" means that the underlying system exposes CDC natively and that Fennel should tap into that. As of right now, native CDC is only available for Deltalake and Hudi.

"debezium" means that the raw data itself is laid out in debezium layout out of which valid CDC data can be constructed. This is only possible for sources that expose raw schemaless data, namely, s3, kinesis, kafka, and webhook.

tier:None | str | List[str]

Default Value: None

When present, marks this source to be selected during sync call only when sync call itself is made for a tier that matches this tier. Primary use case is to decorate a single dataset with many @source decorators and choose only one of them to sync depending on the environment.

preproc:Optional[Dict[str, Union[Ref, Any]]]

Default Value: None

When present, specifies the preproc behavior for the columns referred to by the keys of the dictionary.

As of right now, there are two kinds of values of preproc:

  • ref: Ref: written as ref(str) and means that the column denoted by the key of this value is aliased to another column in the sourced data. This is useful, for instance, when you want to rename columns while bringing them to Fennel.

  • Any: means that the column denoted by the key of this value should be given a constant value.

1from fennel.sources import source, S3, ref
2from fennel.datasets import dataset, field
3
4s3 = S3(name="my_s3")  # using IAM role based access
5
6bucket = s3.bucket("data", path="user/*/date-%Y-%m-%d/*", format="parquet")
7
8@source(
9    bucket,
10    every="1h",
11    cdc="append",
12    disorder="2d",
13    since=datetime(2021, 1, 1, 3, 30, 0),  # 3:30 AM on 1st Jan 2021
14    preproc={
15        "uid": ref("user_id"),  # 'uid' comes from column 'user_id'
16        "country": "USA",  # country for every row should become 'USA'
17    },
18    tier="prod",
19)
20@dataset
21class User:
22    uid: int = field(key=True)
23    email: str
24    country: str
25    timestamp: datetime
Specifying options in source decorator

python

Count

Aggregation to compute a rolling count for each group within a window.

Parameters

window:Window

The continuous window within which something needs to be counted. Possible values are "forever" or any time duration.

into_field:str

The name of the field in the output dataset that should store the result of this aggregation. This field is expected to be of type int.

unique:bool

Default Value: False

If set to True, the aggregation counts the number of unique values of the field given by of (aka COUNT DISTINCT in SQL).

approx:bool

Default Value: False

If set to True, the count isn't exact but only an approximation. This field must be set to True if and only if unique is set to True.

Fennel uses the HyperLogLog data structure to compute approximate unique counts and in practice, the count is exact for small counts.

of:Optional[str]

Name of the field in the input dataset which should be used for unique. Only relevant when unique is set to True.

Returns

int

Accumulates the count in the appropriate field of the output dataset. If there are no rows to count, by default, it returns 0.

Errors

Count unique on unhashable type:

The input column denoted by of must have a hashable type in order to build a hyperloglog. For instance, float or types built on float aren't allowed.

Unique counts without approx:

As of right now, it's a sync error to try to compute unique count without setting approx to True.

Warning

Maintaining unique counts is substantially more costly than maintaining non-unique counts so use it only when truly needed.

1@source(webhook.endpoint("Transaction"))
2@dataset
3class Transaction:
4    uid: int
5    vendor: str
6    amount: int
7    timestamp: datetime
8
9@dataset
10class Aggregated:
11    uid: int = field(key=True)
12    num_transactions: int
13    unique_vendors_1w: int
14    timestamp: datetime
15
16    @pipeline(version=1)
17    @inputs(Transaction)
18    def pipeline(cls, ds: Dataset):
19        return ds.groupby("uid").aggregate(
20            Count(window="forever", into_field="num_transactions"),
21            Count(
22                of="vendor",
23                unique=True,
24                approx=True,
25                window="1w",
26                into_field="unique_vendors_1w",
27            ),
28        )
Count # of transaction & distinct vendors per user

python

Sum

Aggregation to compute a rolling sum for each group within a window.

Parameters

of:str

Name of the field in the input dataset over which the sum should be computed. This field can only be of type int or float.

window:Window

The continuous window within which the sum should be computed. Possible values are "forever" or any time duration.

into_field:str

The name of the field in the output dataset that should store the result of this aggregation. This field is expected to be of type int or float - same as the type of the field in the input dataset corresponding to of.

Returns

Union[int, float]

Accumulates the sum in the appropriate field of the output dataset. If there are no rows to sum, by default, it returns 0 (or 0.0 if of is of type float).

Errors

Sum on non int/float types:

The input column denoted by of must either be of int or float types.

Note that unlike SQL, even aggregations over Optional[int] or Optional[float] aren't allowed.

1@source(webhook.endpoint("Transaction"))
2@dataset
3class Transaction:
4    uid: int
5    amount: int
6    timestamp: datetime
7
8@dataset
9class Aggregated:
10    uid: int = field(key=True)
11    amount_1w: int
12    total: int
13    timestamp: datetime
14
15    @pipeline(version=1)
16    @inputs(Transaction)
17    def pipeline(cls, ds: Dataset):
18        return ds.groupby("uid").aggregate(
19            Sum(of="amount", window="1w", into_field="amount_1w"),
20            Sum(of="amount", window="forever", into_field="total"),
21        )
Sum up amount in 1 week and forever windows

python

Min

Aggregation to compute a rolling min for each group within a window.

Parameters

of:str

Name of the field in the input dataset over which the min should be computed. This field must either be of type int or float.

window:Window

The continuous window within which aggregation needs to be computed. Possible values are "forever" or any time duration.

into_field:str

The name of the field in the output dataset that should store the result of this aggregation. This field is expected to be of type int or float - same as the type of the field in the input dataset corresponding to of.

default:Union[int, float]

Min over an empty set of rows isn't well defined - Fennel returns default in such cases. The type of default must be the same as that of of in the input dataset.

Returns

Union[int, float]

Stores the result of the aggregation in the appropriate field of the output dataset. If there are no rows in the aggregation window, default is used.

Errors

Min on non int/float types:

The input column denoted by of must either be of int or float types.

Note that unlike SQL, even aggregations over Optional[int] or Optional[float] aren't allowed.

Types of input, output & default don't match:

The type of the field denoted by into_field in the output dataset and that of default should be the same as that of the field denoted by of in the input dataset.

1@source(webhook.endpoint("Transaction"))
2@dataset
3class Transaction:
4    uid: int
5    amt: float
6    timestamp: datetime
7
8@dataset
9class Aggregated:
10    uid: int = field(key=True<