In DataCater, pipelines process your data. They stream data in real time between Apache Kafka topics (Streams), filtering and transforming records along the way.
Pipelines are defined in a declarative YAML format. For ease of development, our UI allows you to interactively edit the pipeline definition.
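As a rough illustration of the declarative style, a pipeline definition might look like the sketch below. The keys and step names here are hypothetical placeholders, not DataCater's actual schema; consult the pipeline reference for the real format.

```yaml
# Hypothetical sketch of a declarative pipeline definition.
# Key names (spec, steps, kind, attribute) are illustrative only.
spec:
  steps:
    - kind: Filter
      attribute: email
      filter: is-not-null      # drop records with a missing email
    - kind: Transform
      attribute: email
      transform: lower-case    # normalize the email field
```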
By default, DataCater ships with a set of pre-defined filters and transforms that cover the most common data preparation tasks. Additionally, you can write custom filters and transforms in Python.
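To give a flavor of what a custom transform and filter can look like, here is a minimal sketch written as plain Python functions. The `(value, row)` signature is an assumption for illustration; the exact interface DataCater expects may differ, so treat this as a conceptual example rather than copy-paste code.

```python
# Hypothetical custom transform: normalize an email field.
# Signature (value, row) is assumed for illustration; DataCater's
# actual UDF interface may differ.
def transform(value, row):
    """Trim whitespace and lower-case the value; pass None through."""
    if value is None:
        return None
    return value.strip().lower()


# Hypothetical custom filter: keep only records that look like
# they contain an email address.
def keep_record(value, row):
    """Return True if the value contains an '@', False otherwise."""
    return value is not None and "@" in value
```

Pure functions like these are easy to unit-test in isolation before wiring them into a pipeline.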