The Eventador Stream Processing Stack

April 8, 2018 in Product News

Streaming data is everywhere. IoT, high-tech manufacturing, national security, smart cities, web log analysis, systems telemetry, AI and ML workflows, and a myriad of other modern use cases are driving this trend skyward. At its core are technologies like Apache Kafka and Apache Flink, but it's still very tricky to build, deploy, and manage streaming workflows with them.

If you are building a state-of-the-art data system, you are likely already looking at Apache Kafka and Apache Flink. You want to write stream processors that route, aggregate, filter, and otherwise mutate a stream of data, making it useful to the rest of your organization. But curating and managing the technology stack is daunting and downright frustrating: you spend all your time managing the infrastructure and never get to write those processors, which was your original goal in the first place.
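To make the route/aggregate/filter idea concrete, here is a minimal, self-contained Python sketch of what such a processor does. This is purely illustrative logic, not Eventador's API or a Flink program; the event shape and field names are assumptions for the example.

```python
from collections import defaultdict

def filter_events(events, min_value):
    """The 'filter' step: drop events whose value is below a threshold."""
    return (e for e in events if e["value"] >= min_value)

def aggregate_by_key(events):
    """The 'aggregate' step: sum values per key across the stream."""
    totals = defaultdict(int)
    for e in events:
        totals[e["key"]] += e["value"]
    return dict(totals)

# Hypothetical sensor events flowing through the pipeline.
events = [
    {"key": "sensor-1", "value": 5},
    {"key": "sensor-2", "value": 1},
    {"key": "sensor-1", "value": 7},
]
print(aggregate_by_key(filter_events(events, min_value=2)))
# {'sensor-1': 12}
```

In a real pipeline the input would be an unbounded Kafka topic rather than a list, but the per-event transformation logic looks much the same.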

The Stack

Eventador allows you to quickly build and manage modern streaming data workflows on top of these state-of-the-art platforms, Kafka and Flink. We deploy this proven stack (Kafka only, or Kafka plus Flink) into the cloud, layer it with our own control plane, and wrap it with expert 24x7x365 support. That is the Eventador Stack: you write the stream processor code, and we handle the rest.

Deployment options

The Stack can be deployed in a number of different configurations. Most recently, we added the ability to deploy into your own AWS account. The stack is deployed to a self-contained VPC, connected via peering to your applications, while our control plane performs command and control over an SSL/VPN link from a dedicated endpoint. This enables:

  • Auto configuration. You don’t need to sweat any of the setup, it’s all done for you.
  • Data isolation and security. The streaming data remains inside your AWS account.
  • Fully managed, yet architecturally a virtual on-premises deployment.
  • Billing enhancements. Cost allocation tagging for AWS assets.
  • Observability. Integration points for high observability, including Datadog.
  • Kafka only, or Kafka and Flink configurations.
  • Customization: encryption at rest and on the wire, EU GDPR compliance, AWS GovCloud, and choice of instance types.

You can read more about how our enterprise deployment plan is designed and implemented in our post here.

“Eventador made the entire process painless. They have been a true partner, starting with the initial evaluation and continuing through our production implementation. Their expertise with Kafka allows us to stay solely focused on building and shipping features while they manage our Kafka infrastructure.” – MATT MONTIGEL, DIRECTOR OF BACKEND ENGINEERING, BLEACHER REPORT/TIME WARNER


As one would expect of a cloud service, the Eventador Stack is fully elastic. When you need more Kafka brokers or Flink task managers, simply click a button and the assets are automatically added to the cluster(s). Our backend provisioning manager deploys containers on the fly, adding resources to the cluster(s) and with them concurrency and compute capacity.


Projects are part of the control plane. They allow you to write and deploy stream processors straight from your GitHub repo. Templates get you started quickly, or you can write processors from scratch. We integrate with your existing CI/CD pipeline and support team collaboration in your SDLC flow: create deployments for development, staging, testing, and production, iterating and testing before finally promoting to production. Jobs are easily categorized, managed, deployed, and monitored.

Popular language support

The Eventador Stack uses Apache Kafka at its core, so any driver that speaks the Kafka wire protocol works just as it would if you managed Kafka yourself. You click to deploy new topics, choosing high-availability and parallelism options, and it's easy to produce and consume data via the native drivers, writing and reading data in your pipeline. The Eventador Console has a number of language examples to get you started, including Python, Java, Scala, Ruby, and Node, as well as SSL/SASL versions of each.
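Two pieces of mechanics sit behind "any driver that speaks the Kafka wire protocol": messages are serialized to bytes, and keyed messages are routed to a partition so the same key always lands on the same partition. The sketch below shows that logic in stdlib-only Python; note that Kafka's real default partitioner hashes keys with murmur2, and the md5 hash here is a stand-in assumption just to keep the example dependency-free.

```python
import hashlib
import json

def serialize(event):
    """Encode an event as UTF-8 JSON bytes, as most Kafka clients would."""
    return json.dumps(event, sort_keys=True).encode("utf-8")

def choose_partition(key, num_partitions):
    """Map a message key to a partition deterministically.
    Kafka's default partitioner uses murmur2; md5 is a stand-in here
    so this sketch needs only the standard library."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

payload = serialize({"sensor": "s1", "value": 42})
partition = choose_partition("s1", num_partitions=6)
```

With a real driver (kafka-python, confluent-kafka, and so on) you would hand the serialized bytes and key to the client's producer and let it do the partitioning and the network I/O.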

Stream processors are written as Flink jobs. They use the powerful Flink APIs to process data and can be written in Java, Scala, or even Kotlin. Eventador projects provide the entry point for building Flink jobs on the Eventador Stack: link your GitHub code repo to Eventador, and you are ready for one-click build and deploy. You can easily reference existing Kafka topics to consume and produce data, and a number of data sinks are available, or you can bring your own. These processors form the core building blocks of sophisticated streaming data pipelines.
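A common Flink job shape is a tumbling-window aggregation: events are bucketed into fixed time windows and counted per key. Real jobs express this with Flink's DataStream API in Java, Scala, or Kotlin; the Python sketch below only illustrates the windowing logic itself, with assumed (timestamp, key) event tuples.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Bucket (timestamp, key) events into fixed windows of window_size
    time units and count occurrences per key in each window, mimicking
    a Flink tumbling-window aggregation."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        windows[ts // window_size][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

# Hypothetical events: (timestamp, key) pairs.
events = [(1, "a"), (2, "a"), (3, "b"), (11, "a")]
print(tumbling_window_counts(events, window_size=10))
# {0: {'a': 2, 'b': 1}, 1: {'a': 1}}
```

In Flink the same idea is unbounded and fault-tolerant (state, checkpoints, watermarks for late data), which is exactly the machinery the managed stack runs for you.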

Highly observable and supported

Deployments are easily monitored from the Console Dashboard, integrated (optionally) with your monitoring system (Datadog, for instance), or queried via REST API. The stack is fully observable via our JMX reporting daemon. Eventador offers best-in-class support for the entire integrated stack, not just Kafka or Flink in isolation. Whether you want to upgrade a Kafka version, need to know which KIPs affect you, or a Flink version change broke your processor, we are here to help 24x7x365. We cannot stress this enough: stream processing is extremely powerful, but it can also be daunting. We take the challenge of helping you succeed personally, and support is included for FREE with every deployment.
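As a sketch of what consuming those metrics programmatically can look like, here is a small Python check that flags consumer groups whose lag exceeds a threshold. The metric shape and names are hypothetical; map them to whatever your JMX reporter, REST endpoint, or Datadog integration actually exports.

```python
def check_lag(lag_by_group, max_lag):
    """Return the consumer groups whose lag exceeds max_lag, sorted,
    given a {group_name: lag_in_messages} mapping (hypothetical shape)."""
    return sorted(group for group, lag in lag_by_group.items() if lag > max_lag)

# Hypothetical snapshot pulled from a metrics endpoint.
metrics = {"billing-consumers": 120, "audit-consumers": 5}
print(check_lag(metrics, max_lag=50))
# ['billing-consumers']
```

A check like this would typically run on a schedule and page someone, or feed an alerting rule in the monitoring system.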

Getting started

You can get started with the stream processing stack by signing up for an account.
