Make Kafka® Better.
Innovate Faster.

Empower DataOps, Data Engineering, and Data Analytics teams to turn streaming events into business outcomes in seconds: use streaming SQL to query, iterate on, and build streaming jobs with SQLStreamBuilder, or write and deploy your own Java/Scala Flink jobs via Runtime for Apache Flink®.



Streaming Data Breakthroughs. Not Bottlenecks.


The Eventador Platform broadens organizations' ability to create value and drive innovation from streaming data by dramatically simplifying the management and scaling of enterprise-grade, stateful streaming jobs.

Going beyond a simple SQL editor: SQLStreamBuilder eliminates the need for intricate programming and streamlines writing, iterating on, deploying, joining, and managing data streams with standards-compliant SQL.

Manage your entire Flink workload in one place: Runtime for Apache Flink lets you import, write, deploy, and manage Java/Scala jobs using the native Table, DataSet, and DataStream APIs.

Why Eventador

Whether it’s IoT sensor data, financial transactions, clickstream data or other events, streaming data is on a meteoric rise that is not slowing down. Business success is increasingly dependent on a new era of real-time, streaming data-based applications to power processes and innovation.

The Eventador Platform is the single engine for continuous, stateful stream processing that lets you more easily feed data to streaming applications.


Rapidly Join, Write, and Deploy Production Data Streams with SQL

Use familiar, ANSI-compatible SQL to iterate on, inspect, and reason about data streams, and condense the time needed to deploy enterprise-grade stream processing jobs.
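
For example, a quick inspection query, assuming the same payments virtual table used in the examples further down this page:

-- inspect high-value events as they arrive
SELECT card, amount, eventTimestamp
FROM payments
WHERE CAST(amount AS numeric) > 1000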

Amplify the Power of Apache Flink with Robust Data Stream Management

Leverage a single pane of glass, deployed into your cloud account, for critical security, team, and project management as well as checkpoint and savepoint management.

Improve Response Time and ROI for Databases in Your Streaming Pipeline

Easily filter and aggregate data before sending it to traditionally slow databases, making data streams more cost- and resource-effective.
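
As a sketch in the same streaming SQL dialect as the examples below, assuming a clickstream virtual table, a query that reduces every raw click to one row per user agent per minute before it reaches the database:

-- pre-aggregate clicks so the database receives
-- one row per useragent per minute, not every event
SELECT useragent,
       COUNT(*) AS clicks,
       TUMBLE_END(eventTimestamp, interval '1' minute) AS ts
FROM clickstream
GROUP BY useragent, TUMBLE(eventTimestamp, interval '1' minute)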

Eventador Platform Architecture: Streaming SQL to Boost Kafka and Managed Apache Flink
Features

Building and Managing Streaming Data Pipelines Has Never Been This Easy and Powerful


Standards-Compliant SQL

Based on Apache Calcite®, Eventador leverages ANSI SQL syntax for the unprecedented ability to use true SQL—not just "SQL-like"—for streaming data. This enables you to perform complex functions including joins, aggregations, filters and more with familiar SQL syntax.

Production-Ready Streaming

Deploy production-ready, fault-tolerant stream processors with robust state management, including checkpoint and savepoint management, and one-click scalability via Kubernetes. Jobs are also easily monitored via JMX endpoints.

Intelligent SQL Parser

The Eventador Platform includes a SQL parsing engine that gives you instant feedback on SQL syntax. This lets you interactively author streaming SQL statements until they produce the desired results, and then seamlessly run them as persistent, production-grade jobs.

Interactive Console

Eventador's interactive console delivers a rich editing interface for authoring, validating, and launching streaming SQL jobs. It also simplifies the management of stream processing jobs with extensive team, job and project management controls, as well as seamless GitHub integration.

Library of Sources & Sinks

Eventador connects to a powerful and growing ecosystem of sources and sinks so you can quickly build stream processors, including Apache Kafka (with support for Confluent Cloud and Amazon Managed Streaming for Apache Kafka) as well as Amazon S3.
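
For illustration, a hypothetical job, assuming clickstream is a Kafka-backed virtual table and an S3 sink has been selected for the job in the console:

-- filter Kafka events and route the results
-- to the S3 sink chosen for this job
SELECT *
FROM clickstream
WHERE useragent LIKE '%Mobile%'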

Schema Management

SQLStreamBuilder supports both JSON and Avro schemas and works natively with your Schema Registry or ours. It also supports complex types for both JSON and Avro schemas.
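
For example, a sketch assuming a sensors virtual table whose Avro or JSON schema contains a nested device record and an array of readings; the table and field names here are illustrative:

-- read nested and repeated fields from a complex schema
SELECT s.device.id AS device_id,
       s.readings[1] AS first_reading,
       s.eventTimestamp
FROM sensors s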


Build Entire Streaming Pipelines using ANSI SQL

Write, Iterate On, and Deploy Production-Grade Data Streams for Apache Kafka with a Streaming SQL Engine

Joins

-- join multiple streams
SELECT o.name,
       SUM(d.clicks),
       HOP_END(r.eventTimestamp, interval '20' second, interval '40' second)
FROM click_stream o
JOIN orgs r ON o.org_id = r.org_id
JOIN models d ON d.org_id = r.org_id
GROUP BY o.name,
         HOP(r.eventTimestamp, interval '20' second, interval '40' second)

Aggregation

-- detect multiple auths in a short 
-- window and send to lock account 
-- topic/microservice
SELECT card,
MAX(amount) as theamount,
TUMBLE_END(eventTimestamp, interval '5' minute) as ts
FROM payments
WHERE lat IS NOT NULL
AND lon IS NOT NULL
GROUP BY card, TUMBLE(eventTimestamp, interval '5' minute)
HAVING COUNT(*) > 4 -- >4==fraud

Union All

-- union two different virtual tables
SELECT * FROM clickstream
WHERE useragent = 'Chrome/62.0.3202.84 Mobile Safari/537.36'
UNION ALL
SELECT * FROM clickstream
WHERE useragent = 'Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36'

Hyper Join

SELECT us_west.user_score + ap_south.user_score
FROM kafka_in_zone_us_west us_west
FULL OUTER JOIN kafka_in_zone_ap_south ap_south
ON us_west.user_id = ap_south.user_id;

Timestamps

-- eventTimestamp is the Kafka
-- timestamp as unix timestamp. 
-- Magically added to every schema.
SELECT max(eventTimestamp) FROM solar_inputs;

Window Query

SELECT SUM(CAST(amount AS numeric)) AS payment_volume,
CAST(TUMBLE_END(eventTimestamp, interval '1' hour) AS varchar) AS ts
FROM payments
GROUP BY TUMBLE(eventTimestamp, interval '1' hour);

Built with Best-of-Breed Technologies


The Eventador Platform is built using best-of-breed open source technologies and cloud providers.


Ready to boost what you can do with Apache Kafka?

Get started with a free trial.

Get Started Now