Query Kafka Better with Continuous SQL

Empower Developers, Data Engineering, and Data Science teams to use ANSI-compatible Continuous SQL to query, iterate on, and build streaming jobs with SQLStreamBuilder, or write and deploy Java/Scala Flink jobs via Runtime for Apache Flink®.

Contact Us Try it for Free


Streaming data is core to our business. Eventador SQLStreamBuilder gave us the capability to ingest complicated feeds at massive scale and perform production-quality, continuous SQL jobs against them. This was something other competitors just couldn’t achieve—and it’s a complete game-changer for us.

Chris Ferraro, CTO and VP of Engineering, Digital Assets Data

Streaming Data Breakthroughs. Not Bottlenecks.

The Eventador Platform broadens the ability of organizations to create value and drive innovation from streaming data by dramatically simplifying how enterprise-grade, stateful streaming jobs are managed and scaled.

Going beyond a simple SQL editor: SQLStreamBuilder eliminates the need for intricate programming and streamlines writing, iterating on, deploying, joining, and managing data streams with standards-compliant Continuous SQL.

Manage your entire Flink workload in one place: Runtime for Apache Flink lets you import, write, deploy, and manage Java/Scala jobs using the native Table, DataSet, and DataStream APIs.

Why Eventador

Whether it’s IoT sensor data, financial transactions, clickstream data or other events, streaming data is on a meteoric rise that is not slowing down. Business success is increasingly dependent on a new era of real-time, streaming data-based applications to power processes and innovation.

The Eventador Platform is the single engine for continuous, stateful stream processing that lets you more easily feed data to streaming applications.

Rapidly Join, Write and Deploy Production Data Streams with Continuous SQL

Use familiar, ANSI-compatible SQL to iterate, inspect, and reason about data streams and condense the time needed to deploy enterprise-grade stream processing jobs.


Leverage a single pane of glass, deployed into your cloud account, for critical security, team, and project management as well as checkpoint and savepoint management.

Improve Response Time and ROI for Databases in Your Streaming Pipeline

Easily filter and aggregate data before sending it to traditionally slow databases, making data streams more cost- and resource-effective.
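As a sketch of this pattern, a windowed pre-aggregation can collapse a firehose of raw events into one row per key per hour before it ever reaches the database. This example reuses the `payments` stream and `eventTimestamp` column that appear in the samples on this page; the rollup itself is illustrative, not a prescribed pipeline:

```sql
-- Roll payments up to one row per card per hour,
-- instead of writing every raw event to the database.
SELECT card,
       SUM(CAST(amount AS numeric)) AS hourly_total,
       COUNT(*) AS tx_count,
       TUMBLE_END(eventTimestamp, interval '1' hour) AS ts
FROM payments
GROUP BY card, TUMBLE(eventTimestamp, interval '1' hour);
```

Sinking the aggregated stream rather than the raw one trades per-event latency in the database for dramatically fewer writes.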

Eventador Platform Architecture: Streaming SQL to Boost Kafka and Managed Apache Flink

Building and Managing Streaming Data Pipelines Has Never Been This Easy and Powerful

Standards-Compliant SQL

Based on Apache Calcite®, Eventador leverages ANSI SQL syntax for the unprecedented ability to use true SQL—not just "SQL-like"—for streaming data. This enables you to perform complex functions including joins, aggregations, filters and more with familiar SQL syntax.

Production-Ready Streaming

Deploy production-ready, fault-tolerant stream processors with robust state management—including checkpoint and savepoint management—and one-click scalability via Kubernetes. Jobs are also easily monitored via JMX endpoints.

Intelligent SQL Parser

The Eventador Platform includes a SQL parsing engine that gives you instant feedback on SQL syntax. This enables you to interactively author streaming, Continuous SQL statements until they produce the desired results, and then seamlessly run them as persistent, production-grade jobs.

Interactive Console

Eventador's interactive console delivers a rich editing interface for authoring, validating, and launching Continuous SQL jobs. It also simplifies the management of stream processing jobs with extensive team, job, and project management controls, as well as seamless integration with GitHub.

Library of Sources & Sinks

Eventador enables users to simply connect to a powerful and growing ecosystem of sources and sinks in order to quickly build stream processors using Apache Kafka (including Confluent Cloud and Amazon Managed Streaming for Apache Kafka) as well as Amazon S3.

Schema Management

SQLStreamBuilder supports both JSON and Avro schemas and works natively with your Schema Registry or ours. It also supports complex types for both JSON and Avro schemas.
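Complex types can then be addressed directly in queries with dot notation. A minimal sketch, assuming a hypothetical `orders` virtual table whose JSON or Avro schema defines a nested `customer` record (the table and field names here are illustrative, not part of the product):

```sql
-- Query nested fields from a complex JSON/Avro schema.
-- 'orders' and its nested 'customer' record are hypothetical.
SELECT o.customer.id AS customer_id,
       o.customer.address.city AS city,
       o.total
FROM orders o
WHERE o.customer.address.country = 'US';
```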

Build Entire Streaming Pipelines Using Continuous SQL

Write, Iterate and Deploy Production-Grade Data Streams for Apache Kafka with Streaming SQL Engine


-- join multiple streams
SELECT o.name,
       HOP_END(r.eventTimestamp, interval '20' second, interval '40' second)
FROM click_stream o
JOIN orgs r ON o.org_id = r.org_id
JOIN models d ON d.org_id = r.org_id
GROUP BY o.name,
         HOP(r.eventTimestamp, interval '20' second, interval '40' second)


-- detect multiple auths in a short window and
-- send to lock account topic/microservice
SELECT card,
       MAX(amount) AS theamount,
       TUMBLE_END(eventTimestamp, interval '5' minute) AS ts
FROM payments
GROUP BY card, TUMBLE(eventTimestamp, interval '5' minute)
HAVING COUNT(*) > 4 -- more than 4 in the window flags fraud


UNION ALL

-- union two different virtual tables
SELECT * FROM clickstream
WHERE useragent = 'Chrome/62.0.3202.84 Mobile Safari/537.36'
UNION ALL
SELECT * FROM clickstream
WHERE useragent = 'Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36'


-- eventTimestamp is the Kafka timestamp
-- as unix timestamp. Magically added to every schema.
SELECT max(eventTimestamp) FROM solar_inputs;
-- make it human readable
SELECT CAST(max(eventTimestamp) AS varchar) as TS FROM solar_inputs;

Window Query

SELECT SUM(CAST(amount AS numeric)) AS payment_volume,
       CAST(TUMBLE_END(eventTimestamp, interval '1' hour) AS varchar) AS ts
FROM payments
GROUP BY TUMBLE(eventTimestamp, interval '1' hour);

Built with Best-of-Breed Technologies

The Eventador Platform is built using best-of-breed open source technologies and cloud providers.

Add a Kafka Source: Confluent Cloud, Amazon MSK, or Your Own Kafka

Ready to boost what you can do with Apache Kafka?

Get started with a free trial or contact us to learn more.