The Streaming Data Engine for Killer Applications

Use Continuous SQL to materialize your Kafka streams into a goldmine for real-time ETL, data science, and RESTful application development.


Try it Free · Get a Demo

Streaming data is core to our business. Eventador SQLStreamBuilder gave us the capability to ingest complicated feeds at massive scale and perform production-quality, continuous SQL jobs against them. This was something other competitors just couldn’t achieve—and it’s a complete game-changer for us.

Chris Ferraro, CTO and VP of Engineering, Digital Assets Data

Streaming Applications Need Queryable State.
Introducing Materialized Views On Streams.


01

Connect to Kafka topics, and define schemas

Easily connect to Kafka clusters including Confluent Cloud, Amazon MSK, or your own self-managed Kafka. Automatically detect and define schemas for your JSON or Avro topics. Build input transformations to normalize messy data or foreign data sources.
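As a rough sketch of what a source definition looks like under the hood, here is a Flink-style DDL registering a Kafka topic as a queryable table (the table name, columns, and connector options are illustrative, not SQLStreamBuilder's exact interface):

```sql
-- Illustrative only: register a Kafka topic as a streaming table.
-- Column names, broker address, and topic are assumptions.
CREATE TABLE payments (
  card           VARCHAR,
  amount         NUMERIC,
  lat            DOUBLE,
  lon            DOUBLE,
  eventTimestamp TIMESTAMP(3),
  -- allow events up to 5 seconds late before closing windows
  WATERMARK FOR eventTimestamp AS eventTimestamp - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic'     = 'payments',
  'properties.bootstrap.servers' = 'broker:9092',
  'format'    = 'json'
);
```

In SQLStreamBuilder the schema detection shown above happens automatically when you attach a topic, so you rarely need to write this DDL by hand.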


02

Build and deploy jobs using Continuous SQL

Define and run stream processing jobs using ANSI-standard SQL: streaming ETL, filtering and joining streams for machine learning models, or aggregating data for real-time dashboards.
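A continuous job can be as small as a single filter. This minimal sketch (table and column names assumed) runs indefinitely, emitting only the rows that match as they arrive:

```sql
-- Continuously filter a payments stream down to large transactions;
-- results can feed a sink topic or a materialized view.
SELECT card,
       amount,
       eventTimestamp
FROM payments
WHERE amount > 500;
```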


03

Define Materialized Views on streams

Create a view of the current state of the data for easy consumption in your applications; configure maintenance policies, job restart strategies, and more. Data is fully indexed, and storage is automatically managed.
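As a generic sketch of the idea (SQLStreamBuilder defines views through its console, so this DDL is illustrative rather than its actual syntax), a materialized view is just the continuously maintained result of a SQL job:

```sql
-- Illustrative only: keep a per-card five-minute spending aggregate
-- that applications can query for current state.
CREATE MATERIALIZED VIEW card_totals AS
SELECT card,
       SUM(amount) AS total_amount,
       TUMBLE_END(eventTimestamp, INTERVAL '5' MINUTE) AS window_end
FROM payments
GROUP BY card, TUMBLE(eventTimestamp, INTERVAL '5' MINUTE);
```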


04

Query Materialized Views via RESTful endpoints

Create multiple RESTful endpoints to query materialized views by any column, define range scans, or pass in parameters. Building dashboards and maps, and using data in notebooks, has never been easier.


Get Started By Selecting Your Cloud Provider

Built on Apache Kafka, Apache Flink and Kubernetes

Streaming Data Breakthroughs. Not Bottlenecks.


The Eventador Platform eliminates the barriers between streaming data and applications, making it easier for organizations to unlock value from their streams and drive innovation.


Lower Your Total Cost of Ownership

No extra web-service, network, or database infrastructure needed

The Eventador Platform solves the complex problem of providing a queryable, time-consistent state of streams via materialized views—without relying on additional databases, web servers, load balancing tools or other complex infrastructure that can be slow and costly.


Speed Streaming App Time-to-Market

Writing with low-level APIs in Java or Scala takes time and resources

The Eventador Platform goes beyond a simple SQL editor: its full Continuous SQL interface eliminates the need for intricate low-level programming and streamlines writing, deploying, joining, and managing data streams with standards-compliant ANSI SQL.


Increase the Productivity of Your Team

Let us manage it for you and spend your time creating killer apps

The Eventador Platform delivers the ability to manage your entire Flink workload in one place by letting you import, write, deploy, and manage Java/Scala jobs using the native Table, DataSet, and DataStream APIs.

Build Entire Streaming Pipelines Using Continuous SQL

Real-time Fraud & Anomaly Detection

-- production fraud and alerting job
SELECT *
FROM paymentauths
MATCH_RECOGNIZE (
  PARTITION BY card
  ORDER BY eventTimestamp
  MEASURES
    F.amount AS first_amount,
    E.amount AS last_amount
  ONE ROW PER MATCH
  AFTER MATCH SKIP PAST LAST ROW
  PATTERN (F+ E) -- match one or more F rows followed by an E row
  DEFINE
    F AS F.amount IS NOT NULL AND F.amount > 10,       -- lower boundary
    E AS E.amount IS NOT NULL AND F.amount < E.amount  -- starting value less than ending value
)

Real-time ETL

-- detect multiple auths in a short window and
-- send to lock-account topic/microservice
SELECT card,
       MAX(amount) AS theamount,
       TUMBLE_END(eventTimestamp, INTERVAL '5' MINUTE) AS ts
FROM payments
WHERE lat IS NOT NULL
  AND lon IS NOT NULL
GROUP BY card, TUMBLE(eventTimestamp, INTERVAL '5' MINUTE)
HAVING COUNT(*) > 4 -- more than 4 auths in 5 minutes == fraud

IoT

-- production IoT application job
-- display range remaining every 15 seconds
SELECT boardid,
       TUMBLE_END(eventTimestamp, INTERVAL '15' SECOND) AS ts,
       CAST(ROUND(MIN(CAST(battery_level AS NUMERIC)), 2) AS VARCHAR) || '%' AS state_of_charge,
       MIN(CAST(trip_distance AS NUMERIC)) AS distance_covered,
       100 - MIN(CAST(battery_level AS NUMERIC)) AS battery_pct_used,
       MIN(CAST(trip_distance AS NUMERIC)) / (100 - MIN(CAST(battery_level AS NUMERIC))) AS foot_per_battery_pct,
       MIN(CAST(battery_level AS NUMERIC)) * (MIN(CAST(trip_distance AS NUMERIC)) / (100 - MIN(CAST(battery_level AS NUMERIC)))) AS range_in_feet
FROM kickflips
GROUP BY boardid, TUMBLE(eventTimestamp, INTERVAL '15' SECOND)

Customer Experience

-- union two different virtual tables
SELECT * FROM clickstream
WHERE useragent = 'Chrome/62.0.3202.84 Mobile Safari/537.36'
UNION ALL
SELECT * FROM clickstream
WHERE useragent = 'Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36'
Features

Building and Managing Streaming Data Pipelines Has Never Been This Simple and Powerful


Standards-Compliant SQL

Leverage ANSI SQL syntax, based on Apache Calcite, for the ability to query with the familiar SQL you know—not just a "SQL-like" language.

Enterprise Grade

Deploy production-ready, fault-tolerant stream processors with robust state management—including checkpoint and savepoint management.

Intelligent SQL Parser

Get instant feedback on SQL syntax with the SQL parsing engine to interactively author statements. Perform queries against streams just like a database.

Materialized Views

Define materialized views using ANSI SQL that are automatically indexed and maintained, and query them arbitrarily via RESTful endpoints. You can query by secondary key, perform range scans, and use a suite of common operators against these views.

Input Transforms

Easily normalize JSON data by defining a schema for the transform's output, then writing a JavaScript function that operates on each message after it is consumed from Kafka (or any other source) but before you write SQL against it.

JavaScript Functions

Quickly create custom functions and call them directly from SQL to handle more complex business logic with no need to restart your system, stop your cluster, or compile/recompile anything.
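Once registered, such a function can be invoked like any other scalar function. In this sketch, NORMALIZE_AGENT is a hypothetical user-defined JavaScript function (the name is illustrative), applied to the clickstream table shown earlier:

```sql
-- Hypothetical: NORMALIZE_AGENT is a user-defined JavaScript function
-- that maps raw user-agent strings to a browser family.
SELECT NORMALIZE_AGENT(useragent) AS browser,
       COUNT(*) AS hits
FROM clickstream
GROUP BY NORMALIZE_AGENT(useragent),
         TUMBLE(eventTimestamp, INTERVAL '1' MINUTE);
```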

Fully Managed Apache Flink

Create and deploy Flink jobs, whether they are legacy jobs, depend on a specific library you just cannot live without, or contain massively complex logic.

Interactive Console

Author, validate, and launch Continuous SQL streaming jobs in a rich, interactive editing interface, with seamless GitHub integration.

Library of Sources & Sinks

Connect to an ecosystem of sources and sinks including Apache Kafka, ELK, and Amazon S3 as well as webhooks for custom application triggers.

Schema Management

Automatically detect and define JSON schemas, including nested and complex structures, or integrate with a schema registry for Avro schemas.

Ready to boost what you can do with Apache Kafka?