Streaming data is core to our business. Eventador SQLStreamBuilder gave us the capability to ingest complicated feeds at massive scale and perform production-quality, continuous SQL jobs against them. This was something other competitors just couldn’t achieve—and it’s a complete game-changer for us.
Chris Ferraro, CTO and VP of Engineering, Digital Assets Data
Connect to Kafka topics and define schemas
Easily connect to Kafka clusters, including Confluent Cloud, Amazon MSK, or your own self-managed Kafka. Automatically detect and define schemas for your JSON or AVRO topics. Build input transformations to normalize messy or foreign data sources.
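In Flink SQL terms, a Kafka topic can be exposed as a virtual table you query directly. A minimal sketch, assuming a hypothetical payments topic (table name, columns, broker address, and connector options are all illustrative):

```sql
-- Sketch only: names, topic, and connector options are illustrative.
CREATE TABLE payments (
    card            VARCHAR,
    amount          DOUBLE,
    lat             DOUBLE,
    lon             DOUBLE,
    eventTimestamp  TIMESTAMP(3),
    -- tolerate up to 5 seconds of out-of-order events
    WATERMARK FOR eventTimestamp AS eventTimestamp - INTERVAL '5' SECOND
) WITH (
    'connector' = 'kafka',
    'topic' = 'payments',
    'properties.bootstrap.servers' = 'broker:9092',
    'format' = 'json'
);
```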
Build and deploy jobs using Continuous SQL
Define and run stream processing jobs using ANSI-standard SQL: for streaming ETL, for filtering and joining data to feed machine learning models, or for aggregating data for real-time dashboards.
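For example, a continuous join that enriches a stream with reference data to feed a model (table and column names are hypothetical):

```sql
-- Sketch only: table and column names are hypothetical.
SELECT p.card,
       p.amount,
       c.risk_score,        -- reference attribute joined in continuously
       p.eventTimestamp
FROM payments p
JOIN customers c
  ON p.card = c.card
WHERE p.amount > 100;       -- simple filter feeding a fraud model
```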
Define Materialized Views on streams
Create a view of the current state of the data for easy consumption in your applications; configure maintenance policies, job restart strategies, and more. Data is fully indexed, and storage is automatically managed.
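Conceptually, a materialized view holds the latest aggregate row per key. A sketch of the kind of query that backs one, over a hypothetical payments stream (in practice the view is defined by choosing a primary key for the job's query, and exact syntax varies):

```sql
-- Illustrative only: a keyed aggregate whose current state
-- becomes the materialized view (one row per card).
SELECT card,
       MAX(amount)         AS max_amount,
       COUNT(*)            AS auth_count,
       MAX(eventTimestamp) AS last_seen
FROM payments
GROUP BY card;
```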
Query Materialized Views via RESTful endpoints
Create multiple RESTful endpoints to query materialized views by any column, define range scans, or pass in parameters. Creating dashboards and maps, and using data in notebooks, has never been easier.
Get Started By Selecting Your Cloud Provider
Lower Your Total Cost of Ownership
No extra web-service, network, or database infrastructure needed
The Eventador Platform solves the complex problem of providing a queryable, time-consistent state of streams via materialized views—without relying on additional databases, web servers, load balancing tools or other complex infrastructure that can be slow and costly.
Speed Streaming App Time-to-Market
Writing with low-level APIs in Java or Scala takes time and resources
The Eventador Platform goes beyond a simple SQL editor with a full Continuous SQL interface that eliminates the need for intricate programming and streamlines writing, iterating on, deploying, joining, and managing data streams with standards-compliant ANSI SQL.
Increase the Productivity of Your Team
Let us manage it for you and spend your time creating killer apps
The Eventador Platform lets you manage your entire Flink workload in one place: import, write, deploy, and manage Java/Scala jobs using the native Table, DataSet, and DataStream APIs.
Build Entire Streaming Pipelines Using Continuous SQL
Real-time Fraud & Anomaly Detection
```sql
-- production fraud and alerting job
SELECT * FROM payments   -- example source table name
MATCH_RECOGNIZE (
    PARTITION BY card
    ORDER BY eventTimestamp
    MEASURES
        F.amount AS first_amount,
        E.amount AS last_amount
    ONE ROW PER MATCH
    AFTER MATCH SKIP PAST LAST ROW
    PATTERN (F+ E)  -- match 1 or more rows
    DEFINE
        F AS F.amount IS NOT NULL AND F.amount > 10,        -- lower boundary
        E AS E.amount IS NOT NULL AND F.amount < E.amount   -- starting value less than ending value
)
```
```sql
-- detect multiple auths in a short window and
-- send to lock account topic/microservice
SELECT
    card,
    MAX(amount) AS theamount,
    TUMBLE_END(eventTimestamp, interval '5' minute) AS ts
FROM payments   -- example source table name
WHERE lat IS NOT NULL
  AND lon IS NOT NULL
GROUP BY card, TUMBLE(eventTimestamp, interval '5' minute)
HAVING COUNT(*) > 4   -- >4 == fraud
```
```sql
-- production IoT application job
-- display range remaining every 15 seconds
SELECT
    boardid,
    TUMBLE_END(eventTimestamp, interval '15' second) AS ts,
    CAST(ROUND(MIN(CAST(battery_level AS numeric)), 2) AS varchar) || '%' AS state_of_charge,
    MIN(CAST(trip_distance AS numeric)) AS distance_covered,
    100 - MIN(CAST(battery_level AS numeric)) AS battery_pct_used,
    MIN(CAST(trip_distance AS numeric)) / (100 - MIN(CAST(battery_level AS numeric))) AS foot_per_battery_pct,
    MIN(CAST(battery_level AS numeric)) * (MIN(CAST(trip_distance AS numeric)) / (100 - MIN(CAST(battery_level AS numeric)))) AS range_in_feet
FROM boards   -- example source table name
GROUP BY boardid, TUMBLE(eventTimestamp, interval '15' second)
```
```sql
-- union two different virtual tables
SELECT * FROM clickstream
WHERE useragent = 'Chrome/62.0.3202.84 Mobile Safari/537.36'
UNION ALL
SELECT * FROM clickstream
WHERE useragent = 'Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36'
```
Leverage ANSI SQL syntax, based on Apache Calcite, to query with the familiar SQL you know—not just a "SQL-like" language.
Deploy production-ready, fault-tolerant stream processors with robust state management—including checkpoint and savepoint management.
Intelligent SQL Parser
Get instant feedback on SQL syntax with the SQL parsing engine to interactively author statements. Perform queries against streams just like a database.
Define materialized views using ANSI SQL that are automatically indexed and maintained, and can be queried arbitrarily via RESTful endpoints. You can query by secondary key, perform range scans, and use a suite of common operators against these views.
Quickly create custom functions and call them directly from SQL to handle more complex business logic with no need to restart your system, stop your cluster, or compile/recompile anything.
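Calling a custom function from Continuous SQL looks like any built-in call. A sketch, where NORMALIZE_USERAGENT is a hypothetical user-defined function:

```sql
-- NORMALIZE_USERAGENT is a hypothetical user-defined function.
SELECT NORMALIZE_USERAGENT(useragent) AS browser_family,
       COUNT(*) AS hits
FROM clickstream
GROUP BY NORMALIZE_USERAGENT(useragent);   -- continuously updating counts
```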
Fully Managed Apache Flink
Create and deploy Flink jobs, whether they are legacy jobs, depend on a specific library you just cannot live without, or involve massively complex logic.
Simply author, validate, and launch Continuous SQL streaming jobs with a rich, interactive editing interface, and leverage seamless GitHub integration.
Library of Sources & Sinks
Connect to an ecosystem of sources and sinks including Apache Kafka, ELK, and Amazon S3 as well as webhooks for custom application triggers.
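Routing a stream to a sink is a continuous INSERT. A sketch, assuming a hypothetical pre-registered sink table named alerts_sink and a payments source:

```sql
-- Sketch only: alerts_sink and payments are hypothetical
-- pre-registered virtual tables (e.g., a Kafka or S3 sink).
INSERT INTO alerts_sink
SELECT card, amount, eventTimestamp
FROM payments
WHERE amount > 10000;   -- route large authorizations to the sink
```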
Automatically detect and define JSON schemas, including nested and complex structures, or integrate with a schema registry for AVRO schemas.