SQLStreamBuilder allows you to declare stateful stream processors using SQL. It is massively scalable, fault-tolerant, and production grade. Using SQL to build streaming jobs brings a new level of simplicity and power, making it quick and easy to build and manage complete stream processing topologies.
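As an illustration of the idea, a continuous SQL job over a stream might look like the following sketch. The `payments` stream and its columns are invented for this example and are not part of any real schema:

```sql
-- Hypothetical example: aggregate a stream of payment events by card type.
-- `payments` and its columns are illustrative only.
SELECT card_type,
       COUNT(*)    AS payment_count,
       SUM(amount) AS total_amount
FROM payments
GROUP BY card_type;
```

Because the input is a boundless stream rather than a finite table, a query like this runs continuously, updating its aggregates as new events arrive.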
Archive for the ‘Product Feature’ Category
Over the last quarter, we invested heavily in building a Cloud Native version of the Eventador Platform on Kubernetes (K8s) – for both Fully Managed Apache Kafka and Fully Managed Apache Flink. Part of the reasoning for our focus on Kubernetes and containers was to enable the quick and seamless adoption of additional cloud platforms, […]
Arguably the most powerful feature of Apache Flink is its ability to do stateful computations on a boundless stream of data. Apache Flink is the core of Eventador’s Fully Managed Apache Flink stack. To get the most value from Apache Flink, though, it’s important to know the difference between a checkpoint and a savepoint. […]
As I’ve come on board at Eventador and gotten up to speed, one subject stood out immediately as crucial to us, and one we talk about in depth: the reality of our fully managed support.
Streaming data is everywhere. IoT, high tech manufacturing, national security, smart cities, web log analysis, systems telemetry, AI and ML workflows, and a myriad of other modern use cases are driving this trend skyward.
Core support for Simple Authentication and Security Layer (SASL) was added to Apache Kafka in the 0.10.2 release. This allows for simple username/password authentication to Kafka using SASL. We are excited to add this authentication mechanism to the Eventador service. Here is how it works.
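To make the mechanism concrete, here is a minimal sketch of the connection settings a Kafka client needs for SASL/PLAIN username/password authentication. The broker address and credentials are placeholders, and the keyword names follow the kafka-python client's configuration options:

```python
# Sketch: build connection settings for SASL/PLAIN authentication over TLS.
# Broker address and credentials below are placeholders, not real endpoints.

def sasl_plain_config(bootstrap_servers, username, password):
    """Return client settings for username/password (SASL/PLAIN) auth."""
    return {
        "bootstrap_servers": bootstrap_servers,
        "security_protocol": "SASL_SSL",  # SASL handshake over a TLS connection
        "sasl_mechanism": "PLAIN",        # simple username/password mechanism
        "sasl_plain_username": username,
        "sasl_plain_password": password,
    }

config = sasl_plain_config(["broker.example.com:9093"], "alice", "s3cret")
# These settings would then be passed to a client constructor, e.g.
#   producer = kafka.KafkaProducer(**config)
print(config["sasl_mechanism"])  # PLAIN
```

Note that SASL/PLAIN sends the credentials to the broker, so it should always be combined with TLS (`SASL_SSL`) rather than plaintext transport.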
With the addition of Apache Flink, Eventador.io has a true end-to-end, enterprise-grade stream processing platform. We run the complex infrastructure and provide support, so you can focus on your streaming code.
Since we first opened the doors at Eventador.io, customers have been building applications that make use of Apache Kafka for a wide variety of streaming data use cases. Over time, it became clear we were only solving for one part of the complete picture. With Kafka, our service had the data transport, durability, and scalability, […]
This release focuses on making the service even more robust and easier to use, and on improving the overall customer experience. Many of these features were inspired by direct feedback from you, our customers. Thank you for helping us build the best Apache Kafka™ managed service in existence.