SQLStreamBuilder allows you to declare stateful stream processors using SQL. It is massively scalable, fault-tolerant, and production-grade. Using SQL to build streaming jobs brings a new level of simplicity and power, making it quick and easy to build and manage complete stream-processing topologies.
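As a sketch of the kind of continuous SQL such a tool accepts, here is a windowed aggregation in Flink-style SQL. The table and column names (`clickstream`, `page_id`, `event_time`) are hypothetical, purely for illustration:

```sql
-- Hypothetical source stream of click events.
-- Tumbling-window aggregation: count clicks per page, per one-minute window.
SELECT
  page_id,
  TUMBLE_END(event_time, INTERVAL '1' MINUTE) AS window_end,
  COUNT(*) AS clicks
FROM clickstream
GROUP BY
  page_id,
  TUMBLE(event_time, INTERVAL '1' MINUTE);
```

A statement like this runs continuously: the engine maintains the window state for you and emits a row per page as each one-minute window closes.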
Archive for the ‘Apache Kafka’ Category
Over the last quarter, we invested heavily in building a Cloud Native version of the Eventador Platform on Kubernetes (K8s) – for both Fully Managed Apache Kafka and Fully Managed Apache Flink. Part of the reasoning for our focus on Kubernetes and containers was to enable the quick and seamless adoption of additional cloud platforms […]
With 2018 wrapping up, I wanted to take a few minutes (and inches of space here) to talk a bit about Eventador in this last year. This year, and the massive growth we’ve enjoyed, has been a rollercoaster of the best kind for the entire team.
I’m fresh off the plane from two back-to-back weeks in San Francisco – starting with the Kafka Summit 2018 and finishing up with Oracle OpenWorld. Two drastically different conferences, but they both reinforce our thinking and continue to bolster our opinion: The de facto system-of-record for the enterprise is moving to a distributed log […]
Streaming data is everywhere. IoT, high tech manufacturing, national security, smart cities, web log analysis, systems telemetry, AI and ML workflows, and a myriad of other modern use cases are driving this trend skyward.
Core support for Simple Authentication and Security Layer (SASL) was added to Apache Kafka in the 0.10.2 release. This allows for simple username/password authentication to Kafka using SASL. We are excited to add this authentication mechanism to the Eventador service. Here is how it works.
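As a sketch of what this looks like on the client side, a Kafka client configured for SASL/PLAIN over TLS uses properties along these lines (the broker address and credentials below are placeholders, not real endpoint values):

```properties
# Connect over TLS and authenticate with SASL/PLAIN (username/password)
bootstrap.servers=broker.example.com:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="myuser" \
  password="mypassword";
```

The same properties work for both producers and consumers; the broker checks the supplied username and password before allowing the connection.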
Early this week I gave a talk at the Austin Kafka/Stream Processing Meetup. It was a great time and we had a fantastic turnout. I wanted to share the slides, examples, and a couple of thoughts on the Meetup itself.
When we started Eventador.io in 2016 we needed a simple data source to help us build the platform on. We needed something that exemplified streaming data, something massively dynamic, and something with a lot of data. Tweets were played out; we wanted something better.
One of the omnipresent challenges of building a product from scratch is you don’t initially know exactly how customers will want to use it. You build the product you would want to use and are passionate about, however, you must also listen to customers as you evolve your product to deliver exactly what they really […]
This release focuses on making the service even more robust and easier to use, and on improving the overall customer experience. Many of these features were inspired by direct feedback from you, our customers. Thank you for helping us build the best Apache Kafka™ managed service in existence.