The Flink Table And SQL API With Apache Flink 1.6
Learn about the (awesome) changes Flink 1.6 brought to the Flink Table and SQL API, with a step-by-step guide to using them.
Flink, JSON and Twitter
Twitter and WordCount are probably the two most common ways to get started with streaming data. In this post, I explore some techniques to normalize, window, and introspect that data.
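The normalize-and-window idea the post describes can be sketched independently of Flink as a tumbling word-count window over timestamped tweets. This is a minimal illustration, not the post's actual code; the function and variable names are my own.

```python
import re
from collections import Counter, defaultdict


def normalize(text):
    """Lowercase the text and strip everything except word characters and spaces."""
    return re.sub(r"[^\w\s]", "", text.lower())


def tumbling_word_counts(events, window_seconds):
    """Group (timestamp, text) events into fixed-size windows and count words.

    Returns a dict mapping each window's start timestamp to a Counter of words.
    """
    windows = defaultdict(Counter)
    for ts, text in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        windows[window_start].update(normalize(text).split())
    return dict(windows)


# Illustrative data: (timestamp-in-seconds, tweet text)
tweets = [(0, "Hello, Flink!"), (5, "hello again"), (12, "Flink streams")]
counts = tumbling_word_counts(tweets, 10)
# counts[0] covers t=0..9 and counts[10] covers t=10..19
```

A real Flink job would express the same grouping with event-time tumbling windows, but the per-window counting logic is the same.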
The Flink Table And SQL API
Apache Flink offers two simple APIs for accessing streaming data with declarative semantics – the table and SQL APIs. We dive in and build a simple processor in Java using these relatively new APIs.
From Zero To Stream Processing
This week, I gave a talk at the Austin Kafka/Stream Processing Meetup. It was a great time and we had a fantastic turnout. I wanted to share the slides, examples, and a couple of thoughts.
Planestream – The ADS-B Datasource
When we started Eventador.io in 2016, we needed a simple data source to build the platform on: something that exemplified streaming data, was massively dynamic, and produced a lot of volume.
Kafka Security: ACLs—Whitelisting Your Clients
Every Kafka deployment on Eventador has an associated access control list (ACL). The ACL defines which IP addresses are whitelisted to produce to and consume from your deployment.
Connecting to Kafka
Getting connected, producing messages, and consuming them is relatively easy in Apache Kafka. In this post, we go over connecting, producing a simple message, and consuming that message via a native Python client.
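A minimal sketch of that round trip using the `kafka-python` client (one common native Python client; the broker address and topic name here are placeholders for your deployment's values):

```python
import json


def encode_message(payload):
    """Serialize a dict to UTF-8 JSON bytes, the form Kafka record values take."""
    return json.dumps(payload, sort_keys=True).encode("utf-8")


def decode_message(raw):
    """Deserialize a Kafka record value back into a dict."""
    return json.loads(raw.decode("utf-8"))


def produce_one(bootstrap_servers, topic, payload):
    # kafka-python is imported lazily so the sketch loads without the
    # dependency installed; `pip install kafka-python` to run it.
    from kafka import KafkaProducer
    producer = KafkaProducer(bootstrap_servers=bootstrap_servers,
                             value_serializer=encode_message)
    producer.send(topic, payload)
    producer.flush()  # block until the broker acknowledges the send
    producer.close()


def consume_one(bootstrap_servers, topic):
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(topic,
                             bootstrap_servers=bootstrap_servers,
                             auto_offset_reset="earliest",
                             consumer_timeout_ms=5000,
                             value_deserializer=decode_message)
    for record in consumer:
        return record.value  # first message only
    return None


if __name__ == "__main__":
    # Placeholder endpoint and topic; substitute your deployment's values.
    produce_one("localhost:9092", "hello-topic", {"msg": "hello, kafka"})
    print(consume_one("localhost:9092", "hello-topic"))
```

A TLS-secured deployment would also pass `security_protocol` and certificate paths to the producer and consumer constructors.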
Eventador 0.5: Click For Kafka
With the release of Eventador 0.5, we are introducing new plans with one-click provisioning.