From zero to stream processing
Early this week I gave a talk at the Austin Kafka/Stream Processing Meetup. It was a great time and we had a fantastic turnout. I wanted to share the slides, examples, and a couple of thoughts on the Meetup itself.
When we were brainstorming this talk, we really wanted to capture how to get started building streaming applications in Flink in the simplest logical progression: a step-by-step guide, approachable even for Java newbies, that takes you from nothing to a working stream processor.
Flink provides a number of APIs for working with streaming data. The newest, and most approachable, is the Table API with Flink SQL, and that's the one I chose to talk about. In my mind it strikes the right balance between power and approachability. I will cover the API in more detail in an upcoming post, but for now you can check out the GitHub repo and see the slides below. Be sure to reach out at email@example.com if any of this doesn't make sense, I made a mistake, or you just want a bit of help.
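To give a feel for what the Table API with Flink SQL looks like, here is a minimal sketch of the kind of query the talk walked through: a Kafka-backed table of ADS-B reports with a windowed count per aircraft. The table name, field names, topic, and broker address are all illustrative assumptions, not the actual planestream schema.

```sql
-- Hypothetical source table: ADS-B position reports arriving on a Kafka topic.
-- Names here are illustrative, not the real planestream schema.
CREATE TABLE adsb_positions (
  icao        STRING,          -- aircraft identifier
  altitude    DOUBLE,
  event_time  TIMESTAMP(3),
  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic'     = 'adsb',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format'    = 'json'
);

-- Count position reports per aircraft over one-minute tumbling windows.
SELECT
  icao,
  TUMBLE_START(event_time, INTERVAL '1' MINUTE) AS window_start,
  COUNT(*) AS reports
FROM adsb_positions
GROUP BY icao, TUMBLE(event_time, INTERVAL '1' MINUTE);
```

The nice part, and the point of the talk, is that this is plain SQL over an unbounded stream: Flink handles the state, watermarks, and windowing underneath.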
For the demo, I used our sample planestream ADS-B data, which I have blogged about previously. I brought in one of our RTL-SDR devices to show off and talked a little bit about how the whole thing works. Hopefully this brought a unique perspective and kept it from being 'just another talk'. We processed the ADS-B data using the TrafficAnalyzer code. Live demos rarely work, but this one happened to go just fine (after a few seconds of delay).
I'll be honest here: I wasn't quite prepared for the number of Spark comparison questions. There were a ton of folks actively looking for a framework and APIs that work really nicely with true streaming data. I suspect a follow-on talk that dives deeper would really excite these folks.
A huge thanks to the folks at HomeAway for hosting the event! And a huge thank you to all the attendees, who asked a ton of detailed, well-thought-out questions. I had a blast, and I hope everyone else did too.
Ok, here are my slides: