Since we first opened the doors at Eventador.io, customers have been building applications that use Apache Kafka for a wide variety of streaming data use cases. Over time, it became clear we were only solving one part of the complete picture. With Kafka, our service had the data transport, durability, and scalability, but what was missing was a mature, accurate, and scalable component where customers could deploy applications that process the data itself – until now.
Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming applications. It’s a natural fit with Apache Kafka. Flink can be used for a myriad of use cases, from AI/ML to manufacturing, transportation, IoT, and much more. Flink has a thriving community, with primary support coming from Data Artisans. We are proud and humbled to join this community. If you are new to the stream processing ecosystem, you can catch up on how Flink works here.
Apache Flink has become quite popular in recent years; a number of well-known companies (Uber, Netflix, Capital One) have adopted it and woven it into their data backbones for critical line-of-business processing applications. We’ve fielded numerous questions about Flink and seen strong interest from customers. Our path became clear.
We are proud to announce that you can now deploy Apache Flink 1.3 on the Eventador platform (beta), including a number of enhancements that we think will make your life easier and more productive.
Flink on Eventador
Flink is a very mature and powerful processing system, and our philosophy has been to build upon and extend it to not only work in a cloud environment, but to be awesome in one. It needs to remain powerful and robust, but also be easy to deploy code to, provision, and scale. To this end, we have built our product to wrap Flink in nice ways, and also to extend its functionality and usefulness via common-sense hooks, UI elements, transparency, and workflow management. Let’s chat about some of the features in a bit more detail:
It takes a single click to spin up a fully functional, ready-to-go cluster on AWS – complete with job managers, task managers, instrumentation, monitoring, the Eventador Console control plane, and more. You can choose from a variety of profiles depending on how big or powerful a cluster your project needs. Jobs are categorized into projects, and each project is associated with an existing or new GitHub repo. Code deploys are optionally automatic. Once running, the Eventador Console control plane gives you a simple one-pane view of job status, runtimes, and stats.
Eventador Projects is a brand-new component of our service. Projects allow developers to easily integrate existing SDLC workflows into a Flink project via GitHub. Eventador handles all the complexity of the build process (typically Maven) and the deploy process. This includes CI/CD components, Git Flow, or any other development lifecycle workflow you may have. When code is merged (or manually selected), it is deployed automatically and seamlessly to your Flink cluster. This makes development, test, and production management of jobs a snap. It also wraps the entire GitHub ecosystem around Flink. If you have a specific integration process or specific branching strategies, those workflows apply directly to your Flink code and Flink deployment on Eventador.
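For context, a Flink job repository of this kind is usually a standard Maven project. Here is a minimal, illustrative sketch of what the pom.xml for a Flink 1.3 job might declare – the group, artifact, and version coordinates for the job itself are hypothetical, not part of any Eventador template:

```xml
<!-- Illustrative pom.xml sketch for a Flink 1.3 streaming job.
     The com.example coordinates are placeholders. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-flink-job</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <!-- Core DataStream API -->
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-streaming-java_2.11</artifactId>
      <version>1.3.2</version>
    </dependency>
    <!-- Kafka source/sink connector -->
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
      <version>1.3.2</version>
    </dependency>
  </dependencies>
</project>
```

Running `mvn clean package` against a project like this produces the job JAR – that build-and-deploy step is the part Eventador automates for you.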
Templates make building projects easy. Are you just getting started with Flink? Or perhaps you need to get a simple filter deployed ASAP? Simply select a base template (or none) when you create your project. These templates not only serve to teach; they also help reduce time to market. Templates are simply GitHub repositories – Eventador clones them when you create a project. A template includes all the components needed to build an entire Flink job – typically in Java. It’s a complete project with all the scaffolding and files required. You focus on the core logic; we’ve got the rest.
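To make "you focus on the core logic" concrete, here is a minimal sketch of the kind of filter predicate a simple-filter template might ask you to fill in. In a real Flink job this logic would live inside a `FilterFunction` passed to `DataStream#filter`; it is shown here as plain Java so it runs standalone. The `temp=` event format and the 90.0 threshold are illustrative assumptions, not part of any Eventador template:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical core logic for a "simple filter" job. In Flink, isAlert()
// would be the body of a FilterFunction<String> applied to a DataStream.
public class AlertFilter {

    // Keep only events whose reported temperature exceeds a threshold.
    // Event format ("... temp=<value>") and threshold are illustrative.
    public static boolean isAlert(String event) {
        int idx = event.indexOf("temp=");
        if (idx < 0) {
            return false; // event carries no temperature reading
        }
        try {
            double temp = Double.parseDouble(event.substring(idx + 5).trim());
            return temp > 90.0;
        } catch (NumberFormatException e) {
            return false; // malformed reading: drop rather than crash the job
        }
    }

    // Stand-in for DataStream#filter over a finite batch of events.
    public static List<String> filterAlerts(List<String> events) {
        return events.stream()
                     .filter(AlertFilter::isAlert)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> events = List.of(
            "sensor-1 temp=72.5",
            "sensor-2 temp=95.1",
            "sensor-3 humidity=40");
        // Only the sensor-2 event passes the filter.
        System.out.println(filterAlerts(events));
    }
}
```

Everything else a job needs – the entry point, the Kafka source and sink wiring, the build files – is the scaffolding the template provides.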
For Flink GA we plan to expand the number of templates to cover a wide variety of use cases.
Eventador Console and Flink integration
The Eventador Console has an integrated view of the Flink cluster, showing jobs, the execution plan, the state, and more. This gives you full control over running jobs and the overall topology of the cluster. The Flink UI is built into the Eventador Console, so it has security controls to ensure only your team has access to the information.
We are currently deploying Flink for a handful of early access customers. If you are interested in participating we would love to have critical feedback. We are also keenly interested in your workflow for building and deploying streaming applications. If you would like to join the beta, ping us here.