One of the omnipresent challenges of building a product from scratch is that you don’t initially know exactly how customers will want to use it. You build the product you would want to use and are passionate about; however, you must also listen to customers as you evolve the product to deliver exactly what they really want. We wake up every day to this quest.
As we have grown, our larger-footprint customers began to ask for additional deployment options. There was a common thread to these requests: they wanted the amazing scalability and performance of Kafka teamed with the powerful compute capabilities of Flink, as well as the support, tooling, metrics, management, and ease of use of the Eventador platform. But they wanted the clusters and their associated data to reside inside their own AWS account.
The Eventador platform can now be securely deployed into your AWS account.
Here is how we were able to iterate on our product and revise our architecture to give these customers the configuration they wanted.
Some quick background info
First, a quick primer on the Eventador architecture. The Eventador platform is loosely built around the microservice paradigm: core services are designed to handle discrete tasks within the platform. There are services for creating clusters and interacting with a cloud provider, services for handling the console UI, a service for communicating with Kafka and Flink, a service for sending various run-time statistics, and so on. We utilize multiple languages, choosing the best one for the job at hand, mostly Java and Python. We use containers where they are appropriate and skip them where they aren’t. Services communicate via various APIs, including (perhaps not so predictably) PostgreSQL and also (predictably) Kafka, as well as (obviously) HTTP(S). We refer to these services in totality as our control plane.
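To make that concrete, here is a rough sketch of the idea: a set of discrete services, each with a responsibility and a transport. The service names and responsibilities below are hypothetical illustrations, not our actual internal service names.

```python
from dataclasses import dataclass
from enum import Enum

class Transport(Enum):
    POSTGRESQL = "postgresql"
    KAFKA = "kafka"
    HTTPS = "https"

@dataclass(frozen=True)
class Service:
    name: str
    responsibility: str
    transport: Transport

# Hypothetical inventory mirroring the kinds of services described above.
CONTROL_PLANE = [
    Service("provisioner", "create clusters / talk to the cloud provider", Transport.HTTPS),
    Service("console", "serve the console UI", Transport.HTTPS),
    Service("broker-gateway", "communicate with Kafka and Flink", Transport.KAFKA),
    Service("metrics", "ship run-time statistics", Transport.KAFKA),
    Service("state", "shared state and coordination", Transport.POSTGRESQL),
]

def services_using(transport: Transport) -> list:
    """Names of control-plane services communicating over a given transport."""
    return [s.name for s in CONTROL_PLANE if s.transport is transport]
```

The point isn’t the specific inventory; it’s that each service owns one task and one communication style, which is what makes the re-partitioning described below possible.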
Our development and production plans
On the Eventador platform, creating a Flink or Kafka cluster through the control plane is called a ‘deployment’. A deployment is everything needed to run a Flink or Kafka cluster. Deployments can be created, scaled up or down, and removed as needed. New deployments are created in a dedicated VPC and peered back to the control plane. Both the control plane and the deployments live in the Eventador.io AWS account; customers grant their applications access through the console UI by whitelisting addresses in ACLs, and may optionally use SSL for client connections.
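The whitelisting step can be sketched roughly like this, using Python’s standard `ipaddress` module. The function name and CIDR ranges are illustrative, not the platform’s actual API.

```python
import ipaddress

def is_whitelisted(client_ip: str, acl_cidrs: list) -> bool:
    """Return True if the client IP falls inside any whitelisted CIDR block."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in acl_cidrs)

# A customer whitelists the range their applications live in.
acl = ["10.20.0.0/16"]
print(is_whitelisted("10.20.4.17", acl))   # True: inside the whitelisted range
print(is_whitelisted("172.31.0.5", acl))   # False: outside it
```

SSL on the client connection is then an optional, separate layer on top of this network-level check.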
It’s worth reiterating that deployments are dedicated Kafka and Flink clusters; they aren’t multi-tenant. Each cluster lives in a dedicated VPC. Customer data and metadata both reside in the Eventador AWS account, partitioned by VPC.
This is how our service has historically worked, and how our development and production plans work today.
Engineering the next logical step
Over time, our larger customers asked for additional flexibility. They wanted clusters deployed into their own AWS account, peered to the VPC where their producing, consuming, and processing applications live.
This design has some benefits:
- Common billing structures
- Cost accountability measures
- Security audit and standardization
- Ease of integration with other AWS components like S3
- Partitioning between data and metadata concerns
Because our platform is built on services that perform discrete tasks, it was easy to split things out and create a new paradigm for deployments. We created an entirely new control plane for enterprise accounts, scaled independently of the deployments it manages. The control plane is granted access to the customer VPC via VPN over SSL. The customer VPC is peered with the customer’s application VPC. Only the services that need to talk to the clusters are permitted to. Deployments can be created in any region and any zone, scale independently, and roll up under the customer’s account.
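One concrete constraint behind this peering setup: AWS VPC peering requires that the two VPCs’ CIDR blocks not overlap. A quick way to sanity-check candidate ranges, again as an illustrative sketch rather than our provisioning code:

```python
import ipaddress

def can_peer(vpc_a_cidr: str, vpc_b_cidr: str) -> bool:
    """AWS rejects a VPC peering connection if the CIDR blocks overlap."""
    a = ipaddress.ip_network(vpc_a_cidr)
    b = ipaddress.ip_network(vpc_b_cidr)
    return not a.overlaps(b)

# Deployment VPC vs. the customer's application VPC (example ranges).
print(can_peer("10.50.0.0/16", "10.60.0.0/16"))  # True: peering is possible
print(can_peer("10.50.0.0/16", "10.50.1.0/24"))  # False: ranges overlap
```

Checks like this have to happen before provisioning, since the customer, not Eventador, controls the address space of the application VPC.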
Communications between the control plane and the deployment are carried over SSL on a dedicated VPN link. Metrics, management, and monitoring traffic all pass over this link. Cluster hosts are tagged for billing purposes and clearly labeled as Eventador.io-managed components.
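A sketch of what such billing and ownership tags might look like. The tag keys and values here are hypothetical, not the exact keys we apply in production:

```python
def billing_tags(customer: str, deployment_id: str, cluster_type: str) -> dict:
    """Tags applied to every cluster host: a billing roll-up for the
    customer's cost reports, plus a clear marker that the instance is an
    Eventador.io-managed component."""
    return {
        "ManagedBy": "eventador.io",
        "Customer": customer,
        "DeploymentId": deployment_id,
        "ClusterType": cluster_type,   # e.g. "kafka" or "flink"
        "CostCenter": f"{customer}-streaming",
    }

print(billing_tags("acme", "dep-42", "kafka")["ManagedBy"])  # eventador.io
```

Because the hosts live in the customer’s account, these tags flow straight into the customer’s own AWS cost-allocation reports; nothing extra is needed on our side for the billing roll-up.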
- Metadata is captured and stored in the dedicated control plane
- Customer data lives in the customer account and VPC
By moving to this design we also gain a substantial latency reduction, as much as 5-6x, between the application VPC and the deployment VPC, because traffic no longer traverses an AWS internet gateway.
Our architecture made redesigning and changing the boundaries around data and metadata logically easy. Making the changes to the services to ensure that they still function appropriately and reliably was a touch more work; automation and orchestration, still more. In the end, however, it yielded a clean architecture that’s easy to explain and reliable in practice.
Introducing the Eventador enterprise plan
With these changes, we can now offer an entirely new plan. When you sign up as an enterprise plan customer, our automation builds the control plane, the VPN components, and either we build or you build the associated account that will house the deployments. We have a number of models for doing this depending on the customer preference, time to market requirements, and comfort with AWS management. You can choose the plan that fits your project requirements and topology preferences. We are here to help at every step.
The complete platform
Eventador.io strives to be the best end-to-end streaming data platform. The enterprise plan is a huge leap forward in our ability to serve the very largest customers with a fully managed, scalable, and high-performance platform.
Eventador allows you to focus on developing your real-time application and not the infrastructure behind it.
If you are interested in the enterprise plan (or any other plan), or have questions, give us a buzz and we can discuss how the enterprise plan works in more detail and how it can serve your particular use case.