Eventador 0.5: Click for Kafka
With the release of Eventador 0.5 we are introducing new plans with one-click provisioning.
This lets you deliver your projects more quickly, save costs on valuable resources, leverage the cloud more effectively, get worry-free 24×7 support, and run best-in-class data pipeline infrastructure simply by using Eventador.io.
I thought I would outline the 0.5 changes as well as expose some of the details behind our technology stack in the process.
Using Eventador, you can now provision a whole stack with a single click: we handle provisioning Kafka, ZooKeeper, PipelineDB, and Eventador Notebooks for you. We call this collection of data pipeline services a deployment, and it includes everything you need to start processing real-time data.
You can now choose from three distinct pricing plans: Developer, Small, and Medium. They range from development and testing up to small and medium production deployments. If you need something larger or more tailored to your needs, we have custom plans available too.
All of these plans use a dedicated AWS EC2 host per logical service: one per broker, one per ZooKeeper node, and so on. Currently, we use cost-effective but performant t2 and m4 EC2 instances. Storage is EBS gp2 solid-state disk across the board. Every deployment is assigned its own dedicated VPC, and access is only allowed to services you whitelist by IP address and/or VPC peering.
Paid plans include:
- Dedicated Kafka Cluster (0.10.1)
- SSL connections option
- Dedicated VPC + peering
- IP ACL whitelisting
- PipelineDB SQL Stack (0.9.6)
- Unlimited Topics
- Eventador Notebooks (4.2.3)
- SSD Disk
- Stats Dashboard
- 24x7x365 Support
- Deployed on AWS US-East Region
Paid plans offer varying amounts of:
- Number of brokers (as many as you need)
- Number of ZooKeeper nodes (1 or 3)
These plans deploy in a few minutes and are completely ready to go. All plans are free for 30 days, and don’t require a credit card to get started.
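As a rough sketch of what connecting to a deployment over SSL looks like, a standard Kafka client would use properties along these lines. The hostname, port, and file paths below are placeholders for illustration, not real Eventador endpoints; your deployment's actual connection details come from your Eventador account.

```properties
# Hypothetical broker endpoint; substitute your deployment's real host and port.
bootstrap.servers=kafka0.example-deployment.eventador.io:9093
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
```

Because access is gated by IP whitelisting and/or VPC peering, the client machine must also be on your deployment's access list before these settings will connect.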
The Sandbox Plan
We now have a free plan called the Sandbox plan. It gives you a simple way to experience not just Kafka, but the entire breadth of the Eventador offering. We didn’t skimp on features; it’s not some trimmed-back half-deployment. It has everything our paid plans have, just not as much of it. We also didn’t create a multi-tenant Kafka cluster. It’s your cluster, and only your cluster. All the security controls, endpoints, etc. are dedicated to you. You don’t have to worry that someone will see your data.
It was important to us to let you experience our entire offering without adding a bunch of ‘paid-only’ features. But this presented an engineering dilemma: how could we give out a free plan to customers and not go broke doing it? How could we offer our entire stack while keeping proper security and performance controls?
We decided to employ Linux Containers (LXC) for this task. LXC is operating-system-level virtualization for running multiple isolated containers on a single host. We take a simple AWS host and provision multiple unprivileged containers to build a completely isolated Eventador deployment. We pre-provision and test these environments, so when a customer shows up and wants one we can easily allocate that stack to them. We chose LXC because it’s stable, reliable, and has a good history regarding security patching. That gave us the confidence to run the Sandbox plan on it, and it gives us tools (cgroups) to ensure customers can be partitioned effectively on AWS compute. It also lets us abstract deployments from the underlying infrastructure, giving us control over costs while still letting customers play with our entire stack. This design made the economics work out so that we could offer the plan for free.
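To give a feel for the kind of per-container partitioning cgroups make possible, an unprivileged LXC container's config can cap memory, CPU, and disk I/O with entries like the following. The values here are illustrative examples, not our actual production settings.

```
# Illustrative cgroup limits in an LXC container config file.
# Numbers are examples only, not Eventador's production values.
lxc.cgroup.memory.limit_in_bytes = 2G
lxc.cgroup.cpu.shares = 512
lxc.cgroup.blkio.weight = 500
```

Limits like these are what let multiple isolated Sandbox deployments share a single AWS host without any one of them starving the others.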
The only restricted component of the Sandbox plan is overall performance. We limit the maximum amount of resources any given deployment can consume as well as topic count. Thus, the Sandbox plan isn’t good for production workloads. It is, however, great for simple POC work, R&D, exploring our service, teaching Kafka, and logical testing.
Of course, we also provide fully customized Kafka clusters in various AWS datacenters, with customized fault-tolerance options, versions, and so on. These plans work best when you need a very specific version or configuration. To get a custom plan, simply email us.
We hope you enjoy our new offerings, and if you are interested in giving them a whirl, hop over to our plans page and check them out. And of course, if you have questions, ideas, or feedback, please ping us.