Over the past two decades I have witnessed massive shifts in the data technologies landscape. I have seen the RDBMS market mature, NoSQL technologies gain adoption, and organizations of every size struggle with data management in one way or another. Throughout all of this change, one area has become more and more intrinsic to the data backbone of many companies: real-time data. And for good reason: the current pattern of writing data to some flavor of database or file sink, then processing it with a B-tree index path, a parallel processing framework, or both, can't meet real-time needs. Some other mechanism must be engineered.
We founded eventador.io to solve an increasingly pervasive problem: data needs to be more real-time.
The demand comes from an ever-evolving ecosystem that has figured out that data matters. There is lots of it, and customers demand that it be contextually relevant, secure, and real-time. Companies have learned that using data makes them more competitive and compelling. Real-time information is the cherry on top of the data cupcake. The demand is growing, and it's growing rapidly.
Yet there aren't many options that let these companies succeed with low barriers to entry, high performance, security by default, and a fast time to market. So we built the platform we dreamed of, and we called it eventador.io.
Let's be clear: real-time computing and real-time data processing aren't new technologies. Real-time data processing works by handling small bits of data as they flow into the system, as they happen. Data is not stored and then retrieved to answer questions; rather, it's immediately mutated, aggregated, and routed to requesting applications through data pipelines. It's a completely different paradigm. The difference today is that the compute ecosystem around these technologies has changed, making them more powerful and less costly. The advent and adoption of the cloud, along with messaging systems like Apache Kafka, mark a distinct inflection point in this area of computer science.
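To make the paradigm concrete, here is a minimal sketch in plain Python of processing events as they arrive: each event immediately updates a running aggregate that is "routed" downstream, with no store-then-query step. The event shape and sensor names are invented for illustration; a real pipeline would receive these events from a message bus rather than an in-memory list.

```python
from collections import defaultdict

# Hypothetical event stream. In a real system these would arrive
# continuously over a messaging system, not from a list.
events = [
    {"sensor": "a", "temp": 21.0},
    {"sensor": "b", "temp": 19.5},
    {"sensor": "a", "temp": 23.0},
]

# Running aggregates, updated per event as it flows in.
counts = defaultdict(int)
totals = defaultdict(float)

def process(event):
    """Aggregate a single event the moment it arrives and
    emit the updated running average downstream."""
    sensor = event["sensor"]
    counts[sensor] += 1
    totals[sensor] += event["temp"]
    return sensor, totals[sensor] / counts[sensor]

for event in events:
    sensor, running_avg = process(event)
    print(f"{sensor}: running average {running_avg:.1f}")
```

Each event is handled once, at arrival time; the answer is always current, because it is maintained incrementally rather than computed on demand.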
The eventador.io platform uses Apache Kafka as its backbone. Kafka is a real-time messaging system that runs on the JVM. It was invented at LinkedIn to solve real-time data problems, was open-sourced, and has since exploded in adoption. It's now an Apache project, and it's at the core of eventador.io.
Kafka excels at processing real-time events. It's a messaging system, but it also has properties of a log-structured database: producers append records to an ordered log, and consumers read them sequentially by offset. Kafka is scalable, fault-tolerant, and durable. Because of this, it's a perfect foundation for processing real-time data.
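The log-structured idea can be sketched in a few lines of Python. This is a toy model only, assumed for illustration: real Kafka adds partitions, replication, and disk persistence, which is where its fault tolerance and durability come from.

```python
class Log:
    """Toy append-only log, illustrating the structure Kafka builds on.
    A sketch, not an implementation: Kafka itself adds partitioning,
    replication, and persistent storage."""

    def __init__(self):
        self._records = []

    def append(self, record):
        # Records are only ever appended; a record's offset
        # identifies it for its whole lifetime.
        self._records.append(record)
        return len(self._records) - 1  # offset of the new record

    def read(self, offset):
        # Consumers read sequentially from any offset they choose,
        # so independent consumers can each replay the same history.
        return self._records[offset:]

log = Log()
log.append({"event": "page_view", "user": 1})
log.append({"event": "click", "user": 2})
print(log.read(0))  # full history
print(log.read(1))  # from the second record onward
```

Because the log is immutable and offset-addressed, many consumers can read the same stream at their own pace without coordinating with the producer, which is a large part of what makes this model scale.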
Amazing as Kafka is, it isn't a complete real-time data solution on its own. This is where eventador.io steps in, extending Kafka with important components: tight security, cloud-based deployment and scaling, a SQL query interface, a schema manager, and simple consume and produce endpoints. These components unlock Kafka's potential and transform it into a true fast-data platform.
Eventador.io is engineered to make deploying Kafka data pipelines simple and useful right out of the box. We drew on our decades of experience building massive data systems at places like eBay/PayPal and ObjectRocket/Rackspace to build a completely unique and powerful platform. It's the system we would want to use.
We think eventador.io will be useful in a number of verticals, including wearables, sensor data, supply chain/manufacturing, gaming, social, and real-time analytics and data science, just to name a few. To that end, for our beta we focused on a few key foundational areas:
We are having a blast building eventador.io, and we love making customers successful. Starting today, eventador.io is in private beta. We relish the opportunity to talk to customers, understand each unique use case, and build eventador.io to solve these real-world problems. I am so proud of what the team has built so far and excited to evolve it, so sign up now and see what I am talking about!