The New System of Record

I’m fresh off the plane from two back-to-back weeks in San Francisco, starting with Kafka Summit 2018 and finishing up with Oracle OpenWorld. Two drastically different conferences, but both reinforced our thinking and bolstered our opinion: the de facto system of record for the enterprise is moving to a distributed log architecture, and the movement is well underway.

Of course, at Eventador we’ve believed this for some time, but even I was surprised by the trajectory. The modern enterprise has enormous amounts of data, but predictably, it’s not the data sitting in a data lake that is most interesting. It’s the data that arrived a second ago that can change the game for a modern business. This bodes well for immutable distributed log technologies like Apache Kafka.

The keynote at Kafka Summit by Chris D’Agostino of Capital One was particularly on point. Capital One is pushing the state of the art in banking, and they are using data to drive it. They are building an Enterprise Streaming Platform to handle this new workload, and it’s built, not surprisingly, on Apache Kafka and Apache Flink. A talk by Bob Lehmann of Bayer Crop Science (formerly Monsanto) underscored the same trend: they started building an Enterprise DataHub in 2014 for the same reason, to democratize data inside the organization. Of course, none of this is revolutionary in itself; Netflix, Uber, and others have been building platforms that perform this function for some time and have wrapped their core businesses around them. The enterprise data hub has become the de facto system of record. Why now? What is driving this adoption?

The answer seems to be modern application design. Timely data is needed to drive interesting use cases and dynamics inside line-of-business applications. We’ve seen analytic workflows based on streaming data for some time, but the trend now is to power new, disruptive, compelling, and competitive applications from that same stream of data. Building the data hub unlocked the ability for developers to ship applications that are better than the competition’s, and that success drives further adoption. Case in point: Capital One’s Eno, a real-time, automated, customer-facing application driven from streaming events. It is highly compelling, and it would be impossible without the events inside Kafka topics and the computations performed on top of Apache Flink. Uber powers Uber Eats this way, and the list goes on. It’s not just BI, DW, or ‘insights’; it’s line-of-business, customer-facing applications that are now feeding off the Enterprise Data Bus.

And why not? I was chatting with a developer at a very large and well-known corporation, and from their standpoint, they just want to write applications that produce data to the bus (Kafka) and be done. Effectively, commit() against the bus and move on. It’s the same dynamic as a traditional database request/response, recast in the new messaging/event paradigm. Because it’s so easy to push just about any event that happens in a business onto the bus, the bus keeps becoming more valuable.
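As a minimal sketch of that workflow, producing an event really is about this small. The broker address, topic name, and JSON payload below are hypothetical stand-ins, not anything a customer shared:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventProducer {
    public static void main(String[] args) {
        // A connect string and creds are all the developer needs.
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // hypothetical broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Produce the event to the bus and move on -- the
            // "commit()" of the messaging/event paradigm.
            producer.send(new ProducerRecord<>("orders", "order-123",
                    "{\"item\": \"widget\", \"qty\": 2}"));
        } // close() flushes any buffered records before returning
    }
}
```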

This drives more development teams to want to use the bus, and the advent and rapid adoption of microservices have made it natural. Developers simply need a connect string and creds to speak to the bus, and then any and every piece of information in the business is flowing to them. They might need an aggregation or some filtering, and the modern bus has that capability too (Apache Flink or Kafka Streams), as sketched below.
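Here is a hedged sketch of what such a filter might look like with Kafka Streams. The orders and priority-orders topics, the broker address, and the JSON field being matched are all hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PriorityOrderFilter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "priority-order-filter"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");       // hypothetical broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");

        // Keep only the events downstream consumers care about and
        // write them to a derived topic on the same bus.
        orders.filter((key, value) -> value.contains("\"priority\": \"high\""))
              .to("priority-orders");

        new KafkaStreams(builder.build(), props).start();
    }
}
```

The appeal of the pattern is that the filtering service is just another client of the bus: it needs the same connect string and creds as any producer or consumer, and its output lands back on the bus for anyone else to use.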

One evening in the business district over sushi, my co-founder Erik Beebe and I traded quips and predictions about the future of the cloud, specifically the role of Kubernetes in cloud computing. I think we both agree it’s early, but moving forward, containers and Kubernetes are game changers. Yes, I said it’s early. Still, an omnipresent theme at both conferences was how containers and Kubernetes have become the de facto components for building higher-level services and managing clusters of distributed systems, especially in the cloud.

At Eventador, we are building almost every new feature and service on Kubernetes. Having joined the Oracle Startup Ecosystem, we find this especially important: developing against multiple cloud environments and easily deploying discrete services for the enterprise data hub is front and center for us. Managed Kubernetes environments like AWS EKS and Oracle Container Engine for Kubernetes are force multipliers, and they are changing how a service like Eventador is deployed and managed.

It was great attending both conferences, demoing the Eventador Stack, and, in particular, sitting down to dinner with customers to understand where they are in their streaming data lifecycle. We’ve worked hard to ensure customers have a fully managed solution, from a simple managed Kafka environment all the way up to a full Enterprise Data Bus, and it was great to get feedback and lessons straight from the front lines on features like intrinsically integrated Apache Kafka and Apache Flink, our teams and projects interfaces, simple and multi-faceted security controls, easy integrations into the bus with Eventador Elements (coming soon!), and, of course, a fully managed, hands-on support structure.

But most importantly, Eventador deploys into your own AWS account, giving you unparalleled security, billing, cost accounting, and pricing options. This is quite unique to the Eventador platform and an important design consideration when evaluating a fully managed Enterprise Data Bus option, and it’s something customers asked for and love.

Back in Austin, it’s time to open the editor and get back to work. There is a ton to do, and we are eager to keep building features and capabilities into the Eventador Stack.
