Senior Director of Marketing
In this episode of the Eventador Streams podcast, Kenny and I had the opportunity to chat with Maximilian Michels, a software engineer at Lyft working on their streaming pipelines, about his rich tenure with both Apache Flink and Apache Beam.
Check out a fun – and educational – discussion around Flink, Beam, how they benefit each other and what the future looks like in this episode of the Eventador Streams podcast:
Want to make sure you never miss an episode? You can follow Eventador Streams on a variety of platforms including Soundcloud, Apple Podcasts, Google Play, Spotify, and more. Happy listening!
Episode 09 Transcript: A Primer for Apache Beam on Flink with Maximilian Michels
Leslie Denson: For many of you, Lyft likely comes to mind when you think about companies doing top-notch work with streaming data, especially when it comes to using both Apache Flink and Apache Beam. Kenny and I are excited to be joined today by Maximilian Michels, who, if being a PMC member for both Flink and Beam isn’t enough, is also a software engineer working to develop features and improvements for Lyft’s streaming pipelines. Max is giving us a great behind the scenes look at both technologies in this episode of Eventador Streams, a podcast about all things streaming data.
LD: Hey, everybody, welcome back to another episode of Eventador Streams. Kenny and I are joined today with a very special guest that we’re super excited to talk to. Max Michels, who is currently a software engineer at Lyft who works with both Beam and Flink, which is in large part because he has a very lengthy and fantastic history with both of those technologies. So Max, we’re really excited to have you today to talk through them. Thanks so much for joining us.
Maximilian Michels: Hi Kenny. Hi, Leslie. Thanks for having me. I’m really glad to be here today.
Kenny Gorman: Yeah. Welcome.
LD: Why don’t you tell us a little bit about yourself, kind of the history of how you got started working with streaming technologies, what you guys are up to at Lyft, and then we’ll just dive into questions from there.
MM: Awesome. Yeah, sure. So yeah, the most obvious question is probably how did I get involved with Flink. Yeah, first of all, like the folks that originally started the company Data Artisans, or Ververica, I use those terms interchangeably. It’s sometimes hard because my brain is kind of hardwired to Data Artisans.
KG: That’s right. That’s right. Us too.
MM: Yeah. So anyways, they’re based in Berlin, originally only based in Berlin, and I also moved to Berlin to study here. So I studied … my background is in computer science, and I’m not from the technical university that all these Data Artisans folks are from, I studied at the Freie University, if that’s interesting. Well, as a graduate student, I became really interested in distributed systems, and that was due to a course that I attended at … Well, it was actually given at the Freie University, but it was lectured by two researchers from the Zuse Institute Berlin. If you’re not familiar with Konrad Zuse, he sometimes doesn’t get a lot of credit, but he was really a computer pioneer. Back in the ’30s and ’40s, he developed the first ever freely programmable computer, called the Z3. I don’t know if you’ve heard about that?
KG: No, now you’re embarrassing me with my computer history. So keep going. This is good.
MM: No, I always bring this up because I’m kind of proud of it, a little bit. Well, it was a mechanical computer, so there were other pioneers that also built freely programmable computers that weren’t mechanical. But anyways, in this lecture we were also presented a research project called Scalaris, which was a distributed transactional database written in Erlang. That really got me hooked. There was a position at the research institute, and I started working there as a graduate student.
MM: Actually this is where I met Ufuk. I don’t know if you know Ufuk? You probably know him? From Data Artisans.
MM: Yeah. Well, he’s maybe a bit in the background these days, but he’s working on like, for example, the Data Artisans platform and then a lot on Flink in the past as well. So we were working together at this research institute and Ufuk, kind of, after a year he kind of got bored and he told me about this really cool research project at Technical University Berlin. So that’s how he got into the research group there. He ended up quitting his job at Zuse Institute and joined that group. That group was comprised of Stephan, Kostas, Robert, Fabian, and actually all the original founders.
MM: Yeah, and Ufuk told me also like they were building like a real open source project. I was like what is that? I mean, I didn’t … I mean, I knew that it was like proper open source project, but I was working on Scalaris, which was like this toy project for researchers. It was open source, but without a community. So later Ufuk told me, “Yeah, we’re building out a startup around this.” So I visited the office after I graduated and then I started working for Data Artisans in like October, 2014.
KG: Wait a second. So you went from Erlang to Java, right?
MM: I went from Erlang to Java. Yeah.
KG: Okay. You missed that part. I think that’s important, right?
MM: Yeah. I think Erlang is a great language for like concurrency and distributed systems. So that was definitely educational, but it’s also a bit … Well, it’s not object-oriented and has a lot of function elements, which doesn’t make it very appealing for like the masses, I guess-
KG: Right, for an open source project, Java is obviously a great place to start building.
MM: Exactly. Somehow in Germany at universities, the teachers, they love Java. So I had Java knowledge and worked on projects before. So I mean, Java was not new to me.
KG: That’s great.
MM: So yeah, I kind of started as the second employee there besides the founders; the first one started right before me. So yeah, I got to experience the very early startup life, and I learned a lot there in that time, because the people there, and you know them, are very smart, very dedicated.
KG: Yeah. The thing that always … and you were part of this team. The thing that always really impressed me about Flink, and one of the reasons that we were initially drawn to it, I think we’ve talked about it a lot on the podcast is just there’s like some core thesis that had to be part of the stack. It had to be correct. It had to manage state and some of these things are just like core constructs that are super important in these projects. I think others have not focused on them and to their peril. I think that’s one of the reasons why Flink is so strong these days is just that kind of … I mean, even kind of an academic mindset around it, that it just has to have these core constructs in it. I think that’s pretty exciting.
MM: Yeah. That’s a very good observation. I think that really got hardwired into my brain. I mean, I also had a research background, but building software and developing the ideas behind the software was just on a totally different level in that company, with the kind of precision and all the forethought that went into executing. I think that’s really unique, and that’s really why Flink is so powerful.
KG: Yeah. Yeah. That’s for sure.
MM: I mean, I don’t know how obvious that is for people who started using Flink recently, but things were kind of different back then. Flink had a solid foundation through research, it had been a research project since, I think, 2010, under different names. So it was architecturally built for streaming. But at that time we were still trying to compete with Spark. We were trying to become like Hadoop’s successor in the batch world.
KG: Right. Right.
MM: That seems absurd, but I have to give Kostas and Stephan credit for this, because in early 2015 they pulled us all into a room and said, “We have to change something. This is not going to work out. Spark is killing it and we have to try something really innovative.” So they pitched us streaming. There was a streaming API before, which, I mean, I would say was more like a toy feature. Some people worked on it, but there wasn’t a whole lot of effort. It was not a top priority going into this feature.
MM: Yeah. I mean, for example, at the beginning there was even no windowing, no event time, no watermarks, no state backend, no checkpointing, savepoints, or timers. CEP, for instance, didn’t exist. Neither did SQL. I mean, there was some early work to enable SQL, because the Table API was there really early on, which became the basis for SQL. So a lot of it was figuring out stuff. We read the Google papers, MillWheel, Flume, the Dataflow paper of course, and experimented a lot with companies, POCs, had lots of discussions. And yeah, often we found it took a couple of approaches to get it right, which is a common theme, I think, in software development.
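To make the concepts Max lists concrete, here is a toy sketch in plain Python of event-time tumbling windows with a watermark. This is an illustration of the idea only, not Flink's actual API; the function names, the fixed window size, and the out-of-orderness bound are all invented for the example.

```python
from collections import defaultdict

WINDOW_SIZE = 60  # tumbling windows of 60 seconds of *event* time

def tumbling_window_counts(events, max_out_of_orderness=10):
    """Toy event-time windowing: events are (event_time, key) pairs that may
    arrive out of order; a watermark trails the highest event time seen, and a
    window's count is emitted only once the watermark passes the window end."""
    windows = defaultdict(int)   # window start -> running count
    emitted = {}
    watermark = float("-inf")
    for event_time, key in events:
        start = event_time - event_time % WINDOW_SIZE
        windows[start] += 1
        # watermark = "no event older than this is expected anymore"
        watermark = max(watermark, event_time - max_out_of_orderness)
        for w in [w for w in windows if w + WINDOW_SIZE <= watermark]:
            emitted[w] = windows.pop(w)  # window is complete, fire it
    # end of (bounded) input: flush windows that are still open
    emitted.update(windows)
    return emitted

# slightly out-of-order stream of (event_time, ride_id) tuples
events = [(5, "a"), (12, "b"), (3, "c"), (65, "d"), (80, "e"), (130, "f")]
print(tumbling_window_counts(events))  # {0: 3, 60: 2, 120: 1}
```

The point of the watermark here is exactly what made early stream processors hard to get right: without it there is no principled moment at which a window can be declared complete in the presence of late data.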
KG: Yeah. Right. We were just talking about that last podcast is, man, where you started from and where you ended up or oftentimes in different places.
MM: Oh yeah. I mean, that’s definitely true for Flink. The streaming core definitely was helpful for pivoting towards streaming, for sure. So that year, 2015, well, no, actually I think it was the end of 2014, Google came out with the Google Cloud Dataflow SDK, which was this framework where you could build a runner, and the runner would execute this Dataflow programming model. You could execute it in different environments: on Dataflow of course, with Flink, with Spark.
MM: It seemed like something really interesting. So we kind of jumped on it, Yasha and me, and yeah, shout out to Yasha, he definitely had the initiative there and saw the potential early on. So we implemented the Flink Runner for this, and this SDK later became Apache Beam, you know?
MM: And actually, I took a little bit of a detour. I left Data Artisans after a little more than two years because I thought I should see how others are doing it. So I joined a startup called Crate.io, which built a SQL layer on top of Elasticsearch. Actually, if you don’t know it, it’s a very cool product, and it goes into … Well, I guess it goes in a similar direction as you are going with Eventador, but less streaming, more for exploring data.
KG: Yeah. I actually, I knew about Crate a while back and I saw that when we were talking earlier, I saw that you had been there and I poked at it, and it’s actually pretty cool. They’ve come a long way since the first time I looked at it. So that’s awesome.
MM: I think they have some really cool tech out there. I mean, it’s a tough business. But, I mean, Crate was great, but I decided I really missed streaming and working on Beam specifically, full time, because I had only been doing it on the side to give Flink some exposure in the Google world, let’s say. So for the past two years I’ve been working on the Flink Runner and Beam Portability, which is this multi-language support that enables Python pipelines in Beam.
MM: That was fun, but it can also feel a bit like being in an ivory tower when you develop these technologies over so many years and don’t actually get to use them as an end user. So I worked a bit with Lyft folks through the mailing list and met Thomas, who is pretty much the Beam advocate at Lyft, I’d say. So it was a natural decision to switch the focus a bit towards the user side and eat your own dog food, that kind of thing. That was a really valuable experience, I think. I mean, also with Flink, we’ve seen things don’t really work until you run them at scale, and with an actual use case.
KG: This is where my head was at too. I was wondering, like the killer question that I wanted to ask was, okay, so you’ve been working for years and years and years on Flink, you’ve been working for years and years on Beam, and then you go use them in production and like, was there an epiphany where you went, “Oh my God, we’ve been thinking about this differently,” or where was your head at after you had sat down and used these things in production for a couple of months. Was it good, bad? Was your mindset different? Where was your head at after that experience?
MM: That’s a good question. It’s not like using it was a completely new experience, because I had been interacting on the mailing list. I knew people had problems, I knew stuff was not trivial to configure, you could run into all kinds of problems if you didn’t know what you were doing. But it’s just that now I was kind of responsible for it. I was the Beam guy, and if something didn’t work, I had to somehow come up with a solution, or tell people not to do things that I thought shouldn’t be done, but that weren’t clear to them, because it’s not obvious if you didn’t build the tools. So it was more like that.
MM: Then like you run into like production issues and like kind of hard to debug race conditions and all that stuff where you regret for some seconds that you made such design decisions or that you didn’t work more carefully or think about that in advance, but bugs are natural. I mean, Flink has them, Beam has them, and you just have to find them eventually and make sure they don’t happen again.
KG: Well, and that’s the coolest thing, right? Because that’s the most informed, possible position you can have on those projects because not only have you had a good lineage with them, but when you go to production, use them at scale, with a big use case that everybody … I think if everybody hasn’t listened to the videos and watched the presentations from Lyft, it’s fascinating how Lyft is using streaming technologies. And now that you’re sitting there using them with great use cases and business critical use cases, and now it’s the complete picture, now you get to see why you worked so hard in the early days to kind of think through these things and build them, and here you are, and you’re using them at scale. That’s got to feel pretty rewarding too?
MM: For sure it does. But it also made me think differently about the design decisions we made in some places, how much power we want to expose to the user, and how easy that stuff should be to use. I mean, SQL is a great case for simplifying things, and I’m more and more of the opinion that we should make things simpler rather than building new features and stuff. Yeah, I get more humble and think, maybe we should just focus on the important use cases and making things easier rather than building the next great feature.
LD: It’s interesting that you say that, that resonates, I think, really well with me. We had Marton Balassi on the podcast a couple of weeks ago, and he said something very similar, in that, “Do some features, do them really well. Then you can start adding in, but just make it easy to use. Don’t try and go too big, too fast.” I think that’s been something that we internally have talked about a lot as well. So it’s, not all companies have that and not all people have that mindset, but it’s interesting now that we’ve talked about that a few different times on the podcast.
KG: Yeah, that was one of the epiphanies we had with SQL. Obviously, our background is database engineering and SQL, so we kind of came at it from that angle. What was interesting when we were doing it early on and trying to get folks to … basically, we’ve had customers … The long and short of it is that customers came to us and said, “Apache Flink seems awesome, but wow, it’s daunting, and you know, we’re not hardcore Java developers.” We’d say, “Well, it’s not that hardcore. Don’t worry. Here’s a sample template, here’s how to use the API. Here, let us help you write your first transform. See, it’s not that hard.”
KG: Okay, and it wasn’t, for their first couple of use cases, but then they were in production and needed to add features. We even had one guy say, “I could write this in Java, but I’d rather use SQL because I don’t want to be the one person on call that’s going to get paged when this job goes down. I’d rather have everybody be able to pitch in and mutate this job over time and contribute to its code base.” I thought that was really, really mature. I think I’ve said that before, and I was blown away by that comment. That’s kind of how we tackled it too: we thought it’s got to be easy for folks, approachable, and overall that’s a good thing for the community, because more and more people are going to use Flink, more and more people are going to be exposed to it. I think the community, as a whole, is starting to think that way too, and I think that’s exciting.
MM: Yeah, I completely agree. That’s a great example. I think in the beginning it was kind of an advantage for Flink to be all over the place and implement a lot of features, because it gave the project a lot of traction and the ability to adapt quickly. Now the focus should be stability and ease of use. Yeah, it wouldn’t make sense to keep changing that rapidly, like Flink used to in the past.
KG: So give us a feeling at Lyft. What does the day to day look like for you? Like the ins and outs of Beam and Flink at Lyft, kind of what are we experiencing? What are the hard problems you’re solving and what makes you excited for tomorrow and the next day kind of thing?
MM: Yeah, that’s a good question. Part of my day is staying up to date with what happens in Flink and Beam, because, like we talked about, it’s moving fast, although it’s slowing down. So I check the mailing lists. Usually we have some discussions on the mailing list that I need to think about. The other part is of course checking my Slack, the Lyft Slack, and seeing what people have been up to there. I usually have some tickets assigned, both in the open source as well as in internal Jiras. I’m fortunate enough that I can do open source work at Lyft too, so that’s really nice. So I usually dabble between the two: I try to implement a feature or a bug fix in Beam, or push something out that I have tested internally at Lyft, and we contribute that upstream.
MM: Then, yeah, I help people answer questions about Beam or Flink. I debug pipelines, because, as you know, there are issues with checkpointing, where checkpointing suddenly takes too long. I was also involved in the Kubernetes migration. Historically, Lyft has been operating on a very interesting legacy stack, which was Salt based; recently they ported a lot of applications to Kubernetes. So we needed to figure out how to use Kubernetes with Flink. And especially in the context of Beam, there are more variables to consider, because Beam has its own sidecar container for the Python processes if you use Python. So, yeah, just thinking about that and making sure we communicate with the infrastructure team, who is sort of in charge of Kubernetes and the Kubernetes Operator, and then combining it so it works with Beam and Flink.
KG: Yeah. I was going to mention the Operator. It’s open source and relatively popular as well.
MM: Yeah. I mean, there are a bunch of them out there actually. I think Google has one and some other companies also have one. It’s sort of unfortunate that there are like these many solutions out there and Flink doesn’t come with like something built in or so. But I guess that’s just the way it developed because everyone has different requirements, I’d say.
KG: Yeah, these days operator sprawl is not uncommon. I think every technology stack has multiple operators and multiple different implementations for Kubernetes at this point.
MM: Yeah. You use Kubernetes as well, right?
KG: We do. Yeah, we do. It was funny, we were just talking about the love-hate relationship. I’ve now admitted more times than I care to that it has abstracted things in a way that makes debugging difficult and tricky. But at the same time, that abstraction layer has allowed it to be a very important and powerful API for scaling in our elasticity story. So it’s a love-hate. On one hand, it does work, but when things break, dammit, it sucks. So it’s been a tough one, but ultimately, yeah, I mean, it’s part of our stack now. I think it’s obviously won the war. More and more folks know it and want to work with it, frankly. If we want to hire going forward and we want someone to be part of building our stack, Kubernetes is something where we can actually tap into a pretty good talent pool.
MM: Maybe also, Kubernetes is still in that phase where it was originally designed for microservices, but now people are starting to build more of these … not so microservicey applications with it. So maybe there’s a case for a new API which makes it easier to manage the life cycle of the application, without having everything built into these custom operators. I don’t know, I’m not involved with Kubernetes so much, but it feels like there should be a new paradigm coming up in how you manage your Kubernetes application. Although these base APIs are probably not going to go away.
LD: You also are one of the first or very early in with Beam, which is not something that we had a chance to talk as much about on the podcast because we haven’t had somebody who’s been necessarily working with it. So I think that also would be really interesting to our listeners going from … and we can talk about all the in between as well, but going from Flink to Beam and what you like about Beam, all of the good stuff around the inception of that and how y’all are using it would be awesome to hear.
MM: For those who are not super familiar with what Beam is, I’d say it probably has three attributes that define it and differentiate it from other projects such as Flink. First of all, it’s a unified programming model for batch and streaming. If you know Flink, it still has two separate APIs, one for streaming, one for batch. There are ways around this, of course. Using SQL, for instance, you can sort of transparently use either batch or streaming, and ultimately maybe there will be a new API, but Beam always had that from the start. I think that’s really powerful, because suddenly you only have to write your logic once. Then when you execute, you can still say, “I want this to run in streaming or batch,” but you can also just say, “Yeah, run it, figure it out. Go to streaming mode if necessary, otherwise use batch.” I think that’s a pretty cool feature.
MM: The second attribute, I guess, that’s important to mention is that once you’ve written that pipeline, we call it a pipeline, it’s like a Flink job, you can execute it using different execution engines. So you can actually use Flink, you can use Spark, you can use Dataflow, and there are others too. That gives you a little bit more flexibility, which for some people is really a selling point, because they prefer managed services. So they maybe use Google Cloud, but they still don’t want to feel locked in, and want to be able to run it on their own cluster.
MM: Then the third argument, and I think that’s what I’m most excited about, is the multi-language support. Not only in terms of … Well, we have SQL, and Scala in Flink, and Scala is just a JVM-based language, so that works. SQL, of course, is very powerful and useful, but if you want to implement some more custom logic, it can be tricky. You can register your function and so on, but it gets more involved. So I think Python support, and Go, we also have Go, but let’s say Python is probably the biggest argument for Beam, and it’s really powerful because you have all these libraries like TensorFlow and NumPy that people use.
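The first two attributes Max describes, write the logic once and pick batch or streaming (and the engine) only at submission time, can be sketched with a tiny stand-in pipeline. Everything here is invented for illustration; real Beam uses `Pipeline`, `PTransform`, and pipeline options to select a runner, not this toy API.

```python
class ToyPipeline:
    """Minimal stand-in for a Beam-style pipeline: the user composes the
    transforms once; batch vs. streaming execution is picked only at run time."""

    def __init__(self):
        self.transforms = []

    def map(self, fn):
        self.transforms.append(fn)
        return self

    def run(self, source, mode="batch"):
        if mode == "batch":
            # bounded input: materialize everything, apply each transform stage-wise
            data = list(source)
            for fn in self.transforms:
                data = [fn(x) for x in data]
            return data
        elif mode == "streaming":
            # unbounded-style input: push each element through the whole chain
            # as it arrives
            out = []
            for x in source:
                for fn in self.transforms:
                    x = fn(x)
                out.append(x)
            return out
        raise ValueError(f"unknown mode: {mode}")

# the logic is written exactly once...
p = ToyPipeline().map(lambda x: x * 2).map(lambda x: x + 1)
# ...and both execution modes produce the same result
assert p.run(range(3), mode="batch") == p.run(range(3), mode="streaming") == [1, 3, 5]
```

Swapping the body of `run` is the toy analogue of swapping runners: the user-facing pipeline definition never changes, only how it is executed.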
MM: And at Lyft, historically, everything was built with Python. Everything. So when Jamie, he was also involved with Data Artisans, when he joined Lyft and built the Flink team there, that’s a couple of years ago, I mean, it was not easy for him to convince people to write Java applications. We talked about this already. It’s just challenging if you’re used to coding Python and if you think more problem oriented, and you’re not strictly a hardcore engineer-
KG: That’s a great way to put it. Means to the end kind of mentality versus constructing something that’s probably more well thought out, with a longer term mindset, I would say.
MM: For Flink folks, Beam is interesting because pretty much, it’s just like running a normal Flink job on top of Flink, right? You just include the Beam JAR and you write in a different API, which I guess takes a little bit of time to get used to, but you have the advantage of having Python available, possibly other languages in the future.
KG: So who are the folks that are typically writing those jobs? Is it folks running machine learning pipelines, or is it data scientists? Tell us about who the folks are that are ultimately making things and transforms with Beam at Lyft, that would be super interesting.
MM: It’s called the Beam project, and it kind of originates from the compute team and the pricing team joining forces, because they realized they needed something more real time. The reason was that to compute prices in real time, they needed a different solution than dispatch jobs that would aggregate all the ride data from the app and from drivers, batch it into one-hour chunks, and then always try to execute a model that was pre-computed but, in the end, fit with old data, so always lagging behind by one hour or several hours. So they decided they needed a better solution there.
MM: So what we built is basically two pipelines. The first does the feature generation, and it’s all written in Python. It gets all the input, which is stored in Kinesis, well, we’re trying to move away to Kafka, but the source is Kinesis. And we do some magic to generate the features for our model. To be honest, I’m not very involved in this. I understand the basics of it, but a lot of knowledge and logic went into it historically, and many libraries. Then the features get put into a Kafka topic. We used to use Kinesis there, but now we use Kafka, which is, I guess, an improvement.
MM: Then there’s a second pipeline, which reads that data. It fetches the model, which is still pre-computed by the pricing team, not online training, and runs the features against the model. So this is the model execution. This was kind of the first project for Beam. We’ve seen that this works very well, and we have drastically reduced the time it takes to input the features and get a real-time price, which is now usually delayed by just 20 milliseconds. That’s a really good improvement from possibly hours or minutes before. Yeah.
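The two-pipeline shape Max describes, feature generation feeding a topic, model execution consuming it, can be sketched in a few lines. This is purely illustrative: the field names, the demand formula, and the pricing model are all made up, a `deque` stands in for the Kafka topic, and the dict stands in for the pre-computed model.

```python
from collections import deque

# Stand-in for the Kafka topic that connects the two pipelines.
features_topic = deque()

def feature_generation(raw_events):
    """Pipeline 1 (toy): turn raw ride events into model features and
    publish them to the features topic."""
    for event in raw_events:
        feature = {
            "region": event["region"],
            # toy feature: riders per available driver
            "demand": event["riders"] / max(event["drivers"], 1),
        }
        features_topic.append(feature)

def model_execution(model):
    """Pipeline 2 (toy): consume features, run the pre-computed model,
    emit real-time prices."""
    prices = []
    while features_topic:
        f = features_topic.popleft()
        prices.append(round(model["base"] * (1 + model["surge_weight"] * f["demand"]), 2))
    return prices

raw = [{"region": "sf", "riders": 30, "drivers": 10},
       {"region": "sf", "riders": 5, "drivers": 10}]
model = {"base": 10.0, "surge_weight": 0.5}  # pre-computed offline, not trained online

feature_generation(raw)
print(model_execution(model))  # [25.0, 12.5]
```

The key structural point survives even in the toy: the model itself is produced offline, while both the features and the price computation flow through the streaming path, which is what cuts the latency from hours to milliseconds.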
KG: Yeah, that’s a fascinating use case. The ability to price in real time and make your company more competitive relative to the landscape of other folks in the space, or more profitable, in real time, is the benchmark for killer streaming use cases.
MM: For sure. Yeah. I mean, it’s definitely the trend of the time. If you have a ride sharing app, which shows you a real-time price, and you’re not into real time, then you’re doing something wrong. The thing is, we are trying to offer Beam, and the Beam pipeline specifically, as a service at Lyft. We’re working on it. It hasn’t been rolled out to all the teams yet, but we’re working on it because there’s now high demand for replicating these two pipelines, but also for creating new pipelines based on Beam.
KG: There’s the providing-data-as-a-service trend, especially for larger companies. Obviously there was Keystone, and Uber had their own, and many different internal projects. Now it’s very common for companies to say, “Hey, we’ve got to build some sort of self-service mechanism here, because we can’t keep up with the amount of data endpoints and demand from the business for various computations or models, or whatever that might be, even just BI in a lot of cases.” That’s interesting.
LD: We’ve talked a little bit about this in the context of what you’re doing on a day to day basis with Lyft, but just overall, what are you excited about in the future for Flink and Beam, or even just streaming in general? Like, what is it that makes you go, “Okay, yeah, this is still my calling. This is what I want to do because I can’t wait to see whatever it happens to be happen.”
MM: Yeah. That’s a good question.
LD: I always stump people with that one.
MM: Yeah. I think that’s a really tricky question because that goes like so deep, like kind of almost philosophically, like behind what we’re doing, because we’re doing that for a purpose, right? Not just because we’re busy. So I think there are a few things, like personally, I would like to see more consolidated efforts because I’m part of both Flink and Beam. I would like to see more collaboration, like for instance, there’s a Beam SQL and Flink SQL. I think they started more or less at the same time, maybe Flink a little bit earlier, and actually, they both used Apache Calcite under the hood.
KG: I was just going to mention that.
MM: Yeah. But they don’t actually share any code beyond both using Apache Calcite. On the other side, we’re seeing some collaborations already. For example, there’s now the Flink Python Table API, which is built on top of the Beam runtime for the UDFs you can write in Python. So I think it was cool to see that not only is Beam consuming Flink, but Flink would also take something from Beam and use it. That makes me excited.
MM: There was also the plan to have a general purpose Flink Python API. What does that mean? The Flink Python API would actually compile to the Beam Protobuf format, which is a language-agnostic format. If we had that, then the Flink Python API could run with the Beam runners, it could run on Dataflow, but it could still remain Flink specific and have Flink-specific optimizations. I think that’s a really cool plan. I’m not sure how realistic it is. I mean, it’s definitely doable, but I don’t know if politics might also play a role there.
MM: Yeah. But the projects can definitely learn from each other. I think what Beam did with the Protobuf language-agnostic format is a really cool thing. I’m thinking probably Flink will also head in that direction eventually, because as of now everything is still Java based, the job graph. I think it would make sense to have a language-agnostic format, and if Beam could use that, I mean, that would be great to see for me.
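The idea behind a language-agnostic pipeline format can be shown with a toy: describe the pipeline as plain data that any runner, in any language, could interpret. Beam's portability framework uses Protobuf for this; the sketch below uses JSON to stay dependency-free, and the spec shape, transform names, and the `eval`-based interpreter are all invented for illustration (a real runner compiles the spec to its own execution graph).

```python
import json

# The pipeline as *data*, not as Java (or Python) objects. In Beam this is a
# Protobuf document; JSON keeps the sketch self-contained.
pipeline_spec = json.dumps({
    "transforms": [
        {"op": "map", "expr": "x * 2"},
        {"op": "filter", "expr": "x > 2"},
    ]
})

def toy_runner(spec, source):
    """Interpret the language-agnostic spec. Any runner written in any
    language could do the same, which is the whole point of the format."""
    data = list(source)
    for t in json.loads(spec)["transforms"]:
        # bind the expression per transform (eval is fine for a toy)
        fn = lambda x, e=t["expr"]: eval(e, {"x": x})
        if t["op"] == "map":
            data = [fn(x) for x in data]
        elif t["op"] == "filter":
            data = [x for x in data if fn(x)]
    return data

print(toy_runner(pipeline_spec, range(4)))  # [4, 6]
```

Because the spec is just a document, a second runner, written in Java or Go, could execute the identical pipeline without sharing any code with this one, which is what decouples the SDK language from the execution engine.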
KG: That’s cool.
MM: Another thing I was really excited about was cross-language pipelines in Beam. Cross-language means that you can mix different languages, so you can actually combine … Let’s say you have a Python pipeline, but you have a connector in Java. You could actually do that with Beam. That’s something we worked on in the past year, and it’s working, but we still have some usability and deployment issues to fully solve. So I’m really excited to get this complete. I think it’s really exciting because it opens up a lot of new possibilities, because we have this problem that we have all this code in Java, but we can’t really use it easily from within Python. So I think that’s a cool feature.
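A toy version of the cross-language idea: one step of a Python pipeline is only referenced by name, and its implementation lives somewhere else. In real Beam that "somewhere else" is, for example, a Java SDK harness running the Java connector in its own process; here a plain dict of functions stands in, and the URN string and all names are invented for the example.

```python
# Registry standing in for "transforms implemented in another language".
# In real Beam a cross-language step is identified by a URN and executed by a
# separate (e.g. Java) harness that the runner talks to over gRPC.
EXTERNAL_TRANSFORMS = {
    "beam:transform:toy_kafka_read": lambda topic: [f"{topic}-msg-{i}" for i in range(3)],
}

def run_cross_language(topic):
    # step 1: the "external" read, resolved by URN and executed by the
    # (simulated) foreign-language harness
    records = EXTERNAL_TRANSFORMS["beam:transform:toy_kafka_read"](topic)
    # step 2: a native Python transform applied to its output
    return [r.upper() for r in records]

print(run_cross_language("rides"))  # ['RIDES-MSG-0', 'RIDES-MSG-1', 'RIDES-MSG-2']
```

The usability problems Max mentions live exactly at that boundary: the Python side only holds a name for the transform, so deployment has to guarantee that a harness able to execute that name is actually running alongside the pipeline.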
KG: Gotcha. Also, I just want to mention, there is a third member of our team on the podcast this morning. This is Maggie.
LD: I know, I’m so sorry. Hopefully the listeners won’t hear it, but my dog is crying outside the door that I have shut so that you can’t hear her, but she’s there.
MM: That’s the scratching. I heard something.
KG: I didn’t want the audience to think it was me when we were talking about a particular topic, and I was back there crying.
LD: I’m going to attempt to edit that out in post production because my dog has decided to lay outside the door to the room that I am in and cry because she wants in. So yes, there’s a third member of the podcast here. She makes an appearance in all of our videos. It’s just a thing. Coming back to what you were saying: hey, I love asking people that question, because everybody has a slightly different viewpoint, maybe, of what they’re excited about and what they want to see, but it’s all really aspirational and it’s all really great.
LD: I mean, nobody listening to this podcast is going to disagree with me that streaming data is obviously here to stay. Things are just going to get bigger. The apps that you need to build are going to become more and more important and hearing from a bunch of different viewpoints, which is what we want to do in this podcast about what they want to see and are excited to see moving forward based on their personal experiences, whatever they may be, is something that I think as kind of the storyteller is really exciting to hear. So I appreciate it.
KG: Yeah, I agree.
MM: Yeah. Thanks for doing that. I mean, I think podcasts are really popular these days, but I’m still like … I think to do it really well, and to invite a diverse set of people and have different topics, I think that’s … I mean, the community benefits from that exchange. That’s a great thing to do.
KG: Yeah, for sure.
LD: Awesome. Yeah, that’s what we want. There’s no shame. We’ll have pretty much anybody on here just because every personality and every viewpoint on this is going to be slightly different. There’s going to be underlying themes, and the community, to your point, benefits from all of it. So really excited and Max, this was great. Thank you so much for joining us. Hopefully we’ll have you again on the podcast sometime to talk a little bit more about what you’re doing, any updates that you have and kind of the growth in the community. Thank you so much.
KG: Yeah, for sure. It was awesome.
MM: Thanks for having me.
LD: Now, not only did you get a great insider’s look into both Flink and Beam, but you also got to meet the true star of the Eventador team, Maggie. In all seriousness, big thanks to Max for being on the podcast. Hopefully we’ve sparked your interest in learning more about using Beam on Flink. If we have, there are some great talks with Max on YouTube that I highly suggest you check out.
LD: As always, if you have any questions for Kenny and me, comments, or even suggestions for the podcast, you can reach out to us via email email@example.com or on Twitter @EventadorLabs. Or if you’re ready to get started with Flink or start benefiting from Continuous SQL on Flink, you can get started right now with a free trial of the Eventador platform at eventador.cloud/register. Happy streaming.