
simplyblock and Kubernetes

Simplyblock provides high-IOPS and low-latency Kubernetes persistent volumes for your demanding database and other stateful workloads.

Kubernetes for AI, GKE, and serverless with Abdellfetah Sghiouar from Google (interview)

Updated: Jul 16

This interview is part of the Simplyblock Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.


In this installment, we're talking to Abdel Sghiouar from Google, a company that needs no introduction. Abdel is a Developer Advocate for GKE (Google Kubernetes Engine) and talks to us about Kubernetes, serverless platforms, and the future of containerization. See the key learnings below on Kubernetes for AI, where to find the best Kubernetes tutorials for beginners, and how simplyblock can speed up your AI/ML workloads on Kubernetes. The full interview transcript follows at the end.


Key Learnings


Can Kubernetes be used for AI / ML?


Kubernetes can be used for AI/ML. It offers several benefits for managing AI/ML workloads, including:

  • Scalability: Kubernetes can easily scale up or down to handle the varying computational demands of AI/ML tasks.

  • Resource Management: It efficiently manages resources such as CPU, memory, and GPUs, ensuring optimal utilization.

  • Portability: AI/ML models can be deployed across different environments without modification, thanks to Kubernetes' container orchestration capabilities.

  • Automation: Kubernetes automates deployment, scaling, and management of containerized applications, which is beneficial for continuous integration and continuous deployment (CI/CD) pipelines in AI/ML projects.

  • Flexibility: It supports various AI/ML frameworks and tools, allowing for a versatile development and deployment ecosystem.

  • Reproducibility: Containers ensure consistent environments for development, testing, and production, enhancing reproducibility of AI/ML experiments.


These features make Kubernetes a powerful platform for deploying, scaling, and managing AI/ML applications. Many AI companies, including those running ChatGPT, utilize Kubernetes because it allows for rapid scaling and performance management of large models and workloads.
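To make the resource-management point concrete, Kubernetes lets you request accelerators declaratively in a Pod spec, and the scheduler places the Pod on a node with free GPUs. The sketch below is illustrative only: the Pod name, container image, and resource figures are placeholders, and the `nvidia.com/gpu` resource assumes the NVIDIA device plugin is installed on the cluster.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: training-job                # hypothetical name
spec:
  containers:
  - name: trainer
    image: registry.example.com/ml/trainer:latest  # placeholder image
    resources:
      requests:
        cpu: "4"                    # CPU and memory reserved for the container
        memory: 16Gi
      limits:
        nvidia.com/gpu: 1           # exposed by the NVIDIA device plugin
  restartPolicy: Never              # batch-style training job, not a service
```

Combined with the cluster autoscaler, requests like these are what allow Kubernetes to scale GPU-backed AI/ML workloads up and down with demand.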


Where to find the best Kubernetes tutorials for beginners?


There are several excellent resources where beginners can find comprehensive Kubernetes tutorials, covering a range of learning styles from video tutorials and in-depth articles to interactive labs and official documentation.

  • Kubernetes.io Documentation: The official Kubernetes documentation provides a wealth of information, including beginner-friendly tutorials, concepts, and guides. The "Getting Started" section is particularly useful.

  • KubeAcademy by VMware: Offers free, high-quality video courses on Kubernetes basics, cluster operations, and application management.

  • Udemy: Offers a variety of Kubernetes courses, often including hands-on labs and real-world examples. Popular courses include "Kubernetes for the Absolute Beginners" and "Kubernetes Mastery."

  • Coursera: Partnered with top universities and organizations to offer courses on Kubernetes. The "Architecting with Google Kubernetes Engine" specialization is a notable example.

  • edX: Provides courses from institutions like the Linux Foundation and Red Hat. The "Introduction to Kubernetes" course by the Linux Foundation is a good starting point.

  • YouTube: There are many YouTube channels that offer high-quality tutorials on Kubernetes. Channels like "TechWorld with Nana," "Kunal Kushwaha," and "DigitalOcean" provide beginner-friendly content.

  • Play with Kubernetes: An interactive learning environment provided by Docker, offering hands-on tutorials to practice Kubernetes commands and concepts.

  • Medium and Dev.to Articles: Both platforms have numerous articles and tutorials written by the community. Searching for "Kubernetes beginner tutorial" can yield many helpful results.


How can Simplyblock speed up your AI / ML workload on Kubernetes?


Simplyblock offers various features and tools that can significantly enhance the performance and efficiency of AI/ML workloads on Kubernetes. 

  • High performance, low latency: simplyblock provides a fully cloud-native storage solution, designed for predictable low-latency workloads such as AI/ML tasks.

  • Scalability: Kubernetes inherently supports scaling, and simplyblock enhances this capability by:

      • Storage Scalability: simplyblock is designed to scale out seamlessly with the growing amount of data in AI/ML use cases.

      • Automatic Rebalancing: When nodes or disks are added to a simplyblock storage cluster, stored data is automatically rebalanced in the background for the highest read and write performance and the lowest latency, as close to a local disk as possible.

  • Improved Data Management: AI/ML workloads often involve large datasets. simplyblock improves data handling by:

      • Data Locality: Ensuring that data is processed close to where it is stored, reducing latency and improving performance.

      • Persistent Storage Solutions: Providing robust, high-performance storage that can handle the I/O demands of AI/ML workloads.

  • Monitoring and Optimization: Effective monitoring and optimization tools provided by simplyblock help maintain performance and efficiency:

      • Performance Monitoring: Offering real-time monitoring of storage usage and performance to identify and mitigate issues quickly.

      • Cost Optimization: Offering an easy way to reduce the cost of cloud storage without sacrificing performance, latency, or capacity, thereby reducing the overall cost of running AI/ML workloads on Kubernetes.

  • Enhanced Security: simplyblock ensures that AI/ML workloads on Kubernetes are secure by:

      • Secure Data Handling: Implementing encryption and secure data transmission between the AI/ML workload and the storage cluster.

      • Access Controls: Providing granular access controls to manage who can mount, access, and modify AI/ML workloads.


By leveraging the features provided by simplyblock, organizations can significantly speed up their AI/ML workloads on Kubernetes. These enhancements lead to improved performance, better resource utilization, scalability, and overall efficiency, ultimately resulting in faster model development and deployment cycles.
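In practice, a high-performance storage backend like this is consumed in Kubernetes through a StorageClass and a PersistentVolumeClaim. The sketch below is hypothetical: the class name, provisioner string, and parameters are illustrative placeholders, not simplyblock's actual CSI driver configuration, which you would take from the vendor's documentation.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-nvme                  # hypothetical class name
provisioner: csi.example.com       # placeholder; substitute your storage vendor's CSI driver
parameters:
  encryption: "true"               # illustrative knob only
allowVolumeExpansion: true         # lets claims grow as datasets grow
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: training-data
spec:
  accessModes: ["ReadWriteOnce"]   # single-node read/write, typical for training data
  storageClassName: fast-nvme
  resources:
    requests:
      storage: 500Gi
```

A Pod that mounts the `training-data` claim then gets dynamically provisioned, low-latency block storage without needing to know which nodes or disks back it.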


Transcript


Chris Engelbert: Hello everyone, today with me, another very good friend of mine. We have known each other for quite a few years. Abdel, and I don’t try to pronounce your last name. I’m really sorry [laughs]. Maybe you can introduce yourself real quick, like, who are you, what do you do, and where you come from.


Abdel Sghiouar: Sure. Abdel is fine. So yes, Abdel Sghiouar if you want. That's how typically people pronounce my last name. I'm based out of Stockholm, and I work for Google. I've been at the company for 10 years. I do Kubernetes stuff. I do a Kubernetes podcast, and I talk about Kubernetes and containers. I arguably do more talks about why you probably do not need any of these technologies than why you actually need them. Originally from Morocco, that's the back story. Have you made it to Devoxx Morocco yet?


Chris Engelbert: I was supposed to go this year, but I caught COVID the week before.


Abdel Sghiouar: Oh, yes, I remember.


Chris Engelbert: Yeah, that was really unfortunate. I was so looking forward to that. I was like, no.


Abdel Sghiouar: I keep promoting the conference everywhere I go, and then I forget who made it or who didn't. Okay, well, maybe 2024 is when it happens.


Chris Engelbert: Yes, next time. I promise. Like last time, I promised last time, and I still failed. All right, you said you work for Google. I think everyone knows Google, but your role is slightly different. You're not working on a specific... Well, you work on a specific product, but as you said, you're trying to tell people why not to use it.


Abdel Sghiouar: Which I don't think my manager would be very happy if he knows that that's what I do.


Chris Engelbert: All right, tell us about it so I can send it to him.


Abdel Sghiouar: So, yes, I work on GKE. GKE is Google Kubernetes Engine, which is the managed version of Kubernetes that we do on Google Cloud. That's technically the product I work on. I think that we were having this conversation. I avoid being too salesy, so I don't really talk about GKE itself unless there are specific circumstances. And of course, Kubernetes itself, as a platform, it's more specific to an audience. You cannot just go to random front-end conferences and start talking about Kubernetes. No one would understand whatever you're saying. But, yeah, GKE is my main thing. I do a lot of networking and security within GKE. I actually started this year looking a little bit into storage and provisioning because, surprise, surprise, AI is here. And people need it. And when you are running machine learning workloads, really, on anything, you need storage because these models are super huge. And storage performance is important. So, yeah, so that's what I do.


Chris Engelbert: I think predictability is also important, right? It's important for the data model. It's not like you want it sometimes fast and you want it sometimes slow. You want it very fast. You want it to have the same speed consistently.


Abdel Sghiouar: Yeah, that's true. It's actually interesting. I mean, besides the product itself, it's interesting. A lot of these AI companies that came to the market in the last, I'd say, year, year and a half, they basically just use Kubernetes. I mean, ChatGPT is trained on Kubernetes, right? It sounds like, oh, yeah, I just need nodes with GPU, Kubernetes, boom. And to be honest, it totally makes sense, right? Because you need to have a lot of compute power. You have to scale up and down really fast, depending on how many users you currently have, right? So I think anything like that with Kubernetes makes perfect sense.


Chris Engelbert: Exactly. So you mentioned you also have a podcast. Tell me about it. It's not like I have listeners that you don't have, but who knows?


Abdel Sghiouar: Yeah, it's the Kubernetes podcast by Google. It's a show that's been running for almost six years. I think we're close to six years. We've been doing it for two years, me and my co-host. So me and Kaslin Fields from Seattle. It's a twice a month show. We basically invite people from different parts of the cloud-native world, I would say. So we talk to maintainers. We talk to community members. We are looking into bringing some other interesting guests. One of my personal interests is basically the intersection of technology and science. And I don't know if your audience would know this, but at KubeCon in Barcelona in 2019, one of the keynotes that was actually done was done by people from CERN who came on stage and showed how they could replicate the Higgs boson experiments on Kubernetes. So that was before I was a host. But we are exploring the idea now of finding organizations that their job is not really technology. They're doing science, but they're using Kubernetes to enable science. And we're talking to them. So there's quite a lot of interesting things happening. That's how we are going to be releasing very soon. But yeah, so it's a one-hour conversation. We try to do some news. We try to figure out who bought who, who acquired who, who removed which version, who changed licenses in the last two weeks since the last episode. And then we just interviewed the guests. So yeah, that's the show essentially.


Chris Engelbert: Right. And correct me if I'm wrong, but I think you're also writing the "What Happened in Kubernetes" newsletter this week.


Abdel Sghiouar: This week in GKE. Yes. Which is not only about GKE. So I have a newsletter on LinkedIn called "This Week in GKE", which covers a couple of things that happened in GKE in this week, but also covers other cloud stuff.


Chris Engelbert: All right. Fair enough. So you're allowed to talk about that.


Abdel Sghiouar: Yeah, yeah. It's actually interesting. It started as a conversation on Twitter. Somebody put a tweet that said this other cloud provider without mentioning a name introduced a feature into their managed Kubernetes. And I replied saying, we had this on GKE for the last two years. It's a feature called image streaming. It's basically a feature to speed up the image pull. When you are trying to start a pod, the image has to be pulled very quickly. And then the person replied to me saying, well, you're not doing a very good job talking about it. I was like, well, challenge accepted.


Chris Engelbert: Fair enough. So I see you're not talking about other cloud providers. You're not naming people or naming things. It's like the person you're not talking about. What is it? Not Voldemort. Is it Voldemort? I'm really bad at Harry Potter stuff.


Abdel Sghiouar: I'm really bad, too. Maybe I am as bad as you are. But yes.


Chris Engelbert: All right. Let's dive a little bit deeper. I mean, it's a cloud podcast. So when you build an application, how would you start? Where is the best start? Obviously, it's GKE. But--


Abdel Sghiouar: Well, I mean, actually, arguably, no. So this is a very quick question. I mean, specifically, if you're using any cloud provider, going straight into Kubernetes is probably going to be frustrating because you will have to learn a bunch of things. And what has been happening through the last couple of years is that a lot of these cloud providers are just offering you a CAS, a container as a service tool, Fargate on AWS. I think it's called ACS, Azure Container Services on Azure, and Cloud Run on GCP. So you can just basically write code, put it inside the container, and then ship it to us. And then we will give you an URL and a certificate. And we will scale it up and down for you. If you are just writing an app that you need to answer a web request and scale up and down on demand, you don't need Kubernetes for this. Where things start to be interesting or where people start looking into using Kubernetes, specifically-- and of course, we're talking here about Google Cloud, so GKE, is when you start having or start requiring things that this simple container service platform doesn't give you. And since we're in the age of AI, we can talk about GPUs. So if you need a GPU, the promise of a serverless platform is very fast scaling. Very fast scaling and GPUs don't really go hand in hand. Just bringing up a Linux node and installing Nvidia drivers to have a GPU ready, that takes 15 minutes. I don't think anybody will be able to sell a serverless platform that scales in 15 minutes.  [laughs]


Chris Engelbert: It will be complicated, I guess.


Abdel Sghiouar: It will be complicated, yes. That's where people go then into Kubernetes, when you need those really specific kinds of configuration, and more fine-tuning knobs that you can turn on and off and experiment to try things out. This is what I like to call the happy path. The happy path is you start with something simple. And as you do in this case, it gets more complicated. You move to something more complex. Of course, that's not how it always works. And people usually just go head first, dive into GKE.


Chris Engelbert: I'm as big as Google. I need Kubernetes right now.


Abdel Sghiouar: Sure. Knock yourself out, please. Actually, managed Kubernetes makes more money than container service, technically. So whatever.


Chris Engelbert: So the container service, just for people that may not know, I think underneath is basically also something like Kubernetes. It's just like you'll never see that. It's operated by the cloud provider, whatever it is. And you basically just give them an image, run it for me.


Abdel Sghiouar: Exactly. If people want to go dive into how these things are built, there is a project called Knative that was also released by the Kubernetes community a few years ago. And that's typically what people use to be able to give you this serverless experience in a container format. So it's Kubernetes with Knative, but everything is managed by the cloud provider. And as you said, we expose just the interface that allows you to say container in, container out.


Chris Engelbert: Fair. So about a decade ago people started really going into first VMs and doing all of that. Then we thought, oh, VMs. So it all started with physical servers are s**t. It's bad. And it's so heavy. So let's use virtual machines. They're more lightweight. And then we figure out, oh, virtual machines are still very heavy. So let's do containers. And a couple of years ago, companies started really buying into this whole container thing. I think it basically was when Kubernetes got big. I don't remember the original Google technology, Kubernetes was basically built after Borg was started.


Abdel Sghiouar: Borg, yes.


Chris Engelbert: Right. So Borg is the piece, it was probably borked. Anyway, we saw this big uptake in migrations to the cloud, and specifically container platforms. Do you think it has slowed down? Do you think it's still on the rise? Is it pretty much steady? How do you see that right now?


Abdel Sghiouar: It's a very good question. I think that there are multiple ways you can answer that question. I think that people moving to containers is something that is probably still happening. I think that what's probably happening is that we don't talk about it as much. Because-- so the way I like to describe this is because Kubernetes as a technology is 10 years old, it's going to be 10 years old in June, by the way. So June, I think, 5th or something. June 5, 2014 was the first pull request that was pushed, that pushed the first version of Kubernetes. It's going to be a commodity. It's becoming a commodity. It's becoming something that people don't even have to think about. And even cloud providers are making it such a way that they give you the experience of Kubernetes, which is essentially the API. And the API of Kubernetes itself is pretty cool. It's a really nice way of expressing intent. As a developer, you just say, this is how my application looks like. Run it for me. I don't care. And so on the other hand, also this is-- also interesting is a lot of programming language frameworks started building into the framework ways of going from code to containers without Docker files. And you are a Java developer, so you know what I'm talking about. Like a Jib, you can just import the Jib plug-in in Maven. And then you just run your Maven, and then blah, you have a container. So you don't have to think about Docker files. You don't really have to worry about them too much. You don't even have to learn them. And so I think that the conversation is more now about cost optimization, optimizing, bin packing, rather than the simple, oh, I want to move from a VM to a container. So the conversation shifts somewhere else because the technology itself is becoming more mainstream, I guess.


Chris Engelbert: So a couple of years ago, people asked me about Docker, just because you mentioned Docker. And they asked me what I think about Docker. And I said, well, it's the best tool we have right now. I hope it's not going to stick. That was probably a very mean way of saying, I think the technology, the idea of containerization and the container images is good. But I don't think Docker is the best way. And you said a lot of tools actually started building their own kinds of interface. They all still use Docker or any of the other imaging tools underneath, but they're trying to hide all of that from you.


Abdel Sghiouar: Yeah. And you don't even need Docker in this case, right? I think just very quickly, I'm going to do a shameless plug here. We have an episode on the podcast where we interviewed somebody who is one of the core maintainers of ContainerD. And the episode was not about ContainerD. It was really about the history of containers. And I think it's very important to go listen to the episode because we talked about the evolution from Docker initially to the Open Container Initiative, the OCI, which is actually a standardization part, to ContainerD, to all the container runtimes that exist on the market today. And I think through that history, you will be able to understand what you're exactly talking about. We don't need Docker anymore. You can build containers without even touching Docker. Because I don't really like Docker personally.


Chris Engelbert: I think we're not the only ones, to be honest. Anyway, what I wanted to direct my question to is that you also said you have the things like Fargate or all of those serverless technologies that underneath use Kubernetes. But it's hidden from the user. Is that the same thing?


Abdel Sghiouar: I mean, yes, in the sense that yes, because Kubernetes, as I said, is becoming a commodity that people shouldn't actually-- people would probably be shocked to hear me say this. I think Kubernetes should probably have never gotten out like this. I mean, the fact that it became a super popular tool is a good thing, because it attracted a lot of interest and a lot of investments. I do not think it's something that people should learn. But it's a platform. It's something you can build on top of. I mean, you need to run an application, go run it on top of a container as a service. Do not learn Kubernetes. You don't have to, right? Like, I can see how we put it once in a tweet, which is basically, Kubernetes is a platform to build platforms. That's what it is.


Chris Engelbert: That makes sense. You don't even need to understand how it works. And I think that makes perfect sense. From a user's perspective, the whole idea of Kubernetes was to abstract away whatever you run on. But now, there's so many APIs in Kubernetes that abstract away Kubernetes that it's basically possible to do whatever. And I think Microsoft did a really interesting implementation on top of Hyper-V, which uses micro VMs, whatever they call those things.


Abdel Sghiouar: Oh, yes, yes, yes, yes, yes, yes.


Chris Engelbert: The shared kernel kind of stuff, which is kind of the same idea as a container.


Abdel Sghiouar: Yeah, I think it's based on Kata Containers. I know what you're talking about.


Chris Engelbert: But it's interesting, because they still put the Kubernetes APIs on top, or the Kubernetes implementation on top, making it look exactly like a Linux container, which is really cool. Anyway, what do you think is or will be the next big thing for the cloud? What is the trend that is upcoming? You already mentioned AI, so that is out.


Abdel Sghiouar: That's a very big one, right? It's actually funny, because I'm in Berlin this week. I am here for a conference, and we were chatting with some of our colleagues. And the joke I was making was next time I go on stage to a big conference, if there are people talking about AI at the same conference, I will go on stage and go, "OK, people talked about AI. Now let's talk about the things that actually matter. Let's talk about the thing that people are using and making money from. Let's stop wishful thinking, right?"  I think Kubernetes for AI is big. That's going to be around. AI is not going to disappear. It's going to be big. I think we're in the phase where we're discovering what people can do with it. So I think it's a super exciting time to be alive, I think, in my opinion. There's like a shift in our field that lots of people don't get to experience. I think the last time such a shift happened in our field was people moving from mainframes to pizza box servers. So we're living through some interesting times. So anyway, I think that that's what's going to happen. So security remains a big problem across the board for everything. Access, security, management, identity, software security. You're a developer. You know what I'm talking about. People pulling random dependencies from the internet without knowing where they're coming from. People pulling containers from Docker Hub without knowing who built them or how they were built. Zero ways of establishing trust, like all that stuff. So that's going to remain a problem, I would say, but it's going to remain a theme that we're going to hear about over and over again. And we have to solve this eventually. I think the other thing would be just basically cost saving, because we live in an interesting world where everybody cares about cost saving. So cost optimization, bin packing, making sure you get the most out of your buck that you're paying to your cloud provider. 
And I think that the cloud native ecosystem enabled a lot of people to go do some super niche solutions. I think we're going to get to a stage now where all these companies doing super niche solutions will be filtered out in a way that only those that have really, really interesting things that solve real problems, not made up problems, will remain on the markets.


Chris Engelbert: That makes sense. Only the companies that really have a solution that solves something will stay. And I think that ethics will also play a big part in the whole AI idea. Well, in the next decade of AI, I think we need to be really careful what we do. The technology makes big steps, but we also see all of the downsides already. The main industry that is always on the forefront of every new technology already is there, and they're misusing it for a lot of really stupid things. But it is what it is. Anyway, because we're already running out of time. 20 Minutes is so super short. Your favorite stack right now?


Abdel Sghiouar: Yeah, you asked me that question before. Still unchanged. Go, as a backend technology, I don't do a lot of front-end, so I don't have a favorite stack in that space. Mac for development, Visual Studio Code for coding. I started dabbling with IntelliJ, but recently I kind of like it, actually. Because I'm not a Java developer, I never had a need for it, but I'm just experimenting. And so Go for the backend. I think it's just backend. I only do backend, so Go.


Chris Engelbert: That makes sense. Go and I have a love-hate relationship. I wouldn't say Go is a perfect language, but it's super efficient for microservices. When you write microservices, all of the integration of Let's Encrypt or the ACME protocol, all that kind of stuff, it's literally you just dump it down and it works. And that was a first for me, coming from the Java world. A lot of people claim that Java is very verbose. I don't think so. I think Java was always meant to be more readable than writeable, which is, from my perspective, a good thing. And I sometimes think Go did some things, at least, wrong. But it's much more complicated, because Java is coming from a very different direction. If you want to write something really small, and you hinted at the frameworks like Quarkus and stuff, Go just has all that. They built it with the idea that the standard library should be pretty much everything you need for a microservice.


Abdel Sghiouar: Exactly.


Chris Engelbert: All right. We're recording that the week before KubeCon. KubeCon Europe, will I see you next week?


Abdel Sghiouar: Yes, I'm going to be at KubeCon. I'm going to speak at Cloud Native Rejekts on the weekend, and I'll be at KubeCon the whole week.


Chris Engelbert: All right. So if that comes out next Friday, I think, and you hear that, and you're still at KubeCon, come see us.


Abdel Sghiouar: Yes. I'm going to be at the booth a lot of times, but you will be able to see me. I am a tall, brown person with curly hair. So I don't know how many people like me will be there, but I speak very loud. So you'll be able to both hear me and see me.


Chris Engelbert: That's fair. All right. Thank you, Abdel. Thank you for being here. I really appreciated the session. It was a good chat. I love that. Thank you very much.


Abdel Sghiouar: Thanks for having me, Chris.



Key Takeaways


In this episode of Simplyblock's Cloud Commute Podcast, host Chris Engelbert welcomes Abdellfetah Sghiouar from Google, who talks about his community work for Kubernetes, his Kubernetes podcast, relevant conferences, as well as his role at Google and insights into future cloud trends with the advent of AI.


  • Abdel Sghiouar is a Google Kubernetes expert with 10 years of experience at the company. He also hosts a podcast where he talks about Kubernetes and containers.


  • Abdel focuses on networking, security, and now storage within GKE (Google Kubernetes Engine), especially for AI workloads. He emphasizes the importance of predictable latency and high-performance storage, such as simplyblock, for AI, rendering Kubernetes essential for AI as it provides scalable compute power.


  • His podcast has been running for almost 6 years, featuring guests from the cloud-native ecosystem, and covers cloud-native technologies and industry news. He also writes a LinkedIn newsletter, ‘This Week in GKE,’ covering GKE updates and other cloud topics.


  • Abdel highlights the shift from physical servers to VMs and now to containers, and discusses the simplification of container usage and the standardization through projects like Knative.


  • He is of the opinion that Kubernetes has become a commodity and explains that it should serve as a platform for building platforms. He also highlights the importance of user-friendly interfaces and managed services.


  • Abdel believes AI will continue to grow, with Kubernetes playing a significant role. Security and cost optimization will be critical focuses. He also emphasizes the need for real solutions to genuine problems, rather than niche solutions.


  • When asked about his favorite tech stack, Abdel names Go for backend, Mac for development, and Visual Studio Code for coding. He also likes IntelliJ and is currently experimenting with it. Chris also appreciates Go's efficiency for microservices despite some criticisms.


  • Chris and Abdel also touch upon conferences, particularly Devoxx Morocco and KubeCon.
