Podcast Episode 8 – The Rapidly Emerging Practice Area in Artificial Intelligence: AI DevOps

Ira Bell: Today we talk with Ryan Chynoweth, a Data Scientist at 10th Magnitude, and Colin Dembovsky, a Solution Architect at 10th Magnitude, about AI DevOps, a rapidly emerging practice area in artificial intelligence. Those who mature beyond technical agility and organizational agility to become digital disruptors are leading the charge in this type of digital transformation.

Ira Bell: Hi, I'm Ira Bell, the CTO at 10th Magnitude, and I'm here with Ryan Chynoweth, a Data Scientist at 10th Magnitude, and Colin Dembovsky, a Solution Architect at 10th Magnitude and a Microsoft DevOps MVP. Gentlemen, it's so great to have you both on the podcast. I want to open right up and ask: Colin, could you define the difference between AIOps and AI DevOps?

Colin Dembovsky: Thanks, Ira. That's a great question. I got to do some work with a customer, which I'll get into in more detail a bit later, and that started me thinking about how to do DevOps with AI. When we started talking about it internally at 10th Magnitude, we initially called it AIOps, but when I did a little reading, I realized there is actually something called AIOps, which is different from AI DevOps. AIOps is really applying machine learning and algorithmic patterns to operational data, logs for example.

Colin Dembovsky: So, if you have a large data center and you're generating millions of events a minute, you want to be able to sift those events and figure out which are actually critical and which are just noise. That would be an application of AIOps, where you're applying data science algorithms to operational data. That's really the AIOps side of things as far as I understand it.

Colin Dembovsky: Then AI DevOps, which is really what we want to talk about today, is about taking the DevOps principles, patterns, and practices that we know and love for cloud solutions and for deploying applications, and applying them in the realm of AI, where the target persona is a data scientist as opposed to, say, a developer.

Ira Bell: That’s really interesting. So, let me ask a little bit about AI DevOps then. What typical workflows does a data scientist have that might require DevOps?

Ryan Chynoweth: Yeah, Ira. That's a really great question. As Colin mentioned, AI DevOps applies to all AI solutions, but it varies depending on how you implement them. I see three main implementation patterns. The first is batch processing, which is very typical with data warehousing, where you're using stored procedures to extract and transform your data. Another way we implement AI solutions is as a web service API. I really enjoy web service implementations because they let you add intelligence to existing and new applications. It's a simple API call for your software developers, and it's really easy for your data scientists to deploy in Azure.
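
To make the web service pattern concrete, here is a minimal sketch of the init()/run() scoring-script convention used by the Azure Machine Learning service that Ryan mentions next. The model file, feature layout, and payload shape are hypothetical; a real deployment would load the model from the path Azure ML injects at runtime:

```python
import json

import joblib
import numpy as np

model = None


def init():
    """Called once when the service starts: load the trained model."""
    global model
    # "model.pkl" is a placeholder; Azure ML exposes the registered
    # model's real location via the AZUREML_MODEL_DIR environment variable.
    model = joblib.load("model.pkl")


def run(raw_data):
    """Called per request: turn a JSON payload into a prediction."""
    payload = json.loads(raw_data)
    features = np.array(payload["data"])
    return {"predictions": model.predict(features).tolist()}
```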

Ryan Chynoweth: I love the Azure Machine Learning service. It's a great way to wrap up your code in a Python container and just deploy it to Kubernetes. The final way I see AI solutions being deployed is stream processing. Here you typically have a large amount of data streaming from your applications or devices, flowing through Azure via some type of queuing resource like an Event Hub or a Kafka cluster, and you're trying to produce real-time predictions or analytics so you can catch something as it occurs. This is really common with fraud detection: processing transactions as they come in instead of in a batch process later. But the AI DevOps portion is different for each of these patterns, because the tools are different for each one as well.
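
For the stream processing pattern, the scoring code typically lives inside an event consumer. Here is a minimal sketch using the azure-eventhub Python SDK; the connection string, hub name, and score_transaction stub are all placeholders, not details from the engagement:

```python
from azure.eventhub import EventHubConsumerClient


def score_transaction(record):
    # Placeholder for the fraud model's real predict call.
    return {"transaction_id": record.get("id"), "fraud_score": 0.0}


def on_event(partition_context, event):
    record = event.body_as_json()       # Each event carries one transaction.
    result = score_transaction(record)  # Score it as it arrives.
    print(result)                       # In practice: write to a database or queue.
    # Persisting checkpoints in production requires a checkpoint store.
    partition_context.update_checkpoint(event)


client = EventHubConsumerClient.from_connection_string(
    conn_str="<EVENT_HUB_CONNECTION_STRING>",
    consumer_group="$Default",
    eventhub_name="transactions",
)

with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1" = read from start
```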

Ira Bell: Would either of you mind sharing some specifics on customer implementations you’ve been a part of?

Colin Dembovsky: Sure, I'll jump in. I went to a customer in New York, a pretty large PR firm, and they initially contacted us to help their data scientists. So we weren't working with developers; we were called in to work with their data science team. The problem they had was that their data scientists were running their Jupyter Notebooks and doing their number crunching on their laptops, and then uploading the results to a file share somewhere, right? Which is pretty typical for a lot of data scientists.

Colin Dembovsky: They're just grabbing their Jupyter Notebooks and doing their work, training new models and that sort of thing. But as their team started to grow, one team member would create a Notebook, run it, and bring in some dependencies and packages that enabled him to do his work. They were actually using Git for source control, which was great to see.

Colin Dembovsky: But when he handed the Notebook off to the next team member, that person would run it, and because they didn't have the correct dependencies installed, the Notebook would fail. So they had a lot of friction in their processes, especially as they started working on larger projects where they had to collaborate more, just trying to manage their dependencies. That was really their problem statement.

Colin Dembovsky: We came in initially to see if we could get their workloads onto Kubernetes. The advantage of doing that is that when you create the container image to run in a Kubernetes cluster, that image contains the application code, in this case a Jupyter Notebook, but it also bundles up the dependencies. So this would help them minimize the friction, because they'd be able to run their Notebooks inside a container that has all the correct dependencies, instead of messing around trying to figure out which dependencies are installed on their laptops and which aren't.
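
The transcript doesn't spell out how the Notebooks were executed inside those containers, but a common approach is to run them non-interactively with papermill, so the image carries both the pinned dependencies and the code. A sketch, with hypothetical notebook names and parameters:

```python
import papermill as pm

# Execute the notebook top to bottom inside the container, injecting
# parameters, and keep the executed copy (with outputs) for auditing.
pm.execute_notebook(
    "train_model.ipynb",          # input notebook baked into the image
    "train_model_output.ipynb",   # executed copy with all cell outputs
    parameters={"data_path": "/mnt/data/latest.csv", "epochs": 10},
)
```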

Colin Dembovsky: So, that's what we did initially. We ended up going a bit further with them, and by the time we left, their data scientists were checking code into Azure DevOps, which is Microsoft's DevOps platform, with an automation process that kicked in after their check-ins, or commits. At the end of the day, they were able to run their workloads in the cloud in a Kubernetes cluster in Azure and upload the results to a database in Azure as well.
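
Colin doesn't name the database they uploaded to, so as a stand-in, here is roughly what that final upload step can look like using the azure-storage-blob SDK; the container and file names are hypothetical:

```python
from azure.storage.blob import BlobServiceClient

# In a real pipeline the connection string comes from a secret, never source code.
service = BlobServiceClient.from_connection_string("<STORAGE_CONNECTION_STRING>")
blob = service.get_blob_client(container="results", blob="predictions.csv")

with open("predictions.csv", "rb") as data:
    blob.upload_blob(data, overwrite=True)  # Replace the previous run's output.
```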

Colin Dembovsky: And so instead of running things on their laptops and only being able to run those workloads when their laptops were on, they offloaded all of that work into Azure using Azure DevOps and some DevOps practices. That let them scale their team, work more effectively, and remove some of the friction. It was a great project to be a part of, because I typically work with developers and ops folks, coming from the DevOps side of the world. But we're seeing more and more companies adopt AI and machine learning, so this was a great opportunity for me to take some of those DevOps practices and put them in place with a team that doesn't have the DevOps know-how.

Colin Dembovsky: It was really great walking away from them, because the team felt confident in their workflows and in collaborating, and they didn't really need to know much about automated builds, automated releases, or even Kubernetes. All of that was just there supporting them, so they could get on with their work without being tripped up by the plumbing of maintaining dependencies and so on. It was a really great story, and they're still using the pipelines we put in place very effectively.

Ira Bell: That is really great, Colin. I bet it was really illuminating for those data scientists to see the containerization process work, so they didn't have to think about those dependencies; everything came up in the state they expected, when they needed it. So, you mentioned that they ended up with automated pipelines and such, but would you mind going into a little more detail about how you got them to that point?

Colin Dembovsky: Yeah, absolutely. This was where it got really exciting for me, because like I said, when we started with them they were using Git for source control, so at least they had some handle on the importance of source control, which of course is really critical for DevOps. But we took them a bit further and created some templates for them, ARM templates if I remember correctly, though it may have been Terraform, to spin up the Azure resources they required, because they needed a Kubernetes cluster.

Colin Dembovsky: Again, these folks were data scientists; they weren't really that knowledgeable there. They had some idea of what containers were, but they certainly weren't going to be spinning up Kubernetes clusters and maintaining them. They just wanted to run some batch processes, and we suggested Kubernetes as a good mechanism for that. So we got these templates in, and they started using infrastructure as code. Beyond that, we got some build automation in: we would take the repository where they were checking in their Notebooks and create a build, which would build the container image. Then we had an automated release process that would release those container images into environments.
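
The build step Colin describes, turning the repository of Notebooks into a container image and publishing it, is what the pipeline agent automates. Here is a minimal sketch of the equivalent step using the Docker SDK for Python; the registry, repository, and tag are hypothetical:

```python
import docker

client = docker.from_env()

# Build from the repo root; the Dockerfile there would copy in the
# Notebooks and install the pinned dependencies.
image, build_logs = client.images.build(
    path=".",
    tag="myregistry.azurecr.io/datasci/notebooks:latest",
)

# Push to the registry the Kubernetes cluster pulls images from.
client.images.push("myregistry.azurecr.io/datasci/notebooks", tag="latest")
```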

Colin Dembovsky: We had a dev environment and a prod environment, so they could run in dev first to make sure their Notebooks were working and the algorithms looked the way they needed to, and then they were able to just go into the release management plane and hit approve, once it had been tested in dev, to promote it up into production. So we were just taking some of the practices and patterns we see in "traditional development": if you're developing a web app, you'd be doing source control, automated builds, release management, all of those things in your day-to-day job.

Colin Dembovsky: So, we were taking those same things and applying them with the data science team, and it was great to see how quickly they grabbed onto it and ran with it. That was a little surprising to me, just because I knew they didn't have much background in it, but it was actually a pretty smooth process for them to get on board. In a couple of weeks we were able to take them from a team of disparate laptops to a smoothly operating machine that runs workflows in the cloud, manages dependencies and runtimes, and really brings enterprise-grade collaboration to their data science team.

Colin Dembovsky: The other thing we did, which was great to see: they would email each other, "I'm working on this thing," "Okay, I need you to look at this," so most of their communication was done by email, and they may have had a spreadsheet somewhere with a list of their to-do items. So another thing we got them onto was work item management in Azure DevOps.

Colin Dembovsky: So then they were able to actually use all the planning and work item tracking tools to start tracking their efforts. They started implementing sprints and taking some of the agile work management principles into their data science as well. Instead of just opening up my laptop and doing some Notebooks, I'm now collaborating from a backlog with my teammates, so we know what's going on, we know what we're planning for the next sprint or two, we can monitor where we are, and we can raise impediments. So again, it's taking all of those DevOps principles and leveraging them with the data scientists so we can apply DevOps to our AI workflows. That's what we were able to do with them. One of my favorite stories, and one of my favorite customers to work with, from last year.

Ryan Chynoweth: You know, Colin, you mentioned a ton of transformation you did with that specific client, but what I wanted to point out is that you said it without saying it: you changed everything about how these data scientists get from their laptop to production, but their actual development environment stayed pretty much the same. They didn't have to change what they were doing too much. They kept the same familiar feel with Notebooks, and everything was the same for them, except that the way they were checking in code, communicating, and getting things into production was all automated and much easier.

Ryan Chynoweth: A lot of times that's the hurdle when data scientists try to get to the cloud: you have to change the way you work, whether that's a new IDE or a new library you have to learn that interacts with some resource in Azure. But what you did involved a ton of changes, yet the data scientists got to keep working the way they had been. Is that about right?

Colin Dembovsky: Yeah, absolutely. That was one of the fun things about the whole project. I know some surface stuff about AI, I understand the concepts broadly, but I certainly don't spend my day in Jupyter Notebooks, right? So when it comes to creating algorithms and actually doing analytics and predictions and so on, I'm happy to hand that off to you and other smart people who can do it. But it was just great to be able to apply those DevOps principles there and to see how well it all meshed.

Colin Dembovsky: I think the thing that was so surprising to me was that at first the brief we were brought in on was kind of just, "Help us with some Kubernetes," right? Or, "Help us manage these dependencies." It morphed into bringing DevOps into their data science team, and it was really eye-opening for me to see that, actually, we can apply these principles in the realm of AI and machine learning. So that was a really fun project. And as you said, there wasn't a massive amount of change to the way the teams were working day-to-day.

Colin Dembovsky: They were still working in Notebooks and so on, but in terms of operationalizing their AI, if operationalizing is a word, they were able to take the work they were doing and run it in a Kubernetes cluster with all the monitoring, ops, and governance that goes around running workflows, and that whole process was pretty transparent to them. They didn't really have to worry about it because of all the automation we put in place. Like you said, that was what was so great about it: they got to become part of something enterprise grade while still being able to just focus on their work, without having to do a ton of learning on a whole bunch of plumbing, right? So that was really great.

Ira Bell: That's really powerful stuff, guys. So, as we're thinking about AI DevOps, which is really merging two incredibly complex but wonderful disciplines, I have to ask, and I'll start with Colin: does a data scientist suddenly have to start learning about containers and pipelines and such as they go into this area of AI DevOps?

Colin Dembovsky: Again, that's a really great question, Ira. As we see more and more companies going on this digital transformation journey, going beyond just lifting and shifting into the cloud, and even beyond choosing serverless or PaaS or SaaS offerings, as we see companies modernizing and leveraging cloud, we're going to see more and more demand for workloads that help us gain intelligence from the data we have.

Colin Dembovsky: We're able to collect more and more data because we can scale in the cloud; we get essentially infinite scalability, so we're generating more and more data, and there are going to be more and more requirements for analyzing it, doing predictive analytics, and all of those wonderful things that AI and ML bring to the table. As that demand increases, we're going to have increasing pressure to cycle quickly. The data science teams are going to have to do their training faster and turn around their models quicker, all without sacrificing quality. And that's where the DevOps disciplines come in.

Colin Dembovsky: Now, to get back to your question: does a data scientist need to know about Kubernetes and containers and pipelines? I think it's good to have some idea of what those things are and what they bring to the table, but there's no need to get deep into it and actually be authoring the pipelines or running Kubernetes clusters, right? DevOps merged the worlds of developers and IT operations folks and brought those people onto the same team so they could use their strengths and become cross-functional. I think that's the sweet spot for organizations: to say, "Can we take dev and ops and our data scientists, bring them onto the same team, and apply all those DevOps principles to the data science workflows?" That's where it's going to become important.

Colin Dembovsky: So, do they have to have deep knowledge of all that? I would say no. But having some idea of what pipelines are, what CI/CD is, and how to do effective branching in a Git repository will definitely stand data scientists in good stead, because that demand is going to go up, the teams are going to grow, and there's going to be more need for collaboration. And wherever you have that collaboration, you need to manage it so we don't step on each other's toes. So there's going to be more and more need for data scientists to have at least a high-level understanding of those DevOps things like containers, pipelines, and managing work on backlogs.

Ira Bell: Well, that makes so much sense, Colin. So, basically what I took away from that is that anything a data scientist can learn about DevOps is good, but they should probably continue to focus on and specialize in data science, and then grow into a horizontal understanding of DevOps as a whole.

Colin Dembovsky: Yup.

Ira Bell: So with that, I'll turn the table to Ryan and ask: for someone in DevOps who begins to work in the AI DevOps space, do you think they need to become adept in artificial intelligence?

Ryan Chynoweth: Pretty similar to what Colin said: a DevOps engineer, or whatever you want to call that person, doesn't need to know the in-depth details of creating machine learning solutions. They need to understand the process a data scientist goes through, but in the end, a machine learning or deep learning solution is just a data transformation. These are advanced transformations that are difficult to develop, but ultimately you have data coming in, you apply a function, and you have new data coming out in the form of a prediction. As long as the person setting up those DevOps pipelines understands that, they're pretty much set. Again, they don't need to get into the nitty-gritty of which algorithm is being used, as long as they understand what the data scientist is trying to do and how to get it into production.
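
Ryan's point, that to the pipeline a model is just a function over data, can be made concrete in a few lines. From the DevOps engineer's perspective, the entire solution reduces to something shaped like this sketch, where the model object and column names are hypothetical:

```python
import pandas as pd


def transform(incoming: pd.DataFrame, model) -> pd.DataFrame:
    """Data in, prediction out: the only contract the pipeline cares about."""
    scored = incoming.copy()
    scored["prediction"] = model.predict(incoming[["feature_a", "feature_b"]])
    return scored
```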

Colin Dembovsky: Yeah, I'll jump in there as well. I love working on AI opportunities, and part of the reason is that I get to collaborate with colleagues like Ryan, who are AI, ML, and deep learning experts, while I bring the DevOps side. That's where we've really seen exponential growth and exponential change: when we take the strengths of both personas, the data scientist and the DevOps engineer, and get them onto the same team, we find economies of scale that are very difficult to achieve if one person has to wear both hats. That's really the sweet spot: combining data scientists with DevOps engineers, applying the best of both worlds, and mixing the two to produce something truly wonderful. That's where we've seen some amazing projects come up.

Ira Bell: Okay, guys. One final question, and I gotta throw you a curve ball here. Who wins at a game of chess, DevOps or AI?

Colin Dembovsky: It’s DevOps all the way, Ira.

Ryan Chynoweth: My reinforcement model might take a little bit to learn but I think in the end it’ll win.

Colin Dembovsky: How many times would we have to play it though for it to win?

Ryan Chynoweth: Well, I mean, it would play against a computer for a while, learning how to play. You just run a couple hundred thousand simulations and you’re good to go.

Colin Dembovsky: Yeah, but without the pipeline, there’d be no way to actually run the simulations.

Ryan Chynoweth: Oh, I just need a GPU for this.

Colin Dembovsky: Ah, so you’re not going to apply DevOps to your machine learning model.

Ryan Chynoweth: No, I just want to beat you, Colin.

Ira Bell: Amazing, guys. Well, thank you very much for your time, Ryan and Colin. It’s truly a pleasure to work with you, and I hope we can have you as guests on our podcast again soon.

Colin Dembovsky: Thanks, Ira. Thanks, Ryan.

Ryan Chynoweth: Thank you.

Ira Bell: Thanks for listening to the Art of Digital Disruption. At 10th Magnitude, we're proud to create the path for organizations to stay competitive and disrupt their industries. For more information on innovation and how you can disrupt your industry, visit www.10thmagnitude.com/agilityquadrant and download our latest whitepaper. If you're ready to begin implementing the practices and benefits of AI DevOps to operationalize your AI workflows, contact info@10thmagnitude.com to learn about our two-week AI DevOps engagement.
