>>>Let’s go ahead and get started. I’d like to thank everybody for coming to the session today for a couple of reasons. One, it’s the third day of Build. I know folks are a little tired. It’s still early in the morning. We had the good fortune of not having the 8:30 session; people are either really, really motivated to be there for those sessions, or they’re still here from the night before. The second thing is I’d like to thank you for coming to this session because I know there are a lot of other great sessions happening right now, and I’m very humbled that you came to this one. So what we’re going to be talking about today is what’s new with Azure Machine Learning. Before we get into the talk, a quick survey, show of hands in the room. How many folks are using Azure Machine Learning today? Awesome. Thank you. How many folks have gone to a session this week that talked about Azure Machine Learning? Basically the whole room. That’s cool to see. A couple of other demographic questions. How many people prefer Python? R? C#? Scala? There’s one up front. Julia? Okay, we’ve reached the long tail. Let’s go ahead and get into this. For those folks in the room who aren’t familiar with Azure Machine Learning, I’m going to spend a little bit of time talking about what we’re up to and the problems we’d like to solve with the service. I’m really excited to have great friends and customers here from ASOS to talk about how they’re using Azure Machine Learning today to solve some pretty interesting problems. And we’ll go from that to what’s new, what some of the new capabilities are, and what’s coming next. I’m really excited to be here. The team is excited to talk about what they’ve worked on really, really hard, so they also thank you for coming to this session. Let’s first chat a little bit about the different pieces and parts of the AI capabilities that are in Azure.
It’s day three, so you might have noticed we’re talking about AI a little bit at this conference. How we think about the world is really in terms of a set of infrastructure-level base capabilities that you can use. We’re building our cloud from the hardware up to be accelerated for artificial intelligence workloads, and that starts with what chips we put in the machines. Mark Russinovich is giving a really interesting talk right now, at this very point. I encourage you to check out that talk as well, where he’ll dive into what we’re doing in the data centers. On top of that, we have different compute and data infrastructures that you can leverage, because we know customers have lots of different kinds of data. It comes in different shapes and different sizes, and it’s stored in different systems. And there is no AI without data, so that’s important. And we have a set of compute engines that really meet lots of different needs depending on what you’re trying to do. If you have a data-intensive workload, Azure is the best place to run Spark, with Azure Databricks. Alex Sutton talked about the Batch AI service earlier this week and about some of the really large-scale, GPU-based deep learning workloads we’re running inside of Azure. In addition to the infrastructure, we have tools. Tools and frameworks are critically important. One of the principles of what we’re doing inside of Azure is that fundamentally we’re open to you using the tools and frameworks of your choice. We would be pretty silly if we did this presentation and said, hey, we’ve got a lot of really interesting machine learning stuff, but you can only use one framework. We have customers that are using every framework you can name, and the next time we meet, there will be two more frameworks they’re using that we’ve never heard of before. That’s just the pace of what’s happening in the ecosystem, and it’s a feature of the platform that you get this productivity with any framework or tool of your choice. Then we get into the services.
We really divide the AI services up into three different categories. The first is conversational AI. This is about putting text- and speech-based bots into your applications. We’ve talked about those a bunch this week, so that’s all I’m going to say there. The next is a set of pre-trained AI capabilities. This is where we at Microsoft have done the hard work around getting the data, building the models, and deploying the models, and all you have to do is make an HTTP request and build those into your apps. You can build a classifier on top of those. You can take your own examples of speech in order to create a speech engine that’s optimized around the words and vocabulary that are familiar in the setting you’re trying to optimize for. And then you always need the ability to go beyond that. This is where the custom AI services come in, and this is what we’re doing with Azure ML, which is really for when you want the most control and flexibility, the ability to pick and choose the framework you want to use. This is the tool of choice for that. When we think about what it means to build a model, we really think of three stages. The first, again: there is no AI without data. So the first part of any one of these projects, and you’ll hear ASOS talk about this as well, is about getting the data. And then it’s about understanding the data, shaping the data, cleaning it, and getting it into the right size and shape for you to use in the next phase, which is building and training the model. Building and training the model is somewhat like software development, but it’s a lot more iterative. You really end up exploring a lot of different ways, tools and techniques, and configurations in order to find the models that are going to perform the best. Then once you have that model that performs really, really well, you get into the last phase, which is: I want to deploy this.
This is where I actually want to go and deploy this into the cloud, deploy it to the edge, or put it inside of a mobile application I’m building. This is where I want to use the model. What I’m not showing on this diagram is that there’s one more piece of this: once you deploy the model, it’s super, super interesting to understand how that model is actually doing. So there’s actually a stream of telemetry that’s going to come off of the model and go right back into the system, and that’s critically important, because customers’ behavior is going to change and the types of data they use are going to change, so you want to be able to update this as quickly as possible. So what is Azure Machine Learning? Azure Machine Learning is our platform, fully managed, for custom AI development. We’re really trying to solve four problems. The first is we want you to be able to build, train, deploy, and manage those models across any data at any scale. We don’t want you to be unable to use the service because you need thousands of cores in order to train, but we also want you to be able to start just on your machine, train the model there, and be able to deploy it. The second thing we want to do is boost developer productivity with agile development. When you look at successful organizations, and ASOS will talk about this as well, it’s incredibly important to reduce the latency from the time you have an idea, to when you can deploy that model, to when you can see how it is performing in the real world. The more we can do to shorten that cycle, to make that loop go faster, the more productive you’re going to be and the more creative you’re going to be. This is the secret sauce to really unlocking the flywheel of an organization that’s going to take advantage of machine learning. This next one, about being able to deploy these models everywhere, is also critically important to our customers.
Our customers don’t just want to deploy into the cloud, although a lot of them choose to. They may need to deploy on premises. They may have highly sensitive data they can only score inside their data center, and they want to deploy a model there. They may have concerns around latency. They need to be able to deploy to a remote facility that can withstand a loss of internet connectivity. Then finally, what we’ve been talking about a lot, and we showed a number of these this week, is being able to deploy to edge devices. This is where you want to take the model and put it as close to the event as possible. I want to be able to take a model and deploy it on a camera. I want to be able to deploy it on a sensor that’s on a tractor. And the reason I do that is so that I can make that decision as quickly as possible, as close to the event as possible. This last one I already chatted about, but this is about being open to the tools and platforms that you are using today, and our story with Azure Machine Learning is that, if it runs on Python, which is almost the entirety of the ecosystem, you can go ahead and use it here, and we’ll be adding other things as we go forward. There are two major buckets to Azure Machine Learning. The first set of capabilities is all about the build-and-train phase. And the problems we’re trying to solve here are: what are the code and the configuration and all of the dependencies that were required in order for me to run my training job? This is really critically important because, as I said, development is an iterative task, and I want to be able to go back in time, and I want to see how my team is doing. I want to watch my team of data scientists as they build models and see which one is actually performing the best. So what we’re going to do, on top of all of these frameworks and tools, is capture all of the metrics about those training runs. You can query those. You can create reports on top of those.
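To make the experiment-tracking idea concrete, here is a toy, stdlib-only sketch of what "capture metrics per training run, then query for the best" looks like. The class and method names are made up for illustration; this is not the Azure ML SDK.

```python
import time

class RunLogger:
    """Toy experiment tracker: record config and metrics for each training run."""

    def __init__(self):
        self.runs = []

    def log_run(self, config, metrics):
        # One record per training run, timestamped so you can "go back in time".
        self.runs.append({"time": time.time(), "config": config, "metrics": metrics})

    def best(self, metric):
        # Query across all runs for the one with the highest value of a metric.
        return max(self.runs, key=lambda r: r["metrics"][metric])

log = RunLogger()
log.log_run({"lr": 0.1, "epochs": 5}, {"accuracy": 0.84})
log.log_run({"lr": 0.01, "epochs": 5}, {"accuracy": 0.91})
best = log.best("accuracy")
```

A real service persists these records and renders reports over them; the point here is only the shape of the data being captured per run.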
I’ll show an example of this a little later. What this allows you to do is find exactly the model that you want to take and then deploy. The next set of capabilities is around model management. This is: once I have that model, how do I go and deploy it? This ranges across challenges from how do I discover the models that I have, to actually deploying them. We’ve taken a huge bet on top of Docker, and it’s been super exciting to see how this has played out, because Docker and containerization are what give us that flexibility. We can take those models, and we can deploy them up to AKS if you want a scaled-out, large-scale Kubernetes cluster. If you just have a quick single-instance service that you want to deploy, you can deploy that to Azure Container Instances. If you don’t want to use our services, you can take that Docker container and run it somewhere else where Kubernetes runs. You can run that on premises in your own Kubernetes infrastructure or run it on top of Azure Stack. That’s also what allows us to take the models and deploy them to edge devices, many of which are capable of running containers. The last thing to point out here is that the approach we’ve taken integrates very, very easily with CI/CD platforms. You’ll see ASOS talk about this. They wanted to integrate the model deployment with the rest of their software deployment life cycle. This goes back to the point around agility and increasing the rate at which you can get those models deployed. With that introduction to machine learning, I’d like to welcome up Bob Strudwick, the CTO of ASOS, and I’m super excited to have him here. Bob, come on up. [Applause]>>Hi there. Very excited to be at Build this year. Just by way of background, we were here last year talking about the replatforming of our entire consumer-facing software estate into Azure, targeting platform as a service. So we know something about the business of developing high-availability services for the cloud.
And in the simplest terms, ASOS is a pure-play ecommerce operator on a mission to be the world’s number one destination for fashion-loving 20-somethings. By way of scale, we’re talking 2 billion of turnover, 87,000 products, 60 million active customers, 100 million page requests a second, services that scale to 4,000 or 5,000 requests a second, 1,200 people in ASOS Technology: that type of scale. And data science and data scientists have been part of the landscape at ASOS for maybe three or four years, that type of thing. And like most organizations, we started in a small, experimental way. Fast forward to now, and visual search and product recommendations and fit analytics and conversational search through Facebook Messenger are all examples of successful integration of external cognitive services and our own data science into the ASOS software stack. But I guess for us, like many organizations that aspire to be truly data driven, there comes a point where you have to try and take this beyond an activity of a data science team and aspire for this to be an activity that’s embedded in every software engineering team in your crew. So we thought we would share some thoughts about how we embarked on that journey and how we tried to demystify and productize data science as a prerequisite for extending data science to all our teams. I guess we start with the idea that data science, like all other software design and build activities, can be boiled down to a very few things that you have to be very good at. And this reference architecture represents, I suppose, our view of what those capabilities are, at a logical or conceptual level, before you get into the business of languages and machine learning frameworks and runtime environments and all of that. And here they are.
Model preparation, model training, model management, real-time and batch compute, and the engineering wrap that allows you to take those capabilities and turn them into high-availability services with security and all of that. And by creating that model of the capabilities that you need, you’re able to have an objective measurement of where you are, the dimensions across which you need to improve, and where technology can play a part in creating a step change in where you’re at. So what was our take on this? What were we after in pursuit of high-performance data science, and where does Azure Machine Learning fit in for us? I guess you start with the idea that we considered an iterative approach to data science to be at the heart of the high-productivity story. Experimentation is such a critical part of a machine learning project. The simple truth is that there are no guarantees of success. You may not be able to find patterns in the data. The patterns in the data that you do find may not be strong enough to deliver the business benefit you’re after. So you have to be able to fail fast. You have to be able to pivot and move on. So while some of this was new and challenging for us, the creation of an iterative approach to data science as well, much of what’s here in the pursuit of high productivity were things that we knew something about: how to build high-availability services, how to use technology to automate build and deploy, how to use technology to automate provisioning. These were things we had solved at scale, and many other organizations have too. So for us, the adoption of Azure Machine Learning was as much about making sure it didn’t get in the way of capabilities that we already had as it was about the introduction of new capability. So what were we really after? I guess, in two dimensions.
First was the normalizing of service boundaries between data science and the rest of the world. So, the problem statement from back then: data science capabilities were siloed and hard to consume. And what we were after was the establishment of formal service boundaries, so that data science back-end services could just be given responsibility for a coherent separation of concerns, just like other services, and those services would have the same SLAs in terms of availability and performance and security and so on. And we would be able to iterate the data science part of it, to go into production with an MVP of the data science, and to see that bit evolve inside an engineering wrap that delivered a rock-solid technical implementation. That would be the underpinning of this iterative approach. Secondly, making the way you go about producing the services as high-productivity as possible. The problems: high-friction handover between different roles, limited opportunities to pair up in a way that’s routine elsewhere, limited support for data and model traceability. And what we were after was an ecosystem where the frameworks that make data science highly productive could coexist with the frameworks that do the same thing for software engineering. We wanted to integrate with our CI/CD solutions. We wanted to support repeatability and traceability in our experimentation, and then to promote the successful models into production and have traceability from deployed service back to model. So that’s enough background. I am going to hand you over to Naeem Khedarun and Saul Vargas, and they’re going to take you through our first Azure Machine Learning service, namely brand recommendations. So over to you guys.>>Good morning. My name is Saul. I’m a data scientist. Here with me on stage is Naeem, a software engineer.
As he was pointing out, we decided to test the capabilities that Azure Machine Learning offers, and the new ways of working between engineers and data scientists, with the following proposition: brand recommendations. Brand recommendations, for us, is one of the particular ways we want to help customers on our website and in our apps, by providing them with personalized recommendations on brands we think our customers might be interested in, either because it’s already a favorite brand or because we think it’s a brand they don’t know and may like. When the customer selects one of these recommended brands, they have the freedom of going through the whole offer that ASOS has for that particular brand. That’s something that for us is complementary to the experience we already offer, which is personalized recommendations at the product level. One of the requirements we had for developing this brand recommendations approach was that it could be extended, or be generic enough, that in the future we could do other types of recommendations over brand-like groupings, for instance recommending product types, colors, styles, et cetera. With that idea in mind, what we decided on as a first approach was taking our already existing model for doing personalized product recommendations and just looking at the brands of those products. To give you a more specific example, say we have a customer who’s a running enthusiast, and we want to do accurate recommendations. In this case, we see sportswear that we hope is quite relevant for the customer. We’re going to run something that is logically equivalent to this. So this is a SQL representation of it. If you’ve been paying attention to Matt, you know we didn’t implement this in SQL, but in Python. We were very happy to do it that way. The outcome will be that we get a set of recommendations, and the order is determined by the number of products that each brand is represented by in the recommendations.
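The logic Saul describes, ordering brands by how many products they contribute to a customer’s product recommendations, can be sketched in a few lines of Python. The product and brand names here are invented for illustration, not ASOS data.

```python
from collections import Counter

# Hypothetical top product recommendations for one customer, as
# (product, brand) pairs; in reality these come from the recommender.
recommended = [
    ("running-shoe-1", "asos"), ("running-top-2", "brand-a"),
    ("running-shoe-3", "asos"), ("shorts-4", "brand-b"),
    ("socks-5", "asos"), ("jacket-6", "brand-a"),
]

def brand_recommendations(products, top_n=3):
    # Rank brands by the number of recommended products they account for.
    counts = Counter(brand for _, brand in products)
    return [brand for brand, _ in counts.most_common(top_n)]

top_brands = brand_recommendations(recommended)
```

With the sample data above, the brand with the most products among the recommendations comes out first, exactly the ordering rule described in the talk.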
In this particular case, our own brand, the ASOS brand, will be first, as it has the most products showing up in these personalized recommendations. It’s interesting to stop a little bit and see the internals of that implementation. The idea is that we’re relying on a well-known approach for doing personalized recommendations, which is matrix factorization. In essence, matrix factorization is about coming up with vectors that represent both customers and products in such a way that, by doing an operation such as the dot product between a customer vector and a product vector, you can get a prediction of the relevance of a product for a given customer. The way we implemented this particular approach, before having AML as part of our toolset, was the following. We had a training phase, a training component using historical data to train the model and come up with these vectors. This is something that we run as a batch job on a daily basis, and the current machine learning relies on Spark’s machine learning libraries. Once we have that model, it’s handed over to a different component, an online prediction engine. This one does the actual computation to recommend products in real time. In particular, this component was implemented on the .NET Framework. To understand why we had this particular scheme, Naeem will give you more information.>>Thank you, Saul. So that first version of brand recommendations ran across three separate teams. If you have a data science function that’s grown over recent years, this might look familiar. We had a dedicated data science team. They looked to find data sets applicable to the particular proposition, in this case writing some data prep in Python to structure some features that could feed into their machine learning models. They would experiment with a few different algorithms and the parameters to each one, and the output of that team would be an algorithm selection and the hyperparameters for that algorithm.
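Going back to the matrix-factorization step Saul outlined: with learned customer and product vectors, scoring is a dot product and ranking is a sort. A minimal numpy sketch, using random vectors in place of the factors a real training job (Spark, in their case) would learn:

```python
import numpy as np

rng = np.random.default_rng(0)
n_customers, n_products, k = 100, 1000, 16   # toy sizes for illustration

# Matrix factorization learns one k-dimensional vector per customer and
# per product; random vectors stand in for the trained factors here.
customer_vecs = rng.normal(size=(n_customers, k))
product_vecs = rng.normal(size=(n_products, k))

def recommend(customer_id, top_n=5):
    # Relevance of each product is its dot product with the customer vector;
    # note this naive version scans every product (linear in catalog size).
    scores = product_vecs @ customer_vecs[customer_id]
    return np.argsort(-scores)[:top_n]

top = recommend(7)
```

The linear scan over every product in `recommend` is exactly the cost that becomes a problem later in the talk, once the catalog reaches tens of thousands of products.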
We would give this to the big data engineering team, who would create two new pipelines. One pipeline would do the data prep, write it, and ship it to Azure Storage. The second would read from Azure Storage, run the training code, and output the trained model. The output of that team is a trained model, and that would go to the API teams. They would run against that model to score the products. They would compose it with other microservices from ASOS and create an endpoint we could give to our web and app teams. They would read the API and send it out to our customers. One of the things we went through was the amount of input it took from the teams to get an end-to-end vertical slice working between all the teams. That increased our time to market: it took us a few months to get a solution out. So when we began working with the new version of Azure Machine Learning, we wanted to create a team that would encompass all the roles we thought we needed to try all the new features it had to offer. That included the data scientists to write the algorithms and models. It included the data engineers to run it at scale and create our pipelines. We brought in API engineers who could write scalable, performant, and reliable APIs. And we got help from our platform engineers to help us build the CI/CD pipelines from day one. We also brought in performance test engineering help, and they helped us scale-test the systems. They brought the load test rigs and the large environments. The other thing we sought to do was redefine the architectural boundary: complete the data science within a single team and expose to the rest of the teams at ASOS only RESTful APIs and contract-based messaging. The other thing we looked to do was homogenize our tech stack. Python has fast web frameworks and lets us build our big data pipelines. We wanted to do TDD and encouraged testing from day one, and we picked this as our unit-testing framework of choice.
At this point, with data scientists and engineers working together, we got test coverage to 80% on this particular solution. It wasn’t a figure that was mandated; it was something that evolved naturally. With that, it gave us the confidence to be able to change, optimize, and refactor our code knowing we hadn’t broken the system or affected our teammates. We used Application Insights for health monitoring. It gave us request data, request durations, CPU, memory, and networking stats, and it also gave us the ability to instrument parts of our code to find out where we’re spending the time. For cluster health, we used Log Analytics. We installed the monitoring agent on all the nodes. The Azure Machine Learning CLI helped us write our CI/CD pipelines from day one. We could provision Kubernetes clusters and deploy models and services, and it integrated into the ecosystem of what we already have at ASOS. Because we had an integrated team and a homogenized tech stack, we could get up and running in several weeks. We could shift activities to run earlier in the development life cycle. One of those was performance testing. On the first iteration of the solution, we ran our first load test, and the results were not positive: response times spiking into seconds, and requests per second for a single machine around ten. On some of our larger-scale systems, we normally perf test upwards of 1,000 requests per second, and that was important. I could log into the portal and move the slider up to 1,000 machines, but I think my boss would literally kill me. We used the code instrumentation to narrow down the bit taking the most time, and we located it to be the dot product. After investigation, that dot product was running as an in-memory operation across a data set that was hundreds of megabytes big. I needed to get some help. As an engineer, I could change the libraries, rewrite them to be more optimal, or change the scale, but none of those solutions would be a fit for this problem.
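The code instrumentation Naeem mentions, finding out where the time goes, reduces at its core to timing named sections of code. A stdlib-only sketch of that idea (this is not the Application Insights API, just the underlying pattern):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(name, sink):
    # Accumulate wall-clock time spent inside the block under a named key,
    # similar in spirit to instrumenting a section of code.
    start = time.perf_counter()
    try:
        yield
    finally:
        sink[name] = sink.get(name, 0.0) + time.perf_counter() - start

timings = {}
with timed("dot_product", timings):
    total = sum(i * i for i in range(10_000))
```

Comparing the accumulated times per named section is how you narrow a latency problem down to one operation, as the team did with the dot product.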
I got help from the data scientists on the team.>>In order to understand what was going on, once we located where the problem was happening, we needed to revisit it and reflect a little bit. Just to recap: the idea in this approach is that you get one vector that represents a customer in some kind of high-dimensional space, and the way you compute the relevance for the customer is by doing this dot product operation. Basically, out of this dot product, for every individual customer you get some kind of estimate of the relevance of the different products, and just by sorting them from most relevant to least relevant, you can come up with a selection that would be your recommendation. Now, the main problem with this is that the complexity of this operation is linear with respect to the number of products, and that’s no good. Basically, the moment we have tens of thousands of products, this operation, as you can see, does not scale, and we need to do something to improve on it. Now, something that my parents told me when I was a child was that, if you want to look for something in your bedroom quickly and efficiently, you need to organize it first. And this is basically what we did, but applied to the context of product vectors. The idea is that you have product vectors, and if you want to organize them, one good way is by using a library called Annoy. There are several others. What this one does in particular is apply a kind of random splitting to the vector space where your products live, in such a way that you end up building binary trees. In the binary trees, the leaves are the different buckets that you came up with in this random partitioning, and the idea is that products that are very similar to each other, that are close, will appear in the same leaves. Because this is a process that is done randomly, you don’t always get that.
If you want to ensure that at some point products in proximity end up in the same leaf, you basically repeat this process a number of times. This parameter you can set to ten, or to thousands; more or less, it’s a parameter that you need to optimize. Once you have these binary trees, you can pass from the previous linear search to a binary one. So the idea is that you get the customer vector, and you can look for the right branch. You optimize this operation. You do it once for every one of these randomly generated binary trees, and in the end, you have an approximation of the top products that you would recommend to the customer. This is an approximation, so you basically need to sacrifice a little bit of accuracy with respect to the previous solution, but in turn you get a complexity that’s much more reasonable for our case. In this case, it’s something that scales logarithmically with respect to the number of products. Once we had this idea, we implemented it. We reran the performance test, and then we got to double-digit numbers. In this case, we’re not talking any more about seconds of response time; we are talking about tens of milliseconds, and the requests per second are now at an acceptable level, around 800.>>So by having an integrated team of data scientists and engineers working together, we managed to reduce our time to market from several months to a matter of weeks. We could spend less time coordinating between teams and more time applying data science to different problems in the business. We had happier data scientists, because they could see the results of their experiments in production in front of customers earlier on, and we had happier engineers, because we could learn more about the data science methodology and machine learning from the experts in our team. On that note, thank you for listening. I hope you found it interesting. I’d like to hand you back to Matt Winkler. [Applause]>>Thank you, guys.
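Saul’s description of Annoy-style search, random hyperplane splits producing binary trees whose leaves hold nearby vectors, can be sketched in plain Python. This is a toy stand-in to illustrate the idea, not Annoy’s actual implementation; all names are invented:

```python
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def build_tree(vectors, ids, rng, max_leaf=8):
    # Split the points with a random hyperplane, recursing until leaves are small.
    if len(ids) <= max_leaf:
        return ids
    normal = [rng.gauss(0, 1) for _ in range(len(vectors[0]))]
    left = [i for i in ids if dot(vectors[i], normal) <= 0]
    right = [i for i in ids if dot(vectors[i], normal) > 0]
    if not left or not right:          # degenerate split: keep as a leaf
        return ids
    return (normal, build_tree(vectors, left, rng, max_leaf),
            build_tree(vectors, right, rng, max_leaf))

def query_tree(tree, v):
    # Walk down to the single leaf on the query's side of each hyperplane.
    while isinstance(tree, tuple):
        normal, left, right = tree
        tree = left if dot(v, normal) <= 0 else right
    return tree

def approx_top(vectors, forest, v, n):
    # Union candidates from every tree, then rank only those candidates
    # exactly: far less work than scoring the whole catalog.
    candidates = set()
    for tree in forest:
        candidates.update(query_tree(tree, v))
    return sorted(candidates, key=lambda i: -dot(vectors[i], v))[:n]
```

Building several trees and unioning their leaves is exactly the "repeat the random partitioning" trick from the talk: any one tree may separate two nearby points, but across many trees they almost always share a leaf somewhere.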
So that was a really good example of all of the problems that we’re trying to solve with Azure Machine Learning, applied to a very real scenario that these guys are going to be deploying into production. Now I’d like to switch to the what’s-new part. The service has been in public preview for a little while; we launched at Ignite. We’ve had things that worked really well, and we’ve had opportunities to do some things better. The things that worked really well, as you heard in the case study we just talked about: this notion around experimentation management, keeping track of the experiments, giving you flexibility around model management and deployment, and the monitoring of those, is super, super critical. That’s resonated well with a lot of the customers we talked to. This next one is really important, which is the ability for you to scale from your desktop up into the cloud. We know a lot of experiments start locally. We know that’s where you’re going to go and find the initial ideas before you party on the big data set. So the flexibility with Azure Machine Learning has really helped customers be more productive there. This notion around deploying anywhere has also been big. We’ve got a lot of customers deploying to places that are not the cloud, and that’s really a result of the work we’ve done on top of Docker. As for opportunities: one, we heard feedback around how we can make model deployment easier. The feedback from those of you who have been sending in smiles and frowns is really helping shape where we want to go to make this easier. We got a lot of requests around taking the simplicity of the dev-test loop even further for container-based deployments. Folks didn’t want to deploy an entire Kubernetes cluster; they wanted to take a single container instance and get something up and running. One of those things is the ability to take the SDK and use it interactively from a notebook.
The last two challenges that we’ve heard were about, really: I love that I’ve got all these options around compute; can you make it easier for me to create those? I’ll be happy to talk about what we’re doing there. And the last one is a question we got that, for everyone who’s visited the booth, you probably overheard people asking. It’s people who know they have a challenge they want to use machine learning for, but they don’t know where to start. They can go on the internet and maybe they can find a tutorial. For a lot of common tasks, we have customers doing things around vision and text, and we constantly got questions about how do I start solving this problem? Those are some of the things that have gone into some of the changes we’ve made and some of the things we’ve announced so far at Build. So, a couple of things: a summary and overview of what’s new. We’ve got in preview today the hardware-accelerated models powered by Project Brainwave. From this point, I’ll dive deep into all of these, so I’m not going to spend too much time chatting about them now. Azure ML packages for forecasting. We’ve got a model gallery for ONNX models that I’m pretty excited to show you what you can do with. And then, what’s next: I will dive deeper into these today, not tomorrow. That’s a bug on my slide; apologies. We’ve got an updated SDK, taking that feedback about wanting to use the SDK from a notebook; we’ll show what’s going on there. There’s a common task we see around hyperparameter tuning. This is something that inside the company we do at a really massive scale, and we’ve learned a few tricks doing that, and we’re deploying those capabilities for you to use now inside of Azure Machine Learning. We’re adding a couple of different compute targets and making it easier for you to get those. And we’ve announced a couple of different enhancements to IoT-based deployments. For the next chunk of the talk, I’m going to walk through all of these.
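Hyperparameter tuning, mentioned above, is at its simplest a search loop over configurations. Here is a minimal random-search sketch; it is illustrative only and is not the API of the Azure ML tuning service, and the objective function is an invented stand-in for validation accuracy:

```python
import random

def random_search(objective, space, n_trials=25, seed=0):
    # Sample hyperparameter combinations at random, keep the best score seen.
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in space.items()}
        score = objective(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params

# Toy objective standing in for validation accuracy: it peaks at
# lr=0.1 and depth=5, so the search should steer toward those values.
space = {"lr": [0.001, 0.01, 0.1, 1.0], "depth": [3, 5, 7]}
score, params = random_search(
    lambda p: -abs(p["lr"] - 0.1) - abs(p["depth"] - 5), space)
```

Real tuning services add smarter sampling and early termination of poor runs, but the contract is the same: a parameter space in, the best-scoring configuration out.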
So let's start off first with the packages. At Microsoft, we've got hundreds of data scientists working with customers, big customers and small customers. The thing that's common to all of them is that they come to us with really interesting problems. As we've worked with those customers to solve those problems, we've started to see a series of patterns emerge. When you're doing vision, there's a set of very common tasks that need to happen. There's a common approach to how you go and solve those problems. When we look out at the landscape, when we look at a deep learning framework, we don't see those patterns. We don't see that help in those frameworks. We see, hey, here's a bag of really awesome, super performant, super scalable stuff that you can go ahead and start using, but there's no great way to accelerate someone solving specific problems. So one of the things we've done is release packages for vision, text, and forecasting. These are three common scenarios we see. What I love about these is, one, that they layer right on top of Azure Machine Learning. That means for all of the training runs that you do, you get all of that rich logging. You get all of the experiment tracking. You can deploy those models the same way. We've already got a customer who has taken these models and deployed them to a Windows 10 device for remote field inspection. So the person walks around with this wearable device and can look at things and find faults in them. Because we're building on top of containers, it just works. It's really great to see how quickly our customers can get up to speed on these. We're trying to take the steps around ingesting the data and building and training the model and give you a lot of help in accelerating that. And the last thing that's really important to point out is we're trying to put these very naturally on top of the frameworks.
What that means is you can take our defaults, and we think they'll work pretty well. They've worked pretty well for customers. The team that built the drone demo for the keynote, the one that flew over and inspected the pipes, was able to turn around the pipe flaw detection model in under a day, starting from scratch and just a lot of pictures of broken pipes. So that's an example of the productivity with the defaults, but if those defaults don't work for you, you can peel back the onion, go down one layer at a time, and have a fair amount of control over what's happening there. So this is a really great example. On the left-hand side, you see a very basic classification sample. This is from Keras, which is actually a high-productivity way to write these models on top of other frameworks. We obviously don't have to go into the code. I know it's super small to read; that's kind of the point. This is basically the code to do this. If you squint here, you can see I'm going over the images and trying to perform a bunch of common data-manipulation tasks on them. This is taken straight from the hello-world image classification sample. If we look at what that code looks like with the packages, you'll see we've wrapped it all in a set of functions which make this a lot easier. My favorite line is where I pull in the different ways to augment and change the data set: I want to rotate and flip and crop it. That one line takes up 20 of the lines of code on the other side. So this is a great way to get started trying to solve a computer vision problem. We have the same thing for text and the same thing for forecasting. If you want to get started with these today, go to aka.ms/ml-packages. That will take you to the documentation site, and it will tell you how to download these and get started with them.
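The idea of collapsing many lines of manual image wrangling into one declarative augmentation call can be sketched in plain Python. This is an illustration of the pattern only, not the actual package API; the function names and the toy list-of-lists "image" are made up for the example:

```python
# A toy "image" is a list of rows. Each augmentation is a plain function,
# and an augmentation pipeline is just a list of those functions.

def flip_horizontal(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def rotate_90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img, ops):
    """Apply each augmentation in order."""
    for op in ops:
        img = op(img)
    return img

# One declarative line stands in for many lines of manual manipulation.
pipeline = [flip_horizontal, rotate_90]
print(augment([[1, 2], [3, 4]], pipeline))  # → [[4, 2], [3, 1]]
```

The point is the shape of the API: the caller states which transforms they want, and the package owns the mechanics of applying them.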
It's very, very simple to download: run a couple of pip commands to get these packages, and as you start scaling up to the cloud, you can do that because they build on top of Azure Machine Learning. So those are the packages. The next thing is hardware-accelerated models, and this is something that we're really, really excited about. This builds on a learning we've had in Azure for a while, which is that FPGAs can drive performance. We've been using FPGAs for a while now to offload the computation for how we figure out where to route packets. What we've also learned is that they're really great for running real-time inference of machine learning models. This is something inside of Bing that we've been doing for a while. If anyone went to Eric Boyd's how-we-do-AI-at-Microsoft talk on Monday, we walked through some of the use cases we have inside of Bing. It's important to know the scenario here is real-time scoring, which means I don't want to wait until I've got 32 images or 100 images batched up. I need to, as quickly as possible, get a result back on a single instance of data, whether that's a single picture coming off a camera or a single search query, and that's where we see these hardware-accelerated models are really, really good. So what we've released are the Azure ML models powered by Project Brainwave. If you saw Doug Burger and Ted Way's talk yesterday, they did a super deep dive on all of this. Just for reference, what we're seeing is that for the ResNet-50 model we've got deployed, which is a really common network topology used for image processing and transfer learning, we can score a single image in about 1.8 milliseconds. When we compare that single-instance scoring time, it's orders of magnitude better than trying to do this on a CPU, as well as other ways to accelerate it, and we're providing this at a really low cost.
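As a back-of-the-envelope check on that latency figure, 1.8 milliseconds per single-image request works out to roughly 550 sequential requests per second from one endpoint. This is a rough calculation only; real throughput depends on network overhead, concurrency, and how the service is provisioned:

```python
# Reported single-image scoring latency for the ResNet-50 FPGA deployment.
latency_s = 1.8e-3

# If requests are issued strictly one after another, throughput is 1/latency.
requests_per_sec = 1 / latency_s
print(round(requests_per_sec))  # → 556
```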
So we think the price-performance on these is really leading the industry. Right now we've got the ResNet-50 model, and there are more models coming soon. I just want to show a little bit of code here for what this looks like. From a deployment perspective, this is another use case we talked about, with Jabil, which is doing analysis of circuit board images to find flaws. What they do is build and train a model with Azure Machine Learning. They use the ResNet hardware-accelerated model to extract a ton of interesting features from those images, then they take the last layer off and hand that to their own classifier to say: is this board broken or not? This is the manufacturing equivalent of hot dog or not. And then we deploy that onto the FPGAs to accelerate the ResNet-50 portion. Let's look at what this looks like in code. What I've got here is just the notebook that's provided in the project, running in Azure Notebooks. This is hooked up to my Azure Machine Learning account. You have to fill out a survey request in order to get access to these because we're at a preview stage right now; we want to make sure we get these to folks who are going to use them. So you have to request access, and then, in the part I want to show here, I pull in the ResNet model itself and pull in the AI package. What's interesting is that you define a pipeline of the different stages that need to happen. What we often see in these scenarios is part of the pipeline accelerated by the FPGA and some of it executed on CPU. That's part of what we're pulling together here. You'll see I pull in three different stages of my pipeline in order to process the data. For this stage right here, which is what's taking the bulk of the time, I'm pulling in that model, the ResNet-50 model. This is where in the future you'll be able to point at other model types. Then I just go and deploy the service.
And this takes under a minute to deploy the service and get it out there, and then the other thing we'll do is actually create a client for you, so you can test it right away. The protocol format is gRPC, so you can take any gRPC client and get super, super low latency from your application. Then you can just make the calls to the service here. So here we're scoring an image. This will return a result; in this case we built a classifier on: is it a snow leopard or not? So that's really easy to get started with. There's a much easier URL, which is aka.ms/aml/real-time/ai. You can get started with the code today and fill out that survey. The last thing I want to talk about is the ONNX model gallery. This is something where we see people wanting to use models that have already been built. It's incredibly expensive and time consuming to build a lot of these models, and other folks have taken on that time and expense for you. We want to make sure those are very easy to consume into your applications. If you go to gallery.azure.ai, and I'll flip out of here now, you'll see we've got a set of models: one for doing the hello world of digit recognition, and we've got YOLO v2, and I think we're working on a YOLO v3. That's "you only look once," from the clever acronym division. There are lots of capabilities here. We've got a couple of interesting models that are all ONNX models you can easily download. Why is ONNX really, really important? It's a common interchange format for these models. You can download one of these models, and if you're building a Windows application, you can use Windows ML to drag and drop the model in Visual Studio and be able to run the model inside of your app. This is something that was shown in the keynote yesterday morning, but you can take any of these and just pull them right into your app.
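Once a model is deployed behind an endpoint, calling it from an application is just a matter of serializing your input and sending a request. Here is a minimal stdlib-only sketch of that shape. Note this is purely illustrative: the endpoint URL and JSON payload layout are made up, and the Brainwave service itself speaks gRPC rather than the plain HTTP shown here:

```python
import json
import urllib.request

def build_score_request(uri, image_b64):
    """Package a single scoring call as an HTTP POST with a JSON body."""
    payload = json.dumps({"image": image_b64}).encode("utf-8")
    return urllib.request.Request(
        uri,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical endpoint; a real deployment hands you its own scoring URI.
req = build_score_request("http://localhost:8501/score", "aGVsbG8=")
print(req.get_method(), req.get_header("Content-type"))  # → POST application/json
```

Sending the request with `urllib.request.urlopen(req)` would return the scoring result; it is omitted here because there is no live endpoint behind the example.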
The other thing you can do is very easily take these and deploy them as services. If you want to deploy an object detection service, all you have to do is download one, and then, if we flip over here, this is another notebook that's running on top of the model management capabilities. The first thing I need to do is just write a score.py file, and you'll see what I do in it: I have two methods, init and run. Init is where, as the compute spins up, any prework I have to do happens. Run is the invocation of the service itself. So with just a little bit of code, you'll see that I load up the model and then invoke it. Here we're using the Microsoft Cognitive Toolkit to provide our ONNX runtime. You can also use NVIDIA TensorRT, among others; there are a few different implementations. Once I have this, all I need to do is deploy. I'm sneaking ahead a little bit and showing some of the SDK. Here what I'm doing is deploying to an Azure Container Instance. I define that I want this to use one core and a gigabyte of RAM; I'm just playing around with this and want to try it out. I specify the model that I want to deploy, the runtime, and the dependencies to pull in, and then I deploy. This will take about three or four minutes right now; we want to make that faster. In three or four minutes, you'll have an endpoint up and running that will let you score against this model. What's cool about this is you've skipped the build-and-train step. Someone else has already done that for you. You've taken the model and deployed it. Again, gallery.azure.ai/models will take you right there. I'll also put in a plug for the gallery: there's a ton of sample projects there, real-world scenarios that are deployed there. A lot of those are based on customer engagements we've had.
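The score-file pattern described above is simple enough to sketch in plain Python. This is an illustrative stand-in: the "model" here is a trivial lookup table rather than a real ONNX model, but the init/run contract, load once at startup and score per request with JSON in and JSON out, is the shape the service expects:

```python
import json

model = None  # populated once, when the service container starts

def init():
    """One-time startup work: load the model into memory."""
    global model
    # Stand-in for loading real model weights from disk.
    model = {"cat": 0.9, "dog": 0.1}

def run(raw_data):
    """Per-request scoring: parse JSON in, return JSON out."""
    data = json.loads(raw_data)
    label = data["label"]
    score = model.get(label, 0.0)
    return json.dumps({"label": label, "score": score})

init()
print(run('{"label": "cat"}'))  # → {"label": "cat", "score": 0.9}
```

Because it is just Python, anything you can do in Python works inside these two functions, which is exactly the point made in the next demo.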
If you want to build a recommendation engine, you can go find a solution. If you want to do anomaly detection on top of IoT workloads, there's a solution there. Definitely check that out. For most of those, you can get all of the source code and a lot of the data that was used to train the model. They provide a really great starting place. So now let's talk about the SDK. This is something you have asked us for: give us a really easy way to put this into notebooks and be able to use it. There are some really nice capabilities here. You can install it basically anywhere you can run Python, on top of Python 2.7 or 3.5-plus, and it will run in a Python IDE or in a notebook. There are a couple of things we're trying to do here. One is to make it really easy to provision compute targets; I'll show how to do that in the next slide. We want to make it very easy to run training jobs locally in the notebook or dispatch them to run on a large-scale cluster like Batch AI. One of the really neat features we're adding is hyperparameter tuning capabilities based on things we've done internally, and then all of those things I talked about being able to do with the service, finding run history, querying it, deploying the models, can all take place from the SDK. So let me just show what the hello world for this looks like. Again, here I'm in Azure Notebooks, and the nice thing is you can run this anywhere you can run Python. I've got a very basic machine learning model that I'm building out of one of the data sets inside scikit-learn. What I want to do here is actually a little bit of sweeping of the parameter space. I want to find the value for the learning rate that's going to give me the best performing model. And what you'll see here is I'm actually doing a set of runs.
So I'm saying that I'm going to start the run, and then the rest of this code looks exactly like what you'd get from any of the books on how to use scikit-learn. The one thing we're doing special here is logging out a specific set of metrics. Remember what I said about the ability to keep track of all the metrics: this is where you say these are the things that are interesting to me. So here I'm logging: what were the columns? What is the mean squared error? And what was the value of alpha? This is important because I can come back later and query it. I enumerate over those, and then I say the run is done. I went over 19 different parameter options. Then you'll see I can actually go and say, hey, for that run, give me all of the child runs that occurred and let me load up all of the metrics for those runs. And if you look here, what I'm saying is: please find me the run that had the lowest mean squared error. I can pick different metrics; the flexibility is that you pick the metrics you care about, whether it's an F1 score or something else. It's really up to you. I'm running a query here and saying, give me the best one. This is the run that used this value of alpha. If you don't want to do this programmatically, if you'd like to visualize it, this is just data. I can pull the run metrics in and plot them, and you'll see, yes, right about there is where I found the lowest value, which is the value I'm trying to optimize for here. So this gives you a very flexible way to treat all of your runs just as data. Then any other tricks you know how to apply to data, plotting runs to visualize them, feeding them into other selection criteria, programmatically selecting one, you can do all of that. So now let's find that best model. We're going to promote that one, which says this is a model I want to publish, and now we just have to define how I want to score that model.
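The "runs are just data" idea can be sketched without the SDK at all. Below, each child run is a plain dict of logged metrics, and "query for the best run" is just a min() over the list. The metric values are synthetic, made up so the sweep has a clear minimum:

```python
# Each child run logged its hyperparameter (alpha) and its evaluation
# metric (mean squared error); here the MSE curve bottoms out at alpha=0.3.
runs = [
    {"run_id": i, "alpha": a, "mse": (a - 0.3) ** 2 + 1.0}
    for i, a in enumerate(x / 10 for x in range(10))
]

# "Query" the run history: lowest mean squared error wins.
best = min(runs, key=lambda r: r["mse"])
print(best["run_id"], best["alpha"])  # → 3 0.3
```

Swapping the key function, say to maximize an F1 score instead, is the same one-line change the speaker describes: you pick the metric you care about.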
So you'll see here, this is very similar to what I showed on the previous slide about the ONNX model gallery. I write a simple Python file. What's nice about this is it's just Python. If you have a custom logging library, or if you want to call out to Cosmos DB to get cached values, you can write that here. It's not a black box. It's simply a Python file with an init and a run. You'll see in my init, I load that model. I run the prediction here, and then I just return the result as JSON. If you don't like JSON, that's totally okay; you can return something else as well. You can also author an environment configuration file; it's just a YAML file that describes the dependencies. Here again, this is the same code as before. This is the compute definition. I can go ahead and deploy this. We've still got a bunch of perf work to do, and this will get faster, which is why I'm not clicking through all of these in real time. At the end of this, we generate a nice client for you. I can run it, and you'll see I'm actually scoring an example right here. The other thing that's nice is I can get the scoring URI, so I can take it and put it into my application. Also, if you replace score with json in the URL, we'll return the Swagger definition for the service. Why is this really important? There's a whole toolchain that can generate clients in C# and Ruby and Go and Swift and whatever else you care about from that Swagger definition. That makes it very easy to consume the service. The other thing I want to show here is a couple of pieces that I can't run in real time. This will run anywhere that Python runs. You can open up an instance of Cloud Shell, which gives you a bunch of compute right in the Azure portal, and you can actually install this SDK and start using it. You need to have nothing on your machine; you can just browse to the Azure portal.
You don't need to deploy another VM; you can start doing everything I just showed directly from Cloud Shell. We don't think a lot of people are going to do that, but we think it's a super cool example of why this matters, because we're giving you this compute environment to use. We've added support for Azure Container Instances and Batch AI. AKS, the Kubernetes service, will be lighting up in just a little bit of time. Currently we're on top of Azure Container Service, and as that migrates to AKS, so will we. I just want to show two quick code snippets here. The first one is about creating a Batch AI cluster. What this allows me to do is specify: I'd like to create a Batch AI cluster, I want it to auto-scale, and I'd like to start with one node so I don't have to pay a whole lot. This one's really simple; it can go up to two nodes. If you want to go up to 200 nodes, that's very easy to do. The other thing you can specify is a low-priority cluster. What that means is the VMs aren't guaranteed to live, but the bonus to you is that they cost about 80 percent less. So you can specify that. We see a lot of customers who are very interested in doing that for large-scale nightly batch jobs. They'll try the job on the low-priority compute, and if it doesn't get done, then they'll go ahead and deploy it and pay full price. That's a really nice way to optimize your spend. I've shown the Azure Container Instance sample a couple of times now, so I'm not going to dive into it. So let's talk about hyperparameter tuning. This is something Saul from ASOS talked about: as part of the task, you want to explore the space. You want to find the right combination and configuration that will produce the right model. This is something inside the SDK; we've got an API called HyperDrive. You can see that right up here.
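The low-priority strategy mentioned a moment ago, try cheap first and rerun at full price only if the job is evicted, is worth a quick sanity check. With roughly 80 percent savings, it stays cheaper than always paying full price unless most jobs get preempted. This is a rough model with made-up normalized prices; real rates vary by VM series and region:

```python
FULL = 1.00  # normalized full-price cost of one nightly batch job
LOW = 0.20   # low-priority cost at roughly 80 percent off

def expected_cost(preemption_rate):
    """Try low-priority first; rerun at full price if the job is evicted."""
    return LOW + preemption_rate * FULL

# Even if half of all jobs are preempted, the strategy beats full price.
print(expected_cost(0.5))  # → 0.7
```

The break-even point in this toy model is a preemption rate of 0.8, which is why the approach works well for jobs that can tolerate an occasional rerun.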
What this allows you to do is define the parameters you want to search over and define how you want to do that search. I can do a grid-based search or a random search. And this is incredibly important as you start doing hyperparameter tuning at scale, because you don't want to waste compute: if you can detect early on that the model you're building is not going to be one of the best ones, you can terminate that run. This exact technology is used by the Bing team, by the speech team, and by a number of teams internally. You can see on this graph here, a very simple one, there's a set of these runs, like this green one and this orange one and this red one, that all got killed early. While this may not seem like much for something that's only got five or six runs, what we've seen when we run this at scale is that we can typically reduce the amount of compute consumed by about 50 percent. That gives you much more efficient utilization of all of the hardware, and it helps you find the best model and explore a much larger space at the same cost as a naive hyperparameter tuning approach. So, next thing to talk about: IoT deployments. At Connect, we unveiled the capability to take Azure Machine Learning trained models and deploy them to IoT devices. One big thing to talk about here is that in partnership with Qualcomm we announced a developer kit. This is a really cool camera with hardware acceleration on it, and you can build and deploy models for it with Azure Machine Learning. The number of vision scenarios our customers are coming and talking to us about, it's almost every conversation we're having. What this gives you is a very nice way to target the Qualcomm hardware, which is going to be running in lots and lots of places, and Azure Machine Learning is the way to build and deploy those models.
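Going back to the early-termination idea from hyperparameter tuning for a moment, one common policy for it is a median stopping rule: at a checkpoint, any run whose metric is worse than the median of its peers gets killed. The sketch below is illustrative only, with made-up loss curves, and is one possible policy rather than the specific logic HyperDrive uses:

```python
# Toy early-termination sweep: each "run" reports a loss per epoch, and a run
# is killed once its loss falls behind the median of its peers at that epoch.

def median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def sweep(loss_curves, check_epoch):
    """Return the run ids still alive after the early-termination check."""
    at_check = {rid: curve[check_epoch] for rid, curve in loss_curves.items()}
    cutoff = median(at_check.values())
    # Lower loss is better; runs worse than the median get terminated.
    return sorted(rid for rid, loss in at_check.items() if loss <= cutoff)

curves = {
    "run-a": [0.9, 0.5, 0.3],
    "run-b": [0.9, 0.8, 0.7],  # lagging at the checkpoint: terminated
    "run-c": [0.9, 0.4, 0.2],
}
print(sweep(curves, check_epoch=1))  # → ['run-a', 'run-c']
```

Every terminated run frees its compute for new candidate configurations, which is how the roughly 50 percent savings the speaker mentions translates into exploring a larger space for the same budget.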
You can sign up for early access to the dev kit today. Other than that, we haven't made any significant changes to our IoT deployment story, but if you're interested, you can go to the AI Toolkit for Azure IoT Edge. This is a set of sample projects and everything you need to get started. This is something that a number of customers are running. Sometimes when you hear IoT, you think things, you think tractors, but you also see a number of interesting applications in retail or other places where you want to deploy. The edge isn't just a sensor; it can look like a computer. A lot of times you want to deploy somewhere that doesn't have a full IT infrastructure, and you can very easily target deploying the models there to process register receipts or to process information coming in from cameras, things like that. So we are on the homestretch with about ten minutes to go. What's going to happen next? There are a couple of other things in addition to the features we talked about. We're going to be rolling out globally. I think we're up to more than 50 regions worldwide, and we're going to continue to deploy across those regions. Soon we'll be bringing the service to general availability, and we're adding all of those features I just showed before we get to GA. So what should you do now? Right now you can try out the hardware-accelerated models: fill out the survey form. You can try out the ML packages; those are available today for vision, text, and forecasting. You can use the ONNX model gallery: find any of those interesting models you want, deploy them into a Windows application, or deploy them as a service. The capabilities I showed after that, the SDK functionality, are currently in preview. We've created an insiders program.
If you fill out this form, a brief survey, that's what we're using to bring folks into the private preview to start using these. The thing we ask in return for giving you access to the bits is your feedback. As you can see, the changes we're making are directly in response to customers like you, so that feedback is super, super critical to the team. If you want to get started with Azure ML, just go to azure.com/ai. This is the entry point to all the interesting AI capabilities we have. Check out the gallery. If you're interested in the camera, sign up for the dev kit. With that, I am very, very grateful. Thank you very much to the folks from ASOS who were here; I very much appreciate having you here and talking about your story. I'd like to thank all of you again for coming. We'll be around afterwards for questions, so if you have any, just come on up. Thank you very, very much, and have a fantastic rest of Build.