March 30, 2020
Andrew Ng Teaches AI Strategy for the Enterprise (CxOTalk #365)

Today on CXOTalk, we are speaking with truly
one of the foremost AI computer scientists and educators in the world. Andrew Ng, tell us about the work that you
do and give us a flavor of your background. There’s a lot of hype, a lot of excitement,
a lot of promise but sometimes overpromises about AI. I’m excited about helping companies navigate
this really complicated environment and figure out how it can actually help your business. I hope, over this conversation, we can dive
into that. I draw a lot on my experience starting and
leading the Google Brain team, which helped Google become good at AI, and also leading
AI at Baidu, which arguably made it China’s greatest AI company. I think lessons from those organizations and
many others that we work with are useful for a lot of companies trying to figure out this
very disruptive AI technology. You also were the founder of Coursera and
you are a professor at Stanford. I was the co-founder of Coursera. I have taught machine learning and AI to about three million people worldwide; more than anyone else on the planet at this point, I think. I also continue to teach at Stanford. I’m an adjunct professor. You have all of this experience. I guess the first question that we have, the
place that we have to start for businesspeople is AI. You mentioned the term “hype” and there is
all this hype. What is it exactly? There are a lot of tools that AI people like
me have. But it turns out that 99% of the recent wave of economic value driven by AI is through one idea. The technical term is “supervised learning.” All that means is an AI that is very good at figuring out input-to-output, or A-to-B, mappings, such as: input an email and output whether it is spam (0 or 1). That’s your spam filter. The recent rise of voice conversational agents,
that’s based on speech recognition, which is a type of AI that inputs an audio clip
and outputs the text transcript. The most lucrative application of this is
probably online advertising, where all the large ad platforms have AI that inputs an ad and outputs whether you will click on that ad. This is driving tremendous amounts of revenue. I think a big challenge that faces us across multiple industries today is how to adapt this amazing technology that’s just
computing input-to-output mappings into valuable business use cases, not just in software, the industry where a lot of this started, but in every industry. To what extent do businesspeople need to
understand the nature of those input/output mappings that you were just describing? Not everyone needs to become an expert in
AI, but it is a general-purpose technology which is disrupting every industry. Sometimes, my friends and I have challenged
each other to name an industry that AI will not disrupt in the next decade and actually
had a hard time coming up with one. My favorite example was the hairdressing industry. I thought, well, can we really replace my
hairdresser with an AI? Although, interestingly, once I said this
on stage, one of my friends who is a robotics professor was in the audience and she heard
me say this. After the talk, she stood up and she pointed
at my head. She said, “Andrew, for most people’s haircuts
I can’t build a robot to cut their hair but, for your hairstyle, a robot could totally
do that,” so maybe even the hairdressing industry. AI, like electricity, is a general-purpose
technology that’s transforming every industry. Today, we can’t imagine any industry running
without electricity. It was really difficult to figure out how
companies should embrace electricity and we’re still going through that in the case of AI. When you talk about the mappings, the inputs,
and the outputs, how is this different from established technologies? In other words, what are the unique characteristics
that make AI so disruptive across so many different types of industries? This idea of learning input to output mapping
has been around for many decades. Some would argue maybe even over a century. But this has started working incredibly well
just in the last few years thanks to the rise of a technology called deep learning or neural
networks. A decade ago, we couldn’t build speech recognition
systems that were that accurate. You would feed in audio and it would have
all sorts of transcription errors. But because we now have a lot of data, a lot
more, say, audio data and transcripts, when you feed these huge amounts of data into neural
networks, also called deep learning, we now have very accurate speech recognition systems. This is powering the rise of mobile voice search for the Web search engines as well as smart speakers like Amazon Echo, Google Home, Apple’s Siri, and many others. What we’re seeing is that, in a lot of industries, when you find the right business use cases, you’re now able to build these input/output
mappings much more accurately than what was possible just a few years ago. For the right business use cases, this means
it can be valuable in a way that wasn’t possible. In manufacturing, where AI does a lot of the work, we do work in automated visual inspection. Say you run a factory and you manufacture smartphones. Rather than needing many people to use their eyes to inspect each smartphone for quality, checking if there’s a scratch, it is now possible to have an AI take a picture of a smartphone and very accurately detect whether or not the smartphone has a scratch, a dent, or some other defect in it. Visual inspection is one of maybe a few surprising applications where AI is making inroads into manufacturing. What about adoption? As you talk about different industries and
making inroads, that implies the adoption of AI, AI techniques, data, and a host of
other related issues. Talk about adoption in the enterprise. How do we go about it? AI has transformed the software Internet industry. If you look at companies like Google, Baidu,
Microsoft, Facebook, Amazon, and even the not-quite-so-large software and tech companies, most of them, at this point, get AI. While there’s still room for improvement for
even the largest tech companies, on average they’re using AI very effectively. I think the next wave for AI will be for it
to transform all of the other industries, everything from manufacturing, agriculture,
transportation, logistics, travel, and healthcare. I think the industries outside the software
Internet still have much more headroom to grow. A lot of these industries have already been
pursuing digitalization. Take healthcare. Today, your health records are much more likely
to be in the form of an electronic health record, like a digital image for your x-ray
rather than an x-ray film. A lot of these industries have data but the
process of figuring out what are the valuable use cases for AI, how to hire the talent to
build the projects, and then also how to deploy this is still being figured out in these industries. Over the last couple of years, I think we’ve
had a lot of companies run a lot of pilot projects. Many pilot projects did not make it into production
because it is difficult to have the right judgment about how to select the most valuable project to work on. In fact, here in Silicon Valley, I see a lot
of startups running proof of concept after proof of concept. I think some people use the term POC to describe
that. I think we, as an industry, still need
to get better at both identifying the valuable proof of concepts and then also making sure
we take these things to production. Software, for instance, has been ahead of
the game in this, but the other industries are still in the process of figuring it out. Hopefully, Landing.ai and I can play a big role in helping these industries figure it out as well. What should a business leader do who is looking
at this and looking at their own company? What do we do? Do we hire a bunch of data scientists? How do we even start in the right way? AI is disruptive enough that I advise most
business leaders to start now if you haven’t already. I think one piece of advice is it’s better
to start small than to start too large. I’ve seen more companies fail by starting
too big than fail by starting too small. In fact, if a CEO or other leader sets a grand
vision that ultimately doesn’t succeed, sometimes that very large failure sets a company back
by a couple of years. This is not a good time to be set back by a couple of years because the whole company loses faith in AI. I think it’s better for most companies to
start small, to deliver a quick win, and then to build momentum from there. Take one example. When I was leading the Google Brain team,
at that time there was skepticism even within Google about deep learning. There was skepticism in the whole world because it was still taking off. My first internal customer at Google was the Google speech recognition team. Speech recognition at Google was a nice project. It’s an important project, but it’s not Web
search or advertising. By doing a smaller project to make the Google
speech recognition system more accurate, that then helped other teams gain faith in this
modern AI. My second internal customer, building on the
momentum from speech, was the Google Maps team where we used Optical Character Recognition
to read house numbers from Google Street View imagery to more accurately locate
houses on Google Maps. Only after those couple initial successes
did I then start a more serious conversation with the Google advertising team. I think the number one piece of advice to
a lot of leaders is to start small and then to try to deliver a quick win, maybe in 6
to 12 months. Then use that to teach the organization how
AI works, what are the valuable use cases, and then to do bigger and bigger projects
over time. How does a business leader even know what’s
the right kind of project? This is not like business process software,
right? This can actually disrupt your business and
so to choose the right project requires a core baseline of knowledge, even about the
technology. First, I’d say, building AI projects is hard. What I usually recommend to companies is to
brainstorm a list of projects. Today, because of organizations like DeepLearning.ai and Coursera, there are a lot of people with a good level of technical knowledge about
machine learning. I bet most large enterprises, maybe every
large enterprise, will have people taking machine learning and deep learning courses
with DeepLearning.ai or with Coursera. There’s probably some technical talent in
most large organizations by now. You can engage that technical talent to brainstorm a few use cases. With Landing.ai, when I work with a company,
I usually recommend brainstorming at least half a dozen projects. Then, for each of them, we spend sometimes
a few weeks doing both technical diligence, to convince ourselves that the project is feasible from a technical point of view, as well as business diligence, where we like to model out the top-line, bottom-line, or other key metric impacts of this project. Only after we’ve convinced ourselves that
this is a valuable project do we then commit the resources to then spend potentially several
months or maybe many months to execute on the project. I think, for large enterprises, if you get together with your technical teams to brainstorm, don’t brainstorm just one project. One pro tip: I’ve found that very often the
number one project that the CEO gets excited about, that’s often not the right project
to invest in. If you step back, brainstorm a list of projects,
do technical diligence and business diligence, then after a few weeks you can hopefully pick
one or two valuable pilot projects to commit to and then go from there. You can get going with a small machine learning or data science team, maybe a handful of people, say five. Sometimes that teaches you the early lessons
you need to then build a bigger team. Another story I still remember at Google was
our first GPU server. It was just some server under some guy’s desk. Having a small GPU server with a handful of
GPUs in it, that was important for teaching us and teaching Google how to work with GPUs
and how to share GPUs. That one server taught us so many lessons
that were then useful for the later buildout of these capabilities. You are a technologist by background. Why did you decide to start a company that’s
focused on adoption rather than pure technology? I think both are important. I’m excited about the technology; the technology is considerably advanced, and I was just at the NeurIPS conference last week listening to the latest breakthroughs, of which there are many I’m excited about. But in order for AI to become more widespread, I think we need to take the technology that’s already invented, a lot of which is open-source, and do the work that’s needed to adapt it to different industries. For AI to reach its full potential, I think
many industries still have not figured out how to scope projects, how to resource projects,
how to manage projects. A lot of companies underestimate the change
management aspects of how to roll out this very disruptive technology. AI is automation on steroids. When we take one task out of someone’s job
and we have AI automate it, it could help that person become much more productive. It could also make them feel threatened about their job in some cases. I think AI is so disruptive that it really
gets at the heart of many industry verticals. I think CEOs, CXOs should be asking themselves
questions like, “When AI becomes pervasive, how does it disrupt your industry? What are the new activities that create value? What are the new defensible activities, and
what is your theory of value creation?” To make an analogy with the rise of the Internet,
which is another very disruptive technology, I think a taxi company plus a website was
not an Internet company. I say this with a lot of respect to taxicab
companies building websites. Uber, Lyft, Didi, Grab, those were the Internet
companies that figured out how the Internet changes the core activities of what transportation
means. AI is equally disruptive. In all of these industries from manufacturing
to retail to agriculture to healthcare to many others, it will change the core of what
it means to be a leading company in all of these industry verticals. I think now is the time for a CEO or for an
executive team to figure this out. One of the biggest challenges with technology
is it’s affecting a lot of industries. It is causing more industries to have winner-take-all, winner-take-most dynamics. Once upon a time, you could live a pretty happy life being a small farmer, a small crop farmer or a small chicken farmer somewhere. What has happened with the rise of technology
is that centralized organizations, technology providers, everything from Tyson Chicken,
John Deere, Rotech Co., or Monsanto create technology and can aggregate data from across the country using the Internet and IoT, process the data, and distribute insights and technology back to all of these farmers. Even in farming, we’re seeing a centralization
of power and this is a trend that’s been underway for many years and it’s accelerating in every
industry, which places a lot of pressure on CEOs to figure it out. We have a question from Twitter from Sal Rasa. What about the change management dimension? If we think about change management historically
with enterprise software, it was mostly about doing your job differently because the process
is different. But with AI, there’s also business model change
and other types of more profoundly disruptive changes. How does change management evolve in this
world? I think that there are two types of AI technology. One type looks at the existing workflow and
automates part of it to make the system more efficient. Certainly, in manufacturing, rather than needing
an inspector to spend so much time inspecting scratched smartphones or other things, maybe
that one task could be automated. There’s a system that needs to be built around
this newly automated task. When an AI flags a smartphone that’s defective,
what happens next? Do you get a human inspector to reinspect
it? Do you send it for reworking? Do you discard it right away? When you inject an AI component into a workflow, what are the inputs, and who is supposed to look at the AI system’s decisions and do something
about it? That’s change management. To take another example, one of my teams has rolled out a pilot system in a Stanford hospital that examines EHRs, electronic health records, to try to recommend patients for consideration for palliative care, for end
of life care. If we think that the patient has a high risk
of mortality over the next 3 to 12 months, then we surface their electronic health record
to a palliative care doctor who may decide to look at it. The doctor decides whether or not to reach
out to that patient’s physician to recommend consideration for palliative care. When an EHR, electronic health record, system
starts generating these recommendations, who in the hospital is supposed to look at these
recommendations? How does this impact the patient? How do you make sure doctors ultimately are
in control? What are the ethics around recommending someone
for palliative care? How do you make sure the doctors have the time and space to even look at these things, and how does this fit into the daily workflow? These are some of the very complicated change
management issues we have to address when designing and rolling out AI solutions. There is so much hype out there. Pretty much every technology company in the world these days says, “We are AI,” and they position themselves as an AI company as opposed to
whatever process it is that they automate. What should businesspeople do to see through
that hype? A couple of years ago, I feel like a lot of
companies would slap an AI sticker on a lot of products and assume that that makes the
product more attractive. I think we’re past that now for which I’m
very glad. Actually, a few years ago, I would see companies
slap an AI sticker on the whole company, and that alone would affect the market cap of the whole company. I think we’re actually getting past that,
which is a great sign. Today, I think technology in isolation is
completely useless. Most days, I don’t actually care if something
is AI or is not AI, but I tend to look much more at the business fundamentals, so what
are the inputs and outputs? How does this actually move business metrics
that you care about? Use that to assess the quality of an AI project. I think credibility does matter too. Today, in Silicon Valley, from where I’m speaking
to you, almost every company says they use AI. I think, to survive, almost every company
will need to figure out their AI story, but there’s still a huge difference between the
teams that are truly skilled in AI and have a track record of knowing how to figure this
out and deploy it versus teams that are still in the process of learning it. I know that it is difficult for a non-AI expert
to judge who has the real stuff and who is still using AI in a very basic way. I think track record and credibility are one
way to judge. Then the other is to get the technical team
really to think deeply about the actual business impact of the technology rather than get too
excited just about the technology. Does the technology for a businessperson really
matter? The technology now works much better than
ever before and so it does matter. For example, if you look at online advertising,
and I think it’s not the most inspiring application of AI but certainly incredibly lucrative. AI teams that are more experienced actually
drive small, relevant ads to show users and that increases click-through rates. That has a very direct impact on the bottom
line of the large, online ad platforms. Speech recognition, the ability to recognize
speech more accurately is absolutely critical for a user’s willingness to adopt these systems. In a manufacturing context, we see that the accuracy of screening defective parts, with smartphones and other products, from non-defective ones has a direct impact on what needs to be thrown away, what needs to be reworked, and what the human inspector needs to reinspect. Maybe for some businesses, you can get by
with okay AI. I think, for a lot of businesses, the quality
of the AI in terms of it giving accurate judgments is absolutely critical to the business performance. It sounds like your focus is first and foremost
on what are the business outcomes that we are going to achieve and how will this technology
enable us to achieve those outcomes as opposed to, “Oh, this is an interesting technology.” I like to figure out the outcomes and work
backward and figure out the technology to help us get there. At Landing.ai, we did have a few potential
customers approach us with basically a pitch. They were basically asking, “Hey, Andrew, can
you help us slap an AI sticker on our company?” That’s it. The answer to that is always no. We don’t want to just help people paint AI
on their business unless there’s something real there. It could work; in some stock markets around the world, slapping an AI sticker still does impact valuation. But if that’s the only impact, I’m just not interested, because I think AI does have a strong impact
on the fundamentals of many businesses and can help a lot of people drive a lot of global
economic growth. But it takes deep thinking to marry AI with
all of these different industries. For what it’s worth, it took us a long time
to figure out a lot of details on how to use machine learning and AI in the software Internet
industry. It was hard figuring out how to do speech
recognition, how to do computer vision. Few of us realized what a big hit Optical Character Recognition would be until several years in, so it was really difficult to customize
AI to the software Internet industry. I think it will be equally difficult to customize
AI to all of these different industries, but that’s the work that I think we need to do
over the next many years. I have a question. Maybe we could call this the advanced level
of change management. I think, for many business leaders, it is
difficult to envision what the technology can do for their business. You were describing at Google with OCR and
trying to envision the implications of that. How can businesspeople adopt the technology
having that broader, more expansive view when their organizations, at the same time, are
just struggling or being successful in producing whatever they produce today? In the case of the Google OCR project, I’m
going to try to give credit to the right person. I may get this wrong, but I personally heard
about the idea first from an engineer called Yuval Netzer. It totally was not my idea. Yuval, at the time, was working with the Google
Maps Street View team and she came to me and said, “Hey, Andrew. Work on OCR. I see the potential for deep learning.” I think, in this case, the idea didn’t come
from me. It came from someone who was learning about deep learning, as we all were figuring out deep learning, but who was close enough to the
product to have the business insight to say, “Hey, Andrew. Can we work together?” I think a lot of people contributed over the
many years, obviously. I think one of the interesting things is that
when you give business leaders or product leaders a few insights about AI, often they
will come up with great ideas about where they can apply it in their business. One thing I did several months ago was to
release a course titled “AI for Everyone” with the goal of teaching a nontechnical audience enough about AI that they can start to think through business use cases for whatever industry they’re in. It was one of the top-performing courses by
DeepLearning.ai and Coursera this past year. I ran into a company a few months ago which
told me that they were thinking of trying to get all 2,000 of their employees to take
“AI for Everyone.” I don’t know if that’s overkill, but I thought,
“Wow, this is incredible,” because you get your team, certainly your executive team because that’s really important, but then also others at the management level or individual technologists, to learn a bit about AI. Then that unleashes a lot of creativity as
people that are close to your business can figure out where to apply supervised learning
or other AI technologies. I interviewed on this show the CEO of Nokia. He realized the importance of machine learning
and so he went back and became a developer, learned, and created a course. Then they asked every person inside the company,
every engineer inside the company, to take this course. Education seems to be a fundamentally important
part of this advanced change or disruption if a business leader wants to use AI to improve
their organization and their business outcomes. Yes. A few years ago, I remember attending a meeting
with mainly Fortune 500 or very large company CEOs. I remember the coolest people in the room
were the CEOs that had figured out their Internet strategy because everyone else was looking
to them to ask, “How did you apply the Internet to this retail business or that distribution business?” and so on. We’re actually starting to see this already. I think, a few years from now, at similar
CEO meetings, the coolest people in the room will be the men and women that figured out
their AI strategy a little bit earlier and everyone will be looking to them to ask, “What
are the insights you saw for my business as well?” Companies have figured out their Internet strategy, and today almost everyone in a corporate environment needs to have basic knowledge about the Internet. Because AI is so disruptive, I think we’re definitely getting there in
the case of AI as well. What about the ethical dimensions? What are some of the ethical considerations
and what is it about AI that leads to a discussion of ethics more so than with other technologies? Over the last two years, we have seen a techlash
and erosion of trust in technology as a force for good. If you look at the Edelman Trust Barometer,
for example; and I think the Financial Times last year named “techlash” a word of the year. I may have the details wrong, but there’s
this erosion of trust in technology. If we look across technology, I think the shiniest, most disruptive technological force today is AI. There are other things as well, but it’s really
leading the charge in software technology, certainly, in terms of making new things possible
that were not possible before. The AI community is a very powerful community. I think our decisions, the decisions of the
community, the efficacy committee can be used to build addictive digital products that waste
a lot of people’s time to make them see more ads or something, or we can go to industries
and help make industries more efficient, drive economic growth, and lift up people’s wages. We can use AI to provide better educational
opportunities. I think these are all questions that our community
is getting better at addressing. The other ethical challenge that I think is
raised by AI is the concentration of power. I alluded briefly to how the Internet and AI act as an accelerating force for industry after industry, not just software. I think this is creating a concentration of power, because the Internet allows a centralized player to pull data together and AI makes it possible to process that data in more efficient ways than ever possible before. We’re turning a lot of industries, infecting them with the Internet dynamics
or AI dynamics of winner-take-most or winner-take-all types of effects. I think even though we’re creating tremendous
wealth, I think we need to have a serious conversation to make sure that the tremendous
wealth we’re creating is shared in a fair way. Finally, there are issues of bias, the auto-generation of fake news, and the potential of AI to split human society apart and undermine democracy. I think there are some uses of AI that I do
not support. I find that we still have important work ahead
of us, which I am working on, which many of us are working on, to try to make sure that our community, the AI community, behaves with the highest ethical standards and, frankly, that the whole tech world regains the trust, some of which we have lost over
the last couple of years. Can you weave in the notion of data and the
centralization of data that you were describing and how that links to the centralizing of
power and the unequal concentrations of wealth that you just alluded to? I feel like the lives of farmers here in the
United States are getting tougher, and there are many, many reasons for that. One of the trends is that it’s easier than ever before for a centralized technology player to use IoT to aggregate weather and soil data from all around the country, process it into insights, push those insights
back out to farmers all around the country and then to capture a larger and larger fraction
of the excess value that’s created through these types of optimizations because farmers
now really need these technologies, really need this data. The pendulum of market power is swinging to centralized players. I see this centralization of power even at the individual corporate level. Take a retailer. Imagine a large brick-and-mortar retailer. Once, there was a balance of what decisions
were made in the centralized headquarters and what decisions were made at the distributed
retail stores you have all around the country. For example, if you have a retail store out
in Florida, they would have better local knowledge of when it is hot in Florida and when it is cold. Maybe we shouldn’t sell winter jackets in
Florida. Whereas someone in headquarters may say, “Oh,
now it’s winter. Let’s ship winter jackets everywhere.” Still, there was a mix of local knowledge
that the local person had versus the centralized knowledge at headquarters. With the rise of the Internet and IoT, when
something is sold in Florida, headquarters can find out about that, maybe in less than
a second or hundreds of milliseconds later. Headquarters can now aggregate all the data
from all around the country, process it, and push insights back out to the distributed
retail stores in every state around the country. Even at the individual corporate level, I’m
seeing the pendulum swing, not 100%, but increasingly, toward centralized decision making, because the Internet and AI enable the centralization of data and very efficient processing of that data. When this happens to many industries, everything
from manufacturing, agriculture, transportation, logistics, healthcare, this creates a potential
for there to be a smaller number of incredibly well-off players with tremendous market power. What are the impacts on jobs, on making sure
the wealth we’ve created is fairly shared? I think all of these are very important questions
that I struggle to answer. I think we need society, government, public/private
partnerships, companies to figure out what’s the best way forward. Some of these are very important policy issues
that will certainly play an increasingly large role over the coming years. At the same time, from an individual corporate
standpoint, would it be correct that those companies who are able to develop the vision,
the understanding of the technology, and the power of aggregating and centralizing data, as you described, those companies who see that today are very likely going to be among the group that is most successful tomorrow? Yes. I believe one of the challenges for a lot
of executive teams is that AI is one of several forces adding to the concentration of power
in these types of winner-take-all dynamics in multiple industries. These winner-take-all dynamics, they do have
a positive feedback loop. If you are ahead in technology, maybe you
have a better product, so having a better product means you have more users. Maybe more farmers use your products and so
you get more data. Having more data means you can improve your
product quality even further, because having more data improves your AI, and so even more
people use your product. There are these positive feedback loops that
allow a company that’s a little bit ahead to race ahead even faster. One of the challenges for executive teams
is, if you wait a few years, it may be too late because, over the next two years, say,
a competitor steals the market on you, then those winner-take-all dynamics may mean that,
over the next decade or two, they will just become dominant, absent outside
forces such as government intervention or other things that may happen. I think this is why I see a lot of C-suites
feeling a lot of urgency to act now. It’s quite possible
some companies will look back two years from now and say they really missed the boat. If you miss the boat now and don’t make the
right transition choices now, even while there’s some uncertainty about the right business
use cases, by the time it’s obvious it will probably be too late for some companies. In addressing this set of issues for business
leaders, which is more important: understanding the technology implications, what is possible,
and the role of data; or understanding potential future business models? For a lot of businesses, almost every business
I speak with, certainly, actually, for every single large company I’ve spoken with and
spent time to dive deep into, there are use cases for AI. What I see in a lot of companies is you probably
have a lot of digital data. Maybe things used to be written on pieces
of paper ten years ago. A lot of our supply chain and logistics were
written on pieces of paper maybe 20 years ago. We’ve made a lot of that digital. A lot of companies already have data. Having an AI team, a machine learning team,
or data science team process that data can already generate actionable business insights
right now for a lot of the companies. I think, getting that flywheel going, starting
to build that team now, figuring out the use cases now is important to help the company
as you learn about what AI is. Then, as the company becomes more mature, you can figure out what
this AI-powered future looks like. I have a document that I published some months
back called “The AI Transformation Playbook.” If you search online and read “The AI Transformation
Playbook,” it lists the set of steps that I recommend for companies to figure out their
AI-enabled future. One of the surprising things about “The AI
Transformation Playbook,” feedback I got from multiple CEOs was, I list step one as executing
pilot projects: start small, get something going. Steps two and three as building a team and
providing training. Only step four as figuring out your AI strategy. A few CEOs read that and gave me
feedback to say, “Look, Andrew. Why is figuring out my strategy step four? I want to figure out the strategy. I’ll go to my board and get it funded, get
it authorized, and we can execute.” I actually pushed back. I said for a lot of companies that don’t know
enough about AI, if you try to figure out your strategy as step one, you end up with
a very academic strategy or you end up with a strategy that feels like it’s copy/pasted
from the newspaper headlines. Like I read in the newspaper that data is
important, so my strategy is to collect a lot of data. That usually doesn’t work. I actually recommend to companies to start
small, gain momentum, and only after your company knows better what building AI feels
like, then you will be in a much more thoughtful place, a much better place to draft a thoughtful
strategy and a thoughtful vision for how AI will change your industry and how it changes
where to play, where not to play, what creates value, and what’s … [indiscernible, 00:39:17]. We have another question from Twitter. How can leadership determine the AI capability
maturity of their own organization so they can choose the right product and improve their
AI maturity over time? I have seen, unfortunately, CEOs make mistakes
in hiring the wrong technology leader because they weren’t qualified to evaluate that hire. If you can find trusted partners, people that
have experience doing this, sometimes they can come in and help you benchmark the quality
of the internal efforts. I find it difficult. If you can find the right leader with a credible
track record, such as a chief AI officer, to build up the team, things could go well. Unless you’re able to find a partner with
that knowledge, it is difficult to benchmark. We always say, find a good partner with knowledge
and track record that could help you take a look at what’s going on in AI and help you
shape it. I know that you have started several companies
and you’re trying to create an AI ecosystem. I’m really interested. What are you doing there and why? The world now has access to this very disruptive,
very transformative AI technology. For it to reach its full potential, multiple
things need to be done. Landing.ai is our enterprise-facing
organization that helps many large enterprises figure out AI adoption. We started with a manufacturing focus, also
doing a lot of work in agriculture and increasingly in healthcare. DeepLearning.ai is our educational arm. DeepLearning.ai does a lot of work with Coursera,
but we’re trying to build education and community to help individuals figure out AI. We teach deep learning on Coursera. “AI for Everyone” is offered by DeepLearning.ai,
which is to help people break into AI through education and community building. Then AI Fund is a startup studio that builds
new companies from scratch. I found when I was leading Baidu’s AI group
that the most fun part of my job was trying to systematically create new businesses using
AI, and AI Fund continues that work. We have more ideas for exciting companies
to build and people to work on them. We systematically build companies and then
capitalize them and send them on their way. I think these are three of the key pieces:
helping large enterprises, building new companies, and then education needed in order to move
the world forward and create value for everyone. Hopefully, we take everyone with us as we create
a lot of value using AI. It sounds like you have an underlying vision
that’s not just about technology. It’s not just about making money. What’s the unifying factor that kind of keeps
this going? AI is a very disruptive technology. An AI-powered future will be very different
than the world we live in today. I think that we could relieve a lot of humanity
from repetitive, routine work. We could drive tremendous economic growth. McKinsey estimated $13 trillion worth of global
economic growth, global GDP creation. This is a big impact on people’s jobs and
livelihoods in some cases, and so I think we need a better educational system as well
to make sure that people can gain the skills they need to always have meaningful work to
do. I think that with all of us working together,
I hope we can build this exciting AI-powered world where humanity would be much more powerful. Much as the Industrial Revolution automated
away a lot of manual labor, I think AI is the next wave of automation. How to build that future, how to do it in
an ethical way, and how to make sure we don’t just create wealth for Silicon Valley and
Beijing and a few cities, but make everyone better off. I think these are all difficult questions
that we’re wrestling with but that we must solve together. A couple of other very quick questions. One is a question from Kanupriya Agarwal who
says, “How does one get in touch with you for collaboration?” The website, Landing.ai, and our various websites
have contact forms, so it’s easy for you to get in touch. Advice that you have, kind of summed up advice
that you have for business leaders that are listening to this. Tactically, I would say, get going if you
haven’t already and start small. There are many more details in both “AI for
Everyone,” a course by DeepLearning.ai hosted on Coursera, and then “The AI Transformation
Playbook.” I would say, start small and then execute
quick pilot projects, gain knowledge, and then consolidate and keep building to bigger
and bigger projects. We did this in the software Internet industry
and it was hard. If it feels hard for your industry, let me
just tell you it was hard in the software Internet industry as well. I think it will be hard, but that work will
be very meaningful in terms of us working to help transform and make multiple industries
more efficient. What advice do you have for government policymakers
in regard to AI and innovation? I was really excited when one country told
me that they’re trying to get the entire presidential cabinet to take “AI for Everyone.” I think, number one, get educated on AI. I see different countries’ leaderships have
very different levels of knowledge about AI. It is really complicated but having a basic
knowledge will enable countries to draft much more thoughtful policies. Then I think we do need regulations to enable
and empower new AI applications that come to market while also protecting consumers
and protecting safety and privacy. The government needs to play a huge role both
to protect individuals as well as to help create a much better future than is possible
today. We’ve been speaking with Andrew Ng. He is the founder and CEO of Landing.ai, along
with a whole bunch of other companies. You need to research him. Andrew, thank you for taking time to be with
us today. Thanks a lot, Michael. This is an important topic that … [indiscernible,
00:45:41] around AI, so I’m really glad we had a chance to chat. Everybody, thank you for watching. Before you go, please subscribe on YouTube
and hit the subscribe button at the top of our website. We’ll send you awesome information about upcoming
shows. Thanks so much, everybody. I hope you have a great day and we’ll see
you again next time.
