Opening Keynote (GDD India ’17)

PANKAJ GUPTA: Good
morning, everyone. I am Pankaj. I work in Google’s office
right here in Bangalore. I lead our engineering
teams in India and Singapore for a very important initiative
called Next Billion Users. I’m really excited to
be here with you today at Google Developer
Days, because I myself am a developer. I have used many Google
APIs over the years. This is the first
GDD ever in India, and this is the largest
developer event that Google has held in India ever. [CHEERING] But first, I want to talk to
you about why and what Google is doing in India and
in emerging markets all around the world. Today, Google has seven products
with more than a billion users each around the world. But we know that
our next billion users will come from
different parts of the world than our first billion. Our next billion
users will not come from the US, or Canada, or Germany. They will come from
India, Indonesia, Brazil, Nigeria, and similar countries
all around the world. In India alone, we expect
650 million Indians to come online by 2020. That’s why, a couple
of years back, we started the Next Billion
Users initiative at Google, as we noticed that the future
users of Google’s products are going to be different
than our current users. Perhaps the most
important difference is that they’re going to be
mobile first and, in fact, largely mobile only. The smartphone is
their first computer. It is the best camera they’ll
probably ever own, and also probably the first video
device they have ever carried around. Let’s take a look at users in
India, Indonesia, and Brazil and compare them to the US. The users in these countries
are incredibly young. I mean, just look at
all the attendees here. They’re urbanizing
fast, and they’re aspirational with disposable
incomes growing rapidly. They are savvy. A large fraction of them have
prepaid plans and multiple SIMs to get the best
voice and data plans. They are unique. They have a strong sense of
their own identity and culture that is different from
the rest of the world. And yet, we at Google
believe that they have the same fundamental needs. When they get online, they
want to talk to friends. They want to be entertained. They want to understand
the world around them. They want information to
make their lives better. If you saw search
queries from Mumbai, they’re not that different
from search queries from New York City. If you look at search
queries from Mumbai, you might find queries
like, what time does this train leave? Where is the nearest doctor? When’s the new movie with
Deepika Padukone coming out? But right now, they
face serious challenges to getting the
information they need and having a good experience
with the internet. They have low spec
phones, which are running usually a very old
version of Android, and their storage is
running out constantly. They have serious
connectivity issues. Data can be slow
and intermittent. It can sometimes take minutes
to load a map, even more to buffer a video. And when they do manage to
connect to the internet, they find that there is
not much localized content. Let me give you an example. Wikipedia in Hindi–
which, by the way, is now the fourth most spoken
language in the world– has just 2%, 2% of the content
of Wikipedia in English. So Google’s approach to
this is pretty simple. First, we want to
ensure that everyone has access to internet. The rest is meaningless without
a working internet connection. Second, we want
to build platforms that enable developers
like yourselves to build meaningful
experiences for everyone. Third, we want to
build products that are directly relevant to
our next billion users. Let me start with access,
which really is the foundation. In India, we have partnered
with Tata Trusts and RailTel to provide Google station, which
is high speed Wi-Fi in hundreds of train stations across India. It’s the largest
public Wi-Fi project in India with millions of
people now using the service. In addition to
access, we also need to create awareness
and educate people on how to use the internet. Our Internet Saathi
initiative is now in 100,000 villages in
12 states across India. We have over 25,000
Internet Saathis. These are women who
have been trained to help other women
in the village learn about the
internet and how it can be used to better their lives. These Saathis have trained
over 12 million women over the past two
years, and the impact that the internet is
having on these women and their communities
is truly incredible. Of course, Google cannot solve
all the needs of everyone everywhere by itself. So we want to make sure that
we make strong platforms that allow everyone to contribute
and grow with the internet. Because that ability to
participate and contribute to the internet is key. It’s what, in turn, makes the
experience better for everyone. While we have lots more work to
do, one thing we are proud of is our support of many
languages in our key platforms. At the end of the day,
Google is a products company. So we are working hard to try
to make our products fast, relevant, and accessible in
our users’ own languages. Last year, we
launched offline maps, which lets people
save a map over Wi-Fi and then use the map just as
they would if they were online. In 2013, we started
letting people take videos offline
on YouTube, launching first in India, as well as
in Indonesia and the Philippines. Now this feature is in more
than 80 countries worldwide. So this is the cool thing. We have learned that
when we tap into a user’s insight, whether on
how people connect or how they overcome
constraints in any market, these insights tend
to hold true globally, which allows us to make
products better for everyone. In this case, I myself,
use maps offline and YouTube offline all the time. And when the market needs
it, we will build products that are made for India first. That’s exactly what we did
a little over two months ago when we launched Tez, a
consumer payments and commerce mobile app that
leverages UPI to build a brand new and refreshing
payments experience to users all over India. Tez is made for India first. It is made to be as
simple to use as cash and provides Google-scale
security to our users. It’s been out a little
bit over two months, and we have seen more
than 10 million users creating more than 74
million transactions. If you haven’t checked it
out, please do so today. Just yesterday, we launched
another brand new product made for our next billion
users first called Datally. Datally is a mobile
data savings app that helps people get the
most out of their mobile data. Datally has three
core features– understand your data,
control your data, and save data by finding
free Wi-Fi near you. Now, we are proud of what
we have achieved so far, but we’re also aware that
there is a lot more work to do. And we are just getting started. So I’ve just given you
a glimpse into what we are working on for the
Next Billion Users markets. I might be biased,
but I truly believe that India is an amazing
place for technology. Fortune 500 companies,
startup hubs, entrepreneurs, dev centers, they’re all
blooming across India. And we have some of the
best talent in the world. I’m lucky enough to work
with many on a daily basis. That’s why we believe
it’s very important for us to meet and work with all of you
through events such as these. We want to hear your feedback
on our products and programs so that we can give
you what you need to turn your ideas into
reality, whether you’re building for the next billion,
like me and my team, or you’re building apps for
use all across the globe. We want to enable you
to focus on the problems that you’re trying to
solve and minimize the pain points of building a product. The Google Developers
team is on the ground in over 130 countries. And within India,
thousands of you are participating as developer
experts and as part of GDGs, and we are continuing to grow
the Indian developer ecosystem through programs such as
Women Techmakers, the Google Developers Agency
Program, and Launchpad. We are also working
on providing training to deepen your technical
capabilities year-round. In fact, you might
recall the goal that we have committed
to– to train 2 million Indian developers,
2 million Indian developers by 2020. To date, we have engaged
over 500,000 developers through our various
training programs, along with more than
1,000 faculty members from 400 colleges. Additionally, 11 state
technical universities have adopted our Android
Developer Fundamentals curriculum. We also recently launched
developer student clubs in 23 states, and they have
trained over 6,000 students in just three months. [CHEERING] Finally, we have recently
announced a partnership with Pluralsight to
offer free developer and IT content to help skill
100,000 Indian developers through the
Pluralsight platform. We are really inspired
by the talent in India. And we want to continue to
help cultivate the developer ecosystem here. Let me give you an example. Meet Jmit. Jmit’s father is
a street mechanic. Growing up, Jmit
always assumed he’d follow in his
father’s footsteps, but he always loved to code. After completing his
six-month training to become a mechanic,
he asked his father, hey, can I take some time to
pursue my real interest, my real dreams, of
becoming a developer? His father agreed, and
from then, each day he went to a part of town where
he could access public Wi-Fi. He sat on this
footpath, you see here, where the signal
was the strongest, and began taking Android
courses through Udacity. After he completed
this training, he began applying for jobs
as a professional developer. And today, Jmit
supports his family with the salary he’s
earning as a developer. Jmit’s story is just one
of many inspiring stories that motivate us to continue
pushing towards our goals to train as many
developers as possible and build the products
and platforms that are most useful to all of you. Now, I’d like to bring
up some colleagues to share updates on the products
across our developer platforms. Let’s get started
with how we are continuing to improve the
Android development process. Please welcome
Dan, and thank you. DANIEL GALPIN: Thank you, Pankaj. Namaste and good morning. It is the best time ever
to be an Android developer, and I can say that because
I’ve been developing Android for over nine years. I’ve been at Google for
over seven of those years, but I’ve never seen
anything like what we have now, this
incredible confluence of meaningful developer changes. We’re seeing ever more powerful
tooling, a clear path forward for app design, a new
programming language, support for on-device
machine intelligence, and fundamental improvements
to the distribution model. And much of this change derives
from listening to all of you in our developer community. All of this is happening
amidst the incredible momentum that Android continues to have. We’re seeing 2 billion
active devices on Android and 82 billion apps
installed from Play. And what’s even more amazing
is how this momentum is making so many developers successful. The number of developers
with over a million installs grew 35% in the last year. And to leverage
this distribution to build great businesses, we
expanded direct carrier billing to reach 900 million devices
with over 140 operators. Altogether, the number
of people buying on Play grew over 30% last year. But that’s not enough. We know we can make
distribution even better by removing the friction
from app installs and making the entire
experience more dynamic. Instant Apps is
one of our big bets in bringing more
users to your apps, and our early partners
are seeing great results. Onefootball saw that
the number of users who read news and shared content
increased 55% in their Instant App, while Vimeo increased
their session durations by 130%. Now, Makaan found that the
conversion rate in their real estate app increased by
2.8x compared to mobile web. And there are many more
stories like these. Now, at I/O, we opened
up Android Instant Apps to all Android
developers, which means anyone can now build and
publish an Android Instant App. And since then we’ve
made Instant Apps available to more than
500 million Android devices across countries
where Google Play operates. Now your Instant App is
downloaded as needed, feature by feature. And you enable this by
organizing your project into feature modules. And then you can use the exact
same code in both your Instant App and your installable app. We’re using a process
of refactoring your app into these feature modules using
the new Modularize refactoring action in Android Studio. Modularize helps you move code
in resources between modules. We’ve also included optimization
tools for more efficient asset delivery with support for
on the wire compression. When you’re ready, you
just upload your Instant App APKs together with your
installable APK in the Play Console. And to get started
building an Instant App today, visit g.co/instantapps.
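To make that feature-module wiring concrete, here is a rough sketch of how a 2017-era Instant App project was typically organized in Gradle, shown in the Kotlin DSL. The module names are hypothetical, and exact plugin and property names varied across Android Gradle plugin previews, so treat this as an illustration rather than canonical configuration:

    // base/build.gradle.kts -- the shared base feature module
    plugins {
        id("com.android.feature")          // feature plugin from the Instant Apps era
    }
    android {
        baseFeature = true                 // marks this module as the base feature
        // ...compileSdkVersion and other android settings omitted...
    }
    dependencies {
        "feature"(project(":news"))        // hypothetical on-demand feature module
        "application"(project(":app"))     // the installable APK module
    }

    // instantapp/build.gradle.kts -- bundles the feature APKs for instant delivery
    plugins {
        id("com.android.instantapp")
    }
    dependencies {
        implementation(project(":base"))
        implementation(project(":news"))
    }

Because the installable app and the instant app both depend on the same feature modules, the same code genuinely ships in both forms.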
Now, at Google I/O back in May this year, we announced that Kotlin is
a fully supported Android programming language, and the
developer community’s support for Kotlin was a huge
driver of our decision to embrace the language. But since that
announcement, we’ve seen a massive increase
in Kotlin activity. The number of apps in the
Play Store that used Kotlin has grown by three
times, and we observe that 17% of Android Studio
projects are now using Kotlin. Of course, Android Studio
3.0 has now been released, bundled with full
support for Kotlin, including Kotlin
templates for projects and activities, in-IDE
Kotlin Lint support, while the 3.1 Canary adds
Lint support for Kotlin on the command line. But we didn’t stop there. We’ve built docs and
samples around Kotlin, and we’re continuing to
develop more in this area. We published the Android
Kotlin Guides on GitHub to provide guidance for Android
Kotlin style and interop. With Support Library 27, we have started adding
nullability annotations to make the APIs friendlier
to use in Kotlin, and we’re doing all of this
while increasing our commitment to the Java
programming language. We already have support
for many Version 8 APIs and support for
language features, such as lambdas and
method references back to any SDK
version with desugar. For me, Kotlin makes
programming Android more fun and productive,
interoperating seamlessly with the Android
standard libraries while combining a
concise syntax with modern features, such
as functional programming and the ability to write DSLs.
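As a small illustration of that style (a generic sketch, not code from the talk), a data class, a higher-order extension function, and a receiver lambda together replace a good deal of Java ceremony:

    // A tiny model plus a mini type-safe builder (DSL); the names are hypothetical
    data class Session(val title: String, val speaker: String)

    fun List<Session>.bySpeaker(name: String): List<Session> =
        filter { it.speaker == name }            // lambda + higher-order function

    fun agenda(build: MutableList<Session>.() -> Unit): List<Session> =
        mutableListOf<Session>().apply(build)    // receiver lambda gives the DSL feel

    fun main() {
        val sessions = agenda {
            add(Session("Instant Apps", "Dan"))
            add(Session("TensorFlow Lite", "Anita"))
        }
        println(sessions.bySpeaker("Dan"))
    }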
Now, minimizing install friction with Instant Apps and the Kotlin
programming language are just two of the
ways in which we’ve listened to your feedback. We’ve made substantial
improvements to Android Studio
focusing on speed, smarts, and Android platform support. I mean, you can see all the
speed and smarts updates we’ve made to Android
Studio behind me, but I want to call out
one thing in particular. Your feedback has
made driving sync and build time down our
number one priority. Benchmarking with a real-life
100-module project, build config time has dropped from
three minutes in 2.2 to two seconds, and we’re continuing to
work on build performance. In Android Studio
3.1, now in Canary, you can try out D8,
our new DEX compiler, which compiles faster and offers
smaller dex files while having the same runtime performance. On the emulators, we’ve
added the Play Store for end-to-end testing. And you’ll find awesome features
for Android Oreo in Android Studio, like end-to-end instant
app support, O system images, improved profilers, and
tons of O helper tools, like a tool to make building
adaptive icons easy. And to download Android
build dependencies, we’re now distributing,
of course, through our own
Maven repository. Now, you’ve asked us to make
the Android frameworks easier to use, like writing an opinionated
guide to best practices and a better solution
for lifecycles. Architecture Components, our
libraries for common tasks, have now hit
stable release, with libraries for the
view model pattern, data storage, and
managing activity and fragment lifecycles. We also have preview
support for paging, which makes it
efficient and easy to work with huge
datasets in RecyclerView.
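Roughly, the view model pattern those libraries support looks like the sketch below, using the 2017-era android.arch packages (today’s equivalents live under androidx.lifecycle, and the class names here are illustrative):

    import android.arch.lifecycle.LiveData
    import android.arch.lifecycle.MutableLiveData
    import android.arch.lifecycle.ViewModel

    // Holds UI state so it survives configuration changes such as rotation
    class CounterViewModel : ViewModel() {
        private val count = MutableLiveData<Int>().apply { value = 0 }
        val counter: LiveData<Int> get() = count

        fun increment() {
            count.value = (count.value ?: 0) + 1
        }
    }

An activity would obtain the model with ViewModelProviders.of(this).get(CounterViewModel::class.java) and observe counter, so the data outlives the activity across recreation.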
App quality is an essential piece of growing a
successful business. We took a sample of apps
and analyzed the correlation between app quality
and business success, and what we learned
is when apps move from average to good quality, we
see a sixfold increase in spend and a sevenfold
increase in retention. So quality is queen. So to help you ensure you’re
targeting the devices that work best for your app, you
can now target specific devices in the Play Console. You can browse a
detailed device catalog. And then if you need a
certain amount of RAM, or you have issues with a
specific system-on-chip, you can set targeting rules
to address this as well. And prior to
excluding devices, you can even see your
installs, rating, and revenue details per device. We’ve also got Android Vitals
Dashboards in the Play Console. So you can now
see aggregate data about your app to
help you pinpoint common issues: excessive crash
rate, ANR rate, frozen frames, slow rendering, excessive
wakeups, and more. These are enhanced
by improved profilers and new instrumentation
in the platform. And speaking of
platform, Android O adds so much for developers. We’re in developer
preview for Oreo 8.1, which has Android Go
optimizations and targeting, as well as a new Neural
Networks API, which lays the foundation for
our developer community to build accelerated on-device
machine learning applications, including image
recognition and prediction. Of course, O has vastly
improved font support and auto sizing text view,
notification controls, and a new native pro-audio API. We’ve made massive
improvements in the runtime, including the concurrent
copying collector and a series of optimizations
to make your apps run smoother. We’ve introduced adaptive
icons to improve the launcher experience and continue
to harden Android security with Google Play Protect now
enabled on every Google Play device. We’ve improved
accessibility, added support for autofill and
smart text selection, added support for a wide gamut
color and extra long screens, and improved
multi-display support. I’ll be diving into all of this
and much more in more detail later today. Now let’s talk about some of the
ways we’re extending Android. ARCore uses motion tracking,
environmental understanding, and light estimation to
blend virtual content with the real world, as seen
through your phone’s camera. It’s being offered as
a preview so that you can start experimenting with
building new AR experiences and give us feedback. This preview is the
first step in a journey to enabling AR capabilities
across the whole Android ecosystem. Android Things makes developing
connected mass market products easy by providing the
same Android development tools, best-in-class Android
framework, and Google APIs that make developers successful in
mobile, on a trusted platform. And I’m excited to announce
that just today, Android Things Developer Preview
Six was released. Android Things’ hardware is
based on a system-on-module (SOM) architecture. The SOM contains a CPU,
the memory, the networking and other core components
in a very small package that can be produced
cheaply, since they are generic parts made
in large quantities. For each SOM, Google
provides the kernel, drivers, and other software as part
of a board support package. So during prototyping
and development, you attach it to a breakout
board to connect I/O. For production, you can
build your own custom board for the SOM, reducing
cost and simplifying hardware development. On the software side of things,
you build a standard APK and upload it to
your fleet of devices via our developer console. Google provides an over-the-air
update mechanism so you can roll out
updates to your devices, and you can get security
updates from the same people who maintain Android. So you can focus on
your core business instead of having to worry about
patching kernels in libraries. And since Android Things is
Android, you can not only use familiar tools, like Android
Studio, Kotlin, and Firebase combined with
Cloud IoT Core, you can also leverage the
power of TensorFlow and the Google Assistant
in your products.
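Since Android Things apps are regular Android apps, peripheral access is plain Kotlin. A minimal sketch of blinking an LED over GPIO, assuming the developer-preview-era peripheral I/O API (the manager class was renamed in later previews, and the pin name is board-specific and hypothetical):

    import com.google.android.things.pio.Gpio
    import com.google.android.things.pio.PeripheralManagerService

    fun blinkOnce() {
        val manager = PeripheralManagerService()
        val led: Gpio = manager.openGpio("BCM6")        // pin name depends on the board
        led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW)
        led.value = true                                // drive the pin high
        Thread.sleep(500)
        led.value = false
        led.close()                                     // release the peripheral
    }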
Now, it’s still early days, but we’re seeing incredible growth in Android Auto. We continue to expand the number
of Android Auto-compatible cars through great partnerships
with over 50 car brands. There are over 300 Android
Auto compatible car models and aftermarket systems
available today, which is triple the
number from one year ago. And it’s well on its way to
becoming a standard feature in every new car. We’ve also made Android Auto
available to all Android users with the launch of the
standalone phone app, opening up the
platform and ecosystem to many millions of drivers,
no matter what kind of car they drive. Now, during the holiday
season last year, Android Wear saw 72% growth, and that was
before Wear 2.0 launched. The number of brands
supporting Android Wear doubled from 12 last year
to 24, and the choice of Android Wear Watch has
doubled from 23 last year to 46. Apps are taking
advantage of Android Wear 2.0 and its standalone
functionality, which allows apps
to work no matter what platform the
watch is connected to. Finally, our strong partnerships
with pay TV operators and hardware
manufacturers allowed us to double the number of
activated Android TV devices last year. And we expect that trend to
continue and further increase. We’re seeing this both across
partners in this set-top box and the smart TV form factors. We’ve expanded our international
footprint to 70 countries, and there are now more
than 3,000 Android TV apps in the Play Store. And Android TV supports
the Google Assistant for a smarter content
search and to be the center of the connected home. With so many ways for people
to interact with Android, the strong communities that are
supporting Android development, the improvements we’ve placed
in the platform, tooling, language, and the
distribution of Google Play, it really is the best time ever
to be an Android developer. Please check out the training
sessions and code labs to learn how we’re helping
developers make Android great. Now, all of the
Android form factors are tapping into the power
of the Google Assistant. So it’s my pleasure to
welcome my colleague Sachit to the stage to talk about
what the Google Assistant means to developers. Thank you. [CHEERING] SACHIT MAHRA: Thanks, Dan. Hi, everyone. As you just heard,
the Google Assistant is now available across many
devices, from your phone to your TV. It’s also available on
voice activated speakers, like Google Home. You, as a developer,
have the ability to leverage actions
on Google, to build conversational experiences
through the Google Assistant. So today, I’m going to tell
you about all the new features we’ve added to the
Actions on Google Platform to make your apps for the
assistant even more capable. You can build apps for all
sorts of assistive use cases, for voice and visual interfaces,
like shopping for clothes or ordering food
from a lengthy menu. With UI elements, like
image carousels, lists, and suggestion chips,
users can see more. They can also
seamlessly transition between voice, typing, or
taps in a single conversation in order to easily get
things done with your app. We’ve also opened up powerful
transactions experiences in the US and UK, helping
developers grow their business by making it easy to complete
purchases of physical goods and services through the
Google Assistant on phones. This can be done with
Google-facilitated payments or their own stored
payment methods for users who sign
into their app. Speaking of sign-in, there
is a seamless one-tap step for linking their rewards
account to the assistant. Orders can be tracked,
modified, or even repeated using the
transaction history view accessible in
the Google Assistant. But none of this matters if
users can’t discover your app. We’ve rolled out
an app directory within the assistant experience
on your phone with What’s New and What’s Trending sections,
which will constantly change and evolve, creating
more opportunities for your app to be discovered by users. We’re using your app’s
description and sample invocations to map users’
natural search queries to new task-based
subcategories of apps. We’re even launching
a new For Families badge designed to help
users find apps that are appropriate for all ages. And to make finding
your app easier, the assistant is also
learning from the directory and other information provided
by you, the developer. Thanks to these signals,
the assistant can often respond to general requests,
like “play a game,” with a few different
options from third parties. Improving discovery is
very important for us. So you can expect ongoing
investment and improvements in this area. Once users have
found your app, they want frictionless
assistive experiences. We are committed to enabling you
to build for innovative new use cases. So just in the
last couple months, we’ve exposed specific assistant
capabilities to developers. For instance, developers
can now transfer their app’s conversation from a
voice-activated speaker to a mobile phone mid-dialogue. We have improved the voice
UI development capabilities by giving developers
more control over conversational mishaps,
like unrecognized input or cancellation. We’ve even introduced a
proactive updates feature in Developer Preview, which
allows you to request users to register for regularly
scheduled updates or even push notifications. This opens new doors for
app reengagement and usage. Imagine being able to
connect to your user each day to remind them
about an upcoming event or provide them
with an urgent alert directly through the assistant. In order to leverage
all these features, it’s also important to us
that the development process is smooth. The Actions console is your
central hub for development. It helps you work as a
team, choose the right tool for development,
and collect data on your apps usage, performance,
and user discovery patterns. It’s integrated with the
Firebase and Google Cloud console so that it’s
easy to incorporate into your existing Google
development projects. In addition to
the console, we’re also providing you with
access to developer tools that let you quickly and easily
build apps for the assistant. Since the launch
of our platform, we’ve worked with an expanding
number of developer tools companies to make
their solutions compatible with
Actions on Google. We’ve also expanded
the capabilities of the newly renamed Dialogflow,
our own conversation building tool, launching new features,
such as an inline code editor, multi-language support,
and in-dialogue analytics. One tool that I’m extremely
proud to promote is Templates. These allow you to build a fully
functional, high quality app for the assistant,
with no code at all. Just pick a template type,
such as a trivia game or flash cards, fill in a spreadsheet
with your content, and within minutes,
you’re ready to publish. In fact, I want every
single one of you to try this as soon as you can. I promise it’s that easy. While we’re still in the
early days of the platform, we are focused on
making it more robust and expanding its
reach and capabilities. One way we’re doing that is by
supporting the Google Assistant SDK used to embed the assistant
into your own custom hardware devices, such as those
powered by Android Things. Or with our Smart
Home integration, it’s possible for
device makers to build IoT devices that can be
controlled from the Google Assistant. Another way we’re
expanding is by constantly working to open up to
new languages for Actions on Google. We recently launched in UK,
Australian, and Indian English, as well as French,
Korean, Japanese, Spanish, and other languages. We’re excited for the road
ahead, and we want more of you to join us by developing
for the platform. With an addressable audience of
more than 100 million devices, new capabilities, like
proactive updates, and an improved
developer experience, we think this is an incredible
opportunity for all of us. The magic of the
assistant is enabled by Google’s deep investment
in AI and the cloud. So to tell you more about
that, please welcome Kaz. [APPLAUSE] KAZUNORI SATO: Hello, everyone. I’m Kaz Sato. I’m Developer Advocate
for the Google Cloud team. As an advocate
for developers, I’d like to introduce machine
learning solutions and services from Google. What are AI, machine
learning, and neural networks? There is no scientific
definition of AI, but you can say it’s the
science to make things smart, or like building an
autonomous driving car, or like a computer drawing
a beautiful picture. And there have been
many approaches to realize the vision
of AI, and one of them is machine learning, or ML. With ML, you can
program your computer with data, not with the
program code written by human programmers. So computers can find
certain patterns from data to solve various problems. In ML, there are many
different algorithms. And one of them
is deep learning, or deep neural networks. And since 2012, we have been
seeing a big breakthrough in the area of neural networks. So Google has been making
a significant investment in developing a neural
network technology. Google has been deploying
deep learning technologies in over 100 production
projects, such as Google search, or Android, or Maps,
and Gmail, and so on. For example, Google
Photos lets you search your photos with keywords. Deep learning algorithms
recognize the objects in your photos, and so
you don’t have to add labels or tags by yourself. The Inbox mobile app has
the smart reply feature that uses natural
language processing technology to generate
replies for each email thread. So you can just choose one of
them to reply to the thread. Over 12% of responses
on the app are already generated by this feature. Google Translate has
introduced a new neural machine translation model to
generate much more natural translations of text. And now Google is
focusing on externalizing the power of machine learning
to customers and developers. One of those ML products
is Cloud ML API, the pre-trained machine
learning models, such as Vision API
for image recognition, or Speech API for
voice recognition, and the NL API for natural
language processing. Another ML solution
is TensorFlow. TensorFlow is an open source
software library for machine learning development that lets
you train your own customized machine learning model. It is the standard ML library
used inside Google for developing any new machine
learning or AI-based services. And Google open
sourced it in November 2015. TensorFlow is
scalable and portable. You can start trying
TensorFlow on your laptop, then run it with a GPU,
or tens or hundreds of GPUs in the cloud. Because TensorFlow
is scalable, you don’t have to change
your TensorFlow code to bring it to your large
scale distributed training. Also once you have finished your
training with the TensorFlow model, you can run it on various
devices, such as smartphones or Raspberry Pi. As a result of these benefits, TensorFlow has
gained big popularity in the open source world, and it is now
the most popular deep learning framework in the industry. TensorFlow is used by many large
enterprises, both for PoCs and for
production use cases. Lastly, I’d like to show
a demonstration called “Find Your Candy.” It’s a demonstration
that integrates the machine learning APIs
and TensorFlow into a total ML solution. Let’s take a look at the video. [VIDEO PLAYBACK] – And give it to you. – Awesome. OK. – All right. So click on that, and
speak into the mic. – May I have some gum? – So it understood
what you said. May I have some gum? Now it’s going through
natural language processing. It’s identifying the
noun, the noun there. And gums. So now it will then match
based on the model that’s been modified. Come on, come on, come on. And it is picking chewing gum. And there, the camera identified
Extra, Long-Lasting Watermelon Gum. Now the camera– and over,
and there’s your gum. – [LAUGHING] That’s great. – Machine learning in and out. – So I get to keep this, right? – Yeah, actually, I’ve got
like, seven boxes back there, please, everybody take one. [END PLAYBACK] KAZUNORI SATO: ML APIs and
TensorFlow provide you with real-world ML solutions that
allow you to bring the latest deep learning technology
to solve your own business problems today. So with that, I would like
to invite Anita on stage to tell you a bit more
about TensorFlow Lite. ANITA VIJAYAKUMAR:
Thank you, Kaz. My name is Anita, and
I’m the Technical Project Manager for TensorFlow
on the Google Brain Team. Bangalore is my
hometown, and I’m really excited to be here
with all of you. Go Bangalore! [CHEERING] I will be introducing
you to TensorFlow Lite and why we need an on-device
machine learning library. On one side, machine
learning traditionally has been run on
powerful machines with tremendous amount
of compute power. On the other side,
mobile devices are ubiquitous and are getting
more and more powerful. Some of these devices
have more compute power than what NASA had when they
first sent a man to the moon. Think about it for a minute. We essentially walk around with
supercomputers in our pockets these days. These trends enable us to
shift some of the machine learning workloads from the
cloud back to the device, specifically enabling
machine learning inference on mobile
and embedded devices, thus pushing the boundaries
a little further. There are several reasons why
on-device machine learning is useful. First, application
developers might want to maintain functionality
and do inference while offline. Second, applications may
have low latency requirements of the order of milliseconds and
really cannot afford a round trip back to the cloud. Third, specific
sensitive applications might have requirements
for the data to not leave the device,
thus ensuring user privacy. There’s also a need
for the applications to work under low
bandwidth where you don’t have the luxury
of downloading a huge model at the time of inference. Fourth, processing
sometimes needs to be done without turning
on power-hungry radios. These are some of the
motivations to do on-device ML. Even though on-device ML
sounds like a great idea, mobile devices come
with many challenges and have to operate under
constraint environments compared to their
workstation counterparts. There’s limited network
bandwidth, limited memory, sometimes even
limited computation. At the same time,
these mobile devices have very aggressive release
and engineering cycles, which means there’s hardware
heterogeneity, which leads to supporting machine
learning on specialized hardware like GPUs and DSPs. We decided that making a
product whose sole focus was mobile devices is essential. TensorFlow is primarily
for large devices and TensorFlow Lite
for smaller devices. Put simply, TensorFlow Lite
is a machine learning library to do inference on mobile
and embedded devices that’s easier,
faster, and smaller. The primary goals
of TensorFlow Lite are low latency, small
binary footprint, and optimized throughput. TensorFlow Lite has support
for the Android Neural Networks API, which enables hardware
acceleration, leveraging custom accelerators on the phone. We released the first
developer preview of TensorFlow Lite a
couple of weeks back, and we have support for popular
image classification models as well as text-based
Smart Reply models. We can’t wait to see
what you all come up with using this on-device
inference library.
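On Android, running a converted .tflite model is a few lines of Kotlin with the interpreter API. A minimal sketch, assuming you have already converted a model and bundled it with your app (the function name, tensor shapes, and output size here are hypothetical):

    import org.tensorflow.lite.Interpreter
    import java.io.File

    // Runs one inference. Shapes are illustrative: the model is assumed to take
    // a [1, N] float input and produce a [1, M] float output of scores.
    fun runInference(modelFile: File, input: FloatArray, numOutputs: Int): FloatArray {
        val interpreter = Interpreter(modelFile)          // load the converted model
        val output = Array(1) { FloatArray(numOutputs) }  // output buffer, shape [1, M]
        interpreter.run(arrayOf(input), output)           // input wrapped to shape [1, N]
        interpreter.close()                               // free native resources
        return output[0]
    }

And because the interpreter can delegate to the Neural Networks API on newer Android releases, the same code can pick up hardware acceleration where it is available.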
Now, please welcome Tal on stage from the Chrome team to tell you more about Chrome. Thank you. TAL OPPENHEIMER: Thanks, Anita. Hi, everyone. My name is Tal. I’m from the Chrome team, and
I’m excited to talk about some of the improvements
that we’ve made on the web over the past year. The web is big. With over 2 billion
instances of Chrome, we know that the web
has tremendous reach. But one of the true
strengths of the web is that it’s bigger
than any single browser. So regardless of
whether the device is a smartphone, or a laptop,
or a desktop, or a tablet, they all have a browser. So any web-based experience
is available on these billions of devices today. And we’ve seen those have a
real impact on how many users web apps are reaching. We’ve all seen how quickly
mobile has been growing, and native apps
have been growing at an incredible pace with it. But what’s really remarkable
is that even with the web’s large initial reach, we’ve
seen the average monthly web audience growing even faster. And because of
this growth, we’re seeing the web
expand into new areas with experiences
like WebVR being built on the web platform. So with the web pretty
much everywhere, we’re constantly trying
to push the boundaries on what it can do. Over the past
year, we’ve shipped hundreds of additional APIs that
cover a range of capabilities from making it easier
to integrate payments to building fully
capable off-line media experiences directly on the web. But beyond just these
core capabilities, we’re also ensuring that
the mobile web works well with the India stack. For example, with
our Payment Request API, it’s really easy to tie
into popular payment methods for every region. So in India, we’ve
ensured that it integrates with Google’s new
payments app, Tez, so it integrates with local
businesses, banks, and India’s unified payments interface. And with all of
these capabilities, the modern mobile
web also allows developers to build deep,
rich mobile experiences with something that we call
Progressive Web Apps, or PWAs. PWAs are about helping
web developers leverage the web’s new capabilities to
build high class experiences that really feel immersive. They can load quickly. They work offline. And you can even send
notifications to users. And we’ve seen a number of
amazing experiences taking advantage of these
new capabilities. As just one example, there’s
Ola, a popular ride sharing app based right here in
India, who recently built a progressive web app to
help reach users in tier two and tier three cities. Here they have a polished,
fast, immersive experience that works on any connection. It can send users notifications,
and it’s built completely on the mobile web. So it’s already accessible
on billions of devices. And we’re excited to announce
that the reach of this PWA technology is huge, as the
core technology powering this is now supported
across top browsers, including UC browser in India. And with the ability to create
such immersive experiences like this, we also want to
make sure that you can get back to it really easily. Add to Homescreen has
always allowed users to add an experience to
their Android Homescreen. But with our improved
Add to Homescreen flow, when you add a
PWA to your homescreen, it’s fully integrated
into the platform. So to users, it feels like
any other app experience on their device. It’ll appear in the
Android launcher alongside your Android
apps, and it’ll even appear in Android storage settings. But since it’s a PWA,
it’s inherently small. So users are able to get
an immersive experience without requiring
extensive storage space. And this fast, integrated,
improved Add to Homescreen flow is available now. So with all of these
new capabilities, we’ve also been working
to make sure it’s easy for web developers to
build these experiences. We’ll be going into
a lot more detail on how to develop PWAs
throughout the mobile web track. But no matter how you’re
building your web app, Lighthouse is a tool
that can show you how to improve your web experience. Lighthouse is a Chrome
extension and command line tool that quickly
audits your site to identify how you
can improve your app’s performance, accessibility,
and progressive web appiness. And we’re excited to announce
that as of M60, Lighthouse is now directly
integrated into dev tools. So now you can quickly see
how your website is doing and what to do next
directly in Chrome. And with all of
these tools, we’ve seen just how easy it
can be for companies to take advantage of these
new capabilities for their web experience. To give another example, there’s
Voot, a popular video streaming site also based here in India. And it’s an experience
built on the web so users can get to it directly. And with our new Add
to Homescreen flow, it can also be easily
accessed from the app launcher or the Android home screen. And when you open it,
you get a high class, immersive experience. It automatically
rotates to allow for full screen experiences. And with some of
the newest APIs, it can even support downloading
of videos and offline playback. And since it’s built
on the web, users can get this entire experience
immediately on their devices. And this is just
one example of many. Leveraging the modern
mobile web is now the norm around the world. Whether they’re building
a PWA from scratch or leveraging the
latest capabilities on their existing
web experience, companies everywhere are
seeing a tangible impact on their key metrics. With the modern mobile web,
it’s possible to easily build immersive, fully
capable experiences that can reach billions of people
around the world today. And now, let’s turn
our focus to what we’re doing to make it
easier to develop apps and grow your business. Please welcome Francis. FRANCIS MA: Hi, I’m Francis,
and I lead the Firebase Product Team. Our mission is to
help developers like you build a better app and
grow a more successful business. At I/O 2016, we
expanded Firebase from a set of backend services
to a broad mobile platform to help you solve many of
the common problems you face across the lifecycle
of your app, from helping you build faster
and easier with products like Cloud Storage
and the Realtime Database, to helping you better understand
and grow your users with tools like Analytics and
Cloud Messaging. Whether you are
starting something new or looking to extend
the existing app, we’re here to help
you so that you can channel more of
your time and energy towards creating
value for your users. And we make this available
all through a single, easy to use SDK available
across platforms. To date, there are over
1 million developers that have used Firebase, and we’re
humbled that so many of you have trusted us with your
apps, and we’re committed to helping you succeed. Over the last year, our team has
made many updates to Firebase, and I’d like to
highlight a few of these. First, let’s start
with backend services, where we provide you
with the core building blocks that help you build
your apps faster and easier. One of these is Cloud Firestore,
a NoSQL document database that scales automatically. Cloud Firestore features a new
document-collection data model and makes it a
lot more intuitive for you to structure your data. It’s also fully
auto-managed, and it’s built on Google’s
global infrastructure so that it will
automatically scale with you, and you don’t have to worry
about managing your machine sizes, RAM allocation,
or networks.
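From an Android app, reading and writing Cloud Firestore is compact in Kotlin. A minimal sketch using the Firebase Android SDK (the collection name and fields are hypothetical):

    import com.google.firebase.firestore.FirebaseFirestore

    fun saveCity() {
        val db = FirebaseFirestore.getInstance()
        val city = mapOf("name" to "Bangalore", "population" to 10_000_000)
        db.collection("cities")
            .add(city)                                    // auto-generates a document ID
            .addOnSuccessListener { ref -> println("Saved as ${ref.id}") }
            .addOnFailureListener { e -> println("Write failed: $e") }
    }

add() generates the document ID for you; when you need a stable key, you would use document("id").set(...) instead.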
Now, Firestore, like other Firebase products, also works with Cloud Functions. Cloud Functions gives you a
way to deploy your JavaScript code to the cloud
and execute it based on the nature of your request or
through other events happening across Firebase. So for example, you
can write a function to extend Cloud Firestore to do
some server-side processing, like data validation, whenever
a document is written. With Firestore, Cloud Functions,
and other Firebase backend services, our infrastructure
will scale with your workload automatically from
prototype to planet scale, freeing you from managing
your own servers. So let’s switch gears to talk
about some other updates that could help you better understand
and improve your app’s stability. Since welcoming the
Fabric team to Google earlier this year, we’ve
integrated Crashlytics into Firebase. Firebase Crashlytics is our
flagship crash reporting product that helps you monitor
and fix crashes and app errors. And in addition to
monitoring app crashes, it’s also really
important to understand how your app performs
out in the field, because users often
abandon slow running apps. And that’s where Firebase
Performance Monitoring can help you better understand
how your app performs across a diversity of devices
and network conditions. Now with just one
line of code, you can get insights into your
app startup time and network latency. As well, you can
add custom metrics to really understand
how your app performs through those
critical user flows that you really care about. And this is a great way
to find those bottlenecks in your app that
could be impacting your user engagements or even
your business bottom line.
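A custom trace takes only a few lines in Kotlin with the Performance Monitoring SDK. A minimal sketch (the trace name is hypothetical, and the exact custom-metric API has varied across SDK versions):

    import com.google.firebase.perf.FirebasePerformance

    fun loadCatalog() {
        val trace = FirebasePerformance.getInstance().newTrace("catalog_load")
        trace.start()
        // ... run the user flow you want to measure ...
        trace.stop()   // the duration is reported to the Firebase console automatically
    }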
So in addition to helping you build a better app, Firebase also helps you
grow and engage more users. First, let’s talk about
Cloud Messaging, or FCM, which gives you an easy
way to send notifications to engage your users. FCM is integrated
with Analytics. So it gives you many options
to send targeted notifications to different user
groups or app versions. Another great way to
drive user engagement is by creating a
personalized experience. And Remote Config helps you do
that more easily by enabling you to change your app’s
configuration remotely and at runtime. It’s also integrated
with Analytics so you can fine tune and
customize your app experience to different user
segments or operations.
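In Kotlin, fetching and applying a Remote Config value is a few lines with the Firebase SDK. A minimal sketch (the parameter key is hypothetical, and activateFetched() was the 2017-era call, later renamed activate()):

    import com.google.firebase.remoteconfig.FirebaseRemoteConfig

    fun applyButtonColor(render: (String) -> Unit) {
        val config = FirebaseRemoteConfig.getInstance()
        config.fetch(3600)                                // cache fetched values for an hour
            .addOnCompleteListener { task ->
                if (task.isSuccessful) config.activateFetched()
                render(config.getString("button_color"))  // falls back to defaults if unset
            }
    }

Because fetched values can be conditioned on Analytics audiences, the same call can serve different user segments different configurations.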
Now, many developers use FCM and Remote Config to create more
targeted experiences. But we’ve also heard
from many of you that you want an easier
and more powerful way to test different variants. And for that, we have recently
released first-class A/B testing support for Firebase. With A/B testing, you can
test different variants of notification messages
or configuration values to different groups
of users, and it will help you figure out
which of these variants perform best for the goals
that you’ve specified. So for example,
you can figure out whether the orange
button or the blue button helped drive more
user purchases. I’m also very excited to
share that we’ve recently taken our first step of bringing
Google’s machine learning to Firebase with the release
of Firebase Predictions. Predictions applies ML
to your analytics data and helps you predict user’s
behavior, like churn, spend, or other events that
you’ve specified that are important to your app. Now, it’s also integrated
with other Firebase products. So you can take
targeted actions, like triggering and in-app
promotion using Remote Config, to users who are
most likely to spend, or say send to
push notifications to target users who
are likely to churn, or run A/B tests across
these different groups. I’m very excited to be here
sharing these updates on behalf of the Firebase team and
meeting many of you here. I look forward to
hearing your feedback, and we will continue to work hard to
help you build a better app and grow a more
successful business. Thank you. PANKAJ GUPTA: Thanks, everyone,
for joining us this morning. I hope you’re all
as excited as I am about the progress we have
been making with our developer products and platforms. Thank you to our speakers,
Dan, Sachit, Kaz, Anita, Tal, and Francis. For the rest of the
day, you’re invited to participate in technical
sessions, trainings, code labs, and explore the
sandboxes right outside. So please enjoy the Google
Developer Days India event. Thank you. [MUSIC PLAYING]
