Google I/O 2024: Untangling Mixed Messages

Google’s annual I/O developer conference is always a mixed bag of consumer-facing features, concepts that might never make it to market, and developer tools. This year’s keynotes were even harder to follow than usual, as Google mixed in AI research concepts without context, AI capabilities that are only available in labs, and important new consumer features in Android that weren’t included in the live-streamed keynotes at all.

To make sense of it all, I’ve grouped Google’s announcements by audience rather than following the completely random approach that Google itself took, and I’ve surfaced the most important announcements Google buried in sessions later in the week. You’re welcome.

Consumers: Search & Photos

Google is an advertising company, and its main product is Search. AI Overviews launched in the U.S. during I/O, and Search using video as an input is coming soon. Google showed off a demo where AI diagnosed a malfunctioning appliance and gave instructions on how to fix it; if this becomes a common use case, there could be a lot of room to promote YouTube videos. Of course, another use case for AI is summarizing YouTube videos, so there’s a chance this becomes a closed loop where AI watches things so that it can watch other things. Somewhere in that loop, Google will undoubtedly serve up even more contextual ads that AI will not be allowed to skip or summarize.

Gemini in Google Photos is learning context for better search within your photo archive. This is undoubtedly useful but feels all kinds of creepy. Coming this summer.

Consumers: Android

There’s plenty of AI coming to Android, but some of the most meaningful Android improvements weren’t in the keynote. In Android 15, you’ll be able to add physical cards to your digital Wallet by taking a picture of them. For example, that makes it easy to always have your health insurance card or gym membership card with you, as long as you have your phone.

Given that your data is usually more valuable than your hardware, Android 15 will include features that lock your phone when it detects the sudden motion of someone yanking it out of your hand, or let you lock it remotely before a thief gets to your banking data. You’ll also get a private space that acts like a hidden virtual phone with its own apps, messaging, and login data — which can foil thieves, keep private photos especially private, or conceal affairs from spouses.

That’s not all of the consumer-oriented security improvements that Google buried in breakout sessions. One-time passwords are now blocked from notifications (unless the password is going to an app on your wrist). Screen sharing will also block notifications and other sensitive information — a common attack vector — and Android makes it clearer whenever you are actively sharing anything. Google will also now alert you if you are on an unencrypted (likely fake) cellular network.

The Gemini foundation model is moving on-device to Pixel phones with Gemini Nano and Android 15 later this year. This opens up all kinds of on-device AI, including scam detection during live phone calls; because the AI analysis happens entirely on-device, your conversations stay private. If this works, it will save many people from financial loss. Research shows that younger consumers are scammed as often as or more often than the elderly, so this technology is broadly applicable.

Similarly, Google already scans apps in the Play Store daily for malware, but it is introducing on-device Play Protect analysis of “behavior signals” to catch apps that appear benign but are actually trying to scam you. If an app’s permissions and interactions with other apps appear suspect, the app gets flagged, Google reviews it manually, and then warns users or disables the app remotely. This service uses the Private Compute Core, which preserves privacy and doesn’t collect personal data, and it is not limited to Pixel: Honor, Lenovo (Motorola), Nothing, OnePlus, Oppo, Sharp, Transsion, and other manufacturers will deploy live threat detection later this year. Samsung isn’t listed, but that may be a marketing issue; it might still participate, or offer an equivalent program through Knox.

Then there’s all the rest of Google’s AI coming to Android, starting with Gemini taking over for Google Assistant. Circle to Search is already on 100 million smartphones, thanks largely to Samsung’s rapid software rollout to its entire current and recent premium lineup, plus Google’s own expanding Pixel line. (Google’s $500 Pixel 8a started shipping recently, and my impressions of the review unit are extremely positive.) Circle to Search should reach 200 million phones by the end of the year, and it is getting more capable. As of today, Circle to Search can answer basic math word problems — with explanations, no cheating! — and the ability to solve more complex equations is coming later this year.

Gemini in Messages on Android can provide in-app insights and generate images to memeify your responses. For better or worse, consumers really want the ability to do this, so this may be the most immediately consequential AI capability announced at I/O. 

Consumers & Enterprise: Work

Gemini for Workspace is Google’s version of Copilot for Microsoft 365. As with Microsoft, the goal here is to monetize increased productivity by having Gemini pull data out of email attachments and then chart it, summarize meetings, or provide a virtual assistant for coordination. In some cases, Gemini for Workspace could paper over deficiencies in Google’s apps by letting AI create the pivot tables and charts that Excel has long provided natively. Other data analysis features appear advanced enough to compete with dedicated tools like Tableau. However, Google was terribly unclear on which features are shipping when: some are coming “next month,” others “later this year,” and still others possibly never.

Google has tools in the labs for content creators — musicians, filmmakers, and YouTubers especially — that may compete with Adobe and Apple. No word on when these will ship, but Google has already been trialing them with select pros.

Developers, AI Researchers, and, Most Importantly, Investors: AI

There are few companies that can garner broad developer adoption across different platforms; Google is one of them. Over 1.5 million developers are using Gemini to help them code, and over 1 million people are beta testing Gemini Advanced on phones.

Gemini’s AI model has a “one million token context window,” but Gemini 1.5 Pro will have a two million token context window. Google provided this statement without any context for humans, but it loosely means how much input the model can consider at once: tokens are the word chunks an LLM processes, and the context window is the maximum number of them the model can hold in mind while generating a response. Other AI vendors list context windows in the thousands or tens of thousands of tokens, so Google’s claim sounds impressive, though who knows whether it is meaningful in practical application. AI vendors typically promote the mind-numbingly large number of parameters used in training their LLMs, so Google is trying to find a different metric to convince investors that it is beating OpenAI’s ChatGPT.
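To put two million tokens in perspective, here’s a back-of-the-envelope conversion. The per-token and per-page figures are common rules of thumb for English text, not Google’s specifications:

```python
# Rough conversion of a 2M-token context window into familiar units.
# Both constants are heuristics for English prose, not Google specs.
TOKENS = 2_000_000
WORDS_PER_TOKEN = 0.75   # typical tokenizers split ~4 words into ~5-6 tokens
WORDS_PER_PAGE = 500     # a dense printed page

words = TOKENS * WORDS_PER_TOKEN
pages = words / WORDS_PER_PAGE

print(f"{TOKENS:,} tokens = roughly {words:,.0f} words, or {pages:,.0f} pages")
# prints: 2,000,000 tokens = roughly 1,500,000 words, or 3,000 pages
```

In other words, by these rough heuristics the model could ingest several long novels in a single prompt — which is why practical usefulness, not raw size, is the open question.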

Developers can actually pay to use Gemini, but Google has lots of projects in the works showing off different approaches to AI built on the Gemini model. Some of these will reach the market, some are in beta, some are just science projects, and I could not tell you which is which — I’m not sure that Google knows, either. NotebookLM is a text-based repository AI available in the U.S. today as “an experiment,” and it is getting super conversational. DeepMind is Google’s AI research arm trying to create general intelligence, discover new drugs, and possibly overtake humankind. (No, seriously, Google brought out its AI Responsibility officer to assure us that Google is carefully considering the threat of AGI world domination.) Google also showed off AI for video creation, and a version of Gemini, Gemini Flash, that can be condensed down to run on-device.

The most impressive AI probably-just-a-science-experiment-for-now is Project Astra, which can understand context in video — including video you shoot yourself. This made for an impressive demo today, but if you could combine Astra and Flash, you could have smart glasses that privately record everything you see, giving you a photographic memory with AI-powered search. Could this lead to the dystopian future depicted in the movie Strange Days? Sure, but you’d never lose your keys again. Perhaps not coincidentally, in the AR breakout session at Google I/O, Google teased a pair of smart glasses that look a lot like what might have evolved from its North acquisition if Google hadn’t actually killed the product and just forgot about it for a while.

Developers: Android, Auto, and WearOS

In developer sessions after the keynote, Google highlighted new tools aimed at making it easier to create dynamic apps that scale for foldable and tablet form factors. You could argue that these should have been in the platform years ago, but Google is paying attention now. Android is getting more efficient AV1 software decoding, more predictable “back” navigation, and tweaks for better battery life by restricting how apps can hold resources in the foreground.

Google Cast is coming to cars with Android Automotive OS, starting with Rivian. Android Auto development is now split out by tier: you can make apps car-specific and change their behavior depending on whether the car is driving or parked. Multitasking via picture-in-picture is now supported on Android TV.

Despite Fossil pulling out of the market, Wear OS grew its user base by 40% in 2023. I suspect that this is mostly thanks to Samsung, but Chinese watch makers and Google’s own Pixel Watch line certainly played a part. Wear OS 5 will focus on improving battery life, and at I/O Google announced tools to help developers minimize resource usage, create richer watch faces, and track running in real time via new health APIs.

What Was Missing

Advertising: Google is an advertising company, and its main product is Search. Despite the fact that Google is completely overhauling Search and will need to find new ways to monetize the more-expensive-to-provide AI-based results, Google surprisingly did not discuss advertising at all in its keynote. Yes, this is a developer conference, but Google hardly limited its announcements to developers – and developers often monetize their work with ads, too.

Spatial Computing: Google did show off visual AI engines that will be fully realized when incorporated into smart glasses or headsets, and it had sessions on geospatial AR content in Google Maps. However, nothing was presented about its work with Qualcomm and Samsung to create a headset to compete with Apple and Meta. If this work is anywhere near fruition, I/O would have been a perfect place to whet developers’ appetites.


To discuss the implications of this report on your business, product, or investment strategies, contact Techsponential at avi@techsponential.com.