What would you like from more AI in Google Apps?


“If you asked people what they wanted, they would say faster horses.” This sentiment, along with derivatives such as “people don’t know what they want until you show it to them,” makes predicting the future of technology difficult, since it only takes a single innovation to completely change the paradigm. That is especially true of the upcoming wave of AI features for new and existing Google apps.

Not caught off guard

Google wasn’t blindsided by what was to come. The company has spoken publicly about Natural Language Understanding (NLU) and Large Language Models (LLMs) at its last two I/O developer conferences, its biggest event each year. There was LaMDA (Language Model for Dialogue Applications) in 2021, with a demo of talking to Pluto, and last year’s LaMDA 2, which could be tried out through the AI Test Kitchen app.

There’s also the Multitask Unified Model (MUM), which could one day answer “I climbed Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?” and, in the future, let you take a picture of a broken bike part in Google Lens and get instructions on how to fix it.

In addition to detailing its technology, Sundar Pichai said more bluntly that “natural conversational capabilities have the potential to make information and computing more accessible and easier to use.” Search, Assistant, and Workspace were specifically named as the products Google hopes to imbue with better conversational features.

However, as the recent discourse proves, that wasn’t enough to stick in people’s minds. Rather, the onus is on Google for not providing more specific examples that captured the public’s imagination of how these new AI features could benefit the products they use every day.

Then again, even if more concrete examples had been introduced in May of 2022, they would have quickly been overshadowed by the launch of ChatGPT later that year. OpenAI’s offering is available to use (and pay for) today, and there’s nothing more tangible than firsthand experience. It has spurred many discussions about how direct answers affect Google’s advertising-based business model, with the reasoning being that users will no longer need to click on links if they get the answer as a generated sentence or summary.

What has stunned Google is the speed with which competitors have integrated these new AI advances into shipping products. Given the reported “code red,” the company clearly didn’t think it would have to release anything beyond demos so soon. Safety and accuracy are concerns Google has explicitly emphasized throughout its current previews, and executives are quick to point out how what’s on the market today can make things up, which would be reputationally damaging if launched at the scale of Google Search.

What will happen

The same day the layoffs were announced, a leak reported by the New York Times described more than 20 AI products that Google is planning to show off this year, as soon as I/O 2023 in May.

These announcements, likely led by a “search engine with chatbot features,” appear largely intended to match OpenAI toe-to-toe. Of particular note is Image Generation Studio, which looks like a competitor to DALL-E, Stable Diffusion, and Midjourney, with the Pixel wallpaper creator possibly an offshoot of it. Of course, Google will be wading directly into the artist backlash that generative AI imagery has provoked.

Besides Search (more on that later), none of the leaks seem to fundamentally change how the average user interacts with Google products. Then again, fundamental change has never really been Google’s approach, which has been to imbue existing products — or even just parts of them — with small conveniences as the technology becomes available.

There’s Smart Reply in Gmail, Google Chat, and Messages, while Smart Compose in Docs and Gmail doesn’t quite write emails for you, but its autocomplete suggestions are genuinely useful.

On the Pixel, there’s Call Screen, Hold for Me, Direct My Call, and Clear Calling, where AI is used to improve core phone-calling use cases, while on-device speech recognition enables excellent recording transcription and a faster Assistant. Of course, there’s also computational photography and now Magic Eraser.

This is not to say that Google has not used AI to create entirely new applications and services. Google Assistant is the result of advances in natural language understanding, while the computer vision that makes search and sorting possible in Google Photos is something we take for granted more than seven years later.

More recently, there’s Google Lens for searching visually by taking a photo and appending a question to it, while Live View in Google Maps provides AR directions.

Then there’s Search and AI

After ChatGPT, people imagine a search engine where your question is directly answered by a sentence generated entirely for you and your query, compared to getting links or a “featured snippet” quoting a relevant website that might have the answer.

Looking at the industry, it seems I’m in the minority in my lack of enthusiasm for conversational experiences and direct answers.

One problem I anticipate with that experience is that I don’t always (or even frequently) want to read an entire sentence to get an answer, especially if it can be found by scanning a single line at a glance, be it a date, a time, or some other simple fact.

Meanwhile, it will take time to trust the generative and summarization capabilities of chatbot search from any company. Featured snippets at least let me see the source and decide whether I trust the site the quote is coming from.

In many ways, that direct sentence is what smart assistants have been waiting for, as today’s Google Assistant defers to facts (dates, addresses, etc.) it already knows via the Knowledge Graph, and to featured snippets otherwise. When you’re interacting by voice, it’s safe to assume you can’t easily look at a screen and want an immediate answer.

I understand that the history of technology is littered with incremental updates being upended in short order by new, game-changing innovations, but the technology just doesn’t seem to be there yet. I’m reminded of the early days of voice assistants that explicitly tried to be a human in a box. This next wave of AI has shades of a human answering your question or doing a task for you, but how long will that novelty last?
