
This is my first new blog post in four years. It is the Google AI Overview panel that prompted me to write it, now that it has also been introduced in the Netherlands. For those who do not know Google AI Overview: it is basically an automatically generated answer to your search query.

I believe that it is a troubling development that Google and other search engine providers increasingly try to guess the correct answer to a user's query - instead of providing a list of results, as used to be the case.

In this post, I will explain how this will negatively change our search behavior, and discuss how AI could actually improve our search - not by taking over the process, but by encouraging us to look further. I will also link to some relevant further reading.

Generative AI, LLMs and chatbots are not to be blindly trusted

First, let me quickly provide some basic technological background. We all know by now that generative AI and large language models (LLMs) may provide surprisingly well-formulated and largely correct answers to user prompts and other types of input. But these answers are generated based on probabilities - and people and bots who guess may simply be wrong. When people do this, we may call it 'lying'; when AI does it, we kindly call it 'hallucinating'.
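To make this concrete, here is a minimal sketch of what 'generated based on probabilities' means. The vocabulary and the probabilities are entirely invented for illustration - they do not come from any real model - but the mechanism is the point: the model samples from a likelihood distribution and has no separate notion of truth.

```python
import random

# Invented next-word probabilities for the prompt
# "The capital of Australia is ..." - illustrative numbers only,
# not taken from any real model.
next_word_probs = {
    "Canberra": 0.55,    # correct
    "Sydney": 0.35,      # fluent, plausible, and wrong
    "Melbourne": 0.10,   # also plausible, also wrong
}

# A language model samples in proportion to probability, so a
# wrong-but-likely answer wins a substantial fraction of the time.
words, weights = zip(*next_word_probs.items())
print(random.choices(words, weights=weights, k=1)[0])
```

Run this a few times and you will see the wrong answers appear regularly - fluently and confidently.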

Some time ago, I asked AI to write my own biography and I was amused to read that I was apparently a professor at the Rijksuniversiteit Groningen and a specialist in... well, I forgot, but it was definitely not my field of expertise. Things get worse when a bot advises students who are cooking for the first time to use glue to keep the ingredients from sliding off their pizza.

This is in line with what Vint Cerf wrote in a CACM column: "Chatbots are great for entertainment and fiction. They are less useful for factual answers although there is a lot of effort being put into retraining with human feedback to improve accuracy."

Searchers need to explore

Perhaps the real problem is not just that AI and LLMs are not perfect, but that AI interfaces seduce users into trusting the outcomes: if users ask a question or issue a query and receive a plausible answer, they probably will not search any further. However, when you have a task that goes beyond a simple fact check, you want to make sure that you receive the best answers in order to make the correct or most desirable decisions.

For instance, suppose you want to plan a weekend theater trip to London. Before you book (and pay for) this trip, you want to ensure that you have the most convenient and affordable flight, a hotel in the right neighborhood, trustworthy travel directions, and proper theater seats. If a Google AI Overview gives you a narrative with reasonable options, you may be tempted to just go for them - and, once there, realize that there were options you would have liked much better, such as a cheaper hotel at a more convenient location.

The irony is that you could have found these options yourself, if you had simply explored them. Or, as Jaime Teevan and colleagues put it back in 2006: if you had gathered context by 'orienteering' toward what you actually need: the perfect search engine is not enough.

We users are lazy

When I say that users are lazy, this is not meant as criticism: I am a lazy user too, and I am happy to spend as little time as needed on looking up a word, checking out a restaurant, buying my groceries, or selecting a series to watch.

Just look back at how streaming services such as Netflix have significantly changed the way we watch television. Up until about 2007, families would receive a weekly television magazine; I remember marking down the programs I wanted to see during the upcoming week. Nowadays, we conveniently select the first series we like from a personalized feed of recommendations. Is that progress? Well, it is, but only if you are not interested in finding out what else is out there. In a journal article, we explain how strategies for active decision making can help reverse this process.

So, back to Google AI Overview. It has already been established that the automatically generated answer may be wrong, false, incomplete or just not the best answer you could have received. Google even acknowledges this above the Overview panel. Still, users will be satisfied with the answers anyway and often look no further, because the answer is (or seems) good enough. This is not a new insight: theories on information foraging and decision making have shown this for decades. But there are (many) situations where users need to be challenged to explore just a bit further.

AI Overviews should challenge, not obey

Artificial intelligence definitely has a purpose in helping us with our search. Ryen White from Microsoft describes in a CACM article how to advance the search frontier with AI agents. He shows how planning a vacation in Paris involves various subtasks that could be supported by clever AI agents - similar to my example of the trip to London above.
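To illustrate the idea in toy form (the subtasks and agent names below are my own invention, not taken from White's article): a planning task can be decomposed into subtasks, each delegated to a specialized agent, with the results presented side by side rather than collapsed into one answer.

```python
# Toy decomposition of a travel-planning task into subtasks, each
# handled by a (here trivially simulated) specialized agent.
def flight_agent(city: str) -> str:
    return f"compare flights to {city} by price and schedule"

def hotel_agent(city: str) -> str:
    return f"list hotels in {city} by price, rating and location"

def theater_agent(city: str) -> str:
    return f"find available theater seats in {city}"

# An orchestrating assistant gathers all results and presents them
# for comparison, rather than committing to a single answer.
for agent in (flight_agent, hotel_agent, theater_agent):
    print(agent("London"))
```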

Another CACM article offers a conceptual solution for problems with automated decisions by AI (such as users believing Overview panels, or blindly selecting the first restaurant recommended by TripAdvisor): AI should challenge, not obey (yes, I borrowed the article's title for my subheading).

AI should support and enhance the user's search process, not take it over

What Google tries to do with the Overview panels is to provide users with a convenient one-stop solution for everything they try to find out. I hope that, by now, you understand why this does not work - and why users will still happily adopt the panels. Instead, AI could challenge the user to check whether the results are up-to-date and correct, or to look whether there are even better alternatives.

Suppose a lazy conference organizer needs my biography. It is very likely that an automatically generated biography will contain factual errors (such as that I am affiliated with the Rijksuniversiteit Groningen - which is not unlikely, but not the case at this moment). Instead, suppose that the conference organizer finds an outdated page that mentions Radboud University Nijmegen as my affiliation (I worked there until 2022): wouldn't it be great if an AI panel reminded them to check - for instance, on LinkedIn - whether that is still my current affiliation? They would then learn that I currently work at Utrecht University.
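As a sketch of what such a nudge could look like (the function name, threshold and wording below are my own invention, not an existing API): a panel could compare the date of the page it is about to cite against a freshness threshold and, instead of answering confidently, prompt the user to verify.

```python
from datetime import date

# Hypothetical helper: given the date of the page an AI panel is
# about to cite, decide whether to nudge the user to double-check.
# The threshold and wording are invented for illustration.
def verification_nudge(claim: str, source_date: date,
                       max_age_days: int = 365) -> str:
    age_days = (date.today() - source_date).days
    if age_days > max_age_days:
        return (f"The claim '{claim}' is based on a page from "
                f"{source_date}. It may be outdated - consider "
                f"checking a current profile, e.g. on LinkedIn.")
    return f"{claim} (source dated {source_date})"

print(verification_nudge(
    "Eelco Herder is affiliated with Radboud University Nijmegen",
    date(2021, 6, 1),
))
```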

Or, returning to the weekend theater trip to London: if I tried to book a hotel in the city center, wouldn't it be nice to learn about cheaper, nicer hotels a bit further out, but perfectly connected by public transport? That would allow me, as a user, to consider alternative options and make active decisions - it might even convince me that what I actually want is to go to Paris instead of London.
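A deliberately simple sketch of that idea (the hotels, prices and travel times below are made up): instead of returning only a single 'best' hit, a search assistant could rank alternatives by a transparent trade-off and show them all, letting the user decide.

```python
# Made-up candidate hotels: (name, price per night in GBP, minutes
# by public transport to the theater district).
hotels = [
    ("Central Plaza", 240, 5),
    ("Riverside Inn", 150, 20),
    ("Suburb Lodge", 90, 35),
]

# Rather than picking one winner, rank by an explicit trade-off
# (price plus a cost per minute of travel) and show every option,
# so the user makes an active, informed decision.
COST_PER_MINUTE = 2  # arbitrary weight, for illustration only
for name, price, minutes in sorted(
        hotels, key=lambda h: h[1] + COST_PER_MINUTE * h[2]):
    print(f"{name}: GBP {price}/night, {minutes} min to the theater")
```

The point is not the particular weighting, but that the trade-off is visible and adjustable, rather than hidden inside a single generated recommendation.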

Google's search results are polluted by AI

All of the observations above lead to my personal rant: I have become so unhappy with Google's search result pages that I recently set DuckDuckGo as my default search engine. Not for privacy reasons (which are very valid reasons, by the way), but for usability reasons.

I have become increasingly dissatisfied with the information panels, the product suggestions, the news snippets and the infamous 'people also ask' boxes that are supposed to be helpful, but rarely are. The introduction of the AI Overview panel is the final straw. I already encountered it about a month ago during a business trip to Chicago, and I was not impressed at all.

DuckDuckGo has been my browser's default search engine for about a week now, which means that I use it automatically. And every time, I am pleasantly surprised by the clean result pages. Not just because this is what it was like in the old days, but because it actually works.
