In a letter to Sundar Pichai, the CEO of Google, several U.S. Congress members, including Adam Schiff and Hank Johnson, have raised concerns about the new Google AI giving wrong answers. The AI Overview feature, designed to deliver quick AI-generated summaries on various topics, is facing scrutiny over its accuracy and the reliability of its sources. This highlights the broader issues of misinformation and trust in AI-created content.
The letter poses 11 detailed questions regarding the AI Overview feature. In this article, I will provide insights into each question based on information available from Google, the patent underlying AI Overviews, recent research, and members of the SEO community. Where direct answers are unavailable, I will include Google's relevant comments on the topic.
First, let’s examine the concerns raised by Congress over the Google AI giving wrong answers and Google’s responses so far.
Note: This article is neither an attack on nor a defense of Google. If I am missing anything or have the wrong information, please let me know so I can update the article. Also, I have no interest in engaging in political arguments over this topic.
Congressional Concerns Over Google AI Giving Wrong Answers
The letter cites several instances of AI Overviews disseminating untrustworthy or false information. For example, it erroneously cited The Onion, a satirical site, suggesting that eating rocks could supply essential minerals.

Another error included the false claim that former President Barack Obama is Muslim.
These mistakes, the Congress members argue, can have severe implications given how heavily Americans depend on Google for news on critical topics such as politics, health, and elections.
As Nate Hake, the founder of travellemming.com, notes below, Google has potentially “created the absolutely perfect ecosystem for disinformation.”
In addition to Schiff (D-Calif.) and Johnson (D-Ga.), Donald S. Beyer Jr. (D-Va.), Lori Trahan (D-Mass.), and Pramila Jayapal (D-Wash.) also signed the letter.
Google’s Response to AI Overview Problems
Following numerous incidents, Liz Reid, VP and Head of Google Search, issued a statement addressing the issues with AI Overviews. First, she stated that several of the examples people shared were “faked screenshots.” These examples included “topics like leaving dogs in cars, smoking while pregnant, and depression.”
Ms. Reid acknowledged that “inaccurate or unhelpful” summaries certainly did appear. She attributed these issues, in part, to “uncommon” or “nonsensical” search queries. For instance, she noted that the query “How many rocks should I eat?” is rarely asked on Google.
Ms. Reid said that Google refined the system to better detect ‘nonsensical queries,’ limit user-generated content, and impose restrictions on specific queries where an overview is not helpful.
Congress’s Questions: What We Know So Far
The letter from Congress to Google poses eleven wide-ranging questions about Google’s AI Overviews, including how the system works, how it selects queries to provide an overview for, how it determines source trustworthiness, and how it conducts fact-checking, among other aspects.
Here is what we know, and don’t know, so far.
Google’s Method for Selecting Queries for AI Overviews
Q: How does Google’s AI Overviews feature determine if an overview would be helpful to the user, synthesize information from across the web, and choose which information to include in the provided summary of key insights?
Determining Helpfulness: Google has not disclosed the specific criteria used by the AI Overview system to determine which queries trigger summaries. However, Google states that AI Overviews are specifically designed to help “with more complex questions that might have previously taken multiple searches or follow-ups.”
Supporting research indicates the nature of queries that typically activate AI Overviews. Advanced Web Ranking‘s study in July suggests that queries likely to trigger an AI Overview often contain five words and include terms like “how,” “SEO,” “safety,” “tips,” “practices,” “manage,” “understanding,” “importance,” “prepare,” and “best.” Additionally, a June study by BrightEdge indicated a higher likelihood of triggering AI Overviews for queries including phrases such as “best,” “what is,” “how to,” and “symptoms of.” SE Ranking adds that longer queries also tend to activate AI Overviews, suggesting that the system is calibrated to recognize and respond to queries demanding more detailed responses or involving greater complexity.
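These findings are observational patterns from third-party studies, not Google’s disclosed trigger logic, but they can be summarized in a rough heuristic. In the sketch below, the term list and the five-word threshold are illustrative assumptions drawn from the studies cited above, not confirmed trigger criteria:

```python
# Illustrative heuristic based on the Advanced Web Ranking, BrightEdge,
# and SE Ranking findings -- NOT Google's actual trigger logic.
TRIGGER_TERMS = {
    "how", "seo", "safety", "tips", "practices", "manage",
    "understanding", "importance", "prepare", "best",
    "what is", "how to", "symptoms of",
}

def likely_triggers_ai_overview(query: str) -> bool:
    """Rough guess at whether a query resembles those the studies
    found to trigger AI Overviews: longer queries containing
    informational terms."""
    q = query.lower()
    has_trigger_term = any(term in q for term in TRIGGER_TERMS)
    is_long = len(q.split()) >= 5  # the studies cite roughly five-word queries
    return has_trigger_term and is_long

print(likely_triggers_ai_overview("how to prepare for a site migration"))  # True
print(likely_triggers_ai_overview("facebook login"))                       # False
```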
Synthesizing Information from Across the Web: The Google patent provides insights into how AI Overviews work. When a user submits a query, the system decides whether to create a summary using a large language model (LLM) based on its training data, or to seek out and synthesize additional sources from the internet through Google’s index. This dual approach allows the system to leverage both its training data and the extensive information available online, providing a comprehensive synthesis of relevant information.
Choosing Information to Include: The selection of information to include in the summary is influenced by several measures, including the document’s positional ranking, selection rate, language, geographical area, freshness, and more. Trustworthiness is a critical factor, referenced multiple times in the patent, with specific attention to the author, domain, and inbound links. Ms. Reid further explains that “the model is integrated with our core web ranking systems and is designed to perform traditional ‘search’ tasks, like identifying relevant, high-quality results from our index.” This integration means the summaries draw from what Google defines as authoritative and credible sources.
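The patent names these factors but does not say how they are combined. As a purely hypothetical sketch, a weighted scoring of the listed criteria might look like the following (the weights and the linear combination are invented for illustration):

```python
# Hypothetical scoring sketch based on the criteria the patent lists
# (positional rank, selection rate, freshness, trustworthiness).
# The weights and the linear combination are invented for illustration;
# the patent names the factors but not how they are combined.
from dataclasses import dataclass

@dataclass
class Candidate:
    url: str
    positional_rank: int    # 1 = top organic result
    selection_rate: float   # historical selection/click rate, 0-1
    freshness: float        # 0-1, newer is higher
    trustworthiness: float  # 0-1, author/domain/inbound-link signals

def inclusion_score(c: Candidate) -> float:
    rank_score = 1.0 / c.positional_rank  # higher organic rank -> higher score
    return (0.3 * rank_score + 0.2 * c.selection_rate
            + 0.2 * c.freshness + 0.3 * c.trustworthiness)

candidates = [
    Candidate("https://a.example", 1, 0.40, 0.8, 0.9),
    Candidate("https://b.example", 5, 0.70, 0.9, 0.6),
]
# Higher-scoring documents would be more likely to feed the summary.
for c in sorted(candidates, key=inclusion_score, reverse=True):
    print(c.url, round(inclusion_score(c), 3))
```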
Ensuring Content Accuracy and Trustworthiness
Q: Can businesses optimize their content, regardless of accuracy, to be more likely to be included in AI Overviews summaries?
Hypothetically, yes.
To answer this question, we first need to understand how the system builds the summary and identify potential areas that could be manipulated to generate false information.
Overview of AI Overviews and Gemini’s Capabilities: AI Overviews utilize Gemini, a large language model (LLM) with its own knowledge base derived from extensive training data. Like other LLMs, Gemini is trained on a vast corpus of data, which might include erroneous information. To enhance accuracy, Gemini employs retrieval augmented generation (RAG), allowing it to pull in external documents (Google’s index) to ensure information is up-to-date and accurate.
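Google has not published the components behind AI Overviews, so the following is only a minimal conceptual sketch of the RAG pattern described above; `search_index` and `llm_generate` are hypothetical stubs standing in for Google’s index retrieval and Gemini:

```python
# Minimal conceptual sketch of retrieval augmented generation (RAG).
# search_index and llm_generate are hypothetical stubs; Google has not
# published the actual components behind AI Overviews.
from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    text: str

def search_index(query: str, top_k: int = 10) -> list:
    """Stub for retrieval from an external index (for AI Overviews,
    this would be Google's own search index)."""
    return [Doc("https://example.com/page", f"Example passage about {query}.")][:top_k]

def llm_generate(prompt: str) -> str:
    """Stub for the LLM call (Gemini, in Google's case)."""
    return f"Summary grounded in retrieved sources for: {prompt[-40:]}"

def generate_overview(query: str) -> str:
    # 1. Retrieve documents responsive to the query from the index.
    documents = search_index(query)
    # 2. Augment the prompt with retrieved passages, grounding the
    #    answer in current sources rather than training data alone.
    context = "\n\n".join(d.text for d in documents)
    prompt = f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
    # 3. Generate the summary with the LLM.
    return llm_generate(prompt)

print(generate_overview("how do ai overviews choose sources"))
```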
Ms. Reid, elaborating on the integration of this model with Google’s core systems, states, “The model is integrated with our core web ranking systems and is designed to carry out traditional ‘search’ tasks, like identifying relevant, high-quality results from our index. That’s why AI Overviews don’t just provide text output, but include relevant links so people can explore further. Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by top web results.”
Advantages of Retrieval Augmented Generation (RAG): Rhiannon Williams, a reporter for MIT Technology Review, notes the benefits of RAG, stating, “One major upside of RAG is that the responses it generates to a user’s queries should be more up to date, more factually accurate, and more relevant than those from a typical model that just generates an answer based on its training data.”
Challenges and Limitations: Despite its advanced capabilities, the system is not foolproof. Williams explains, “In order for an LLM using RAG to come up with a good answer, it has to both retrieve the information correctly and generate the response correctly. A bad answer results when one or both parts of the process fail.” This highlights the dual challenges in the accuracy of retrieval and the generation of responses.
Implications for Businesses: If AI Overviews sometimes include inaccurate information due to flawed document retrieval or generation, there is a hypothetical possibility for businesses or website owners to influence their presence in AI Overviews. They can potentially optimize their content to rank higher than the source documents used by the AI. By improving their site’s relevance and reducing the “embed distance” (a measure of relevance between the content in the summary and the on-page content), businesses might increase their chances of being featured in AI Overviews.
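The patent does not define how this distance is computed. As a toy illustration of the general idea, the sketch below scores how close a page’s text sits to a summary statement; the bag-of-words vectors are a simplified stand-in for the dense neural embeddings a production system would use:

```python
# Toy illustration of "embed distance": how close a page's content is,
# in vector space, to a statement in the summary. A real system would
# use dense neural embeddings; bag-of-words vectors are a stand-in.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine_distance(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return 1 - dot / norm if norm else 1.0

summary_claim = "drinking water helps regulate body temperature"
page_a = "water intake helps the body regulate its temperature"
page_b = "our agency offers affordable seo packages"

# The page with the smaller distance is the more plausible citation.
print(cosine_distance(embed(summary_claim), embed(page_a)))  # small distance
print(cosine_distance(embed(summary_claim), embed(page_b)))  # distance of 1.0
```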
Monitoring Usage Trends Post-AI Overviews Rollout
Q: How has the rate of Americans using the Google search feature changed since the AI Overviews feature was rolled out? Please provide specific data trends with respect to searches about elections and political information, health or medical information, and current events.
In the AI Overview announcement in May, Liz Reid, VP and Head of Google Search, stated, “People have already used AI Overviews billions of times through our experiment in Search Labs.” She added that “the links included in AI Overviews receive more clicks than if the page had appeared as a traditional web listing for that query.” This refers to its previous incarnation known as the Search Generative Experience (SGE). However, Google has not provided any updated usage statistics since AI Overview’s official release.
We cannot determine the rate of usage of AI Overviews with any analytics tools at this time. Google does not offer visibility or user-interaction data for AI Overviews in its analytics tools, such as Google Search Console. According to Search Engine Land, “Google will not break down impressions and click data for AI Overview links in Google Search Console.” However, third-party tools like Semrush and Zip Tie provide some insight into which queries trigger AI Overviews, offering indirect evidence of their visibility.
Addressing Misinformation and Fact-checking
Q: What existing fact-checking and content policies does Google extend to the AI Overviews feature? How does Google enforce these policies to prevent dissemination of misinformation and disinformation?
Google likely applies its existing content policies to AI Overviews, leveraging the same standards applied to organic search results.
AI Overviews source information from both its training data and the Google index. When sourcing from the Google index through RAG, the system employs the company’s established content policies, which are designed to ensure the reliability and accuracy of the information presented.
Regarding the enforcement of these policies, Ms. Reid emphasized, “Accuracy is paramount in search. AI Overviews are built to only show information that is backed up by top web results.” This statement correlates high positional rankings with the accuracy of information. The system’s algorithm utilizes specific factors or signals to help rank accurate information higher in search results. Consequently, the quality of these search results serves as a benchmark for the content included in AI Overviews.
There are a couple of issues here I will highlight. First, Google has indicated that its systems do not understand documents. The image below is from an internal Google presentation from 2016.

Dr. Marie Haynes, owner of Marie Haynes Consulting, Inc., noted that in Google’s guide to how they fight disinformation, the company says, “Our ranking system does not identify the intent or factual accuracy of any given piece of content. However, it is specifically designed to identify sites with high indicia of expertise, authority, and trustworthiness.” As Haynes points out, this still doesn’t tell Google whether the content on those pages is actually likely to be accurate.
Second, AI Overviews do not only link to the top results for the query. As recent studies have found (Advanced Web Ranking, Authoritas, and SE Ranking), AI Overviews often contain links to URLs that rank low in, or are absent from, the organic search results for that query. As indicated in the patent, the AI Overview system will also seek out documents responsive to related queries.
Google enforces its content policies both through automated systems and through manual review. Its policy states: “Google’s automated systems help protect against objectionable material. Search results should be useful and relevant, and limit spam responses. We may manually remove content that goes against Google’s content policies, after a case-by-case review from our trained experts. We may also demote sites, such as when we find a high volume of policy content violations within a site.”
User Warnings on Potential Misinformation
Q: How does Google warn users that AI Overviews could be providing misinformation and disinformation? Has the company increased the presence of warning labels regarding misinformation and disinformation since launching AI Overviews? If so, please describe how. If not, please explain why.
Google has incorporated a few mechanisms to warn users about the potential inaccuracies in AI Overviews and to highlight the experimental nature of this technology. The AI Overview feature states that it is experimental in the lower left corner of the overview, serving as an immediate visual cue to users about the nature of the content.

Specifically for finance-related queries, the notice directs the user to consult a professional.

And for health-related queries, it directs the user to consult a professional for medical advice.

Additional information about the quality of the content is accessible but not immediately visible; it requires user interaction. Users must click on the “Learn more” option located in the upper right corner of the AI Overview pane. This section states that “This overview was generated with the help of AI” and that it “is experimental and information quality may vary.” See image below:

Google has not yet implemented the visible confidence annotations described in its patent, which could categorize information with either text or color-coded confidence levels (green for high confidence, orange for medium, and red for low).

The absence of these annotations in the live AI Overviews suggests either ongoing development challenges or a decision to evaluate the effectiveness and user response to current warnings before introducing more complex indicators.
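For illustration, the patent’s color-coded annotation could be imagined as a simple mapping from a confidence score to a label; the cutoff values below are invented, since the patent specifies the colors but not the thresholds:

```python
# Sketch of the color-coded confidence annotation described in the
# patent. The thresholds below are invented for illustration; the
# patent specifies the colors but not the cutoffs.
def confidence_color(score: float) -> str:
    if score >= 0.8:
        return "green"   # high confidence
    if score >= 0.5:
        return "orange"  # medium confidence
    return "red"         # low confidence

for s in (0.92, 0.61, 0.30):
    print(s, "->", confidence_color(s))
```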
Response to Misinformation
Q: What is the company’s current capability and response timeline regarding the identification, vetting process, and removal of misinformation and disinformation provided through AI Overviews?
During the early rollout of AI Overviews, Google was seemingly plugging holes in a sinking ship. It appeared that they were rapidly disabling misleading summaries within a few hours of their detection. A week after the AI Overview rolled out, Lily Ray, the Vice President, SEO Strategy & Research at Amsive, shared on X that “every viral AI Overview mishap seems to be disabled a few hours later.”
I reached out to Lily Ray to discuss how Google’s AI Overviews are doing more recently. She shared the following:
“Throughout both the SGE beta testing as well as in live AI Overviews, it appears Google is definitely focused on reducing misinformation and inaccuracies in AI-generated answers. However, due to the nature of LLMs, it’s still entirely possible for AI to get things wrong, or for the answers to be seen as biased or one-sided. Google appeared to have disabled AI answers from appearing for controversial queries for a while as it worked through these challenges, but they appear to be coming back in recent weeks. I believe we will continue to see some issues arise related to the perceived quality or accuracy of the answers – this is an inevitability when LLMs are used to answer questions.”
Google’s proactive and quick response to errors shows their commitment to maintaining the integrity of AI-generated content, but the complexity of AI systems means that continual monitoring and adjustment are necessary.
Community Feedback Utilization
Q: How many users have used the feedback feature to report misleading information provided through AI Overviews? How does Google use this feedback to improve the accuracy of information provided in future searches?
Google has reported receiving positive feedback from users, though it has not disclosed the volume of this feedback. In an earnings call on July 23, Pichai stated that Google is “pleased to see the positive trends from our testing continue as we roll out AI Overviews, including increases in search usage and increased user satisfaction with the results.”
How does Pichai define user satisfaction with AI Overviews? Is he referring to feedback utilization, or does his definition include interaction data such as link clicks, dropdown expansions, and carousel interactions? Currently, these questions remain unanswered.
How can users provide feedback on AI Overviews?
AI Overviews offer two ways to leave feedback. The first is a thumbs up or thumbs down button located at the bottom of the AI Overview.

For more specific feedback, the user can select the Learn more option and then select Feedback.

In this section, the user can flag the AI Overview as offensive/unsafe, unhelpful, or not factually correct, or select “product design or functionality” or “other.” There is also a text box where the user can provide specific details.

To date, Google has not publicly disclosed the number of users who have reported misleading information in AI Overview summaries.
Response to Misinformation and System Improvements
Q: Google spokesperson Colette Garcia said in a statement that the company is “taking swift action where appropriate under our content policies and using these examples to develop broader improvements to our systems.” Please elaborate on what these actions are and the status of any improvements to the AI Overviews feature.
Content policy violations in AI Overviews are rare, according to Ms. Reid. She stated that Google “found a content policy violation on less than one in every 7 million unique queries on which AI Overviews appeared.”
She elaborated that the system has difficulty with satirical and sarcastic content that often appears in forums. Recent research indicates that Google has taken action in this respect. BrightEdge found “a steep decline in the citations from UGC sources that might contain content that is hard for the LLM to distinguish from authoritative ones” and that they saw “Reddit and Quora citations fall off almost entirely in AI Overviews.”
Ensuring Clinical Accuracy in Health-Related AI Overviews
Q: What specific steps is Google taking to ensure that health-related search results through AI Overviews are clinically accurate and scientifically based? Are personnel at Google aware of any instances where health-related search results from AI Overviews resulted in bodily harm to a user?
Advanced Web Ranking’s study found that health and safety are among the top industries where AI Overviews currently appear. This underscores the Congress members’ concern for accuracy. Google has stated that AI Overviews have tight restrictions for news and health-related topics. As per Ms. Reid:
“For topics like news and health, we already have strong guardrails in place. For example, we aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections.”
Google’s content policy states that they “don’t allow content that contradicts or runs contrary to scientific or medical consensus and evidence-based best practices.” It appears that Google is making efforts to ensure that AI Overviews are citing trusted sources for health-related searches. The Advanced Web Ranking study found that the “websites that appear in AI Overviews are trusted information sources and are highly visible in organic search results.” These include trusted sites like mayoclinic.org, healthline.com, and webmd.com. See the top 10 in the table below.

Google has not publicly reported any instances of bodily harm to a user as a result of erroneous information shared in an AI Overview.
Government Collaboration for Information Accuracy
Q: How is Google working with relevant government entities, including the Cybersecurity and Infrastructure Security Agency (CISA), to ensure that only accurate information is provided through AI Overviews, particularly with respect to information about voting and elections?
The process Google uses to handle government requests for the removal of AI Overview content is probably similar to its standard approach for all content. Google provides a specific page for governments, including the United States, to submit removal requests. These requests are then evaluated by Google “to determine if content should be removed because it violates a law or” their “product policies.”

The United States has used this procedure quite often. In 2023, the United States submitted 1,169 requests through this page; 205 of those were for web search. The most-cited reasons were defamation (221), trademark (161), privacy and security (153), and bullying/harassment (128). One request was submitted regarding electoral law. The data on the site runs only through 2023 and does not include any AI Overview-related requests.
Regarding First Amendment rights, current legal interpretations do not view U.S. content removal requests as violations. Recently, the Supreme Court overruled a lower court’s decision regarding a “challenge made on free speech grounds to how officials encouraged the removal of posts deemed misinformation, including about elections and COVID.” To date, the government’s use of standard request procedures has not been found to violate the First Amendment.
Transparency in Source Utilization
Q: What steps does Google plan to take to make clear what sources are being used to generate each part of an AI Overview, since different sources of information may have different degrees of trustworthiness?
The AI Overviews provide clues as to the sources used to create and verify the statements. First, the AI Overviews may be broken down into sections which show the documents that support each section. If you select the arrow under each section, it will show the specific sources cited:

Second, the links may include text fragments that identify the specific on-page content verifying the statement in the AI Overview.
Text fragment example:

When you select the link, it can take you to the exact text referenced on the page, with that text highlighted. Links do not always have text fragments, and even when they do, they may not always highlight the specific on-page text:

YouTube videos often include a timestamp in the URL indicating the portion of the video used as a reference. When selected, it takes the searcher to the specific location in the video that the text is referencing.
Time stamp in the URL indicates seconds:

These indicators might not be easily understood or readily apparent to the average user.
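Mechanically, though, both link formats follow documented URL conventions: the scroll-to-text fragment (#:~:text=) that highlights quoted page text, and YouTube’s t parameter, which gives a start time in seconds. The sketch below shows how such links are formed; the example URLs are hypothetical:

```python
# Both link formats follow standard URL conventions: the scroll-to-text
# fragment (#:~:text=) and YouTube's t parameter (seconds). The example
# URLs below are hypothetical.
from urllib.parse import quote

def text_fragment_url(page_url: str, quoted_text: str) -> str:
    """Link that scrolls to and highlights `quoted_text` on the page
    (in browsers that support text fragments)."""
    return f"{page_url}#:~:text={quote(quoted_text)}"

def youtube_timestamp_url(video_url: str, seconds: int) -> str:
    """Link that starts the video at the referenced moment."""
    sep = "&" if "?" in video_url else "?"
    return f"{video_url}{sep}t={seconds}"

print(text_fragment_url("https://example.com/article", "eating rocks is not advised"))
print(youtube_timestamp_url("https://www.youtube.com/watch?v=VIDEO_ID", 75))
```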
This past weekend, Lily Ray, Glenn Gabe, and Barry Schwartz noticed that Google is testing new formats for indicating sources. These include link icons in AI Overviews that show citations in an overlay window.
Future Implications for Google and AI
This congressional scrutiny of Google’s AI Overviews signals increasing awareness of, and concern over, the impact of AI technologies on public information. With the upcoming election, there are legitimate concerns over the potential for misinformation to influence voter behavior and public opinion.
It challenges Google to enhance transparency and accountability in how AI-generated content is created, the sources it uses and links to, and how it’s presented. Additionally, it raises questions about what ethical responsibilities tech giants have in moderating content while pursuing innovation.
Conclusion
The concerns raised by Congress regarding Google AI giving wrong answers highlight the wider issues of misinformation and trust in AI-generated content. Google’s response so far, while in many cases proactive, leaves several of Congress’s questions partially or wholly unanswered, highlighting a gap in transparent communication and the potential for misinformation.
There’s a clear need for ongoing monitoring and adjustment of how AI Overviews are generated and the sources they utilize. Google must continue to refine its processes, improve the accuracy and transparency of AI-generated content, and ensure that the AI Overviews do not compromise ethical standards or public trust.
Note: I will update this article if and when Google’s responses to the letter and its questions become public.
