Dataconomy
Google wants AI to build web pages instead of just writing text

Generative UI proves text chatbots are becoming obsolete.

by Aytun Çelebi
November 20, 2025
in Research

Google unveiled Generative UI on Monday, a technology that lets AI models generate fully customized, interactive interfaces in response to user prompts. Powered by Gemini 3 Pro, it is rolling out in the Gemini app and in Google Search’s AI Mode, delivering dynamic experiences that go beyond static text responses.

Generative UI produces diverse outputs such as web pages, interactive tools, games, and simulations from any question or instruction a user provides. This marks a shift from conventional chatbot interactions, which typically return only text, to complete, interactive user interfaces tailored to specific needs. The rollout begins in the Gemini app, where users encounter these generated elements directly, and extends to Google Search’s AI Mode, which adds interactive components to search results.

A research paper titled “Generative UI: LLMs are Effective UI Generators,” released alongside the announcement, details the evaluation process. Human evaluators compared AI-generated interfaces against standard large language model outputs, with generation speed excluded as a variable. The results showed a strong preference for the interactive interfaces, indicating their effectiveness for user engagement and comprehension. The paper, authored by Google researchers including Fellow Yaniv Leviathan, provides empirical support for the technology’s viability.

Within the Gemini app, Google is testing two distinct implementations of Generative UI. The dynamic view leverages Gemini 3’s coding abilities to design and code a bespoke interface for each individual prompt. The system analyzes the prompt’s context to adapt both the content presented and the interactive features included, ensuring relevance to the user’s intent. For instance, it generates code on the fly to build elements like buttons, forms, or visualizations that respond to user inputs in real time.
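Google has not published an API or implementation details for Generative UI, so the following is only a speculative sketch of what a per-prompt "dynamic view" step might look like. All names here (`GeneratedView`, `generate_interface`) are invented, and the model call is replaced by a stub that emits a minimal self-contained interactive page; a real system would send the prompt to the model with instructions to return a complete HTML document.

```python
# Hypothetical sketch; the model call is stubbed, since Google has not
# published a public API for Generative UI.
from dataclasses import dataclass


@dataclass
class GeneratedView:
    html: str   # self-contained interface code
    title: str


def generate_interface(prompt: str) -> GeneratedView:
    """Stand-in for the model designing and coding a bespoke interface.

    A real pipeline would ask the model for a complete, self-contained
    HTML document; this stub emits one interactive element (a counter).
    """
    title = prompt.strip().capitalize()
    html = f"""<!doctype html>
<html><head><title>{title}</title></head>
<body>
  <h1>{title}</h1>
  <button onclick="c.textContent = Number(c.textContent) + 1">Try it</button>
  <span id="c">0</span>
</body></html>"""
    return GeneratedView(html=html, title=title)


# Each prompt yields its own bespoke page rather than a text reply.
view = generate_interface("explain compound interest")
```

The point of the sketch is the shape of the contract: one prompt in, one complete interactive document out, rather than a string of text.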

The visual layout implementation, by contrast, produces magazine-style views featuring modular interactive components. Users receive a structured layout resembling a digital publication, with sections that can be expanded, modified, or interacted with further. This format allows for visual storytelling combined with functionality, such as draggable elements or embedded simulations, making complex information more accessible through graphical means.

Google emphasizes the technology’s ability to personalize outputs according to the audience. As stated in the company’s research blog, “It customizes the experience with an understanding that explaining the microbiome to a 5-year-old requires different content and a different set of features than explaining it to an adult.” This tailoring involves adjusting language complexity, visual aids, and interaction levels to match the recipient’s knowledge and age, drawing on the model’s contextual reasoning capabilities.
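The audience tailoring Google describes could be approximated at the prompt level. This is a hypothetical illustration, not Google's method: the audience tiers, the profile wording, and the function name are all assumptions, inspired by the microbiome example above.

```python
# Hypothetical sketch of audience-aware prompt conditioning; the tiers
# and wording are invented for illustration.
AUDIENCE_PROFILES = {
    "child": "Use short sentences, friendly analogies, and large tappable buttons.",
    "adult": "Use precise terminology and denser layouts with expandable detail.",
}


def build_system_instruction(topic: str, audience: str) -> str:
    """Compose an instruction that adapts content and features to the reader."""
    profile = AUDIENCE_PROFILES.get(audience, AUDIENCE_PROFILES["adult"])
    return (
        f"Generate an interactive interface explaining {topic}. "
        f"Audience: {audience}. {profile}"
    )


# The same topic yields different instructions for different audiences.
for_child = build_system_instruction("the microbiome", "child")
for_adult = build_system_instruction("the microbiome", "adult")
```

In practice the model's own contextual reasoning would drive this adaptation; the sketch just makes the conditioning explicit.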

In Google Search, access to Generative UI occurs through AI Mode, limited to Google AI Pro and Ultra subscribers in the United States. Users activate it by choosing “Thinking” from the model dropdown menu, which then processes queries to generate tailored interactive tools and simulations. This integration enriches search experiences by providing hands-on explorations of topics, such as financial calculators or scientific models, directly within the search interface.

https://storage.googleapis.com/gweb-research2023-media/media/Dynamic_View_Van_Gogh_1920x1080.mp4

Video: Google

The underlying system combines Gemini 3 Pro with specific enhancements: tool access enables image generation and web search integrations, allowing the AI to incorporate real-time data and visuals into interfaces. Carefully crafted system instructions guide the model’s behavior to align with user expectations, while post-processing steps correct common errors like layout inconsistencies or factual inaccuracies. These components work together to refine outputs before presentation.
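The stages described here (a model call guided by system instructions, followed by post-processing that corrects common errors) can be illustrated with a toy pipeline. This is a sketch under stated assumptions: the model call is stubbed, the repair step is reduced to a single structural fix, and none of these function names come from Google.

```python
# Toy pipeline mirroring the described stages: generate, then post-process.
# Both functions are invented stand-ins, not Google's implementation.


def model_generate(prompt: str) -> str:
    """Stub for the model call; returns a fragment with a common defect
    (an unclosed <p> tag) to give the repair step something to fix."""
    return f"<div><h1>{prompt}</h1><p>Generated content</div>"


def post_process(html: str) -> str:
    """Repair pass standing in for Google's error correction:
    insert a missing </p> before the closing </div>."""
    if "<p>" in html and "</p>" not in html:
        html = html.replace("</div>", "</p></div>")
    return html


def generate_ui(prompt: str) -> str:
    """Full pipeline: raw generation, then cleanup before presentation."""
    return post_process(model_generate(prompt))
```

The design point is that the raw model output is never shown directly; a deterministic cleanup stage sits between generation and presentation.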

To advance external research, Google developed the PAGEN dataset, comprising websites designed by experts across various domains. This collection serves as a benchmark for training and evaluating UI generation models. The dataset will soon become available to the broader research community, facilitating studies on AI-driven interface creation and improvement.


https://storage.googleapis.com/gweb-research2023-media/media/AIM-CAPYBARA-RNA-1920x1080-Under20MB.mp4

Video: Google

Current versions of Generative UI still have notable limitations. Generation times often exceed one minute, depending on the complexity of the prompt and the interface required. Outputs occasionally contain inaccuracies, such as incorrect data representations or functional glitches, which Google identifies as active areas of research; efforts focus on improving speed and reliability through iterative model updates and refined processing techniques.

This unveiling aligns with the launch of Gemini 3, Google’s most advanced AI model to date. Gemini 3 Pro achieved a score of 1,501 on the LMArena leaderboard, outperforming prior iterations in overall performance metrics. On the GPQA Diamond benchmark, designed for PhD-level reasoning tasks, it reached 91.9 percent accuracy. Additionally, without external tools, it scored 37.5 percent on Humanity’s Last Exam, a comprehensive test of advanced knowledge across disciplines.



Tags: generative UI, Google


COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.
