Dataconomy
How LLMs are quietly becoming the ultimate city historians


by Kerem Gülen
April 14, 2025
in Research

Urban change usually sneaks up on us. A new café here. A painted overpass there. But what if you could see an entire decade of a city’s visual transformation, automatically captured, sorted, and explained by AI?

That is exactly what a new research project called Visual Chronicles set out to do. Developed by researchers from Stanford and Google DeepMind, this system used multimodal large language models (MLLMs) to analyze over 40 million Google Street View images from New York City and San Francisco. It spotted trends humans would not easily notice.

The impossible problem of scale

Tracking small changes over time is nothing new in computer vision. But most previous work needed labels or focused on specific things like cars or faces. This project was different. The goal was open-ended: what changed most often in these cities, over a decade?

Simple question. Brutally hard in practice.

Large language models are good at reasoning about images, but they struggle when the dataset grows beyond a few thousand images. Visual Chronicles was dealing with millions. So the researchers designed a bottom-up strategy. First, detect tiny local changes like a new sign or a removed tree. Then, cluster them into broader city-wide trends.
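A rough back-of-envelope sketch of why the bottom-up split is necessary. The 40 million figure comes from the article; the per-prompt context capacity is a hypothetical stand-in for "a few thousand images":

```python
# Assumed figures for illustration only.
total_images = 40_000_000       # Street View corpus size (from the article)
mllm_context_capacity = 3_000   # hypothetical per-prompt image budget
images_per_comparison = 2       # before/after views of one location

# Top-down reasoning over the whole corpus in one prompt is infeasible:
assert total_images > mllm_context_capacity

# Bottom-up: each local comparison is tiny and independent, so the corpus
# decomposes into millions of small, parallelizable MLLM calls whose short
# text outputs can then be clustered into city-wide trends.
local_calls = total_images // images_per_comparison
print(local_calls)  # 20000000
```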

AI’s detective work on the streets

Here is how it worked in action:

  • Step 1: Compare images of the same location over time.
  • Step 2: Ask the AI to describe what changed, with evidence from the images.
  • Step 3: Group similar changes found across the city.
  • Step 4: Verify those trends with further AI checks.

This hybrid approach let the system detect subtle changes. Outdoor dining setups after COVID-19. New solar panels on rooftops. All spotted without drowning in data or generating abstract answers like “economic growth.”
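The four steps read like a map-then-reduce pipeline. Here is a minimal sketch of that shape, with the MLLM call stubbed out by canned labels; all names and data below are hypothetical, not the paper's code:

```python
from collections import Counter

def describe_change(location, before_year, after_year):
    """Stub for the MLLM call (steps 1-2): in the real system a
    multimodal LLM compares imagery of the same spot over time and
    describes what changed, citing visual evidence."""
    canned = {
        "5th-ave": "new security camera",
        "broadway": "new security camera",
        "mission-st": "outdoor dining setup",
    }
    return canned.get(location)

def find_trends(locations, min_support=2):
    # Step 3: group similar change descriptions found across the city
    # (exact-match grouping stands in for semantic clustering).
    descriptions = [describe_change(loc, 2015, 2022) for loc in locations]
    counts = Counter(d for d in descriptions if d)
    # Step 4: keep only well-supported trends; the real system
    # re-verifies each candidate trend with further MLLM checks.
    return {desc: n for desc, n in counts.items() if n >= min_support}

trends = find_trends(["5th-ave", "broadway", "mission-st"])
print(trends)  # {'new security camera': 2}
```

The `min_support` threshold is the load-bearing design choice: it is what keeps the output at the level of concrete, recurring changes rather than one-off curiosities or vague abstractions.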

So what did it find?

In New York City, the AI spotted a dramatic rise in:

  • Security cameras: 745 new installations across neighborhoods.
  • Fences around parking lots: 509 new additions.
  • Sidewalk upgrades: 519 new red ADA warning pads.

In San Francisco, the decade’s signature trends looked different:

  • Solar panels: 1504 new rooftop installs, especially visible from raised highways.
  • Dedicated bus lanes: 751 new lane conversions for public transport.
  • Bike racks: 1799 new racks, mostly near downtown.

The COVID years left visual fingerprints everywhere

The researchers also focused on the pandemic period, capturing how city streets adapted after 2020. Outdoor dining exploded in San Francisco, with 1482 new setups recorded between 2020 and 2022 alone.

And then there was the blue overpass. A freeway section in San Francisco was painted ‘Coronado Blue,’ a detail spotted 481 times in Street View images after 2020.

In New York, the system was also used to track retail store changes. It revealed two opposite trends:

  • Openings of bakeries and juice shops in gentrifying areas.
  • Closures of grocery stores and bank branches in older retail zones.

And, just because they could, the researchers ran a final experiment, asking the AI to look at random images and find “unusual things.”

The winner? Giant abstract sculptures scattered across New York City. Over 200 instances of public art installations, all grouped by the model.


Why this matters far beyond Street View

Visual Chronicles shows how future AI tools could let companies, governments, or researchers track changes in any large visual dataset. Satellite images. Factory floors. Any place that changes over time.

It is also a warning. AI does not just “see” images. It explains them back to us in ways that shape what we believe is happening. The more we trust these automated trend reports, the more we need systems that balance AI’s speed with human caution.

Visual Chronicles is an early example of that kind of system. It is precise enough to find real patterns, scalable enough to handle millions of images, and grounded enough to leave the storytelling supported by evidence.


