Dataconomy
OmniGeo understands cities better than you do


by Kerem Gülen
March 21, 2025
in Research

You’d think predicting dementia death rates or mapping city noise would require teams of experts, ground surveys, and satellite imaging firms. But a new AI model—developed by researchers at Beijing Jiaotong University and the University of Montreal—claims it can do all of that at once, just by looking at maps, tweets, and images. The system is called OmniGeo, and if the research lives up to its promise, it could redefine how we read cities, disasters, and human environments in real time.

Why decoding geospatial data is so hard

GeoAI—short for geospatial artificial intelligence—has always been a tricky game. Think of it this way: it’s like trying to understand a city by reading five languages at once. You’ve got satellite imagery, street-level photos, public health stats, tweets full of slang and hashtags, and location data from thousands of pinned places. Each of these data types speaks a different dialect—and most AI systems today are only fluent in one or two.

Existing models might be good at classifying remote sensing images or tagging locations in text, but when you throw all these tasks into one pot, things fall apart. That’s where OmniGeo steps in: it’s a single AI system trained to handle them all.


The team behind OmniGeo engineered a multimodal large language model (MLLM)—a kind of AI that can interpret satellite images, geospatial metadata, and natural language all at once. It’s based on open-source models like LLaVA and Qwen2, but it’s fine-tuned for five core domains: health geography, urban geography, remote sensing, urban perception, and geospatial semantics.

Instead of building one model for each task, OmniGeo handles them all simultaneously. The secret? Instruction-based learning paired with what the researchers call “multimodal fine-tuning.” In simple terms, it learns from image-caption pairs, time-series data, spatial vectors, and more—all aligned around the same locations.
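To make the idea concrete, here is a minimal sketch of what one such instruction-tuning record might look like. The field names and serialization are assumptions for illustration, not the paper's actual schema; the point is that inputs from different modalities are wrapped into a single text-aligned example anchored to one location.

```python
# Hypothetical sketch of a multimodal instruction record: each training
# example pairs an instruction with inputs from several modalities
# (image-derived captions, time-series health data) aligned to the same place.
def build_instruction_record(county, caption, death_rates):
    """Wrap aligned geospatial inputs into one instruction-tuning example."""
    return {
        "instruction": (
            "Given the satellite-image caption and historical dementia "
            f"death rates for {county}, forecast next year's rate."
        ),
        "inputs": {
            "image_caption": caption,     # natural-language view of imagery
            "time_series": death_rates,   # structured health statistics
        },
        "location": county,               # the shared spatial anchor
    }

record = build_instruction_record(
    "Harris County, TX",
    "green areas with sparse industrial zones",
    [31.2, 32.5, 33.1],
)
```

A model fine-tuned on many such records learns the task from context, the way a person would from worked examples.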

Let’s talk real-world applications

Here’s where things get interesting. OmniGeo has been trained to:

  • Forecast dementia-related death rates at the county level using historical data and satellite imagery.
  • Detect the primary function of urban neighborhoods—like whether an area is dominated by schools or commercial offices—based on street-level data and POI (point of interest) counts.
  • Assess how “noisy” or “lively” a street is, based purely on images and associated captions.
  • Parse location descriptions in tweets during natural disasters—like extracting “21719 Grand Hollow Lane, Katy, TX” from a flood rescue request.

That last use case alone is enough to hint at this model’s potential in emergency response and smart city management.
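The geoparsing task in that last bullet can be illustrated with a toy extraction. OmniGeo learns this end-to-end from training data; the hand-written regex below is only a stand-in to show the input and output shape of the task.

```python
import re

# Toy stand-in for disaster-tweet geoparsing: pull a US street address
# out of free text. A learned model handles slang, typos, and varied
# formats; this pattern only covers the clean case shown in the article.
ADDRESS = re.compile(
    r"\d+ [A-Z][\w ]*? (?:Lane|Street|Avenue|Road|Drive), [\w ]+, [A-Z]{2}"
)

tweet = "Family stranded at 21719 Grand Hollow Lane, Katy, TX, send a boat!"
address = ADDRESS.search(tweet).group(0)
print(address)  # 21719 Grand Hollow Lane, Katy, TX
```

The extracted string can then be geocoded to coordinates for dispatch, which is where the emergency-response value comes from.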

How OmniGeo sees the world

Technically speaking, OmniGeo works by converting geographic data into readable narratives. For example, satellite images are turned into natural language captions (“green areas with sparse industrial zones”), then aligned with structured data like death rates or POI distributions. All of this is wrapped into an instruction dataset, allowing the model to learn in context, like a human would.
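A rough sketch of that "data to narrative" step: serialize structured inputs like POI counts into a sentence, then concatenate them with an image caption so the language model sees everything as text. The serialization format here is an assumption for illustration, not taken from the paper.

```python
# Assumed illustration: turn structured POI counts into a sentence, then
# pair it with an image-derived caption in a single text prompt.
def narrate(poi_counts):
    """Serialize a dict of POI counts into a natural-language sentence."""
    total = sum(poi_counts.values())
    parts = [f"{n} {kind} locations" for kind, n in poi_counts.items()]
    return f"This neighborhood contains {', '.join(parts)} ({total} total)."

prompt = " ".join([
    "Satellite view: green areas with sparse industrial zones.",
    narrate({"school": 12, "office": 3}),
    "What is the neighborhood's primary function?",
])
```

Once every modality is flattened into aligned text like this, one language model can attend over all of it at once.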

It’s not just theoretical. OmniGeo outperformed GPT-4o and other leading models in key geospatial tasks, including scene classification, location recognition, and urban function prediction. In some cases, it cut error rates by more than half. Even in subjective areas like urban perception—how “beautiful” or “depressing” a street looks—it proved impressively accurate.

Why now?

Cities are becoming harder to manage and easier to surveil. With climate events, population booms, and public health crises hitting all at once, policymakers need faster tools for interpreting geospatial chaos. OmniGeo is arriving at a moment when AI is finally capable of absorbing high-dimensional data across formats.

The difference? Most large models today just talk. OmniGeo sees, reads, and understands space.

OmniGeo is a blueprint for what future geospatial AI could look like: one system trained across modalities, aligned with real-world inputs, and ready to generalize.

If ChatGPT is your language assistant, OmniGeo might be your city’s next emergency brain—translating visual chaos and location clutter into real-time, actionable insight.

And it does it all without ever stepping outside.


Featured image credit: Kerem Gülen/Midjourney
