Dataconomy

OmniGeo understands cities better than you do


by Kerem Gülen
March 21, 2025
in Research

You’d think predicting dementia death rates or mapping city noise would require teams of experts, ground surveys, and satellite imaging firms. But a new AI model—developed by researchers at Beijing Jiaotong University and the University of Montreal—claims it can do all of that at once, just by looking at maps, tweets, and images. The system is called OmniGeo, and if the research lives up to its promise, it could redefine how we read cities, disasters, and human environments in real time.

Why decoding geospatial data is so hard

GeoAI—short for geospatial artificial intelligence—has always been a tricky game. Think of it this way: it’s like trying to understand a city by reading five languages at once. You’ve got satellite imagery, street-level photos, public health stats, tweets full of slang and hashtags, and location data from thousands of pinned places. Each of these data types speaks a different dialect—and most AI systems today are only fluent in one or two.

Existing models might be good at classifying remote sensing images or tagging locations in text, but when you throw all these tasks into one pot, things fall apart. That’s where OmniGeo steps in: it’s a single AI system trained to handle them all.


The team behind OmniGeo engineered a multimodal large language model (MLLM)—a kind of AI that can interpret satellite images, geospatial metadata, and natural language all at once. It’s based on open-source models like LLaVA and Qwen2, but it’s fine-tuned for five core domains: health geography, urban geography, remote sensing, urban perception, and geospatial semantics.

Instead of building one model for each task, OmniGeo handles them all simultaneously. The secret? Instruction-based learning paired with what the researchers call “multimodal fine-tuning.” In simple terms, it learns from image-caption pairs, time-series data, spatial vectors, and more—all aligned around the same locations.
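The paper's exact training schema isn't reproduced here, but a LLaVA-style instruction record for one location might look like the following sketch. All field names, file paths, and values are illustrative assumptions, not OmniGeo's actual data.

```python
# Hypothetical sketch of one multimodal instruction-tuning record, loosely
# following the LLaVA-style "conversations" format. Everything below is
# illustrative: the schema, paths, and numbers are not from the paper.

def build_instruction_record(image_path, caption, structured_data, question, answer):
    """Bundle one location's multimodal evidence into a single training sample."""
    context = (
        f"Satellite view: {caption}. "
        f"County statistics: {structured_data}."
    )
    return {
        "image": image_path,  # aligned satellite tile for this location
        "conversations": [
            {"from": "human", "value": f"<image>\n{context}\n{question}"},
            {"from": "gpt", "value": answer},
        ],
    }

record = build_instruction_record(
    image_path="tiles/county_48201.png",
    caption="green areas with sparse industrial zones",
    structured_data={"dementia_deaths_per_100k": 31.2, "year": 2019},
    question="Forecast next year's dementia-related death rate.",
    answer="Approximately 32 deaths per 100k residents.",
)
```

The point of the format is that image, structured statistics, and question all land in one prompt, so the model learns the task in context rather than from a task-specific head.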

Let’s talk real-world applications

Here’s where things get interesting. OmniGeo has been trained to:

  • Forecast dementia-related death rates at the county level using historical data and satellite imagery.
  • Detect the primary function of urban neighborhoods—like whether an area is dominated by schools or commercial offices—based on street-level data and POI (point of interest) counts.
  • Assess how “noisy” or “lively” a street is, based purely on images and associated captions.
  • Parse location descriptions in tweets during natural disasters—like extracting “21719 Grand Hollow Lane, Katy, TX” from a flood rescue request.

That last use case alone is enough to hint at this model’s potential in emergency response and smart city management.
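As a toy illustration of that geoparsing task, a simple pattern match can pull a US-style street address out of a tweet. OmniGeo uses the language model itself for this; the regex below is only a simplified stand-in, and the pattern is an assumption that covers one narrow address format.

```python
import re

# Toy stand-in for the geoparsing task: extract a US-style street address
# (house number, street name and suffix, city, two-letter state) from text.
# This is NOT OmniGeo's method, just a minimal sketch of the task itself.
ADDRESS_RE = re.compile(
    r"\b\d{1,6}\s+[A-Z][\w']*(?:\s+[A-Z][\w']*)*"    # house number + street name
    r"(?:\s+(?:St|Ave|Blvd|Lane|Ln|Dr|Rd|Ct|Way))"   # street suffix
    r",\s*[A-Z][a-z]+,\s*[A-Z]{2}\b"                 # city, state
)

tweet = (
    "Family of 4 trapped on the roof at "
    "21719 Grand Hollow Lane, Katy, TX. Please send boats!"
)
match = ADDRESS_RE.search(tweet)
print(match.group(0))  # → 21719 Grand Hollow Lane, Katy, TX
```

An LLM-based geoparser earns its keep precisely where a pattern like this fails: slang, abbreviations, misspellings, and landmark references ("by the old HEB on Mason Rd") that no fixed regex anticipates.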

How OmniGeo sees the world

Technically speaking, OmniGeo works by converting geographic data into readable narratives. For example, satellite images are turned into natural language captions (“green areas with sparse industrial zones”), then aligned with structured data like death rates or POI distributions. All of this is wrapped into an instruction dataset, allowing the model to learn in context, like a human would.
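That narrativization step can be sketched as a small function that turns structured attributes, here POI counts, into a caption-like sentence. The thresholds and wording below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the "data to narrative" step: turning structured
# geographic attributes into a plain-language description the model can read.
# Category names and thresholds are illustrative, not from the paper.

def narrate_poi_counts(poi_counts):
    """Describe a neighborhood's character from point-of-interest counts."""
    total = sum(poi_counts.values())
    parts = []
    for category, count in sorted(poi_counts.items(), key=lambda kv: -kv[1]):
        share = count / total
        if share >= 0.5:
            parts.append(f"dominated by {category}")
        elif share >= 0.2:
            parts.append(f"with a notable share of {category}")
    return "An area " + ", ".join(parts) + "."

print(narrate_poi_counts({"schools": 12, "offices": 5, "parks": 3}))
# → An area dominated by schools, with a notable share of offices.
```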

It’s not just theoretical. OmniGeo outperformed GPT-4o and other leading models in key geospatial tasks, including scene classification, location recognition, and urban function prediction. In some cases, it cut error rates by more than half. Even in subjective areas like urban perception—how “beautiful” or “depressing” a street looks—it proved impressively accurate.

Why now?

Cities are becoming harder to manage and easier to surveil. With climate events, population booms, and public health crises hitting all at once, policymakers need faster tools for interpreting geospatial chaos. OmniGeo is arriving at a moment when AI is finally capable of absorbing high-dimensional data across formats.

The difference? Most large models today only process text. OmniGeo sees, reads, and reasons about space.

OmniGeo is a blueprint for what future geospatial AI could look like: one system trained across modalities, aligned with real-world inputs, and ready to generalize.

If ChatGPT is your language assistant, OmniGeo might be your city’s next emergency brain—translating visual chaos and location clutter into real-time, actionable insight.

And it does it all without ever stepping outside.


Featured image credit: Kerem Gülen/Midjourney

Tags: AI

