Dataconomy
Smart homes are watching, and now they can explain what you’re doing

GNN-XAR is the first explainable Graph Neural Network (GNN) for smart home activity recognition

by Kerem Gülen
February 26, 2025
in Research

Smart home technology is advancing rapidly, and one of its most impactful applications is Human Activity Recognition (HAR). HAR enables smart systems to monitor daily activities such as cooking, sleeping, or exercising, providing essential support in domains like healthcare and assisted living. However, while deep learning models have significantly improved HAR accuracy, they often operate as “black boxes,” offering little transparency into their decision-making process.

To address this, researchers from the University of Milan—Michele Fiori, Davide Mor, Gabriele Civitarese, and Claudio Bettini—have introduced GNN-XAR, the first explainable Graph Neural Network (GNN) for smart home activity recognition. This innovative model not only improves HAR performance but also generates human-readable explanations for its predictions.

The need for explainable AI in smart homes

Most existing HAR systems rely on deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). While effective, these models struggle with explainability, making it difficult for users—including medical professionals and data scientists—to understand why a specific activity was detected. Explainable AI (XAI) seeks to mitigate this by providing insights into model decisions, enhancing trust and usability in real-world applications.

Graph Neural Networks (GNNs) have emerged as a powerful tool for modeling time-series sensor data in smart homes, as they can capture both spatial and temporal relationships between sensor readings. However, existing GNN-based HAR approaches lack built-in explainability. This is where GNN-XAR differentiates itself, offering an innovative solution that combines graph-based HAR with interpretability mechanisms, making it the first of its kind in the field.


How GNN-XAR works

GNN-XAR introduces a novel graph-based approach to sensor data processing. Instead of treating sensor readings as isolated events, it constructs dynamic graphs that model relationships between different sensors over time. Each graph is processed using a Graph Convolutional Network (GCN), which identifies the most probable activity being performed. To ensure transparency, an adapted XAI technique specifically designed for GNNs highlights the most relevant nodes (sensor readings) and edges (temporal dependencies) that contributed to the final prediction.
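
The GCN step described above can be sketched in miniature. The snippet below is an illustrative toy, not the paper’s implementation: it runs a single graph-convolution layer (neighborhood averaging with a self-loop, a linear transform, then ReLU) over an invented three-node sensor graph, using plain Python lists so no ML framework is required.

```python
# Minimal single-layer graph convolution over a toy sensor-event graph.
# The graph, features, and weights are invented for illustration.

def gcn_layer(adj, feats, weight):
    """One GCN layer: average each node's neighborhood (including itself),
    apply a linear transform, then ReLU."""
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]] + [i]  # add self-loop
        # Mean of the neighbor feature vectors
        agg = [sum(feats[j][k] for j in neigh) / len(neigh)
               for k in range(len(feats[0]))]
        # Linear transform followed by ReLU
        row = [max(0.0, sum(agg[k] * weight[k][c] for k in range(len(agg))))
               for c in range(len(weight[0]))]
        out.append(row)
    return out

# Toy graph: three sensor events (fridge, stove, motion) chained in time.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [[1.0, 0.0],   # fridge opened
         [0.0, 1.0],   # stove on
         [1.0, 1.0]]   # motion in kitchen
weight = [[0.5, -0.2],
          [0.3, 0.8]]

print(gcn_layer(adj, feats, weight))
```

Stacking such layers lets information from one sensor event propagate along the temporal edges to its neighbors, which is what allows the model to pick up patterns spanning several readings.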

The graph construction process is a key innovation in GNN-XAR. Sensor events—such as motion detections, appliance usage, and door openings—are represented as nodes, while edges capture their temporal and spatial relationships. The system distinguishes between two sensor types:

  • Explicit interaction sensors (e.g., cabinet door sensors), which generate both ON and OFF events.
  • Passive sensors (e.g., motion detectors), where only activation events matter, and duration is computed.

To maintain structure and efficiency, the system introduces super-nodes that group related sensor events. This allows the GNN model to process complex sensor interactions while keeping computations manageable.
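
The construction described above can be approximated in a few lines. The sketch below is a hedged illustration under simplified assumptions (the event format, the sensor-type split, the consecutive-event edge rule, and the per-sensor super-node grouping are all invented for the example, not taken from the paper):

```python
# Sketch of building a dynamic graph from a smart-home sensor event stream.
# Event format, sensor-type rules, and grouping are illustrative assumptions.

EXPLICIT = {"cabinet", "fridge"}   # emit both ON and OFF events
PASSIVE = {"motion"}               # only activations matter; compute duration

def build_graph(events):
    """events: time-ordered list of (timestamp_sec, sensor, state) tuples.
    Returns (nodes, edges, super_nodes)."""
    nodes = []
    for t, sensor, state in events:
        if sensor.split("_")[0] in PASSIVE and state == "OFF":
            # Fold the OFF event into its matching ON node as a duration
            for node in reversed(nodes):
                if node["sensor"] == sensor and "duration" not in node:
                    node["duration"] = t - node["time"]
                    break
            continue
        nodes.append({"sensor": sensor, "state": state, "time": t})
    # Temporal edges: connect consecutive event nodes
    edges = [(i, i + 1) for i in range(len(nodes) - 1)]
    # Super-nodes: one per sensor, grouping that sensor's event nodes
    super_nodes = {}
    for i, node in enumerate(nodes):
        super_nodes.setdefault(node["sensor"], []).append(i)
    return nodes, edges, super_nodes

events = [
    (0,  "motion_kitchen", "ON"),
    (30, "fridge_door",    "ON"),
    (35, "fridge_door",    "OFF"),
    (60, "motion_kitchen", "OFF"),
]
nodes, edges, supers = build_graph(events)
```

In this toy run the motion sensor’s OFF event is folded into its ON node as a 60-second duration, while the fridge keeps separate ON and OFF nodes, mirroring the two sensor types distinguished above.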

How GNN-XAR explains its decisions

Unlike traditional deep learning models, which provide only classification outputs, GNN-XAR uses GNNExplainer, a specialized XAI method tailored for graph-based models. This method identifies the most important nodes and edges that influenced a prediction. The key innovation in GNN-XAR is its adaptation of GNNExplainer to work seamlessly with smart home data, ensuring that explanations are both accurate and human-readable.

For example, if the system predicts “meal preparation,” it may highlight events such as repeated fridge openings followed by stove activation, providing a logical and understandable rationale for its classification. The model then converts this explanation into natural language, making it accessible to non-expert users.
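
The last step, turning importance scores into a sentence, can be sketched with a simple template. The scores and wording below are invented for illustration; GNN-XAR’s actual explanation pipeline is more involved.

```python
# Rendering an explainer's importance scores as a human-readable sentence.
# Scores and templates are invented for illustration.

def explain(prediction, event_importance, top_k=2):
    """event_importance: {event description: importance score}."""
    top = sorted(event_importance, key=event_importance.get, reverse=True)[:top_k]
    return (f"The activity '{prediction}' was recognized mainly because "
            + " and then ".join(top) + ".")

scores = {
    "the fridge was opened repeatedly": 0.91,
    "the stove was activated": 0.74,
    "hallway motion was detected": 0.12,
}
print(explain("meal preparation", scores))
# → "The activity 'meal preparation' was recognized mainly because the fridge
#    was opened repeatedly and then the stove was activated."
```

Keeping only the top-ranked events is what makes the output digestible: low-importance readings, like the hallway motion here, are dropped from the explanation entirely.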

Experimental results

GNN-XAR was tested on two public smart home datasets—CASAS Milan and CASAS Aruba—which contain sensor data from real homes. The model was evaluated against DeXAR, a state-of-the-art explainable HAR system that uses CNN-based methods. The results showed that GNN-XAR not only provided more accurate predictions but also generated more meaningful explanations compared to existing XAI-based HAR methods.

Key findings include:

  • Slightly higher recognition accuracy than DeXAR, especially for activities with strong temporal dependencies (e.g., “leaving home”).
  • Superior explainability, as measured by an evaluation method using Large Language Models (LLMs) to assess explanation clarity and relevance.
  • Improved handling of complex sensor relationships, enabling more reliable HAR performance.

Featured image credit: Ihor Saveliev/Unsplash

Tags: smart homes


COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.