Study: Improving the interpretability of ML features for end-users

by Kerem Gülen
July 4, 2022
in News, Machine Learning

Explanation techniques that help users understand and trust machine-learning models often describe how much each feature used by the model contributes to its prediction. For example, a doctor might want to know how much a patient's heart-rate data influences a model that forecasts that patient's risk of developing cardiac disease.
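
As a rough illustration of this kind of feature-contribution explanation, the sketch below trains a classifier on synthetic data and scores each feature with scikit-learn's permutation importance. The feature names, values, and label rule are invented for the example, not taken from the study.

```python
# Minimal sketch of a feature-contribution explanation on synthetic data.
# Feature names and values are hypothetical, chosen only for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(75, 12, n),    # resting_heart_rate (bpm) -- hypothetical
    rng.normal(128, 18, n),   # systolic_bp (mmHg) -- hypothetical
    rng.integers(30, 85, n),  # age (years) -- hypothetical
])
feature_names = ["resting_heart_rate", "systolic_bp", "age"]

# Synthetic label: risk loosely driven by heart rate and age.
y = ((X[:, 0] > 85) | (X[:, 2] > 70)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    # How much shuffling each feature degrades accuracy -- a rough contribution score.
    print(f"{name}: {score:.3f}")
```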

But does the explanation technique help if the features themselves are difficult for the end user to understand? If you are curious about how the field developed, check out the history of machine learning, which dates back to the 17th century.

Table of Contents

  • A taxonomy is created to improve the interpretability of ML features
  • The baseline of the study
  • Interpretability of ML features for professionals

A taxonomy is created to improve the interpretability of ML features

To help decision-makers make better use of machine-learning outputs, MIT researchers are working to make features more interpretable. Drawing on years of fieldwork, they created a taxonomy to help developers craft features that are simpler for their target audience to understand.

“We found that out in the real world, even though we were using state-of-the-art ways of explaining machine-learning models, there is still a lot of confusion stemming from the features, not from the model itself,” says Alexandra Zytek, an electrical engineering and computer science PhD student and lead author of a paper introducing the taxonomy.



To create the taxonomy, the researchers identified the properties that make features understandable to five different user types, from artificial intelligence professionals to people who may be affected by a machine-learning model's prediction. They also offer guidelines to help model makers convert features into formats that are easier for laypeople to understand.

They hope the study will encourage model developers to consider interpretable features from the beginning of the development process, rather than trying to work backward and focus on explainability after the fact.

Zytek's co-authors include Laure Berti-Équille, a visiting professor at MIT and research director at IRD; Dongyu Liu, a postdoc; Kalyan Veeramachaneni, a principal research scientist at the Laboratory for Information and Decision Systems (LIDS) and head of the Data to AI group; and Ignacio Arnaldo, a data scientist at Corelight. The research appears in the June edition of the peer-reviewed Explorations Newsletter of the Association for Computing Machinery Special Interest Group on Knowledge Discovery and Data Mining.

The baseline of the study

Machine-learning models take features as input variables, often drawn from the columns of a dataset. According to Veeramachaneni, data scientists typically hand-pick and engineer these features to maximize model accuracy, not to make them understandable to a decision-maker.

He and his team have spent several years working with decision-makers to address machine-learning usability issues. These domain experts, most of whom have no machine-learning expertise, often distrust models because they do not understand the features that influence predictions.


In one study, they collaborated with ICU doctors who used machine learning to predict the risk that a patient would experience complications after cardiac surgery. Some features were presented as aggregated values, such as the trend of a patient's heart rate over time. While features encoded this way were "model ready" (the model could process the data), clinicians did not understand how they were computed. According to Liu, they would rather see how these aggregated features relate to the original measurements, so they could spot anomalies in a patient's heart rate.
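
As a minimal sketch of the kind of aggregation described above, the hypothetical pandas example below rolls raw heart-rate readings into "model ready" summary statistics; the column names and values are invented for illustration.

```python
# Sketch: turning a raw heart-rate time series into "model ready" aggregates.
# Column names and values are hypothetical.
import pandas as pd

readings = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2],
    "timestamp": pd.to_datetime([
        "2022-07-01 08:00", "2022-07-01 09:00", "2022-07-01 10:00",
        "2022-07-01 08:00", "2022-07-01 09:00", "2022-07-01 10:00",
    ]),
    "heart_rate": [72, 110, 95, 64, 66, 65],
})

# Aggregated features the model can consume directly...
features = readings.groupby("patient_id")["heart_rate"].agg(["mean", "std", "max"])
print(features)

# ...whereas a clinician may prefer the raw trend, to see when the spike occurred.
print(readings.sort_values(["patient_id", "timestamp"]))
```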

A team of learning scientists, on the other hand, preferred aggregated features. Rather than a feature like "number of posts a student made on discussion forums," they wanted related features grouped and labeled with terms they recognized, such as "participation."
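
A hypothetical sketch of that grouping strategy: several low-level activity counts (the column names here are invented) are rolled up into a single feature carrying a label the audience recognizes.

```python
# Sketch: rolling low-level activity counts into one labeled feature.
# Column names are invented for illustration.
import pandas as pd

activity = pd.DataFrame({
    "forum_posts": [12, 0, 5],
    "quiz_attempts": [3, 1, 4],
    "videos_watched": [8, 2, 6],
})

# Combine the raw counts under a term learning scientists recognize.
activity["participation"] = activity[
    ["forum_posts", "quiz_attempts", "videos_watched"]
].sum(axis=1)
print(activity)
```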

“With interpretability, one size doesn’t fit all. When you go from area to area, there are different needs. And interpretability itself has many levels,” explained Veeramachaneni.

The cornerstone of the researchers’ taxonomy is the notion that one size does not fit all. They define properties that can make features more or less interpretable for different decision-makers and outline which properties are likely to matter most to particular users.


For instance, machine-learning developers may prioritize features that are predictive and compatible with the model, since those properties are expected to improve its performance.

Decision-makers with no prior machine-learning experience, however, might be better served by features that are human-worded, meaning they are described in terms that feel natural to users.

“The taxonomy says, if you are making interpretable features, to what level are they interpretable? You may not need all levels, depending on the type of domain experts you are working with,” said Zytek.

Interpretability of ML features for professionals

To make features easier for a particular audience to understand, the researchers also provide feature engineering strategies that developers can use.

During feature engineering, data scientists transform data with techniques such as aggregation and normalization so that machine-learning models can process it. Most models also cannot handle categorical data unless it is first converted to a numerical code. For laypeople, these transformations are often nearly impossible to unravel.
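
As a rough sketch of why such transformations are hard to unravel, the hypothetical example below normalizes a numeric column and one-hot encodes a categorical one with scikit-learn; the resulting values bear little resemblance to the original fields.

```python
# Sketch: typical feature-engineering transforms that obscure the original data.
# Column names and values are hypothetical.
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, StandardScaler

raw = pd.DataFrame({
    "age": [2, 34, 71, 15],
    "admission_type": ["emergency", "elective", "emergency", "urgent"],
})

# Normalization: ages become z-scores, no longer readable as ages.
scaled_age = StandardScaler().fit_transform(raw[["age"]])
print(scaled_age.ravel())

# Categorical encoding: text categories become numeric indicator columns.
encoder = OneHotEncoder()
encoded_type = encoder.fit_transform(raw[["admission_type"]]).toarray()
print(encoder.get_feature_names_out())
print(encoded_type)
```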

Producing interpretable features may require undoing some of that encoding, according to Zytek. For example, a common feature-engineering technique organizes spans of data so that they all contain the same number of years. To make these features easier to understand, one could instead group age ranges using human terms such as newborn, toddler, kid, and teen. Or, says Liu, an interpretable feature might simply be the raw pulse-rate data rather than a processed feature like average pulse rate.
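
A minimal sketch of that kind of reversal, assuming pandas: instead of an opaque numeric bin, ages are cut into ranges carrying the human labels mentioned above. The bin boundaries are an assumption chosen for illustration, not a clinical standard.

```python
# Sketch: binning ages into human-readable categories instead of numeric codes.
# Bin edges are an illustrative assumption.
import pandas as pd

ages = pd.Series([0.5, 2, 7, 15])
age_group = pd.cut(
    ages,
    bins=[0, 1, 4, 12, 19],
    labels=["newborn", "toddler", "kid", "teen"],
)
print(age_group.tolist())  # ['newborn', 'toddler', 'kid', 'teen']
```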


“In a lot of domains, the tradeoff between interpretable features and model accuracy is actually very small. When we were working with child welfare screeners, for example, we retrained the model using only features that met our definitions for interpretability, and the performance decrease was almost negligible,” explained Zytek.
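
A hedged sketch of how one might check that tradeoff on their own data: train the same model twice, once on all engineered features and once on only the subset deemed interpretable, and compare cross-validated scores. The data and the choice of "interpretable" columns below are synthetic stand-ins, not the child-welfare data from the study.

```python
# Sketch: comparing performance with all features vs. only "interpretable" ones.
# Data and the interpretable-column selection are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X_all = rng.normal(size=(n, 10))            # 10 engineered features (synthetic)
y = (X_all[:, :3].sum(axis=1) > 0).astype(int)

interpretable_cols = [0, 1, 2, 3]           # assume only these map to readable concepts

model = RandomForestClassifier(random_state=0)
full_score = cross_val_score(model, X_all, y, cv=5).mean()
interp_score = cross_val_score(model, X_all[:, interpretable_cols], y, cv=5).mean()
print(f"all features: {full_score:.3f}, interpretable subset: {interp_score:.3f}")
```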

Building on this work, the researchers are developing a system that lets a model developer handle complicated feature transformations more efficiently and produce human-centered explanations for machine-learning models.

The new system will also translate algorithms designed to explain model-ready datasets into formats that decision-makers can understand, making it possible to improve the interpretability of ML features for everyone. Industry attention remains focused on these machine-learning models, and new ML methods continue to drive improvements in the underlying algorithms.

Tags: interpretability, Machine Learning, ML
