• The National Institutes of Health is funding a large research project to collect voice data and develop artificial intelligence capable of diagnosing patients based on their speech.
  • Scientists may even be able to detect sadness or cancer from a person’s speech.
  • The program is a collaboration between USF, Cornell, and ten other universities, and it is not the first time researchers have used artificial intelligence to study human speech.
  • They hope to collect 30,000 voices by the end of the four-year project, along with matching data such as clinical records and genetic information.

Voices provide a wealth of knowledge. They can even assist in identifying illnesses, according to researchers developing an app for this purpose. The National Institutes of Health is supporting a large research initiative to collect voice data and create artificial intelligence that can diagnose patients based on their speech.

Diagnosing a patient using their voice and artificial intelligence 

Everything from vocal cord vibrations to breathing patterns when you talk can provide information about your health, according to laryngologist Dr. Yael Bensoussan, director of the University of South Florida’s Health Voice Center and a leader in the research.

New Artificial Intelligence Can Diagnose A Patient Using Their Speech
The artificial intelligence will draw on everything from vocal cord vibrations to breathing patterns when making a diagnosis

Bensoussan says, “We asked experts: Well, if you close your eyes when a patient comes in, just by listening to their voice, can you have an idea of the diagnosis they have? And that’s where we got all our information.”

Someone with Parkinson’s disease may talk slowly and quietly. Slurring can indicate a stroke. Scientists may even be able to identify sadness or cancer. The team will begin by recording the voices of people with five categories of conditions: neurological illnesses, voice disorders, mental disorders, respiratory disorders, and pediatric problems such as autism and speech difficulties.
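As a purely illustrative sketch (not the project’s actual method), cues like the slow, quiet speech mentioned above can be quantified from a waveform with ordinary signal-processing code. The function and thresholds below are hypothetical, meant only to show what “acoustic features” look like in practice:

```python
import numpy as np

def voice_features(signal, sample_rate):
    """Compute two simple acoustic cues sometimes linked to health:
    overall loudness (RMS) and the fraction of the clip that is near-silent
    (long pauses can accompany slow, quiet speech)."""
    rms = float(np.sqrt(np.mean(signal ** 2)))   # loudness proxy
    frame = sample_rate // 100                   # 10 ms analysis frames
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    frame_rms = np.sqrt(np.mean(frames ** 2, axis=1))
    # A frame counts as "silent" if it is far quieter than the clip overall.
    silence_ratio = float(np.mean(frame_rms < 0.1 * rms))
    return {"rms": rms, "silence_ratio": silence_ratio}

# Synthetic demo: a 1-second 120 Hz tone with a silent gap in the middle.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 120 * t)
tone[sr // 4 : sr // 2] = 0.0                    # insert a quarter-second pause
feats = voice_features(tone, sr)
```

A real system would use far richer features (pitch, jitter, speech rate, breathing patterns), but the principle is the same: turn audio into numbers a model can learn from.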

The research is part of the National Institutes of Health’s Bridge to AI program, which began more than a year ago with more than $100 million in federal funding, with the objective of establishing large-scale healthcare databases for precision medicine.

“We were really lacking large what we call open source databases. Every institution kind of has its own database of data. But to create these networks and these infrastructures was really important to allow researchers from other generations to use this data,” Bensoussan added.


This is not the first time researchers have used AI to analyze human voices, but it is the first time data will be collected on this scale; the initiative is a collaboration between USF, Cornell, and ten other universities.

“We saw that everybody was kind of doing very similar work but always at a smaller level. We needed to do something as a team and build a network.”

The ultimate objective is to create an app that could help general practitioners refer patients to specialists, bridging gaps in access for remote or underserved populations. In the long run, devices such as iPhones or Alexa might detect changes in your voice, like a cough, and recommend seeking medical attention.


To get there, researchers must first collect data, because AI can only be as good as the database it learns from. They want to gather roughly 30,000 voices by the end of the four years, with matching data on other indicators, such as clinical records and genetic information.
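A dataset like this pairs each recording with its matched metadata. As a hypothetical sketch only (the field names below are illustrative assumptions, not the project’s actual format), one donated sample might be represented like this:

```python
from dataclasses import dataclass, field

@dataclass
class VoiceRecord:
    """Hypothetical schema for one donated sample: the recording plus
    the matched clinical and genetic metadata the researchers describe.
    All names here are illustrative, not the project's real format."""
    participant_id: str
    audio_path: str
    condition_group: str                               # e.g. "neurological", "respiratory"
    clinical_notes: dict = field(default_factory=dict)  # matched clinical data
    genetic_markers: dict = field(default_factory=dict) # matched genetic data

# One record out of a hoped-for ~30,000.
record = VoiceRecord("p-0001", "recordings/p-0001.wav", "voice disorder")
```

Keeping the metadata alongside each recording is what lets a model learn associations between how a voice sounds and the condition behind it.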

“We really want to build something scalable because if we can only collect data in our acoustic laboratories and people have to come to an academic institution to do that, then it kind of defeats the purpose.”


Legal issues may hinder the development of the AI

There are a few stumbling blocks. HIPAA, the law that governs medical privacy, is unclear on whether researchers can share voice recordings. Yael Bensoussan said on the matter, “Let’s say you donate your voice to our project. Who does the voice belong to? What are we allowed to do with it? What are researchers allowed to do with it? Can it be commercialized?”


Voices are generally identifiable, while other health data can be stripped of identifying details and used for the study. Every institution also has different rules about what can and cannot be shared, which raises a slew of ethical and legal questions that a team of bioethicists will investigate.
