Three studies from leading institutions, including Hebrew University, Google Research, and Caltech, have shed new light on the relationship between artificial intelligence and the human brain. The research suggests that AI models process language in a way that strikingly resembles biological neural activity, even as those models influence how humans speak in the real world.
The investigations use deep-learning frameworks and linguistic analysis to explore how AI aligns with brain function, how it alters our vocabulary, and how it can help simulate biological neurons.
The brain builds meaning like an LLM
A team led by Dr. Ariel Goldstein at Hebrew University, in collaboration with Google Research and Princeton, used electrocorticography (ECoG) to record direct electrical activity from the brains of participants listening to a 30-minute podcast. They compared these signals to the layered architecture of large language models (LLMs) like GPT-2 and Llama 2.
The study found a remarkable alignment:
- Early layers: The brain’s initial neural responses matched the shallow layers of AI models, which handle basic linguistic elements.
- Deep layers: Later neural responses, particularly in Broca’s area, aligned with deeper AI layers that process complex context and meaning.
“What surprised us most was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models,” said Goldstein. This suggests that despite their different structures, both the human brain and AI models construct meaning incrementally, layer by layer.
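The alignment described above is typically measured by asking how well a model layer's activity predicts neural activity recorded at a given moment. The following is a toy illustration of that idea, not the study's actual pipeline: it computes a Pearson correlation between two invented time series standing in for a shallow-layer activation and an early neural response.

```python
# Toy illustration (invented data, not the ECoG recordings): alignment between
# a model layer and a neural signal can be scored with a simple correlation.
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hand-made "layer activation" and "neural response" that rise and fall
# together, with a little noise on the neural side.
layer_activation = [0.0, 0.2, 0.5, 0.9, 1.0, 0.7, 0.3, 0.1]
neural_response  = [0.1, 0.25, 0.45, 0.85, 0.95, 0.75, 0.35, 0.05]

r = pearson(layer_activation, neural_response)
print(f"alignment r = {r:.3f}")  # close to 1.0 for well-aligned signals
```

In the published encoding-model work this is done with regularized regression across many electrodes and time lags, but the underlying question is the same: does this layer's signal track that brain region's signal over time?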
To support further discovery, the team has released the complete dataset of neural recordings to the public, allowing scientists worldwide to test alternative theories of language processing.
“Lexical Seepage”: AI is changing how we speak
In a separate investigation, linguist Tom Juzek from Florida State University analyzed 22 million words from unscripted podcasts to measure the impact of AI on human speech. Comparing data from before and after the release of ChatGPT in 2022, the study identified a phenomenon Juzek calls “lexical seepage.”
The research found a sudden surge in the use of specific words that AI models generate disproportionately often, while their synonyms showed no comparable increase. These words include:
- “Delve” (to investigate deeply)
- “Meticulous” (showing careful attention to detail)
- “Garner” (to gather or collect)
- “Boast” (referring to possessing a feature)
“AI may literally be putting words into our mouths,” Juzek said, “as repeated exposure leads people to internalize and reuse buzzwords they might not have chosen naturally.”
Unlike slang that spreads socially, this shift comes from algorithmic outputs found in texts and articles. The analysis raises questions about the potential standardization of human speech and the flattening of regional dialects under the influence of uniform AI terminology.
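The core measurement behind such a study is a before/after frequency comparison. Here is a minimal sketch of that comparison using two invented mini-corpora (Juzek's 22-million-word podcast corpus is not reproduced here): count how often the target words appear per thousand tokens in each period.

```python
# Sketch of a before/after word-frequency comparison (invented mini-corpora).
import re
from collections import Counter

TARGETS = {"delve", "meticulous", "garner", "boast"}

def rate_per_thousand(text, targets):
    """Occurrences of any target word per 1,000 tokens of text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    hits = sum(counts[t] for t in targets)
    return 1000 * hits / len(tokens)

# Stand-ins for pre- and post-ChatGPT transcripts.
before = "we looked into the data and gathered careful results " * 50
after = "let us delve into the data and garner meticulous results " * 50

print(rate_per_thousand(before, TARGETS))  # 0.0
print(rate_per_thousand(after, TARGETS))   # 300.0
```

A real analysis would also track the synonyms of each target word, since the study's key evidence is that the AI-associated words surged while their synonyms did not.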
NOBLE: Simulating neurons 4,200 times faster
At the NeurIPS conference, scientists from Caltech and Cedars-Sinai introduced NOBLE (Neural Operator with Biologically-informed Latent Embeddings). This new deep-learning framework can generate virtual models of brain neurons 4,200 times faster than traditional methods.
While traditional solvers must numerically integrate complex systems of differential equations, which demands heavy computing power, NOBLE uses neural operators to replicate the behavior of actual biological neurons, including their firing rates and responses to stimuli. This speed allows researchers to scale simulations to larger brain circuits involving millions of interconnected cells.
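To see why conventional simulation is expensive, consider even a toy leaky integrate-and-fire neuron, far simpler than the detailed biophysical models NOBLE targets: the solver must step a differential equation through time in tiny increments. All names and constants below are illustrative, not from the NOBLE paper.

```python
# Toy leaky integrate-and-fire neuron, Euler-integrated (illustrative only).
def simulate_lif(i_input, t_max=100.0, dt=0.01,
                 tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Euler-integrate dV/dt = (v_rest - V + i_input) / tau; count spikes."""
    v = v_rest
    spikes = 0
    steps = int(t_max / dt)  # 10,000 steps for a mere 100 ms of activity
    for _ in range(steps):
        v += dt * (v_rest - v + i_input) / tau
        if v >= v_thresh:    # threshold crossed: emit a spike and reset
            spikes += 1
            v = v_reset
    return spikes

# A stronger input current produces more spikes.
print(simulate_lif(i_input=20.0))
print(simulate_lif(i_input=30.0))
```

Multiply those tiny time steps across realistic multi-compartment models and millions of cells, and the cost of step-by-step integration becomes clear; a learned operator that maps stimulus to response in one pass sidesteps that loop entirely.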
The framework aims to accelerate research into brain disorders like epilepsy and Alzheimer’s by allowing scientists to test hypotheses rapidly without relying solely on limited animal or human experiments.