There is no one-size-fits-all model of the brain.
Machine learning has helped scientists understand how brain activity relates to complex human traits such as working memory, behaviors such as impulsivity, and conditions such as depression. Scientists can use these methods to develop models of these relationships, which can then be used to make predictions about people’s behavior and health.
However, models only work if they represent everyone, and past studies show that they don’t. For each model, there are certain individuals who just don’t fit.
In a recent study published in the journal Nature, researchers from Yale University analyzed whom these models tend to fail, why that happens, and what can be done to fix it.
According to the study’s lead author, Abigail Greene, an MD-Ph.D. student at Yale School of Medicine, models must be applicable to everyone in order to be most useful.
“If we want to move this type of work into clinical application, we need to make sure that the model is applicable to the patient sitting in front of us,” she said.
Greene and her colleagues are considering two approaches that they believe will help models provide more accurate psychiatric characterization. The first is more precise classification of patients. For example, a diagnosis of schizophrenia covers a wide range of symptoms and can vary greatly from person to person. With a better understanding of the neural underpinnings of schizophrenia, including its symptoms and subtypes, researchers may be able to classify individuals more accurately.
Second, some characteristics, such as impulsivity, cut across different diagnoses. Understanding the neural basis of impulsivity may help clinicians treat the symptom more effectively, regardless of the diagnosis it accompanies.
“And both developments will have implications for therapeutic responses,” Greene said. “The better we can understand these subgroups of individuals who may or may not have the same diagnoses, the better we can design treatments for them.”
But first, she said, models must apply to everyone.
To understand model failure, Greene and her colleagues first trained models that could use patterns of brain activity to predict how well a person would score on various cognitive tests. When tested, the models accurately predicted the scores of most individuals. But for some people, the predictions were wrong: the models predicted poor scores for people who actually scored well, and vice versa.
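The setup can be illustrated with a toy sketch. Everything here is hypothetical: simulated data stands in for real brain measures, and a simple least-squares fit stands in for the study's actual models. The point is only the mechanic of flagging "misclassified" individuals, i.e. those whose predicted and actual scores land on opposite sides of the median.

```python
import random
import statistics

random.seed(0)

# Hypothetical toy data: one "brain feature" per person, plus an education
# covariate that also shifts the observed test score (a built-in bias).
n = 200
people = []
for _ in range(n):
    ability = random.gauss(0, 1)                 # latent cognitive ability
    education = random.choice([0, 1])            # 1 = more formal education
    brain = ability + random.gauss(0, 0.5)       # brain feature tracks ability
    score = ability + 0.8 * education + random.gauss(0, 0.3)  # biased score
    people.append((brain, score, education))

# Fit a least-squares line predicting score from the brain feature alone.
xs = [p[0] for p in people]
ys = [p[1] for p in people]
mx, my = statistics.fmean(xs), statistics.fmean(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Flag "misclassified" people: predicted below the median score but actually
# above it, or vice versa.
med = statistics.median(ys)
miscl = [p for p in people
         if (slope * p[0] + intercept < med) != (p[1] < med)]

# Because education quietly inflates observed scores, prediction errors are
# systematically tied to education, not just to noise.
low_ed = sum(1 for p in miscl if p[2] == 0)
high_ed = len(miscl) - low_ed
print(len(miscl), low_ed, high_ed)
```

In this toy version, the model effectively learns a blend of ability and the education effect, so people who do not fit that blend are the ones it gets wrong, mirroring the pattern the researchers describe.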
The research team then looked at whom the models failed to classify correctly.
“We found that there was consistency — the same individuals were being misclassified across tasks and analyses,” Greene said. “And people who were misclassified in one data set had something in common with people who were misclassified in another data set. So there was actually something meaningful about the misclassification.”
Next, they wanted to see if these misclassifications could be explained by differences in those individuals’ brains. But there were no consistent differences. Instead, they found that misclassifications were related to sociodemographic factors such as age and education, and to clinical factors such as symptom severity.
Finally, they concluded that the models did not reflect cognitive ability alone. Instead, they reflected more complex “profiles”: combinations of cognitive abilities and various sociodemographic and clinical factors, Greene explained.
“And the models fail anyone who doesn’t fit that stereotypical profile,” she said.
As one example, the models used in the study associated more education with higher scores on cognitive tests. Individuals with less education who scored well did not fit the models’ profile and so were often incorrectly predicted to be low scorers.
Adding to the complexity of the problem, sociodemographic factors are entangled with the measurements themselves.
“Sociodemographic variables are embedded in cognitive test scores,” Greene explained. Essentially, how cognitive tests are designed, administered, scored, and interpreted can introduce bias into the results. And bias is an issue in other fields as well; she pointed to research describing how biased input data affects models used in criminal justice and health care.
“So the test scores themselves are a combination of cognitive ability and these other factors, and the model is predicting that combination,” Greene said. This means researchers need to think carefully about what a given test is actually measuring and, in turn, what a model is predicting.
The authors of the study made several recommendations for mitigating the problem. During the design phase of a study, they suggest, scientists should use strategies that reduce bias and increase the validity of the measures they use. And after collecting data, researchers should, whenever possible, use statistical approaches that correct for the biases that remain.
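One common post hoc correction of this general kind can be sketched as follows. This is an illustration under a simple assumption (a linear confound model), not the study's actual method: regressing a nuisance covariate such as education out of the scores removes its linear contribution before any predictive modeling.

```python
import random
import statistics

random.seed(1)

# Hypothetical toy data: observed test scores mix true ability with an
# education-related bias term.
n = 200
education = [random.choice([0, 1]) for _ in range(n)]
ability = [random.gauss(0, 1) for _ in range(n)]
scores = [a + 0.8 * e + random.gauss(0, 0.3)
          for a, e in zip(ability, education)]

def residualize(y, covariate):
    """Remove the linear effect of a covariate from y (one simple option)."""
    mc = statistics.fmean(covariate)
    my = statistics.fmean(y)
    beta = (sum((c - mc) * (v - my) for c, v in zip(covariate, y))
            / sum((c - mc) ** 2 for c in covariate))
    return [v - beta * (c - mc) for c, v in zip(covariate, y)]

adjusted = residualize(scores, education)

def gap(y):
    """Mean score difference between the two education groups."""
    hi = statistics.fmean([v for v, e in zip(y, education) if e == 1])
    lo = statistics.fmean([v for v, e in zip(y, education) if e == 0])
    return hi - lo

# After residualizing, the linear education-related gap is removed.
print(round(gap(scores), 2), round(gap(adjusted), 2))
```

For a binary covariate, the fitted slope equals the between-group mean difference, so residualizing drives that gap to zero exactly; with continuous or multiple covariates the same idea applies via multiple regression, though it only removes linear effects.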
Taking these steps will lead to models that better reflect the cognitive construct under study, the researchers said. However, they acknowledge that bias is unlikely to be eliminated completely, so it should be taken into account when interpreting a model’s results. Additionally, some measures may require more than one model.
“There will be times when you just need different models for different groups of people,” said Todd Constable, professor of radiology and biomedical imaging at Yale School of Medicine and senior author of the study. “One model does not fit all.”
Reference: “Brain-phenotype models fail for individuals who defy sample stereotypes” by Abigail S. Greene, Xilin Shen, Stephanie Noble, Corey Horien, C. Alice Hahn, Jagriti Arora, Fuyuze Tokoglu, Marisa N. Spann, Carmen I. Carrión, Daniel S. Barron, Gerard Sanacora, Vinod H. Srihari, Scott W. Woods, Dustin Scheinost, and R. Todd Constable, 24 August 2022, Nature.
DOI: 10.1038/s41586-022-05118-w