Grant recipients from academia, industry, and clinical practice will receive funding to develop new aging-related technologies.
The Johns Hopkins Artificial Intelligence and Technology Collaboratory for Aging Research, or JH AITC, has announced the recipients of its third round of grant funding. Totaling just over $1.5 million, this round supports the collaboratory’s mission by funding the development of artificial intelligence technologies to improve the health and independence of older adults.
Eight applicants from academia, industry, and clinical practice were selected through a competitive national grant review process. Their awards will support a diverse set of research projects and technologies aimed at improving the health and quality of life of millions of older Americans and their caregivers. Awardees will each receive up to $200,000 in direct costs over one year, as well as access to resources and mentorship from university experts.
Launched in 2021 with a $20 million grant from the National Institute on Aging, the JH AITC is a national hub for innovation in healthy aging and cross-disciplinary collaboration within the Johns Hopkins community and beyond. Its primary goal is to connect this research network with outside stakeholders, including older Americans and their caregivers, technology developers, and industry partners.
This round, the JH AITC has invested in research that leverages AI to support healthy aging and mitigate the effects of dementia.
Kimia Ghobadi, the John C. Malone Assistant Professor of Civil and Systems Engineering in the Whiting School of Engineering, is partnering with Dew-Anne Langcaon of Honolulu-based home care provider Vivia Cares, Inc. on a pilot project to build an AI-driven smart scheduling module that will help care providers save time and reduce costs.
“As the U.S. population ages, its need for home care visits is growing,” says Ghobadi. “Our goal is to use AI and optimization to build smart scheduling systems that can accommodate a diverse set of services for patients to best meet their needs—and can also optimize caregivers’ tasks to best utilize their time.”
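As a rough illustration of the kind of optimization such a scheduling module might perform, the sketch below pairs caregivers with home visits by minimizing a combined travel-and-skill-mismatch cost. The data, weights, and variable names are hypothetical placeholders, not the project's actual model.

```python
# A minimal sketch of caregiver-to-visit assignment as an optimization problem.
# All inputs and weights below are illustrative assumptions, not Vivia Cares data.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical travel time (minutes) from each caregiver to each visit.
travel_minutes = np.array([
    [15, 40, 25],
    [30, 10, 45],
    [20, 35, 15],
])

# Hypothetical flag for visits requiring a skill the caregiver lacks.
skill_mismatch = np.array([
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
])

MISMATCH_PENALTY = 60  # assumed weight: one mismatch "costs" an hour of travel

# Combine the objectives into one cost matrix and solve the assignment problem.
cost = travel_minutes + MISMATCH_PENALTY * skill_mismatch
caregivers, visits = linear_sum_assignment(cost)

for c, v in zip(caregivers, visits):
    print(f"Caregiver {c} -> visit {v} (cost {cost[c, v]})")
```

A real scheduling system would also handle time windows, recurring visits, and travel routes, but the same idea of encoding patient needs and caregiver time into a single cost to be minimized carries over.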
In another pilot, Laureano Moro-Velazquez, an assistant research professor in the Department of Electrical and Computer Engineering and a member of the Whiting School’s Center for Language and Speech Processing, aims to use non-invasive physiological measures to diagnose Alzheimer’s disease in its earlier stages.
“Most biomarkers are invasive, such as lumbar punctures, or very costly, such as PET scans,” says Moro-Velazquez. “In this project we propose a new AI-powered platform to obtain digital biomarkers more easily and cost-effectively by collecting speech, eye movement, and handwriting data simultaneously while patients perform cognitive tests.”
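As a rough illustration of how simultaneously collected modalities might be combined, the sketch below fuses hypothetical speech, eye-movement, and handwriting features into one vector per participant and cross-validates a simple classifier. The features, labels, and data are synthetic placeholders, not the project's actual pipeline.

```python
# A minimal sketch of multimodal feature fusion for screening, assuming features have
# already been extracted from speech, eye-movement, and handwriting recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_participants = 40

# Hypothetical per-modality features (e.g., pause rate, saccade latency, stroke velocity).
speech = rng.normal(size=(n_participants, 8))
eye = rng.normal(size=(n_participants, 5))
handwriting = rng.normal(size=(n_participants, 6))
labels = rng.integers(0, 2, size=n_participants)  # 0 = control, 1 = at-risk (placeholder)

# Early fusion: concatenate the modalities into one feature vector per participant.
X = np.hstack([speech, eye, handwriting])

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Cross-validated accuracy (synthetic data): {scores.mean():.2f}")
```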
The last Hopkins-led pilot is headed by Kelly Gleason, an assistant professor in the School of Nursing. The project aims to improve the use of large language models—which power applications like ChatGPT—in responding to caregivers of patients with Alzheimer’s disease and related dementias, or ADRD.
“Specifically, we will develop a guide for ‘prompt engineering’ to optimize large language models to better support caregivers of patients with ADRD, focusing on helping them recognize and support care partners and respond to dementia-related concerns,” says Gleason. “We believe this will improve the use of LLMs in health care, which is already happening—for example, ChatGPT is already embedded in multiple health systems.”
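As a rough illustration of what a prompt-engineering guide might recommend, the sketch below assembles a structured prompt that gives a model the caregiver's context before the concern. The wording, fields, and function names are illustrative assumptions rather than the guide Gleason's team will produce; the resulting messages could be passed to any chat-based LLM.

```python
# A minimal sketch of a structured caregiver-support prompt. The template text and
# fields are hypothetical examples of prompt engineering, not the project's guide.

SYSTEM_TEMPLATE = (
    "You are assisting a family caregiver of a person living with Alzheimer's disease "
    "or a related dementia. Respond in plain language, acknowledge the caregiver's "
    "role, and flag anything that needs a clinician's attention."
)

def build_caregiver_prompt(concern: str, care_context: str) -> list[dict]:
    """Assemble chat messages that give the model caregiver context before the question."""
    return [
        {"role": "system", "content": SYSTEM_TEMPLATE},
        {"role": "user", "content": f"Care context: {care_context}\n\nConcern: {concern}"},
    ]

messages = build_caregiver_prompt(
    concern="My mother keeps asking the same question every few minutes. How should I respond?",
    care_context="Daughter caring for a 78-year-old with moderate dementia at home.",
)
for message in messages:
    print(f"[{message['role']}] {message['content']}\n")
```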
Other pilots funded this round include:
- Characterizing and Stratifying Cognitive Impairment Using Cognitive and Speech AI (Shifali Singh, Harvard University): This project will rely on traditional, gold-standard metrics in combination with mobile, “real-time” ecological momentary assessments and actigraphy monitoring—a non-invasive method of tracking a person’s rest and activity cycles. Collecting both active and passive data from these diverse methods may allow for precise and accurate identification and characterization of cognitive impairment levels beyond what is traditionally achieved in the clinic using standard neuropsychological evaluations and “mild” and “major” neurocognitive disorder descriptors.
- AI-Enabled Personalized Training for Caregivers of Elders with ADRD (Neal Shah, CareYaya/YayaGuide): This project aims to revolutionize ADRD caregiving through technology and training, addressing the urgent need for skilled caregivers in an aging society. Its goals span development, training, evaluation, and expansion, positioning YayaGuide as an effective, scalable solution for ADRD caregiving.
- AI for Predicting Adverse Health Events in the Elderly Through Wearable Devices (Warren Pettine, Mountain Biometrics): This pilot proposes developing an AI system for detecting beta-blocker adherence in the elderly population; the modeling innovations from this project may also apply to detecting a wide variety of other adverse health events in the elderly, such as infection onset or general decline.
- Smartphone-Based Fall Prevention Therapy and Monitoring for Older Adults (Yannick Cohen, Brightway Health; Dennis Anderson, Beth Israel Deaconess Medical Center, Harvard Medical School): Brightway Physical Therapy is innovative remote therapeutic monitoring software that uses computer vision to analyze at-home physical therapy exercise technique and give patients real-time feedback.
- Helping Older Adults Hear in Noisy Social Situations Using Novel Hardware and AI (Shariq Mobin, AudioFocus): This pilot will develop a hearing assistive technology that uses a novel in-and-around-the-ear microphone array coupled with acoustics-informed deep learning models to effectively filter out distracting noise sources in acoustically loud and dynamic settings.