Digital Health Review
We build and amplify awesome things
About this work:
EPISTEMIC INJUSTICE IN HEALTHCARE AI PROJECT
Healthcare is rapidly going digital, yet AI models have not been evaluated for their epistemic impact.
We believe that epistemic injustice, the construction of incomplete knowledge bases in medical research, data, and practice, must be examined to better understand its adverse impacts on communities, because these data will inevitably influence existing and future training models.
“Ideas are easy. Implementation is hard.”
Our Approach
Our research marries institutional archival histories with real-world data to better understand how to use AI responsibly in a public health context.
By understanding the depth of epistemic injustice in healthcare systems and technology, we strive to unlock untapped potential for reducing harm while still driving innovation in the market.
Through this research, we aim to provide a framework for the successful development and use of AI that is grounded in an understanding of epistemic injustice.
This work will focus on:
- Use cases of specific medical research practices and the communities they affect
- Examination of health policies and laws
- Data examination and modeling
- Financial opportunities and market value in epistemically rated AI
- A report and guidelines on reducing bias in AI models
Every contribution matters. Make your opinions and expertise heard. Join us in this project!
The open call for contributions closes August 31, 2024.
This work is an unprecedented examination of the extent of epistemic injustice as it pertains to the expansion of health AI.
This project was made possible by a multi-institutional and industry consortium. As we embark on this work, we will collaborate with established national institutions, initiatives, and experts in medical research and health AI.
To launch this collaboration, we are proudly partnered with academic research, data modeling, and review institutions.
Contributing institutions include:
FAQs
What is epistemic injustice?
In simple terms, epistemic injustice results from "incomplete" knowledge bases in medical research, practices, and data. It can occur unintentionally through a lack of knowledge and diverse data, biased viewpoints about populations, and the perpetuation of those viewpoints in medical teaching, texts, and data collection. Most importantly, epistemic injustice undermines the foundational learning behind everything from clinical research to medical practice, harming populations through adverse health outcomes.
Where is health AI investment going?
The top use cases for the $7.2 billion invested in health AI in 2024 include patient diagnostics, administrative and clinical workflows, and therapeutic R&D.
Why does this work matter?
Without proper evaluation of epistemic injustice in healthcare data and practices, we are inadvertently building faulty systems that cause immediate harm and take generations to recover from. And as regulatory and federal bodies monitor digital health outcome data more closely, tracking deployed health AI systems and their outcomes will be needed more broadly in the future.
Who benefits from this project?
This project benefits health AI researchers, builders, policymakers, and the general public. With a contextual understanding of the extent of epistemic injustice in commonly used datasets, these stakeholders can ensure less harm is perpetuated as we continue to develop health AI.
Most importantly, innovators and funders of the next wave of health AI will benefit from a better understanding of the risk profile of the tools they are building and the investments they are making, leading to new discovery opportunities and better market clarity.
Who should contribute?
Health AI researchers, builders, policymakers, lawmakers, patient advocates, and funders who are invested in shaping the future infrastructure of healthcare AI should join as contributors.
Anyone able to contribute their experience, expertise, data access, or network to this project is welcome!