This article draws on recent medical research showing how potentially biased models informing our health care systems have influenced COVID-19 care. These biased models could exacerbate the pandemic’s impact on people of colour. AI is not intrinsically objective, and any biases inherent in these technologies could reinforce racial structural injustices within […]
This article discusses mobile apps that aid the self-diagnosis of skin conditions. Although the apps are intended to be inclusive of all skin types, the training data was revealed to contain only 3.5% images of darker-skinned patients. Ninety per cent of the database was made up of people with fair skin, darker white skin, […]
Debiasing artificial intelligence: Stanford researchers call for efforts to ensure that AI technologies do not exacerbate health care disparities
Medical devices utilising AI technologies stand to reduce general biases in the health care system; however, if left unchecked, these technologies could unintentionally perpetuate sex, gender, and race biases. The AI devices rely on data-driven algorithms to inform health care decisions and aid in the diagnosis of diseases. After examining the biases inherent in these […]
An estimated one million black adults would be transferred earlier for kidney disease if US health systems removed a ‘race-based correction factor’ from an algorithm they use to diagnose people and decide whether to administer medication. There is a debate surrounding whether race-based correction should be removed: on the one hand, it perpetuates […]
AI technologies are being used to diagnose Alzheimer’s disease by assessing speech, which could aid early diagnosis. However, the algorithms behind this technology appear to be trained on and for a specific tone of voice, excluding people of colour from its benefits.
A team of medical ethics researchers argues that bias and discrimination within AI design and deployment risk exacerbating existing health inequity. The COVID-19 pandemic has disproportionately affected disadvantaged communities, and the uncritical deployment of AI in the fight against COVID-19 risks amplifying the pandemic’s adverse effects on vulnerable groups by exhibiting racial biases. Although […]
This article explores how the pulse oximeter, a device used to test blood-oxygen levels in coronavirus patients, exhibits racial bias. Medical journals give evidence that pulse oximeters overestimated blood-oxygen saturation more frequently in black patients than in white patients.
COVID-19 care has brought the pulse oximeter, a medical device that measures your oxygen saturation levels, into the home. This article examines research showing oximetry’s racial bias: oximeters have been calibrated, tested, and developed using light-skinned individuals. For a non-white person, inaccurate readings could be fatal.
Health monitoring devices influence the way that we eat, sleep, exercise, and perform our daily routines. But what do we do when we discover that the technology we rely on is built on faulty methodology and legacy effects of racial bias?
An estimated 40 million people in the US alone own smartwatches or fitness trackers that can monitor heart rate. However, some people of colour may be at risk of inaccurate readings: heart-rate trackers rely on technology that is designed for and tested on lighter-skinned individuals, meaning the technology could be less reliable for […]