Health monitoring devices have gained popularity over the past few years and hold promise in helping people reach their wellness goals. However, these devices rely on algorithms trained on unrepresentative data, which leaves ethnic minorities vulnerable to their ineffectiveness.
Algorithms can be used to decide when to withhold loans, mortgages and even bank accounts, on the basis of who is likely to make money for the bank. Minority ethnic groups can be disproportionately disadvantaged by these prediction systems, being judged not to meet the lending criteria even when others with the same financial status are approved.
Minority ethnic groups are severely under-represented among people employed in AI. This is a serious cause for concern: without a diverse set of AI creators, our AI models are bound to have blind spots that bias against and ultimately harm minority ethnic groups.
Computer vision technology has a harder time recognising people with darker skin or from Asian backgrounds. One main reason for this is that the datasets companies use to train facial analysis software consist mainly of white examples and are not truly representative of the diversity in society.
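Disparities like this are typically surfaced by disaggregating a model's evaluation metrics by demographic group. The sketch below shows the basic calculation with a minimal per-group accuracy audit; the group labels, predictions and numbers are entirely hypothetical and are only there to illustrate the idea, not to reflect any real system.

```python
# Illustrative sketch (hypothetical data): computing accuracy separately
# for each demographic group to reveal disparate error rates.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    # Accuracy per group: fraction of predictions that matched the truth.
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation results for a face-matching model:
# group_a is recognised correctly every time, group_b only half the time.
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]
print(accuracy_by_group(records))  # group_a: 1.0, group_b: 0.5
```

An overall accuracy figure for this toy dataset would be 75%, hiding the fact that the model fails half the time for one group; this is exactly why aggregate benchmarks can mask the harms described above.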
AI for recruiting is the application of artificial intelligence (such as machine learning, natural language processing and sentiment analysis) to the recruitment function. It has the potential to mitigate bias, but is also likely to amplify it.
AI has the potential to unlock new jobs by enabling activities that are dangerous for humans or not yet possible, such as deep-sea diving or space exploration.
AI-equipped robots will be able to perform dangerous or repetitive physical tasks, thus creating safer workplaces (e.g., wearable robots that support human strength may prevent musculoskeletal disorders).
AI-assisted automation of tasks will eliminate jobs in an uneven way across ethnic groups, further marginalising groups that are already at risk of lower pay and career advancement.
Technology to protect remote exam taking against fraud has quickly become necessary as exams move online. Combinations of machine learning algorithms, facial recognition and eye-tracking software are used to verify that the person taking the exam is who they claim to be and to detect cheating.