The use of Artificial Intelligence (AI) is growing rapidly across the globe in a range of fields, including healthcare, agriculture, education, finance, manufacturing, legal services, and human resources. In the field of human resources, AI is being harnessed to reduce the need for human intervention in various processes such as recruitment, payroll and performance management.
To foster responsible use of AI, the United Nations Educational, Scientific and Cultural Organization (UNESCO) has established 10 core principles of the human rights-centred approach to the ethics of AI. These principles include, among others, ‘human oversight and determination’ and ‘fairness and non-discrimination’. This article focuses on these two principles.
The principle of human oversight and determination means that AI systems should not displace the responsibility and accountability that ultimately lies with human beings. This is important given that AI systems mimic human intelligence by processing large amounts of data to make decisions. As such, it is critical to ensure that a natural person retains ultimate authority and control over the system making these decisions. Employers who rely on AI are therefore advised to ensure that a natural person can oversee the operations of the system. In the European Union, the need for such a mechanism has already been provided for in the EU Artificial Intelligence Act (Regulation 2024/1689).
As regards the principle of fairness and non-discrimination, it is important for employers to ensure that the AI systems they rely on do not result in unfairness or discrimination. This concern is not academic, as it has been established that AI-driven systems are subject to biases emanating from their human creators. Use of such systems without appropriate safeguards may therefore expose an employer to discrimination-based claims founded on, among others, Article 27 of the Constitution of Kenya, 2010 and the Employment Act, 2007, which both prohibit direct and indirect discrimination.
An employer in the United States of America (iTutorGroup Inc.) paid USD 365,000 to settle a lawsuit filed against it by the United States Equal Employment Opportunity Commission (the EEOC). The case was anchored on the allegation that iTutorGroup Inc. had used an AI-powered hiring system that discriminated against women aged 55 and older and men aged 60 and older by automatically rejecting them.
Part of the settlement terms in the iTutorGroup Inc. case entailed giving the affected applicants an opportunity to re-apply, with a report thereafter provided to the EEOC on which applicants were considered, the outcome of each application and a detailed explanation where an offer was not made. Notably, the sum of USD 365,000 was to be distributed among the more than 200 applicants who were automatically rejected by iTutorGroup Inc.’s software. This case demonstrates that where human oversight is inadequate, the reputational and financial cost to an organisation of AI-based discrimination claims can be significant.
The case further emphasizes the importance of human oversight and of having clear policies guiding the use of AI systems in sensitive employment-related processes such as recruitment. Employers will likely be held accountable, and potentially liable, for decisions made by the AI tools that they use, whether directly or through third parties. For this reason, it is prudent for employers to obtain legal advice and to continually test and monitor the AI tools relied upon in their processes. This will reduce the risk of legal claims founded on bias, unfairness and discrimination committed by AI systems.