The Role of the Board in Artificial Intelligence Ethics

Monday, 30 May 2022

Authored by: Xitshembhiso Russel Mulamula, Cert.Dir, PhD Candidate (AI Governance) 

Artificial intelligence (AI) is seen as an enabler of achievement in fields such as education, science, healthcare and climate change. AI promises enormous benefits for economic growth, social development, safety improvement and human well-being. As AI use expands into new areas, companies may find it increasingly important to adopt it to avoid the competitive disadvantages of not doing so. However, issues such as data bias, privacy, and ethical problems pose significant risks for humanity and societies. This opinion piece was motivated by the increasing ethical controversies surrounding the application of AI technologies.

The Social Dilemma, a chilling documentary on Netflix, shows former top Silicon Valley executives sharing ethics concerns about the systems they helped create. It reveals the extent to which social media and other tech platforms can isolate and manipulate users – reinforcing existing interests and viewpoints while stoking societal divisions. It shows how these tech companies profit from disseminating disinformation or fake news rather than the truth. Thus, it sparks a debate on how to embed ethics in the development of AI to avoid the potential harms of algorithms and machine learning. 

As AI advances, a critical issue to address is the ethical and moral challenges associated with it, such as algorithmic discrimination in facial recognition systems. Several studies have found that facial recognition systems misidentify African and Asian people more often than white people. In some studies, African American and Asian faces were up to 100 times more likely to be misidentified than those of white men, depending on the particular algorithm and type of search.

Another case of algorithmic bias or discrimination was that of Amazon, whose machine-learning recruiting algorithm showed bias against women. The algorithm was trained on resumes submitted over the preceding 10 years and on which candidates were hired. Because most of those hired were men, the algorithm learned to favour men over women. Other ethical concerns in AI include the IBM Watson–MD Anderson partnership, where the AI system failed to give suitable treatment recommendations to patients, and the Microsoft-developed AI chatbot, which was released but soon withdrawn after it had learned Nazi propaganda and racist insults.
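The mechanism behind the recruiting case can be illustrated with a minimal, purely hypothetical sketch (the data and frequency-counting "model" below are invented for illustration and are not Amazon's actual system): a model fitted to historically skewed hiring outcomes simply reproduces that skew.

```python
# Hypothetical sketch: a model trained on skewed historical hiring data
# reproduces the skew. Invented data; not any real company's system.

from collections import defaultdict

# Historical records: (gender, hired) -- men were hired far more often.
history = [("male", True)] * 80 + [("male", False)] * 20 \
        + [("female", True)] * 10 + [("female", False)] * 40

def train(records):
    """Learn P(hired | gender) by simple frequency counting."""
    counts = defaultdict(lambda: [0, 0])  # gender -> [hired, total]
    for gender, hired in records:
        counts[gender][0] += hired
        counts[gender][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(history)

# The "model" now scores male candidates higher purely because of the
# historical imbalance, not because of any individual merit signal.
print(round(model["male"], 2))    # 0.8
print(round(model["female"], 2))  # 0.2
```

The point of the sketch is that no one programmed a preference for men; the bias arrived entirely through the training data.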

In South Africa, we face further challenges stemming from structural biases, security, privacy and the accuracy of collected information. Machine learning algorithms in institutions such as banks and insurance companies can be used to recommend applications for approval. Even if such algorithms are deliberately “blinded” to an applicant’s race, gender or class, AI decision-making can still be impaired if the system is trained on inaccurate or biased data. If algorithms go wrong, or someone with bad intentions manipulates a system, the repercussions could be terrible, potentially extending to the loss of life and ultimately to reputational damage for corporations.
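Why “blinding” an algorithm is not enough can be shown with another hypothetical sketch (the applicant records and postcode proxy below are invented for illustration): when a feature correlated with a protected attribute remains in the data, a model that never sees the attribute still reproduces the gap.

```python
# Hypothetical sketch: removing a protected attribute does not remove bias
# when a correlated proxy (here, postcode) stays in the data. Invented data.

# Toy applicant records: postcode correlates strongly with group membership.
applicants = [
    {"group": "A", "postcode": "1000", "approved": True},
    {"group": "A", "postcode": "1000", "approved": True},
    {"group": "A", "postcode": "1000", "approved": True},
    {"group": "B", "postcode": "2000", "approved": False},
    {"group": "B", "postcode": "2000", "approved": False},
    {"group": "B", "postcode": "2000", "approved": True},
]

def approval_rate_by(feature, records):
    """Approval rate per value of `feature`; `group` is never used directly."""
    totals, approved = {}, {}
    for r in records:
        key = r[feature]
        totals[key] = totals.get(key, 0) + 1
        approved[key] = approved.get(key, 0) + r["approved"]
    return {k: approved[k] / totals[k] for k in totals}

# A "blinded" model scoring only on postcode still reproduces the group gap,
# because postcode acts as a proxy for group membership.
rates = approval_rate_by("postcode", applicants)
print(round(rates["1000"], 2))  # 1.0
print(round(rates["2000"], 2))  # 0.33
```

This is why boards should ask not only whether protected attributes were removed, but whether the remaining data encodes them indirectly.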

These are ethical concerns with which boards should be preoccupied as we move towards the Fourth Industrial Revolution. King IV™ explains that the board must govern ethics to ensure that the ethical culture within an organisation aligns with the tone set by the board. This is done through the implementation of appropriate policies and practices. Therefore, it is the responsibility of the board and executive management to ensure that technological advancements in AI align with the organisation’s ethical culture. King IV™ also offers guidance on aligning technological advancements such as AI with ethical principles:

  • Ethical leadership: The board should lead ethically and effectively. The board must set the tone from the top and provide leadership and guidance on how the organisation should exploit AI technologies.
  • Organisational values, ethics and culture: The governing body should govern the organisation’s ethics in a way that supports the establishment of an ethical culture. The board must ensure that the Code of Ethics and Code of Conduct guide the responsible development of policing technologies and AI features in products and services, including when to use and not use AI. The content and principles embodied in the Code of Ethics and Code of Conduct must be integrated into employee training.
  • Responsible corporate citizenship: The board should oversee governance and activities that demonstrate the company’s good corporate citizenship. It must ensure AI compliance with the Constitution, laws and standards as well as company policies and procedures, strategy, and the Codes of Conduct and Ethics. The board should also ensure, through engagement with stakeholders and communities, that AI advancements improve the material well-being of the societies in which the organisation operates. This will ensure that racial discrimination, data bias, and privacy are addressed at all product design stages and will further secure buy-in from society.

AI ethics are not merely about “right or wrong” or “good or bad”, and there is no quick solution to AI's ethical and moral issues. However, the ethical and moral issues associated with AI are critical and need to be addressed by boards. The future of humanity may very well depend on the correct development of AI ethics.