Generative AI and the Ethics Race: Are We Leaving Responsibility Behind?

The current situation reveals a stark contrast between the pace at which industry is advancing and the slower progress observed in academia and civil society. While policy, trust, and safety work move ahead at a moderate pace, industry is sprinting ahead without passing the baton of ethics, safety, and governance. The significance of Artificial Intelligence (AI) is well known: it carries a complex set of expectations, ideologies, desires, and fears, and it exerts a major influence on many aspects of our lives.

In a short span of time, Generative AI and Large Language Models (LLMs) have become widespread and have captured the attention of the masses. Yet despite the enthusiastic reception these tools have received, discussions surrounding LLMs and other Generative AI tools rarely delve into questions of responsibility, accountability, and exploited labour.

While the public is captivated by the seemingly limitless potential of these technologies, it is crucial that we treat these issues with the utmost seriousness. Without such discussions, we risk a situation in which the benefits of LLMs and other Generative AI tools are celebrated while their potential harms are overlooked.

Generative AI is an advanced form of AI technology that can produce several types of content, including text, imagery, audio, and synthetic data. The recent buzz around Generative AI has been driven by the simplicity of new user interfaces that can create high-quality text, graphics, and videos in mere seconds.

The technology behind Generative AI is not entirely new; it first appeared in chatbots in the 1960s. However, it was not until 2014, with the introduction of Generative Adversarial Networks (GANs) – a type of machine learning algorithm – that Generative AI could create convincingly authentic images, videos, and audio of real people.
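At its core, a GAN pits two neural networks against each other: a generator that produces synthetic samples and a discriminator that tries to tell them apart from real data. The sketch below is a minimal, illustrative example in PyTorch using a toy one-dimensional distribution rather than images; every model size, learning rate, and number here is an assumption chosen for brevity, not a production recipe.

```python
import torch
import torch.nn as nn

# Toy "real data": samples from a 1-D Gaussian the generator must learn to imitate.
def real_batch(n):
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G in this step
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into labelling fakes as real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should approach the real mean of 2.0
```

The same adversarial loop, scaled up to convolutional networks and image data, is what made photorealistic synthetic faces and videos possible.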

Another type of Generative AI model that has gained much attention is the Large Language Model (LLM). As the name suggests, LLMs are models trained on substantial amounts of text data from across the internet, such as Wikipedia, scientific articles, books, research papers, blogs, forums, and websites. This training enables the model to generate new content resembling the text data it has been trained on.
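Under the hood, this training typically amounts to next-token prediction: given the text so far, the model learns to guess what comes next. Below is a deliberately tiny, illustrative sketch of that objective using a character-level model in PyTorch; the corpus, architecture, and settings are toy assumptions, orders of magnitude smaller than a real LLM.

```python
import torch
import torch.nn as nn

# A tiny character-level corpus standing in for web-scale text.
text = "the cat sat on the mat. the dog sat on the rug."
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

# A minimal language model: embed each character, predict the next one.
class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, idx):
        h, _ = self.rnn(self.embed(idx))
        return self.head(h)  # logits over the next character at each position

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Shift the sequence by one: at each position, the target is the next character.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(300):
    logits = model(x)
    loss = loss_fn(logits.view(-1, len(vocab)), y.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, sampling from such a model one token at a time yields text in the style of its corpus; real LLMs apply the same objective at vastly larger scale with transformer architectures.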

The disparity between the pace of progress in industry and academia has become increasingly apparent. According to a recent study by Stanford University, until 2014 the most significant machine learning models were developed in academia. In the years since, however, industry has surged ahead. As of 2022, industry produced thirty-two significant machine learning models, dwarfing the output of academia. Creating innovative AI systems requires vast quantities of data, computing power, and funding, and big tech companies possess far more of these resources than civil society or academia.

Yet the consequences of this trend are not all positive. Partly as a result of this imbalance, incidents related to the ethical misuse of AI have increased twenty-six-fold since 2012, according to recent statistics from AIAAIC (‘AI, Algorithmic, and Automation Incidents and Controversies’), an independent, non-partisan, public-interest initiative. These incidents range from AI-generated images of a convicted American rapper performing in prison to a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering, among many others. The need for responsible use and governance of AI is now more pressing than ever.

We are standing at a crossroads where it has become imperative to confront tough questions surrounding the development and integration of AI. What political implications does it hold? Whose interests are being served, and who bears the greatest risks? How can we regulate the use of AI?

Generative AI (GenAI) – deep learning models that can produce outputs beyond text, such as images or audio – is a disruptive technology with the potential to revolutionize fields including education, media, art, and scientific research. However, the impact of GenAI on both the production and consumption of science and research is particularly worrisome, because it requires domain expertise to detect when GenAI has “hallucinated” or generated falsehoods that are confidently passed off as the truth. This underscores the need for vigilant oversight and thoughtful regulation of GenAI and any other forms of AI likely to have significant societal impacts.

Throughout history, disruptive technologies have sparked a spectrum of emotions ranging from great hope to deep-seated fear. The emergence of certain revolutionary innovations has stirred anxiety as society grappled with their unknown implications. The printing press, for instance, was met with trepidation, with some fearing it would erode moral values and lead to societal decay. Similarly, fast-moving automobiles were believed to damage people’s internal organs, while the telephone was thought to wreak havoc on the cherished institution of the family.

While many of these fears proved unfounded, other unforeseen dangers surfaced, such as the significant environmental impact of personal automobiles. Reliably predicting the social and economic impacts, risks, and development pathways of disruptive technologies is therefore no easy task. Even so, we must not stop scanning the horizon; rather, we should periodically reassess the risks and benefits of emerging technologies. By doing so, we can balance their potentially transformative benefits against their inherent risks and ensure they are used as responsibly and beneficially as possible.

It is crucial to recognize that advanced systems like LLMs, foundation models, and GenAI technologies cannot truly “inhabit” the complex and dynamic reality that human beings occupy. Humans engage in collaborative interactions that build and reinforce a shared world of experience, using the powerful agency of language to convey intention, establish truth, and solve problems. These systems, by contrast, lack the capacities for intersubjectivity, semantics, and ontology that such interactions require.

Despite their impressive feats of rhetorical prowess, even systems like ChatGPT cannot navigate the ever-changing landscape of scientific reasoning or contribute to the process of scientific meaning-making, which requires a deep understanding of complex scientific concepts and the ability to engage in ongoing, nuanced discussions with other experts in the field.

One crucial question is how best to assess the potential biases present in the training data sets of these systems, as well as other social, statistical, and cognitive biases that may emerge during their creation and use.
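To make this concrete, one of the simplest audits counts how often identity terms co-occur with particular descriptors in the training text. The sketch below is a deliberately crude, illustrative example in Python: the mini-corpus, occupations, and pronoun lists are invented for demonstration, and a real audit would run far richer statistical tests over the actual training data.

```python
from collections import Counter
import re

# Hypothetical mini-corpus; a real audit would scan the actual training data.
corpus = """The engineer fixed her code. The nurse finished his shift.
The engineer reviewed his design. The engineer shipped his patch.
The nurse checked her charts. The nurse updated her notes."""

# Count which pronouns co-occur with each occupation in the same sentence:
# a crude proxy for gendered associations baked into the text.
pairs = Counter()
for sentence in re.split(r"[.\n]+", corpus.lower()):
    words = set(re.findall(r"\w+", sentence))
    for role in ("engineer", "nurse"):
        for pronoun in ("his", "her"):
            if role in words and pronoun in words:
                pairs[(role, pronoun)] += 1

for (role, pronoun), n in sorted(pairs.items()):
    print(f"{role:10s} ~ {pronoun}: {n}")
```

Even this toy count makes the point: if ‘engineer’ co-occurs mostly with ‘his’ and ‘nurse’ with ‘her’ in the training text, a model trained on that text is likely to reproduce the association.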

It is also important to consider the effect LLMs may have on existing biases: they may amplify them, introduce new ones, or, in some cases, help mitigate them. These are open-ended questions that require careful consideration and analysis, as they have the potential to shape the future of AI development and deployment.

In conclusion, it is vital for civil society and the scholarly community to remain vigilant in monitoring the ongoing development and use of LLMs and other AI technologies. It is essential that AI research laboratories prioritize research and robust stakeholder consultations to develop more reliable detectors that can identify potential harms and biases.

While easy answers may not be readily available, it is equally important not to characterize the situation as irresolvable or as having passed a point of no return. Academia and civil society organizations must forge a steadfast partnership with industry, with a shared commitment to identifying and implementing solutions that balance the rapid pace of technological progress with the well-being of humanity at large. Only through such dedicated collaboration can we hope to achieve a sustainable future that is both ethical and technologically advanced.

Author: Osei Manu Kagyah, Member, Institute of ICT Professionals Ghana (IIPGH), works at the nexus of technology and society as a Technology Policy Advocate / Analyst. For comments, contact kagyahosei@gmail.com
