Article by: Shivani Naidoo
The resounding buzz in the technology realm regarding artificial intelligence (AI) stems from the emergence of a number of remarkably capable AI programs, driving transformative advances across various sectors. Most notably, ChatGPT, an advanced AI language model developed and released by OpenAI in November 2022, amassed an estimated 100 million monthly active users by January 2023, making it the fastest-growing consumer application in history. ChatGPT utilises deep-learning techniques to generate contextual responses that impressively mimic human writing styles.
AI technology transcends novelty; it is here to stay. And as disruptive technology goes, AI has evoked concerns ranging from the extreme (a fear of an AI apocalypse) to the more conservative, such as apprehension about job losses due to automation. AI has seamlessly integrated into our lives, often going unnoticed. It appears in everything from facial recognition and predictive text on our smartphones, to AI-powered résumé screening and ranking by large corporates. However, the emergence of generative AI programs that create images and write text has blurred the line between human and machine, raising unnerving questions.
Despite these advancements, however, AI has not yet crossed a significant threshold. It can be categorised into two main types: “narrow AI”, which performs only one narrowly defined task (or a small set of related tasks), as is the case with ChatGPT; and “general AI”, which denotes systems that display intelligent behaviour across a range of cognitive tasks, akin to Jarvis or KITT. The AI systems currently in use fall under narrow AI: they are capable of generating text but lack self-awareness. Nevertheless, the concept of “deep learning” has revolutionised narrow AI: unlike traditional programs, which require explicit human instruction, deep-learning programs can teach themselves from massive datasets with minimal guidance.
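To make that distinction concrete, the contrast between explicit instruction and learning from data can be sketched in a few lines of Python. The example below is purely illustrative: the functions, the training data and the “long contract” task are invented for this article and do not come from any real system.

```python
# Illustrative sketch only: a hand-written rule versus a rule "learned"
# from labelled examples. All names and data here are hypothetical.

# Traditional approach: a human writes the rule explicitly.
def is_long_contract_explicit(num_pages: int) -> bool:
    return num_pages > 50  # threshold chosen by a human programmer

# Learning approach: the program derives the threshold from labelled data.
def learn_threshold(examples: list[tuple[int, bool]]) -> int:
    """Return the smallest page count labelled 'long' in the training data."""
    long_docs = [pages for pages, is_long in examples if is_long]
    return min(long_docs)

# Hypothetical training data: (page count, was labelled a "long" contract).
training_data = [(10, False), (30, False), (60, True), (90, True)]
threshold = learn_threshold(training_data)

def is_long_contract_learned(num_pages: int) -> bool:
    return num_pages >= threshold  # rule inferred from data, not hand-coded
```

Real deep-learning systems learn far richer patterns than a single threshold, but the principle is the same: the behaviour comes from the data, not from a programmer's explicit instructions.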
This remarkable progress has led to applications that go beyond generating text. While concerns arise regarding job displacement, automation has historically transformed existing roles and created new employment opportunities. Lawyers may find themselves collaborating with AI rather than being replaced by it, and gaining a competitive edge in the process. While AI can augment certain tasks and streamline processes, it is unlikely to entirely replace the role of lawyers. Instead, those who embrace AI and leverage its capabilities will be better positioned to navigate the evolving legal landscape.
Useful AI Disrupters in the Legal Industry:
- Legal Research: AI can enhance legal research by efficiently analysing vast amounts of legal data, providing relevant information and precedents.
- Document Review: Machine-learning algorithms streamline document review by analysing contracts, emails and legal briefs, identifying important details and expediting the review process.
- Contract Analysis and Drafting: AI can aid in contract analysis, management and drafting by automating tasks, ensuring consistency and extracting key terms and clauses.
- Predictive Analytics: AI algorithms can leverage historical legal data to predict case outcomes, assess risks and inform legal strategies.
- Legal Chatbots and Virtual Assistants: Legal chatbots and virtual assistants can offer basic legal information, guidance and support, increasing access to justice for individuals without direct legal assistance.
- Improved Efficiency and Cost Savings: AI automation improves efficiency and saves costs by handling repetitive tasks, enabling legal practitioners to focus on more complex work.
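As a purely illustrative sketch of the predictive-analytics idea above, a naive predictor might estimate the likelihood of success for a new matter from the win rate observed in historical cases of the same type. The data, categories and function names below are invented for illustration; real systems model many more factors.

```python
from collections import defaultdict

# Hypothetical sketch: predict a new case's prospects from historical
# win rates per claim type. All data here is invented.

def win_rates(history: list[tuple[str, bool]]) -> dict[str, float]:
    """Map each claim type to its historical win rate (wins / total cases)."""
    wins = defaultdict(int)
    totals = defaultdict(int)
    for claim_type, won in history:
        totals[claim_type] += 1
        wins[claim_type] += won  # bool counts as 0 or 1
    return {t: wins[t] / totals[t] for t in totals}

# Invented historical record: (claim type, whether the claim succeeded).
history = [
    ("contract", True), ("contract", True), ("contract", False),
    ("delict", False), ("delict", True),
]
rates = win_rates(history)  # e.g. rates["contract"] is 2 wins out of 3
```

A genuine predictive-analytics product would weigh far more features (forum, presiding judge, quantum, precedent), but the underlying idea is the same: historical outcomes inform an estimate for the new matter.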
AI Challenges in the Legal Industry:
The “black box” problem
As the inner workings of AI systems become increasingly complex, it becomes difficult to understand how they arrive at their decisions or predictions. This lack of transparency raises concerns about accountability and fairness. If AI is extended to make legal decisions, we may face difficulties in explaining the rationale behind AI-generated outcomes, potentially compromising the right to a transparent and just legal process.
Hallucinations
AI models, especially those using generative techniques, can create seemingly realistic but entirely fabricated content, known as “hallucinations”. In legal contexts, this can have severe consequences, such as the generation of false evidence, misleading legal arguments or the dissemination of inaccurate information. This risk highlights the need for robust verification mechanisms and human oversight to ensure the integrity and accuracy of AI-generated outputs.
Biased training data
AI models are trained on large datasets, and if those datasets contain biases or reflect societal inequalities, the AI systems can inadvertently perpetuate and amplify them. This can result in biased legal decisions, unfair outcomes and unequal treatment. Careful attention must be given to data collection, pre-processing and ongoing monitoring to mitigate the risks of biased AI in the legal domain.
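The ongoing monitoring mentioned above can start with something as simple as comparing favourable-outcome rates across groups, a version of the “four-fifths rule” used in employment-discrimination analysis. The sketch below is hypothetical: the groups, decisions and data are invented for illustration.

```python
# Hypothetical bias check: compare favourable-outcome rates between two
# groups in a model's decisions. All data here is invented.

def favourable_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Fraction of favourable (True) outcomes for the given group."""
    group_outcomes = [ok for g, ok in decisions if g == group]
    return sum(group_outcomes) / len(group_outcomes)

# Invented decision log: (group label, whether the outcome was favourable).
decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 5 + [("B", False)] * 5

rate_a = favourable_rate(decisions, "A")
rate_b = favourable_rate(decisions, "B")
# Under the four-fifths rule, a ratio below 0.8 flags a possible disparity.
disparate_impact = rate_b / rate_a
```

Passing such a check does not prove a system is fair; it is merely one of many monitoring signals that should accompany human oversight.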
South African Regulation of AI (or lack thereof):
While South Africa has not yet formalised policy documents or introduced bills in parliament to regulate AI, it is evident that the existing laws contained in the Protection of Personal Information Act (POPIA) and the Electronic Communications Act are inadequate. The current issue with AI is not its intelligence, but rather its inherent unpredictability. We cannot always control how it responds to new guidance, which poses a significant challenge as we increasingly rely on AI in impactful applications.
Where to From Here?
The European Union’s approach to AI regulation is noteworthy: it is establishing a framework that categorises AI applications by level of risk. High-risk systems, such as those used in employment or public services, or those posing risks to life and health, would be subject to stringent obligations before being allowed onto the market. These obligations would encompass data-quality screening, programming transparency, human oversight, reliability and accuracy testing, and cybersecurity.
Where AI advancements are being pioneered by countries that do not share South Africa’s context, it is important that our regulation of AI is guided by the values embodied in our Constitution, to promote a just and equal society. By addressing the challenges that arise head-on with a combination of technical solutions, ethical considerations and regulatory frameworks, the legal profession can harness the power of AI while upholding fairness, justice and the rule of law.