AI Takes the Gavel: Contract Law’s New Sidekick in Automated Decision Making

By Jash Mistry

 

INTRODUCTION

 

What is AI? There are many ways to answer this question, but one place to begin is to consider the types of problems that AI technology is often used to address. In that spirit, we might describe AI as using technology to automate tasks that “normally require human intelligence.” This framing emphasizes that the technology often focuses on automating functions that would constitute intelligence if performed by humans. Some examples help illustrate this picture. Researchers have successfully used artificial intelligence to automate complex activities such as playing chess, translating languages, and driving vehicles. Why are these AI tasks rather than mere automation tasks? Because they all share a common characteristic: when people perform these activities, they use higher-order cognitive processes associated with human intelligence. For example, when humans play chess, they draw on cognitive skills such as reasoning, strategy, planning, and decision-making. Similarly, when a person translates from one language to another, the higher centers of the brain are engaged to process symbols, context, language, and meaning. Finally, when a person drives, a variety of brain systems are used, including those related to vision, spatial awareness, situational awareness, movement, and judgment. Machine learning is the predominant approach in AI today; common techniques that readers may have heard of include neural networks/deep learning, the naive Bayes classifier, logistic regression, and random forests. Now that we have discussed AI in general, we can turn to how it can be used in law. The essence of “Artificial Intelligence and the Law” is the use of computational and mathematical methods to make the law more understandable, manageable, useful, accessible, and predictable.
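To make one of the techniques named above concrete, the following is a minimal sketch, in plain Python, of a naive Bayes text classifier that labels short clause-like snippets. The training sentences, labels, and class names are invented purely for illustration; real legal-AI systems train far larger models on far larger corpora.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Toy multinomial naive Bayes classifier for short texts
    (one of the machine-learning techniques mentioned above)."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of samples
        self.vocab = set()

    def train(self, samples):
        for text, label in samples:
            words = text.lower().split()
            self.label_counts[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(self.label_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for w in words:
                score += math.log(
                    (self.word_counts[label][w] + 1) / (n + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical labeled snippets, purely for demonstration.
clf = NaiveBayesText()
clf.train([
    ("the receiving party shall keep all confidential information secret",
     "confidentiality"),
    ("confidential information may not be disclosed to third parties",
     "confidentiality"),
    ("payment of fees is due within thirty days of the invoice date",
     "payment"),
    ("late payment accrues interest at two percent per month", "payment"),
])
```

The statistical core — counting word frequencies per label and picking the most probable label — is the same idea that underlies far more capable production systems.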

 

 

THE EMERGENCE OF AI IN LEGAL PRACTICE

 

The term “person” is derived from the Latin persona, meaning one recognized by law as capable of having legal rights and being bound by legal duties. Artificial Intelligence (AI) is disrupting almost every industry and profession, some faster and more profoundly than others. Unlike the Industrial Revolution, which automated physical labor and replaced muscles with hydraulic pistons and diesel engines, the AI-powered revolution is automating mental tasks. AI challenges the law because of its implications and the wide spectrum of its applications. The law covers and protects only the interests of humans. A classic example of perfect liberty is one where no one has any exclusive right to prevent the occurrence of a given act. Establishing an ethical relationship between the machine, or AI, and the human being is the sole responsibility of the inventor. Humans act on values set by law or by society; machines are not designed to learn human values. Recently, attempts have been made to create such machines, but results are yet to come. The legal realm is presently undergoing a notable transformation, primarily attributable to the escalating integration of artificial intelligence. This upsurge in AI-based legal software is no ephemeral trend; it marks a seminal shift fundamentally altering the operational landscape of the legal industry. This discussion examines the ramifications of artificial intelligence for the legal sector, explaining how the proliferation of AI legal software is transforming legal methodologies and what this evolution means for the profession at large. The global legal AI market size was estimated at USD 1.04 billion in 2022 and is expected to grow at a compound annual growth rate (CAGR) of 18.2% from 2023 to 2030.
The market’s growth can be attributed to an upsurge in demand for automation in legal applications such as eDiscovery, case prediction, regulatory compliance, and contract review and management, among others. Law firms and legal departments face the challenge of managing large volumes of data and documents. The emergence of artificial intelligence (AI)-based approaches has helped legal firms, legal departments, and governments streamline tasks such as contract review, legal research, due diligence, and document analysis. AI can generate content as well as analyze it. Unlike the AI used to power self-driving cars, where mistakes can have fatal consequences, generative AI does not have to be perfect every time. The unexpected and unusual artifacts associated with AI-created works are part of what makes the technology interesting. AI approaches the creative process in a fundamentally different way than humans do, so the path taken, or the result, can sometimes be surprising. This aspect of AI is called “emergent behavior.” Emergent behavior may lead to new strategies for winning games, the discovery of new drugs, or simply novel ways of expressing ideas. For AI to draft legal contracts, for example, it will need to be trained to be a competent lawyer. This requires that the creator of the AI collect legal performance data on various versions of contract language, a process called “labeling.” This labeled data is then used to train the AI to generate a good contract. However, the legal performance of a contract is often context-specific, not to mention varying by jurisdiction and an ever-changing body of law. Moreover, most contracts are never seen in a courtroom, so their provisions remain untested and private to the parties. Generative AI systems trained on contracts therefore run the risk of amplifying bad legal work as much as good.
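To make the “labeling” step above concrete, here is a minimal sketch of what a labeled contract-clause dataset might look like. The field names, jurisdictions, and outcome labels are hypothetical, invented to illustrate why such data must be filtered by context before training.

```python
from dataclasses import dataclass

@dataclass
class LabeledClause:
    # Hypothetical schema for one labeled training example.
    text: str          # the contract language itself
    jurisdiction: str  # legal performance varies by jurisdiction
    outcome: str       # e.g. "enforced" or "struck_down" (invented labels)

def training_set(corpus, jurisdiction):
    """Keep only clauses labeled under the target jurisdiction, since a
    clause enforced in one forum may well fail in another."""
    return [c for c in corpus if c.jurisdiction == jurisdiction]

# Invented examples: a non-compete may fare differently across states.
corpus = [
    LabeledClause("Employee shall not compete for five years.", "CA", "struck_down"),
    LabeledClause("Employee shall not compete for one year.", "NY", "enforced"),
    LabeledClause("Liability is capped at fees paid.", "NY", "enforced"),
]
```

The filtering step reflects the point in the text: a model trained indiscriminately across jurisdictions risks learning contradictory lessons from clauses whose legal fate depended on where they were litigated.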

 

 

OPPORTUNITIES AND OBSTACLES AHEAD

 

We all know that a lawyer’s time is valuable and expensive. Identifying methods to conserve time within the legal sector, while ensuring accuracy and adherence to regulations, is imperative. This optimization serves the interests not only of attorneys but also of clients. Legal firms and in-house legal teams can scrutinize current processes to pinpoint tasks prone to consuming excessive time, susceptible to human error, or amenable to automation. By integrating AI solutions, these entities can expedite essential aspects of legal services, such as thorough document examination, meticulous proofreading, and in-depth legal research. In turn, attorneys will have more time to focus on advising and counseling clients. Mundane tasks are effectively eliminated from the daily rotation: AI can streamline and quickly perform them, saving attorneys significant research, drafting, and review time and allowing them to devote more attention to the complex, higher-order strategizing and case-analysis work that requires their expertise, such as client engagement and proactive counseling. In effect, much of the administrative work will be done for the user. Artificial intelligence tools can process massive data sets with high precision while recognizing patterns in the relationships between words or data points to identify key information and find mistakes or inconsistencies. They can search contracts and other legal documents, find relevant information, and complete these manual tasks almost instantly. Not only does this save time and diminish painstaking routine work, it also helps humans avoid error and burnout. As corporate legal teams navigate AI in the workplace, they will continue to discover the powerful ways it can assist them with both daily and long-term tasks.
The benefits range from drastically improved efficiency, mitigated risk, and better-assured compliance to reduced stress and better client service. At the same time, using AI software often necessitates uploading sensitive client documents and/or information onto the cloud or web. Storing sensitive legal data in AI systems raises concerns about data security and privacy, and it can pose risks to client confidentiality if the platform’s security measures are penetrated. Overreliance on AI might also lead legal professionals to lose their core legal research and writing skills. This may affect their competency in situations where AI tools are unavailable or unsuitable, and it can weaken an attorney’s ability, and willingness, to work some things out without relying on a machine. After all, AI is software, and as with all technology, glitches can occur. Interruptions due to software bugs, network issues, or cyberattacks can disrupt workflow and potentially compromise client data. It is best to keep our skills sharp and never place too much dependence on any AI software, or on technology in general. AI systems generally rely on large amounts of data to learn and make predictions. Such data may include sensitive information, such as personal or financial data, and AI algorithms that require this type of data to train effectively may make it harder for organizations to comply with data protection laws. AI systems, unlike trained attorneys, do not have to acquire a license to practice law and therefore are not subject to ethical standards and professional codes of conduct. If an AI system provides inaccurate or misleading legal advice, who is responsible and accountable: the developer or the user? The use of AI in the judiciary also poses a problem even if judges retain ultimate decision-making authority, because it is not uncommon to become overly reliant on technology-based recommendations, a phenomenon known as automation bias.
In June 2023, a U.S. judge imposed sanctions on two New York lawyers who submitted a legal brief that included six fictitious case citations generated by an artificial intelligence chatbot, ChatGPT. U.S. District Judge P. Kevin Castel in Manhattan ordered lawyers Steven Schwartz, Peter LoDuca, and their law firm Levidow, Levidow & Oberman to pay a $5,000 fine in total. The judge found the lawyers acted in bad faith and made “acts of conscious avoidance and false and misleading statements to the court.” Levidow, Levidow & Oberman said in a statement that its lawyers “respectfully” disagreed with the court that they acted in bad faith. “We made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” the firm’s statement said. Lawyers for Schwartz said he declined to comment. LoDuca did not immediately reply to a request for comment, and his lawyer said they were reviewing the decision.

 

 

INFLUENCE ON CONTRACT LAW

 

Artificial Intelligence (AI) is revolutionizing numerous sectors, and contract law is no exception. The integration of AI into contract law promises to enhance efficiency, accuracy, and accessibility in the drafting, analysis, and management of legal agreements. By leveraging sophisticated algorithms and machine learning, AI systems can streamline contract creation, identify potential legal risks, and ensure compliance with relevant regulations. These advancements reduce the time and cost associated with traditional legal processes and minimize human error, providing more reliable outcomes. As AI continues to evolve, it raises important questions about its ethical, legal, and practical implications for contract law, challenging legal professionals to adapt and innovate in response to this transformative technology. AI contracting software helps firms keep terms and usage consistent across all of their contracts. For example, if a company wants to define the term “confidential information” in a specific way in its non-disclosure agreements (NDAs), it must make sure that all of its divisions are on board with this definition, and that changes to the definition are incorporated quickly and accurately, because variation could prove damaging to the company. AI contracting software can keep this term consistent across the firm’s templates, and it can spot other terms that signal “confidential information” in NDAs from business partners. Many assume that AI taking a central role in the field of law lies far in the future, but that future is not far off; as the saying goes, the future is now. In a world first, artificial intelligence demonstrated the ability to negotiate a contract autonomously with another artificial intelligence, without any human involvement. British AI firm Luminance developed an AI system based on its own proprietary large language model (LLM) to analyze and make changes to contracts automatically.
LLMs are a type of AI algorithm that can achieve general-purpose language processing and generation.
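The consistency check described above can be sketched with a simple heuristic: extract the definition of a term from each template and flag any document that defines it differently. Real contract-analysis tools use trained NLP models rather than a regular expression; the pattern, document names, and definitions below are assumptions made for illustration.

```python
import re

def extract_definition(doc, term="Confidential Information"):
    # Simplified heuristic: look for the drafting pattern '"Term" means ...'
    match = re.search(rf'"{term}"\s+means\s+([^.]+)\.', doc)
    return match.group(1).strip() if match else None

def find_inconsistencies(docs, term="Confidential Information"):
    """Return names of documents whose definition of `term` differs
    from the first definition found (treated as canonical)."""
    definitions = {name: extract_definition(text, term)
                   for name, text in docs.items()}
    canonical = next((d for d in definitions.values() if d is not None), None)
    return [name for name, d in definitions.items()
            if d is not None and d != canonical]

docs = {  # hypothetical NDA templates
    "nda_a": '"Confidential Information" means all non-public technical data.',
    "nda_b": '"Confidential Information" means all non-public technical data.',
    "nda_c": '"Confidential Information" means any information marked secret.',
}
```

Running `find_inconsistencies(docs)` on the invented templates above would flag `nda_c` as the outlier — the kind of divergence that, at firm scale, could expose a company to the damaging variation the text describes.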

 

 

COMPLEXITIES OF AUTOMATED DECISION-MAKING

 

Automated decision-making systems, driven by sophisticated AI algorithms, are revolutionizing how decisions are made across various domains, from finance and healthcare to criminal justice and beyond. However, the adoption of these systems brings significant complexities that must be addressed. These include issues of transparency, as the decision-making processes of AI can be opaque and difficult to understand; potential biases, where AI systems may inadvertently perpetuate or even exacerbate existing prejudices; and accountability, raising questions about who is responsible for decisions made by machines. Automated decision-making carries many social, ethical, and legal implications. Concerns raised include the lack of transparency and contestability of decisions; incursions on privacy and surveillance; the exacerbation of systemic bias and inequality due to data and algorithmic bias; intellectual property rights; the spread of misinformation via media platforms; administrative discrimination; risk and responsibility; unemployment; and many others. As automated decision-making becomes more ubiquitous, there is a greater need to address these ethical challenges to ensure good governance in information societies. Part of bolstering procedural capacities is ensuring there are means of gathering and scrutinizing the necessary information about automated decision-making systems. Evidence must be gathered and verified before applying the law to a given situation. This knowledge accretion is crucial to understanding how particular automated decision-making systems operate, what measures exist to mitigate or eliminate possible harms, and what can reasonably be expected when they are deployed. This is why requiring thorough and robust risk and impact assessments for automated decision-making is vital.
The current draft of the EU AI Act (European Union Artificial Intelligence Act) stipulates that providers and deployers should conduct risk assessments and human rights impact assessments for systems considered to be ‘high risk’.

 

 

FUTURE OF AI IN THE FIELD OF LAW

 

The future of AI in the field of law holds transformative potential, poised to revolutionize legal practice through increased efficiency, accuracy, and accessibility. AI systems can rapidly process vast amounts of legal data, enhancing the thoroughness of legal research and aiding in stronger case building. Automated tools for drafting and reviewing contracts ensure consistency and reduce human error, while predictive analytics help forecast legal outcomes, allowing for more effective strategies. Additionally, AI can democratize access to legal services, providing basic legal guidance to individuals and small businesses. Routine tasks like document management and client communication can be streamlined, freeing legal professionals to focus on complex activities. However, the integration of AI also brings ethical and regulatory challenges, necessitating transparency, bias mitigation, and accountability to maintain public trust and the integrity of the legal system. As technology advances, collaboration between legal professionals and AI developers will be crucial in harnessing these innovations to improve the delivery of justice and legal services. Predictive analytics can forecast legal outcomes, enabling litigants and attorneys to make well-informed decisions, and online dispute resolution platforms provide a viable substitute for conventional litigation, enhancing the efficiency and accessibility of dispute resolution. Nevertheless, as AI gains prominence in law, we must exercise caution and remain vigilant regarding the obstacles facing this revolutionary technology. The first such obstacle is fairness and bias in AI systems. Machine learning algorithms are trained on historical data, which may contain societal biases. If these prejudices are not addressed, AI could perpetuate and even exacerbate preexisting discrimination in the legal system.
AI development must prioritize fairness and equity, necessitating ongoing vigilance to identify and address bias in AI applications. Secondly, the opaqueness of AI decision-making processes raises concerns about transparency and accountability. In legal contexts, individuals have a right to be informed of the reasoning behind decisions made by AI systems. Legal experts and technologists must collaborate to guarantee the transparency of AI systems and to allow individuals to contest AI-generated outcomes. However, AI systems face an accuracy-transparency dilemma: higher accuracy generally requires a more complex model, and the more complex the model, the harder its reasoning is to interpret. Professionals are mostly bullish on how AI can impact their daily tasks. A Thomson Reuters report underscores the significant impact of AI on the legal profession. According to the report, more than 67% of legal professionals predict that the emergence of AI and generative AI will bring about either a transformational or a high-impact change in their field within the next five years. This figure surpasses the second-most cited factor, a potential economic recession and the accompanying cost-of-living crisis (53%), by a considerable 14 percentage points. Such statistics highlight the expected profound influence of AI on the legal industry, emphasizing the need for legal professionals to adapt to the changing technological landscape. According to the report, the top priorities for law firms, such as productivity (75%), internal efficiency (50%), and recruitment and retention (44%), align closely with the areas where AI can have the most impact.
This alignment suggests that AI solutions are not only desirable but necessary for addressing the core challenges facing law firms today.

 

 

CONCLUSION

 

While it’s clear that AI is transforming the job landscape, the question of whether it is creating new jobs remains contentious. In many sectors, the answer appears to be no. Take the field of law, for example. AI can certainly assist with research, streamline processes, and increase efficiency. However, representing clients in court is a domain that AI will likely never fully penetrate. The nuances of courtroom advocacy, the ability to empathize, and the human judgment required in such situations are beyond the current capabilities of AI. Moreover, concerns about AI handling confidential information are valid.

Until AI can guarantee the utmost confidentiality and security, reliance on it should be cautious. It’s crucial to remember that while AI can support legal professionals, the responsibility of presenting a case that can significantly impact someone’s life should not be entirely entrusted to AI at this stage. As technology evolves, our dependence on AI will grow, but it will always hinge on AI’s ability to understand and interpret human behavior accurately. For now, the irreplaceable human elements of empathy, intuition, and moral judgment remain paramount in the legal profession.