Part 2 of 3
This is the second installment in a three-part series on the role of Artificial Intelligence in Online Dispute Resolution. Please join the conversation and comment below.
ODR and AI
The subfield of Artificial Intelligence (AI) in Online Dispute Resolution has been progressing rapidly. There is no doubt that it will have a broad impact.
Law and AI are particularly well suited to work together because both rely on set rules and guidelines for determining how a decision is made and how a problem is solved. Both use “semi-formal modeling”: in law, this takes “the form of binding precedents and statutory rules,” whereas AI uses “logical representations[.]” These logical representations are built with different types of programming tools, in particular, machine learning. Machine learning is a means by which computers acquire knowledge from data, often by attempting to mimic the brain’s neural networks. This branch of AI is particularly exciting because, as the technology progresses, a computer will be able to learn to solve problems, including legal problems. Interestingly, recent research suggests that AI could be used to determine whether a statement is true or false and could help robots replace jurors.
AI is already being used in ODR systems. As discussed in Part One: An Introduction to Online Dispute Resolution, Cybersettle is a simple rule-based reasoning tool that aids in conflict settlement. Other programs are tasked with “understanding a problem, generating a plan for its solution, evaluating feedback from disputants and recovering from reasoning failures.”
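Cybersettle’s internal rules are proprietary, but the double-blind bidding approach it popularized can be sketched in a few lines. In this hypothetical version, each party submits a confidential figure and the system settles at the midpoint whenever the defendant’s offer comes within a fixed percentage of the claimant’s demand. The 30% threshold, the midpoint rule, and the function name are illustrative assumptions, not Cybersettle’s actual parameters:

```python
def blind_bid_settle(demand, offer, threshold=0.30):
    """Hypothetical blind-bidding rule: neither side sees the other's figure.

    Settles at the midpoint if the offer meets the demand or falls within
    `threshold` (expressed as a fraction of the demand) of it.  Returns the
    settlement amount, or None if the bids are too far apart to settle.
    """
    if offer >= demand or (demand - offer) / demand <= threshold:
        return round((demand + offer) / 2, 2)
    return None
```

Because the figures stay hidden from the opposing party, neither side risks weakening its bargaining position by being the first to make a concession.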
More developed negotiation support systems, such as Smartsettle, use “bargaining ranges” developed from the parties’ “optimistic values.” Adjusted Winner uses an algorithm for “divid[ing] n divisible goods between two parties as fairly as possible.” Both systems employ game theory to reach “fair solutions.”
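The full Adjusted Winner procedure (due to Brams and Taylor) has an equitability phase that transfers goods, fractionally if necessary, from the points leader to the other party until their totals are equal. The two-phase idea can be sketched as follows, assuming each party spreads 100 points across the goods; this is a simplified illustration, not the exact published procedure:

```python
def adjusted_winner(a_vals, b_vals):
    """Simplified Adjusted Winner sketch (illustrative only).

    `a_vals` and `b_vals` map each good to the points a party assigns it,
    each party's points summing to 100.  Returns a dict mapping each good
    to the fraction of it awarded to party A.
    """
    # Phase 1: award each good to whoever values it more (ties go to A).
    share_a = {g: 1.0 if a_vals[g] >= b_vals[g] else 0.0 for g in a_vals}

    def totals():
        ta = sum(a_vals[g] * share_a[g] for g in a_vals)
        tb = sum(b_vals[g] * (1 - share_a[g]) for g in a_vals)
        return ta, tb

    # Phase 2: transfer goods (fractionally if needed) from the points
    # leader, starting with the good whose valuation ratio is closest to 1.
    ta, tb = totals()
    while abs(ta - tb) > 1e-6:
        if ta > tb:
            g = min((g for g in a_vals if share_a[g] > 0),
                    key=lambda g: a_vals[g] / max(b_vals[g], 1e-9))
            # Move a fraction f of good g so the totals meet in the middle.
            f = min(share_a[g], (ta - tb) / (a_vals[g] + b_vals[g]))
            share_a[g] -= f
        else:
            g = min((g for g in a_vals if share_a[g] < 1),
                    key=lambda g: b_vals[g] / max(a_vals[g], 1e-9))
            f = min(1 - share_a[g], (tb - ta) / (a_vals[g] + b_vals[g]))
            share_a[g] += f
        ta, tb = totals()
    return share_a
```

Transferring first the good whose valuation ratio is closest to 1 equalizes the totals while giving up as little combined value as possible, which is where the procedure’s claim to a “fair solution” comes from.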
A model used in divorce is Family_Winner, which “takes a common pool of items and distributes them between two parties based on the value of associated ratings. . . . [The] ratings sum to 100; thereby forcing parties to set priorities.”
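Family_Winner’s published description also involves trade-off maneuvers that rescale a losing party’s remaining ratings after each allocation. Stripped of that re-weighting, its core loop can be sketched as follows (a simplified illustration, not the system’s actual code):

```python
def family_winner(a_ratings, b_ratings):
    """Simplified Family_Winner-style allocation (illustrative only).

    Each party distributes 100 rating points across the disputed items.
    Items are allocated one at a time: the item with the largest rating
    gap goes to the party that rates it higher.  The real system also
    rescales the losing party's remaining ratings after each round,
    which is omitted here.
    """
    assert sum(a_ratings.values()) == 100 and sum(b_ratings.values()) == 100
    allocation = {}
    remaining = set(a_ratings)
    while remaining:
        # Pick the item the two parties disagree about most.
        item = max(remaining, key=lambda i: abs(a_ratings[i] - b_ratings[i]))
        allocation[item] = "A" if a_ratings[item] >= b_ratings[item] else "B"
        remaining.remove(item)
    return allocation
```

Handling the largest rating gaps first gives each party the items it values most relative to the other side, which is how forcing the ratings to sum to 100 makes the parties’ priorities matter.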
Justice and AI
One issue common to each system is justice. Fair results may not necessarily be just.
For example, Family_Winner has been criticized for accounting only for the divorcing parties’ preferences and not those of dependents such as children.
Even though humans can, and do, get legal decisions wrong, society perhaps still would rather apply human judgment than give full control to a machine. Many consider justice to be a uniquely human ideal, one that may be difficult to impart to AI, no matter how sophisticated the programming is.
Security and AI
Another major concern with using AI in ODR is security. Disputants want to be certain that their dispute and related documents remain private, which is a key advantage of choosing arbitration. Some may be concerned that a system run solely over the internet could be hacked with the intention of altering outcomes. Other concerns center on documents being intercepted or exposed on servers and databases.
However, the real concern should not be with the programs or documents, but rather with the human element involved in administering an arbitration provider’s program. If used correctly, current “security measures are essentially unbreakable.” When the employees of an arbitration provider follow protocols, the arbitration itself can reasonably be considered secure.
These protocols include, among other things:
- Using private databases
- Requiring strong passwords
- Requiring multiple forms of access (for example, two-factor authentication)
- Using quality software
People and AI
Public opinion may be the biggest barrier to implementing a completely automated arbitration system. People do not trust computers in the same way they trust human judgment, especially in areas traditionally considered to involve expert and human evaluation.
For AI to become useful in ODR, it will have to be adopted by clients and trusted by those who use the system. These misguided perceptions must be overcome before AI sees widespread use in resolving disputes.
One means for accomplishing this is using a human facilitator along with AI programming. A person would be the intermediary between the disputants and the program. This facilitator would not only help input information into the program and help interpret the results the program developed, but also aid in providing a human element in the dispute resolution process, which may be both needed and desired.
The determination of how an AI program models the problems to be solved (and how narrow or broad the parameters are) involves a form of logical modeling similar to the study of law.
These logical modeling similarities include:
- How the legal issue is framed
- How the legal problem is solved
- How the decision is reached
Although this may be the most difficult aspect of implementing AI for ODR, some forms of machine learning such as neural networks—discussed in Part 3—do not encounter this problem as much as others.
Interconnected with system programming is the issue of defining how an AI system minimizes bias. Although human bias will not be an issue, informational, institutional, and programming bias may affect outcomes, and great care must be taken to ensure the process remains as neutral as possible.
Effective communication is a crucial aspect of implementing AI. Users must be able to communicate not only with one another, but also with the program and the providers of the AI ODR system. To build trust in the program, users need to understand the process and know what to expect from it. This requires a great deal of care on the part of the program provider. Communicating the program’s effectiveness to the public at large is also important to maintain a perception of fairness, as well as to develop a clientele.
Finally, one of the greatest challenges for AI use in ODR is determining the appropriate extent of its use. A program may be effective, yet individuals may still prefer that an arbitrator use it as a tool rather than allow it to resolve a dispute independently. Ultimately, this problem will most likely be resolved as a function of technological growth and client needs.
Stay Tuned for Part 3 on Digital Disagreements where we discuss how Machine Learning modeled after the brain’s neural networks affects ODR!
*Grant is a J.D. and Master of Public Affairs candidate at the University of Texas. He will graduate in 2014. In addition to law, Grant enjoys hiking, soccer, and watching Law & Order.