Semantic matching based legal information retrieval system for COVID-19 pandemic
The COVID-19 pandemic has recently spread severely across the entire world. The prevention and control of crimes associated with COVID-19 are critical for controlling the pandemic. Therefore, to provide efficient and convenient intelligent legal knowledge services during the pandemic, we develop an intelligent system for legal information retrieval on the WeChat platform. The data source used for training our system is "The typical cases of national procuratorial authorities handling crimes against the prevention and control of the new coronary pneumonia pandemic following the law", published online by the Supreme People's Procuratorate of the People's Republic of China. We base our system on a convolutional neural network and use a semantic matching mechanism to capture inter-sentence relationship information and make predictions. Moreover, we introduce an auxiliary learning process to help the network better distinguish the relation between two sentences. Finally, the system uses the trained model to interpret the information entered by a user and responds with a reference case similar to the query case, together with the reference legal gist applicable to it.
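The retrieval step described above can be illustrated with a minimal sketch. The paper uses a CNN-based semantic matching model; here a plain bag-of-words cosine similarity stands in for the learned encoder, and the function names are illustrative, not the system's actual API:

```python
from collections import Counter
import math

def vectorize(sentence):
    """Bag-of-words term counts as a stand-in for a CNN sentence encoding."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, case_base):
    """Return the reference case most similar to the user's query."""
    qv = vectorize(query)
    return max(case_base, key=lambda case: cosine(qv, vectorize(case)))
```

In the deployed system, a trained neural encoder would replace `vectorize`, but the overall query-against-case-base matching loop has this shape.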
The use of AI in legal systems: determining independent contractor vs. employee status
The use of artificial intelligence (AI) to aid legal decision making has become prominent. This paper investigates the use of AI in a critical issue in employment law, the determination of a worker's status (employee vs. independent contractor) in two common law countries (the U.S. and Canada). This legal question has been a contentious labor issue insofar as independent contractors are not eligible for the same benefits as employees. It has become an important societal issue due to the ubiquity of the gig economy and the recent disruptions in employment arrangements. To address this problem, we collected, annotated, and structured the data for all Canadian and Californian court cases related to this legal question between 2002 and 2021, resulting in 538 Canadian cases and 217 U.S. cases. In contrast to legal literature focusing on complex and correlated characteristics of the employment relationship, our statistical analyses of the data show very strong correlations between the worker's status and a small subset of quantifiable characteristics of the employment relationship. In fact, despite the variety of situations in the case law, we show that simple, off-the-shelf AI models classify the cases with an out-of-sample accuracy of more than 90%. Interestingly, the analysis of misclassified cases reveals consistent misclassification patterns by most algorithms. Legal analyses of these cases led us to identify how equity is ensured by judges in ambiguous situations. Finally, our findings have practical implications for access to legal advice and justice. We deployed our AI model via the open-access platform, https://MyOpenCourt.org/, to help users answer employment legal questions. This platform has already assisted many Canadian users, and we hope it will help democratize access to legal advice for large crowds.
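A classifier over a small set of quantifiable characteristics, as the abstract describes, can be as simple as a weighted decision rule. The sketch below is hypothetical: the feature names and weights are illustrative and are not the paper's actual model or data:

```python
def classify_worker(case):
    """
    Toy decision rule over quantifiable features of the employment
    relationship (feature names and weights are illustrative only).
    Returns "employee" or "independent contractor".
    """
    score = 0
    score += 2 if case["employer_controls_schedule"] else -2
    score += 1 if case["tools_provided_by_employer"] else -1
    score += 1 if not case["worker_bears_financial_risk"] else -1
    return "employee" if score > 0 else "independent contractor"
```

The paper's point is that even simple, interpretable models of this kind reach over 90% out-of-sample accuracy on the annotated case law.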
Algorithms in the court: does it matter which part of the judicial decision-making is automated?
Artificial intelligence plays an increasingly important role in legal disputes, influencing not only the reality outside the court but also the judicial decision-making process itself. While it is clear why judges may generally benefit from technology as a tool for reducing effort costs or increasing accuracy, the presence of technology in the judicial process may also affect the public perception of the courts. In particular, if individuals are averse to adjudication that involves a high degree of automation, particularly given fairness concerns, then judicial technology may yield lower benefits than expected. However, the degree of aversion may well depend on how technology is used, i.e., on the timing and strength of judicial reliance on algorithms. Using an exploratory survey, we investigate whether the stage in which judges turn to algorithms for assistance matters for individual beliefs about the fairness of case outcomes. Specifically, we elicit beliefs about the use of algorithms in four different stages of adjudication: (i) information acquisition, (ii) information analysis, (iii) decision selection, and (iv) decision implementation. Our analysis indicates that individuals generally perceive the use of algorithms as fairer in the information acquisition stage than in other stages. However, individuals in the legal profession also perceive automation in the decision implementation stage as less fair compared to other individuals. Our findings, hence, suggest that individuals do care about how and when algorithms are used in the courts.
Legal document assembly system for introducing law students to legal drafting
In this paper, we present a method for introducing law students to the writing of legal documents. The method uses a machine-readable representation of the legal knowledge to support document assembly and to help students understand how the assembly is performed. The knowledge base consists of enacted legislation, document templates, and assembly instructions. We propose a system called LEDAS (LEgal Document Assembly System) for the interactive assembly of legal documents. It guides users through the assembly process and provides explanations of the interconnection between input data and claims stated in the document. The system acts as a platform for practicing drafting skills and has great potential as an educational tool. It allows teachers to configure the system for the assembly of a particular type of legal document and then enables students to draft the documents by investigating which information is relevant for these documents and how the input data shape the final document. The generated legal document is complemented by a graphical representation of legal arguments expressed in the document. The system is based on existing legal standards to facilitate its introduction in the legal domain. The applicability of the system in the education of future lawyers was positively evaluated by a group of law students and their teaching assistant.
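The core of a document assembly pipeline like the one described, a template plus assembly instructions that validate and shape the input data, can be sketched in a few lines. The template text and field names below are invented for illustration and are not LEDAS's actual knowledge base format:

```python
from string import Template

# A machine-readable template; the wording and fields are illustrative only.
TEMPLATE = Template(
    "CLAIM\n"
    "The claimant, $claimant, requests payment of $amount EUR "
    "from the respondent, $respondent, pursuant to $statute."
)

# Assembly instructions, reduced here to a required-fields check.
REQUIRED_FIELDS = ("claimant", "respondent", "amount", "statute")

def assemble(data):
    """Validate the input data against the instructions, then fill the template."""
    missing = [f for f in REQUIRED_FIELDS if f not in data]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return TEMPLATE.substitute(data)
```

A real system layers interactive guidance and explanations over this fill-and-validate loop, so students can see exactly how each input datum shapes the final document.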
Judicial analytics and the great transformation of American Law
Predictive judicial analytics holds the promise of increasing the efficiency and fairness of law. Judicial analytics can assess extra-legal factors that influence decisions. Behavioral anomalies in judicial decision-making offer an intuitive understanding of feature relevance, which can then be used for debiasing the law. A conceptual distinction between inter-judge disparities in predictions and inter-judge disparities in prediction accuracy suggests another normatively relevant criterion with regard to fairness. Predictive analytics can also be used in the first step of causal inference, where the features employed in the first step are exogenous to the case. Machine learning thus offers an approach to assess bias in the law and evaluate theories about the potential consequences of legal change.
Modelling competing legal arguments using Bayesian model comparison and averaging
Bayesian models of legal arguments generally aim to produce a single integrated model, combining each of the legal arguments under consideration. This combined approach implicitly assumes that variables and their relationships can be represented without any contradiction or misalignment, and in a way that makes sense with respect to the competing argument narratives. This paper describes a novel approach to compare and 'average' Bayesian models of legal arguments that have been built independently and with no attempt to make them consistent in terms of variables, causal assumptions or parameterization. The approach involves assessing how well competing models of legal arguments explain or predict facts uncovered before or during the trial process. Models that are more heavily disconfirmed by the facts are assigned lower weight, as a measure of model plausibility, in the Bayesian model comparison and averaging framework adopted. In this way a plurality of arguments is allowed, yet a single judgement based on all arguments remains possible and rational.
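The weighting-and-averaging step can be sketched directly from Bayes' rule: each competing argument model gets a posterior plausibility proportional to the likelihood it assigns to the observed facts, and predictions are then averaged under those weights. This is a generic Bayesian model averaging sketch, not the paper's specific implementation:

```python
def model_weights(likelihoods, priors=None):
    """
    Posterior plausibility of each competing argument model, given the
    likelihood each model assigns to the facts uncovered so far.
    Models heavily disconfirmed by the facts receive low weight.
    """
    n = len(likelihoods)
    priors = priors or [1.0 / n] * n
    joint = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(joint)
    return [j / total for j in joint]

def averaged_prediction(predictions, weights):
    """Model-averaged probability of the ultimate hypothesis (e.g. guilt)."""
    return sum(p * w for p, w in zip(predictions, weights))
```

With equal priors, a prosecution model that explains the facts well (likelihood 0.8) dominates a defence model that does not (likelihood 0.2), yet both still contribute to the single averaged judgement.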
The winter, the summer and the summer dream of artificial intelligence in law: Presidential address to the 18th International Conference on Artificial Intelligence and Law
This paper reflects my address as IAAIL president at ICAIL 2021. It aims to give my vision of the status of the AI and Law discipline, and of possible future perspectives. In this respect, I go through the different seasons of AI research (and of AI and Law in particular): from the Winter of AI, a period of mistrust in AI (from the eighties until the early nineties), to the Summer of AI, the current period of great interest in the discipline, with high expectations. One of the results of the first decades of AI research is that "intelligence requires knowledge". Since its inception the Web has proved to be an extraordinary vehicle for knowledge creation and sharing, so it is no surprise that the evolution of AI has followed the evolution of the Web. I argue that a bottom-up approach, in terms of machine/deep learning and NLP to extract knowledge from raw data, combined with a top-down approach, in terms of legal knowledge representation and models for legal reasoning and argumentation, may foster the development of the Semantic Web, as well as of AI systems. Finally, I give my insight into the potential of AI development, taking into account both technological opportunities and theoretical limits.
Perceptions of Justice By Algorithms
Artificial Intelligence and algorithms are increasingly able to replace human workers in cognitively sophisticated tasks, including ones related to justice. Many governments and international organizations are discussing policies related to the application of algorithmic judges in courts. In this paper, we investigate the public perceptions of algorithmic judges. Across two experiments (N = 1,822), and an internal meta-analysis (N = 3,039), our results show that even though court users acknowledge several advantages of algorithms (i.e., cost and speed), they trust human judges more and have greater intentions to go to the court when a human (vs. an algorithmic) judge adjudicates. Additionally, we demonstrate that the extent to which individuals trust algorithmic and human judges depends on the nature of the case: trust for algorithmic judges is especially low when legal cases involve emotional complexities (vs. technically complex or uncomplicated cases).
Towards a simple mathematical model for the legal concept of balancing of interests
We propose simple nonlinear mathematical models for the legal concept of balancing of interests. Our aim is to bridge the gap between an abstract formalisation of a balancing decision and the need to assure consistency, and ultimately legal certainty, across cases. We focus on the conflict between the rights to privacy and to the protection of personal data in Art. 7 and Art. 8 of the EU Charter of Fundamental Rights (EUCh) against the right of access to information derived from Art. 11 EUCh. Each of the two competing rights is assigned a numerical index, and the two indices are constrained so that a single index suffices to resolve the conflict through balancing. The outcome is determined by comparing this index with a previously given threshold. For simplicity, we assume that the balancing depends only on selected legal criteria, such as the social status of the affected person and the sphere from which the information originated, which are represented as inputs of the models, called legal parameters. Additionally, we take time into consideration as a legal criterion, building on the European Court of Justice's ruling on the right to be forgotten: by considering time as a legal parameter, we model how the outcome of the balancing changes with the passage of time. To capture the dependence of the outcome on these criteria as legal parameters, data were created by a fully qualified lawyer. Compared to other approaches based on machine learning, especially neural networks, this approach requires significantly less data. This may come at the price of higher abstraction and simplification, but it also provides higher transparency and explainability. Two mathematical models of the index, a time-independent model and a time-dependent model, are proposed and fitted using the data.
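A model of this shape, a nonlinear index over legal parameters compared against a threshold, with time decay for the right-to-be-forgotten dynamics, can be sketched as follows. The functional form, weights, and half-life below are purely illustrative assumptions, not the paper's fitted models:

```python
import math

def balancing_index(criteria_weights, criteria_values, elapsed_years, half_life=5.0):
    """
    Illustrative nonlinear balancing index in [0, 1]: a logistic function of
    weighted legal criteria (social status, information sphere, ...), with
    the access-to-information side decaying over time as a stand-in for
    right-to-be-forgotten dynamics. All constants are hypothetical.
    """
    s = sum(w * v for w, v in zip(criteria_weights, criteria_values))
    decay = 0.5 ** (elapsed_years / half_life)   # time as a legal parameter
    return decay / (1.0 + math.exp(-s))

def outcome(index, threshold=0.5):
    """Balance in favour of access if the index exceeds the threshold."""
    return "access prevails" if index > threshold else "privacy prevails"
```

Under these made-up parameters, a case that favours access shortly after publication tips toward privacy once enough time has passed, which is the qualitative behaviour the time-dependent model is meant to capture.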
A Bayesian model of legal syllogistic reasoning
Bayesian approaches to legal reasoning propose causal models of the relation between evidence, the credibility of evidence, and ultimate hypotheses, or verdicts. They assume that legal reasoning is the process whereby one infers the posterior probability of a verdict based on observed evidence, or facts. In practice, legal reasoning does not operate quite that way. Legal reasoning is also an attempt at inferring applicable rules derived from legal precedents or statutes based on the facts at hand. To make such an inference, legal reasoning follows syllogistic logic and first order transitivity. This paper proposes a Bayesian model of legal syllogistic reasoning that complements existing Bayesian models of legal reasoning using a Bayesian network whose variables correspond to legal precedents, statutes, and facts. We suggest that legal reasoning should be modelled as a process of finding the posterior probability of precedents and statutes based on available facts.
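The inference the abstract proposes, a posterior over candidate precedents or statutes given the facts, then a verdict via syllogistic chaining, can be sketched with plain Bayes' rule and marginalization. This is a generic illustration of the idea, not the paper's network:

```python
def posterior_rules(priors, fact_likelihoods):
    """Posterior probability that each precedent/statute applies, given the facts."""
    joint = {r: priors[r] * fact_likelihoods[r] for r in priors}
    z = sum(joint.values())
    return {r: v / z for r, v in joint.items()}

def verdict_probability(priors, fact_likelihoods, verdict_given_rule):
    """
    Syllogistic chaining (first order transitivity): facts -> applicable
    rule -> verdict, marginalizing over which rule applies.
    """
    post = posterior_rules(priors, fact_likelihoods)
    return sum(post[r] * verdict_given_rule[r] for r in post)
```

The rule names and numbers in any concrete run are placeholders; the point is the two-step structure, inferring the applicable rule from facts before inferring the verdict from the rule.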
Predicting citations in Dutch case law with natural language processing
With the ever-growing accessibility of case law online, it has become challenging to manually identify case law relevant to one's legal issue. In the Netherlands, the planned increase in the online publication of case law is expected to exacerbate this challenge. In this paper, we tried to predict whether court decisions are cited by other courts or not after being published, thus in a way distinguishing between more and less authoritative cases. This type of system may be used to process the large amounts of available data by filtering out large quantities of non-authoritative decisions, thus helping legal practitioners and scholars to find relevant decisions more easily, and drastically reducing the time spent on preparation and analysis. For the Dutch Supreme Court, the match between our prediction and the actual data was relatively strong (with a Matthews Correlation Coefficient of 0.60). Our results were less successful for the Council of State and the district courts (MCC scores of 0.26 and 0.17, respectively). We also attempted to identify the most informative characteristics of a decision. We found that a completely explainable model, consisting only of handcrafted metadata features, performs almost as well as a less explainable system based on all text of the decision.
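The Matthews Correlation Coefficient used as the evaluation metric above is computed from the binary confusion matrix; it ranges from -1 (total disagreement) through 0 (chance level) to 1 (perfect prediction), which is why the 0.60 vs. 0.26/0.17 gap between courts is meaningful:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # convention: 0 when any margin is empty
```

Unlike plain accuracy, MCC stays informative when cited and non-cited decisions are imbalanced, which is typical for citation data.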