
Legal AI FAQs


Recently published papers exploring legal language models.
How does generative AI, like GPT, contribute to the creation of legal tools for self-represented litigants?
GPT models can significantly speed up the creation of legal tools for self-represented litigants. Using approaches like iterative prompting, constrained template-driven drafting, and hybrid methods, these models can effectively automate the completion of court forms. The hybrid approach, which combines automated AI drafting with human review, was found particularly well suited to authoring guided legal interviews.

Citation: Steenhuis, Q., Colarusso, D., & Willey, B. (2023). Weaving Pathways for Justice with GPT: LLM-driven automated drafting of interactive legal applications. arXiv:2312.09198.
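The hybrid approach described above can be sketched as a short loop: an LLM drafts a plain-language interview question for each court-form field, and every draft is flagged for human review before it reaches a litigant. This is a minimal illustration, not the paper's implementation; `call_llm` is a placeholder for any chat-completion client, and the field and form names are hypothetical.

```python
# Sketch of hybrid AI drafting with mandatory human review.
# `call_llm` is a placeholder for any LLM client (assumption).

def build_field_prompt(field_name: str, form_title: str) -> str:
    """Compose a drafting prompt for one court-form field."""
    return (
        f"You are drafting a guided legal interview for the form '{form_title}'.\n"
        f"Write one plain-language question that collects the field "
        f"'{field_name}' from a self-represented litigant.\n"
        "Avoid legal jargon. Return only the question text."
    )

def draft_interview(fields, form_title, call_llm):
    """Draft one question per field; flag every draft for human review."""
    drafts = []
    for field in fields:
        question = call_llm(build_field_prompt(field, form_title))
        drafts.append({"field": field, "question": question, "reviewed": False})
    return drafts
```

The `reviewed` flag is the key design choice: no AI-drafted question is published until a human editor clears it.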
How can AI help in detecting unfair clauses in commercial contracts from a non-legal stakeholder's perspective?
Pre-trained language models like BERT can identify clauses in commercial contracts that may seem unfair to non-legal stakeholders. When fine-tuned on contract data, these models can analyze contracts for fairness with high accuracy, aiding in the creation of more equitable agreements.

Citation: Singhal, A., Anish, P. R., Karande, S., & Ghaisas, S. (2023). Towards Mitigating Perceived Unfairness in Contracts from a Non-Legal Stakeholder's Perspective. arXiv:2312.01398.
How do Large Language Models enhance the legal aid intake process?
Large Language Models can streamline the legal intake process, making it more efficient and cost-effective. They do this by eliciting and inferring clients' underlying intentions and specific legal circumstances, although it's crucial to ensure that clients provide all necessary context for their legal situations.

Citation: Goodson, N., & Lu, R. (2023). Intention and Context Elicitation with Large Language Models in the Legal Aid Intake Process. arXiv:2311.13281.
What are the capabilities and limitations of Large Language Models in legal judgment prediction?
Large Language Models show promise in legal judgment prediction, but their reliability varies with task complexity and methodology. For complex tasks, such as classifying legal reasoning, these models require fine-tuning and human annotation for better accuracy. Models like LEGAL-BERT, specifically fine-tuned on legal datasets, demonstrate stronger performance.

Citation: Thalken, R., Stiglitz, E. H., Mimno, D., & Wilkens, M. (2023). Modeling Legal Reasoning: LM Annotation at the Edge of Human Agreement. arXiv:2310.18440.
How effective is GPT-3.5-turbo in predicting rhetorical roles in legal cases?
GPT-3.5-turbo is effective at rhetorical role prediction in legal cases, using zero-shot and few-shot prompting with task-specific instructions. Its performance can exceed traditional supervised classifiers, highlighting its potential in legal text analysis, though it does not yet match the top-performing specialized systems.

Citation: Belfathi, A., Hernandez, N., & Monceaux, L. (2023). Harnessing GPT-3.5-turbo for Rhetorical Role Prediction in Legal Cases. arXiv:2310.17413.
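A few-shot prompt for rhetorical role prediction can be sketched as a small template: a task instruction, a handful of labeled example sentences, and the target sentence left for the model to label. The label set and wording below are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative few-shot prompt builder for rhetorical role labeling.
# ROLES and the instruction wording are assumptions for the sketch.

ROLES = ["Facts", "Argument", "Ruling", "Precedent"]

def few_shot_prompt(examples, sentence):
    """examples: list of (sentence, role) pairs used as demonstrations."""
    lines = [
        "Label each sentence from a legal case with one rhetorical role "
        f"from: {', '.join(ROLES)}."
    ]
    for text, role in examples:
        lines.append(f"Sentence: {text}\nRole: {role}")
    # Leave the final role blank for the model to complete.
    lines.append(f"Sentence: {sentence}\nRole:")
    return "\n\n".join(lines)
```

Sending the resulting string to a chat model and reading the single-word completion gives the predicted role.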
Do Language Models understand legal entity types during pretraining?
Language Models show an ability to acquire legal knowledge during pretraining, useful for entity typing tasks in legal NLP applications. They perform well on specific entities and show potential for further improvement with optimized prompting, despite some inconsistencies due to variations in training corpora.

Citation: Barale, C., Rovatsos, M., & Bhuta, N. (2023). Do Language Models Learn about Legal Entity Types during Pretraining? arXiv:2310.13092.
How can prompt-based methods improve legal case retrieval?
Prompt-based methods can enhance legal case retrieval by focusing on key legal features like legal facts and issues, rather than whole text matching. This approach effectively encodes and retrieves relevant cases by aligning key legal features, improving accuracy in legal case searches. Counsel Stack uses similar methods.

Citation: Tang, Y., Qiu, R., & Li, X. (2023). Prompt-based Effective Input Reformulation for Legal Case Retrieval. arXiv:2309.02962.
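The reformulation idea can be illustrated with a small helper: rather than embedding a whole case text, the retrieval input is rebuilt from only the key legal features. The field names and labels below are assumptions for the sketch, not the paper's schema.

```python
# Sketch of prompt-based input reformulation for legal case retrieval:
# keep only the retrieval-relevant features (facts and issues) and drop
# the rest of the case body before encoding. Field names are assumptions.

def reformulate_case(case: dict) -> str:
    """Build a retrieval input from a case's key legal features."""
    parts = []
    if case.get("facts"):
        parts.append("Legal facts: " + case["facts"])
    if case.get("issues"):
        parts.append("Legal issues: " + case["issues"])
    return "\n".join(parts)
```

The reformulated string, not the full opinion, is what gets passed to the text encoder, so matching happens on aligned legal features instead of incidental wording.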
What is the role of prompt chaining in classifying long legal documents?
Prompt chaining involves breaking down complex tasks into smaller parts for better handling by language models. This technique is especially useful in classifying extensive legal documents, where it can enhance performance over zero-shot methods and even outperform larger models in specific tasks.

Citation: Trautmann, D. (2023). Large Language Model Prompt Chaining for Long Legal Document Classification. arXiv:2308.04138.
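The chaining idea can be sketched in a few lines: split the long document into overlapping chunks, summarize each chunk with the model, then classify the joined summaries in a final call. This is a minimal sketch, not the paper's pipeline; `call_llm` is a placeholder client, and the chunk sizes and prompt wording are assumptions.

```python
# Sketch of prompt chaining for long legal document classification.
# `call_llm` is a placeholder for any LLM client (assumption).

def chunk_words(text: str, size: int = 512, overlap: int = 64):
    """Split text into overlapping word-level chunks."""
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, max(len(words), 1), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

def classify_long_document(text, labels, call_llm):
    """Chain: summarize each chunk, then classify the summaries."""
    summaries = [
        call_llm(f"Summarize the key legal points:\n{chunk}")
        for chunk in chunk_words(text)
    ]
    final_prompt = (
        f"Classify this document as one of {labels} "
        "based on these section summaries:\n" + "\n".join(summaries)
    )
    return call_llm(final_prompt)
```

Because each call sees only a chunk or a stack of short summaries, the chain stays inside the model's context window even for documents far longer than it.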
How does legal syllogism prompting aid in legal judgment prediction?
Legal syllogism prompting is a method that enhances large language models' ability to predict legal judgments. By focusing on the key information relevant to the judgment and understanding the legal meaning of acts, this method improves model performance in zero-shot judgment prediction experiments.

Citation: Jiang, C., & Yang, X. (2023). Legal Syllogism Prompting: Teaching Large Language Models for Legal Judgment Prediction. arXiv:2307.08321.
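A syllogism-style prompt can be sketched as a fixed template: the applicable law as the major premise, the case facts as the minor premise, and the judgment as the conclusion the model must derive. The wording is illustrative, not the paper's exact prompt.

```python
# Sketch of a legal syllogism prompt for zero-shot judgment prediction.
# The template wording is an illustrative assumption.

def syllogism_prompt(law_text: str, fact_text: str) -> str:
    """Frame judgment prediction as a legal syllogism."""
    return (
        "Predict the judgment using a legal syllogism.\n"
        f"Major premise (law): {law_text}\n"
        f"Minor premise (facts): {fact_text}\n"
        "Conclusion (judgment):"
    )
```

Structuring the prompt this way pushes the model to ground its prediction in the cited statute rather than surface patterns in the facts alone.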
Can Large Language Models effectively apply tax law?
Large Language Models show emerging capabilities in applying tax law, with performance improving in each subsequent model release. These models, especially when enhanced with the right legal texts and prompting techniques, can perform complex tax law analysis, although not yet at expert levels.

Citation: Nay, J. J., Karamardian, D., Lawsky, S. B., Tao, W., Bhat, M., Jain, R., Lee, A. T., Choi, J. H., & Kasai, J. (2023). Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence. arXiv:2306.07075.
How can legal standards improve communication with AI in autonomous roles?
Legal standards, such as fiduciary obligations, can guide behavior in autonomous roles, allowing for robust communication of goals and expectations in unspecified scenarios. Research suggests that large language models are beginning to exhibit an understanding of legal standards, improving their applicability in autonomous decision-making.

Citation: Nay, J. J. (2023). Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards. arXiv:2301.10095.
What is the role of legal prompting in teaching a language model to think like a lawyer?
Legal prompting involves using specific legal reasoning techniques to improve the performance of language models on legal reasoning tasks. Techniques like IRAC (Issue, Rule, Application, Conclusion) have been shown to significantly improve accuracy on tasks such as the COLIEE entailment task, which is based on the Japanese bar exam.

Citation: Yu, F., Quartey, L., & Schilder, F. (2023). Legal Prompting: Teaching a Language Model to Think Like a Lawyer. arXiv:2212.01326.
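An IRAC-style prompt can be sketched as an instruction that forces the model to walk through Issue, Rule, Application, and Conclusion before answering an entailment question. The phrasing below is an illustrative assumption, not the paper's prompt.

```python
# Sketch of an IRAC prompt for statutory entailment questions.
# The instruction wording is an illustrative assumption.

def irac_prompt(statute: str, question: str) -> str:
    """Instruct the model to reason in IRAC order before answering."""
    return (
        f"Statute: {statute}\n"
        f"Question: {question}\n"
        "Answer using IRAC:\n"
        "Issue: identify the legal question.\n"
        "Rule: state the governing rule from the statute.\n"
        "Application: apply the rule to the facts in the question.\n"
        "Conclusion: answer Yes or No."
    )
```

Making the model emit each IRAC step before its final Yes/No tends to surface the reasoning, which also makes wrong answers easier to audit.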
What are the ethical limits of using NLP methods for legal text analysis?
The ethical limits of NLP methods in legal text analysis involve considerations like academic freedom, the diversity of legal and ethical norms, and the threat of moralism in computational law research. Addressing these parameters is crucial for acquiring genuine insights and ensuring responsible use of NLP in legal studies.

Citation: Tsarapatsanis, D., & Aletras, N. (2023). On the Ethical Limits of Natural Language Processing on Legal Text. arXiv:2105.02751.
