Practice at the top of your license.

Digital infrastructure for attorneys

Legal Marketing

You want a sleek website that you can be proud of, and you are committed to growing your legal brand.

Legal Leads

You want shared or exclusive legal leads, and you have an ideal practice area and location in mind.

Automation

You are intrigued by the idea of responsibly designing language models to enhance your practice.

Scale vertically and own your niche.

Tools to Grow Your Law Firm

Time is a limiting factor for legal professionals. Automation is the future.
Our platform helps lawyers:

- delegate tasks
- improve processes
- automate workflows
- receive qualified leads
- own client relationships
- own work product

Technology Offerings

Check Citations

Language models hallucinate legal information 58% of the time with ChatGPT 4 and 88% with Llama 2. Even leading AI tools designed for lawyers hallucinate between 17% and 33% of the time.

Our citation checker is an additional verification mechanism that can help you avoid costly mistakes. You can use it standalone or delegate it to a worker.
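As a rough illustration of the idea (not our production checker), a verification pass might extract reporter-style citations from model output and flag any that are missing from a trusted index. The regex and the index here are simplified stand-ins:

```python
import re

# Bluebook-style reporter citations, e.g. "410 U.S. 113" or "550 F.3d 1000"
CITATION_RE = re.compile(r"\b(\d+)\s+(U\.S\.|F\.\dd|F\. Supp\. \dd|S\. Ct\.)\s+(\d+)\b")

def extract_citations(text: str) -> list[str]:
    """Pull candidate case citations out of model output for verification."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(text)]

def flag_unverified(text: str, known_citations: set[str]) -> list[str]:
    """Return citations that do not appear in a trusted index and need human review."""
    return [c for c in extract_citations(text) if c not in known_citations]
```

A real checker would query an authoritative case-law database rather than a static set, but the shape is the same: extract, look up, flag anything unconfirmed.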

Get Clients

Building a client base is tough. You put in the work, but lead services and conversion are hit-or-miss.
You need a streamlined way to add qualified leads to your workflow and book high-value meetings.
We offer market-wide lead value comparison, supporting most U.S. jurisdictions. Connect your calendar, and we'll handle the rest.
Get Clients

Build Teams

Legal research platforms are overly broad and cost-prohibitive. Commercial LLMs rarely give actionable legal information.
You need a legal software solution that connects to your specific practice area data, giving you control and oversight.
We help you build a team of specialized legal research assistants. Customize them with tools and knowledge relevant to your practice.
Start Building

Track Time

Tracking billable hours is necessary but takes you away from actual legal work, and can lead to disputes with clients.
You need a simple way to log time that doesn't feel like a chore and provides clear, defensible records of your work.
We've integrated simple timekeeping right into your workflow. Choose between quick manual entries or suggested logs based on your activity.
Start Tracking
Grounded
Legal
AI

Responsible AI implementation

We partner with ambitious attorneys and forward-thinking law firms who understand how to use AI responsibly.

Our approach mirrors the legal process, ensuring thorough implementation and risk mitigation.

Step 1: Pleadings

We understand the risks associated with language models, and how they relate specifically to the legal industry.

"One of AI’s prominent applications made headlines this year for a shortcoming known as 'hallucination,' which caused the lawyers using the application to submit briefs with citations to non-existent cases. (Always a bad idea.)"

"However, these tools have the welcome potential to smooth out any mismatch between available resources and urgent needs in our court system."

- Chief Justice John Roberts, 2023 Year-End Report on the Federal Judiciary

That's why we begin with identifying low-risk, high-impact areas for AI integration, ensuring a solid foundation for your AI strategy. For example, we've automated call transcriptions and report generation for legal intake and pre-qualification, providing you with structured summaries of phone calls that streamline research initiation and matter preparation.

Step 2: Discovery

Concerned about client confidentiality and data security? Good.

As is being discovered in The New York Times Company v. Microsoft Corporation (1:23-cv-11195), frontier language models were trained on IP-protected data. This is a nightmare for legal professionals whose livelihoods depend on their work product.

Some, but not all, AI companies will enter into a Data Processing Agreement (DPA), which is a legally binding contract between a Data Controller ("Controller") and a Data Processor ("Processor"). We help you navigate this process and assess your unique security needs. With this information, we determine the safest deployment strategy: web apps hosted on our servers or on servers of your choice, or completely on-premises solutions that don't require an internet connection at all. Our platform is model agnostic, allowing one or more models to be used simultaneously.

Step 3: Trial

Next, we develop and rigorously test your custom AI tools.

Any new technology brings known and unknown risks. We first address the known risks, namely data privacy and security, human error, and hallucinations. We help you sign DPAs to reduce data risks, offer practical training to reduce the risk of human error, and constantly improve our citation checker to reduce the risk of hallucination. No platform is perfect, but our team is obsessed with finding solutions.

For example, we analyzed nearly 5,000 zero-shot case law citations generated by GPT-3.5 Turbo. Our goal was to identify accuracy patterns across jurisdictions and practice areas. We gave the language model as little help as possible in terms of prompting and grounding. The results show that GPT case law citations vary widely across jurisdictions and practice areas, with Federal Constitutional Law being the most accurate and Maine Bankruptcy Law the least.
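For illustration only (this is not our actual analysis pipeline), accuracy patterns like these can be tabulated with a simple group-by over labeled verification results:

```python
from collections import defaultdict

def accuracy_by_group(results: list[dict]) -> dict[str, float]:
    """Share of verified citations per (jurisdiction, practice area) bucket."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in results:
        key = f"{r['jurisdiction']} / {r['area']}"
        totals[key] += 1
        hits[key] += r["verified"]  # True counts as 1, False as 0
    return {k: hits[k] / totals[k] for k in totals}
```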

View case law hallucinations by jurisdiction and practice area here.

Step 4: Appeal

Our commitment extends beyond initial deployment. Through regular consultations, we work to improve your practice assistants. This process can include prompt engineering, creating custom datasets to enhance retrieval accuracy and relevance, or advanced techniques like Reinforcement Learning from Human Feedback (RLHF). We also offer training sessions to keep your team updated on best practices and new features.

Attorneys and AI

Frequently asked questions

Common questions about language models in legal practice
Why should lawyers care about language models?
Lawyers should be interested in language models because they offer significant advantages in legal research, align with ABA guidance on maintaining competence, and enhance cost efficiency. These models can automate and streamline many routine tasks, freeing lawyers to focus on more complex aspects of their cases.
What is the ABA's stance on AI and legal practice?
The ABA has adopted resolutions (604, 608, 609, 610) emphasizing responsible AI development and use, promoting ethical, transparent, and accountable deployment of AI in the legal sector. These resolutions also focus on enhanced cybersecurity, guidelines for organizations engaging in AI, and integrating cybersecurity education into law school curricula.
How do large language models work?
Language models are next-word predictors. Large language models like GPT-4, Llama 2, and Mistral are trained on vast text corpora and use attention mechanisms within the transformer architecture. These mechanisms help the model capture nuanced semantic relationships between words and sentences, enabling it to generate coherent text.
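The "next word predictor" idea can be illustrated with a toy bigram model (real LLMs predict subword tokens with transformers, but the prediction loop has the same shape):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: list[str]) -> dict:
    """Count which word follows which: a toy stand-in for model training."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Pick the most likely next word, as a language model does at each step."""
    return counts[word].most_common(1)[0][0]

model = train_bigrams(["the court held that", "the court found that"])
```

Note that the model can only repeat patterns it has seen; when the pattern is missing, a real LLM still has to guess, which is where hallucinations come from.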
How can litigators benefit from using language models?
Litigators can benefit significantly from language models by leveraging them for efficient legal research, strategic development, jury analysis, and enhancing overall litigation planning. These tools can streamline various aspects of legal practice, making processes more efficient and data-driven.
How do language models increase capital efficiency in legal practice?
Properly developed language models enhance capital efficiency in legal practice by accelerating tasks and making sophisticated legal analysis more accessible. This is particularly beneficial for less experienced practitioners, leveling the playing field in terms of resource availability and expertise.
What risks should lawyers avoid when using language models?
Lawyers need to be cautious of hallucinations (false information) in outputs and ambiguous provenance (unclear sources) in training data. It's crucial to verify AI-generated information and be aware of the limitations of these models in legal contexts.
Why do hallucinations in language models occur, and how can they be addressed?
Hallucinations in language models occur not due to a lack of reasoning, but due to a lack of specific knowledge. These models might inaccurately recall or generate specific information. Addressing hallucinations involves explicit prompting against them, grounding the model with context, and verifying information with retrieved documents. This enhances the model's ability to reason over the knowledge it has access to.
What are the implications of hallucinations in language models, as seen in cases like U.S. v. Cohen and Mata v. Avianca, Inc.?
These cases illustrate the dangers of relying on language models without proper verification. In U.S. v. Cohen, AI-generated misinformation led to complications, while in Mata v. Avianca Inc., lawyers submitted non-existent judicial opinions. These examples underscore the need for careful review and human oversight in using language models in legal practice.
Can hallucinations in language model outputs be eliminated?
No. Completely eliminating hallucinations in language model outputs is not currently possible. Language models are text prediction engines: they have to guess what the next word is. However, hallucinations can be significantly reduced through grounding, careful prompt design, and providing relevant context to language models. Check out the case law citation tool to learn more.
How can response quality be improved when using language models in legal contexts?
Enhancing response quality with language models in legal contexts involves prompt engineering and grounding techniques, which provide the language model with necessary contextual information. This approach ensures clear, context-aware, and accurate language model responses, effectively utilizing the model's reasoning capabilities over the provided knowledge.
What is prompt engineering in the context of language models and law?
Prompt engineering involves designing specific queries or instructions to guide AI models towards generating more accurate and contextually appropriate responses. It's a critical skill for legal professionals using AI, ensuring that the technology aligns with the specific needs and nuances of legal cases.
What are some basic prompting techniques to improve response quality in language models?
A few basic techniques to improve response quality in language models include (1) specifying clear task formats and tone, (2) encouraging step-by-step thinking through a chain of thought approach, and (3) persona prompting. These techniques help in eliciting more precise and relevant responses.
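A minimal sketch of how these three techniques can be combined into one prompt string (the wording is illustrative, not a recommended template):

```python
def build_prompt(question: str, persona: str, fmt: str) -> str:
    """Combine persona, format, and chain-of-thought cues into one prompt."""
    return (
        f"You are {persona}.\n"                   # (3) persona prompting
        f"Answer in this format: {fmt}.\n"        # (1) clear task format and tone
        "Think step by step before answering.\n"  # (2) chain-of-thought cue
        f"Question: {question}"
    )
```

Usage: `build_prompt("Is this statement hearsay?", "an evidence law professor", "a numbered list")` yields a prompt that sets role, output shape, and reasoning style before the question itself.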
What is "grounding" a language model?
Grounding a language model means connecting it to a reliable data source. Grounding with Retrieval Augmented Generation (RAG) helps mitigate hallucinations while enhancing the model's accuracy. This process involves providing the language model with a rich context or specific information to base its responses on, leading to more accurate and reliable outputs.
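A toy illustration of the RAG pattern, using naive keyword matching in place of the embedding-based retrieval real systems use:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the query, keep the top k."""
    words = query.lower().split()
    scored = sorted(documents, key=lambda d: -sum(w in d.lower() for w in words))
    return scored[:k]

def grounded_prompt(query: str, documents: list[str]) -> str:
    """Ground the model by pasting retrieved passages ahead of the question."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Using ONLY the context below, answer the question.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )
```

The resulting prompt would then be sent to the model, which now answers from the supplied passages rather than from its own (fallible) recall.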
What are hyperparameters in language models, and how do they change outputs?
Hyperparameters in language models, such as temperature, token window, and penalties, significantly influence the model's outputs. The temperature setting affects the creativity or randomness of the response, the token window determines the scope of the output, and penalties help prevent repetitive or redundant phrases. Adjusting these settings allows legal professionals to tailor the language model's responses, ensuring they are suitable for the specific requirements of brainstorming, legal research, drafting, or analysis.