Full Directory

Browse all products
Robin AI

Robin AI is a leader in legal AI. Our Legal AI Assistant is used by hundreds of businesses globally to harness the power of generative AI for legal. We empower legal teams and lawyers to make contract processes effortless.

AI Tools & Software
Pincites

Pincites makes contract negotiations faster and more consistent for legal teams. Using advanced language models, Pincites allows legal teams to build robust contract playbooks that any internal team can apply consistently within Microsoft Word.

AI Tools & Software
ThoughtRiver

ThoughtRiver was founded in 2016 to transform third-party contract review. Over the past nine years, we’ve become a leader in the Legal Tech space, working with some of the world’s top legal teams and organizations. Our success is built on integrating human-led, legally trained data into our own LLM, ensuring accuracy and relevance in contract analysis.

AI Tools & Software
Fileread

Our AI-powered litigation tools open up a dialogue with your data, so your legal team can focus on what they do best: thinking.

AI Tools & Software
Beagle

Beagle is transforming how law firms, corporate legal teams, and eDiscovery service providers handle document review and eDiscovery. Our AI-powered platform delivers faster, more accurate results, streamlining processes and reducing costs to help you uncover key data quickly and efficiently.

AI Tools & Software
Casetext

Casetext is a legal research platform that uses artificial intelligence to help lawyers and legal professionals find relevant case law, statutes, and other legal materials efficiently. It was particularly known for its AI-powered tool, CARA (Case Analysis Research Assistant), which allowed users to upload legal documents and receive highly relevant case law recommendations.

AI Tools & Software

AI Law Articles, Reports & Other Publications

White House Issues National Security Memorandum on Artificial Intelligence
AI Publications

Covington's analysis examines the October 2024 National Security Memorandum requiring AISI to conduct voluntary preliminary testing of frontier AI models for national security threats including offensive cyber operations, biological/chemical weapons development, and autonomous malicious behavior. The assessment details new requirements for agencies to implement AI risk management practices, testing protocols, and classified evaluations while building on NIST's dual-use foundation model guidelines. This authoritative national security law analysis demonstrates how the Biden administration's AI NSM establishes comprehensive governance frameworks for military and intelligence AI deployment while requiring private sector cooperation in threat assessment and mitigation strategies.

Amoral Drift in AI Corporate Governance
AI Publications

This Harvard Law Review chapter tackles the “amoral drift” in AI corporate governance, warning that traditional tools—like board oversight and shareholder limits—fail to prevent companies like OpenAI and Anthropic from slipping toward profit-driven motives. It introduces the concept of “superstakeholders”—key talent and Big Tech backers whose equity-based influence can undermine an organization’s prosocial mission. The article also examines co-governance parallels, advocating democratic oversight structures that could anchor AI firms to ethical and societal objectives. Legal professionals and corporate counsel will want to dive into this piece to understand innovative governance mechanisms that balance existential AI risks with accountability.

Artificial Intelligence 2025 Legislation
AI Publications

This NCSL overview analyzes the surge of AI legislation across U.S. states in 2025, reporting on dozens of bills and task forces addressing everything from algorithmic bias to election disinformation. Legal practitioners will find this essential, as it synthesizes how states are shaping AI governance—providing insight into fast-moving, jurisdiction-specific trends and emerging compliance triggers. Click through to explore the full toolkit, tracked bills, and strategic guidance for navigating the evolving state-level legal landscape.

Artificial Intelligence and the Creative Double Bind
AI Publications

This Harvard Law Review “Developments in the Law” chapter examines the “creative double bind” that generative AI imposes on artists, offering powerful new tools while simultaneously threatening traditional copyright frameworks. It explores how this tension manifests differently across creative communities, from screenwriters to choreographers, depending on their varying attachments to existing IP protections. The piece spotlights how strategies like private negotiations, as seen in the WGA writers’ strike, could provide models for adapting copyright rules to balance innovation and protection. IP practitioners and policy experts will find this essential reading for its nuanced analysis and practical roadmap for navigating AI’s impact on creative industries.

Resetting Antidiscrimination Law in the Age of AI
AI Publications

This Harvard Law Review article argues that current antidiscrimination laws—built for human decision-making—are ill-suited to handle algorithmic bias in the age of AI. It critiques the limitations of intent-based frameworks and disparate impact analysis under Supreme Court precedents, urging a doctrinal reset to ensure fairness in AI‑driven decision systems. The piece proposes modernizing legal tools—such as recalibrating Title VII and equal protection tests—to oversee AI outputs and mandate transparent auditing, empowering attorneys and regulators to combat hidden model unfairness. Legal professionals will want to read the full article to explore concrete strategies for integrating algorithmic accountability into established civil rights regimes.

Co-Governance and the Future of AI Regulation
AI Publications

This Harvard Law Review article advocates a co-governance model for AI regulation in which governments, industry, civil society, and impacted communities work collaboratively. It argues that traditional top-down rules fall short for AI’s complexity and urges transparency, inclusivity, and shared responsibility. This approach aims to balance innovation with accountability, embedding ethical oversight and continuous stakeholder dialogue. Legal professionals and policymakers will find this framework essential for crafting adaptable, equitable AI governance in an evolving tech landscape.


Submit your listing

Whether you offer an AI-powered legal tool or are a legal expert in AI regulation, submit your profile or product to be reviewed and included in our directory.

Submit now
Submit your AI Article or Publication

If you have written an article, report, or book, or created any media educating about legal issues around AI, submit it for approval to be included in our directory.

Submit now