Full Directory

Robin AI

Robin AI is a leader in legal AI. Our Legal AI Assistant is used by hundreds of businesses globally to harness the power of generative AI for legal. We empower legal teams and lawyers to make contract processes effortless.

AI Tools & Software
Pincites

Pincites makes contract negotiations faster and more consistent for legal teams. Using advanced language models, Pincites allows legal teams to build robust contract playbooks that any internal team can apply consistently within Microsoft Word.

AI Tools & Software
ThoughtRiver

ThoughtRiver was founded in 2016 to transform third-party contract review. Over the past nine years, we’ve become a leader in the Legal Tech space, working with some of the world’s top legal teams and organizations. Our success is built on integrating human-led, legally trained data into our own LLM, ensuring accuracy and relevance in contract analysis.

AI Tools & Software
Fileread

Our AI-powered litigation tools open up a dialogue with your data, so your legal team can focus on what they do best: thinking.

AI Tools & Software
Beagle

Beagle is transforming how law firms, corporate legal teams, and eDiscovery service providers handle document review and eDiscovery. Our AI-powered platform delivers faster, more accurate results, streamlining processes and reducing costs to help you uncover key data quickly and efficiently.

AI Tools & Software
Casetext

Casetext is a legal research platform that uses artificial intelligence to help lawyers and legal professionals find relevant case law, statutes, and other legal materials efficiently. It is particularly known for its AI-powered tool, CARA (Case Analysis Research Assistant), which allows users to upload legal documents and receive highly relevant case law recommendations.

AI Tools & Software

AI Law Articles, Reports & Other Publications

Memorandum on Advancing the United States' Leadership in Artificial Intelligence
AI Publications

The White House's National Security Memorandum establishes comprehensive AI governance frameworks for military and intelligence purposes, requiring AISI testing of frontier AI models for cybersecurity, biological/chemical weapons, and nuclear threats while mandating classified evaluations and agency risk management practices. This landmark presidential directive creates the 'Framework to Advance AI Governance and Risk Management in National Security' as a counterpart to OMB civilian guidance while requiring DOD, DHS, and intelligence agencies to develop capabilities for rapid systematic AI testing. This authoritative government policy document demonstrates the Biden administration's strategic approach to balancing AI innovation with national security protection through systematic threat assessment, classified information safeguards, and interagency coordination mechanisms.

Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks
AI Publications

New York DFS's regulatory guidance details how AI advancement creates significant cybersecurity opportunities for criminals while enhancing threat detection capabilities for financial institutions under the state's cybersecurity regulation framework. The analysis emphasizes AI-enabled social engineering as the most significant threat to financial services while requiring covered entities to assess and address AI-related cybersecurity risks through existing Part 500 obligations. This state financial regulatory analysis demonstrates how AI transforms cyber risk landscapes by enabling sophisticated attacks at greater scale and speed while simultaneously providing improved defensive capabilities for prevention, detection, and incident response strategies.

Tracker: State Legislation on Deepfakes in Elections
AI Publications

Public Citizen's democracy protection analysis tracks bipartisan state legislation regulating AI-generated election deepfakes that depict candidates saying or doing things they never did to damage reputations and deceive voters. The assessment emphasizes urgent regulatory needs as deepfakes pose acute threats to democratic processes, particularly when released close to elections without sufficient time for debunking. This democracy advocacy perspective highlights how AI-generated election manipulation could alter electoral outcomes and undermine voter confidence, demonstrating the critical need for regulatory frameworks that address artificial intelligence's potential to supercharge disinformation and manipulate democratic participation across jurisdictions.

Year in Review: The Top Ten US Data Privacy Developments from 2024
AI Publications

WilmerHale's comprehensive review tracks 2024's substantial data privacy advances including the EU AI Act adoption, growing state AI legislation, and continued FTC enforcement focusing on AI capabilities claims and unfair AI usage. The analysis details federal developments including NIST's nonbinding AI guidance responding to Biden's Executive Order, California's three new AI transparency laws, and international competition authority statements on AI ecosystem protection. This authoritative privacy law assessment demonstrates accelerating regulatory momentum across international, federal, and state levels while highlighting key enforcement trends around genetic data, location tracking, and national security concerns that will shape 2025 compliance obligations.

AI Pulse: What's new in AI regulations?
AI Publications

Trend Micro's cybersecurity analysis examines California's controversial SB 1047 legislation and the ongoing debate over regulating AI as technology versus specific applications, highlighting expert disagreements between innovation promotion and risk mitigation. The assessment details industry-government collaboration through NIST agreements with OpenAI and Anthropic while examining AI safety challenges including OpenAI's o1 model scoring medium-risk on CBRN dangers and deceptive capabilities. This cybersecurity industry perspective emphasizes the need for clear frameworks to determine AI risk while noting that regulated sectors like financial services and healthcare continue leading AI adoption despite compliance requirements.

Cybersecurity and Artificial Intelligence: An Increasingly Critical Interdependency
AI Publications

Harvard Law's analysis examines the critical interdependency between cybersecurity and AI as organizations combat rising cyber dangers while recognizing AI's promise and threat to security infrastructure. The assessment details how AI transforms cybersecurity landscapes through both enhanced defensive capabilities and sophisticated attack vectors, citing 2024 breach cost data and federal regulatory responses including SEC disclosure requirements. This academic legal analysis emphasizes how the shift from analog to digital economies requires organizations to balance AI implementation benefits against emerging security vulnerabilities while maintaining compliance with evolving regulatory frameworks and disclosure obligations.

Submit your listing

Whether you offer an AI-powered legal tool or are a legal expert in AI regulation, submit your profile or product to be reviewed and included in our directory.

Submit now
Submit your AI Article or Publication

If you have written an article, report, or book, or created any media educating about legal issues around AI, submit it for approval to be included in our directory.

Submit now