Callidus is an advanced legal AI platform. Offering a wide range of support for both litigation and transactional workflows, Callidus helps legal professionals drive better outcomes with increased efficiency. Callidus keeps the lawyer in the loop with interactive and highly visual workflows, and none of our solutions require more than 5 minutes of setup time.
The AI-driven platform transforming mass tort case evaluations and settlements. Our software analyzes thousands of medical records in minutes, swiftly categorizing documents and surfacing key information to streamline case reviews and preparation for MDL settlements.
LAER AI is driven by a singular vision of radically transforming the experience of search and helping organizations find meaning and patterns hidden behind volumes of disparate and unstructured data. Its current product provides a significantly more accurate, faster, and cost-effective alternative to the expensive document review process during litigation, investigations, and compliance.
descrybe.ai is a singular way to search for, and understand, caselaw. Our unique process leverages generative AI to make complex legal information more accessible to professionals and laypeople alike. We are laser-focused on easy access to caselaw research: lowering cost (it's free!), increasing ease of use, and providing natural language search and summarization capabilities.
Responsiv is a legal and compliance automation platform that takes routine administrative tasks and delivers ready-to-review first drafts of work product. By integrating with your organization's systems, Responsiv analyzes regulatory changes and performs a gap analysis on your existing controls, policies, and procedures, suggesting necessary changes.
Lexlink AI revolutionizes legal document analysis by automating the identification of inconsistencies and discrepancies, enhancing litigation strategies. Key features include advanced inconsistency detection and automated discovery drafting, streamlining processes and saving time. Committed to transforming the legal industry, Lexlink AI ensures data privacy and champions innovation, making proceedings more efficient and fair.
This Perspective by Ben Chester Cheong (Singapore University of Social Sciences & Cambridge) offers a comprehensive legal–ethical review of transparency and accountability challenges in AI systems governing human wellbeing. It structures the discussion into four pillars—technical explainability methods, regulatory frameworks, ethical safeguards, and multi‑stakeholder collaboration—highlighting how each area plays a vital role in ensuring trust and societal resilience. Legal professionals will appreciate its actionable framework bridging technology, ethics, and governance, making it a timely resource amid emerging regulations like the GDPR's “right to explanation” and EU AI Act mandates. By offering strategic clarity and policy cohesion, this article equips lawyers, compliance leaders, and policymakers with tools to embed transparency and accountability into AI systems that shape lives—making it a must‑read for anyone advising on responsible AI deployment.
This essay by Margot E. Kaminski and Meg Leta Jones explores how current legal frameworks actively construct AI-generated speech, rather than being passively disrupted by it. It introduces the “legal construction of technology” method—analyzing how laws like the First Amendment, content moderation, risk regulation, and consumer protection actively interpret and shape AI speech. This analysis matters to legal professionals because it reveals that existing institutions and norms already provide structured pathways for meaningful oversight, shifting the conversation from reactive problem-solving to proactive values-based policy design. By demonstrating that law and AI co-evolve through these intentional constructions, this piece empowers lawyers and policymakers to craft more effective, principled governance—prompting deeper engagement with the field.
This article by Graham H. Ryan analyzes how generative AI challenges the legal immunity conferred by Section 230 of the Communications Decency Act—and why that protection may crumble under new judicial scrutiny. Ryan argues that generative AI systems “create or develop content” and thus likely fall outside Section 230’s existing scope, exposing providers to increased liability for design decisions and algorithmic contributions. It matters for legal professionals because emerging case law may redefine liability standards—from co-authoring content to design-based claims—signaling a pivotal shift in AI governance and internet law that practitioners need to monitor closely. By framing generative AI as a catalyst for reevaluating the legal foundations of internet speech, the article urges lawyers to proactively reassess risk strategies and regulatory compliance in this evolving landscape.
This analysis explores how China's August 15, 2023 regulations on generative AI reflect a strategic choice to slow AI progress in favor of social and political control. Its key finding is that Beijing's cautious regulatory approach contrasts sharply with the innovation-first strategies of the U.S. and EU, granting other jurisdictions essential breathing room to develop responsible AI policies. Legal professionals will find this article timely and compelling, as it provides practical insight into how geopolitical AI maneuvering reshapes cross-border legal strategy, compliance, and tech governance. By positioning China's policy as a global pivot point, this piece equips lawyers and policymakers with a nuanced understanding of how AI regulation is being shaped on the international stage—prompting further investigation and dialogue.
This analysis by Matt Blaszczyk, Geoffrey McGovern, and Karlyn D. Stanley explores how U.S. copyright law grapples with AI-generated content, addressing whether AI-assisted works qualify for protection and how training datasets are treated under both U.S. and EU doctrines. It highlights a key insight: U.S. law requires human “authorial contribution” for copyrightability, while the EU allows rights holders to challenge commercial AI training—underscoring a growing legal divide. Timely hearings in Congress and updates from the U.S. Copyright Office make this discussion urgent for legal professionals managing copyright risks in generative AI projects. The analysis empowers practitioners with a clear roadmap for navigating emerging policy debates, licensing strategies, and litigation landscapes around AI-authored content—making it essential reading for IP counsel advising on innovation-driven initiatives.
This expert analysis from Dentons partners Jennifer Cass, Anna Copeman, Sam Caunt, and David Wagget examines the unresolved IP challenges arising from generative AI in 2024 and the legal “cliffhangers” heading into 2025. It highlights key issues like copyright infringement during AI training, ownership of AI-generated works and inventions, and emerging litigation—such as Getty Images v. Stability AI. Legal professionals will find value in its forward-looking take on 2025 reforms, including licensing trends, contractual risk strategies, and pending court rulings. Written with practical insight, this piece equips lawyers with proactive tools to guide clients through rapidly evolving AI‑IP terrain.