LegalMation leverages the latest artificial intelligence systems, including GPT-4, to help corporate legal departments and law firms drive efficiency with straightforward, easily deployed solutions focused specifically on litigation and dispute resolution workflows.
Bench IQ is an AI-powered service that allows attorneys to uncover the reasons behind all of their judges' rulings, not just the 3% that can be found in written judicial opinions. It gives attorneys unparalleled insight into their judges' thinking, enabling them to argue more successfully.
LegalFly is an AI-powered platform designed to streamline legal operations, offering services such as contract review, drafting, and due diligence. It aims to enhance efficiency and accuracy for legal teams by automating repetitive tasks and allowing professionals to focus on strategic work.
Rhetoric helps litigators know more, persuade more, and win more cases. It identifies judge preferences and custom-tailors briefs through similarity scoring, sentiment analysis, and more.
FirmPilot is the first AI Marketing Platform for Law Firms that intelligently suggests marketing tactics and generates high-quality content 10x faster to get more cases on autopilot.
Skribe is a company that offers an AI-powered alternative to traditional court reporting, aiming to streamline the process of capturing and analyzing legal testimony. It was co-founded by Karl Seelbach, a seasoned litigator, and Tom Irby, a former owner of a court reporting firm.
This Perspective by Ben Chester Cheong (Singapore University of Social Sciences & Cambridge) offers a comprehensive legal–ethical review of transparency and accountability challenges in AI systems governing human wellbeing. It structures the discussion into four pillars—technical explainability methods, regulatory frameworks, ethical safeguards, and multi‑stakeholder collaboration—highlighting how each area plays a vital role in ensuring trust and societal resilience. Legal professionals will appreciate its actionable framework that bridges tech, ethics, and governance, making it a timely resource amid emerging regulations like the GDPR's “right to explanation” and EU AI Act mandates. By offering strategic clarity and policy cohesion, this article equips lawyers, compliance leaders, and policymakers with tools to embed transparency and accountability into AI systems that shape lives—making it a must‑read for anyone advising on responsible AI deployment.
This essay by Margot E. Kaminski and Meg Leta Jones explores how current legal frameworks actively construct AI-generated speech, rather than being passively disrupted by it. It introduces the “legal construction of technology” method—analyzing how laws like the First Amendment, content moderation, risk regulation, and consumer protection actively interpret and shape AI speech. This analysis matters to legal professionals because it reveals that existing institutions and norms already provide structured pathways for meaningful oversight, shifting the conversation from reactive problem-solving to proactive values-based policy design. By demonstrating that law and AI co-evolve through these intentional constructions, this piece empowers lawyers and policymakers to craft more effective, principled governance—prompting deeper engagement with the field.
This article by Graham H. Ryan analyzes how generative AI challenges the legal immunity conferred by Section 230 of the Communications Decency Act—and why that protection may crumble under new judicial scrutiny. Ryan argues that generative AI systems “create or develop content” and thus likely fall outside Section 230’s existing scope, exposing providers to increased liability for design decisions and algorithmic contributions. It matters for legal professionals because emerging case law may redefine liability standards—from co-authoring content to design-based claims—signaling a pivotal shift in AI governance and internet law that practitioners need to monitor closely. By framing generative AI as a catalyst for reevaluating the legal foundations of internet speech, the article urges lawyers to proactively reassess risk strategies and regulatory compliance in this evolving landscape.
This insightful analysis explores how China’s August 15, 2023 regulations on generative AI reflect a strategic choice to slow AI progress for social and political control. It reveals a key finding that Beijing’s cautious regulatory approach contrasts sharply with the innovation-first strategies of the U.S. and EU, granting other jurisdictions essential breathing room to develop responsible AI policies. Legal professionals will find this article timely and compelling, as it provides practical insight into how geopolitical AI maneuvering reshapes cross-border legal strategy, compliance, and tech governance. By positioning China’s policy as a global pivot point, this piece equips lawyers and policymakers with a nuanced understanding of how AI regulation is being shaped on the international stage—prompting further investigation and dialogue.
This analysis by Matt Blaszczyk, Geoffrey McGovern, and Karlyn D. Stanley explores how U.S. copyright law grapples with AI-generated content, addressing whether AI-assisted works qualify for protection and how training datasets are treated under both U.S. and EU doctrines. It highlights a key insight: U.S. law requires human “authorial contribution” for copyrightability, while the EU allows rights holders to challenge commercial AI training—underscoring a growing legal divide. Timely hearings in Congress and updates from the U.S. Copyright Office make this discussion urgent for legal professionals managing copyright risks in generative AI projects. The analysis empowers practitioners with a clear roadmap for navigating emerging policy debates, licensing strategies, and litigation landscapes around AI-authored content—making it essential reading for IP counsel advising on innovation-driven initiatives.
This expert analysis from Dentons partners Jennifer Cass, Anna Copeman, Sam Caunt, and David Wagget examines the unresolved IP challenges arising from generative AI in 2024 and the legal “cliffhangers” heading into 2025. It highlights key issues like copyright infringement during AI training, ownership of AI-generated works and inventions, and emerging litigation—such as Getty Images v. Stability AI. Legal professionals will find value in its forward-looking take on 2025 reforms, including licensing trends, contractual risk strategies, and pending court rulings. Written with practical insight, this piece equips lawyers with proactive tools to guide clients through rapidly evolving AI‑IP terrain.