Paxton is an innovative legal technology firm transforming the legal landscape. Our vision is to equip legal professionals with an AI assistant that supercharges efficiency, enhances quality, and enables extraordinary results.
Developer of a document review platform designed to help law firms automate the review process and find relevant evidence. The company's platform uses artificial intelligence to surface evidence supporting clients' cases, instantly view event timelines, auto-generate tags, and auto-categorize documents, helping lawyers unearth critical evidence and build comprehensive timelines.
DocLens.ai is a Software as a Service (SaaS) platform that leverages artificial intelligence (AI) and machine learning (ML) to assist insurance professionals in managing legal risks associated with liability claims and complex document reviews. The platform is designed to process both structured and unstructured data, including various types of documents, to extract critical information and provide actionable insights.
Wexler establishes the facts in any contentious matter, from an internal investigation to international litigation to an employee grievance. Disputes of any kind rely on a deep understanding of the facts. With Wexler, legal, HR, compliance, forensic accounting, and tax teams can quickly understand the facts in any matter, reducing doubt, saving critical time, and increasing ROI through more successful outcomes and fewer written-off costs.
DeepJudge is the core AI platform for legal professionals. Powered by world-class enterprise search that serves up immediate access to all of the institutional knowledge in your firm, DeepJudge enables you to build entire AI applications, encapsulate multi-step workflows, and implement LLM agents.
Alexi is the premier AI-powered litigation platform, providing legal teams with high-quality research memos, pinpointing crucial legal issues and arguments, and automating routine litigation tasks.
This Harvard Journal of Law & Technology digest makes a bold case for the U.S. and its allies to shift from abstract AI policy debates to a concrete “Chips for Peace” framework, tying together frontier-AI regulation, export controls, and benefit-sharing to prevent high-stakes misuse. It outlines three strategic pillars—catastrophe prevention, equitable prosperity, and coordinated governance—that leverage AI chip supply chains for global stability. Legal professionals and policymakers will find this compelling: it transforms chip controls into a diplomatic lever and calls for enforceable standards that go beyond national borders. This piece matters because it translates geopolitical tensions into a proactive legal strategy with real-world tools—click through to explore how your firm can navigate and shape the next generation of AI governance.
This insightful analysis by Megan Bannigan, Christopher S. Ford, Samuel J. Allaman, and Abigail Liles examines a pivotal February 2025 Delaware District Court ruling in Thomson Reuters v. ROSS Intelligence, where the court granted summary judgment for Thomson Reuters on direct copyright infringement and rejected ROSS’s fair use defense. A key takeaway is that courts may treat unlicensed training of AI models on copyrighted works as non-transformative, with commercial use and market harm outweighing internal-only access arguments. Timing matters: this decision offers a strong early signal that fair use defenses in AI-focused copyright litigation may face steep judicial scrutiny. For IP counsel and AI developers, the article offers actionable clarity on how fair use factors—especially purpose, transformation, and market impact—are being applied in emerging AI cases, making it essential reading for legal professionals navigating generative AI risk.
This Harvard Journal of Law & Technology digest argues that the Federal Circuit should adopt the USPTO’s subject‑matter guidance to resolve the nagging inconsistencies from Alice in determining abstract‑idea patent eligibility under 35 U.S.C. § 101. It explains how courts currently struggle with a unified test—highlighting the fragmented judicial analyses and urging systematic classification of abstract‑idea categories at Step One. By leaning on PTO guidance for “groupings of subject matter” and “practical application” tests, the piece shows how the Federal Circuit could enhance predictability in § 101 decisions. IP practitioners and patent litigators will find this argument compelling and practical—it offers a clear pathway to untangle abstract‑idea ambiguity and fortify drafting and litigation strategies.
The article outlines the NYDFS’s October 16, 2024 Industry Letter, which leverages existing 23 NYCRR Part 500 frameworks to guide financial institutions on managing cybersecurity risks tied to AI—including deepfake-enabled social engineering, third-party vendor risk, and AI as a threat vector. It emphasizes how firms should integrate AI-specific controls—like deepfake-resistant MFA, annual AI‑risk assessments, vendor due diligence, and AI‑awareness training—without introducing new regulations. Legal professionals and compliance teams will find this essential for updating governance frameworks, tightening vendor contracts, and ensuring regulatory adherence. Click through for a practical roadmap on aligning your cybersecurity programs with evolving AI‑driven threats.
Yale Journal of Law & Technology’s article delves into the ethical and bias challenges in AI‑powered legal tools, dissecting how human inputs—from data selection to prompting—profoundly shape AI outputs. It spotlights the tension between efficiency gains and the risk of automated errors, offering legal professionals a roadmap to evaluate when and how to retain human oversight. This piece matters because it equips lawyers and compliance teams with actionable insights to design fair, defensible AI workflows. Dive into the full analysis to understand the mechanics driving bias and how to implement guardrails that uphold integrity and trust.
This article highlights a “landmark” November 2023 ruling by the Beijing Internet Court, which for the first time confirmed that AI‑generated images—specifically those produced via Stable Diffusion with creative prompting and refinement—can qualify for copyright under Chinese law. It explains how the court analysed key concepts like “originality” and “intellectual achievement,” crediting the prompt engineer—not the AI model—as the true author. The ruling signals China’s judicial readiness to grapple with AI‑driven creativity and sets an actionable precedent for IP ownership in generative AI cases. Legal professionals will find its deep dive into authorship criteria especially relevant. Click through to explore the court’s detailed reasoning and its implications for future AI‑related IP strategy.