Paxton is an innovative legal technology firm transforming the legal landscape. Our vision is to equip legal professionals with an AI assistant that supercharges efficiency, enhances quality, and enables extraordinary results.
Developer of a document review platform designed to help law firms automate the review process and find relevant evidence. The company's platform uses artificial intelligence to surface evidence supporting clients' cases, instantly view event timelines, auto-generate tags, and auto-categorize documents, helping lawyers unearth critical evidence and build comprehensive timelines.
DocLens.ai is a Software as a Service (SaaS) platform that leverages artificial intelligence (AI) and machine learning (ML) to assist insurance professionals in managing legal risks associated with liability claims and complex document reviews. The platform is designed to process both structured and unstructured data, including various types of documents, to extract critical information and provide actionable insights.
Wexler establishes the facts in any contentious matter, from an internal investigation to international litigation to an employee grievance. Disputes of any kind rely on a deep understanding of the facts. With Wexler, legal, HR, compliance, forensic accounting, and tax teams can quickly understand the facts in any matter, reducing doubt, saving critical time, and increasing ROI through more successful outcomes and fewer written-off costs.
DeepJudge is the core AI platform for legal professionals. Powered by world-class enterprise search that serves up immediate access to all of the institutional knowledge in your firm, DeepJudge enables you to build entire AI applications, encapsulate multi-step workflows, and implement LLM agents.
Alexi is the premier AI-powered litigation platform, providing legal teams with high-quality research memos, pinpointing crucial legal issues and arguments, and automating routine litigation tasks.
This in-depth analysis by Chad A. Rutkowski of Baker & Hostetler unpacks the February 2025 Thomson Reuters v. Ross Intelligence decision, where the Delaware court granted summary judgment for Thomson Reuters—holding that Ross’s use of Westlaw headnotes to train its AI tool did not qualify as fair use. The court made clear that the training was commercial, non-transformative, and created a market substitute, emphasizing that even “intermediate” copying without substantial transformation can fail fair use protections. This ruling matters deeply to legal professionals: it sets a critical precedent that AI developers must carefully secure proper licenses for copyrighted training data, with fair use defenses facing steep judicial scrutiny. With broader implications for generative AI platforms like OpenAI or Meta, this piece is essential reading for IP counsel advising on AI-data compliance and litigation strategy—making it a must-click for forward-thinking practitioners.
This insightful analysis tackles a rarely addressed issue—whether AI trainers can claim copyright ownership over the output they help generate. Rutkowski dives into scenarios like image-generation with Midjourney, questioning if human prompts and curatorial choices elevate trainers into co-authorship roles. This matters for legal professionals advising AI developers and users, as it introduces new contours to authorship, licensing, and ownership arguments that could reshape IP strategies. By spotlighting the nuanced interplay between human input and AI-generated results, the piece urges counsel to proactively clarify rights, licensing, and attribution in training workflows—making it a vital read for forward-thinking IP practitioners.
This in-depth analysis by Eric Dinallo, Avi Gesser, Matt Kelly, Samuel J. Allaman, Melyssa Eigen, Ned Terrace, Stephanie Thomas, and Mengyi Xu examines Colorado’s proposed amendment to extend its AI governance and risk-management regulations—originally for life insurers—to auto and health insurance providers. Highlighting key updates like bias evaluations, board-level oversight, clear consumer explanations of adverse AI-driven decisions, and mandated human oversight in healthcare determinations, the authors draw attention to near-term compliance deadlines in 2025. The piece underscores a shift, showing how state regulators are preemptively integrating AI into sector-specific governance, and offers legal and compliance teams concrete preparation steps. This matters to legal professionals because it signals growing state-level enforcement of AI accountability in insurance, helping counsel advise insurers on bridging internal policies with emerging regulatory frameworks.
This analysis—authored by Megan K. Bannigan, Christopher S. Ford, Samuel J. Allaman, and Abigail Liles—breaks down a pivotal February 2025 ruling in Tremblay v. OpenAI, where a U.S. federal court ordered OpenAI to produce its full training dataset for GPT‑4 in a copyright infringement case. The ruling underscores that courts are now treating training data as central to proving direct AI‑related copyright claims, even amidst the tension between discovery obligations and trade‑secret protection. For legal professionals, this marks a significant escalation in e‑discovery strategy: practitioners must now advise AI developers on balancing transparency, litigation readiness, and data security under protective orders. By spotlighting emerging standards for dataset disclosure, the article offers invaluable insight for litigators, in‑house counsel, and compliance teams managing AI‑driven legal risk.
This detailed analysis from Debevoise & Plimpton explores how the European Union’s AI Act intersects with insurance industry practices, offering a risk-tiered framework that directly impacts underwriting, fraud detection, and customer service AI tools. The article contrasts the EU's prescriptive regime with the UK's more flexible, principles-based oversight, helping legal and compliance teams understand diverging global regulatory landscapes. A standout insight is that many insurer AI use cases may fall outside the AI Act’s strictest categories—but existing frameworks like Solvency II, DORA, and the IDD already impose significant governance and transparency expectations. With August 2026 compliance deadlines looming, this piece provides insurers and their counsel with a practical roadmap to prepare for cross-jurisdictional AI oversight.
This analysis from Annie Dulka explores how AI applications in international human rights law are reshaping legal frameworks—from refugee protection and due process to surveillance governance. It outlines both innovative benefits (e.g., enhanced monitoring, rapid documentation) and legal risks (e.g., bias in asylum decisions, privacy violations), arguing that robust oversight and principled deployment are essential to align AI tools with international human rights norms. This matters significantly for legal professionals navigating cross-border AI use, as it offers a practical roadmap for integrating AI ethics into treaty interpretation, case law, and compliance mechanisms. Engaging and authoritative, the piece encourages lawyers and policymakers to proactively shape AI deployment in human rights contexts—making it a compelling entry point for those advising on global AI governance.