Briefpoint is a legal tech company that offers AI-powered software to automate and streamline the discovery process for legal professionals. It integrates with legal practice management software like Clio and Smokeball.
Docsum is an AI contract review and negotiation platform. With Docsum, legal, procurement, and sales teams can negotiate and manage contracts up to 3x faster, reducing time to close and helping them win more deals. Docsum works by analyzing and redlining contracts using configurable playbooks owned by lawyers.
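To make the "configurable playbook" idea concrete, here is a purely hypothetical sketch of how a lawyer-owned playbook rule for a clause might be represented and applied in code. This is not Docsum's actual schema or API; the names (`PlaybookRule`, `suggest_redline`) and the string-matching step are illustrative assumptions only.

```python
from dataclasses import dataclass, field

@dataclass
class PlaybookRule:
    """One lawyer-owned negotiation rule for a clause type (hypothetical schema)."""
    clause_type: str             # e.g. "limitation_of_liability"
    preferred_position: str      # the firm's standard language
    fallback_positions: list[str] = field(default_factory=list)  # acceptable compromises, in order
    escalate_if_no_match: bool = True  # flag the clause for lawyer review if nothing matches

# Example rule: cap liability at 12 months of fees, with a 24-month fallback.
rule = PlaybookRule(
    clause_type="limitation_of_liability",
    preferred_position="Liability is capped at fees paid in the prior 12 months.",
    fallback_positions=["Liability is capped at fees paid in the prior 24 months."],
)

def suggest_redline(clause_text: str, rule: PlaybookRule) -> str:
    """Toy redlining step: if the clause is off-playbook, propose the preferred language.
    A production system would use clause-similarity models or an LLM instead of substring checks."""
    if rule.preferred_position.lower() in clause_text.lower():
        return clause_text  # already on-playbook, no change suggested
    return rule.preferred_position

print(suggest_redline("Liability is unlimited.", rule))
```

The point of the sketch is only that a playbook is structured, reviewable data owned by lawyers, which the AI applies clause by clause rather than negotiating freeform.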
Recital is a legal tech company that utilizes AI to streamline contract management for in-house legal teams. It focuses on simplifying and accelerating the contract review process through features like clause extraction and suggestion, as well as automated contract organization and updates. Recital aims to address the challenges of growing workloads and tight deadlines faced by legal departments.
DocDraft is an AI-powered legal platform that helps small businesses and individuals draft legal documents. Users can generate customized legal documents in minutes, with the aim of making legal support affordable, accessible, and customizable. By automating document creation, DocDraft also streamlines drafting and improves efficiency for legal professionals.
Syntheia automatically turns contracts into data and delivers that data where and when it is needed. Its apps are designed to fit existing workflows: reviewing documents, building a clause bank, drafting documents and advice, and collaborating on work.
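As a rough illustration of what "turning contracts into data" can mean in practice, the sketch below shows extracted clauses represented as structured records that a clause bank could store and query. This is a hypothetical schema, not Syntheia's actual data model; the field names and the in-memory list are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ExtractedClause:
    """A structured record for one clause pulled out of a contract (hypothetical schema)."""
    contract_id: str
    clause_type: str   # e.g. "governing_law", "termination"
    text: str
    page: int

# A tiny in-memory "clause bank": contracts reduced to queryable structured data.
clause_bank = [
    ExtractedClause("MSA-001", "governing_law", "This Agreement is governed by the laws of Delaware.", 7),
    ExtractedClause("MSA-002", "governing_law", "This Agreement is governed by the laws of New York.", 9),
]

# Once clauses are data, downstream apps (review, drafting, collaboration) can filter and reuse them.
delaware_precedents = [c for c in clause_bank if "Delaware" in c.text]
print([c.contract_id for c in delaware_precedents])  # ['MSA-001']
```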
Lexis® Create+ leverages a firm's existing internal work product to deliver a powerful, personalized drafting experience in Microsoft 365. It is grounded in the firm's document management system (DMS) and authoritative LexisNexis® sources, with generative AI capabilities built in, connecting the firm's own knowledge with LexisNexis insights so legal professionals can quickly build exceptional legal documents while preserving firm confidentiality and privacy requirements.
RAND's analysis examines whether AI-generated works merit copyright protection and whether training AI models on copyrighted content violates U.S. and international law. The research highlights an emerging global divergence: Beijing courts have recognized copyright in AI-generated images that demonstrate human intellectual effort, while the U.S. approach remains uncertain pending landmark litigation such as NYT v. OpenAI. This comprehensive legal assessment provides critical insights for content creators and AI developers navigating the unresolved fair use questions that will determine billions of dollars in potential liability and the future of generative AI training practices.
RAND's policy analysis examines how AI algorithms' opacity creates fundamental privacy challenges around data collection, use, and decision-making, particularly in light of the EU AI Act's transparency requirements. The research explores regulatory approaches to address AI's lack of explainability while examining the tension between innovation and privacy protection under frameworks like GDPR and emerging U.S. state privacy laws. This authoritative government-sponsored research provides crucial insights for policymakers grappling with AI's transformative impact on privacy rights and the effectiveness of existing privacy legal frameworks.
FinTech Weekly's analysis contrasts global AI regulatory approaches: the EU AI Act's comprehensive framework with penalties of up to €35 million, China's strict government control, and Japan's flexible industry self-regulation. The piece examines Trump's pro-AI executive orders reversing Biden's 'safe and secure' AI policies, and highlights how financial services firms increasingly treat EU standards as 'best practice' despite the compliance costs. This industry perspective illuminates the political volatility around AI regulation and the challenges facing financial services firms navigating divergent international regulatory philosophies.
Banking industry analysis notes that while the EU AI Act, adopted in 2024, established the world's first comprehensive AI regulation, U.S. financial institutions face a 'fast-moving target' for compliance because regulatory frameworks remain unsettled. The assessment highlights SEC efforts to address AI conflicts of interest for investment advisors, while emphasizing that loose regulatory frameworks create significant risks if AI isn't implemented diligently. This practitioner-focused piece underscores how compliance officers must monitor evolving AI requirements to protect data safety and security amid shifting national and global regulatory concerns.
Skadden's comprehensive analysis examines how the EU AI Act will govern financial services from 2024, while U.S. regulators rely on existing frameworks and guidance rather than new AI-specific legislation. The report details critical regulatory concerns, including data quality, model risk, governance challenges, and consumer protection, noting that 79% of UK financial firms have deployed machine learning applications beyond the pilot phase. This authoritative assessment demonstrates the divergent regulatory approaches across jurisdictions and highlights industry demand for harmonized international standards as financial institutions navigate complex compliance requirements.
Cooley's analysis reveals how Trump's reversal of Biden's AI policies eliminated federal guidance on workplace wearables and algorithmic bias, yet underlying anti-discrimination laws like Title VII and ADA remain fully applicable to AI systems. The piece emphasizes that while agencies removed AI-specific guidance documents, employers still face liability for discriminatory AI outcomes, particularly as state laws like Colorado's SB 24-205 impose additional AI compliance requirements. This practical legal assessment helps employers navigate the regulatory gap between federal policy shifts and persistent legal obligations in AI deployment.