What’s Happening
Integris has published its 2026 Law Firm Trust in Technology Report, drawing on surveys of 416 law firm decision-makers and 600 law firm clients. The research documents a growing confidence gap in the legal sector: firms are spending more on technology, including AI and cybersecurity tools, but struggling to execute with enough discipline to satisfy increasingly discerning clients. Key findings include that 60% of law firms are already deploying AI across practice areas, 63% experienced a significant email-based security breach in the past year, and 85% of clients say firms should disclose when AI is used on their matters. The central tension the report exposes is not whether to invest in technology, but whether investment without governance and communication translates into anything clients actually value.
The Bigger Picture
The Governance Gap Is the Real Story
The headline numbers on AI adoption are easy to interpret as progress. Sixty percent of law firms using AI across practice areas sounds like meaningful penetration for an industry not historically known for technological agility. Look closer, though, and the picture is more complicated. Thirty percent of those same firms say AI implementation is a major challenge, 35% cite ethical and regulatory risks as a concern, and another 35% flag data privacy and security. These are not minor implementation hiccups. They are the signals of an industry that moved fast on adoption without building the underlying governance infrastructure to make that adoption defensible.
This pattern is not unique to legal. ECI Research’s 2025 AI Builder Summit survey found that 44% of enterprise AI leaders have only moderate confidence that AI agents can act autonomously without human intervention. If leaders at organizations with dedicated AI teams and substantial technical resources harbor meaningful doubts about autonomous AI, law firms operating with leaner IT functions and far higher confidentiality obligations should be treating governance as a prerequisite, not an afterthought.
The 85% of clients who say firms should disclose AI use is a striking data point in its own right. Nearly half call disclosure “extremely important.” That is not a preference. It is a threshold expectation, and firms that treat it as optional are accumulating reputational liability.
What This Means for Law Firm Technology Leaders
For IT decision-makers (ITDMs) at law firms, the Integris findings present a straightforward prioritization problem: the governance and communication layer is lagging the investment layer by a meaningful margin, and clients are already making decisions on the basis of that gap.
The cybersecurity exposure is particularly acute. A 63% email-based breach rate and 57% mobile-related breach rate are not incidental findings. They reflect systemic weakness. Yet the more damaging fact may be the one about transparency: more than half of clients say their firm has never proactively communicated about cybersecurity. In a relationship-driven industry where client data is the primary asset under management, that silence reads as indifference. Proactive communication about data protection practices, incident response protocols, and AI governance is not marketing. It is a service delivery component.
The budget and roadmap challenges cited by 33% of firms compound this. Firms that cannot articulate a coherent IT strategy internally are unlikely to communicate one credibly to clients. The managed IT services conversation the report raises is therefore sensible, though the firms that benefit most will be those that treat MSP relationships as strategic rather than purely operational. The distinction matters: a vendor managing endpoints is not the same as a partner who can help a firm build an AI acceptable use policy that satisfies both regulatory requirements and client expectations.
What Developers and Technical Teams Should Be Building
For technical practitioners inside law firms or serving them, the Integris data clarifies where to focus engineering effort.
AI observability and auditability tooling should be at the top of the list. If 85% of clients expect disclosure when AI is used, the technical systems supporting legal workflows need to log AI involvement in a way that is auditable, reportable, and communicable. That is not a default capability in most general-purpose AI tools. It requires deliberate architecture decisions.
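As a rough illustration of what matter-level AI audit logging could look like, here is a minimal Python sketch. The field names and the `log_ai_use` helper are hypothetical, not drawn from the report or any particular product; the point is that every AI interaction produces a structured, timestamped record that can later be filtered into a client disclosure report.

```python
import json
import datetime

def log_ai_use(log_path, matter_id, tool, purpose, reviewer):
    """Append one auditable record each time an AI tool touches a client matter.

    All field names here are illustrative. What matters is that each AI
    interaction yields a structured, reportable entry rather than leaving
    no trace at all.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "matter_id": matter_id,      # which client matter the AI touched
        "tool": tool,                # model or product name and version
        "purpose": purpose,          # what the AI was asked to do
        "human_reviewer": reviewer,  # who is accountable for the output
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")  # JSON Lines: easy to audit and report
    return entry

def matter_ai_report(log_path, matter_id):
    """A client disclosure report is then just a filter over the log."""
    with open(log_path) as f:
        return [
            record for line in f
            if (record := json.loads(line))["matter_id"] == matter_id
        ]
```

Append-only JSON Lines is a deliberately simple choice here; a production system would likely want tamper-evident storage, but the architectural decision it illustrates stands: AI involvement is recorded at the moment of use, not reconstructed after a client asks.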
The data privacy and security concerns cited by law firm leaders (35% flagging this explicitly) point directly to a second requirement: AI platforms deployed in legal contexts need clear data residency controls, access logging, and strong separation between client matters. Firms currently relying on broadly available consumer-grade AI tools are likely operating with meaningful exposure they have not fully assessed. ECI Research’s 2025 AI Builder Summit survey found that half of enterprise AI leaders say their organizations still rely primarily on public AI tools like ChatGPT or Copilot. Law firms occupy one of the most data-sensitive verticals in the enterprise market. Relying on public tools without additional governance layers is a risk posture that should be explicitly evaluated and documented, not assumed to be acceptable.
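One way to make "separation between client matters" concrete is to gate every AI request through an explicit per-matter policy check before any document leaves the firm's boundary. The sketch below is a toy model under assumed names (`MatterAccessPolicy`, `gate_ai_request`, the region labels); it is not any vendor's API, but it shows the shape of the control: AI processing is an explicit opt-in per matter, constrained by team membership and data residency, and every decision is logged.

```python
from dataclasses import dataclass, field

@dataclass
class MatterAccessPolicy:
    """Toy per-matter access wall. A document may enter an AI pipeline only if
    the requesting user is on the matter's team, the matter has opted in to AI
    processing, and the processing region satisfies residency constraints.
    All names are illustrative, not a real product API."""
    team: set = field(default_factory=set)             # users cleared for this matter
    ai_processing_allowed: bool = False                # explicit opt-in, never a default
    allowed_regions: set = field(default_factory=set)  # data residency constraint

    def may_process(self, user, region):
        return (user in self.team
                and self.ai_processing_allowed
                and region in self.allowed_regions)

# Hypothetical matter registry for illustration.
policies = {
    "M-100": MatterAccessPolicy(team={"alice"}, ai_processing_allowed=True,
                                allowed_regions={"us-east"}),
    "M-200": MatterAccessPolicy(team={"bob"}),  # AI stays off unless enabled
}

def gate_ai_request(matter_id, user, region):
    policy = policies.get(matter_id)
    allowed = bool(policy) and policy.may_process(user, region)
    # Every decision is recorded, allowed or denied, so access is auditable.
    print(f"{user}/{matter_id}/{region}: {'ALLOW' if allowed else 'DENY'}")
    return allowed
```

The default-deny posture is the design point: a matter with no policy, a user outside the team, or a disallowed region all fail closed, which is the opposite of what a consumer-grade AI tool gives a firm out of the box.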
The cybersecurity picture points toward email security posture and mobile device management as near-term engineering priorities. The breach rates suggest current controls are insufficient. Given that more than half of clients report never receiving cybersecurity communications from their firms, there is also a case for building client-facing security transparency into account management workflows.
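Client-facing security transparency can be a standing artifact rather than an ad hoc email. As a sketch, assuming a firm already tracks a handful of internal security metrics (the field names below are invented for illustration), a quarterly client summary can be generated directly from that data:

```python
def client_security_summary(client_name, quarter, metrics):
    """Render a short client-facing security update from internal metrics.

    The metric names are hypothetical. The design point is that transparency
    becomes a repeatable, generated deliverable in the account management
    workflow, not a one-off reaction to a client's question.
    """
    lines = [
        f"Security update for {client_name} ({quarter})",
        f"- Phishing simulations run this quarter: {metrics['phishing_tests']}",
        f"- Staff completing security training: {metrics['training_pct']}%",
        f"- Incidents affecting your matters: {metrics['client_incidents']}",
        f"- Incident response plan last tested: {metrics['ir_last_tested']}",
    ]
    return "\n".join(lines)
```

Even a summary this small addresses the silence the report documents: a client who receives it four times a year can no longer say the firm has never proactively communicated about cybersecurity.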
Looking Ahead
AI Governance Will Become a Client Procurement Criterion
The directional trend here is clear. Client expectations around AI transparency are not softening. The 85% disclosure expectation documented in the Integris report reflects a broader shift in how enterprise buyers evaluate professional services firms, and legal clients are becoming enterprise buyers in their sophistication. Within two to three years, AI governance documentation will shift from a differentiator to a baseline expectation in client RFP processes and outside counsel guidelines. Firms that build this capability now, including acceptable use policies, disclosure frameworks, and audit trails, will have a material advantage. Those that wait will find themselves retrofitting governance into workflows designed without it, which is considerably harder.
Managed Services Will Absorb the Execution Gap
The report’s data on managed IT services demand reflects something structural. Law firms are not technology companies. Their comparative advantage is legal expertise, not infrastructure management or AI engineering. The 33% citing budget and roadmap challenges and the 28% citing lack of legal-specific expertise are not describing temporary capability gaps. They are describing a durable mismatch between what firms need to execute technically and what they can staff internally. The MSP market serving legal will consolidate around providers who can offer legal-specific compliance expertise alongside general technology services. Generic managed services providers without vertical depth will lose ground to those who understand HIPAA-adjacent data handling, bar association ethical rules around technology use, and the specific audit requirements that legal AI governance demands. The firms that identify and contract with those partners in the near term will close the client confidence gap faster than those still searching for a coherent IT roadmap two years from now.
