Navigating the Future with the SRA’s Regulatory Approach
The rise of Artificial Intelligence (AI) in the legal sector has been impossible to ignore. Alongside its opportunities, AI has brought important regulatory challenges that firms must navigate vigilantly. The Solicitors Regulation Authority (“SRA”) recently delivered a webinar, hosted by the Society of Asian Lawyers (“SAL”), titled ‘AI Policy and Regulation’ on 4 February 2026, outlining its developing framework to enable firms and individuals to incorporate AI tools into their practice safely and ethically.
Generative AI vs Traditional AI
Olivier Roth, the SRA’s Policy Manager, highlighted that the distinction between Generative AI (“GenAI”) and traditional AI is critical to understand before leveraging AI tools. GenAI models, such as Grok and ChatGPT, are designed to simulate creativity, using pattern recognition from large data sets to create new content such as text, images or other forms of media. They can be incredibly flexible and versatile, but are susceptible to issues such as the black box problem: whilst we can see what we have input into a GenAI model, and what its output is, it is extremely difficult or impossible to see how the model has arrived at that output.
Traditional AI models, on the other hand, are more limited, often relying upon smaller amounts of data and providing more predictable outcomes. They can prove more useful for tasks that demand consistency and rule-based decision making.
How are Clients Using AI?
SRA-commissioned research, due to be published in April 2026, found that roughly a third of the public has used GenAI to help identify legal issues. Many individuals have been taking a hybrid approach, seeking the advice of both a solicitor and a GenAI model. Whilst this has enhanced access to justice and may reduce costs, the risks to accuracy and confidentiality are cause for concern for the SRA.
Risks of AI in Legal Services
GenAI models are prone to “hallucinations”, and have been known to produce fake cases and incorrect information which has found its way into court bundles and client advice. This is a step backwards for public trust, highlighting that human oversight is imperative when using AI tools. Furthermore, should a client input any privileged information into a public-facing AI tool, the tool may use this data to train the next iteration of its model. This may waive legal privilege and poses a complex challenge for solicitors. The SRA advises that clients are thoroughly briefed on these risks before any AI system is used.
The SRA’s Approach to AI Regulation
To address these risks, the SRA has adopted an outcome-based regulatory approach. This means that firms are not required to adhere to prescriptive rules, but must meet the same professional standards that have always been expected of them – whether they use AI tools or not.
The SRA highlights some key principles in its guidance:
- Explainability – firms must be transparent in their AI usage, demonstrating what data has been used, what oversight measures are in place and why they have used AI for those tasks.
- Data Protection – firms must not put any identifiable client data into AI tools without informed consent. No raw client data should ever be put into public AI tools.
- Ethical use – AI tools should support professional judgement, nothing more. Firms and solicitors are accountable for any outputs produced by AI systems.
Resources and Best Practice
The SRA is set to release two resources in the coming months, providing guidance on AI policy: an FAQ document, ‘GenAI FAQ’, and a Good Practice Note on AI use and client data. The aim of these resources is to ensure responsible AI integration into the legal sector, supporting innovation whilst protecting the interests of clients.
In the meantime, the SRA encourages firms to document all AI use, conduct regular risk assessments and ensure staff are adequately trained in responsible AI use.
Further information about navigating innovation and technology can be found on the SRA’s website:
Additional guidance can also be found on The Law Society’s website:
Generative AI – the essentials | The Law Society
Looking Ahead
As AI continues to disrupt and transform the legal sector, solicitors and firms must ensure they meet the regulatory standards and public expectations of AI use. Guided by the SRA’s outcome-based approach, the future of legal innovation is undoubtedly promising, but it demands diligent and careful steps: maintaining the standards that have always formed the foundation of legal regulation, and ensuring that, when using AI and GenAI tools, solicitors and firms act in the best interests of their clients, provide a competent service and remain accountable.
Matthew Singh
Paralegal, Spinal Injury Team
Bolt Burdon Kemp