Legal considerations for AI use and development

The rise of artificial intelligence (AI) is gathering pace, and more and more businesses are adopting forms of AI into their working practices.

There is no set definition of AI, but it can be understood as technologies that work together to enable machines to understand, learn and act with human-like levels of intelligence. Machine learning and natural language processing are good examples of AI that are now fairly commonplace.

The types of AI that are used frequently by businesses include:

  • Translation software
  • Chatbots
  • Smart digital assistants like Siri and Alexa
  • Voice-to-text features like smart Dictaphones
  • Security surveillance
  • Spam filters
  • Fraud detection
  • Generative AI for content creation (like Jasper and ChatGPT)

While AI can bring real efficiency gains, there are several legal considerations to take into account in the use and development of AI.

Legal considerations for using AI

Each piece of AI software, including those listed above, carries its own legal considerations. But as a broad-brush approach, these are the things to consider from a legal perspective when you are thinking about introducing AI into your business:

  • Privacy laws and data breaches: We discussed the problems with using sensitive data on open AI systems like ChatGPT in our recent blog. It can lead to inadvertent data breaches as the technology ‘learns’ from its inputs. Beyond that, and depending on the type of AI you’re thinking about using, other privacy issues to consider include informed consent, surveillance, and rights of access to personal data.
  • Intellectual property issues: When it comes to IP, the law is lagging behind the pace of AI development. It’s not yet clear whether AI inventions should be considered prior art, or who owns AI-generated works and products. Check whether the Terms & Conditions attached to the AI you’re using specifically address these issues, and write them into your agreement if they’re absent.
  • Liability for damage: Who is liable if the technology does something unexpected? Take, for example, damage caused by a partially AI-operated drone. Many parties are involved in creating the drone, and liability is difficult to establish. Again, check whether your T&Cs specifically address this.

Considerations involved in developing AI

As the technology develops, legal and compliance minds turn to the issue of regulation. How will we regulate this rapidly growing market?

The ICO recently published updated guidance on AI and data protection, and in March 2023 the Government published its white paper setting out its pro-innovation approach to regulating AI. It signalled an intention for light-touch regulation to create a ‘thriving AI ecosystem’ in the UK.

The paper details the Government’s approach to regulating the use of AI rather than the technology itself. The onus will be on the regulators to design and implement proportionate responses to high-risk uses.

There will be ‘minimal anticipated statutory intervention’, so we won’t see swathes of new law going through Parliament to regulate the industry. But guidance and policy are likely to follow as the regulatory approach evolves.

Interestingly, the UK’s approach is quite different from the rest of Europe, which is taking a much more rigid approach to regulation. By way of illustration, ChatGPT was temporarily banned in Italy in March 2023.


If you’re thinking about adopting AI into your business, but you’re concerned about the legal risks, please give us a call. We can help protect your business by reviewing your T&Cs, and updating your agreements to cover any potential risks.
