Today, one of the most important European regulations of recent years begins to apply: the AI Act, or Artificial Intelligence Act.
Although much of the media debate has focused on large technology companies or the startups that build AI models, the reality is that this regulation affects any company that uses artificial intelligence in its business.
Yes, also yours.

You may be using a tool like ChatGPT or Claude to write texts, automate responses or generate content. You may be part of a company that integrates AI solutions into its products or internal processes. In any of those cases, the AI Act affects you as of today.

What exactly changes?

As of August 2, 2025, companies that use what are known as general-purpose artificial intelligence (GPAI) models – that is, models that can serve many different tasks – must comply with a series of obligations established by the European Union.

These obligations are not arbitrary. They are intended to ensure that artificial intelligence is used in a safe, transparent and responsible manner.
Among other things, companies are required to:

  • Disclose information about the data used to train the AI, especially if it includes copyrighted content.
  • Clearly explain what the model does, how it works and what risks it may pose.
  • Put real cybersecurity measures in place and demonstrate that the system is protected against malicious use.
  • Conduct a risk assessment when AI is used in contexts that may affect people.

And if the model you are using is particularly powerful (for example, because it was trained with a very large amount of computing power), you will need to take additional steps.

What if my company only uses AI, but does not develop it?

That is one of the most common questions. And the answer is clear: the law also applies to you.

The AI Act regulates not only the companies that create the technology, but also those that use it.
If you integrate AI into your products, services or processes – even if you didn’t develop it yourself – you are within the scope of this law.

And it doesn’t matter whether your company is based in Europe or not. If you sell or provide services in the European market using AI, you have to comply with the regulation.

What if you were already using artificial intelligence before today?

In that case, you have a little more leeway. The European Union has provided an adaptation period: if the AI model you are using was already on the market before August 2, 2025, you have until August 2, 2027 to become fully compliant.

But if you start using AI now or in the future, you must be compliant from day one.

In other words, if your company decides today to incorporate a model such as Claude or ChatGPT into its processes, the rules apply from the very first moment.

Is there a way to facilitate compliance?

Yes, and this is one of the pieces of good news.

On July 10, 2025, the European Commission published a Code of Practice for general-purpose AI that you can adopt on a voluntary basis.
This code sets out clear, actionable recommendations for demonstrating that your company makes responsible use of AI:

  • How to be more transparent with the data you use.
  • How to protect the copyright of the content you work with.
  • What minimum security measures you need.

In addition, adhering to this code brings advantages: it can reduce your administrative burden, serve as evidence of compliance with the authorities, and show your customers that you are committed to the ethical and professional use of AI.

What do I do now? Where do I start?

You don’t have to be a tech company or have a gigantic legal team to start complying.
In fact, there are some pretty simple steps you can put in place this very week:

This week:

  • Make a list of the artificial intelligence tools you are using.
  • Ask your suppliers whether these models are considered “general-purpose” (GPAI).

This month:

  • Ask for technical documentation on these models. You don’t need to understand everything, but you do need to keep this information.
  • Evaluate with your team how you use AI, whether there are associated risks, and whether you have sufficient protection against cyber-attacks.

In the coming months:

  • Consider adhering to the Code of Practice. It is not mandatory, but it can help you a lot.
  • And above all, stay informed: this is just the beginning. From August 2026, penalties can start to be imposed on those who have not complied.

So… is this a threat or an opportunity?

It depends on how you approach it.
Many companies are already adapting, and not just out of fear of fines: demonstrating that you use AI responsibly can become a competitive advantage.

  • It allows you to differentiate yourself from the competition.
  • It reinforces your customers’ trust.
  • And it opens the door to new European programs supporting innovation and digitalization.

Sources:

https://artificialintelligenceact.eu/es/gpai-guidelines-overview
https://digital-strategy.ec.europa.eu/es/news/eu-rules-general-purpose-ai-models-start-apply-tomorrow-bringing-more-transparency-safety-and