
Using AI in your company while safeguarding data protection: expert advice for Swiss SMEs

The use of artificial intelligence (AI) in Swiss SMEs is growing rapidly. Two out of three Swiss SMEs are already experimenting with AI or using it productively – but the majority of companies do not yet have clear rules on how to deal with AI, according to this year’s SME labor market study by AXA (in German). Attorney and expert in IT and data protection law, Marco S. Meier, explains how you as an SME can handle AI securely and minimize the risk of data protection breaches.


Why is data protection critical when dealing with AI?

The majority of SMEs see opportunities for efficiency and growth in the use of AI, while concerns are diminishing. Along with this development comes the challenge of ensuring that personal data is entered into and processed by AI tools in compliance with data protection law.

The revised Federal Act on Data Protection (FADP), in force since September 2023, requires transparency in the use of data as well as data security. The law does not explicitly mention AI, but according to the Federal Data Protection and Information Commissioner (FDPIC), the same requirements must be met when using AI. Breaches can result in heavy fines and reputational damage.

The problem: Only one third of Swiss SMEs have clear rules for using AI, according to the 2025 SME labor market study. In the majority of companies, employees decide for themselves which tool to use for what purpose and what data to enter – without being aware of the potential consequences. 

Expert advice on AI and data protection law

Marco S. Meier, attorney at law, MLaw, CIPP, qualified computer scientist (EFZ), explains in this interview where the pitfalls lie under data protection law, who bears the consequences of a data protection breach and what those consequences look like, and what you as an SME can do to minimize the risk of data protection breaches.

  • Marco S. Meier

    Marco S. Meier, attorney-at-law, specializes in Information and Communication Technology (ICT) law. He advises and represents national and international clients in all areas of IT, technology, data protection, cyber security and intellectual property law as well as in technology-related compliance issues, such as the use or development of artificial intelligence (AI).

What pitfalls under data protection law are there when employees use AI services?

In practice, various data protection challenges arise when using AI services. The biggest danger when employees use AI services is that they do not know what data they are allowed to enter into a specific AI tool. An essential measure to prevent data breaches is for the company to thoroughly review AI services before releasing them for use. This makes it possible to develop clear, data protection compliant instructions for dealing with AI services and to communicate them to employees.

It is therefore advisable to give employees clear instructions on which services may be used under which conditions and, above all, what data may be entered into the respective AI service. For example: may personal or confidential data be entered when using the AI service, or not?

Another pitfall is that employees resort to their own, privately used AI tools. These are often free versions that have not been checked or approved by the company, and data protection violations can quickly arise if personal data is entered into them.

To prevent these pitfalls, it is imperative that the company carries out a careful evaluation and risk analysis of the AI services, sets clear guidelines for internal use and provides training.

  • Comprehensive protection with AXA

    Find out what insurance you need so that your work is well protected.

    Go to the insurance check

Who is responsible if someone in the company violates data protection law through the use of AI?

If a company uses an AI service of a third-party provider, it acts as a “controller” in terms of data protection law. As the controller, the company is obligated to ensure compliance with the Data Protection Act.

The following obligations must be implemented by the company:

  • ensuring transparency by making information available (so-called duty to provide information)
  • concluding a data processing agreement when using a third-party provider
  • ensuring an appropriate level of data protection when transferring data abroad
  • where appropriate, conducting a data protection impact assessment
  • ensuring data security

Breaches of these obligations, with the exception of the data protection impact assessment, are punishable and can therefore lead to sanctions. However, unlike under European law, the penalty is directed not at the company but at the responsible individual within the company.

What are the consequences of a breach of data protection law?

If the Data Protection Act is breached, the person responsible in the company faces a fine of up to CHF 250,000. In exceptional cases, the company itself may be fined instead, namely if identifying the responsible individuals would involve disproportionate effort. In this case, the company faces a fine of up to CHF 50,000.

To date, I am not aware of any fines that have come close to these upper limits. Data protection breaches resulting in sanctions are still rare.

In addition to fines, the Federal Data Protection and Information Commissioner (FDPIC) has had extended powers since the revision of the Swiss Data Protection Act. The FDPIC is authorized to initiate investigations and to order that certain data processing activities, such as the use of AI services, be adjusted, suspended or discontinued. As a rule, the FDPIC acts informally before opening a formal investigation by drawing the controller’s attention to a data protection breach and recommending adjustments. These measures serve to raise the awareness of the company and the persons responsible with regard to the compliant use of AI in data processing and to minimize potential risks.

  • Cybercrime: Does your company have legal protection?

    Cyber attacks, data loss, extortion: Cybercrime also raises legal questions.

    Find out more

How can breaches of data protection law be prevented? 

Ensuring the data protection compliant use of AI tools in SMEs begins with raising awareness. To prevent breaches of data protection law, staff should receive regular training and information on the risks involved in handling personal data and AI tools. Raising awareness ensures, on the one hand, that data protection issues are incorporated into new projects at an early stage, such as the introduction of an AI service, and, on the other, that employees recognize the risks in their day-to-day handling of personal data and AI tools.

In addition to an internal data protection guideline that governs the general handling of personal data in the company, a specific AI guideline is also recommended in practice, on the basis of which the governance structure for dealing with AI can be created. The AI guideline specifies, for example, which AI services may be used with which data (e.g. personal data, particularly sensitive data, confidential data, etc.) and how the results generated by the AI service are to be handled. Ethical principles in the company (e.g. on issues such as transparency, fairness, non-discrimination or human oversight) should also be enshrined in it.
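An AI guideline of this kind can even be made partly machine-checkable. As a purely illustrative sketch, with hypothetical tool names and data classes that are my own assumptions rather than recommendations, an internal allowlist might map each approved AI service to the most sensitive data category permitted for it:

```python
# Illustrative sketch of an internal AI allowlist, as an AI guideline
# might define it. Tool names and data classes are hypothetical examples.

# Data classes ordered from least to most sensitive
DATA_CLASSES = ["public", "internal", "personal", "sensitive"]

# Hypothetical allowlist: each approved tool and the most sensitive
# data class that may be entered into it
ALLOWED_TOOLS = {
    "translation-tool-enterprise": "personal",   # DPA in place
    "chat-assistant-business": "internal",       # no personal data
    "code-helper-free": "public",                # unvetted free tier
}

def may_enter(tool: str, data_class: str) -> bool:
    """Check whether a given data class may be entered into a tool."""
    if tool not in ALLOWED_TOOLS:
        return False  # unapproved tools are blocked by default
    limit = ALLOWED_TOOLS[tool]
    return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(limit)

print(may_enter("chat-assistant-business", "personal"))     # False
print(may_enter("translation-tool-enterprise", "personal")) # True
print(may_enter("unknown-tool", "public"))                  # False
```

A check like this could sit behind an internal portal, but it complements rather than replaces the evaluation, training and review obligations described above.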

What tips do you have on how to use AI (tools) for SMEs in compliance with data protection regulations?

To ensure that AI tools are used in compliance with data protection requirements, key points must be clarified before an AI tool is used:

  • Carry out a comprehensive risk analysis that also considers data protection compliance aspects, e.g. on the basis of an extended data protection impact assessment.
  • Carefully review the AI provider and properly prepare or review the contractual basis, even if negotiating the contractual terms appears difficult.
  • Identify contractual risks and, where necessary, counteract them with internal guidelines or technical means and restrictions.

In addition to the legal risks, it is advisable to consider the technical aspects when using AI tools in order to identify potential risks and avoid data protection breaches.

Measures for the data protection compliant use of AI tools in the company

  • Raising employee awareness and training: Regular training ensures that employees are aware of risks, use AI services responsibly, and that data protection is incorporated into projects from the outset.
  • Creation of a specific AI policy: Create clear rules – with guidelines for handling data, AI-generated results and ethical guidelines.
  • Carrying out comprehensive risk analyses: Review the AI providers and contracts as well as the implementation of technical security measures before deploying the AI service in order to identify and mitigate risks early on.

Summary: Data protection-compliant application of AI in the company

In conclusion, the use of artificial intelligence by Swiss SMEs presents numerous opportunities, but also challenges in terms of data protection law. Handling AI in a data protection compliant manner calls for targeted measures at the company level: creating clear guidelines, raising employee awareness, training staff and conducting comprehensive risk analyses are decisive steps toward compliant AI use.

Given the growing importance of AI, it is essential for Swiss SMEs to set the course for dealing with this technology in a responsible and legally compliant manner. Data protection and AI must go hand in hand to ensure long-term success and social acceptance.

Companies that are proactive in this area can not only minimize potential risks, but also increase the trust of customers, partners and employees. An up-to-date and clearly formulated privacy policy is a key element in creating transparency and trust. Ultimately, a holistic and responsible approach to AI in terms of data protection is a key factor for the future viability of Swiss SMEs.

Frequently asked questions about data protection when using AI

Do we as a Swiss SME have to observe special data protection rules if we use AI?

Yes. In addition to the Swiss FADP, the stricter requirements of the EU GDPR and the EU AI Act may also apply – depending on the customers and area of activity.

What do the GDPR and the EU AI Act govern – and to what extent does this affect us as a Swiss SME?

The GDPR governs data protection and the processing of personal data in the EU. Swiss companies must also comply with these rules when they process data of individuals in the EU, for example when offering them goods or services.

The EU AI Act specifically governs the use of AI and is intended to ensure that high-risk AI systems operate transparently, securely and in compliance with data protection requirements. If Swiss SMEs use AI services that affect people in the EU or are hosted in the EU, they must also comply with these rules.

What does the Swiss-US DPF mean for Swiss companies?

Since September 15, 2024, the Swiss-U.S. Data Privacy Framework has allowed personal data to be transferred to the US without additional protection mechanisms, provided the US provider is certified under the framework. Data can therefore be entered into such providers’ tools if all other requirements are met.

Can we enter customer data into AI tools like ChatGPT?

Only if you use a business version with a data processing agreement (DPA) and the provider demonstrably complies with the FADP, for example through certification under the Swiss-U.S. DPF. When in doubt, do not enter confidential or sensitive data into AI tools.

What happens to our data when it is entered into AI tools?

Many providers use inputs to train their models. Reliable tools offer opt-out options, especially in the Pro/Enterprise version. Check the relevant conditions beforehand and select this option whenever possible to protect your data.
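The advice above to keep personal data out of AI tools can be partly automated. As a minimal sketch, assuming a few simple regular-expression patterns that will certainly not catch all personal data (names, for instance, slip through), text could be redacted before it is entered into an AI service:

```python
import re

# Minimal redaction sketch: strip obvious personal identifiers from text
# before it is entered into an AI tool. The patterns below are simplistic
# illustrations, not a complete or reliable detection of personal data.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s]{8,}\d"),
    "AHV": re.compile(r"756\.\d{4}\.\d{4}\.\d{2}"),  # Swiss AHV number format
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Ms. Example at anna@example.ch or +41 44 123 45 67."
print(redact(sample))  # Contact Ms. Example at [EMAIL] or [PHONE].
```

Automated redaction of this kind is only a safety net; it does not replace the organizational measures, provider reviews and contractual safeguards discussed in this article.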