The Legal Tech Blog

High-Risk AI in Companies – Obligations and Risks under the EU AI Act 

Written by STP Group | Nov 19, 2025 7:47:19 AM

A New Legal Framework for Critical AI Applications  

Are your AI systems ready to meet the strict requirements of the EU AI Act? High-risk AI systems are subject to particularly stringent regulations. The greater the risk to people and society, the stricter the rules. Mistakes or oversights can lead not only to hefty fines but also to significant reputational damage. For providers and operators of such systems, it is therefore crucial to know their obligations precisely and implement them in time. This article explains which AI applications are classified as high-risk, what obligations you face, and how to act in compliance with the law.  

 

What Does High-Risk AI Mean According to the EU AI Act?  

High-risk AI includes systems that can have significant impacts on fundamental rights, safety, or human life. Examples include:  

  • AI-supported candidate screening in HR  
  • Diagnostic software in healthcare  
  • Risk assessments in the criminal justice system  
  • Automated creditworthiness checks in the financial sector  

Whether the technology is visible to the public or runs in the background is not decisive. Internal tools also fall into this category if they prepare or influence decisions with serious consequences for individuals. Companies should therefore review and document which systems they operate and whether those systems must be classified as high-risk AI. This documentation serves as essential evidence for the supervisory authorities.  
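
The AI Act does not prescribe what this internal documentation has to look like. As a rough illustration only, here is a minimal Python sketch of one entry in an internal AI system inventory; the field names and the example system are hypothetical and would need to be aligned with your own classification process.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AISystemRecord:
        """One entry in an internal AI system inventory documenting a classification decision."""
        name: str                  # internal system name
        purpose: str               # intended purpose, as required for classification
        affects_individuals: bool  # does it prepare or influence decisions about people?
        high_risk: bool            # outcome of the classification review
        rationale: str             # why the system was (or was not) classified as high-risk
        reviewed_on: date = field(default_factory=date.today)

    # Example entry: an internal screening tool that is never visible to the public
    # but influences hiring decisions, and is therefore treated as high-risk.
    inventory = [
        AISystemRecord(
            name="cv-screening-assistant",  # hypothetical system name
            purpose="Pre-rank incoming job applications for HR",
            affects_individuals=True,
            high_risk=True,
            rationale="Influences employment decisions (Annex III use case)",
        )
    ]

Recording the rationale alongside the classification itself makes it much easier to show an authority later why a system was, or was not, treated as high-risk.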

Key Compliance Obligations  

If your AI systems are classified as high-risk, extensive obligations apply:  

  • Technical documentation: Describes the purpose, functionality, architecture, and training methods of the AI system; it must be kept up to date and versioned. 
  • Data quality: Training, validation, and test data must be representative, accurate, and checked for bias. The origin and use of the data, as well as the results of bias tests, must be verifiable.  
  • Human oversight: A person must be able to monitor the system and intervene. This includes technical intervention options, clear responsibilities, and trained personnel.  
  • Logging: System activities must be logged completely and in a tamper-proof way, and the logs must be available at any time for audits by supervisory authorities; a minimal sketch of one possible approach follows this list.  
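
How logging is implemented is left to the company; the AI Act requires completeness and tamper resistance, not a specific format. As a rough illustration, here is a minimal Python sketch of an append-only audit log whose entries are hash-chained so that any later modification becomes detectable. The file name, field names, and the example event are hypothetical.

    import hashlib
    import json
    from datetime import datetime, timezone

    class AuditLog:
        """Append-only event log whose entries are hash-chained for tamper evidence."""

        def __init__(self, path: str):
            self.path = path
            self._last_hash = "0" * 64  # genesis value for an empty log

        def record(self, event: dict) -> str:
            """Append an event and return the hash that seals it."""
            entry = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "event": event,
                "prev_hash": self._last_hash,
            }
            # The hash covers the timestamp, the event, and the previous hash,
            # so altering any earlier entry breaks the chain.
            payload = json.dumps(entry, sort_keys=True).encode("utf-8")
            entry["hash"] = hashlib.sha256(payload).hexdigest()
            with open(self.path, "a", encoding="utf-8") as f:
                f.write(json.dumps(entry) + "\n")
            self._last_hash = entry["hash"]
            return entry["hash"]

    # Example: record one automated decision for later audit (all values are made up).
    log = AuditLog("ai_decisions.jsonl")
    log.record({
        "system": "credit-scoring-v2",
        "input_id": "application-4711",
        "decision": "rejected",
        "model_version": "2.3.1",
        "human_review": False,
    })

Chaining each entry to the hash of the previous one means a single stored checkpoint hash is enough to verify that nothing in between was altered or deleted; in practice, such logs are complemented by retention policies and access controls so auditors can be granted read access on request.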

Roles and Responsibilities in the Company  

The EU AI Act distinguishes between different roles, each with specific obligations: 

  • Provider: Develops or trains the AI and is usually responsible for technical documentation and conformity assessment.  
  • Deployer: Uses the system in business operations and must ensure compliant use and inform users of key features.  
  • Distributor: Distributes or markets the technology and is responsible for proper labeling.  
  • Importer: Brings systems from third countries into the EU and may only do so if conformity is already proven.  

Often, a company assumes multiple roles simultaneously, making a clear internal allocation of responsibilities essential. 

Consequences of Non-Compliance  

Ignoring or delaying compliance carries serious risks:  

  • Fines of up to €15 million or 3% of global annual turnover for violations of high-risk obligations  
  • Up to €35 million or 7% of turnover for prohibited practices  
  • Up to €7.5 million or 1% of turnover for supplying incorrect, incomplete, or misleading information to authorities  
  • Risk of unannounced audits that may lead to a temporary system shutdown  

 

Audit Preparedness as a Competitive Advantage  

Complete and well-structured documentation is the best defense against fines and operational interruptions. Keep all relevant evidence ready:  

  • Classification documents  
  • Complete technical documentation  
  • Data quality and bias test reports  
  • Human oversight regulations  
  • System activity logs  

Internal audit simulations strengthen compliance awareness, signal professionalism to regulators, and can help avoid formal objections.  

Conclusion: Using Legal Certainty as an Opportunity  

High-risk AI sits at the core of the EU AI Act's strictest provisions. For companies, this means technological innovation must go hand in hand with legal precision. Early investment in documentation, data quality, and human oversight protects you from high fines and operational risks. At the same time, it builds trust with customers, partners, and investors. In an increasingly regulated digital economy, that trust is a decisive competitive advantage.  

________________________________________________________________________________________________________________________________

More about the impact of the EU AI Act: our blog series at a glance (coming soon).

Download our free checklist and check whether your systems meet the requirements.