
The EU Artificial Intelligence Act is finally here. For device manufacturers, it marks a seismic shift in how AI-enabled products are developed and deployed.

But failing to comply could mean fines of up to €35 million.

Designed to safeguard EU citizens while encouraging responsible innovation, the Act requires that AI systems placed on the EU market are safe, transparent, and non-discriminatory.

The first major enforcement deadline passed in February 2025, prohibiting AI systems deemed an unacceptable risk to people’s safety and fundamental rights. However, it’s the looming August 2026 deadline that’s set to upend the way you design, document, and deploy embedded AI applications for high-risk use cases.

In this blog, we’ll unpack the need for harmonised AI legislation, what it means for your organisation, and how compliance can help drive competitive advantage.

Why does Europe need standardised AI regulation?

Current laws don’t adequately address the complex challenges posed by high-risk AI systems, such as their potential to harm safety, privacy, and fundamental rights.

Until now, decisions about how AI models are trained, validated, and monitored have been left largely to the discretion of individual industries. For example, AI-embedded medical devices in the EU are subject to the Medical Devices Regulation (MDR) and In-Vitro Diagnostic Medical Devices Regulation (IVDR). However, neither includes comprehensive AI guidance.

Without a standardised regulatory framework for AI, manufacturers developing AI-embedded devices risk:

  • Suffering financial and reputational damage due to unforeseen harm to users
  • Compromising product quality and user safety due to unclear testing rules
  • Failing certifications due to insufficient or inconsistent documentation
  • Delaying time-to-market due to legal ambiguities and global restrictions
  • Releasing AI features that may later be banned or require costly redesigns

In response, the EU created its AI Act to promote safe, accountable, and transparent AI development and deployment across Europe. Here’s how it works.

What the EU AI Act means for your AI systems

As the first-ever comprehensive legal framework for AI developers and deployers, the EU AI Act introduces specific rules for AI use across four risk categories: unacceptable, high, limited, and minimal.

Unacceptable risk relates to harmful AI uses that contravene EU values and violate fundamental rights. Such applications, including social scoring, predictive policing, and real-time biometric identification in public places, are banned.

High-risk AI use cases are far less insidious and, in many cases, contribute positively to social and economic progress—such as maximising surgical efficiency and minimising human error. However, without strict rules in place, they can still pose a serious risk to people’s health and safety.

That’s why, as of 2 August 2026, high-risk AI systems listed in Annex III must comply with the EU AI Act’s strict provisions. This includes AI-based components in:

  • Critical infrastructure, such as transport, including autonomous vehicles and passenger safety monitoring systems
  • Medical devices, such as robotic surgeon tools and therapeutic aids that help accelerate recovery through adaptive decision making
  • Civilian-use defence and security systems, such as public biometric detection and threat detection tools
  • Aerospace devices, such as flight control systems, aircraft health monitoring platforms, and predictive maintenance tools

Limited-risk AI uses, such as chatbots and generative audio/visual content, face transparency rules to ensure users understand they are interacting with an AI-powered application. Minimal-risk uses, such as spam filters, carry no new obligations under the Act.

Your AI systems are exempt from the regulation entirely if they’re used exclusively for:

  • Military and national security applications, as regulatory responsibilities fall to individual EU member states
  • Research, development, and prototyping activities that occur internally before an AI system is put into service or deployed to market
  • Personal non-professional purposes, such as hobbyist robotics projects and smart home experimentation
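To pull the four risk tiers together, here’s a minimal sketch in Python of how a manufacturer might tag AI features in an internal register. The feature names and tier assignments below are hypothetical illustrations, not legal determinations; classifying a real system means applying the Act’s own criteria.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright, e.g. social scoring
    HIGH = "high"                  # Annex III uses: strict obligations from 2 Aug 2026
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # no new obligations, e.g. spam filters

# Hypothetical feature names and tier assignments, for illustration only.
ai_features = {
    "passenger_safety_monitor": RiskTier.HIGH,   # critical infrastructure (Annex III)
    "surgical_assist_planner": RiskTier.HIGH,    # medical device component
    "support_chatbot": RiskTier.LIMITED,         # must disclose that it's AI
    "firmware_log_spam_filter": RiskTier.MINIMAL,
}

for name, tier in sorted(ai_features.items()):
    print(f"{name}: {tier.value}")
```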

What does the EU AI Act mean for device manufacturers?

Predictable certification is a cornerstone of device software development, especially in sectors where safety, reliability, and regulatory compliance are critical.

Industries such as medical, aerospace, and industrial systems already rely on established standards like IEC 62304 (medical software), DO-178C (aerospace), and IEC 61508 (industrial safety systems). These frameworks support audit approval, secure market access, and underpin a culture of safety and accountability. However, they do not yet comprehensively address AI-specific concerns such as data bias, transparency, explainability, or human oversight.

The EU AI Act is designed to complement these sector-specific regulations by introducing targeted rules for high-risk AI systems. For device manufacturers developing or integrating AI-based functionality—such as diagnostic tools, autonomous navigation, or predictive maintenance—this means complying with both existing sector standards and new AI-specific legal obligations.

You may be thinking that more rules will mean more bureaucracy to navigate and processes to implement. But let’s consider what you stand to gain:

  • More time to innovate and drive business growth; less time navigating uncertain legal expectations and redesigning features mid-development
  • Simplified certification, with clearly defined requirements for data, documentation, transparency, and testing, so you know exactly what auditors expect
  • Greater market access, with a single EU-wide rulebook that lets you deploy the same AI features across all 27 member states, no adjustments required

Device manufacturers that begin aligning with the EU AI Act ahead of its key enforcement deadlines will gain a first-mover advantage. Early adopters can build regulatory trust, accelerate product certification, and ensure their AI-integrated devices remain both market-ready and competitively differentiated as the Act comes fully into force.

What’s the risk of non-compliance?

Failing to comply with the EU AI Act after 2 August 2026 may incur the following legal and commercial penalties:

  • Fines of up to €35 million or 7% of global annual turnover (whichever is higher) for violating the prohibited AI practices ban, and up to €15 million or 3% for breaching high-risk AI system requirements; the short calculation after this list shows how these caps work
  • Forced withdrawal or suspension of non-compliant AI systems, eroding brand trust and discouraging procurement tenders and partnerships
  • Increased regulatory scrutiny, including more frequent audits, inspections, and technical obligations that compound your administrative burden
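Because each cap is a fixed amount or a share of worldwide annual turnover, whichever is higher, the practical maximum depends on your company’s size. A minimal sketch of that calculation (the function name and tier labels are illustrative):

```python
def max_fine(turnover_eur: float, violation: str) -> float:
    """Maximum possible EU AI Act fine: a fixed cap or a share of
    worldwide annual turnover, whichever is higher."""
    caps = {
        "prohibited_practice": (35_000_000, 0.07),  # banned, unacceptable-risk AI
        "high_risk_breach": (15_000_000, 0.03),     # high-risk system obligations
    }
    fixed_cap, turnover_share = caps[violation]
    return max(fixed_cap, turnover_share * turnover_eur)

# A company with €1 billion in annual turnover faces up to €70 million for a
# prohibited-practice violation, because 7% of turnover exceeds the €35M cap.
print(f"€{max_fine(1_000_000_000, 'prohibited_practice'):,.0f}")  # €70,000,000
```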

The complete penalty list is set out in Article 99 of the Act.

How to prepare for that all-important deadline

In line with European Commission guidance, we recommend performing a thorough EU AI Act readiness assessment as soon as possible. This will help you lay the foundation for long-term compliance and avoid a disruptive last-minute scramble come next year.

To get started, prioritise the following actions:

  • Map your product portfolio to identify devices with embedded AI features and assess their risk levels against the Act’s framework
  • Perform a compliance gap analysis between your AI systems and the EU AI Act in areas such as data quality, human oversight measures, and bias detection
  • Build a flexible governance plan that clearly outlines how you’ll comply with specific obligations while allowing for future legal adaptations
  • Update your AI policies for internal and external use, which your AI compliance lead can use to drive strategic awareness and inform training sessions
  • Create a comprehensive AI inventory to track the live status and purpose of all your AI systems, and register any high-risk systems in the EU AI database (see the record sketch after this list)
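As a starting point for that inventory, here’s a minimal sketch (Python 3.10+) of the kind of record you might keep per AI system. All field names and values are hypothetical; extend them to match what your auditors and notified bodies actually expect.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an internal EU AI Act inventory (illustrative fields only)."""
    name: str
    intended_purpose: str
    risk_tier: str                  # "unacceptable" | "high" | "limited" | "minimal"
    annex_iii_category: str | None  # e.g. "critical infrastructure" if high-risk
    status: str                     # e.g. "in development", "on market", "withdrawn"
    registered_in_eu_database: bool = False
    last_gap_analysis: date | None = None
    open_gaps: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="predictive_maintenance_v2",
        intended_purpose="Forecast actuator wear on rail signalling units",
        risk_tier="high",
        annex_iii_category="critical infrastructure",
        status="in development",
        last_gap_analysis=date(2025, 11, 3),
        open_gaps=["human oversight procedure", "training data bias report"],
    ),
]

# Flag high-risk systems that still need registering in the EU AI database.
for record in inventory:
    if record.risk_tier == "high" and not record.registered_in_eu_database:
        print(f"ACTION: register '{record.name}' in the EU AI database")
```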

You can also use the online EU AI Act Compliance Checker to gain a snapshot of your organisation’s current compliance status.

Streamline compliance with a proven AI partner

We know that navigating emerging regulations—especially ones as complex as the EU AI Act—can be disruptive. That’s why having the right technical partner matters.

With deep expertise in quality-critical software development, testing, and compliance consultancy, our teams support organisations in delivering robust, regulation-ready embedded systems. If you’re exploring how the EU AI Act might affect your software, or you need a partner who understands the regulatory landscape, we’re here to help.

Whether you’re creating a new device or updating an existing one, get in touch to discuss your embedded software needs.

 
