
May 1, 2024

Draft Regulation Includes Requirements for Medical Devices

The Parliament of the European Union (“EU”) adopted the draft Artificial Intelligence Act on March 13, 2024.  The proposed regulation is positioned as a “horizontal EU legislative instrument,”[1] meaning it applies broadly to all artificial intelligence (“AI”) systems placed on the market or used in the EU, rather than being a sector-specific regulation directed towards, for example, medical devices.  


That said, the Artificial Intelligence Act dovetails with the EU Medical Device Regulation (EU 2017/745) and In-vitro Diagnostic Regulation (EU 2017/746), amongst other EU sector- and product-specific regulations.[2]  That does not mean, however, that medical device manufacturers can expect a “soft touch” or to be mostly overlooked by the regulation.  Medical devices that incorporate AI may be classified as “high-risk,”[3] a term which appears 475 times in the draft regulation.  


The draft regulation signals the EU’s stance on AI and previews what the final legislation is likely to include.  This article examines the status of medical devices within the draft regulation and provides context on this developing area. 

It should be kept in mind that the opinions in this post apply only to the current draft regulation; the final legislation may look very different, and the information contained herein may be rendered irrelevant.  This article also does not address several of the requirements applicable to “general purpose AI models,” which the final regulation is expected to cover.


I. Risk-Based Approach


The draft regulation defines an AI system as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”[4]  The draft regulation takes a risk-based approach, where risk is determined by the probability of a harm occurring combined with the severity of that harm.[5]  Uses of AI perceived as higher risk receive more regulation than lower-risk applications. 

Examples of high-risk AI systems include medical devices that incorporate AI into the safety components of the device, as well as AI-based products that qualify as Software as a Medical Device (“SAMD”).[6]


II. Assess, and Re-assess


To place a medical device containing AI on the market in the EU (or European Economic Area (“EEA”)), the manufacturer must first submit an application to a “Notified Body,” a non-governmental organization charged with reviewing and approving the device.  After assessing the application, the Notified Body may issue a certificate (“CE certification,” where “CE” is an abbreviation of the French term Conformité Européenne).  Only regulated products bearing a CE mark may be marketed in the EU.


High-risk AI applications are eligible for a CE certificate that is valid for only four years.  After the four-year term expires, the manufacturer must obtain re-certification, which includes a re-assessment of the product “in accordance with the applicable conformity assessment procedures.”[7]  This is important for manufacturers to keep in mind: not only might the AI model evolve, changing its risk profile and the outcome of any conformity re-assessment, but the conformity assessment procedures themselves might change as well.  In that case, past submissions cannot be relied upon to guide re-assessment for CE certification.  Manufacturers should anticipate that entirely new data may have to be generated, and that where the risk profile or conformity assessment procedures have changed, re-assessment may take no less time than the initial CE certification assessment.


III. More Regulation to Come


Real-world testing of high-risk AI devices may have to be conducted according to specific, forthcoming regulations, including member-state-specific requirements for “AI regulatory sandboxes.”[8]  Such regulatory sandboxes “shall provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant . . . [and] may include testing in real world conditions supervised in the sandbox.”[9] 


For high-risk AI systems, the developer must submit a test plan to the market surveillance authority of the member state where the developer plans to place the system on the market.[10]  Developers may submit test data collected in a third country if “appropriate and applicable safeguards under [European] Union law are implemented.”[11]  In other words, regardless of where product testing occurs, it must comply with EU regulation to be considered by the relevant national authority if the manufacturer wishes to place its product on the market in the EU.


IV. Conclusion


The draft regulation shows the current path that regulation of AI-containing medical devices is likely to take in the EU.  Medical devices that incorporate AI technology into device safety components, as well as SAMD applications incorporating AI, are likely to be classified as high-risk devices and face the highest scrutiny.  This includes complying with additional regulatory requirements for product testing and participating in a “regulatory sandbox.”  EU regulation of AI is coming, and medical device manufacturers should prepare for these additional requirements when developing and testing products containing AI software.

 

Reach out to Thompson PLLC for help navigating this complex and evolving area of law.

 


[1] Tambiama Madiega, Briefing: EU Legislation in Progress, Artificial Intelligence Act, p. 2, available at Artificial intelligence act (europa.eu).

[2] See Preamble, para. 50.

[3] See Preamble, para. 50: “As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is appropriate to classify them as high-risk under this Regulation if the product concerned undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.”

[4] Article 3(1) of draft regulation.

[5] Id. at Article 3(2).

[6] See, e.g., Preamble, para. 50.

[7] Article 44(2).

[8] See Article 57(1).

[9] Article 57(5).

[10] See Article 60(4)(a) and (b).

[11] Article 60(4)(e).


80 South 8th St. 

Suite 900

Minneapolis, MN 55402

612-351-2228


