The Healthy Technology Act of 2025, a bill introduced in January 2025 by David Schweikert, a Republican Representative (Rep.) in the U.S. Congress, could mark a significant turning point for medicine in the United States. The bill proposes to amend Section 503(b) of the Federal Food, Drug, and Cosmetic Act, the federal law governing the safety and efficacy of drugs, medical devices, and food products, to recognize AI/ML technologies as “authorized practitioners”. This recognition would enable these technologies to prescribe drugs to patients in an automated manner, without human intervention.
It is important to note that this would not mean unrestricted access: these technologies would still need to be licensed by the state and approved, validated, or cleared by the Food and Drug Administration (FDA). While passage of the proposal is far from certain in the short term, its introduction is expected to stimulate a rigorous debate about the legal and ethical implications of a healthcare system in which AI assumes an autonomous prescriptive role.
What would happen if an algorithm were able to prescribe drugs to patients automatically, without human intervention?
Currently, only healthcare professionals are authorized to prescribe drugs. However, several AI-based systems are already in development and testing to assist healthcare professionals in this task. Examples include Google AI systems for drug prescriptions, Oxford University’s DrugGPT, which assists clinicians in prescribing drugs, and PharmacyGPT, a framework using large language models to replicate the role of clinical pharmacists. Given that AI systems already assist clinicians in prescribing drugs, it is worth asking what would happen if clinicians were no longer needed in this process.
The U.S. innovation ecosystem
The Healthy Technology Act was proposed in the United States, the product of an ecosystem that encourages experimentation and innovation. The United States has historically adopted a more permissive approach to fostering innovation, whereas Europe maintains a cautious stance with stringent constraints, particularly in high-risk sectors such as healthcare. This divergence in regulatory models reflects a broader cultural distinction regarding AI and the balance between technological advancement and the protection of fundamental rights.
In the United States, AI regulation in healthcare has evolved within a fragmented framework, characterized by the progressive adaptation of existing regulations rather than comprehensive reform. The cornerstone of governance is the FDA, which has integrated AI into its regulatory frameworks for over a decade, treating it as a specific category of medical device software (Software as a Medical Device, SaMD).
Since 2018, when it approved IDx-DR, the first autonomous diagnostic algorithm for diabetic retinopathy, the FDA has steadily expanded its oversight from case-by-case review to a more structured system.
In the United States, AI regulation in healthcare has evolved within a fragmented framework
The AI/ML Action Plan of 2021 introduced concepts such as the “Predetermined Change Control Plan”, allowing algorithmic updates within predefined limits without requiring continuous reapproval. Additionally, the “Good Machine Learning Practice for Medical Device Development: Guiding Principles” aimed to establish minimum quality standards for the development and implementation of AI in healthcare devices.
Rather than imposing specific requirements, this principle-based regulatory model gives manufacturers greater flexibility in demonstrating how their AI systems conform to overarching principles of safety and efficacy. However, the absence of binding federal regulations on AI use in healthcare has led to fragmentation among sectoral regulations and non-binding guidelines, resulting in a regulatory ecosystem where legal accountability is not always clearly defined.
Lastly, the Trump administration has taken further steps toward deregulation, including the revocation of President Biden’s Executive Order 14110, which sought to regulate AI at the federal level through a coherent action plan implemented by federal agencies, and the consequent withdrawal of the Department of Health and Human Services’ AI Strategic Plan, which emphasized ex ante risk mitigation.
The European culture of precaution
In Europe, the development of AI in healthcare has been progressively consolidated through a centralized and integrated regulatory framework, culminating in the AI Act. The Act requires AI systems to be “human-centric” and transparent, adopting a risk-based approach grounded in the precautionary principle.
In contrast with the deregulatory trajectory of AI development and adoption on the other side of the Atlantic, Europe adopts a more cautious approach. The European Artificial Intelligence Regulation (AI Act, 2024/1689) establishes significant transparency rules for all AI systems and classifies most AI systems in healthcare as “high risk”. These high-risk AI systems include those with a medical intended use qualifying them as Class IIa medical devices or higher, as well as AI systems used for triage and healthcare prioritization activities.
Europe adopts a more cautious approach on AI adoption in healthcare
The “high-risk AI system” classification imposes stringent requirements beyond the baseline ones, covering aspects such as quality management, human oversight, and CE marking through the involvement of a Notified Body. This process is fully integrated with the activities mandated by the Medical Device Regulation (MDR, 2017/745), which came into effect in 2021.
Moreover, the General Data Protection Regulation (GDPR) limits the use of health data and severely restricts profiling and automated decisions without human intervention. While these regulations offer safeguards for patients, they also pose challenges for companies in the sector, particularly startups and SMEs, due to the high costs and bureaucratic complexities involved in complying with these requirements.
Who is responsible when an algorithm prescribes drugs?
In Europe, automated decision-making by AI algorithms in healthcare is stringently regulated. Specifically, for all “high-risk” AI systems, human supervisors must be able to interrupt or override the model in a controlled and deliberate manner, with a real “stop button” available at predefined stages of the model’s decision cycle.
A proposal such as Representative Schweikert’s raises legal liability and patient protection issues. For instance, if an algorithm were to recommend the wrong medication or provide an inaccurate diagnosis, determining liability would become complex. Currently, the EU adheres to a strict liability regime, assigning liability to the manufacturer in accordance with the principle of defective product liability. Liability shifts to the healthcare provider only in specific cases, primarily those involving unauthorized or negligent use of the product.
If an algorithm were to recommend the wrong medication or provide an inaccurate diagnosis, determining liability would become complex
Conversely, in the United States, this matter remains unsettled. The lack of a specific regulatory framework for AI leaves room for varied legal interpretations, often resulting in the physician being held solely responsible in the event of an error. However, as AI systems advance toward greater autonomous decision-making capabilities, this approach may prove unsustainable, necessitating regulatory updates that distribute responsibilities more equitably among health technology developers, providers, and users.
AI literacy for healthcare professionals
In today’s healthcare landscape, it is essential for professionals to be literate in artificial intelligence. To fully harness the potential of these technologies while mitigating possible risks, comprehensive training for healthcare professionals is crucial, particularly in such a sensitive field.
The European Union has taken a pioneering step with the approval of Article 4 of the AI Act. This provision mandates that providers and deployers of AI systems ensure a sufficient level of AI literacy among the personnel involved, taking into account their technical knowledge, experience, and training, as well as the contexts in which the systems are used.
There is an immediate need to define the essential knowledge and skills in AI for healthcare professionals. Awareness of available tools and their functions is important but insufficient. Professionals must critically integrate these technologies into clinical practice. They must discern when to rely on AI and when to question its outputs, always recognizing their irreplaceable role in decision-making.
Professionals must critically integrate these technologies into clinical practice
AI literacy should be an ongoing process, aligning with the continuous evolution of technology and requiring practitioners to constantly update their skills. AI can be a tremendous ally in medicine, provided it is used wisely and does not supplant clinical judgment with the illusion of technological infallibility.
Final ethical considerations
Finally, this scenario necessitates critical reflection on the new ethical responsibilities associated with the use of AI. While AI has traditionally been used primarily to support diagnosis and prognosis, the Healthy Technology Act of 2025 would herald a significant change, with AI becoming autonomous in the therapeutic process. This transformation raises crucial questions regarding accountability for algorithmic decisions, the protection of patient privacy, and fairness in access to care. Historically, physicians have been the guardians of patient health, exercising judgment and empathy. If an algorithm can prescribe medication, we must consider whether a machine can make decisions with the same sensitivity and responsibility as humans.
Some ethical systems adopt a deontological (Kantian) approach, based on absolute moral principles where the morality of an action depends on its intention. Can an AI, lacking free will, truly make ethically sound decisions? Other systems favor a utilitarian (Benthamian) perspective, focusing on the consequences of actions; in this scenario, an AI prescribing medication must be evaluated not only for the formal accuracy of its decisions but also for their tangible impact on patients’ health.
Additionally, the Moral Machine experiment, which collected almost 40 million moral decisions from participants in 233 countries and territories, demonstrated that ethical judgments are not universal but are deeply influenced by cultural context. These findings suggest that the principles of ethics applied to AI cannot be uniform, but must instead adapt to the values and norms of the contexts in which they are implemented.