Is AI the “Perfect Doctor”? Rules and limits in healthcare

From decision support to predictive diagnostics, artificial intelligence is opening unprecedented scenarios in medicine. Yet, adopting these technologies requires clear governance, high-quality data, and training for healthcare professionals. For TrendSanità, the authors of the book “Is AI the Perfect Doctor?” analyze AI’s potential, risks, and legal aspects

Artificial intelligence is entering healthcare at an unprecedented pace, progressively redefining how diagnoses are made, therapies personalized, and care pathways organized. Imaging algorithms, predictive models, clinical decision-support systems, and natural language processing tools are no longer niche experiments: they are beginning to have a tangible impact on clinical practice and healthcare governance.

This transformation, however, is not merely technological. AI adoption in healthcare raises complex ethical, legal, and organizational questions: from algorithmic transparency to the quality and representativeness of data, from liability in case of errors to privacy protection, up to the relationship between innovation and public trust.

These issues are central to the volume “Is AI the Perfect Doctor?”, which provides a comparative analysis of European and U.S. regulatory frameworks and offers a comprehensive reflection on the future of AI in medicine. For TrendSanità, the book’s authors – Oreste Pollicino (Full Professor of Constitutional Law and AI Regulation, Bocconi University), Francesca Aurora Sacchi (specialized in science policy, AI regulation, and bioethics), and Noemi Conditi (Lawyer, Stefanelli & Stefanelli Law Firm) – analyze AI’s potential, risks, and the legal aspects of the different international approaches.

AI Act and healthcare: algorithmic opacity, data, and governance

Oreste Pollicino

In healthcare, the European risk-based regulatory approach represents a significant paradigm shift, though not without practical challenges. The AI Act rightly identifies healthcare as a high-impact domain for fundamental rights, but its effectiveness will depend on translating high-level principles into actionable tools for clinical and organizational practice.

As Oreste Pollicino points out, “In healthcare, the most complex risks to regulate involve algorithmic opacity, data bias, and the secondary use of sensitive health information.” These are structural elements of AI systems that cannot be addressed solely through formal compliance obligations—they directly affect clinical decision quality and care equity.

The adoption of AI requires clear governance, high-quality data, and workforce training

According to Pollicino, “The AI Act’s risk-based framework is an important step”, but it is insufficient if it remains abstract. “Detailed sector-specific guidelines are needed to translate high-level principles into operational requirements for developers and healthcare organizations,” guiding algorithm design, validation, and use in everyday care settings.

Data governance is another critical node, representing AI’s real infrastructure. “Strengthening data governance is also essential: we need systems that protect privacy and security while enabling responsible secondary use of data to accelerate research and innovation,” Pollicino emphasizes. In this perspective, protecting fundamental rights and fostering technological development are not conflicting objectives—they must be integrated through advanced, responsible governance models.

From technological innovation to integration into care pathways

Alongside regulatory challenges, the book examines the evolution of AI tools already available or in advanced testing phases. The emerging picture is that of a rich, rapidly transforming ecosystem in which some applications are beginning to yield significant evidence.

Pollicino notes that “Many tools are showing strong potential,” citing “imaging algorithms for early diagnosis, predictive models for therapy personalization, clinical decision-support systems, and platforms that analyze real-world data.” Complementing these are natural language processing solutions, which “are also advancing rapidly and improving documentation and research workflows.”

Artificial intelligence can improve medicine only if guided by strong ethical principles

The real leap, however, concerns the ability to integrate these technologies into care pathways. “The next step is integrating these tools into care pathways,” Pollicino emphasizes, highlighting the need to pair technological innovation with structured investment in skills. In this sense, “digital upskilling programs for healthcare professionals, as foreseen by the AI Act, are essential to ensure safe, transparent, and responsible clinical use.”

Ethics, data, and regulatory convergence: bridging Europe and the U.S.

Francesca Aurora Sacchi

Digital transformation in healthcare does not develop within national borders. Data circulation, research, and algorithm development make comparison between regulatory models inevitable. In this scenario, dialogue between the EU and the U.S. is strategic.

Francesca Aurora Sacchi, also a board member of the Italian Society for Artificial Intelligence in Medicine (SIIAM), emphasizes that “Artificial intelligence can transform healthcare only if grounded in strong ethical principles, but we must also strike a balance that allows innovation to flourish.” Differences between the two models are evident: “In Europe, GDPR and the AI Act emphasize transparency and patient safety, while the U.S. tends to adopt a more flexible, market-driven approach.”

The challenge is to avoid fragmentation that penalizes both innovation and citizen protection. “Today’s priority is creating a common ground between these models,” Sacchi observes, “promoting global standards that ensure quality, algorithmic traceability, and interoperability.”

Public trust in the National Health Service depends on the transparency, usability, and safety of AI systems

A crucial element of this convergence concerns access to health data. “Without large, representative datasets, even the most sophisticated algorithms cannot deliver clinical value.” Therefore, “building a secure, privacy-preserving, and well-governed data-sharing ecosystem is essential to develop trustworthy AI solutions that are genuinely useful for clinical practice.”

From research to practice: emerging tools, digital twins, and new complexities

The period of research for the book coincided with an unprecedented acceleration in AI technology. “During the writing process, we witnessed extremely rapid evolution in AI tools – both in analytic capabilities and accessibility,” Sacchi recounts. Solutions initially perceived as experimental quickly became “integral to clinical and research workflows.”

Integrating digital twins into clinical pathways reduces development timelines and risks

Among the most notable innovations are digital twins: “Dynamic virtual replicas of patients that integrate clinical, physiological, and genomic data to simulate disease progression or treatment response.” Their potential also extends to R&D: “Digital twins make it possible to model interventions, optimize dosing, shorten development timelines, and potentially reduce reliance on early-stage traditional trials.”

This evolution also brings new responsibilities. “We also observed growing attention to data quality and algorithmic explainability,” marking a shift to a “more mature, but also more complex” landscape that demands “continuous regulatory and educational updates.”

AI in medical devices: safety, obligations, and liability

One of the most delicate areas is AI integrated into medical devices, where innovation directly intersects with patient safety. Noemi Conditi notes that “Integrating AI into medical devices requires simultaneous compliance with Regulation 2024/1689 (AI Act) and Regulation 2017/745 (MDR).”

Medical devices with AI require strict regulatory compliance and continuous attention to safety

Noemi Conditi

This regulatory combination “aims to ensure consistent levels of safety, performance, and quality throughout the product lifecycle,” but also “increases obligations for all actors in the supply chain,” with “significant administrative and resource implications.”

On liability, “The liability regime has not undergone a radical transformation,” Conditi explains, “as the proposal for a dedicated directive was ultimately abandoned; therefore, standard rules continue to apply.”

Clinical adoption and physician–patient relationship: a matter of trust

AI adoption in daily practice is growing but remains uneven. “The share of healthcare professionals using AI tools is steadily increasing, with near-exponential growth,” Conditi observes. However, many digital solutions “currently support relatively simple tasks,” while “the adoption of more complex systems remains challenging.”

Data protection and algorithmic transparency are essential for patient acceptance

The impact on the physician–patient relationship could also be substantial. “In principle, AI can significantly influence the physician–patient relationship,” Conditi notes. On one hand, patients “can benefit from improved care quality,” but on the other, “ensuring usability, inclusiveness, and transparency is essential to maintain trust in the healthcare system as a whole.”

A transformation to govern, not endure

Artificial intelligence is not the “perfect doctor,” but a powerful tool that can improve the quality, efficiency, and equity of healthcare systems. As highlighted by the authors of Is AI the Perfect Doctor?, its impact will depend on the ability to integrate technological innovation, effective regulation, and professional training.

Today’s challenge is no longer whether to adopt AI in healthcare, but how to do so responsibly, transparently, and sustainably, preserving citizens’ trust and reinforcing the public value of the healthcare system.

Rossella Iannone
Editor-in-Chief, TrendSanità