Artificial intelligence should help patients become more autonomous by giving them access to clear, reliable and tailored information. This enables them to better understand their situation, play an active role in their healthcare choices and make decisions in collaboration with professionals.
AI must enhance the quality of care and never reduce healthcare professionals to mere technical operators.
Algorithmic biases and the potential inequalities they generate must be actively identified and corrected to ensure equitable access to the benefits of AI.
Data confidentiality and consent must be strictly respected at every stage (collection, use and return of data).
BST THINKERS advocates for innovation that places humans back at the centre, while respecting the fundamental values of medicine and research.
The Expansion and Ethical Challenges of AI
Complete white paper: arguments and recommendations
Artificial intelligence opens vast fields of application in medicine: advanced imaging, rapid diagnostics and personalised risk prediction. These innovations raise questions about the quality of care and human relationships, which must remain at the heart of medicine. AI should support medical decision-making but never replace the judgement of the practitioner, who remains medically responsible.
Training healthcare professionals — both during their studies and throughout their careers — is essential to strengthen their skills and their ability to use AI in a critical and informed manner.
Bioethical Principles Applied to AI
Patient autonomy: Ensuring accessible information and free, informed consent.
Beneficence and non-maleficence: Promoting well-being and avoiding harm from automated decisions.
Justice and equity: Guaranteeing equal access to the benefits of AI for all.
Confidentiality: Strictly protecting data against any misuse.
Dilemmas: Decision Support or Substitution?
The central issue lies in preserving clinical judgement and expertise in reasoning and decision-making.
Practical know-how:
AI should support, not replace, the practitioner’s judgement. The physician remains responsible for medical decisions. Training must therefore strengthen the technical competence of healthcare professionals and their ability to use AI in an informed and critical manner.
Relational skills:
AI should be used without ever replacing the practitioner’s discernment. A physician must combine technical expertise with relational skills. Training must also develop empathy, communication and the ability to build an ongoing dialogue with patients. This approach fosters mutual understanding and enables patients to anticipate and actively engage in decisions about their health.
AI Bias: Continuous Monitoring and Correction
To ensure ethical and reliable use of AI, it is essential to understand and detect biases. Training data may unintentionally exclude population groups — by age, gender, origin or socio-economic status — leading to diagnostic errors. These biases can also reinforce health inequalities, particularly when certain populations have reduced access to care, technology or medical literacy.
Several measures can mitigate these risks:
Diversifying data sources.
Rigorously validating algorithms.
Implementing continuous monitoring and regular assessments.
These measures are necessary to ensure equity and medical justice for all.
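As an illustration of what "continuous monitoring and regular assessments" can mean in practice, the sketch below audits a model's accuracy per population subgroup and flags groups that lag behind. It is a minimal, hypothetical example: the record format, the age-band grouping and the disparity threshold are all assumptions, not a prescribed method.

```python
# Minimal sketch of a subgroup performance audit for a diagnostic model.
# The record format, subgroups and threshold are illustrative assumptions.

from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per subgroup from (group, prediction, label) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, prediction, label in records:
        total[group] += 1
        if prediction == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_disparities(records, max_gap=0.05):
    """Flag subgroups whose accuracy trails the best-performing group
    by more than max_gap -- a trigger for human review, not a verdict."""
    acc = subgroup_accuracy(records)
    best = max(acc.values())
    return {g: a for g, a in acc.items() if best - a > max_gap}

# Illustrative data: (age_band, model_prediction, ground_truth)
records = [
    ("18-40", 1, 1), ("18-40", 0, 0), ("18-40", 1, 1), ("18-40", 0, 0),
    ("65+",   1, 0), ("65+",   0, 0), ("65+",   1, 1), ("65+",   0, 1),
]
print(flag_disparities(records))  # the older cohort is flagged: 0.5 vs 1.0
```

Run regularly on fresh data, such a check operationalises the equity requirement: a flagged subgroup triggers investigation of the training data and, if needed, algorithm revalidation.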
Responsibility and Legal Framework
Roles must be clearly defined: the physician always retains ultimate validation of AI-assisted decisions. A clear regulatory framework, including alignment with the European AI Act, is essential to protect both patients and practitioners.
Ethical Data Management
Informed consent and transparency in data use — for care, research or industrial applications — are indispensable. Any secondary use of data must be clearly communicated and secured to preserve patient trust.
Ethical responsibility is collective: designers, manufacturers and users must all uphold the same standards of rigour and respect for rights.
Data management raises particularly complex questions in emerging fields such as bioprinting. In this domain, several actors are involved — the algorithm designer, the device manufacturer and the practitioner who implants or uses the final product.
Across invention, production and application, responsibility becomes diffuse, especially where biological and digital data intersect. Who is responsible in the event of error, misuse or unauthorised reuse of data?
Currently, no universally accepted answer exists. This uncertainty underscores the urgent need for an ethical and legal framework capable of defining collective responsibility while guaranteeing transparency, traceability and informed patient consent at every stage of the innovation chain.
BST Thinkers Recommendations
Raise awareness and train professionals in the ethical and critical use of AI, complementing technical learning.
Ensure the representativeness and equity of data, from collection to processing, by guaranteeing balanced access to digital innovations.
Maintain human medical judgement as the ultimate decision criterion, supported by explicit and transparent reasoning communicable to patients.
Comply with European legal frameworks and international guidelines on AI and data governance.
Develop soft skills such as trust, responsibility, empathy and communication to preserve the human dimension of medical practice in the digital age.
Strengthen professional confidence and interpersonal skills through dedicated training programmes.
The future of AI in medicine will depend on responsibility and humanity. Innovation must improve care and trust, serving patients, professionals and the common good — without sacrificing fundamental human values.