The use of artificial intelligence in dentistry – how should ethics be considered? 

By Marcus Engelschalk 1, 2 and Ralf Smeets 2
1 SlowDigitalDentistry, Private Dental Office, Munich, Germany 
2 Department of Oral and Maxillofacial Surgery, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany 

Every application of artificial intelligence (AI) in medicine should improve access to medical care and its quality, as well as increase the efficiency and safety of the respective therapy. AI also enables the sharing of patient data to support medical research, thereby reinforcing and strengthening treatment recommendations and strategies.1 Digital phenotyping already plays an invisible and pervasive role beyond medical treatment, with data typically collected on mobile devices through health and wellbeing apps and cloud-based services. The users of such phenotypic data can select what data to collect and how and where it is stored, and the data can then be analyzed to describe individual or collective patterns. Thus, the first ethical issue in the use of AI with phenotypic data is the democratization of this data, since the user alone decides which data is selected.

Fig. 1: AR (Augmented Reality) in the daily routine as an intermediary between dentist and software for the use of AI 

Essentially, every new dental workflow should have a distinct, noticeable positive effect on the patient and the corresponding form of therapy. While the application of AI for diagnosis and therapy planning can clearly offer significant improvements in treatment, its use is also associated with risks and challenges, especially related to the inclusion of soft skills (human emotions) and hard skills (technical skills).

Although AI has no ability to experience human emotions (soft skills), it can recognize them from the patient’s body language, facial expressions, text, or voice. For example, in the context of a psycho-emotional patient survey, AI analyzed the patient’s textual interactions using a higher-level algorithm to control the solution search of one or more dependent algorithms and machine learning classification, producing a classification rate of 90%.2 When the patient survey analysis results of the AI model were then compared with those of psychiatrists, there was no significant difference in predicting moderate mental distress, yet the AI model was significantly more accurate in predicting severe mental distress.3 Professionalism in medicine involves more than pure knowledge-based action and can be generalized as follows: (1) the patient has a concern; (2) the doctor attends to the patient’s concern; (3) the doctor helps without patronizing; (4) the doctor looks at the patient holistically without establishing a private relationship; and (5) the doctor applies their general knowledge to the specifics of the case. When these five characteristics are applied to the use of AI, Heyen et al. came to the following conclusions:4 

– The use of AI in medical practice requires the doctor to pay particular attention to those facts of the individual case that cannot be comprehensively considered by AI (personality, life situation or the cultural background of the patient).  

– The more routine the use of AI becomes in practice, the more doctors need to focus on the patient’s concerns and strengthen patient autonomy, for example through an appropriate integration of digital decision support in shared decision-making.  

– Because computer-based technologies are generally considered insensitive, the use of AI in some areas of the medical professions is particularly questionable. 

As a result, the need for doctors to apply soft skills to maintain a humane and ethically defined doctor-patient relationship becomes even more important and should be defined urgently. 
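The machine-learning text classification described above, an algorithm assigning a distress level to a patient’s written interactions, can be illustrated with a minimal sketch. The training snippets, labels, and the naive Bayes approach below are hypothetical illustrations, not the model or data of the cited studies:

```python
from collections import Counter
import math

# Hypothetical training snippets (not data from the cited studies):
# patient text labeled with a distress level.
train = [
    ("i feel fine and calm today", "low"),
    ("the visit went well no worries", "low"),
    ("i am anxious and cannot sleep", "high"),
    ("constant pain and fear overwhelm me", "high"),
]

def fit(data):
    # Count word frequencies per label and the number of documents per label.
    counts, totals = {}, Counter()
    for text, label in data:
        counts.setdefault(label, Counter()).update(text.split())
        totals[label] += 1
    return counts, totals

def predict(model, text):
    # Naive Bayes with Laplace smoothing: pick the label maximizing
    # log P(label) + sum over words of log P(word | label).
    counts, totals = model
    vocab = {w for c in counts.values() for w in c}
    def log_score(label):
        c = counts[label]
        n = sum(c.values())
        score = math.log(totals[label] / sum(totals.values()))
        for w in text.split():
            score += math.log((c[w] + 1) / (n + len(vocab)))
        return score
    return max(counts, key=log_score)

model = fit(train)
print(predict(model, "i am anxious and in pain"))  # → high
```

A real system would of course use far richer features and training data; the point here is only that the classification step itself is an ordinary supervised-learning problem, while the ethical questions around it are not.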

Accordingly, the use of AI should initiate a new discussion in the field of ethics and the associated ethical concerns and challenges must be worked out. Basic ethical principles, which can be defined as beneficence, non-maleficence, autonomy, equity, and explainability, all require concrete recommendations, as well as reimbursement policies, security and data stewardship when implemented in the world of AI.5

A systematic literature review of AI in dentistry identified 1,533 articles, among which 178 studies were highlighted. These studies identified 53 different applications of AI in dentistry and defined 6 ethical principles for their use: prudence, justice, data protection, responsibility, democratic participation, and solidarity.6 AI is not routinely used in dental practices as yet, possibly due to data availability, a lack of structures, or questions on the benefits for the clinician, ethics, and responsibilities. Data must be used ethically throughout its retention period, yet this requires harmonized data quality assessment, management, and control to strengthen trust in AI. Thus, factors such as data quality, algorithmic bias, lack of transparency, security, and assignment of responsibility can all influence the trustworthiness of medical AI. For a comprehensive ethical classification, these factors must be evaluated from the perspectives of technology, law, and health care. Medical data is currently unstructured due to the lack of standardized annotation, and data quality can directly impact the quality of medical AI algorithm models, thereby impacting AI clinical predictions. This can also affect patient and doctor confidence in AI and pose significant risks and potential harm to the patient. In this context, the term democratization is often used and mainly focuses on the patient as the consumer and relies on limited market-based solutions. Thus, defining and imagining democratization requires a set of social goals, as well as processes and forms of participation to ensure that those affected by AI in healthcare have a say in its development and use.7 

While the Hippocratic Oath and the Belmont Report describe basic principles for the doctor-patient relationship, the increasing use of big data and AI techniques calls for a re-examination of the principles of privacy, confidentiality, data ownership, informed consent, epistemology, and injustice, as physicians have a traditional, fiduciary responsibility to protect the interests and privacy of their patients.8 The autonomy of doctors and the dignity of patients can be further threatened by the inclusion of AI. In addition, in the case of complications or treatment errors, the allocation of responsibility is currently unclear, so the legal implications of using AI in the medical field need to be clarified. Given the rapid rise of AI, such concerns are warranted and require a legal response to manage current and future uses; however, there is as yet no comprehensive summary in the literature that examines and identifies the legal concerns of health-related AI.9 When it comes to legal questions about AI in medicine, it is also important to consider a wide range of interest groups and to involve political decision-makers, developers, healthcare providers, and patients. 

To ensure the ethical implementation of medical AI, the following points can be defined as basic assumptions:11 

– promotion of human health is the ultimate goal 

– current medical AI has no moral status, so the human remains the duty bearer 

– strengthened data quality management 

– improved transparency and traceability of algorithms 

– reduction of distortion by algorithms 

– ongoing regulation and review of the entire process of the AI industry to control risks 

The AI algorithms used for decision-making or targeted actions are based on data and models that contain relevant information on the question to be analyzed. Since AI application in the medical field relates directly to the health and life of patients, the data and corresponding algorithms must be collected, cleaned, and organized with extreme precision for unequivocal interpretation. Data dependency must be avoided as far as possible through comprehensive and reliable data collection to avoid bias, false assumptions, or other types of error that can affect users and patients. Since AI applications are modelled or programmed by engineers, algorithmic procedures are needed to ensure implementation safety and avoid unforeseen consequences. Supervisory bodies therefore need to be established to monitor technological developments and ensure that preventive safeguards are in place to protect stakeholders from direct or indirect harm, and it is the responsibility of AI researchers to ensure that future impacts are positive, with ethicists and philosophers deeply involved in the development of such technologies from the outset. Explainability is also a touchstone for AI decisions, meaning that it should be possible to reconstruct why an AI system produced certain predictions.12 Explainability is not just a technological issue, but also raises questions of a medical, legal, ethical, and social nature. Test simulations of each question should therefore be conducted to evaluate the accuracy and precision of the AI system, including intentional mislabeling of training images according to different values, called a “mislabeling balance” or “corruption parameter”. Even slight corruption can affect the accuracy.13 An ethical evaluation framework for algorithms has already been proposed based on Beauchamp and Childress’s Principles of Biomedical Ethics, applying the criteria of autonomy, beneficence, non-maleficence, and justice to assess explainability. 
The omission of explainability in medical decision support systems represents a threat to basic ethical values in medicine and can have adverse consequences for individual health.
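The “corruption parameter” idea can be made concrete with a small sketch: deliberately flip a fraction of training labels and measure how the accuracy of a simple classifier degrades. The 1-nearest-neighbour setup and the toy data below are hypothetical illustrations, not the method of the cited study:

```python
# Hypothetical illustration of a "corruption parameter": flip a fraction
# of training labels and observe the effect on classification accuracy.

points = list(range(10))          # 1-D feature values
true_labels = [0] * 5 + [1] * 5   # ground truth: 0-4 -> class 0, 5-9 -> class 1

def one_nn(train_x, train_y, x):
    # 1-nearest-neighbour prediction: label of the closest training point.
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def accuracy_with_corruption(p):
    # Deliberately mislabel the first p * n training examples.
    n_flip = int(p * len(true_labels))
    corrupted = [1 - y if i < n_flip else y for i, y in enumerate(true_labels)]
    predictions = [one_nn(points, corrupted, x) for x in points]
    return sum(pred == y for pred, y in zip(predictions, true_labels)) / len(true_labels)

print(accuracy_with_corruption(0.0))  # → 1.0
print(accuracy_with_corruption(0.2))  # → 0.8: even slight corruption costs accuracy
```

In this deliberately simple setup the test points coincide with the training points, so accuracy falls linearly with the corruption parameter; in realistic models the degradation is usually subtler, but it remains measurable in exactly this way.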

Thus, the algorithms must be clearly communicated to the treating physicians on the basis of their explainability in order to facilitate appropriate conclusions and highlight unclear questions. The main task therefore lies in the collection of basic data within an ethically justifiable framework, along with open-minded discussion of the problems and solutions with expert teams from different perspectives. 

In countries with poor or weak healthcare infrastructure, AI offers an exciting solution. However, this is precisely where issues of fairness and algorithmic bias need to be considered, due to a lack of technical capacity, possible prejudice against minority groups, and a lack of legal protections. This makes it necessary to additionally apply the criteria of appropriateness, fairness, and bias when evaluating the use of AI systems and their algorithms.14 

Fig. 2: The use of AR (Augmented Reality) during surgery 

In summary, AI should expand options for action, not prevent them. The principle of “man and machine”, not “man against machine”, should therefore be the top priority. AI, machine learning (ML), and big data can exceed human oversight and storage capacity, leading to problems. Technologies have no ethical or moral status of their own, but are always linked to human activities and serve to enable qualitative and quantitative human performance and interaction. Personal integrity, equity of resource allocation, and accountability of moral agency characterize three ethical dilemmas that arise in the development and application of AI.15 A literature review has shown that the proportion of studies addressing AI-related ethical issues has remained similar in recent years, despite the sharp increase in the number of publications on AI. This suggests a lack of engagement with the topic of ethics and a lack of information about the ethical challenges related to AI. It is nevertheless important to consider soft skills in the philosophy and strategy of medical and dental treatment, and to see AI as an additional component of dental treatment that supports existing concepts. Sensitization to these issues is therefore required in future clinical situations, so that human-machine interaction can be optimized for the best possible patient experience and care. 

Fig. 3: External use of AR during surgery for education purposes 

The introduction of AI as a dependent, semi-autonomous, or fully autonomous partner in patient treatment may challenge the traditional assessment of patient and physician autonomy in treatment. The increasing progress of AI and its implementation in patient care therefore require discussion in order to protect humanitarian concerns in future treatment concepts. Doctors should neither accept developments in AI uncritically nor work against them, but rather actively participate in their development, constant testing, and application.17 Supervisory bodies must be set up to monitor technological developments and ensure that preventive guarantees are in place to protect those involved from direct or indirect harm, and it is the responsibility of AI researchers to ensure that future impacts tend to be positive, with ethicists and philosophers closely involved in the development of such technologies from the beginning.18 While doctors and dentists are accustomed to basing their actions on ethical considerations and implications, such detailed considerations may not always be present to the same extent when dealing with technological advances. In addition to pure decision-making, interactions with patients play a crucial role in the doctor-patient relationship. Using the example of delivering bad news, a hedonistic calculus (the felicific calculus of Jeremy Bentham) can be discussed with the aim of increasing satisfaction and reducing pain. With regard to AI and the delivery of a negative message, the evaluation points of intensity, duration, certainty, propinquity, fecundity, purity, and extent must all be processed as comprehensively as possible to take account of the ethics involved in doctor-patient communication.19 
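As a purely illustrative sketch (the dimension scores and the two scenarios below are hypothetical, not taken from the cited source), Bentham’s seven evaluation points can be expressed as a simple net score:

```python
# Bentham's felicific calculus as a simple sum over the seven evaluation
# points. All scores below are hypothetical, for illustration only.
DIMENSIONS = ["intensity", "duration", "certainty", "propinquity",
              "fecundity", "purity", "extent"]

def felicific_score(scores):
    # Net utility: positive values indicate pleasure, negative values pain.
    return sum(scores.get(d, 0) for d in DIMENSIONS)

# Two hypothetical ways of delivering the same bad news:
blunt  = {"intensity": -8, "duration": -4, "certainty": 9, "propinquity": 8,
          "fecundity": -5, "purity": -6, "extent": -2}
gentle = {"intensity": -4, "duration": -5, "certainty": 9, "propinquity": 6,
          "fecundity": 2, "purity": -1, "extent": -2}

print(felicific_score(blunt), felicific_score(gentle))  # → -8 5
```

Under this calculus the option with the higher net score, here the gentler delivery, would be preferred; an AI system taking part in doctor-patient communication would have to weigh all seven points as comprehensively as possible rather than optimize a single one.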

Fig. 4: Using AI for the segmentation of different bony parts 
Fig. 5: The use of AI for segmentation of teeth 
Fig. 6: The use of AI for nerve segmentation 

The generation of digital twins in connection with AI is the next step, or evolutionary stage, of digital health technologies and has the potential to change and challenge future medical practice. This already offers the opportunity to discuss ethical, legal, and societal implications in advance.20 AI ethics must include perspectives from philosophy, computer science, law, economics, and, of course, medicine and dentistry. The goals of developing ethical AI systems should always go hand in hand with the characterization of human moral judgments and their integration into decision-making from a computational point of view.21 From an ethical point of view, two problems arise: a possible lack of transparency in computer simulation (epistemic opacity), which conflicts with the human need for understanding, and the unclear assignment of responsibility in the event of failure. Accordingly, ways must be found to link the results of machine algorithms with the desire for discussion, and the term “explainable AI” needs to be further defined and developed.22 

In conclusion, AI must respect human rights and freedoms, including the dignity and privacy of an individual. Attention should be paid to the greatest possible transparency and reliability, and the ultimate responsibility and accountability for the application of AI should remain with the human developers and operators for the foreseeable future. 


Dr Marcus Engelschalk
Clinic Name: Slow Digital Dentistry
Address: Frauenplatz 11, 80331 Munich, Germany


