Investigation of the agreement coefficient of diagnostic power between ChatGPT and specialist dentists in the field of diagnosis
Abstract:
Introduction: Artificial intelligence is expanding in today's world, and its applications in medical fields have received special attention in recent years. It is predicted that within the next 10 years, artificial intelligence will replace many jobs and affect people's daily lives. This study was conducted to examine the accuracy of ChatGPT in dental diagnosis compared with that of specialist dentists.
Methods:
A total of 36 questions were selected by a specialist dentist and, after obtaining expert opinion, translated into English. After a grammatical check, the questions were posed to two specialist dentists and then to ChatGPT. The two specialist dentists answered the questions together, and their diagnosis for each question was identical; ChatGPT's answers were then compared against these reference answers, and Cohen's kappa coefficient of agreement was calculated.
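The agreement statistic used here can be computed as follows. This is a minimal illustrative sketch of Cohen's kappa (not the study's actual data): the labels and case count below are hypothetical, chosen only to show the calculation of observed versus chance-expected agreement.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled the same.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    count_a = Counter(rater_a)
    count_b = Counter(rater_b)
    p_e = sum(count_a[label] * count_b.get(label, 0) for label in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters classifying 10 cases as "caries" or "healthy".
dentist = ["caries", "caries", "healthy", "caries", "healthy",
           "healthy", "caries", "caries", "healthy", "caries"]
chatbot = ["caries", "caries", "healthy", "healthy", "healthy",
           "healthy", "caries", "caries", "caries", "caries"]
print(round(cohens_kappa(dentist, chatbot), 2))  # → 0.58
```

Here the raters agree on 8 of 10 cases (p_o = 0.80), but because both label "caries" often, chance agreement is already high (p_e = 0.52), so kappa corrects the raw agreement downward.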
Results:
ChatGPT was unable to answer 2 questions. For another 2 questions, ChatGPT declared itself unqualified and recommended visiting a dentist. For 8 questions, ChatGPT requested rephrasing and further explanation; after rephrasing, it gave answers matching the specialist diagnosis for 5 questions and differing answers for 3. For the remaining 24 questions, the answers of the specialist dentists and ChatGPT were identical. Cohen's kappa coefficient for the agreement between the specialists and ChatGPT was estimated at 0.77.
Conclusion:
Even at this early stage, ChatGPT shows strong analytical capability, with only a slight difference from the specialist dentists. It seems that with the further expansion of artificial intelligence, the diagnostic power of ChatGPT will improve even more, and it may become able to make diagnoses through image processing alone, without needing to question the patient.