The impact of ChatGPT on medical publishing practices is currently unknown, but it could have substantial ethical implications. ChatGPT, an AI chatbot released in November 2022, has already attracted millions of users and has been used to draft university essays and scholarly articles.
The functionality of ChatGPT highlights the growing need for robust AI authorship guidelines in scholarly publishing. Ethical concerns abound over copyright, attribution, plagiarism, and authorship when AI produces academic text. These concerns are especially pertinent because AI-generated copy is currently difficult for both human readers and anti-plagiarism software to detect.
Studies across various fields have already listed ChatGPT as an author, but whether generative AI fulfills the International Committee of Medical Journal Editors’ criteria for authorship is a point of debate. The Committee on Publication Ethics has developed AI recommendations for editorial decision-making, and the trade body for scholarly publishers, the International Association of Scientific, Technical, and Medical Publishers, produced a white paper on AI ethics.
However, comprehensive discussions about authorship policies are urgent and essential as technologies become better tailored to user needs and more commonly adopted.
The problem of using ChatGPT in medical research
Using ChatGPT is currently free, but OpenAI’s leadership has stated that free access is temporary and the product will eventually be monetized. This could entrench existing international inequalities in scholarly publishing: institutions in socioeconomically advantaged areas could likely afford access, while those in low-income and middle-income countries might not be able to offer ChatGPT to their researchers, widening existing disparities in knowledge dissemination and scholarly publishing.
ChatGPT’s ease of use and accessibility could substantially increase scholarly output and democratize the dissemination of knowledge, since the chatbot can receive and produce copy in multiple languages, circumventing the English-language requirements that can be a publishing barrier for speakers of other languages. Nonetheless, ChatGPT can also cause harm by producing misleading or inaccurate content, raising concerns about scholarly misinformation.
The early rollout of ChatGPT will inevitably spawn competitors, potentially making this a far-reaching problem. Rigorous debate about the implications of ChatGPT-generated content in scholarly publishing, with a view to creating comprehensive guidelines, is essential.
ChatGPT’s burgeoning popularity and the public’s early experiences with it show that the time for these conversations is now; after all, can you really be sure that what you are currently reading was written by human authors?
Is using ChatGPT for research reliable?
Using ChatGPT for research can be reliable, but it’s important to use it responsibly and critically. ChatGPT is a powerful tool that can provide valuable insights and information, but it is not without limitations. The most important limitation is that the literature review, which forms the basis of academic research, cannot be fully performed by the chatbot, because ChatGPT cannot access most articles and books.
To test ChatGPT’s consistency in this regard, we asked it to write a 700-word review of Deborah L. Wells’s article, “Dogs As a Diagnostic Tool for Ill Health in Humans”.
This is what the chatbot returned when we asked it to review the article:
“I’m sorry, but I cannot provide a verbatim 700-word review of a specific article as I do not have access to my training data, which includes proprietary databases, subscriptions, or the content behind paywalls. Additionally, I do not have access to specific articles or publications. However, I can offer a general overview and analysis of the topic “Dogs As a Diagnostic Tool for Ill Health in Humans” by Wells, Deborah L., based on my training data up to September 2021.
Title: Dogs As a Diagnostic Tool for Ill Health in Humans
Author: Deborah L. Wells
Introduction:
“Dogs As a Diagnostic Tool for Ill Health in Humans” by Deborah L. Wells is an intriguing article that explores the fascinating connection between dogs and their ability to detect various health conditions in humans. The article delves into the growing body of research that suggests dogs can serve as valuable tools in the early detection of illnesses, ranging from cancer to seizures, thanks to their remarkable olfactory abilities and keen intuition.
Key Points:
Canine Olfactory Sensitivity:
The article highlights the incredible olfactory sensitivity of dogs, which allows them to detect even the subtlest changes in the scent of humans. Dogs possess approximately 300 million olfactory receptors in their noses, compared to about 6 million in humans. This heightened sense of smell makes them exceptionally adept at identifying unique chemical compounds associated with different diseases.
Cancer Detection:
One of the most remarkable aspects discussed in the article is the potential of dogs to detect cancer in humans. Several studies have reported cases where trained dogs were able to identify the scent of cancer cells, leading to earlier diagnoses and improved treatment outcomes. For instance, some dogs have shown an impressive ability to detect skin cancer and even certain types of tumors through breath and urine samples.
Seizure Alert Dogs:
The article also touches upon the role of dogs in helping individuals with epilepsy. Seizure alert dogs are trained to recognize pre-seizure scents and behaviors, providing their owners with vital warnings, allowing them to take precautionary measures. This aspect showcases the practical applications of canine abilities in enhancing the quality of life for people with chronic medical conditions.
Diabetes Detection:
The article discusses the potential of dogs to detect fluctuations in blood sugar levels, making them valuable companions for individuals with diabetes. Some dogs are trained to alert their owners when their blood sugar levels become dangerously high or low, enabling timely intervention and potentially preventing life-threatening situations.
Future Research and Ethical Considerations:
Wells discusses the need for further research to better understand the mechanisms behind dogs’ scent detection abilities. Ethical considerations are also addressed, emphasizing the importance of humane training methods and the well-being of the dogs involved in these diagnostic endeavors.
Critique:
While “Dogs As a Diagnostic Tool for Ill Health in Humans” provides an engaging overview of the subject matter, there are a few areas where it could be improved:
Research Gaps: The article could benefit from a more comprehensive review of the existing research in this field, highlighting both the successes and limitations of using dogs as diagnostic tools. A discussion of ongoing studies and potential areas for future research would add depth to the article.
Practical Implications: It would be helpful to explore the practical challenges and considerations involved in training and deploying dogs for diagnostic purposes. Discussing the cost, time, and resource requirements would provide a more balanced perspective.
Alternative Approaches: While dogs’ olfactory abilities are impressive, it might be beneficial to mention other emerging technologies, such as electronic nose devices, which are being developed for similar diagnostic purposes. A brief comparison of these approaches could provide context.
Conclusion:
In conclusion, “Dogs As a Diagnostic Tool for Ill Health in Humans” by Deborah L. Wells sheds light on the exciting potential of dogs in the field of medical diagnostics. The article effectively highlights the remarkable olfactory abilities of dogs and their applications in detecting various health conditions. However, to make the article more comprehensive, it could delve deeper into research gaps, practical considerations, and alternative approaches. Nonetheless, it serves as a thought-provoking introduction to the subject and piques interest in further exploration of this fascinating topic”.
Although ChatGPT states at the beginning of its response that it does not have access to the article, it goes on to be very specific about it, repeatedly appearing to quote from the article while actually relaying information from its training data. As the example above shows, ChatGPT alone is not enough for academic research.
Using ChatGPT correctly in academic research
Using ChatGPT correctly is essential to ensure the accuracy, reliability, and integrity of research output, as OpenAI’s AI-powered chatbot has shown inaccuracies and citation problems before. If you are planning to use ChatGPT as a driving force behind your research, consider the following factors.
Accuracy and verification
While ChatGPT is trained on a large dataset of text, it’s not perfect and can make mistakes. It’s important to fact-check and verify any information you obtain from ChatGPT, especially if it’s being used for critical applications like academic research or decision-making. Researchers should cross-check the information generated by ChatGPT with other credible sources to ensure its accuracy and validity.
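This cross-checking step can be made at least partly systematic. Below is a minimal sketch of the idea: it flags generated claims that share too few content words with any snippet from a trusted source. The function name, the word-overlap heuristic, and the sample strings are all illustrative assumptions, not part of any real library, and a real workflow would compare claims against actual abstracts from peer-reviewed literature.

```python
# Sketch: flag ChatGPT-generated claims that no trusted source appears to support.
# "trusted_sources" would in practice hold abstracts or snippets from peer-reviewed
# literature; here they are illustrative strings.

def flag_unverified(claims, trusted_sources, min_overlap=3):
    """Return claims sharing fewer than min_overlap words with every source."""
    unverified = []
    for claim in claims:
        claim_words = set(claim.lower().split())
        supported = any(
            len(claim_words & set(src.lower().split())) >= min_overlap
            for src in trusted_sources
        )
        if not supported:
            unverified.append(claim)
    return unverified

claims = [
    "Dogs possess approximately 300 million olfactory receptors in their noses.",
    "Dogs can reliably diagnose all cancers with 100 percent accuracy.",
]
sources = [
    "Canine noses contain roughly 300 million olfactory receptors, "
    "far more than the 6 million found in humans.",
]

# The second claim matches no trusted source, so it is flagged for manual review.
print(flag_unverified(claims, sources))
```

A crude word-overlap check like this can only surface candidates for scrutiny; a human reader still has to verify each flagged claim against the cited source.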
Bias and mitigation
ChatGPT, like any other AI model, can reflect biases present in the data it’s trained on. This means that the responses generated by ChatGPT may reflect cultural, social, or political biases. Researchers should be aware of these biases and take steps to mitigate them. They can do this by comparing ChatGPT’s responses with other sources, considering alternative perspectives, and evaluating the credibility of the information generated.
Contextual understanding and clarification
ChatGPT lacks contextual understanding, which means that it may not always understand the nuances of human communication. This can lead to misunderstandings or misinterpretations of questions or prompts. Researchers should be mindful of this limitation and clarify their queries accordingly. They should rephrase complex questions, provide additional context, and verify the accuracy of ChatGPT’s responses.
Depth of knowledge and supplementation
While ChatGPT has been trained on a vast amount of text, its knowledge in certain domains may be limited. Researchers should be aware of the depth of knowledge that ChatGPT possesses in the area they are investigating and supplement its responses with additional sources as needed. They should consult peer-reviewed articles, academic journals, and expert opinions to ensure the accuracy and reliability of their research.
Ethical considerations and proper citation
Using ChatGPT for research raises ethical concerns related to authorship, credit, and accountability. Researchers should ensure that they properly cite any information obtained from ChatGPT and acknowledge its contributions to their work. They should also consider the potential implications of using AI-generated content in academic research, such as the possibility of plagiarism or misattribution. Proper citation and acknowledgment of ChatGPT’s contributions help maintain academic integrity and avoid any potential ethical issues.
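One concrete, citation-related check is screening ChatGPT-supplied references before citing them, since the chatbot is known to invent plausible-looking sources. The sketch below is illustrative: the regular expression is a commonly used approximation of the modern DOI format, and the reference data is made up. A well-formed DOI can still be fabricated, so the definitive check remains resolving it (for example via doi.org) and reading the actual article.

```python
import re

# Sketch: screen ChatGPT-supplied references for missing or malformed DOIs.
# The pattern approximates the modern DOI format (prefix "10.", a 4-9 digit
# registrant code, then a suffix); it catches malformed identifiers but cannot
# prove that a well-formed DOI resolves to a real article.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def screen_references(references):
    """Split references into (plausible, suspect) lists by their 'doi' field."""
    plausible, suspect = [], []
    for ref in references:
        doi = ref.get("doi", "")
        (plausible if DOI_PATTERN.match(doi) else suspect).append(ref)
    return plausible, suspect

refs = [
    {"title": "Dogs As a Diagnostic Tool for Ill Health in Humans",
     "doi": "10.1234/abcd.5678"},  # well-formed (DOI itself is illustrative)
    {"title": "A reference invented by the chatbot", "doi": "not-a-doi"},
]

plausible, suspect = screen_references(refs)
print([r["title"] for r in suspect])
```

Suspect entries should be discarded or traced back to a real publication before they appear in a manuscript's reference list.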
ChatGPT can be a valuable tool that aids researchers in various academic fields by providing quick answers, generating ideas, and assisting with writing tasks. However, it’s essential to use it correctly to ensure the accuracy, reliability, and integrity of research output.
Researchers should fact-check and verify information generated by ChatGPT, mitigate biases, clarify queries, supplement its responses with additional sources, and properly cite any information obtained.
Featured image credit: Google DeepMind/Unsplash.