Medicine › Health Informatics

Artificial Intelligence in Healthcare and Education

Description

This cluster of papers explores the intersection of artificial intelligence and medicine, focusing on applications in healthcare, medical imaging, clinical decision support, and the ethical challenges associated with AI implementation. It delves into topics such as machine learning, big data, precision medicine, and the potential impact of AI on health equity.

Keywords

Artificial Intelligence; Machine Learning; Healthcare; Medical Imaging; Clinical Decision Support; Ethical Challenges; Big Data; Precision Medicine; Radiology; Health Equity

 Due largely to developments made in artificial intelligence and cognitive psychology during the past two decades, expertise has become an important subject for scholarly investigations. The Nature of Expertise displays the variety of domains and human activities to which the study of expertise has been applied, and reflects growing attention on learning and the acquisition of expertise. Applying approaches influenced by such disciplines as cognitive psychology, artificial intelligence, and cognitive science, the contributors discuss those conditions that enhance and those that limit the development of high levels of cognitive skill.
Artificial intelligence (AI) aims to mimic human cognitive functions. It is bringing a paradigm shift to healthcare, powered by the increasing availability of healthcare data and rapid progress in analytics techniques. We survey the current status of AI applications in healthcare and discuss its future. AI can be applied to various types of healthcare data (structured and unstructured). Popular AI techniques include machine learning methods for structured data, such as the classical support vector machine and neural network, and modern deep learning, as well as natural language processing for unstructured data. Major disease areas that use AI tools include cancer, neurology and cardiology. We then review in more detail the AI applications in stroke, in the three major areas of early detection and diagnosis, treatment, and outcome prediction and prognosis evaluation. We conclude with a discussion of pioneer AI systems, such as IBM Watson, and hurdles to real-life deployment of AI.
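The "neural network for structured data" technique named in this survey can be sketched in a few lines. Everything below (features, labels, learning rate) is a synthetic illustration, not material from the paper:

```python
# A single-neuron classifier (logistic regression, the simplest "neural
# network") trained by gradient descent on synthetic tabular data.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))               # 4 structured features
true_w = np.array([1.5, -2.0, 0.5, 0.0])
y = (X @ true_w > 0).astype(float)          # synthetic binary outcome

w, b, lr = np.zeros(4), 0.0, 0.5
for _ in range(500):                        # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = float(np.mean((p > 0.5) == y))   # training accuracy
```

A support vector machine would instead maximize the margin via a hinge loss, and deep learning stacks many such units into multi-layer networks; this sketch only shows the shared idea of fitting weights to structured data.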
 We need to consider the ethical challenges inherent in implementing machine learning in health care if its benefits are to be realized. Some of these challenges are straightforward, whereas others have less obvious risks but raise broader ethical concerns.
 Objective: The aim of this review was to summarize major topics in artificial intelligence (AI), including their applications and limitations in surgery. This paper reviews the key capabilities of AI to help surgeons understand and critically evaluate new AI applications and to contribute to new developments. Summary Background Data: AI is composed of various subfields that each provide potential solutions to clinical problems. Each of the core subfields of AI reviewed in this piece has also been used in other industries such as the autonomous car, social networks, and deep learning computers. Methods: A review of AI papers across computer science, statistics, and medical sources was conducted to identify key concepts and techniques within AI that are driving innovation across industries, including surgery. Limitations and challenges of working with AI were also reviewed. Results: Four main subfields of AI were defined: (1) machine learning, (2) artificial neural networks, (3) natural language processing, and (4) computer vision. Their current and future applications to surgical practice were introduced, including big data analytics and clinical decision support systems. The implications of AI for surgeons and the role of surgeons in advancing the technology to optimize clinical effectiveness were discussed. Conclusions: Surgeons are well positioned to help integrate AI into modern practice. Surgeons should partner with data scientists to capture data across phases of care and to provide clinical context, for AI has the potential to revolutionize the way surgery is taught and practiced with the promise of a future optimized for the highest quality patient care.
 Machine learning is used increasingly in clinical care to improve diagnosis, treatment selection, and health system efficiency. Because machine-learning models learn from historically collected data, populations that have experienced human and structural biases in the past-called protected groups-are vulnerable to harm by incorrect predictions or withholding of resources. This article describes how model design, biases in data, and the interactions of model predictions with clinicians and patients may exacerbate health care disparities. Rather than simply guarding against these harms passively, machine-learning systems should be used proactively to advance health equity. For that goal to be achieved, principles of distributive justice must be incorporated into model design, deployment, and evaluation. The article describes several technical implementations of distributive justice-specifically those that ensure equality in patient outcomes, performance, and resource allocation-and guides clinicians as to when they should prioritize each principle. Machine learning is providing increasingly sophisticated decision support and population-level monitoring, and it should encode principles of justice to ensure that models benefit all patients.
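The "equality in performance" principle described above can be made concrete by auditing a model's error rates separately for each group. The labels, predictions, and group names below are hypothetical stand-ins:

```python
# Auditing per-group true-positive rates (sensitivity) to detect
# unequal model performance across protected groups. Toy data only.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def tpr_by_group(y_true, y_pred, group):
    """True-positive rate within each group."""
    rates = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)     # positives in group g
        rates[g] = float((y_pred[mask] == 1).mean())
    return rates

rates = tpr_by_group(y_true, y_pred, group)
gap = abs(rates["A"] - rates["B"])   # large gap = unequal performance
```

In practice such an audit would use held-out data and confidence intervals, and the metric chosen (TPR, FPR, calibration) depends on which principle of distributive justice is being prioritized.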
 In medicine, artificial intelligence (AI) research is becoming increasingly focused on applying machine learning (ML) techniques to complex problems, and so allowing computers to make predictions from large amounts of patient data, by learning their own associations.1 Estimates of the impact of AI on the wider economy globally vary wildly, with a recent report suggesting a 14% effect on global gross domestic product by 2030, half of which coming from productivity improvements.2 These predictions create political appetite for the rapid development of the AI industry,3 and healthcare is a priority area where this technology has yet to be exploited.2 3 The digital health revolution described by Duggal et al 4 is already in full swing with the potential to ‘disrupt’ healthcare. Health AI research has demonstrated some impressive results,5–10 but its clinical value has not yet been realised, hindered partly by a lack of a clear understanding of how to quantify benefit or ensure patient safety, and increasing concerns about the ethical and medico-legal impact.11 This analysis is written with the dual aim of helping clinical safety professionals to critically appraise current medical AI research from a quality and safety perspective, and supporting research and development in AI by highlighting some of the clinical safety questions that must be considered if medical application of these exciting technologies is to be successful. Clinical decision support systems (DSS) are in widespread use in medicine and have had most impact providing guidance on the safe prescription of medicines,12 guideline adherence, simple risk screening13 or prognostic scoring.14 These systems use predefined rules, which have predictable behaviour and are usually shown to reduce clinical error,12 although sometimes inadvertently introduce safety issues themselves.15 16 Rules-based systems have also been developed to address diagnostic uncertainty17–19 

Emerging vulnerabilities demand new conversations
Abstract: Taiwan's National Health Insurance Research Database (NHIRD) exemplifies a population-level data source for generating real-world evidence to support clinical decisions and health care policy-making. As with all claims databases, there have been validity concerns about studies using the NHIRD, such as the accuracy of diagnosis codes and issues around unmeasured confounders. Endeavors to validate diagnosis codes and to develop methodologic approaches to address unmeasured confounders have largely increased the reliability of NHIRD studies. Recently, Taiwan's Ministry of Health and Welfare (MOHW) established a Health and Welfare Data Center (HWDC), a data repository site that centralizes the NHIRD and about 70 other health-related databases for data management and analyses. To strengthen the protection of data privacy, investigators are required to conduct on-site analysis at an HWDC through a remote connection to MOHW servers. Although the tight regulation of this on-site analysis has inconvenienced analysts and increased the time and costs required for research, the HWDC has created opportunities for enriched dimensions of study by linking across the NHIRD and other databases. In the near future, researchers will have greater opportunity to distill knowledge from the NHIRD linked to hospital-based electronic medical records databases containing unstructured patient-level information by using artificial intelligence techniques, including machine learning and natural language processing. We believe that the NHIRD with multiple data sources could represent a powerful research engine with enriched dimensions and could serve as a guiding light for real-world evidence-based medicine in Taiwan. Keywords: Health and Welfare Data Center of Taiwan, real-world data, big data analysis, validation, database cross-linkage
Abstract: The complexity and rise of data in healthcare mean that artificial intelligence (AI) will increasingly be applied within the field. Several types of AI are already being employed by payers, providers of care, and life sciences companies. The key categories of applications involve diagnosis and treatment recommendations, patient engagement and adherence, and administrative activities. Although there are many instances in which AI can perform healthcare tasks as well as or better than humans, implementation factors will prevent large-scale automation of healthcare professional jobs for a considerable period. Ethical issues in the application of AI to healthcare are also discussed.
Artificial intelligence (AI) is the term used to describe the use of computers and technology to simulate intelligent behavior and critical thinking comparable to a human being. John McCarthy first described the term AI in 1956 as the science and engineering of making intelligent machines. This descriptive article gives a broad overview of AI in medicine, dealing with the terms and concepts as well as the current and future applications of AI. It aims to develop knowledge of and familiarity with AI among primary care physicians. PubMed and Google searches were performed using the key words 'artificial intelligence'. Further references were obtained by cross-referencing the key articles. Recent advances in AI technology and its current applications in the field of medicine are discussed in detail. AI promises to change the practice of medicine in hitherto unknown ways, but many of its practical applications are still in their infancy and need to be explored and developed further. Medical professionals also need to understand and acclimatize themselves to these advances for better healthcare delivery to the masses.
Deep learning offers considerable promise for medical diagnostics. We aimed to evaluate the diagnostic accuracy of deep learning algorithms versus health-care professionals in classifying diseases using medical imaging. In this systematic review and meta-analysis, we searched Ovid-MEDLINE, Embase, Science Citation Index, and Conference Proceedings Citation Index for studies published from Jan 1, 2012, to June 6, 2019. Studies comparing the diagnostic performance of deep learning models and health-care professionals based on medical imaging, for any disease, were included. We excluded studies that used medical waveform data graphics material or investigated the accuracy of image segmentation rather than disease classification. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. Studies undertaking an out-of-sample external validation were included in a meta-analysis, using a unified hierarchical model. This study is registered with PROSPERO, CRD42018091176. Our search identified 31 587 studies, of which 82 (describing 147 patient cohorts) were included. 69 studies provided enough data to construct contingency tables, enabling calculation of test accuracy, with sensitivity ranging from 9·7% to 100·0% (mean 79·1%, SD 0·2) and specificity ranging from 38·9% to 100·0% (mean 88·3%, SD 0·1). An out-of-sample external validation was done in 25 studies, of which 14 made the comparison between deep learning models and health-care professionals in the same sample.
Comparison of the performance between health-care professionals in these 14 studies, when restricting the analysis to the contingency table for each study reporting the highest accuracy, found a pooled sensitivity of 87·0% (95% CI 83·0-90·2) for deep learning models and 86·4% (79·9-91·0) for health-care professionals, and a pooled specificity of 92·5% (95% CI 85·1-96·4) for deep learning models and 90·5% (80·6-95·7) for health-care professionals. Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals. However, a major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample. Additionally, poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address the specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology.
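The per-study contingency-table outcomes used throughout this meta-analysis reduce to two simple ratios. The counts below are invented for illustration, not figures from the review:

```python
# Sensitivity and specificity from a 2x2 diagnostic contingency table.
def sensitivity_specificity(tp, fn, fp, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a single study's contingency table
sens, spec = sensitivity_specificity(tp=87, fn=13, fp=8, tn=92)
```

Pooling such pairs across studies, as the review does, additionally requires a hierarchical model to account for between-study variation rather than simple averaging.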
 Abstract Background Artificial intelligence (AI) research in healthcare is accelerating rapidly, with potential applications being demonstrated across various domains of medicine. However, there are currently limited examples of such techniques being successfully deployed into clinical practice. This article explores the main challenges and limitations of AI in healthcare, and considers the steps required to translate these potentially transformative technologies from research to clinical practice. Main body Key challenges for the translation of AI systems in healthcare include those intrinsic to the science of machine learning, logistical difficulties in implementation, and consideration of the barriers to adoption as well as of the necessary sociocultural or pathway changes. Robust peer-reviewed clinical evaluation as part of randomised controlled trials should be viewed as the gold standard for evidence generation, but conducting these in practice may not always be appropriate or feasible. Performance metrics should aim to capture real clinical applicability and be understandable to intended users. Regulation that balances the pace of innovation with the potential for harm, alongside thoughtful post-market surveillance, is required to ensure that patients are not exposed to dangerous interventions nor deprived of access to beneficial innovations. Mechanisms to enable direct comparisons of AI systems must be developed, including the use of independent, local and representative test sets. Developers of AI algorithms must be vigilant to potential dangers, including dataset shift, accidental fitting of confounders, unintended discriminatory bias, the challenges of generalisation to new populations, and the unintended negative consequences of new algorithms on health outcomes. Conclusion The safe and timely translation of AI research into clinically validated and appropriately regulated systems that can benefit everyone is challenging. 
Robust clinical evaluation, using metrics that are intuitive to clinicians and ideally go beyond measures of technical accuracy to include quality of care and patient outcomes, is essential. Further work is required (1) to identify themes of algorithmic bias and unfairness while developing mitigations to address these, (2) to reduce brittleness and improve generalisability, and (3) to develop methods for improved interpretability of machine learning predictions. If these goals can be achieved, the benefits for patients are likely to be transformational.
 Abstract Objective To systematically examine the design, reporting standards, risk of bias, and claims of studies comparing the performance of diagnostic deep learning algorithms for medical imaging with that of expert clinicians. Design Systematic review. Data sources Medline, Embase, Cochrane Central Register of Controlled Trials, and the World Health Organization trial registry from 2010 to June 2019. Eligibility criteria for selecting studies Randomised trial registrations and non-randomised studies comparing the performance of a deep learning algorithm in medical imaging with a contemporary group of one or more expert clinicians. Medical imaging has seen a growing interest in deep learning research. The main distinguishing feature of convolutional neural networks (CNNs) in deep learning is that when CNNs are fed with raw data, they develop their own representations needed for pattern recognition. The algorithm learns for itself the features of an image that are important for classification rather than being told by humans which features to use. The selected studies aimed to use medical imaging for predicting absolute risk of existing disease or classification into diagnostic groups (eg, disease or non-disease). For example, raw chest radiographs tagged with a label such as pneumothorax or no pneumothorax and the CNN learning which pixel patterns suggest pneumothorax. Review methods Adherence to reporting standards was assessed by using CONSORT (consolidated standards of reporting trials) for randomised studies and TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) for non-randomised studies. Risk of bias was assessed by using the Cochrane risk of bias tool for randomised studies and PROBAST (prediction model risk of bias assessment tool) for non-randomised studies. 
Results Only 10 records were found for deep learning randomised clinical trials, two of which have been published (with low risk of bias, except for lack of blinding, and high adherence to reporting standards) and eight are ongoing. Of 81 non-randomised clinical trials identified, only nine were prospective and just six were tested in a real world clinical setting. The median number of experts in the comparator group was only four (interquartile range 2-9). Full access to all datasets and code was severely limited (unavailable in 95% and 93% of studies, respectively). The overall risk of bias was high in 58 of 81 studies and adherence to reporting standards was suboptimal (<50% adherence for 12 of 29 TRIPOD items). 61 of 81 studies stated in their abstract that performance of artificial intelligence was at least comparable to (or better than) that of clinicians. Only 31 of 81 studies (38%) stated that further prospective studies or trials were required. Conclusions Few prospective deep learning studies and randomised trials exist in medical imaging. Most non-randomised trials are not prospective, are at high risk of bias, and deviate from existing reporting standards. Data and code availability are lacking in most studies, and human comparator groups are often small. Future studies should diminish risk of bias, enhance real world clinical relevance, improve reporting and transparency, and appropriately temper conclusions. Study registration PROSPERO CRD42019123605.
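The review's description of CNNs learning "which pixel patterns suggest pneumothorax" rests on convolutional filters. The sketch below applies one hand-written edge-detecting filter to a tiny synthetic image; a real CNN learns thousands of such filters from labelled radiographs rather than having them specified by hand:

```python
# How a single convolutional filter responds to pixel patterns --
# the building block that CNNs learn automatically. Synthetic data.
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A 4x4 "image" with a vertical edge, and a vertical-edge detector
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)

response = conv2d(image, kernel)   # strongest exactly where the edge sits
```

Stacking many learned filters, nonlinearities, and pooling layers is what lets a CNN build its own representations from raw pixels, as the review notes.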
The term “artificial intelligence” (AI) refers to the idea of machines being capable of performing human tasks. A subdomain of AI is machine learning (ML), which “learns” intrinsic statistical patterns in data to eventually cast predictions on unseen data. Deep learning is an ML technique using multi-layer mathematical operations for learning and inferring on complex data like imagery. This succinct narrative review describes the application, limitations and possible future of AI-based dental diagnostics, treatment planning, and conduct, for example, image analysis, prediction making, record keeping, as well as dental research and discovery. AI-based applications will streamline care, relieving the dental workforce from laborious routine tasks, increasing health at lower costs for a broader population, and eventually facilitating personalized, predictive, preventive, and participatory dentistry. However, AI solutions have not, by and large, entered routine dental practice, mainly owing to (1) limited data availability, accessibility, structure, and comprehensiveness, (2) lacking methodological rigor and standards in their development, and (3) practical questions around the value and usefulness of these solutions, but also ethics and responsibility. Any AI application in dentistry should demonstrate tangible value by, for example, improving access to and quality of care, increasing efficiency and safety of services, empowering and enabling patients, supporting medical research, or increasing sustainability. Individual privacy, rights, and autonomy need to be put front and center; a shift from centralized to distributed/federated learning may address this while improving scalability and robustness. Lastly, trust in, and generalizability of, dental AI solutions need to be guaranteed; the implementation of continuous human oversight and standards grounded in evidence-based dentistry should be expected.
Methods to visualize, interpret, and explain the logic behind AI solutions will contribute (“explainable AI”). Dental education will need to accompany the introduction of clinical AI solutions by fostering digital literacy in the future dental workforce.
At the beginning of the artificial intelligence (AI)/machine learning (ML) era, expectations are high, and experts foresee that AI/ML shows potential for diagnosing, managing and treating a wide variety of medical conditions. However, the obstacles to implementation of AI/ML in daily clinical practice are numerous, especially regarding the regulation of these technologies. Therefore, we provide an insight into the currently available AI/ML-based medical devices and algorithms that have been approved by the US Food and Drug Administration (FDA). We aimed to raise awareness of the importance of regulatory bodies clearly stating whether a medical device is AI/ML based or not. Cross-checking and validating all approvals, we identified 64 AI/ML-based, FDA-approved medical devices and algorithms. Of those, only 29 (45%) mentioned any AI/ML-related expressions in the official FDA announcement. The majority (85.9%) were approved by the FDA with a 510(k) clearance, while 8 (12.5%) received de novo pathway clearance and one (1.6%) premarket approval (PMA) clearance. Most of these technologies, notably 30 (46.9%), 16 (25.0%), and 10 (15.6%), were developed for the fields of Radiology, Cardiology and Internal Medicine/General Practice respectively. We have launched the first comprehensive and open access database of strictly AI/ML-based medical technologies that have been approved by the FDA. The database will be constantly updated.
Abstract Background Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice. Methods Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI. Results Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. Looking at the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI.
We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health. Conclusions To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
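One family of techniques behind the explainability the authors call for is post-hoc feature attribution. Below is a minimal sketch of permutation importance on a toy model; the data, the "model", and the single informative feature are all invented for illustration:

```python
# Permutation importance: measure how much accuracy drops when one
# feature's values are shuffled, destroying its relationship to the
# outcome. Toy model and synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)          # only feature 0 matters

def model(X):                          # toy "trained" classifier
    return (X[:, 0] > 0).astype(int)

baseline = (model(X) == y).mean()      # accuracy before shuffling

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])        # destroy feature j
    importance.append(float(baseline - (model(Xp) == y).mean()))
# Feature 0 shows a large accuracy drop; features 1 and 2 show none.
```

Model-agnostic attributions like this are only one piece of the multidisciplinary picture the paper draws; they explain model behavior, not the legal or ethical questions surrounding its use.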
 Artificial intelligence (AI) is a powerful and disruptive area of computer science, with the potential to fundamentally transform the practice of medicine and the delivery of healthcare. In this review article, we outline recent breakthroughs in the application of AI in healthcare, describe a roadmap to building effective, reliable and safe AI systems, and discuss the possible future direction of AI augmented healthcare systems.
 ChatGPT is the world’s most advanced chatbot thus far. Unlike other chatbots, it can create impressive prose within seconds, and it has created much hype and doomsday predictions when it comes to student assessment in higher education and a host of other matters. ChatGPT is a state-of-the-art language model (a variant of OpenAI’s Generative Pretrained Transformer (GPT) language model) designed to generate text that can be indistinguishable from text written by humans. It can engage in conversation with users in a seemingly natural and intuitive way. In this article, we briefly tell the story of OpenAI, the organisation behind ChatGPT. We highlight the fundamental change from a not-for-profit organisation to a commercial business model. In terms of our methods, we conducted an extensive literature review and experimented with this artificial intelligence (AI) software. Our literature review shows our review to be amongst the first peer-reviewed academic journal articles to explore ChatGPT and its relevance for higher education (especially assessment, learning and teaching). After a description of ChatGPT’s functionality and a summary of its strengths and limitations, we focus on the technology’s implications for higher education and discuss what is the future of learning, teaching and assessment in higher education in the context of AI chatbots such as ChatGPT. We position ChatGPT in the context of current Artificial Intelligence in Education (AIEd) research, discuss student-facing, teacher-facing and system-facing applications, and analyse opportunities and threats. We conclude the article with recommendations for students, teachers and higher education institutions. Many of them focus on assessment.
Since its maiden release into the public domain on November 30, 2022, ChatGPT garnered more than one million subscribers within a week. The generative AI tool ChatGPT took the world by surprise with its sophisticated capacity to carry out remarkably complex tasks. The extraordinary abilities of ChatGPT to perform complex tasks within the field of education have caused mixed feelings among educators, as this advancement in AI seems to revolutionize existing educational praxis. This review article synthesizes recent extant literature to offer some potential benefits of ChatGPT in promoting teaching and learning. Benefits of ChatGPT include but are not limited to promotion of personalized and interactive learning and generating prompts for formative assessment activities that provide ongoing feedback to inform teaching and learning. The paper also highlights some inherent limitations of ChatGPT, such as generating wrong information, biases in data training which may augment existing biases, and privacy issues. The study offers recommendations on how ChatGPT could be leveraged to maximize teaching and learning. Policy makers, researchers, educators and technology experts could work together and start conversations on how these evolving generative AI tools could be used safely and constructively to improve education and support students' learning.
In less than 2 months, the artificial intelligence (AI) program ChatGPT has become a cultural sensation. It is freely accessible through a web portal created by the tool's developer, OpenAI. The program, which automatically creates text based on written prompts, is so popular that it's likely to be "at capacity right now" if you attempt to use it. When you do get through, ChatGPT provides endless entertainment. I asked it to rewrite the first scene of the classic American play Death of a Salesman, but to feature Princess Elsa from the animated movie Frozen as the main character instead of Willy Loman. The output was an amusing conversation in which Elsa, who has come home from a tough day of selling, is told by her son Happy, "Come on, Mom. You're Elsa from Frozen. You have ice powers and you're a queen. You're unstoppable." Mash-ups like this are certainly fun, but there are serious implications for generative AI programs like ChatGPT in science and academia.
 Background Chat Generative Pre-trained Transformer (ChatGPT) is a 175-billion-parameter natural language processing model that can generate conversation-style responses to user input. Objective This study aimed to evaluate the performance of ChatGPT on questions within the scope of the United States Medical Licensing Examination (USMLE) Step 1 and Step 2 exams, as well as to analyze responses for user interpretability. Methods We used 2 sets of multiple-choice questions to evaluate ChatGPT’s performance, each with questions pertaining to Step 1 and Step 2. The first set was derived from AMBOSS, a commonly used question bank for medical students, which also provides statistics on question difficulty and the performance on an exam relative to the user base. The second set was the National Board of Medical Examiners (NBME) free 120 questions. ChatGPT’s performance was compared to 2 other large language models, GPT-3 and InstructGPT. The text output of each ChatGPT response was evaluated across 3 qualitative metrics: logical justification of the answer selected, presence of information internal to the question, and presence of information external to the question. Results Of the 4 data sets, AMBOSS-Step1, AMBOSS-Step2, NBME-Free-Step1, and NBME-Free-Step2, ChatGPT achieved accuracies of 44% (44/100), 42% (42/100), 64.4% (56/87), and 57.8% (59/102), respectively. ChatGPT outperformed InstructGPT by 8.15% on average across all data sets, and GPT-3 performed similarly to random chance. The model demonstrated a significant decrease in performance as question difficulty increased (P=.01) within the AMBOSS-Step1 data set. We found that logical justification for ChatGPT’s answer selection was present in 100% of outputs of the NBME data sets. Internal information to the question was present in 96.8% (183/189) of all questions. 
The presence of information external to the question was 44.5% and 27% lower for incorrect answers relative to correct answers on the NBME-Free-Step1 (P&lt;.001) and NBME-Free-Step2 (P=.001) data sets, respectively. Conclusions ChatGPT marks a significant improvement in natural language processing models on the tasks of medical question answering. By performing at a greater than 60% threshold on the NBME-Free-Step-1 data set, we show that the model achieves the equivalent of a passing score for a third-year medical student. Additionally, we highlight ChatGPT’s capacity to provide logic and informational context across the majority of answers. These facts taken together make a compelling case for the potential applications of ChatGPT as an interactive medical education tool to support learning.
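The reported figures reduce to simple per-dataset accuracy checks against a passing threshold. A minimal sketch of that arithmetic follows; the counts mirror the figures in the abstract, while the helper names are hypothetical and not taken from the study's own code.

```python
# Illustrative scoring of a multiple-choice evaluation like the one described.
# Counts per data set are (correct, total) as reported in the abstract.

def accuracy(correct: int, total: int) -> float:
    """Fraction of questions answered correctly."""
    return correct / total

results = {
    "AMBOSS-Step1": (44, 100),
    "AMBOSS-Step2": (42, 100),
    "NBME-Free-Step1": (56, 87),
    "NBME-Free-Step2": (59, 102),
}

for name, (correct, total) in results.items():
    print(f"{name}: {accuracy(correct, total):.1%}")

# The study treats >60% on NBME-Free-Step1 as roughly equivalent
# to a passing score for a third-year medical student.
passing = accuracy(*results["NBME-Free-Step1"]) > 0.60
```

Running this reproduces the reported percentages (e.g. 56/87 is 64.4%), which is how the "greater than 60% threshold" claim follows directly from the raw counts.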
 We evaluated the performance of a large language model called ChatGPT on the United States Medical Licensing Exam (USMLE), which consists of three exams: Step 1, Step 2CK, and Step 3. ChatGPT performed at or near the passing threshold for all three exams without any specialized training or reinforcement. Additionally, ChatGPT demonstrated a high level of concordance and insight in its explanations. These results suggest that large language models may have the potential to assist with medical education, and potentially, clinical decision-making.
Abstract Artificial Intelligence (AI) technologies have been progressing constantly and becoming more visible in different aspects of our lives. One recent phenomenon is ChatGPT, a chatbot with a conversational artificial intelligence interface that was developed by OpenAI. As one of the most advanced artificial intelligence applications, ChatGPT has drawn much public attention across the globe. In this regard, this study examines ChatGPT in education, among early adopters, through a qualitative instrumental case study. Conducted in three stages, the first stage of the study reveals that the public discourse in social media is generally positive and there is enthusiasm regarding its use in educational settings. However, there are also voices urging caution about using ChatGPT in educational settings. The second stage of the study examines the case of ChatGPT through lenses of educational transformation, response quality, usefulness, personality and emotion, and ethics. In the third and final stage of the study, the investigation of user experiences through ten educational scenarios revealed various issues, including cheating, the honesty and truthfulness of ChatGPT, privacy, misleading content, and manipulation. The findings of this study provide several research directions that should be considered to ensure a safe and responsible adoption of chatbots, specifically ChatGPT, in education.
 Abstract This paper aims to highlight the potential applications and limits of a large language model (LLM) in healthcare. ChatGPT is a recently developed LLM that was trained on a massive dataset of text for dialogue with users. Although AI-based language models like ChatGPT have demonstrated impressive capabilities, it is uncertain how well they will perform in real-world scenarios, particularly in fields such as medicine where high-level and complex thinking is necessary. Furthermore, while the use of ChatGPT in writing scientific articles and other scientific outputs may have potential benefits, important ethical concerns must also be addressed. Consequently, we investigated the feasibility of ChatGPT in clinical and research scenarios: (1) support of the clinical practice, (2) scientific production, (3) misuse in medicine and research, and (4) reasoning about public health topics. Results indicated that it is important to recognize and promote education on the appropriate use and potential pitfalls of AI-based LLMs in medicine.
The use of artificial intelligence in academia is a hot topic in the education field. ChatGPT is an AI tool that offers a range of benefits, including increased student engagement, collaboration, and accessibility. However, it also raises concerns regarding academic honesty and plagiarism. This paper examines the opportunities and challenges of using ChatGPT in higher education, and discusses the potential risks and rewards of these tools. The paper also considers the difficulties of detecting and preventing academic dishonesty, and suggests strategies that universities can adopt to ensure ethical and responsible use of these tools. These strategies include developing policies and procedures, providing training and support, and using various methods to detect and prevent cheating. The paper concludes that while the use of AI in higher education presents both opportunities and challenges, universities can effectively address these concerns by taking a proactive and ethical approach to the use of these tools.
ChatGPT is an artificial intelligence (AI)-based conversational large language model (LLM). The potential applications of LLMs in health care education, research, and practice could be promising if the associated valid concerns are proactively examined and addressed. The current systematic review aimed to investigate the utility of ChatGPT in health care education, research, and practice and to highlight its potential limitations. Using the PRISMA guidelines, a systematic search was conducted to retrieve English records in PubMed/MEDLINE and Google Scholar (published research or preprints) that examined ChatGPT in the context of health care education, research, or practice. A total of 60 records were eligible for inclusion. Benefits of ChatGPT were cited in 51/60 (85.0%) records and included: (1) improved scientific writing and enhancing research equity and versatility; (2) utility in health care research (efficient analysis of datasets, code generation, literature reviews, saving time to focus on experimental design, and drug discovery and development); (3) benefits in health care practice (streamlining the workflow, cost saving, documentation, personalized medicine, and improved health literacy); and (4) benefits in health care education including improved personalized learning and the focus on critical thinking and problem-based learning. Concerns regarding ChatGPT use were stated in 58/60 (96.7%) records including ethical, copyright, transparency, and legal issues, the risk of bias, plagiarism, lack of originality, inaccurate content with risk of hallucination, limited knowledge, incorrect citations, cybersecurity issues, and risk of infodemics. The promising applications of ChatGPT can induce paradigm shifts in health care education, research, and practice. However, the embrace of this AI chatbot should be conducted with extreme caution considering its potential limitations.
As it currently stands, ChatGPT does not qualify to be listed as an author in scientific articles unless the ICMJE/COPE guidelines are revised or amended. An initiative involving all stakeholders in health care education, research, and practice is urgently needed. This will help to set a code of ethics to guide the responsible use of ChatGPT among other LLMs in health care and academia.
Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as challenges, often ethical and legal, and has the potential for both positive and negative impacts for organisations, society, and individuals. Offering multi-disciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT's capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and enhance business activities, such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and consequences of biases, misuse, and misinformation. However, opinion is split on whether ChatGPT's use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying skills, resources, and capabilities needed to handle generative AI; examining biases of generative AI attributable to training datasets and processes; exploring business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess accuracy of text produced by generative AI; and uncovering the ethical and legal issues in using generative AI across different contexts.
 Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.
 Abstract The advent of generative artificial intelligence (AI) offers transformative potential in the field of education. The study explores three main areas: (1) How did ChatGPT answer questions related to science education? (2) What are some ways educators could utilise ChatGPT in their science pedagogy? and (3) How has ChatGPT been utilised in this study, and what are my reflections about its use as a research tool? This exploratory research applies a self-study methodology to investigate the technology. Impressively, ChatGPT’s output often aligned with key themes in the research. However, as it currently stands, ChatGPT runs the risk of positioning itself as the ultimate epistemic authority, where a single truth is assumed without a proper grounding in evidence or presented with sufficient qualifications. Key ethical concerns associated with AI include its potential environmental impact, issues related to content moderation, and the risk of copyright infringement. It is important for educators to model responsible use of ChatGPT, prioritise critical thinking, and be clear about expectations. ChatGPT is likely to be a useful tool for educators designing science units, rubrics, and quizzes. Educators should critically evaluate any AI-generated resource and adapt it to their specific teaching contexts. ChatGPT was used as a research tool for assistance with editing and to experiment with making the research narrative clearer. The intention of the paper is to act as a catalyst for a broader conversation about the use of generative AI in science education.
ChatGPT is an AI tool that has sparked debates about its potential implications for education. We used the SWOT analysis framework to outline ChatGPT's strengths and weaknesses and to discuss its opportunities for and threats to education. The strengths include using a sophisticated natural language model to generate plausible answers, self-improving capability, and providing personalised and real-time responses. As such, ChatGPT can increase access to information, facilitate personalised and complex learning, and decrease teaching workload, thereby making key processes and tasks more efficient. The weaknesses are a lack of deep understanding, difficulty in evaluating the quality of responses, a risk of bias and discrimination, and a lack of higher-order thinking skills. Threats to education include a lack of understanding of the context, threatening academic integrity, perpetuating discrimination in education, democratising plagiarism, and declining high-order cognitive skills. We provide an agenda for educational practice and research in times of ChatGPT.
 Chatbots are computer programs with which one can have a conversation. In this article, the authors describe how the GPT-4 chatbot, which has been given a general education, could affect the practice of medicine.
In recent years, artificial intelligence (AI) and machine learning have been transforming the landscape of scientific research. Among these advances, chatbot technology has progressed tremendously in recent years, especially with ChatGPT emerging as a notable AI language model. This comprehensive review delves into the background, applications, key challenges, and future directions of ChatGPT. We begin by exploring its origins, development, and underlying technology, before examining its wide-ranging applications across industries such as customer service, healthcare, and education. We also highlight the critical challenges that ChatGPT faces, including ethical concerns, data biases, and safety issues, while discussing potential mitigation strategies. Finally, we envision the future of ChatGPT by exploring areas of further research and development, focusing on its integration with other technologies, improved human-AI interaction, and addressing the digital divide. This review offers valuable insights for researchers, developers, and stakeholders interested in the ever-evolving landscape of AI-driven conversational agents. This study explores the various ways ChatGPT has been revolutionizing scientific research, spanning from data processing and hypothesis generation to collaboration and public outreach. Furthermore, the paper examines the potential challenges and ethical concerns surrounding the use of ChatGPT in research, while highlighting the importance of striking a balance between AI-assisted innovation and human expertise. The paper presents several ethical issues in the existing computing domain and how ChatGPT can challenge such notions. This work also includes some biases and limitations of ChatGPT. It is worth noting that, despite several controversies and ethical concerns, ChatGPT has attracted remarkable attention from academia, research, and industry in a very short span of time.
 An artificial intelligence-based chatbot, ChatGPT, was launched in November 2022 and is capable of generating cohesive and informative human-like responses to user input. This rapid review of the literature aims to enrich our understanding of ChatGPT’s capabilities across subject domains, how it can be used in education, and potential issues raised by researchers during the first three months of its release (i.e., December 2022 to February 2023). A search of the relevant databases and Google Scholar yielded 50 articles for content analysis (i.e., open coding, axial coding, and selective coding). The findings of this review suggest that ChatGPT’s performance varied across subject domains, ranging from outstanding (e.g., economics) and satisfactory (e.g., programming) to unsatisfactory (e.g., mathematics). Although ChatGPT has the potential to serve as an assistant for instructors (e.g., to generate course materials and provide suggestions) and a virtual tutor for students (e.g., to answer questions and facilitate collaboration), there were challenges associated with its use (e.g., generating incorrect or fake information and bypassing plagiarism detectors). Immediate action should be taken to update the assessment methods and institutional policies in schools and universities. Instructor training and student education are also essential to respond to the impact of ChatGPT on the educational environment.
 This paper presents an analysis of the advantages, limitations, ethical considerations, future prospects, and practical applications of ChatGPT and artificial intelligence (AI) in the healthcare and medical domains. ChatGPT is an advanced language model that uses deep learning techniques to produce human-like responses to natural language inputs. It is part of the family of generative pre-training transformer (GPT) models developed by OpenAI and is currently one of the largest publicly available language models. ChatGPT is capable of capturing the nuances and intricacies of human language, allowing it to generate appropriate and contextually relevant responses across a broad spectrum of prompts. The potential applications of ChatGPT in the medical field range from identifying potential research topics to assisting professionals in clinical and laboratory diagnosis. Additionally, it can be used to help medical students, doctors, nurses, and all members of the healthcare fraternity to know about updates and new developments in their respective fields. The development of virtual assistants to aid patients in managing their health is another important application of ChatGPT in medicine. Despite its potential applications, the use of ChatGPT and other AI tools in medical writing also poses ethical and legal concerns. These include possible infringement of copyright laws, medico-legal complications, and the need for transparency in AI-generated content. In conclusion, ChatGPT has several potential applications in the medical and healthcare fields. However, these applications come with several limitations and ethical considerations which are presented in detail along with future prospects in medicine and healthcare.
 Abstract This study explores university students’ perceptions of generative AI (GenAI) technologies, such as ChatGPT, in higher education, focusing on familiarity, their willingness to engage, potential benefits and challenges, and effective integration. A survey of 399 undergraduate and postgraduate students from various disciplines in Hong Kong revealed a generally positive attitude towards GenAI in teaching and learning. Students recognized the potential for personalized learning support, writing and brainstorming assistance, and research and analysis capabilities. However, concerns about accuracy, privacy, ethical issues, and the impact on personal development, career prospects, and societal values were also expressed. According to John Biggs’ 3P model, student perceptions significantly influence learning approaches and outcomes. By understanding students’ perceptions, educators and policymakers can tailor GenAI technologies to address needs and concerns while promoting effective learning outcomes. Insights from this study can inform policy development around the integration of GenAI technologies into higher education. By understanding students’ perceptions and addressing their concerns, policymakers can create well-informed guidelines and strategies for the responsible and effective implementation of GenAI tools, ultimately enhancing teaching and learning experiences in higher education.
 Abstract Introduction Healthcare systems are complex and challenging for all stakeholders, but artificial intelligence (AI) has transformed various fields, including healthcare, with the potential to improve patient care and quality of life. Rapid AI advancements can revolutionize healthcare by integrating it into clinical practice. Reporting AI’s role in clinical practice is crucial for successful implementation by equipping healthcare providers with essential knowledge and tools. Research Significance This review article provides a comprehensive and up-to-date overview of the current state of AI in clinical practice, including its potential applications in disease diagnosis, treatment recommendations, and patient engagement. It also discusses the associated challenges, covering ethical and legal considerations and the need for human expertise. By doing so, it enhances understanding of AI’s significance in healthcare and supports healthcare organizations in effectively adopting AI technologies. Materials and Methods The current investigation analyzed the use of AI in the healthcare system with a comprehensive review of relevant indexed literature, such as PubMed/Medline, Scopus, and EMBASE, with no time constraints but limited to articles published in English. The focused question explores the impact of applying AI in healthcare settings and the potential outcomes of this application. Results Integrating AI into healthcare holds excellent potential for improving disease diagnosis, treatment selection, and clinical laboratory testing. AI tools can leverage large datasets and identify patterns to surpass human performance in several healthcare aspects. AI offers increased accuracy, reduced costs, and time savings while minimizing human errors. 
It can revolutionize personalized medicine, optimize medication dosages, enhance population health management, establish guidelines, provide virtual health assistants, support mental health care, improve patient education, and influence patient-physician trust. Conclusion AI can be used to diagnose diseases, develop personalized treatment plans, and assist clinicians with decision-making. Rather than simply automating tasks, AI is about developing technologies that can enhance patient care across healthcare settings. However, challenges related to data privacy, bias, and the need for human expertise must be addressed for the responsible and effective implementation of AI in healthcare.
Since its maiden release into the public domain on November 30, 2022, ChatGPT garnered more than one million subscribers within a week. The generative AI tool ChatGPT took the world by surprise with its sophisticated capacity to carry out remarkably complex tasks. The extraordinary abilities of ChatGPT to perform complex tasks within the field of education have caused mixed feelings among educators, as this advancement in AI seems to revolutionize existing educational praxis. This is an exploratory study that synthesizes recent extant literature to offer some potential benefits and drawbacks of ChatGPT in promoting teaching and learning. Benefits of ChatGPT include but are not limited to promotion of personalized and interactive learning and generating prompts for formative assessment activities that provide ongoing feedback to inform teaching and learning. The paper also highlights some inherent limitations of ChatGPT, such as generating wrong information and biases in data training, which may augment existing biases, as well as privacy issues. The study offers recommendations on how ChatGPT could be leveraged to maximize teaching and learning. Policy makers, researchers, educators and technology experts could work together and start conversations on how these evolving generative AI tools could be used safely and constructively to improve education and support students’ learning.
The paper examines the application of machine learning (ML) techniques in the field of cybersecurity with the aim of enhancing threat detection and response capabilities. The initial section of the article provides a comprehensive examination of cybersecurity, highlighting the increasing significance of proactive defensive strategies in response to evolving cyber threats. Subsequently, a comprehensive overview of prevalent online hazards is presented, emphasizing the imperative for more sophisticated methodologies to detect and mitigate such risks. The primary emphasis of this work is on the practical use of machine learning in the identification and detection of potential dangers in real-world contexts. This study examines three distinct cases: the detection of malware, attempts to breach security, and anomalous behavior exhibited by software. Each case study provides a detailed breakdown of the machine learning algorithms and approaches employed, demonstrating their effectiveness in identifying and mitigating risks. The paper further discusses the advantages and disadvantages associated with employing machine learning techniques for threat detection. One advantage of this approach is its ability to facilitate the examination of extensive datasets, the identification of intricate patterns, and prompt decision-making. However, discussions also revolve around difficulties such as false positives, adversarial attacks, and concerns over privacy.
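The anomaly-detection case the paper surveys can be sketched with a standard unsupervised approach. The snippet below is an illustrative Isolation Forest on synthetic traffic-like features; the feature choice, values, and contamination rate are assumptions for demonstration, not the paper's actual setup.

```python
# Illustrative anomaly-based threat detection with an Isolation Forest.
# The "traffic" features (bytes sent, connection duration) are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated normal traffic: moderate transfer sizes, short connections
normal = rng.normal(loc=[500.0, 2.0], scale=[100.0, 0.5], size=(500, 2))
# Two suspicious flows: unusually large transfers with long durations
anomalies = np.array([[5000.0, 30.0], [4500.0, 25.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
# predict() returns 1 for inliers and -1 for anomalies;
# both suspicious flows should be flagged as -1
print(model.predict(anomalies))
```

In practice such a detector would be trained on real flow logs and combined with the supervised malware and intrusion classifiers the paper describes; the trade-off discussed above (false positives vs. coverage) is tuned via the `contamination` parameter.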
Abstract: Clinical decision-making is complex, time-critical, and error-prone. Clinical decision support systems (CDSS) are computer-based applications that support physicians and other healthcare professionals in medical decisions, for example in diagnosis, treatment planning, or risk assessment. Rule-based, knowledge-based, and AI-supported CDSS approaches can be distinguished, with AI-supported CDSS gaining increasing importance. They enable the analysis of large volumes of data and deliver evidence-based recommendations to support clinical decisions. Challenges include data quality, integration into clinical workflows, and acceptance by healthcare professionals. Ethical and legal questions, particularly regarding data protection, also remain critical. Currently, AI-supported CDSS are applied successfully above all in radiology (e.g., pulmonary nodules, mammography) and cardiology, where they contribute to improved diagnostic accuracy and quality of care. In the future, chatbots and voicebots based on large language models (LLMs) may play an important role in shared decision making (SDM) by providing patients with targeted information and involving them in decision processes. AI-supported CDSS offer great potential for more effective, more efficient, and patient-centered healthcare. A prerequisite is the responsible use of AI that takes ethical and legal requirements into account. A suitable application architecture can support this, for example by linking AI models with domain-specific data, anonymizing queries, and validating the generated AI responses.
Magdalena Wallkamm , Jule Kobs , Cosima Höttger +1 more | THE MIND - Bulletin on Mind-Body Medicine Research
Abstract In primary care, time constraints and communication challenges can hinder patient understanding and treatment adherence. OpenNotes promotes transparency and improves patient engagement, health literacy, and self-efficacy by allowing patients to access their clinical notes. Patients are increasingly using artificial intelligence (AI) — particularly large language models (LLMs) — to help interpret these notes. Moreover, LLMs offer support for physicians by assisting with patient portal communication. However, there are risks of inappropriate or inconsistent responses. Therefore, further research is needed to explore the safe and effective application of AI in the primary care setting. Keywords: OpenNotes, primary care, artificial intelligence, patient activation, patient safety
Abstract: The historical development of artificial intelligence (AI) in healthcare since the 1960s shows a transformation from simple rule-based systems to complex, data-driven approaches. Early applications focused on decision support, whereas innovative systems use neural networks and machine learning to recognize patterns in large datasets. The integration of AI technologies into medicine has produced diverse fields of application, which can be divided into preventive, diagnostic, AI-assisted therapeutic, and administrative AI. Preventive AI analyzes risk factors to enable early interventions, while diagnostic AI contributes to faster and more precise diagnoses. AI-assisted therapy supports individualized treatments, for example through personalized medication. Administrative AI optimizes processes such as appointment scheduling, resource management, and billing. Despite their potential, AI systems face challenges, including the fragmentation of health data, a lack of standardization, data protection concerns, and algorithmic bias. Building interoperable data infrastructures and developing ethical guidelines are crucial to overcoming these hurdles. Future trends include the further development of foundation models (large AI models trained on broad datasets and usable for many purposes), the integration of structured and unstructured data, and greater personalization in medicine. In the long term, AI can improve the quality and efficiency of healthcare, provided there is close cooperation between practitioners, research, industry, and policymakers to ensure safe and sustainable implementation.
Background: Chronic diseases significantly burden healthcare systems due to the need for long-term treatment. Early diagnosis is critical for effective management and for minimizing risk. Traditional diagnostic approaches face various challenges regarding efficiency and cost. Digitized healthcare offers several opportunities for reducing human errors, improving clinical outcomes, and tracing data. Artificial Intelligence (AI) has emerged as a transformative tool in healthcare, and the evolution of generative AI represents a new wave. Large Language Models (LLMs), such as ChatGPT, are promising tools for enhancing diagnostic processes, but their potential in this domain remains underexplored. Methods: This study represents the first systematic evaluation of ChatGPT's performance in chronic disease prediction, specifically targeting heart disease and diabetes. It compares the effectiveness of zero-shot, few-shot, and chain-of-thought (CoT) reasoning, combined with feature selection techniques and prompt formulations, in disease prediction tasks. The two latest versions of GPT-4 (GPT-4o and GPT-4o-mini) are tested, and the results are evaluated against the best models from the literature. Results: GPT-4o significantly outperformed GPT-4o-mini in all scenarios in accuracy, precision, and F1-score. Moreover, a 5-shot learning strategy demonstrated superior performance to zero-shot, few-shot (3-shot and 10-shot), and various CoT reasoning strategies. The 5-shot strategy with GPT-4o achieved an accuracy of 77.07% in diabetes prediction using the Pima Indian Diabetes Dataset, 75.85% using the Frankfurt Hospital Diabetes Dataset, and 83.65% in heart disease prediction. Refining prompt formulations resulted in notable improvements, particularly for the heart dataset (a 5% performance increase using GPT-4o), emphasizing the importance of prompt engineering.
Conclusions: Even though ChatGPT does not outperform traditional machine learning and deep learning models, the findings highlight its potential as a complementary tool in disease prediction. Additionally, this work provides value by setting a clear performance baseline for future work on these tasks.
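The few-shot strategy evaluated above amounts to assembling labeled example cases into the prompt before the query case. The sketch below shows one plausible 5-shot prompt builder; the feature names, example values, and wording are illustrative assumptions, not the study's actual prompts or dataset rows.

```python
# Hypothetical sketch of 5-shot prompt assembly for tabular disease
# prediction. Feature names and values are invented for illustration.
def build_few_shot_prompt(examples, query, task="diabetes"):
    """Assemble a k-shot prompt: labeled example cases, then the query."""
    lines = [f"Predict whether the patient has {task}. Answer 'yes' or 'no'."]
    for features, label in examples:
        lines.append(f"Patient: {features} -> {label}")
    lines.append(f"Patient: {query} ->")  # model completes this line
    return "\n".join(lines)

shots = [
    ("glucose=148, bmi=33.6, age=50", "yes"),
    ("glucose=85, bmi=26.6, age=31", "no"),
    ("glucose=183, bmi=23.3, age=32", "yes"),
    ("glucose=89, bmi=28.1, age=21", "no"),
    ("glucose=137, bmi=43.1, age=33", "yes"),
]
prompt = build_few_shot_prompt(shots, "glucose=120, bmi=30.0, age=45")
print(prompt)
```

The resulting string would then be sent to the chat model; the study's comparison of 3-, 5-, and 10-shot settings corresponds to varying the length of the `shots` list.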
Data visualization creators often lack formal training, resulting in a knowledge gap in design practice. Large language models such as ChatGPT, with their vast internet-scale training data, offer transformative potential to address this gap. In this study, we used both qualitative and quantitative methods to investigate how well ChatGPT can address visualization design questions. First, we quantitatively compared ChatGPT-generated responses with anonymous online human replies to data visualization questions on the VisGuides user forum. Next, we conducted a qualitative user study examining the reactions and attitudes of practitioners toward ChatGPT as a visualization design assistant. Participants were asked to bring their visualizations and design questions and received feedback from both human experts and ChatGPT in randomized order. Our findings from both studies underscore ChatGPT’s strengths—particularly its ability to rapidly generate diverse design options—while also highlighting areas for improvement, such as nuanced contextual understanding and fluid interaction dynamics beyond the chat interface. Drawing on these insights, we discuss design considerations for future LLM-based design feedback systems.
Purpose While there are studies on this topic, there is a relative scarcity of research focusing on specific regions such as Jordan. This study therefore aims to gather insights from healthcare providers in Jordan concerning the advantages of integrating artificial intelligence (AI) into practice, their perspectives on AI applications in healthcare, and their views on the future role of AI in replacing key tasks within health services. Method An online questionnaire was used to collect data on demographics, attitudes towards using AI for tasks, and opinions on the benefits of AI adoption among healthcare professionals. For healthcare professionals with restricted internet access and for older respondents, soft copies of the questionnaire were provided and their responses collected. A one-way analysis of variance (ANOVA) was used to determine the associations between the determinants and the outcomes. Any test with a p value ≀0.05 was considered significant. Results A total of 612 healthcare professionals participated in the survey, with females comprising a majority of respondents (52.8%). The majority of respondents showed optimism about AI’s potential to improve and revolutionise the field, although there were concerns about AI replacing human roles. Generally, physical therapists, medical researchers and pharmacists displayed openness to incorporating AI into their work routines. Younger individuals aged <40 seemed accepting of AI in the domain. A significant portion of participants believed that AI could negatively impact job opportunities. Conclusion The results of this study suggest that healthcare professionals in Jordan hold receptive views on incorporating AI in the medical field, similar to their counterparts in developed nations. However, there is concern about the implications of AI for job stability and potential replacement.
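The one-way ANOVA described in the Method section can be reproduced in miniature with SciPy. The group labels and attitude scores below are invented for illustration only, not the study's survey data.

```python
# Illustrative one-way ANOVA comparing mean attitude scores (1-5 scale)
# across three professional groups. Scores are made-up example data.
from scipy.stats import f_oneway

pharmacists = [4.2, 3.9, 4.5, 4.1, 4.3]
physicians = [3.1, 3.4, 2.9, 3.2, 3.0]
therapists = [4.0, 4.4, 4.1, 4.6, 4.2]

f_stat, p_value = f_oneway(pharmacists, physicians, therapists)
# As in the study, p <= 0.05 would be reported as a significant
# difference between the groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

With the physicians' scores well below the other two groups, the between-group variance dominates the within-group variance, so the test comes out significant at the 0.05 threshold used in the study.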
This chapter systematically evaluates the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis–Artificial Intelligence (TRIPOD+AI, 2024) and Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis–Large Language Models (TRIPOD-LLM, 2025) guidelines for the transparent reporting of clinical artificial intelligence (AI) and machine learning (ML) based prediction models. It first sets out how methodological requirements have evolved since the classical version of TRIPOD. The second part presents detailed checklists for data management, model development and validation, interpretability, ethics, and bias analyses, based on the fourteen additional items of TRIPOD+AI and the principles of TRIPOD-LLM specific to large language models. It illustrates common errors, such as poor handling of missing data, data leakage, reliance on a single metric, and lack of external validation, and discusses strategies to prevent them. The final section emphasizes the theoretical guidance role of biostatisticians and sketches a future research agenda in light of federated learning, privacy-preserving AI, and fairness-oriented evaluation.
Research on Inflammatory Bowel Disease (IBD) involves integrating diverse and heterogeneous data sources, from clinical records to imaging and laboratory results, which presents significant challenges in data harmonization and exploration. These challenges are also reflected in the development of machine-learning applications, where inconsistencies in data quality, missing information, and variability in data formats can adversely affect the performance and generalizability of models. In this study, we describe the collection and curation of a comprehensive dataset focused on IBD, and we present a dedicated research platform. We focus on ethical standards, data protection, and seamless integration of different data types. We also discuss the challenges encountered, as well as the insights gained, during implementation.
Hip fractures are a major orthopedic problem, especially in the elderly population. They are usually diagnosed by clinical evaluation and imaging, especially X-rays. In recent years, new approaches to fracture detection have emerged with the use of artificial intelligence (AI) and deep learning techniques in medical imaging. In this study, we aimed to evaluate the diagnostic performance of ChatGPT-4o, an artificial intelligence model, in diagnosing hip fractures. A total of 200 anteroposterior pelvic X-ray images were retrospectively analyzed. Half of the images belonged to patients with surgically confirmed hip fractures, including both displaced and non-displaced types, while the other half represented patients with soft tissue trauma and no fractures. Each image was evaluated by ChatGPT-4o through a standardized prompt, and its predictions (fracture vs. no fracture) were compared against the gold-standard diagnoses. Diagnostic performance metrics such as sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), the receiver operating characteristic (ROC) curve, Cohen's kappa, and F1 score were calculated. ChatGPT-4o demonstrated an overall accuracy of 82.5% in detecting hip fractures on pelvic radiographs, with a sensitivity of 78.0% and specificity of 87.0%. The PPV and NPV were 85.7% and 79.8%, respectively. The area under the ROC curve (AUC) was 0.825, indicating good discriminative performance. Among 22 false-negative cases, 68.2% were non-displaced fractures, suggesting the model had greater difficulty identifying subtle radiographic findings. Cohen's kappa coefficient was 0.65, showing substantial agreement with actual diagnoses. Chi-square analysis revealed a strong correlation (χÂČ = 82.59, P < 0.001), while McNemar's test (P = 0.176) showed no significant asymmetry in error distribution.
ChatGPT-4o shows promising accuracy in identifying hip fractures on pelvic X-rays, especially when fractures are displaced. However, its sensitivity drops significantly for non-displaced fractures, leading to many false negatives. This highlights the need for caution when interpreting negative AI results, particularly when clinical suspicion remains high. While not a replacement for expert assessment, ChatGPT-4o may assist in settings with limited specialist access.
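The reported metrics follow directly from the study's implied confusion matrix: 100 fracture cases of which 78 were detected, and 100 non-fracture cases of which 87 were correctly cleared. A short sketch recomputing them:

```python
# Reconstructing the diagnostic metrics from the study's counts:
# 100 fracture cases (78 true positives, 22 false negatives) and
# 100 non-fracture cases (87 true negatives, 13 false positives).
TP, FN = 78, 22
TN, FP = 87, 13

sensitivity = TP / (TP + FN)                 # 78/100 = 0.780
specificity = TN / (TN + FP)                 # 87/100 = 0.870
ppv = TP / (TP + FP)                         # 78/91  ≈ 0.857
npv = TN / (TN + FN)                         # 87/109 ≈ 0.798
accuracy = (TP + TN) / (TP + TN + FP + FN)   # 165/200 = 0.825
f1 = 2 * ppv * sensitivity / (ppv + sensitivity)

print(f"sens={sensitivity:.3f} spec={specificity:.3f} "
      f"ppv={ppv:.3f} npv={npv:.3f} acc={accuracy:.3f} f1={f1:.3f}")
```

These values match the abstract's reported 78.0% sensitivity, 87.0% specificity, 85.7% PPV, 79.8% NPV, and 82.5% accuracy, confirming the internal consistency of the results.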
Objective: To investigate the current awareness of large language models (LLMs) among Chinese clinical physicians and analyze the application needs of cardiovascular specialists. Methods: This cross-sectional study utilized convenience sampling. In December 2023, a self-designed questionnaire was distributed to 7 980 clinical physicians, including 930 cardiologists. The survey collected demographic information, including work city (categorized as first-tier, new first-tier, second-tier, third-tier, and fourth-tier and below), hospital level, professional title, and department, as well as awareness of LLMs and application demands in clinical decision-making support, information filtering, and scientific research work. Differences in awareness and application requirements across geographic regions, hospital tiers, professional ranks, and medical departments were analyzed, and the specific demands of cardiovascular specialists were further examined. Results: Among the 7 980 clinical physicians, the awareness rate of LLMs was 76.3% (6 088/7 980), and the utilization rate was 11.8% (942/7 980). For the 930 cardiologists, the awareness rate was 78.5% (730/930) and the utilization rate was 11.4% (106/930). Significant differences in awareness and utilization rates were observed across city tiers, hospital grades, and departments (all P<0.05). No significant difference was found among professional titles (P=0.053). Among the 6 088 physicians aware of LLMs, demand rates for clinical information filtering, clinical decision support, and research assistance were 87.3% (5 312/6 088), 78.4% (4 774/6 088), and 75.8% (4 616/6 088), respectively. For the 730 cardiologists aware of LLMs, these rates were 91.0% (664/730), 79.2% (578/730), and 75.9% (554/730), respectively.
Significant differences in demands for clinical information filtering and research assistance were observed across city tiers, hospital grades, professional titles, and departments (all P<0.05), while no significant difference was noted for decision support demands across hospital grades (P=0.085). In clinical information screening and acquisition, cardiologists from different city tiers exhibited statistically significant differences in the demand for literature interpretation. Similarly, variations in the demand for conference summaries, expert biographies, healthcare policies, and social news were noted among cardiologists with different professional titles, while disparities in patient education and science popularization needs were identified across city tiers and hospital grades (all P<0.05). In clinical decision-making support, cardiologists from diverse city tiers and professional titles demonstrated distinct differences in guideline and consensus inquiries, and those from various city tiers showed varied demands for pharmaceutical and medical device-related content (all P<0.05). For research support, cardiologists across city tiers and professional titles exhibited statistically significant differences in trial protocol design requirements, while those from varying city tiers differed in literature search/analysis and research application procedures. Additionally, physicians from different hospital grades displayed divergent needs for data collection (all P<0.05). Conclusions: The adoption of LLM is significantly influenced by regional disparities, institutional resources, and professional backgrounds. Implementing targeted interventions, such as enhancing technical training, optimizing LLM functionalities, and improving accessibility across diverse healthcare settings, could encourage widespread integration of LLM into clinical practice. 
Such measures could ultimately enhance the quality and efficiency of medical services in China and foster innovations in healthcare delivery.
Silvana Neshkovska | ELOPE English Language Overseas Perspectives and Enquiries
This paper explores the transformative role of Artificial Intelligence (AI) tools, specifically ChatGPT, in the acquisition of English as a foreign language. With the rapid evolution of educational technology, AI-driven chatbots like ChatGPT offer innovative methodologies to augment language teaching and learning. This study examines the potential of ChatGPT to improve English language students’ writing abilities by providing suggestions, corrections and automated assistance. Through a review of existing literature and a discussion of the findings of recent studies, the paper seeks to highlight the benefits and risks of integrating AI tools into language education, especially in the context of writing. Insights gained from multiple studies suggest that while ChatGPT has the potential to significantly enhance language students’ writing skills in all phases of writing, by promoting engagement, motivation, and autonomy among learners, it also necessitates cautious use to ensure academic integrity and to prevent over-reliance, which, in turn, can stifle students’ learning capacities.
Benjamin H. Kann | International Journal of Radiation Oncology*Biology*Physics
 Background: The integration of artificial intelligence (AI) into healthcare research has the potential to enhance research capacity, streamline protocol development, and reduce barriers to engagement. Medway NHS Foundation Trust identified a plateau in homegrown research participation, particularly among clinicians with limited research experience. A generative AI-driven chatbot was introduced to assist researchers in protocol development by providing step-by-step guidance, prompting ethical and scientific considerations, and offering immediate feedback. Methods: The chatbot was developed using OpenAI’s GPT-3.5 architecture, customised with domain-specific training based on Trust guidelines, Health Research Authority (HRA) requirements, and Integrated Research Application System (IRAS) submission protocols. It was deployed to guide researchers through protocol planning, ensuring compliance with ethical and scientific standards. A mixed-methods evaluation was conducted using a qualitative-dominant sequential explanatory design. Seven early adopters completed a 10-item questionnaire (5-point Likert scales), followed by eight free-flowing interviews to achieve thematic saturation. Results: Since its launch, the chatbot has received an overall performance rating of 8.86/10 from the seven survey respondents. Users reported increased confidence in protocol development, reduced waiting times for expert review, and improved inclusivity in research participation across professional groups. However, limitations in usage due to free-tier platform constraints were identified as a key challenge. Conclusions: AI-driven chatbot tools show promise in supporting research engagement in busy clinical environments. Future improvements should focus on expanding access, optimising integration, and fostering collaboration among NHS institutions to enhance research efficiency and inclusivity.
Abstract Background An artificial intelligence chatbot called Chat Generative Pre-Trained Transformer (ChatGPT) was created by OpenAI. It gained a lot of interest and attention from the scientific and academic sectors after its November 2022 launch. Aim To identify the pros and cons of ChatGPT among nursing researchers in Egypt. Methods A descriptive (cross-sectional) research design was conducted on a convenient sample of 1001 nursing researchers from faculties of nursing related to the Supreme Council of Universities. Two tools were used: demographic and technical characteristics, and a nursing researchers’ opinion questionnaire. Results The majority of the participants (81.4%) thought that using ChatGPT had significant advantages, although 44.7% also disclosed cons. Almost two-thirds of nursing researchers stated that they are concerned about patient confidentiality (65.6%), that it could lead to incorrect conclusions (68.8%), and that it could have medicolegal repercussions (68.6%). As a result, they are reluctant to use AI chatbots in healthcare decisions (67.8%). Conclusion and recommendations ChatGPT has benefits but is at the same time associated with drawbacks and needs to be used wisely to avoid them. ChatGPT’s capacity to foster reflective practice should be enhanced to support decision-making and critical thinking while bridging the theoretical and practical knowledge gaps in nursing research.
Introduction The integration of large language models (LLMs) into personal health management presents transformative potential, but faces critical challenges in user experience (UX) design, ethical implementation, and clinical integration. Method This paper introduces a novel AI-driven journaling application, a functional prototype available open source, designed to encourage patient engagement through a natural language interface. This approach, termed “AI-assisted health journaling,” enables users to document health experiences in their own words while receiving real-time, context-aware feedback from an LLM. The prototype combines a personal health record with an LLM assistant, allowing for reflective self-monitoring and aiming to combine patient-generated data with clinical insights. Key innovations include a three-panel interface for seamless journaling, AI dialogue, and longitudinal tracking, alongside specialized modes for interacting with simulated healthcare expert personas. Result Preliminary insights from persona-based evaluations highlight the system's capacity to enhance health literacy through explainable AI responses while maintaining strict data localization and privacy controls. We propose five design principles for patient-centric AI health tools: (1) decoupling core functionality from LLM dependencies, (2) layered transparency in AI outputs, (3) adaptive consent for data sharing, (4) clinician-facing data summarization, and (5) compliance-first architecture. Discussion By transforming unstructured patient narratives into structured insights through natural language processing, this approach demonstrates how journaling interfaces could serve as a critical middleware layer in healthcare ecosystems, empowering patients as active partners in care while preserving clinical oversight.
Future research directions emphasize the need for rigorous trials evaluating impacts on care continuity, patient-provider communication, and long-term health outcomes across diverse populations.
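Design principle (1) above, decoupling core functionality from LLM dependencies, can be illustrated with a minimal sketch: the journal keeps working even when no model backend is reachable. The class and method names here are hypothetical, not the prototype's actual API.

```python
# Hypothetical sketch of an LLM-decoupled journaling core: the AI
# feedback layer sits behind an interface, and a fallback backend keeps
# the journal usable when no model is available.
from abc import ABC, abstractmethod

class FeedbackBackend(ABC):
    @abstractmethod
    def respond(self, entry: str) -> str: ...

class OfflineBackend(FeedbackBackend):
    """Fallback when no LLM is reachable; journaling still works."""
    def respond(self, entry: str) -> str:
        return "Entry saved. AI feedback is unavailable right now."

class Journal:
    def __init__(self, backend: FeedbackBackend):
        self.backend = backend
        self.entries: list[str] = []

    def add_entry(self, text: str) -> str:
        self.entries.append(text)          # core function: always succeeds
        return self.backend.respond(text)  # optional AI layer on top

journal = Journal(OfflineBackend())
print(journal.add_entry("Slept poorly, mild headache in the morning."))
```

An LLM-backed implementation of `FeedbackBackend` could then be swapped in without touching the record-keeping core, which is the point of keeping the dependency behind an interface.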
Rationale and Objectives: Artificial intelligence (AI) tools, particularly generative models, are increasingly used to depict clinical roles in healthcare. This study evaluates whether generative AI systems accurately differentiate between radiologists and medical radiation technologists (MRTs), two roles often confused by patients and providers. Materials and Methods: We assessed 1380 images and videos generated by 8 text-to-image/video AI models. Five raters evaluated task-role accuracy, attire, equipment, lighting, isolation, and demographics. Statistical tests compared differences across models and roles. Results: MRTs were depicted accurately in 82.0% of outputs, while only 56.2% of radiologist images/videos were role-appropriate. Among inaccurate radiologist depictions, 79.1% misrepresented MRT tasks. Radiologists were more often male (73.8%) and White (79.7%), while MRTs were more diverse. Stethoscope misuse, lack of disability or religious markers, and overuse of business attire for radiologists further reflected bias. Conclusion: Generative AI frequently misrepresents radiologist roles and demographics, reinforcing stereotypes and public confusion. Greater oversight and inclusion standards are needed to ensure equitable AI-generated healthcare content.
This article explores the integration of artificial intelligence (AI) in advanced practice nursing, highlighting its potential to enhance clinical decision-making, diagnostic accuracy, and workflow efficiency. AI offers substantial benefits for nurse practitioners (NPs), including improved patient outcomes and streamlined administrative tasks. However, ethical considerations, data quality, and biases present ongoing challenges to adopting AI technologies within healthcare.
This study examined the impact of ChatGPT on the academic writing skills of students taking English courses at a university in TĂŒrkiye. A qualitative research design was followed, in which semi-structured interviews were conducted with 12 active users of ChatGPT for academic writing. Thematic analysis revealed that students benefited most from improved writing fluency, organization, and grammatical accuracy. Many participants valued it for prompt feedback, coherence, and help with brainstorming. Drawbacks included over-reliance on AI, frequent lapses in subject-specific accuracy, and financial accessibility concerns, since premium features were often too expensive for students. While acknowledging its usefulness, the research highlighted the need for ChatGPT to be used in conjunction with traditional writing instruction to maintain integrity and develop autonomous writing skills. The study underscores how fundamental AI literacy and institutional support are to realizing the benefits of AI-assisted learning in academic writing.
This study explored the effectiveness of ChatGPT as a learning aid for Bachelor of Science in Information and Communications Technology (BSICT) students at Surigao del Norte State University (SNSU). With the increasing integration of artificial intelligence in education, the research focuses on two key features of ChatGPT: its search function and personalized learning capabilities. The primary aim is to assess how these features impact students' knowledge acquisition, skill development, and learning behaviour. A quantitative research design was employed using a structured questionnaire based on the Technology Acceptance Model (TAM) to evaluate perceived usefulness, ease of use, and satisfaction. Results revealed that ChatGPT helped students better understand complex topics, improve technical and cognitive skills, and foster positive behaviours such as consistency, curiosity, and independent study. Additionally, the tool was found to reduce manual effort and promote deeper engagement with learning materials. The study concludes that ChatGPT, when thoughtfully integrated into academic routines, can serve as an effective and accessible aid in enhancing student learning outcomes.
The use of artificial intelligence (AI) in the form of large language models (LLMs) by patients in gynecology and obstetrics has the potential to profoundly change care. AI-supported language models such as ChatGPT can make information on sensitive health topics such as menstruation, contraception, pregnancy, and childbirth accessible around the clock, thereby strengthening patients' knowledge and self-determination. The constant availability and comprehensible communication of medical information enable patients to prepare better for consultations and to ask their individual questions without shame. This fosters equal, collaborative communication between patients and medical professionals. At the same time, there are limits and challenges to using ChatGPT in medicine. The model can provide only general information and is unable to fully capture the complexity of individual health conditions or personalized medical decisions. Data-protection concerns and the risk of misunderstandings are also central hurdles that must be taken into account. ChatGPT therefore cannot and must not replace medical advice; it should be used only as a supplementary resource. Overall, the use of ChatGPT offers many opportunities for communication in gynecology and obstetrics as well. Future developments and research could help adapt the model further to patients' specific needs and establish it as a supportive resource in healthcare.
Artificial intelligence (AI) is reshaping health care, making AI literacy vital for nursing professionals. The NURSES framework (Navigate AI basics, Utilize AI strategically, Recognize AI pitfalls, Skills support, Ethics in action, and Shape the future) provides a structured approach to integrating AI into nursing. Nurses need to understand AI's fundamentals and its impact on clinical practice and patient care, both in the classroom and at the bedside. Nurses can use AI effectively and responsibly by recognizing benefits, such as enhanced decision-making, and challenges, like biased data. Ethical considerations should guide AI usage in health care, with a commitment to frequent skill development. Nurses play a pivotal role in shaping the future by ensuring AI is applied to benefit their organizations and, more importantly, healthcare workers and patients. This AI literacy guide is designed to empower nurses to navigate and help build the future of health care and AI with confidence and competence.
With medical technology innovation, robotic surgery has evolved from mechanical arm operations to AI-assisted decision-making, promoting deep integration of surgical medicine with engineering and computer science. This study employed CiteSpace software to conduct a bibliometric analysis of robotic surgical technology evolution literature from the Web of Science (2014-2024). Analysis of 520 publications revealed explosive growth from <5 annual papers (2014-2017) to 177 papers in 2024, representing a 3,540% increase. The dataset encompassed 2,968 authors, 1,957 institutions, and 266 journals across 77 countries/regions. The United States dominated with 191 publications (36.73%), followed by China (88, 16.92%) and the United Kingdom (71, 13.65%). The University of London emerged as the most productive institution (28 publications). Keyword burst analysis identified "artificial intelligence" (2019-2024) and "deep learning methods" (2022-2024) as dominant emerging themes. Computer science categories comprised >10% of publications, demonstrating strong interdisciplinary integration centered on surgery (31.54%) and biomedical engineering (12.31%). The field demonstrated clear evolution from basic instrument innovation to AI-driven, multi-disciplinary collaborative intelligent surgical systems, with Italy (centrality 0.18) and France (0.16) serving as critical knowledge brokers despite moderate publication volumes.
Purpose: This study introduces NeuroLens, a multimodal system designed to enhance anatomical recognition by integrating video with textual and voice inputs. It aims to provide an interactive learning platform for surgical trainees. Methods: NeuroLens employs a multimodal deep learning localization model trained on an Endoscopic Third Ventriculostomy dataset. It processes neuroendoscopic videos with textual or voice descriptions to identify and localize anatomical structures, displaying them as labeled bounding boxes. Usability was evaluated through a questionnaire completed by five participants, including surgical students and practicing surgeons. The questionnaire included both quantitative and qualitative sections. The quantitative part covered the System Usability Scale (SUS) and assessments of system appearance, functionality, and overall usability, while the qualitative section gathered user feedback and improvement suggestions. The localization model's performance was assessed using accuracy and mean Intersection over Union (mIoU) metrics. Results: The system demonstrates strong usability, with an average SUS score of 71.5, exceeding the threshold for acceptable usability. The localization achieves a predicted class accuracy of 100%, a localization accuracy of 79.69%, and a mIoU of 67.10%. Participant feedback highlights the intuitive design, organization, and responsiveness while suggesting enhancements like 3D visualization. Conclusion: NeuroLens integrates multimodal inputs for accurate anatomical detection and localization, addressing limitations of traditional training. Its strong usability and technical performance make it a valuable tool for enhancing anatomical learning in surgical training. While NeuroLens shows strong usability and performance, its small sample size limits generalizability. Further evaluation with more students and enhancements like 3D visualization will strengthen its effectiveness.
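The mIoU metric reported for the NeuroLens localization model averages the Intersection over Union between predicted and ground-truth bounding boxes. A minimal sketch of the per-box computation follows; the function name and the (x1, y1, x2, y2) box convention are illustrative assumptions, not details taken from the paper:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle; width/height clamp to 0 when the boxes are disjoint
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction offset from its ground-truth box by half its width and height:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

The reported mIoU of 67.10% would then be the mean of such per-detection scores over the evaluation set; a score of 1.0 means a perfect overlap and 0.0 means none.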