Generative artificial intelligence (AI), including large language models (LLMs), is poised to transform scientific research, enabling researchers to elevate their research productivity. This article presents a how-to guide for employing LLMs in academic settings, focusing on their unique strengths, constraints and implications through the lens of philosophy of science and epistemology. Using ChatGPT as a case study, I identify and elaborate on three attributes contributing to its effectiveness (intelligence, versatility and collaboration), accompanied by tips on crafting effective prompts, practical use cases and a living resource online (https://osf.io/8vpwu/). Next, I evaluate the limitations of generative AI and its implications for ethical use, equality and education. Regarding ethical and responsible use, I argue from technical and epistemic standpoints that there is no need to restrict the scope or nature of AI assistance, provided that its use is transparently disclosed. A pressing challenge, however, lies in detecting fake research, which can be mitigated by embracing open science practices, such as transparent peer review and sharing data, code and materials. Addressing equality, I contend that while generative AI may promote equality for some, it may simultaneously exacerbate disparities for others, an issue with potentially significant yet unclear ramifications as it unfolds. Lastly, I consider the implications for education, advocating for active engagement with LLMs and cultivating students' critical thinking and analytical skills. The how-to guide seeks to empower researchers with the knowledge and resources necessary to effectively harness generative AI while navigating the complex ethical dilemmas intrinsic to its application.
Diversity is the fuel of innovation. Global diversity, that is, geographical or international diversification, is indispensable for developing a true psychological science of human beings but remains poorly understood. We surveyed 68 top psychology journals in 10 subdisciplines and examined the global diversity of authors, editors (i.e., members of academic editorial teams), and journal ownership. Results show that (a) the global diversity of authorship, editorship, and ownership is low in top psychology journals, with the United States boasting outsized influences; (b) disparity intensifies along the hierarchy of authors, editors, and journal ownership and substantially differs between subdisciplines and journal types; (c) removing the United States markedly increases global diversity and eliminates differences in diversity between subdisciplines and between authorship and editorship; and (d) more authors and editors are from the journal's home country (vs. a foreign journal) and from the editor-in-chief's home country (vs. a journal with a foreign editor-in-chief), and the home-country biases are most pronounced in the United States: journals from the United States or with U.S. editors-in-chief have the lowest global diversity in authorship and editorship. These results provide substantial novel insights into the global diversity of psychology journals, with implications for a new diversity policy to stimulate the generation of variety and, by extension, innovation.
Discourse on gender diversity tends to overlook differences across levels of hierarchy (e.g., students, faculty, and editors) and critical dimensions (e.g., subdisciplines and geographical locations). Further ignored is its intersection with global diversity—representation from different countries. Here we document and contextualize gender disparity from perspectives of equal versus expected representation in journal editorship, by analyzing 68 top psychology journals in 10 subdisciplines. First, relative to ratios as students and faculty, women are underrepresented as editorial-board members (41%) and—unlike previous results based on one subfield—as editors-in-chief (34%) as well. Second, female ratios in editorship vary substantially across subdisciplines, genres of scholarship (higher in empirical and review journals than in method journals), continents/countries/regions (e.g., higher in North America than in Europe), and journal countries of origin (e.g., higher in American journals than in European journals). Third, under female (vs. male) editors-in-chief, women are much better represented as editorial-board members (47% vs. 36%), but the geographical diversity of editorial-board members and authorship decreases. These results reveal new local and broad contexts of gender diversity in editorship in psychology, with policy implications. Our approach also offers a methodological guideline for similar disparity research in other fields.
Academic writing is an indispensable yet laborious part of the research enterprise. This article maps out principles and methods for using generative artificial intelligence (AI), specifically large language models (LLMs), to elevate the quality and efficiency of academic writing. It introduces a human–AI collaborative framework that delineates the rationale (“why”), process (“how”), and nature (“what”) of AI engagement in writing. The framework pinpoints both short-term and long-term reasons for engagement, their underlying mechanisms (e.g., cognitive offloading and imaginative stimulation), and the need for a learning mindset to avoid overreliance on AI. It reveals the role of AI throughout the writing process, conceptualized through a two-stage model for human–AI collaborative writing, and the nature of AI assistance in writing, represented through a model of writing-assistance types and levels. Building on this framework, it then describes effective prompting techniques for incorporating AI into the writing routine—outlining, drafting, and editing—as well as strategies for maintaining rigor and adhering to ethics and policies. Ultimately, the prudent integration of AI into academic writing can ease the communication burden, empower authors, accelerate discovery, and promote diversity in science.
Studies in vision, psychology, and neuroscience often present visual stimuli on digital screens. Crucially, the appearance of visual stimuli depends on properties such as luminance and color, making it critical to measure them. Yet conventional luminance-measuring equipment is not only expensive but also onerous to operate (particularly for novices). Building on previous work, here we present an open-source integrated software package—PsyCalibrator (https://github.com/yangzhangpsy/PsyCalibrator)—that takes advantage of consumer hardware (SpyderX, Spyder5) and makes luminance/color measurement and gamma calibration accessible and flexible. Gamma calibration based on visual methods (without photometers) is also implemented. PsyCalibrator requires MATLAB (or its free alternative, GNU Octave) and works in Windows, macOS, and Linux. We first validated measurements from SpyderX and Spyder5 by comparing them with professional, high-cost photometers (ColorCAL MKII Colorimeter and Photo Research PR-670 SpectraScan). Validation results show (a) excellent accuracy in linear correction and luminance/color measurement and (b) for practical purposes, low measurement variances. We offer a detailed tutorial on using PsyCalibrator to measure luminance/color and calibrate displays. Finally, we recommend reporting templates to describe simple (e.g., computer-generated shapes) and complex (e.g., naturalistic images and videos) visual stimuli.
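As an illustration of what gamma calibration involves (independent of PsyCalibrator's actual API, and with synthetic measurements standing in for photometer readings), the following MATLAB/Octave sketch fits the standard gamma model to sampled luminances and builds a linearizing lookup table; in Psychtoolbox, such a table, replicated across the R, G, and B columns, would typically be loaded with Screen('LoadNormalizedGammaTable').

% Minimal gamma-calibration sketch; names and data are illustrative, not PsyCalibrator's API.
grayLevels  = linspace(0, 1, 17);                  % normalized input levels sampled on screen
measuredLum = 0.5 + 99.5 * grayLevels.^2.2;        % synthetic luminances (cd/m^2); a real run uses photometer readings

% Fit L = Lmin + (Lmax - Lmin) * v.^gamma by grid search over candidate gammas.
Lmin = measuredLum(1);  Lmax = measuredLum(end);
gammas = 1.0:0.01:3.5;
sse = arrayfun(@(g) sum((Lmin + (Lmax - Lmin) * grayLevels.^g - measuredLum).^2), gammas);
[~, best] = min(sse);
gammaHat = gammas(best);                           % ~2.2 for the synthetic data above

% Inverse lookup table: requesting fraction x of the luminance range maps to input x.^(1/gammaHat),
% so displayed luminance becomes approximately linear in the requested value.
x    = linspace(0, 1, 256);
clut = x.^(1 / gammaHat);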
Academic writing is an indispensable yet laborious part of the research enterprise. This Perspective maps out principles and methods for using generative artificial intelligence (AI), specifically large language models (LLMs), to elevate the quality and efficiency of academic writing. We introduce a human-AI collaborative framework that delineates the rationale (why), process (how), and nature (what) of AI engagement in writing. The framework pinpoints both short-term and long-term reasons for engagement and their underlying mechanisms (e.g., cognitive offloading and imaginative stimulation). It reveals the role of AI throughout the writing process, conceptualized through a two-stage model for human-AI collaborative writing, and the nature of AI assistance in writing, represented through a model of writing-assistance types and levels. Building on this framework, we describe effective prompting techniques for incorporating AI into the writing routine (outlining, drafting, and editing) as well as strategies for maintaining rigorous scholarship, adhering to varied journal policies, and avoiding overreliance on AI. Ultimately, the prudent integration of AI into academic writing can ease the communication burden, empower authors, accelerate discovery, and promote diversity in science.
How many researchers does it take to publish an article in top journals in neuroscience and psychology? Manually coding 42,580 articles spanning 1879-2021 from 32 journals, we examined the evolution of authorship size and its rate of change. Moreover, we assessed the driving forces behind these changes. We found that, starting from the 1950s but not earlier, the average authorship size per article in neuroscience and psychology has increased exponentially, growing by 50% and 31% over the last decade and reaching a record high of 10.4 and 4.8 authors in 2021, respectively. Single-authored articles have become a rarity today, particularly in primary research articles: 1.7% in neuroscience and 2.2% in psychology in 2019-2021 (vs. 5.7% and 11.2% in review articles). With the withering of sole authors rises a new type of authorship, group authors (e.g., a consortium). Group authorship was rare before 2000, but in 2019-2021, it appeared in 4.1% of articles in neuroscience, mostly in genetics, neuroimaging, and disease (outnumbering single-authored articles for the first time), and 0.7% in psychology, mostly in developmental and clinical research. The exponential inflation in authorship size could not be attributed to behaviors of professional editors in profit-oriented journals but aligns with a hybrid epistemic-behavioral-cultural account, an account that integrates multidimensional factors, including increased research complexity, the benefits of collaboration, the rise of government-funded research, changing norms in authorship practices, and biased incentives in evaluation. These findings suggest troubling implications for research reproducibility, innovations, equity/diversity, and ethics, calling for policy deliberations to address potential negative ramifications.
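For context, the reported decade-level growth can be translated into average annual growth rates under the exponential model the abstract describes; the short MATLAB/Octave sketch below does the arithmetic (the rates are implied by the reported figures, not additional results from the article).

% Annual growth rates implied by the reported 10-year increases in mean authorship size.
decadeGrowth = [0.50, 0.31];                    % neuroscience, psychology (reported)
annualRate   = (1 + decadeGrowth).^(1/10) - 1;  % about 4.1% and 2.7% per year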
Transparency in research reporting is crucial for evaluating the reproducibility and validity of research, including potential confounding factors (internal validity) and generalizability (external validity). Here we focus on visual stimuli—stimuli routinely used to elicit mental processes and behaviors—as a case study to systematically assess and evaluate current practices in reporting visual characteristics, including display setup, stimulus size, luminance/color, and contrast. Our first study scrutinized recent publications (N = 360) involving visual stimuli in leading journals in neuroscience and psychology—spanning vision, cognitive, clinical, developmental, and social/personality psychology. The second study examined recent publications (N = 114) on visual attentional bias in clinical samples, involving tasks known to be sensitive to visual properties. Analyzing the full text and supplemental materials of each publication, the two studies reveal a systematic lapse in current practices of reporting characteristics of visual stimuli. This reporting failure was not due to authors making visual materials available online, which was rare (<20%) and could not replace the reporting of visual characteristics. Failure to report stimulus properties hinders efforts to build cumulative science: 1) direct replications become challenging if not impossible; 2) internal validity may be compromised; and 3) generalizability across stimulus properties is prematurely assumed, and its evaluation is precluded in the first place. Our findings have immediate implications for journal policies on reporting practices, urging for explicit emphasis on transparent reporting of stimulus properties, particularly when perceptual components are involved. To assist in this effort, we provide reporting templates.
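To make concrete two of the properties such a reporting template covers, the sketch below computes stimulus size in degrees of visual angle and Michelson contrast from standard formulas; all numeric values are hypothetical, not drawn from the surveyed publications.

% Hypothetical example of quantities a stimulus-reporting template would include.
screenWidthCm = 53.0;  screenWidthPx = 1920;   % display geometry (hypothetical)
viewDistCm    = 60.0;                          % viewing distance in cm
stimWidthPx   = 200;                           % stimulus width in pixels

% Stimulus size in degrees of visual angle.
stimWidthCm  = stimWidthPx * (screenWidthCm / screenWidthPx);
stimWidthDeg = 2 * atand((stimWidthCm / 2) / viewDistCm);   % ~5.3 deg for these values

% Michelson contrast from maximum and minimum luminance (cd/m^2).
Lmax = 95;  Lmin = 5;
michelsonContrast = (Lmax - Lmin) / (Lmax + Lmin);          % 0.90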
Psychtoolbox is among the most popular open-source software packages for stimulus presentation and response collection. It provides flexibility and power in the choice of stimuli and responses, in addition to precision in control and timing. However, Psychtoolbox requires coding in MATLAB (or its equivalent, e.g., Octave). Scripting is challenging to learn and can lead to timing inaccuracies unwittingly. It can also be time-consuming and error prone even for experienced users. We have developed the first general-purpose graphical experiment builder for Psychtoolbox, called PsyBuilder, for both new and experienced users. The builder allows users to graphically implement sophisticated experimental tasks through intuitive drag and drop without the need to script. The output codes have built-in optimized timing precision and come with detailed comments to facilitate customization. Because users can see exactly how the code changes in response to modifications in the graphical interface, PsyBuilder can also bolster the understanding of programming in ways that were not previously possible. In this tutorial, we first describe its interface, then walk the reader through the graphical building process using a concrete experiment, and finally address important issues from the perspective of potential adopters.
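For readers unfamiliar with what such generated scripts replace, here is a minimal hand-written Psychtoolbox trial; the text stimulus and 0.5 s duration are illustrative choices, not PsyBuilder output. Scheduling the offset flip against the onset timestamp, rather than pausing the script, is one of the timing practices that hand-written code often gets wrong.

% Minimal hand-written Psychtoolbox trial; stimulus and duration are illustrative.
PsychDefaultSetup(2);                         % normalized colors, unified key names
screenId = max(Screen('Screens'));
win = Screen('OpenWindow', screenId, 0);      % full-screen black window
ifi = Screen('GetFlipInterval', win);         % duration of one refresh

DrawFormattedText(win, 'Press any key', 'center', 'center', 1);
onset = Screen('Flip', win);                  % stimulus onset timestamp

% Schedule the blanking flip relative to onset (half a frame early) instead of
% pausing the script, a common source of timing inaccuracies in hand-written code.
Screen('Flip', win, onset + 0.5 - 0.5 * ifi); % blank after ~0.5 s
KbStrokeWait;                                 % wait for a single key press
sca;                                          % close all windows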
The rapid adoption of generative artificial intelligence (AI) in scientific research, particularly large language models (LLMs), has outpaced the development of ethical guidelines, leading to a “Triple-Too” problem: too many high-level ethical initiatives, too abstract principles lacking contextual and practical relevance, and too much focus on restrictions and risks over benefits and utilities. Existing approaches—principlism (reliance on abstract ethical principles), formalism (rigid application of rules), and technological solutionism (overemphasis on technological fixes)—offer little practical guidance for addressing ethical challenges of AI in scientific research practices. To bridge the gap between abstract principles and day-to-day research practices, a user-centered, realism-inspired approach is proposed here. It outlines five specific goals for ethical AI use: 1) understanding model training and output, including bias mitigation strategies; 2) respecting privacy, confidentiality, and copyright; 3) avoiding plagiarism and policy violations; 4) applying AI beneficially compared to alternatives; and 5) using AI transparently and reproducibly. Each goal is accompanied by actionable strategies and realistic cases of misuse and corrective measures. I argue that ethical AI application requires evaluating its utility against existing alternatives rather than isolated performance metrics. Additionally, I propose documentation guidelines to enhance transparency and reproducibility in AI-assisted research. Moving forward, we need targeted professional development, training programs, and balanced enforcement mechanisms to promote responsible AI use while fostering innovation. By refining these ethical guidelines and adapting them to emerging AI capabilities, we can accelerate scientific progress without compromising research integrity.
Humans from different cultures define the self differently, but how cultures influence self-construal—beliefs about the self—remains elusive. Do cultures mold our way of perceiving, thinking, feeling, and acting, much into a habit through cultural practices and daily routines (habit mechanism)? Or do cultures merely modify the accessibility of a certain way of perceiving, thinking, feeling, and acting, just as one’s thoughts constantly change on a daily basis based on the current motive and situation (access mechanism)? A highly influential line of work in cultural priming—self-construal priming—suggests that reading different story primes (reflecting either independent or interdependent thought processes) or circling different types of pronouns in word-search primes (either independent [e.g., I, mine] or interdependent [e.g., we, ours] pronouns) can shift self-descriptions, value endorsement, and social obligation judgment (Gardner, Gabriel, & Lee, 1999). In this preregistered replication and extension study, despite efforts to maximize priming and to identify moderators, we found that self-construal priming, either through story primes or word-search primes, did not change the relative independence or interdependence of one’s self-construal in Chinese participants. Priming was also not modulated by gender, experience living abroad, rice vs. wheat farming legacy, or self-reported earnestness in answering the questions. Thus, the predominant access afforded by cultures is much less malleable than previously assumed, consistent with the habit but not access mechanism of cultural influences. To build a cumulative and reproducible cultural psychology, we call for direct replications of key findings in cultural priming and related literature.
Central in an experimental psychologist’s toolkit is software for stimulus presentation and response collection. An ideal package should combine flexibility and power in the choice of stimuli and responses, precision in control and timing, and ease of use. Psychtoolbox has remained the most popular open-source package, but it suffers from two major long-standing limitations: being script-based only, Psychtoolbox is challenging to learn and time-consuming to program and debug; scripting can also lead to timing inaccuracies unwittingly. We dissolve these limitations by developing the first general-purpose graphical experiment builder for Psychtoolbox, called PsyBuilder, which allows users to graphically implement sophisticated experimental tasks through intuitive drag-and-drop without the need for coding and with built-in optimized timing precision. PsyBuilder is poised to facilitate wider adoption of Psychtoolbox in teaching and research in lieu of proprietary software, fueling the open science movement. Furthermore, its built-in performance optimization and drag-and-drop design can improve data-collection efficiency and accuracy to accelerate scientific progress.
Computer programming (coding) is indispensable for researchers across disciplines, yet it remains challenging to learn and time-consuming to carry out. Generative AI, particularly large language models (LLMs), has the potential to transform coding into intuitive conversations, but best practices and effective workflows are only emerging. We dissect AI-based coding through three key lenses: the nature and role of LLMs in coding (why), six types of coding assistance they provide (what), and a five-step workflow in action with practical implementation strategies (how). Additionally, we address the limitations and future outlook of AI in coding. By offering actionable insights, this framework helps to guide researchers in effectively leveraging AI to enhance coding practices and education, accelerating scientific progress.
Objective: To assess the knowledge, attitudes, and practices (KAP) of medical stakeholders regarding the use of generative artificial intelligence (GAI) tools. Methods: A cross-sectional survey was conducted among stakeholders in medicine. Participants included researchers, clinicians, and medical journal editors with varying degrees of familiarity with GAI tools. The survey questionnaire comprised 40 questions covering four main dimensions: basic information, knowledge, attitudes, and practices related to GAI tools. Descriptive analysis, Pearson's correlation, and multivariable regression were used to analyze the data. Results: The overall awareness rate of GAI tools was 93.3%. Participants demonstrated moderate knowledge (mean score 17.71 ± 5.56), positive attitudes (mean score 73.32 ± 15.83), and reasonable practices (mean score 40.70 ± 12.86). Factors influencing knowledge included education level, geographic region, and attitudes (p < 0.05). Attitudes were influenced by work experience and knowledge (p < 0.05), while practices were driven by both knowledge and attitudes (p < 0.001). Participants from outside China scored higher in all dimensions compared to those from China (p < 0.001). Additionally, 74.0% of participants emphasized the importance of reporting GAI usage in research, and 73.9% advocated for naming the specific tool used. Conclusion: The findings highlight a growing awareness and generally positive attitude toward GAI tools among medical stakeholders, alongside the recognition of their ethical implications and the necessity for standardized reporting practices. Targeted training and the development of clear reporting guidelines are recommended to enhance the effective use of GAI tools in medical research and practice.
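As a sketch of the reported analysis pipeline (Pearson's correlation and multivariable regression), the MATLAB/Octave snippet below runs both steps on simulated stand-in scores; the variable names and data are hypothetical, and only the analysis steps mirror the Methods.

% Illustrative analysis sketch; scores are simulated stand-ins, not the study's data.
n = 200;
knowledge = 17.7 + 5.6 * randn(n, 1);                 % simulated knowledge scores
attitude  = 59 + 0.8 * knowledge + 14 * randn(n, 1);  % simulated attitude scores
practice  = 0.4 * knowledge + 0.3 * attitude + 10 * randn(n, 1);

% Pearson correlation between knowledge and attitudes.
R = corrcoef(knowledge, attitude);                    % R(1,2) is the correlation coefficient

% Multivariable regression: practices predicted by knowledge and attitudes.
X    = [ones(n, 1), knowledge, attitude];             % design matrix with intercept
beta = X \ practice;                                  % least-squares coefficient estimates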