Human collaborators effectively coordinate their actions through both verbal and non-verbal communication. We believe that the same should hold for human-robot teams. We propose a formalism that enables a robot to decide optimally between taking a physical action toward task completion and issuing an utterance to the human teammate. We focus on two types of utterances: verbal commands, where the robot asks the human to take a physical action, and state-conveying actions, where the robot informs the human about its internal state, which captures the information that the robot uses in its decision making. Human subject experiments show that enabling the robot to issue verbal commands is the most effective form of communicating objectives, while retaining user trust in the robot. Communicating information about the robot's state should be done judiciously, since many participants questioned the truthfulness of the robot statements when the robot did not provide sufficient explanation about its actions.
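The formalism itself is not spelled out in the abstract above; as a rough, assumption-labeled sketch of its general shape, the robot's choice between acting and speaking can be framed as picking the option with the highest expected value under its belief about the human teammate:

```python
# Minimal sketch (an assumption about the general shape, not the authors'
# implementation): the robot scores each candidate option -- a physical
# action, a verbal command, or a state-conveying utterance -- by its
# expected value under the robot's belief over the human's internal state.
from typing import Callable, Dict, List

def choose_option(
    belief: Dict[str, float],            # P(human internal state)
    options: List[str],                  # physical actions and utterances
    value: Callable[[str, str], float],  # value(option, human_state)
) -> str:
    """Return the option maximizing expected value under the belief."""
    return max(
        options,
        key=lambda o: sum(p * value(o, s) for s, p in belief.items()),
    )

# Hypothetical example: two possible human modes, three candidate options.
belief = {"follows_commands": 0.7, "acts_independently": 0.3}
options = ["move_part", "command_human", "convey_state"]
values = {
    ("move_part", "follows_commands"): 1.0,
    ("move_part", "acts_independently"): 0.4,
    ("command_human", "follows_commands"): 1.5,
    ("command_human", "acts_independently"): 0.2,
    ("convey_state", "follows_commands"): 0.8,
    ("convey_state", "acts_independently"): 1.1,
}
print(choose_option(belief, options, lambda o, s: values[(o, s)]))
```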
Significance: To study the COVID-19 pandemic, its effects on society, and measures for reducing its spread, researchers need detailed data on the course of the pandemic. Standard public health data streams suffer from inconsistent reporting and frequent, unexpected revisions. They also miss other aspects of a population's behavior that are worthy of consideration. We present an open database of COVID signals in the United States, measured at the county level and updated daily. It includes traditionally reported COVID cases and deaths alongside many other signals: measures of mobility, social distancing, internet search trends, self-reported symptoms, and patterns of COVID-related activity in deidentified medical insurance claims. The database provides all signals in a common, easy-to-use format, empowering both public health research and operational decision-making.
A robot operating in isolation needs to reason over the uncertainty in its model of the world and adapt its own actions to account for this uncertainty. Similarly, a robot interacting with people needs to reason over its uncertainty about the human's internal state, as well as over how this state may change as the human adapts to the robot. This paper summarizes our own work in this area, which describes the different ways that probabilistic planning and game-theoretic algorithms can enable such reasoning in robotic systems that collaborate with people. We start with a general formulation of the problem as a two-player game with incomplete information. We then articulate the different assumptions within this general formulation, and we explain how they lead to exciting and diverse robot behaviors in real-time interactions with actual human subjects, in a variety of manufacturing, personal robotics, and assistive care settings.
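For reference, a two-player game with incomplete information has a standard structure; the following skeleton (generic game-theoretic ingredients, with the mapping to the authors' specific model left as an assumption) shows the pieces such a formulation needs:

```python
# Generic skeleton of a two-player game with incomplete information
# (textbook structure; any resemblance to the specific model in the work
# above is an assumption). The human's "type" is the hidden internal state.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TwoPlayerIncompleteInfoGame:
    states: List[str]                            # observable world states
    robot_actions: List[str]
    human_actions: List[str]
    human_types: List[str]                       # hidden human internal states
    belief: Dict[str, float]                     # robot's prior over human types
    transition: Callable[[str, str, str], str]   # (state, a_robot, a_human) -> next state
    reward: Callable[[str, str, str], float]     # shared task reward

    def expected_reward(
        self, state: str, a_robot: str,
        human_policy: Callable[[str, str], str],  # (type, state) -> a_human
    ) -> float:
        """One-step reward averaged over the robot's belief about the human."""
        return sum(
            p * self.reward(state, a_robot, human_policy(t, state))
            for t, p in self.belief.items()
        )
```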
Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.
One of the challenges in conducting research at the intersection of the CHI and Human-Robot Interaction (HRI) communities is bridging the gap between the design research methods each community accepts. While HRI is focused on interaction with robots and includes design research in its scope, the community is not as accustomed to exploratory design methods as the CHI community. This workshop paper argues for bringing exploratory design, and specifically the Research through Design (RtD) methods that have been established in CHI over the past decade, to the foreground of HRI. RtD can enable design researchers in the field of HRI to conduct exploratory design work that asks what the right thing to design is, and to share it within the community.
The COVID-19 pandemic presented enormous data challenges in the United States. Policy makers, epidemiological modelers, and health researchers all require up-to-date data on the pandemic and relevant public behavior, ideally at fine spatial and temporal resolution. The COVIDcast API is our attempt to fill this need: operational since April 2020, it provides open access to both traditional public health surveillance signals (cases, deaths, and hospitalizations) and many auxiliary indicators of COVID-19 activity, such as signals extracted from de-identified medical claims data, massive online surveys, cell phone mobility data, and internet search trends. These are available at a fine geographic resolution (mostly at the county level) and are updated daily. The COVIDcast API also tracks all revisions to historical data, allowing modelers to account for the frequent revisions and backfill that are common for many public health data sources. All of the data is available in a common format through the API and accompanying R and Python software packages. This paper describes the data sources and signals, and provides examples demonstrating that the auxiliary signals in the COVIDcast API present information relevant to tracking COVID activity, augmenting traditional public health reporting and empowering research and decision-making.
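Since the paper mentions accompanying R and Python packages, a minimal fetch through the covidcast Python client might look like the sketch below (signal and argument names follow Delphi's public documentation at the time of writing; verify against the current docs before relying on them):

```python
# Minimal sketch using the covidcast Python client (pip install covidcast).
from datetime import date

import covidcast

# Fetch one week of a claims-based indicator at the county level.
data = covidcast.signal(
    "doctor-visits", "smoothed_adj_cli",
    start_day=date(2020, 5, 1), end_day=date(2020, 5, 7),
    geo_type="county",
)
print(data.head())  # one row per county per day, in a common format
```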
Artificial intelligence algorithms have been used to enhance a wide variety of products and services, including assisting human decision making in high-stakes contexts. However, these algorithms are complex and have trade-offs, notably between prediction accuracy and fairness to population subgroups. This makes it hard for designers to understand algorithms and design products or services in a way that respects users' goals, values, and needs. We propose a method to help designers and users explore algorithms, visualize their trade-offs, and select algorithms whose trade-offs are consistent with their goals and needs. We evaluated our method on the problem of predicting criminal defendants' likelihood to re-offend through (i) a large-scale Amazon Mechanical Turk experiment, and (ii) in-depth interviews with domain experts. Our evaluations show that our method can help designers and users of these systems better understand and navigate algorithmic trade-offs. This paper contributes a new way of giving designers the ability to understand and control the outcomes of the algorithmic systems they create.
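The paper's actual interface is not reproduced here; as an illustrative assumption about the underlying idea, plotting candidate models on an accuracy-versus-fairness plane makes the trade-off the method surfaces explicit:

```python
# Illustrative sketch (not the paper's tool): scatter hypothetical candidate
# models by accuracy and a subgroup-disparity metric so that a designer can
# compare their trade-offs at a glance.
import matplotlib.pyplot as plt

# Hypothetical models: (name, accuracy, disparity between subgroups).
models = [
    ("model_a", 0.72, 0.18),
    ("model_b", 0.68, 0.12),
    ("model_c", 0.66, 0.04),
]

fig, ax = plt.subplots()
for name, accuracy, disparity in models:
    ax.scatter(disparity, accuracy)
    ax.annotate(name, (disparity, accuracy))
ax.set_xlabel("subgroup disparity (lower is fairer)")
ax.set_ylabel("prediction accuracy")
ax.set_title("Accuracy vs. fairness across candidate models")
plt.show()
```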
Emotions that are perceived as "negative" are inherent in the human experience. Yet not much work in the field of HCI has looked into the role of these emotions in interaction with technology. As technology is becoming more social, personal and emotional by mediating our relationships and generating new social entities (such as conversational agents and robots), it is valuable to consider how it can support people's negative emotions and behaviors. Research in Psychology shows that interacting with negative emotions correctly can benefit well-being, yet the boundary between helpful and harmful is delicate. This workshop paper looks at the opportunities of designing for negative affect, and the challenge of "causing no harm" that arises in an attempt to do so.
Human collaborators effectively coordinate their actions through both verbal and non-verbal communication. We believe that the same should hold for human-robot teams. We propose a formalism that enables a robot to decide optimally between doing a task and issuing an utterance. We focus on two types of utterances: verbal commands, where the robot expresses how it wants its human teammate to behave, and state-conveying actions, where the robot explains why it is behaving this way. Human subject experiments show that enabling the robot to issue verbal commands is the most effective form of communicating objectives, while retaining user trust in the robot. Communicating "why" information should be done judiciously, since many participants questioned the truthfulness of the robot statements.
The implementation of responsible AI in an organization is inherently complex due to the involvement of multiple stakeholders, each with their own goals and responsibilities across the entire AI lifecycle. These responsibilities are often ambiguously defined and assigned, leading to confusion, miscommunication, and inefficiencies. Even when responsibilities are clearly defined and assigned to specific roles, the corresponding AI actors lack effective tools to support their execution. Toward closing these gaps, we present a systematic review and comprehensive meta-analysis of the current state of responsible AI tools, focusing on their alignment with specific stakeholder roles and their responsibilities in various AI lifecycle stages. We categorize over 220 tools according to the AI actors and stages they address. Our findings reveal significant imbalances across the stakeholder roles and lifecycle stages addressed. The vast majority of available tools have been created to support AI designers and developers, specifically during the data-centric and statistical modeling stages, while neglecting other roles such as institutional leadership, deployers, end-users, and impacted communities, and stages such as value proposition and deployment. The uneven distribution we describe here highlights critical gaps that currently exist in responsible AI governance research and practice. Our analysis reveals that despite the myriad of frameworks and tools for responsible AI, it remains unclear who within an organization, and when in the AI lifecycle, a tool applies. Furthermore, existing tools are rarely validated, leaving critical gaps in their usability and effectiveness. These gaps provide a starting point for researchers and practitioners to create more effective and holistic approaches to responsible AI development and governance.
As the population of older adults increases, there is a growing need to support them in aging in place. This need is exacerbated by the growing number of individuals struggling with cognitive decline and the shrinking number of younger people available to provide care for them. Artificially intelligent agents could provide cognitive support to older adults experiencing memory problems, and they could help informal caregivers with coordination tasks. To better understand this possible future, we conducted a speed-dating study with storyboards to reveal invisible social boundaries that might keep older adults and their caregivers from accepting and using agents. We found that healthy older adults worry that accepting agents into their homes might increase their chances of developing dementia. At the same time, they want immediate access to agents that know them well if they should experience cognitive decline. Older adults in the early stages of cognitive decline expressed a desire for agents that could ease the burden they saw themselves becoming for their caregivers. They also speculated that an agent who really knew them well might be an effective advocate for their needs when they were less able to advocate for themselves. That is, the agent may need to transition from being unremarkable to remarkable. Based on these findings, we present design opportunities and considerations for agents and articulate directions for future research.