Computer Science Computer Vision and Pattern Recognition

Music Technology and Sound Studies

Description

This cluster of papers focuses on the development and application of interactive evolutionary music systems and instruments, combining human evaluation with evolutionary computation (EC) optimization. It encompasses topics such as music generation, digital musical instruments, gesture recognition, machine learning, sound synthesis, and the intersection of art and technology in musical performance.

Keywords

Interactive Evolutionary Computation; Music Generation; Human-Computer Interaction; Digital Musical Instruments; Gesture Recognition; Machine Learning; Sound Synthesis; Musical Performance; Artificial Intelligence; Acoustic Ecology

Introduction Ch. 1 Perception, Ecology, and Music Ch. 2 Jimi Hendrix's Star Spangled Banner Ch. 3 Music, Motion and Subjectivity Ch. 4 Subject Position in Music Ch. 5 Autonomy/Heteronomy and Perceptual Style Ch. 6 The First Movement of Beethoven's Quartet in A Minor, Op. 132 Conclusions Notes References Index
There is more to sound recording than just recording sound. Far from being simply a tool for the preservation of music, the technology is a catalyst. In this award-winning text, Mark Katz provides a wide-ranging, deeply informative, consistently entertaining history of recording's profound impact on the musical life of the past century, from Edison to the Internet. Fully revised and updated, this new edition adds coverage of mashups and Auto-Tune, explores recent developments in file-sharing, and includes an expanded conclusion and bibliography.
Comparison of recent psychoacoustic data on consonance with those on roughness reveals that “psychoacoustic consonance” merely corresponds to the absence of roughness and is only slightly and indirectly correlated with musical intervals. Thus, psychoacoustic consonance cannot be considered as the basis of the sense of musical intervals. The basis of that sense seems to be provided by the concept of virtual pitch. This concept is introduced with a model. The concept accounts for many psychoacoustic and musical phenomena, e.g., the ambiguity of pitch of complex tones, the “residue,” the pitch of inharmonic signals, the dominance of certain harmonics, pitch shifts, the sense for musical intervals, octave periodicity, octave enlargement, “stretching” of musical scales, and the “tonal meaning” of chords in music.
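As a rough illustration of the virtual-pitch idea (a toy subharmonic-matching estimator, not Terhardt's full model), the sketch below lets hypothetical spectral peaks vote for candidate fundamentals of which they are near-integer harmonics; the frequency range, tolerance, and harmonic weighting are assumptions made for the example.

```python
import numpy as np

def virtual_pitch_candidates(peaks, f0_range=(50.0, 500.0), tol=0.03):
    """Toy subharmonic-matching pitch estimate (not Terhardt's actual model).

    peaks: list of (frequency_hz, amplitude) spectral peaks.
    Returns candidate fundamentals sorted by summed 'evidence', i.e. the
    weighted amplitude of peaks lying close to an integer multiple of f0.
    """
    candidates = np.arange(f0_range[0], f0_range[1], 1.0)
    scores = np.zeros_like(candidates)
    for i, f0 in enumerate(candidates):
        for freq, amp in peaks:
            harmonic = round(freq / f0)
            if harmonic >= 1 and abs(freq - harmonic * f0) / freq < tol:
                scores[i] += amp / harmonic   # lower harmonics weigh more
    order = np.argsort(scores)[::-1]
    return [(float(candidates[j]), float(scores[j])) for j in order[:5]]

# Example of the "residue": harmonics 3-6 of 200 Hz, fundamental itself absent,
# yet candidates around 200 Hz collect the most evidence.
print(virtual_pitch_candidates([(600, 1.0), (800, 0.9), (1000, 0.8), (1200, 0.7)]))
```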
Foreword (Albert Bregman) An Introduction to Auditory Display (Gregory Kramer) Delivery of Information Through Sound (James A. Ballas) Perceptual Principles in Sound Grouping (Sheila M. Williams) Spatial Sound and Sonification (Elizabeth M. Wenzel) Pattern and Reference in Auditory Display (Robin Bargar) Environments for Exploring Auditory Representations of Multidimensional Data (Stuart Smith, Ronald M. Pickett, and Marian G. Williams) Some Organizing Principles for Representing Data with Sound (Gregory Kramer) Sound Synthesis Algorithms for Auditory Data Representations (Carla Scaletti) Sonnet: Audio-Enhanced Monitoring and Debugging (David H. Jameson) A Framework for Sonification Design (Tara M. Madhyastha and Daniel A. Reed) Synchronization of Visual and Aural Parallel Program Performance Data (Jay Alan Jackson and Joan M. Francioni) Sonifying the Body Electric: Superiority of an Auditory over a Visual Display in a Complex, Multivariate System (W. Tecumseh Fitch and Gregory Kramer) Auditory Display of Computational Fluid Dynamics Data (Kevin McCabe and Akil Rangwalla) Musical Structures in Data from Chaotic Attractors (Gottfried Mayer-Kress, Robin Bargar, and Insook Choi) Listening to the Earth Sing (Chris Hayward) Multivariate Data Mappings (Sara Bly).
Kac, M. (1966). Can One Hear the Shape of a Drum? The American Mathematical Monthly, Vol. 73, No. 4, Part 2, pp. 1-23.
Listening combines broad coverage of acoustics, speech and music perception, psychophysics, and auditory physiology with a coherent theoretical orientation in a lively and accessible introduction to the perception of music and speech events. Handel treats the production and perception of music and speech in parallel throughout the text, arguing that their production and perception follow identical principles; music and speech share the same formal properties, involve the same cognitive mechanisms, and cannot exist in separate modules. The way that a sound is produced determines the physical properties of the acoustic wave. These properties in turn lead to the perception of the event. The initial chapters take up physical processes, including a section on characterization of sound and discussion of the way instruments and speech produce musical sound. Handel explains how the environment affects perceived sounds, including reflection, reverberation, diffraction, and the Doppler effect. Subsequent chapters take up psychological processes: partitioning smeared sounds into discrete events, identifying sound sources, the units and phrases of speech and music, and speech and music rhythms. The final chapter provides a detailed treatment of the physiology and neurophysiology of the auditory system. All of the author's explanations are coherent and clear, and this strategy includes discussing particular pieces of research in detail rather than covering many things superficially. Handel analyzes causes as well as describing phenomena and sets out for the reader the difficulties inherent in the research methods he discusses. He defines the physical, musical, and psychological terms used, even the most basic ones, and covers all of the experimental methods and statistical procedures in the text. Stephen Handel is Professor of Psychology at the University of Tennessee. A Bradford Book.
Lombard noted in 1911 that a speaker changes his voice level similarly when the ambient noise level increases, on the one hand, and when the level at which he hears his own voice (his sidetone) decreases, on the other. We can now state the form of these two functions, show that they are related to each other and to the equal-sensation function for imitating speech or noise loudness, and account for their form in terms of the underlying sensory scales and the hypothesis that the speaker tries to maintain a speech-to-noise ratio favorable for communication. Perturbations in the timing and spectrum of sidetone also lead the speaker to compensate for the apparent deterioration in his intelligibility. Such compensations reflect direct and indirect audience control of speech, rather than its autoregulation by sidetone. When not harassed by prying experimenters or an unfavorable acoustic environment, the speaker need no more listen to himself while speaking than he need speak to himself while listening.
A 4-b/sample transform coder is designed using a psychoacoustically derived noise-masking threshold that is based on the short-term spectrum of the signal. The coder has been tested in a formal subjective test involving a wide selection of monophonic audio inputs. The signals used in the test were of 15-kHz bandwidth, sampled at 32 kHz. The bit rate of the resulting coder was 128 kb/s. The subjective test shows that the coded signal could not be distinguished from the original at that bit rate. Subsequent informal work suggests that a bit rate of 96 kb/s may maintain transparency for the set of inputs used in the test.
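To make the notion of a signal-dependent masking threshold concrete, here is a deliberately crude sketch (nothing like the paper's actual psychoacoustic model): estimate a short-term power spectrum, smooth it across frequency as a stand-in for masking spread, lower it by a fixed offset, and flag which bins stand above the result. Frame length, smoothing width, and offset are all invented for the example.

```python
import numpy as np

def crude_masking_threshold(frame, offset_db=15.0, smooth_bins=9):
    """Toy, signal-dependent 'masking threshold' for one audio frame.

    Only illustrates the general shape of the idea (spectrum -> spread ->
    offset), not the psychoacoustic model of the transform coder above.
    """
    window = np.hanning(len(frame))
    power = np.abs(np.fft.rfft(frame * window)) ** 2
    kernel = np.ones(smooth_bins) / smooth_bins
    spread = np.convolve(power, kernel, mode="same")        # crude spreading
    threshold_db = 10 * np.log10(spread + 1e-12) - offset_db
    spectrum_db = 10 * np.log10(power + 1e-12)
    return spectrum_db, threshold_db, spectrum_db > threshold_db

# Example: a 440 Hz tone plus weak noise, one 512-sample frame at 32 kHz.
fs = 32000
t = np.arange(512) / fs
frame = np.sin(2 * np.pi * 440 * t) + 0.001 * np.random.randn(512)
_, _, above = crude_masking_threshold(frame)
print(f"{above.sum()} of {above.size} bins lie above the crude threshold")
```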
In this influential book on the subject of rhythm, the authors develop a theoretical framework based essentially on a Gestalt approach, viewing rhythmic experience in terms of pattern perception or groupings. Musical examples of increasing complexity are used to provide training in the analysis, performance, and writing of rhythm, with exercises for the student's own work. "This is a path-breaking work, important alike to music students and teachers, but it will make profitable reading for performers, too." (New York Times Book Review) "When at some future time theories of rhythm are as well understood, and as much discussed, as theories of harmony and counterpoint, they will rest in no small measure on the foundations laid by Cooper and Meyer in this provocative dissertation on the rhythmic structure of music." (Notes) "... a significant, courageous and, on the whole, successful attempt to deal with a very controversial and neglected subject. Certainly no one who takes the time to read it will emerge from the experience unchanged or unmoved." (Journal of Music Theory) The late Grosvenor W. Cooper, author of Learning to Listen, was professor of music at the University of California at Santa Cruz.
The art of music is no longer limited to the sounding models of instruments and voices. Electroacoustic music opens access to all sounds, a bewildering sonic array ranging from the real to the surreal and beyond. For listeners the traditional links with physical sound-making are frequently ruptured: electroacoustic sound-shapes and qualities frequently do not indicate known sources and causes. Gone are the familiar articulations of instruments and vocal utterance; gone is the stability of note and interval; gone too is the reference of beat and metre. Composers also have problems: how to cut an aesthetic path and discover a stability in a wide-open sound world, how to develop appropriate sound-making methods, how to select technologies and software.
This article describes a system for the evolution and coevolution of virtual creatures that compete in physically simulated three-dimensional worlds. Pairs of individuals enter one-on-one contests in which they contend to gain control of a common resource. The winners receive higher relative fitness scores allowing them to survive and reproduce. Realistic dynamics simulation including gravity, collisions, and friction restricts the actions to physically plausible behaviors. The morphology of these creatures and the neural systems for controlling their muscle forces are both genetically determined, and the morphology and behavior can adapt to each other as they evolve simultaneously. The genotypes are structured as directed graphs of nodes and connections, and they can efficiently but flexibly describe instructions for the development of creatures' bodies and control systems with repeating or recursive components. When simulated evolutions are performed with populations of competing creatures, interesting and diverse strategies and counterstrategies emerge.
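The relative-fitness scheme described above can be illustrated generically. The sketch below evolves plain parameter vectors with a stand-in `contest` function (here just distance to a hidden target), so it only shows the one-on-one tournament and selection structure, not Sims' physics simulation or genuinely coevolutionary arms-race dynamics.

```python
import random

def contest(a, b):
    """Stand-in for a physically simulated one-on-one contest:
    the genome closer to a hidden target wins."""
    target = [0.8, -0.3, 0.5]
    dist = lambda g: sum((x - t) ** 2 for x, t in zip(g, target))
    return a if dist(a) < dist(b) else b

def mutate(genome, sigma=0.1):
    return [x + random.gauss(0, sigma) for x in genome]

def evolve(pop_size=20, generations=50):
    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        wins = {i: 0 for i in range(pop_size)}
        for i in range(pop_size):                 # each individual meets a few opponents
            for _ in range(3):
                j = random.randrange(pop_size)
                if j != i:
                    winner = contest(population[i], population[j])
                    wins[i if winner is population[i] else j] += 1
        ranked = sorted(range(pop_size), key=lambda i: wins[i], reverse=True)
        parents = [population[i] for i in ranked[: pop_size // 2]]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return population[0]

print("example evolved genome:", [round(x, 2) for x in evolve()])
```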
The appropriate use of nonspeech sounds has the potential to add a great deal to the functionality of computer interfaces. Sound is a largely unexploited medium of output, even though it plays an integral role in our everyday encounters with the world, a role that is complementary to vision. Sound should be used in computers as it is in the world, where it conveys information about the nature of sound-producing events. Such a strategy leads to auditory icons, which are everyday sounds meant to convey information about computer events by analogy with everyday events. Auditory icons are an intuitively accessible way to use sound to provide multidimensional, organized information to users. These ideas are instantiated in the SonicFinder, which is an auditory interface I developed at Apple Computer, Inc. In this interface, information is conveyed using auditory icons as well as standard graphical feedback. I discuss how events are mapped to auditory icons in the SonicFinder, and illustrate how sound is used by describing a typical interaction with this interface. Two major gains are associated with using sound in this interface: an increase in direct engagement with the model world of the computer and an added flexibility for users in getting information about that world. These advantages seem to be due to the iconic nature of the mappings used between sound and the information it is to convey. I discuss sound effects and source metaphors as methods of extending auditory icons beyond the limitations implied by literal mappings, and I speculate on future directions for such interfaces.
We survey the research on interactive evolutionary computation (IEC). IEC is evolutionary computation (EC) that optimizes target systems based on subjective human evaluation. The definition and features of IEC are first described, followed by an overview of IEC research. The overview primarily consists of application research and interface research. In this survey the IEC application fields include graphic arts and animation, 3D computer graphics lighting, music, editorial design, industrial design, facial image generation, speech processing and synthesis, hearing aid fitting, virtual reality, media database retrieval, data mining, image processing, control and robotics, food industry, geophysics, education, entertainment, social systems, and so on. Interface research to reduce human fatigue is also included. Finally, we discuss IEC from the point of view of future research directions in computational intelligence. This paper features a survey of about 250 IEC research papers.
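A minimal sketch of the interactive loop this survey describes, with a human rating standing in for the fitness function. The candidate representation (a small vector of synthesis or melody parameters), the 1-5 rating scale, and the population size are illustrative assumptions, not taken from the paper.

```python
import random

POP_SIZE = 8          # kept small: a human can only rate a handful per generation
GENOME_LEN = 4        # e.g. four synthesis / melody parameters in [0, 1]

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def render(genome):
    """Placeholder: a real IEC system would synthesize audio or a melody here."""
    return " ".join(f"{x:.2f}" for x in genome)

def ask_human(genome):
    """Subjective evaluation: the user types a score from 1 (poor) to 5 (great)."""
    print("candidate:", render(genome))
    return float(input("your rating 1-5: "))

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2):
    return [min(1.0, max(0.0, x + random.gauss(0, 0.1))) if random.random() < rate else x
            for x in genome]

def iec(generations=3):
    population = [random_genome() for _ in range(POP_SIZE)]
    for gen in range(generations):
        print(f"--- generation {gen} ---")
        scored = [(ask_human(g), g) for g in population]   # the human IS the fitness function
        scored.sort(key=lambda t: t[0], reverse=True)
        parents = [g for _, g in scored[: POP_SIZE // 2]]
        population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                                for _ in range(POP_SIZE - len(parents))]
    return population[0]

if __name__ == "__main__":
    print("best candidate after interactive evolution:", render(iec()))
```

Keeping the population and generation count small reflects the human-fatigue concern the survey highlights in its interface research.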
Constructal theory is the view that the generation of “designedness” in nature is a universal (physics) phenomenon that can be based on principle (the constructal law): “For a finite-size flow system to persist in time (to live) its configuration must change in time so that it provides greater and greater access to its currents”. This principle predicts natural form across the board, from river basins to animal design, engineering and social dynamics. In this introduction to the theory we show examples of vascular designs at large and small scales and multi-objective flow configurations.
Kim Cascone, "The Aesthetics of Failure: 'Post-Digital' Tendencies in Contemporary Computer Music," Computer Music Journal 24(4): 12-18, December 2000. https://doi.org/10.1162/014892600559489
This book explores what speech, music and other sounds have in common. It gives a detailed description of the way perspective, rhythm, textual quality and other aspects of sound are used to communicate emotion and meaning. It draws on a wealth of examples from radio (disk jockey and newsreading speech, radio plays, advertising jingles, news signature tunes), film soundtracks (The Piano, The X-Files, Disney animation films), music ranging from medieval plain chant to drum 'n' bass, and everyday soundscapes.

Form in Music

2025-06-25
Sumy Takesue | Routledge eBooks
Abstract Composer-Performer Collaboration (CPC) has become a distinct research field in the last twenty years. This article explores a long letter written by Justin Connolly to Neil Heyde in place of final workshops for Collana, for solo cello. The letter sheds forensic light on Connolly's musical vision and approach to collaboration, revealing a distinctive combination of pedantic concern for details (with concomitant precision of notation) and great flexibility. Connolly encourages the performer as an active participant, with responsibility for a 'parallel universe of discourse'. Heyde responds directly to extracts from the letter and outlines the shared working context. Connolly's letter confirms the significance of the dimensions of notation, gesture and instrumental choreography that have emerged in the CPC literature but affords a perspective not shaped by academic demand characteristics. It presents an especially sophisticated approach to what recent writing has called empathetic embodiment.
The article discusses the effects of heavy practice mutes on violin sonority. Objective data (Long Term Average Spectrum and Loudness) from a parameterized sampling made with different types of performance and practice mutes, as well as modified devices and replicas made with lead, were employed in a comparative study. Specific effects of heavy practice mutes, such as the absence of energy transfer to the low frequency region, were observed. The text presents the mutes and describes the methodology, data processing, and the acoustic descriptors employed, followed by a discussion of the results.
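For readers unfamiliar with the descriptor, a Long Term Average Spectrum can be approximated with Welch's method of averaged periodograms; the sketch below compares two recordings in dB. The file names, FFT size, and the 1 kHz cutoff for the low-frequency comparison are illustrative assumptions, not the article's actual data or parameters.

```python
import numpy as np
import soundfile as sf          # assumption: the takes are available as WAV files
from scipy.signal import welch

def ltas_db(path, nperseg=8192):
    """Long Term Average Spectrum of a recording, in dB (Welch-averaged PSD)."""
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                       # downmix to mono
    f, pxx = welch(x, fs=fs, nperseg=nperseg)    # periodograms averaged over the whole take
    return f, 10 * np.log10(pxx + 1e-20)

# Hypothetical file names for an open-string take and a heavily muted take
f, open_db = ltas_db("violin_open.wav")
_, muted_db = ltas_db("violin_heavy_mute.wav")
low = f < 1000                                   # inspect the low-frequency region
print("mean LTAS difference below 1 kHz: "
      f"{np.mean(open_db[low] - muted_db[low]):.1f} dB")
```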

Signal Flows

2025-06-24
Allison Sokil , Amandine Pras | Oxford University Press eBooks
Abstract The term “signal flow” was introduced as an engineering metaphor in the 1920s to articulate the relationalities of audio signals when selecting pieces of equipment and sound in the context of recording, mixing, and mastering. Despite the widespread use of digital audio workstations in mainstream music production and audio engineering today, students in the recording arts still must learn and understand signal flows through practical exercises and discursive repetition. This is particularly true for marginalized recordists tasked with proving themselves performatively and professionally, as they continually navigate biased assessments of their foundational knowledge, technical and musical capacities, and finished work. In this chapter, the authors extend a metaphorical usage of “signal flow” to an examination of the career paths of women and gender non-conforming (WGNC) producers and engineers in a strongly male-dominated and discriminative industry. Based on the experience and vision of eight WGNC recordists, educators, and audio program organizers located in Canada at the time of the study, the authors outline how alternative educational “flows” create paths that bypass the direct line of heteropatriarchal norms and conventions. Also, to troubleshoot habitual silences and stoppages, this chapter offers feminist-informed pedagogical models that better support and sustain early-career WGNC recordists, contributing to much-needed structural changes in the music industry.
Dheeraj Yadav | International Journal for Research in Applied Science and Engineering Technology

Sound

2025-06-23
Jacob A. Smith , Neil Verma | Routledge eBooks
Abstract This article examines the creation of an Urban Archive as an English Garden, a work that uses GPU-accelerated low-resolution wavefield synthesis (WFS) to combine field recordings, live performance and generative audio in real time. Owing to computational overhead, WFS is often pre-rendered, leading to a less dynamic and more static scope for the embodied and intersubjective nature of human sensory understanding, a tendency that can also be found in traditional soundscape composition. We argue that engagement with real-time WFS offers a new approach to soundscape composition, wherein musical-system design may have multiple agencies, or that of virtual environment, co-creator, archive and hybrid instrument. Through a post-phenomenological lens, an analysis of the work's creation through different domains reveals how these technologies afford novel practices to engage with our sonic environments. Additionally, the article unpacks how this same process, grounded in site-responsive design, offers new approaches to composition, performance and artistic collaboration across these practices.
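As background on the technique, wavefield synthesis drives a loudspeaker array so the superposed wavefronts approximate that of a virtual source. The bare-bones sketch below computes a per-speaker delay-and-attenuate feed for a virtual point source; it omits the pre-filtering, directional, and tapering corrections of real 2.5D WFS driving functions, and all positions and array dimensions are made up, not taken from the work discussed above.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def wfs_feeds(source_pos, speaker_positions, fs=48000):
    """Per-loudspeaker delay (in samples) and gain for a virtual point source.

    Bare-bones sketch: delay = distance / c, gain ~ 1 / sqrt(distance).
    Real WFS driving functions also apply a sqrt(jk) pre-filter, a directional
    cosine factor, and array tapering, all omitted here.
    """
    src = np.asarray(source_pos, dtype=float)
    feeds = []
    for spk in speaker_positions:
        d = np.linalg.norm(np.asarray(spk, dtype=float) - src)
        delay_samples = int(round(d / C * fs))
        gain = 1.0 / np.sqrt(max(d, 0.1))
        feeds.append((delay_samples, gain))
    return feeds

# Hypothetical linear array of 16 speakers spaced 0.2 m apart, source 2 m behind it
speakers = [(0.2 * n, 0.0) for n in range(16)]
print(wfs_feeds(source_pos=(1.5, -2.0), speaker_positions=speakers)[:3])
```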
Theresa Vollmer | Routledge eBooks
Jazz music has long been a subject of interest in the field of generative music. Traditional jazz chord progressions follow established patterns that contribute to the genre's distinct sound. However, the demand for more innovative and diverse harmonic structures has led to the exploration of alternative approaches in music generation. This paper addresses the challenge of generating novel and engaging jazz chord sequences that go beyond traditional chord progressions. It proposes an unconventional statistical approach, leveraging a corpus of 1382 jazz standards, which includes key information, song structure, and chord sequences by section. The proposed method generates chord sequences based on statistical patterns extracted from the corpus, considering a tonal context while introducing a degree of unpredictability that enhances the results with elements of surprise and interest. The goal is to move beyond conventional and well-known jazz chord progressions, exploring new and inspiring harmonic possibilities. The evaluation of the generated dataset, which matches the size of the learning corpus, demonstrates a strong statistical alignment between distributions across multiple analysis parameters while also revealing opportunities for further exploration of novel harmonic pathways.
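The corpus-driven approach described above can be illustrated with a first-order Markov model over chord symbols. The toy corpus and the `surprise` knob that occasionally leaves the learned distribution are illustrative assumptions standing in for the paper's 1382-standard corpus and its actual statistical method.

```python
import random
from collections import defaultdict

# Toy corpus of chord sequences (stand-in for the 1382 jazz standards)
corpus = [
    ["Dm7", "G7", "Cmaj7", "Cmaj7"],
    ["Em7b5", "A7", "Dm7", "G7", "Cmaj7"],
    ["Cmaj7", "A7", "Dm7", "G7", "Cmaj7"],
]

# Count first-order chord-to-chord transitions
transitions = defaultdict(lambda: defaultdict(int))
for seq in corpus:
    for a, b in zip(seq, seq[1:]):
        transitions[a][b] += 1

def next_chord(current, surprise=0.15):
    """Sample the next chord; with probability `surprise`, ignore the learned
    distribution to inject the unpredictability the paper aims for."""
    options = transitions.get(current)
    if not options or random.random() < surprise:
        return random.choice([c for seq in corpus for c in seq])
    chords, counts = zip(*options.items())
    return random.choices(chords, weights=counts, k=1)[0]

def generate(start="Dm7", length=8):
    seq = [start]
    while len(seq) < length:
        seq.append(next_chord(seq[-1]))
    return seq

print(generate())
```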
K. Ajay Kumar | International Journal of Scientific Research in Engineering and Management
Abstract - MELODY AI is an innovative system designed to generate music using artificial intelligence by transforming textual descriptions into original compositions. Leveraging the MusicGen model from Hugging Face, the … Abstract - MELODY AI is an innovative system designed to generate music using artificial intelligence by transforming textual descriptions into original compositions. Leveraging the MusicGen model from Hugging Face, the system employs advanced deep learning techniques to interpret user input and produce corresponding audio tracks. The workflow includes interpreting the user's prompt, translating it into a musical representation, and synthesizing it into audio form. The AI model has been fine-tuned to maintain musical richness and structural integrity, enabling it to generate expressive, genre-diverse compositions. The system effectively captures the essence of user prompts, delivering melodically coherent and emotionally resonant music. Its ability to generate tailored and aesthetically pleasing music makes it a valuable tool for creators in film, digital media, and the arts. This project highlights the growing role of AI in reshaping music composition and enhancing creative expression. Keywords: AI-generated music, deep learning, text-to-audio synthesis, Hugging Face, MusicGen, creative AI tools.
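A minimal text-to-music sketch along these lines, using the publicly released MusicGen checkpoints via the Hugging Face transformers library. The model name, prompt, and token budget are illustrative, and this does not reproduce the fine-tuned pipeline described above.

```python
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# Small public checkpoint; the MELODY AI system fine-tunes its own model.
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

prompt = "calm lo-fi piano with soft rain, for a short film scene"   # illustrative prompt
inputs = processor(text=[prompt], padding=True, return_tensors="pt")

# ~256 audio tokens corresponds to a few seconds of audio
audio = model.generate(**inputs, do_sample=True, max_new_tokens=256)

rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("melody_ai_demo.wav", rate=rate, data=audio[0, 0].numpy())
```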
The article analyses Tristan Murail's "Serendib pour 22 instrumentistes" (1992) in the context of spectralist compositional techniques and the central role of timbre as a form-building element. Analysis of an excerpt of "Serendib" shows how Murail uses instrumentation to create a synthesis of the acoustic features of sound, perceived as a virtual instrument. Principles of Gestalt theory are used to uncover these sound forms and to show by which playing techniques a sound with emergent qualities is produced. Such perception separates the sound from its physical source and gives it an independent identity.
Christine Dell'Amore | C&EN Global Enterprise
Recent advancements in generative neural networks, particularly transformer-based models, have introduced novel possibilities for sound design. This study explores the use of generative pre-trained transformers (GPT) to create complex, multilayered soundscapes from textual and visual prompts. A custom pipeline is proposed, featuring modules for converting the source input into structured sound descriptions and subsequently generating cohesive auditory outputs. As a complementary solution, a granular synthesizer prototype was developed to enhance the usability of generative audio samples by enabling their recombination into seamless and non-repetitive soundscapes. The integration of GPT models with granular synthesis demonstrates significant potential for innovative audio production, paving the way for advancements in professional sound-design workflows and immersive audio applications.
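A compact sketch of the granular recombination idea mentioned above: windowed grains drawn from random positions in a source sample are overlap-added into a longer, non-repetitive texture. The grain size, density, and file names are assumptions for the example, not the prototype's actual parameters.

```python
import numpy as np
import soundfile as sf   # assumption: a generated sample exists as a WAV file

def granular_texture(source, fs, out_seconds=30.0, grain_ms=80, grains_per_sec=40):
    """Overlap-add randomly chosen, Hann-windowed grains from `source`."""
    grain_len = int(fs * grain_ms / 1000)
    window = np.hanning(grain_len)
    out = np.zeros(int(fs * out_seconds) + grain_len)
    for _ in range(int(out_seconds * grains_per_sec)):
        src_start = np.random.randint(0, len(source) - grain_len)
        dst_start = np.random.randint(0, len(out) - grain_len)
        out[dst_start:dst_start + grain_len] += window * source[src_start:src_start + grain_len]
    return out / (np.max(np.abs(out)) + 1e-9)    # normalize to avoid clipping

x, fs = sf.read("generated_sample.wav")          # hypothetical generative audio sample
if x.ndim > 1:
    x = x.mean(axis=1)
sf.write("soundscape_texture.wav", granular_texture(x, fs), fs)
```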
Abstract The notion of sound space emerges as a multifaceted exploration within the music, artistic and architectural realms, delving into its evolution from a musical object to a transdisciplinary aesthetic event. Rooted in the interplay of sound and space, the term defies strict definition, reflecting a dynamic amalgamation of interpretations throughout its historical practices and conceptualisations. This article engages with different perspectives on the subject of sound space, bringing together a group of architects, sound engineers, artists and researchers - all of them dedicated to sound - to discuss the sensitive experience of listening to space, within material and/or dematerialised realities. The methodology was based on a series of interviews, confronting their different points of view, therefore building a compelling retrospective around the subject of study. In this exploration, sound space emerges as a complex entity that transcends traditional boundaries, offering a unique lens through which practitioners redefine architecture, challenge perceptions and engage in a dynamic interplay between sound, space and the listener's experience. The resulting territory is depicted as a rhizomatic system with diverse temporalities coexisting and influencing the understanding of sound space within phenomenal and material perspectives. It portrays a dynamic and evolving system, celebrating diversity and interaction in a transdisciplinary field.
Yuvraj Mhaske | International Journal for Research in Applied Science and Engineering Technology
Sign language was developed long ago to help deaf and mute people gain knowledge and interact properly with the world; learning it involves forming signs through finger and hand movements. It has been observed that deaf and mute people often feel inferior in society and do not interact much with other people. Many projects have tried to support them, yet adequate support is still lacking. Cost has been the biggest obstacle: a project developed by MIT students costs more than 2 lakh rupees, and for a developing country like India a product that costs 2 lakh per person is not feasible. The literature also shows that in India computers have not even reached ordinary schools in rural and remote areas, so providing computers for deaf and mute children to learn with is far-fetched. We therefore set out to design and develop a user-friendly, cost-effective learning aid for deaf and mute children. The project aims to design a learning aid for English that infuses a sense of playing while learning. The proposed idea is implemented on an Arduino microcontroller interfaced with an LCD, a gyroscope, and a mobile device as output devices. The proposed model is a glove that converts sign language into audible speech that others can understand, building a bridge between deaf and mute people and the rest of society. The biggest challenge is to reduce the cost as much as possible and keep the device widely available.
In my compositional practice, I view feedback as an instrument to address spaces of situated sound inquiry. Such spaces are practically intertwined with, and often mutually defining, the material space … In my compositional practice, I view feedback as an instrument to address spaces of situated sound inquiry. Such spaces are practically intertwined with, and often mutually defining, the material space in which feedback operates. Feedback instruments as such develop at the intersection of these spaces, becoming mediators between artistic concerns and the material affordances that form and inform them. The first part of this article introduces some of these concepts – instrument, medium, material space – by contextualizing them in the current artistic research discourse, and by situating them in selected examples of historical and contemporary feedback-based productions. In the second part I make use of these ideas to illustrate some of my recent works, focusing in particular on the productive intersections between notions of instruments, media, and spaces in feedback practice.
Deirdre Loughridge | University of Pennsylvania Press, Inc. eBooks
Social inequality in both classical music production and consumption practices is undoubtedly at the base of the lack of involvement of so-called new audiences in the contemporary music scene, and it has been observed that timbre-based approaches to synthesis tend to favour listening and interaction when people have no music background. The present paper investigates the use of audio and electromagnetic feedback for the stimulation of vibration in metal panels and metallic musical instruments within the frame of interactive sound installations and compositions, to pursue the exploration of less hierarchical structures within the performance space. A variety of feedback algorithms implemented for those works will be described, along with the machine learning techniques used to expand the human-machine relationship via web applications, which allow for remote interaction by the audience. Classification of sound according to its emotional content was used because non-musicians tend to relate to music mainly through their emotional response to it. Sound analysis and re-synthesis enabled the production of control data to steer AC devices as well as regulate the activation of the audio feedback algorithms. Given its non-linearity and its reluctance to control, feedback turned out to be particularly successful in the generation of a vast variety of timbre-based audio material and agency, thus laying the ground for free improvisation and interaction within the performance space.
Kuang Yuan , Dong Li , Hao Zhou +4 more | Proceedings of the ACM on Interactive Mobile Wearable and Ubiquitous Technologies
Acoustic sensing has recently garnered significant interest for a wide range of applications ranging from motion tracking to health monitoring. However, prior works overlooked an important real-world factor affecting acoustic sensing systems: the instability of the propagation medium due to ambient airflow. Airflow introduces rapid and random fluctuations in the speed of sound, leading to performance degradation in acoustic sensing tasks. This paper presents WindDancer, the first comprehensive framework to understand how ambient airflow influences existing acoustic sensing systems, as well as provide solutions to enhance system performance in the presence of airflow. Specifically, our work includes a mechanistic understanding of airflow interference, modeling of sound speed variations, and analysis of how several key real-world factors interact with airflow. Furthermore, we provide practical recommendations and signal processing solutions to improve the resilience of acoustic sensing systems for real-world deployment. We envision that WindDancer establishes a theoretical foundation for understanding the impact of airflow on acoustic sensing and advances the reliability of acoustic sensing technologies for broader adoption.
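To make the abstract's point concrete, the sketch below shows how a small change in the effective speed of sound over a fixed path shifts the phase of a continuous tone, which is the quantity many fine-grained acoustic sensing systems track. The frequency, path length, and speed-change values are illustrative only, not WindDancer's model or data.

```python
import math

def phase_shift_from_sound_speed_change(distance_m, freq_hz=20_000.0,
                                         c=343.0, delta_c=1.0):
    """Phase change of a continuous tone over a fixed path when the effective
    speed of sound changes by delta_c (e.g. temperature drift or along-path
    airflow). Since phase = 2*pi*f*d/c, a small change gives approximately
    d(phase) = -2*pi*f*d*delta_c / c**2. Illustrative numbers only.
    """
    return -2 * math.pi * freq_hz * distance_m * delta_c / c ** 2

for dc in (0.2, 0.5, 1.0, 2.0):
    shift = phase_shift_from_sound_speed_change(1.0, delta_c=dc)
    print(f"delta_c = {dc:.1f} m/s -> phase shift {shift:+.2f} rad over a 1 m path")
```

Even a fraction of a metre per second of speed fluctuation produces phase shifts of a sizeable fraction of a radian at ultrasonic sensing frequencies, which is why random airflow degrades phase-based tracking.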