Engineering Control and Systems Engineering

Robotics and Automated Systems

Description

This cluster of papers encompasses a wide range of research on cloud robotics, automation, and related technologies. It covers topics such as cloud robotics architecture, challenges, and applications, internet of robotic things, cognitive infocommunications, service robots in cloud computing, virtual reality for robotics, and the integration of artificial intelligence in robotic systems. The research also explores the use of cloud robotics in smart city applications and emphasizes human-robot interaction.

Keywords

Cloud Robotics; Internet of Robotic Things; Cognitive Infocommunications; Service Robots; Robot Middleware; Virtual Reality; Artificial Intelligence; Networked Robotics; Smart City Applications; Human-Robot Interaction

An interdisciplinary overview of current research on imitation in animals and artifacts. The effort to explain the imitative abilities of humans and other animals draws on fields as diverse as animal behavior, artificial intelligence, computer science, comparative psychology, neuroscience, primatology, and linguistics. This volume represents a first step toward integrating research from those studying imitation in humans and other animals, and those studying imitation through the construction of computer software and robots. Imitation is of particular importance in enabling robotic or software agents to share skills without the intervention of a programmer and in the more general context of interaction and collaboration between software agents and humans. Imitation provides a way for the agent—whether biological or artificial—to establish a "social relationship" and learn about the demonstrator's actions, in order to include them in its own behavioral repertoire. Building robots and software agents that can imitate other artificial or human agents in an appropriate way involves complex problems of perception, experience, context, and action, solved in nature in various ways by animals that imitate.
The storage of manipulator control functions in the CMAC memory is accomplished by an iterative process which, if the control function is sufficiently smooth, will converge. There are several different techniques for loading the CMAC memory depending on the amount of data which has already been stored and the degree of accuracy which is desired. The CMAC system lends itself to a "natural" partitioning of the control problem into manageable subproblems. At each level the CMAC controller translates commands from the next higher level into sequences of instructions to the next lower level. Data storage, or training, is accomplished first at the lowest level and must be completed, or nearly so, at each level before it can be initiated at the next higher level.
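The iterative storage described here can be illustrated with a small tile-coding sketch: overlapping receptive fields whose weights are nudged toward a target control function until the residual error is small. This is a minimal illustration of the general CMAC idea, not Albus's original implementation; the layer count, learning rate, and quantization are hypothetical.

```python
import numpy as np

class TinyCMAC:
    """Minimal CMAC-style associative memory: each input activates
    n_layers overlapping tiles; the prediction is the sum of their weights."""

    def __init__(self, n_layers=8, tiles_per_layer=64, lr=0.1):
        self.n_layers = n_layers
        self.tiles_per_layer = tiles_per_layer
        self.lr = lr
        self.weights = np.zeros((n_layers, tiles_per_layer))

    def _active_tiles(self, x):
        # Each layer quantizes the scalar input with a different offset.
        for layer in range(self.n_layers):
            offset = layer / self.n_layers
            yield layer, int((x + offset) * 10) % self.tiles_per_layer

    def predict(self, x):
        return sum(self.weights[l, t] for l, t in self._active_tiles(x))

    def train(self, x, target):
        # Iterative storage: spread the error equally over the active tiles.
        error = target - self.predict(x)
        for l, t in self._active_tiles(x):
            self.weights[l, t] += self.lr * error / self.n_layers

# Store a smooth "control function" and check that the iteration converges.
cmac = TinyCMAC()
for _ in range(200):
    for x in np.linspace(0.0, 2.0, 50):
        cmac.train(x, np.sin(x))
print(abs(cmac.predict(1.0) - np.sin(1.0)))  # small residual error
```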
More than 40 years ago, Masahiro Mori, a robotics professor at the Tokyo Institute of Technology, wrote an essay [1] on how he envisioned people's reactions to robots that looked and acted almost like a human. In particular, he hypothesized that a person's response to a humanlike robot would abruptly shift from empathy to revulsion as it approached, but failed to attain, a lifelike appearance. This descent into eeriness is known as the uncanny valley. The essay appeared in an obscure Japanese journal called Energy in 1970, and in subsequent years, it received almost no attention. However, more recently, the concept of the uncanny valley has rapidly attracted interest in robotics and other scientific circles as well as in popular culture. Some researchers have explored its implications for human-robot interaction and computer-graphics animation, whereas others have investigated its biological and social roots. Now interest in the uncanny valley should only intensify, as technology evolves and researchers build robots that look human. Although copies of Mori's essay have circulated among researchers, a complete version hasn't been widely available. The following is the first publication of an English translation that has been authorized and reviewed by Mori. (See "Turning Point" in this issue for an interview with Mori.)
The Cloud infrastructure and its extensive set of Internet-accessible resources has potential to provide significant benefits to robots and automation systems. We consider robots and automation systems that rely on data or code from a network to support their operation, i.e., where not all sensing, computation, and memory is integrated into a standalone system. This survey is organized around four potential benefits of the Cloud: 1) Big Data: access to libraries of images, maps, trajectories, and descriptive data; 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning; 3) Collective Robot Learning: robots sharing trajectories, control policies, and outcomes; and 4) Human Computation: use of crowdsourcing to tap human skills for analyzing images and video, classification, learning, and error recovery. The Cloud can also improve robots and automation systems by providing access to: a) datasets, publications, models, benchmarks, and simulation tools; b) open competitions for designs and systems; and c) open-source software. This survey includes over 150 references on results and open challenges. A website with new developments and updates is available at: http://goldberg.berkeley.edu/cloud-robotics/.
We describe YARP, Yet Another Robot Platform, an open-source project that encapsulates lessons from our experience in building humanoid robots. The goal of YARP is to minimize the effort devoted to infrastructure-level software development by facilitating code reuse and modularity, and so to maximize research-level development and collaboration. Humanoid robotics is a "bleeding edge" field of research, with constant flux in sensors, actuators, and processors. Code reuse and maintenance is therefore a significant challenge. We describe the main problems we faced and the solutions we adopted. In short, the main features of YARP include support for inter-process communication and image processing, as well as a class hierarchy to ease code reuse across different hardware platforms. YARP is currently used and tested on Windows, Linux and QNX6, which are common operating systems used in robotics.
Reliable wireless communication is essential for mobile robotic systems operating in dynamic environments, particularly in the context of smart mobility and cloud-integrated urban infrastructures. This article presents an experimental study analyzing the impact of robot motion dynamics on wireless network performance, contributing to the broader discussion on data reliability and communication efficiency in intelligent transportation systems. Measurements were conducted using a quadruped robot equipped with an onboard edge computing device, navigating predefined trajectories in a laboratory setting designed to emulate real-world variability. Key wireless parameters, including signal strength (RSSI), latency, and packet loss, were continuously monitored alongside robot kinematic data such as speed, orientation (roll, pitch, yaw), and movement patterns. The results show a significant correlation between dynamic motion—especially high forward velocities and rotational maneuvers—and degradations in network performance. Increased robot speeds and frequent orientation changes were associated with elevated latency and greater packet loss, while static or low-motion periods exhibited more stable communication. These findings highlight critical challenges for real-time data transmission in mobile IoRT (Internet of Robotic Things) systems, and emphasize the role of network-aware robotic behavior, interoperable communication protocols, and edge-to-cloud data integration in ensuring robust wireless performance within smart city environments.
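A hedged sketch of the kind of correlation analysis the study describes: pairing logged robot kinematics with wireless measurements and computing Pearson correlations, plus a static-versus-moving comparison. The CSV file name, column names, and the 0.5 m/s motion threshold are illustrative assumptions, not the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical log: one row per sample with kinematics and network metrics.
log = pd.read_csv("robot_wireless_log.csv")  # assumed columns listed below

metrics = ["rssi_dbm", "latency_ms", "packet_loss_pct"]
motion = ["speed_mps", "yaw_rate_dps"]

# Pearson correlation between motion dynamics and network performance.
corr = log[motion + metrics].corr(method="pearson").loc[motion, metrics]
print(corr.round(2))

# Compare static/low-motion periods with high-motion periods, mirroring the
# observation that low-motion intervals show more stable communication.
moving = log["speed_mps"] > 0.5
print(log.groupby(moving)[metrics].mean())
```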
People’s need for English translation is gradually growing in the modern era of technological advancements, and a computer that can comprehend and interpret English is now more crucial than ever. Some issues, including ambiguity in English translation and improper word choice in translation techniques, must be addressed to enhance the quality of the English translation model and accuracy based on the corpus. Hence, an edge computing-based translation model (FSRL-P2O) is proposed to improve translation accuracy by using huge bilingual corpora, considering Fuzzy Semantic (FS) properties, and maximizing the translation output using optimal control techniques with the incorporation of Reinforcement Learning and Proximal Policy Optimisation (PPO) techniques. The corpus data is initially gathered, and the necessary preprocessing and feature extraction techniques are applied. The preprocessed sentences are given as input to the fuzzy semantic similarity phase, which aims to avoid uncertainties by measuring the semantic resemblance between two linguistic elements, such as phrases, words, or sentences involved in a translation, using the Jaccard similarity coefficient. The fuzzy semantic resemblance component’s training estimates the degree of overlap or similarity between two sentences, such as calculating the percentage of matching characters and the length of the longest matching sequence of characters. The suggested Reinforcement Learning and PPO can address specific uncertainty causes in machine translation assessment, like out-of-domain data and low-quality references. In addition to simple word-level comparison, it permits a more complex grasp of the semantic link. Reinforcement Learning (RL) and Proximal Policy Optimisation (PPO) techniques are implemented as optimal control techniques to optimize the translation procedures and enhance the quality and precision of generated translations. RL and PPO aim to improve a machine translation system’s translation policy depending on a predetermined reward signal or quality parameter. The system’s effectiveness is evaluated by various metrics such as accuracy, fuzzy semantic similarity, Bi-Lingual Evaluation Understudy (BLEU), and the National Institute of Standards and Technology (NIST) score. Thus, the proposed system achieves higher quality and translation accuracy of the translated text and produces higher semantic similarity.
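The fuzzy semantic similarity stage combines a Jaccard coefficient with character-overlap measures such as the longest matching sequence. A minimal sketch of those two ingredients, using only the Python standard library, is given below; the way the two scores are blended (the weight `w`) is a hypothetical choice, not the paper's formula.

```python
from difflib import SequenceMatcher

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over word sets: |A ∩ B| / |A ∪ B|."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def longest_match_ratio(a: str, b: str) -> float:
    """Length of the longest matching character block, normalized."""
    m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return m.size / max(len(a), len(b), 1)

def fuzzy_similarity(a: str, b: str, w: float = 0.5) -> float:
    # Hypothetical blend of word-level and character-level overlap.
    return w * jaccard(a, b) + (1 - w) * longest_match_ratio(a, b)

print(fuzzy_similarity("the cat sat on the mat", "a cat sat on a mat"))
```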
Robotic Computed Tomography (CT) is establishing itself as a versatile and transformative tool in non-destructive inspection. Robots not only facilitate upscaling CT measurements for large objects but also offer the flexibility of diverse scanning trajectories, making advanced inspection of regions of interest possible. Our portable scanner RadalyX design eliminates the need for objects to "go to the CT machine"; instead, the CT machine "comes to the object," significantly enhancing usability in aerospace, space, and other demanding fields [1]. This mobility broadens the spectrum of object sizes that can be inspected while enabling the use of smaller, more precise robots even for scans of regions of interest of very large structures. This paper presents this capability through case studies demonstrating the portability of the system, including the inspection of a damaged aircraft, and discusses installation procedures and new robot position calibration methods. We also showcase recent improvements in scanning strategies, image reconstruction, and the integration of multimodal imaging techniques, providing a glimpse into the future of portable robotic inspection.
Amit, Amandeep Amandeep, Khushi | International Journal for Research in Applied Science and Engineering Technology
Growing next-generation technologies, including autonomous driving, smart healthcare systems, and augmented reality, produce massive amounts of data that must be handled consistently and quickly. Expectations of ultra-low latency and tremendous bandwidth have surged sharply with the deployment of 5G networks. Although conventional cloud computing provides a lot of processing capability, its inherent delay from centralized architectures makes it difficult to fulfill the real-time needs of these applications. By moving computation closer to data sources, edge computing offers a potential answer. The edge does, however, have several drawbacks as well: limited processing resources, changing device conditions, and higher security hazards. We offer a hybrid AI-driven architecture specifically for 5G edge settings in order to handle these difficulties. The approach deliberately combines lightweight machine learning and deep learning modules to dynamically allocate tasks across edge, fog, and cloud tiers. It combines important technologies like a trust-aware approach to filter unreliable edge nodes, reinforcement learning for intelligent job offloading, and federated learning for privacy protection. We created and simulated the whole architecture with MATLAB. Our approach includes early-exit logic in a modular hybrid AI system with an RL-based offloading agent, trust score evaluation, and federated model aggregation. To observe system latency, bandwidth usage, and performance changes under dynamic traffic, we also created waveform graphs. Simulation results showed that the proposed hybrid model lays a strong basis for next-generation intelligent edge systems in real-world 5G deployments, since it outperformed both edge-only and cloud-only configurations in terms of response time, scalability, and resilience to adversarial conditions.
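One concrete ingredient named here, federated model aggregation combined with trust-aware filtering of unreliable edge nodes, can be sketched as a weighted average (FedAvg-style) over only the clients whose trust score passes a threshold. The threshold, trust scores, and flat-vector model representation below are illustrative assumptions, not the paper's MATLAB implementation.

```python
import numpy as np

def trust_aware_fedavg(client_weights, sample_counts, trust_scores, min_trust=0.6):
    """Aggregate client model weights (FedAvg), ignoring low-trust nodes."""
    kept = [i for i, t in enumerate(trust_scores) if t >= min_trust]
    if not kept:
        raise ValueError("no trusted clients to aggregate")
    counts = np.array([sample_counts[i] for i in kept], dtype=float)
    mix = counts / counts.sum()                      # weight by data volume
    stacked = np.stack([client_weights[i] for i in kept])
    return np.tensordot(mix, stacked, axes=1)        # weighted average

# Three edge nodes, one of them untrusted and therefore excluded.
clients = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([9.0, 9.0])]
global_model = trust_aware_fedavg(clients, [100, 80, 50], [0.9, 0.8, 0.2])
print(global_model)
```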
The edge computing paradigm has gained prominence in both academic and industry circles in recent years. When edge computing facilities and services are implemented in robotics, they become a key enabler in the deployment of artificial intelligence applications to robots. Time-sensitive robotics applications benefit from the reduced latency, mobility, and location awareness provided by the edge computing paradigm, which enables real-time data processing and intelligence at the network’s edge. While the advantages of integrating edge computing into robotics are numerous, there has been no recent survey that comprehensively examines these benefits. This paper aims to bridge that gap by highlighting important work in the domain of edge robotics, examining recent advancements, and offering deeper insight into the challenges and motivations behind both current and emerging solutions. In particular, this article provides a comprehensive evaluation of recent developments in edge robotics, with an emphasis on fundamental applications, providing in-depth analysis of the key motivations, challenges, and future directions in this rapidly evolving domain. It also explores the importance of edge computing in real-world robotics scenarios where rapid response times are critical. Finally, the paper outlines various open research challenges in the field of edge robotics.
This paper introduces a configuration and integration model for mobile robots deployed in emergency and special operations scenarios. The proposed method is designed for implementation within the operational technology (OT) domain, enforcing security protocols that ensure both data encryption and network isolation. The primary objective is to establish a dedicated operational environment encompassing a command and control center where the robotic network server resides, alongside real-time data storage from network clients and remote control of field-deployed mobile robots. Building on this infrastructure, operational strategies are developed to enable an efficient robotic response in critical situations. By leveraging remote robotic networks, significant benefits are achieved in terms of personnel safety and mission efficiency, minimizing response time and reducing the risk of injury to human operators during hazardous interventions. Unlike generic IoT or IoRT systems, this work focuses on secure robotic integration within segmented OT infrastructures. The technologies employed create a synergistic system that ensures data integrity, encryption, and safe user interaction through a web-based interface. Additionally, the system includes mobile robots and a read-only application positioned within a demilitarized zone (DMZ), allowing for secure data monitoring without granting control access to the robotic network, thus enabling cyber-physical isolation and auditability.
The role of robots has evolved with recent technological innovations and social challenges, such as a declining workforce. Robots are no longer confined to the industrial field but are becoming an integral part of daily life. Consequently, their operational environment has shifted from factories to human living spaces. As the presence of robots in these environments continues to grow, human-robot interaction has emerged as a critical issue that requires further exploration. Robotics competitions serve as a platform for testing solutions to these challenges, with many events focused on advancing robot integration into society. However, a competition specifically designed for human-robot coexistence has not yet been proposed. This study aims to present a vision for a future society where humans and robots coexist. To facilitate discussions on this concept, we organized a robotics competition in which humans, animals, and robots can participate. Accordingly, we designed a robotics competition incorporating humans, animals, and robots based on a relay race, known as Ekiden. Furthermore, we outlined the design principles of the competition, described its rules and results, and evaluated the validity of these rules.
The Advanced Intelligence & Robotics (AIBot) Research Center at Tamagawa University is dedicated to advancing human-centered robotics and AI through interdisciplinary collaboration. This special issue highlights key research initiatives across several domains, including human-robot collaboration, cognitive robotics, and human assistive systems. The Robotics Research Division explores symbol emergence for natural communication and applies virtual reality (VR) technologies to model human-robot-environment interactions, aiming to optimize cognitive feedback and user experience. The Brain-Inspired AI Division investigates methods to enhance psychological well-being and self-efficacy using VR-based feedback in assistive contexts. The Tamagawa Robot Challenge Project links university-level research with primary and secondary education by leveraging robotics competitions and practical learning tools. Together, these efforts showcase the commitment of the center to foster next-generation robotics that supports the physical, cognitive, and emotional aspects of human life. This issue brings together selected studies that reflect the ongoing contribution of the center to the international research community.
The aim of the study is to refine the algorithm for binarization of digital images of cross-cut round timber for automated assessment of the bark ratio, as well as to test a methodology for adjusting its parameters. The software implementation of the refined algorithm and the calculations are performed in Python; the main functions used are provided by the OpenCV library. The image binarization algorithm used to assess the bark ratio is based on a single-threshold method. The method's parameters were tuned for the authors' images of cross-cut round timber. To obtain experimental data, cross-cuts of round timber were photographed from different angles. The sample intended for testing the proposed algorithm and software solution included 130 images of cross-cut ends with diameters of 20–30 cm; the wood species were birch, spruce, and alder. The photographs were taken in winter conditions on a snow-covered background under natural light, in the absence of direct sunlight. The resolution of the original images was 3024x4032 pixels (72 dpi). The proposed algorithm adjusts the parameters of the single-threshold binarization method based on information about the photosensitivity, the brightness level of the image, and the exposure value; the method is applied twice during image processing. Heuristic dependencies are given for determining the threshold values for the two iterations, performed sequentially when separating the object from the background and then segmenting the bark in the image. Examples are given of bark-ratio results for images with an uneven contour and a dominant background, with a relatively smooth cross-cut contour, and with a partially icy cut surface with an uneven contour, together with a link to a repository containing the experimental images and the software implementing the proposed algorithm. Testing of the parameter-tuning methodology and the algorithm, taking into account various characteristics of the image and of the object in it, showed that the solution yields satisfactory results.
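Since the study states that the implementation uses Python and OpenCV with a single-threshold binarization applied twice (first to separate the log face from the background, then to segment the bark), the two-pass structure can be sketched as below. The threshold values are placeholders; the paper derives them heuristically from photosensitivity, brightness, and exposure value, and the input file name is hypothetical.

```python
import cv2

img = cv2.imread("cross_cut.jpg")               # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Pass 1: single-threshold binarization to separate the object from the background.
# The real thresholds are computed heuristically from ISO/brightness/exposure.
t_object = 120                                   # placeholder value
_, object_mask = cv2.threshold(gray, t_object, 255, cv2.THRESH_BINARY)

# Pass 2: threshold again inside the object mask to segment the darker bark.
t_bark = 80                                      # placeholder value
_, bark_mask = cv2.threshold(gray, t_bark, 255, cv2.THRESH_BINARY_INV)
bark_mask = cv2.bitwise_and(bark_mask, object_mask)

# Bark ratio = bark pixels / object pixels.
bark_ratio = cv2.countNonZero(bark_mask) / max(cv2.countNonZero(object_mask), 1)
print(f"estimated bark ratio: {bark_ratio:.2%}")
```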
Haiyan Wang | Journal of Computational Methods in Sciences and Engineering
This paper presents an intelligent teaching resource management platform built upon a task-oriented dialogue system, specifically tailored to the structured and domain-specific needs of vocational contexts. The system integrates a schema-guided dialogue state tracking framework that combines LSTM-based feature extraction, a slot gating mechanism, and a pointer neural network to effectively handle diverse conversational phenomena and unknown slot values. Experimental results show that the proposed model achieves a Joint Goal Accuracy (JGA) of 0.580, outperforming competitive baselines such as STAR (0.568) and TRADE (0.487). Unlike general-purpose chatbots, this platform supports real-time resource retrieval, pedagogical interaction, and administrative management in vocational settings, contributing to improved teaching quality and learner engagement.
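Joint Goal Accuracy, the metric reported here, counts a dialogue turn as correct only when every predicted slot-value pair matches the gold state. A small illustrative computation follows; the slot names and data structures are hypothetical, not drawn from the paper's dataset.

```python
def joint_goal_accuracy(predicted_states, gold_states):
    """Fraction of turns whose full slot-value state matches the gold state exactly."""
    assert len(predicted_states) == len(gold_states)
    correct = sum(p == g for p, g in zip(predicted_states, gold_states))
    return correct / len(gold_states)

pred = [{"course": "welding", "room": "B2"}, {"course": "CAD"}]
gold = [{"course": "welding", "room": "B2"}, {"course": "CAD", "room": "A1"}]
print(joint_goal_accuracy(pred, gold))  # 0.5: the second turn misses a slot
```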
This paper presents the design and performance evaluation of a human detection robot using the YOLOv8 model and the COCO dataset for object recognition. The robot is equipped with a Pi camera, Raspberry Pi, four GM25 13CPR motors, an L298 motor driver, and a buck converter, ensuring efficient operation in real-time environments. The human detection accuracy was evaluated at different distances, achieving 99% at 2 feet, 98% at 15 feet, and 96% at 25 feet, demonstrating the effectiveness of the YOLOv8 model in varying conditions. The robot's movement is controlled using a PWM-based speed control technique, where the DC motors operate at different duty cycles. Experimental results show variations in speed accuracy, with error percentages of 7.6% at 20% duty cycle, 5.8% at 40%, 5.1% at 60%, 4.8% at 80%, and 3.8% at 100% duty cycle. These results indicate that higher duty cycles lead to improved speed accuracy, minimizing the deviation from the desired speed. The study highlights the integration of YOLOv8 for object detection and PWM for precise motor control, making the system suitable for applications in autonomous navigation, surveillance, and security.
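The two pieces combined here, YOLOv8 person detection and PWM duty-cycle motor control, can be sketched roughly as below. The GPIO pin number, camera index, and the stop-on-person control policy are assumptions for illustration; the real system uses a Pi camera, an L298 driver, and four GM25 motors, and its behavior is not specified in this much detail.

```python
import cv2
from ultralytics import YOLO        # YOLOv8 inference
import RPi.GPIO as GPIO             # PWM output on the Raspberry Pi

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)            # hypothetical enable pin on the L298 driver
pwm = GPIO.PWM(18, 1000)            # 1 kHz PWM carrier
pwm.start(0)

model = YOLO("yolov8n.pt")          # pretrained on COCO
cap = cv2.VideoCapture(0)

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, verbose=False)[0]
        # COCO class 0 is "person"; stop the motors when a person is detected.
        person_seen = any(int(box.cls) == 0 for box in results.boxes)
        pwm.ChangeDutyCycle(0 if person_seen else 60)   # 60% cruise duty cycle
finally:
    pwm.stop()
    GPIO.cleanup()
    cap.release()
```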
With the rapid growth of financial data, extracting accurate and contextually relevant information remains a challenge. Existing financial question-answering (QA) models struggle with domain-specific terminology, long-document processing, and answer consistency. To address these issues, this paper proposes the Intelligent-Aware Transformer (IAT), a financial QA system based on GLM4-9B-Chat, integrating a multi-level information aggregation framework. The system employs a Financial-Specific Attention Mechanism (FSAM) to enhance focus on key financial terms, a Dynamic Context Embedding Layer (DCEL) to improve long-document processing, and a Hierarchical Answer Aggregator (HAA) to ensure response coherence. Additionally, Knowledge-Augmented Textual Entailment (KATE) strengthens the model’s generalization by inferring implicit financial knowledge. Experimental results demonstrate that IAT surpasses existing models in financial QA tasks, exhibiting superior adaptability in long-text comprehension and domain-specific reasoning. Future work will explore computational optimizations, advanced knowledge integration, and broader financial applications.
A. Miguel | International Journal for Research in Applied Science and Engineering Technology
Vision-based cursor control technologies have made significant improvements, from hardware-dependent systems to more intelligent, accessible, and user-friendly solutions. Early techniques utilized niche devices such as Kinect and RGB-D cameras for gesture recognition, providing high accuracy but poor portability and robustness to environmental factors. The advent of webcam-based systems then opened up greater access through colored gloves, facial gesture recognition, and later, markerless hand tracking based on computer vision algorithms. Integration with gaze tracking and the vestibulo-ocular reflex further improved accuracy for hands-free operation, although calibration and lighting sensitivity remained issues. More recent advancements have embraced multimodal systems involving gaze, speech, and lip detection, in addition to non-invasive EEG/EOG wearables and brain–computer interfaces for greater functionality. Simpler blink-based interfaces and smaller vision-based tactile sensors have also appeared to assist people with more severe mobility impairments, balancing simplicity against technical compromises.
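A minimal markerless hand-tracking cursor sketch of the kind the later webcam-based systems use, built on MediaPipe Hands and PyAutoGUI. Smoothing, click gestures, and calibration are omitted, and mapping the index fingertip (landmark 8) directly to screen coordinates is an illustrative assumption rather than a description of any specific system surveyed here.

```python
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if result.multi_hand_landmarks:
        # Landmark 8 is the index fingertip (normalized 0..1 coordinates).
        tip = result.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    if cv2.waitKey(1) & 0xFF == 27:      # Esc to quit
        break

cap.release()
```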
This paper proposes an efficient expansion scheme for the multimodal transformer architecture. By integrating sparse attention and low-rank adaptation technology, a distributed training framework is constructed. In response to the efficiency requirements of multimodal understanding tasks, this scheme reduces computational complexity while maintaining performance benchmarks for text, image, and audio tasks. The sparse attention mechanism reduces memory and computational energy consumption by limiting the attention span, while the low-rank adaptation technology enables rapid task migration without the need for complete parameter retraining. The distributed training mechanism, combining model and data parallelism, ensures the system's adaptability to large-scale datasets and heterogeneous hardware environments. Experiments on standard datasets such as MSCOCO and VGGSound show that this scheme achieves significant improvements over traditional methods in terms of accuracy, memory usage, and training speed. The ablation experiment verified the synergistic effect of sparse attention and low-rank adaptation technology, and the scalability test showed a nearly linear acceleration effect across multiple devices. This research provides a feasible technical route for building intelligent systems suitable for real-time reasoning and multimodal fusion scenarios, and promotes the practical application of resource-saving multimodal technologies.
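The low-rank adaptation idea relied on here can be shown as a small PyTorch module: a frozen base projection plus a trainable rank-r update, so task migration only trains the two small factor matrices. This is a generic LoRA-style sketch, not the paper's architecture; the rank, scaling, and layer placement are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer augmented with a trainable low-rank update A @ B."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # base weights stay frozen
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A @ self.B) * self.scale

layer = LoRALinear(nn.Linear(512, 512), rank=8)
out = layer(torch.randn(4, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)   # only the low-rank factors are trainable
```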
With advancements in intelligent technologies, near-field pose estimation has become essential across various applications. This paper presents a 5-degree-of-freedom (DOF) real-time object pose estimation system based on a uniaxial magnetic sensor array and MPNet convolutional network. A simulated dataset of magnetic field distributions is created for training, utilizing a data augmentation method that reduces pose space traversal by up to 98.43%. The SMPNet network, built on a ResNet backbone, achieves high classification accuracy and Pearson correlation for object pose estimation. A multi-channel data acquisition system with an 8 × 8 sensor array is designed, and real-world data are used to optimize the model, achieving a mean square error of 2.02 mm for XYZ displacement and 5.92° for rotational angles. The system is integrated with hardware to minimize data transmission delays, achieving a total system power consumption of 2.058 W and an estimated pose RMS error of 2.38 mm and 7.56°. This study demonstrates the feasibility of using a uniaxial magnetic array for real-time pose estimation, offering insights for future system optimizations and practical applications.
Cyber situational awareness systems are increasingly used for creating cyber common operating pictures for cybersecurity analysis and education. However, these systems face data occlusion and convolution issues due to the burgeoning complexity, dimensionality, and heterogeneity of cybersecurity data, which damages cyber situational awareness of end-users. Moreover, conventional forms of human–computer interactions, such as mouse and keyboard, increase the mental effort and cognitive load of cybersecurity practitioners when analyzing cyber situations of large-scale infrastructures. Therefore, immersive technologies, such as virtual reality, augmented reality, and mixed reality, are employed in the cybersecurity realm to create intuitive, engaging, and interactive cyber common operating pictures. Immersive cyber situational awareness (ICSA) systems provide several unique visualization techniques and interaction features for the perception, comprehension, and projection of cyber situational awareness. However, there has been no attempt to comprehensively investigate and classify the existing state of the art in the use of immersive technologies for cyber situational awareness. Therefore, in this paper, we have gathered, analyzed, and synthesized the existing body of knowledge on ICSA systems. In particular, our survey has identified visualization and interaction techniques, evaluation mechanisms, and different levels of cyber situational awareness (i.e., perception, comprehension, and projection) for ICSA systems. Consequently, our survey has enabled us to propose (i) a reference framework for designing and analyzing ICSA systems by mapping immersive visualization and interaction techniques to the different levels of ICSA; (ii) future research directions for advancing the state of the art on ICSA systems; and (iii) an in-depth analysis of the industrial implications of ICSA systems to enhance cybersecurity operations.
Mayank Maheshwari | INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
Effective communication across languages is essential in today’s interconnected world. This project presents an advanced interactive voice translation system for real-time multilingual conversations. Using Automatic Speech Recognition (ASR) to transcribe speech, Natural Language Processing (NLP) for context understanding, and Machine Translation (MT) for accurate conversions, it bridges language barriers by translating speech seamlessly into the target language. The system ensures accurate, context-aware translations by preserving the original speech's nuances. With applications in international business, travel, education, and customer service, this solution transforms multilingual interactions, making communication more accessible and effective across diverse contexts and environments. By combining cutting-edge technologies like ASR, NLP, and MT, this project creates a powerful tool to bridge linguistic divides. It enhances global collaboration, simplifies cross-cultural communication, and supports learning, travel, and professional needs, fostering inclusive and mutual understanding in an increasingly connected world.
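A rough end-to-end sketch of the ASR-then-MT pipeline described here, using off-the-shelf components: the SpeechRecognition package with its Google recognizer and a Hugging Face translation pipeline. The model name and language pair are placeholders, and the actual system adds NLP-based context handling that is not shown.

```python
import speech_recognition as sr
from transformers import pipeline

recognizer = sr.Recognizer()
# Placeholder language pair; the real system targets whichever language the user selects.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

with sr.Microphone() as source:
    print("Speak...")
    audio = recognizer.listen(source)

# ASR: speech -> source-language text.
text = recognizer.recognize_google(audio, language="en-US")

# MT: source text -> target-language text.
translated = translator(text)[0]["translation_text"]
print(text, "->", translated)
```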
| Journal on Electronic and Automation Engineering
The "Landmine Detection Robotic Vehicle with GPS Positioning Using STM32" paper is designed to enhance safety and efficiency in landmine detection through advanced robotics. The system utilizes an STM32F103C8T6 microcontroller … The "Landmine Detection Robotic Vehicle with GPS Positioning Using STM32" paper is designed to enhance safety and efficiency in landmine detection through advanced robotics. The system utilizes an STM32F103C8T6 microcontroller to orchestrate various components aimed at detecting and locating landmines. A metal detector sensor is employed to identify metallic objects, with the system programmed to send alerts via GSM if a mine is detected, including the precise GPS coordinates of the location. The robotic vehicle is powered by DC motors, which are controlled through a motor driver module and Bluetooth interface, allowing for directional movement (forward, backward, right, and left). The setup includes buzzers for sound alerts to indicate detection or operational status, and a robot chassis with wheels for mobility. The integration of these components ensures a robust and efficient landmine detection system, combining real-time detection with precise location tracking to improve safety and operational effectiveness in hazardous environment.
| Journal on Electronic and Automation Engineering
The Cloud-Based Automation of Industrial Ambient Robotic System is designed to enhance industrial automation, safety, and remote monitoring by integrating IoT, cloud computing, and real-time sensor data processing. The system utilizes a Raspberry Pi Zero as the central processing unit while incorporating an ESP32 microcontroller to interface with environmental sensors. The ESP32 is responsible for collecting data from a DHT11 (temperature & humidity sensor), an MQ135 (gas detection sensor), a flame sensor, and a buzzer. The remaining components, including ultrasonic sensors, servo motors, relays, and a motor driver (L298N) for robotic navigation and obstacle avoidance, remain controlled by the Raspberry Pi. The system employs edge computing to locally process sensor data, reducing cloud dependency and improving response time for critical safety measures. This project presents the design and development of a cloud-based automated industrial ambient robotic system using a Raspberry Pi and an ESP32 as the main microcontrollers. The robotic system is controlled via the Raspberry Pi, which connects to the Adafruit cloud platform, allowing for remote control and automation of industrial processes. This cloud-based approach enhances safety and operational efficiency, and reduces the risks associated with industrial hazards. Cloud-based systems can store vast amounts of data from sensors, machines, and robots in real time. A cloud-based automation industrial ambient robotic system integrates robotics, IoT (Internet of Things), and cloud computing to create an intelligent, scalable environment for manufacturing and logistics operations.
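Forwarding the ESP32's sensor readings to the Adafruit cloud from the Raspberry Pi can be sketched with the Adafruit_IO Python client. The feed names, credentials, update interval, and the way readings arrive from the ESP32 are illustrative assumptions, not the project's actual configuration.

```python
import time
from Adafruit_IO import Client

aio = Client("YOUR_AIO_USERNAME", "YOUR_AIO_KEY")   # placeholder credentials

def publish_readings(temperature_c, humidity_pct, gas_ppm, flame_detected):
    """Forward one batch of ESP32 sensor readings to Adafruit IO feeds."""
    aio.send("temperature", temperature_c)
    aio.send("humidity", humidity_pct)
    aio.send("gas", gas_ppm)
    aio.send("flame", int(flame_detected))

while True:
    # In the real system these values arrive from the ESP32 (e.g. over serial or MQTT).
    publish_readings(26.4, 58.0, 120.0, False)
    time.sleep(30)
```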
Robotic arms are programmable mechanical manipulators with movable components which can cause relative motion between adjoining links. The aim of this project is to develop a robotic arm capable of identifying voice commands. The speech acts as an input responsible for triggering action. Apart from diverse industrial applications, with advances in technology and medical science, robotic prostheses are highly successful at restoring one's biological ability to perform daily chores comfortably. This approach aims to deliver a well-built system with minimal faults. Computing the coordinates to attain a soft-home position is an essential task for achieving the required speed and torque and delivering optimum performance. The most challenging part of the whole procedure is obtaining accurate calibration for smooth operation of the arm. We have employed a software-based calibration technique which is simple to implement and highly efficient.
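A minimal sketch of the voice-command-to-motion mapping described here, including a soft-home pose: a recognized keyword selects a stored set of joint angles. The recognizer, the command vocabulary, the joint angles, and the servo interface are placeholder assumptions, not the project's implementation.

```python
import speech_recognition as sr

# Hypothetical joint-angle presets (degrees), including the soft-home pose.
POSES = {
    "home":  [90, 45, 45, 90],    # soft-home position
    "pick":  [30, 80, 60, 10],
    "place": [150, 70, 50, 170],
}

def move_arm(angles):
    # Placeholder for the actual servo/driver interface.
    print("moving joints to", angles)

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    audio = recognizer.listen(source)
command = recognizer.recognize_google(audio).lower()

for keyword, angles in POSES.items():
    if keyword in command:
        move_arm(angles)
        break
else:
    print("unrecognized command:", command)
```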