Quantum Kernel Learning for Small Dataset Modeling in Semiconductor Fabrication: Application to Ohmic Contact

Abstract

Modeling complex semiconductor fabrication processes such as Ohmic contact formation remains challenging due to high-dimensional parameter spaces and limited experimental data. While classical machine learning (CML) approaches have been successful in many domains, their performance degrades in small-sample, nonlinear scenarios. In this work, quantum machine learning (QML) is investigated as an alternative, exploiting quantum kernels to capture intricate correlations from compact datasets. Using only 159 experimental GaN HEMT samples, a quantum kernel-aligned regressor (QKAR) is developed combining a shallow Pauli-Z feature map with a trainable quantum kernel alignment (QKA) layer. All models, including seven baseline CML regressors, are evaluated under a unified PCA-based preprocessing pipeline to ensure a fair comparison. QKAR consistently outperforms classical baselines across multiple metrics (MAE, MSE, RMSE), achieving a mean absolute error of 0.338 Ω·mm when validated on experimental data. Noise robustness and generalization are further assessed through cross-validation and new device fabrication. These findings suggest that carefully constructed QML models can provide predictive advantages in data-constrained semiconductor modeling, offering a foundation for practical deployment on near-term quantum hardware. While challenges remain for both QML and CML, this study demonstrates QML's potential as a complementary approach in complex process modeling tasks.

Summary

This paper presents a novel approach to modeling complex semiconductor fabrication processes, specifically Ohmic contact formation in Gallium Nitride (GaN) High-Electron-Mobility Transistors (HEMTs), using quantum machine learning (QML). The significance of this work lies in addressing critical challenges in semiconductor manufacturing: the high-dimensional parameter space, nonlinear relationships, and crucially, the scarcity of experimental data due to the high cost and time involved in data collection. While classical machine learning (CML) struggles with small, nonlinear datasets, this research demonstrates that carefully constructed QML models can provide significant predictive advantages, paving the way for practical deployment on near-term quantum hardware.

The key innovation is the Quantum Kernel-Aligned Regressor (QKAR), a hybrid quantum-classical model designed to capture intricate correlations from compact datasets. Its architecture combines a shallow Pauli-Z feature map, which efficiently encodes classical data into quantum states, with a trainable Quantum Kernel Alignment (QKA) layer. The QKA layer is the distinguishing component: it allows the quantum kernel to be optimized for the specific regression task, enhancing its expressivity and improving accuracy and stability. The paper rigorously benchmarks QKAR against seven widely used classical machine learning regressors (SVM, Decision Tree, Gradient Boosting, XGBoost, AdaBoost, a deep neural network, and Elastic Net), consistently showing superior performance across multiple metrics (MAE, MSE, RMSE). The study further validates the model through noise-sensitivity analysis and, critically, by predicting the performance of newly fabricated devices, confirming QKAR’s generalization to unseen process settings. This validation on physical samples, beyond simulations alone, underscores the practical utility of the proposed QML approach.
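
To make the hybrid pipeline concrete, the following is a minimal sketch of a fidelity-kernel regressor in this style, assuming Qiskit Machine Learning's ZFeatureMap and FidelityQuantumKernel together with scikit-learn's SVR. The qubit count, PCA dimension, circuit depth, and synthetic data are illustrative assumptions, not the paper's reported settings.

```python
# Hedged sketch: PCA-reduced process features -> shallow Pauli-Z feature map ->
# fidelity quantum kernel -> classical SVR head. All sizes and the synthetic
# data below are illustrative assumptions, not the paper's reported settings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR
from qiskit.circuit.library import ZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))           # stand-in for fabrication parameters
y = rng.uniform(0.2, 1.5, size=40)      # stand-in for contact resistance (Ω·mm)
X_tr, X_te, y_tr = X[:30], X[30:], y[:30]

n_qubits = 4                            # assumption: one qubit per retained component
pca = PCA(n_components=n_qubits).fit(X_tr)
scaler = MinMaxScaler((0, np.pi)).fit(pca.transform(X_tr))

def encode(A):
    """Map raw features to rotation angles for the quantum feature map."""
    return scaler.transform(pca.transform(A))

feature_map = ZFeatureMap(feature_dimension=n_qubits, reps=2)  # shallow Pauli-Z map
kernel = FidelityQuantumKernel(feature_map=feature_map)

K_tr = kernel.evaluate(x_vec=encode(X_tr))                      # training Gram matrix
K_te = kernel.evaluate(x_vec=encode(X_te), y_vec=encode(X_tr))  # test-vs-train kernel

svr = SVR(kernel="precomputed").fit(K_tr, y_tr)
print(svr.predict(K_te))                # predicted resistances for held-out samples
```

With kernel="precomputed", the SVR head never sees raw features, only pairwise state fidelities, which is what makes the quantum embedding interchangeable with a classical kernel in this pipeline.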

The main prior ingredients needed for this work include:

  1. Quantum Machine Learning (QML) Foundations: The paper builds upon the theoretical framework of QML, particularly quantum kernel methods, which leverage quantum phenomena to map classical data into high-dimensional Hilbert spaces where complex patterns might become more separable.
  2. Quantum Feature Maps: The concept of encoding classical data into quantum states using parameterized quantum circuits (e.g., Pauli-Z, Pauli-ZZ) is a fundamental component of quantum kernel methods.
  3. Quantum Kernel Alignment (QKA): While the specific trainable QKA layer in QKAR is novel, the general idea of aligning or adapting a quantum kernel to a given task to improve performance is an established direction in QML research (see the alignment sketch following this list).
  4. Classical Support Vector Regressor (SVR): QKAR uses SVR as its classical “head” for the regression task, processing the quantum kernel matrix to make predictions. This highlights the hybrid nature of the model, combining quantum computation for data embedding with classical algorithms for prediction.
  5. Principal Component Analysis (PCA): This classical dimensionality reduction technique is applied as a preprocessing step to reduce the initial high-dimensional feature space of the semiconductor fabrication data.
  6. Variational Autoencoder (VAE): A classical deep learning technique, the VAE is employed for data augmentation to expand the limited experimental dataset, addressing the small-sample challenge inherent in semiconductor manufacturing research (see the augmentation sketch following this list).
  7. Domain Expertise in Semiconductor Fabrication: A deep understanding of GaN HEMT device physics and Ohmic contact formation processes is essential for defining the problem, collecting relevant data, and interpreting the model’s performance in a real-world context.
  8. Quantum and Classical Machine Learning Libraries: Tools like Qiskit (for quantum circuit simulation and QML model construction) and Scikit-learn (for classical baselines and SVR implementation) provide the computational infrastructure.
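
The QKA layer's exact parametrization is not spelled out in this summary, but kernel-target alignment gives a concrete training criterion for such a layer. Below is a hedged sketch that appends a guessed trainable RY layer to the feature map and tunes it with Qiskit Machine Learning's TrainableFidelityQuantumKernel and SciPy's COBYLA; the layer structure and optimizer choice are assumptions.

```python
# Hedged sketch of quantum kernel alignment (QKA): a guessed trainable RY layer
# is appended to the feature map, and its angles are tuned to maximize
# kernel-target alignment with the regression targets.
import numpy as np
from scipy.optimize import minimize
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector
from qiskit.circuit.library import ZFeatureMap
from qiskit_machine_learning.kernels import TrainableFidelityQuantumKernel

n_qubits = 4
rng = np.random.default_rng(1)
X = rng.uniform(0, np.pi, size=(12, n_qubits))   # toy PCA-reduced inputs
y = rng.normal(size=12)                          # toy regression targets

fmap = ZFeatureMap(feature_dimension=n_qubits, reps=1)
theta = ParameterVector("theta", n_qubits)
qka_layer = QuantumCircuit(n_qubits)
for q in range(n_qubits):
    qka_layer.ry(theta[q], q)                    # assumed trainable alignment layer

kernel = TrainableFidelityQuantumKernel(
    feature_map=fmap.compose(qka_layer), training_parameters=list(theta)
)

def alignment(K, targets):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F)."""
    yyT = np.outer(targets, targets)
    return float(np.sum(K * yyT) / (np.linalg.norm(K) * np.linalg.norm(yyT)))

def objective(values):
    kernel.assign_training_parameters(list(values))
    return -alignment(kernel.evaluate(x_vec=X), y)  # minimize negative alignment

result = minimize(objective, x0=np.zeros(n_qubits), method="COBYLA")
kernel.assign_training_parameters(list(result.x))   # kernel is now task-aligned
```

The aligned Gram matrix can then replace the fixed one in the SVR pipeline sketched above.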
This paper pioneers the use of quantum machine learning (QML) for modeling the Ohmic contact process in GaN high-electron-mobility transistors (HEMTs) for the first time. Utilizing data from 159 devices … This paper pioneers the use of quantum machine learning (QML) for modeling the Ohmic contact process in GaN high-electron-mobility transistors (HEMTs) for the first time. Utilizing data from 159 devices and variational auto-encoder-based augmentation, we developed a quantum kernel-based regressor (QKR) with a 2-level ZZ-feature map. Benchmarking against six classical machine learning (CML) models, our QKR consistently demonstrated the lowest mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE). Repeated statistical analysis confirmed its robustness. Additionally, experiments verified an MAE of 0.314 ohm-mm, underscoring the QKR's superior performance and potential for semiconductor applications, and demonstrating significant advancements over traditional CML methods.
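
Item 6's augmentation step can likewise be sketched. The following is a minimal tabular VAE, assuming PyTorch; the layer sizes, loss weighting, and training loop are illustrative, and the actual augmentation settings used for the 159-sample dataset are not specified in this summary.

```python
# Hedged sketch: a small tabular VAE used to generate synthetic training points.
# Architecture, loss weighting, and epoch count are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM, N_FEATURES = 4, 13          # e.g., process parameters plus target

class TabularVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU())
        self.mu = nn.Linear(32, LATENT_DIM)
        self.logvar = nn.Linear(32, LATENT_DIM)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, N_FEATURES)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

X = torch.randn(159, N_FEATURES)        # stand-in for the standardized dataset
vae = TabularVAE()
optimizer = torch.optim.Adam(vae.parameters(), lr=1e-3)

for _ in range(2000):                   # reconstruction loss + KL regularizer
    recon, mu, logvar = vae(X)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = nn.functional.mse_loss(recon, X) + 1e-3 * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():                   # decode prior samples as synthetic points
    X_aug = vae.decoder(torch.randn(300, LATENT_DIM))
```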
Device variability is a bottleneck for the scalability of semiconductor quantum devices. Increasing device control comes at the cost of a large parameter space that has to be explored in … Device variability is a bottleneck for the scalability of semiconductor quantum devices. Increasing device control comes at the cost of a large parameter space that has to be explored in order to find the optimal operating conditions. We demonstrate a statistical tuning algorithm that navigates this entire parameter space, using just a few modelling assumptions, in the search for specific electron transport features. We focused on gate-defined quantum dot devices, demonstrating fully automated tuning of two different devices to double quantum dot regimes in an up to eight-dimensional gate voltage space. We considered a parameter space defined by the maximum range of each gate voltage in these devices, demonstrating expected tuning in under 70 minutes. This performance exceeded a human benchmark, although we recognise that there is room for improvement in the performance of both humans and machines. Our approach is approximately 180 times faster than a pure random search of the parameter space, and it is readily applicable to different material systems and device architectures. With an efficient navigation of the gate voltage space we are able to give a quantitative measurement of device variability, from one device to another and after a thermal cycle of a device. This is a key demonstration of the use of machine learning techniques to explore and optimise the parameter space of quantum devices and overcome the challenge of device variability.
The discrepancies between reality and simulation impede the optimisation and scalability of solid-state quantum devices. Disorder induced by the unpredictable distribution of material defects is one of the major contributions … The discrepancies between reality and simulation impede the optimisation and scalability of solid-state quantum devices. Disorder induced by the unpredictable distribution of material defects is one of the major contributions to the reality gap. We bridge this gap using physics-aware machine learning, in particular, using an approach combining a physical model, deep learning, Gaussian random field, and Bayesian inference. This approach has enabled us to infer the disorder potential of a nanoscale electronic device from electron transport data. This inference is validated by verifying the algorithm's predictions about the gate voltage values required for a laterally-defined quantum dot device in AlGaAs/GaAs to produce current features corresponding to a double quantum dot regime.
Support Vector Machine (SVM) is a state-of-the-art classification method widely used in science and engineering due to its high accuracy, its ability to deal with high dimensional data, and its … Support Vector Machine (SVM) is a state-of-the-art classification method widely used in science and engineering due to its high accuracy, its ability to deal with high dimensional data, and its flexibility in modeling diverse sources of data. In this paper, we propose an autotuning-based optimization framework to quantify the ranges of hyperparameters in SVMs to identify their optimal choices, and apply the framework to two SVMs with the mixed-kernel between Sigmoid and Gaussian kernels for smart pixel datasets in high energy physics (HEP) and mixed-kernel heterojunction transistors (MKH). Our experimental results show that the optimal selection of hyperparameters in the SVMs and the kernels greatly varies for different applications and datasets, and choosing their optimal choices is critical for a high classification accuracy of the mixed kernel SVMs. Uninformed choices of hyperparameters C and coef0 in the mixed-kernel SVMs result in severely low accuracy, and the proposed framework effectively quantifies the proper ranges for the hyperparameters in the SVMs to identify their optimal choices to achieve the highest accuracy 94.6\% for the HEP application and the highest average accuracy 97.2\% with far less tuning time for the MKH application.
The semiconductors industry benefits greatly from the integration of machine learning (ML)-based techniques in technology computer-aided design (TCAD) methods. The performance of ML models, however, relies heavily on the quality … The semiconductors industry benefits greatly from the integration of machine learning (ML)-based techniques in technology computer-aided design (TCAD) methods. The performance of ML models, however, relies heavily on the quality and quantity of training datasets. They can be particularly difficult to obtain in the semiconductor industry due to the complexity and expense of the device fabrication. In this article, we propose a self-augmentation strategy for improving ML-based device modeling using variational autoencoder (VAE)-based techniques. These techniques require a small number of experimental data points and do not rely on TCAD tools. To demonstrate the effectiveness of our approach, we apply it to a deep neural network (DNN)-based prediction task for the ohmic resistance value in gallium nitride (GaN) devices. A 70% reduction in mean absolute error (MAE) when predicting experimental results is achieved. The inherent flexibility of our approach allows easy adaptation to various tasks, thus making it highly relevant to many applications of the semiconductor industry.
Lithography is fundamental to integrated circuit fabrication, necessitating large computation overhead. The advancement of machine learning (ML)-based lithography models alleviates the trade-offs between manufacturing process expense and capability. However, all … Lithography is fundamental to integrated circuit fabrication, necessitating large computation overhead. The advancement of machine learning (ML)-based lithography models alleviates the trade-offs between manufacturing process expense and capability. However, all previous methods regard the lithography system as an image-to-image black box mapping, utilizing network parameters to learn by rote mappings from massive mask-to-aerial or mask-to-resist image pairs, resulting in poor generalization capability. In this paper, we propose a new ML-based paradigm disassembling the rigorous lithographic model into non-parametric mask operations and learned optical kernels containing determinant source, pupil, and lithography information. By optimizing complex-valued neural fields to perform optical kernel regression from coordinates, our method can accurately restore lithography system using a small-scale training dataset with fewer parameters, demonstrating superior generalization capability as well. Experiments show that our framework can use 31% of parameters while achieving 69$\times$ smaller mean squared error with 1.3$\times$ higher throughput than the state-of-the-art.
The semiconductors industry benefits greatly from the integration of Machine Learning (ML)-based techniques in Technology Computer-Aided Design (TCAD) methods. The performance of ML models however relies heavily on the quality … The semiconductors industry benefits greatly from the integration of Machine Learning (ML)-based techniques in Technology Computer-Aided Design (TCAD) methods. The performance of ML models however relies heavily on the quality and quantity of training datasets. They can be particularly difficult to obtain in the semiconductor industry due to the complexity and expense of the device fabrication. In this paper, we propose a self-augmentation strategy for improving ML-based device modeling using variational autoencoder-based techniques. These techniques require a small number of experimental data points and does not rely on TCAD tools. To demonstrate the effectiveness of our approach, we apply it to a deep neural network-based prediction task for the Ohmic resistance value in Gallium Nitride devices. A 70% reduction in mean absolute error when predicting experimental results is achieved. The inherent flexibility of our approach allows easy adaptation to various tasks, thus making it highly relevant to many applications of the semiconductor industry.
This study proposes a novel framework integrating long short-term memory (LSTM) networks with Bayesian optimization (BO) to address process–device co-optimization challenges in trench-gate metal–oxide–semiconductor field-effect transistor (MOSFET) manufacturing. Conventional TCAD … This study proposes a novel framework integrating long short-term memory (LSTM) networks with Bayesian optimization (BO) to address process–device co-optimization challenges in trench-gate metal–oxide–semiconductor field-effect transistor (MOSFET) manufacturing. Conventional TCAD simulations, while accurate, suffer from computational inefficiency in high-dimensional parameter spaces. To overcome this, an LSTM-based TCAD proxy model is developed, leveraging hierarchical temporal dependencies to predict electrical parameters (e.g., breakdown voltage, threshold voltage) with deviations below 3.5% compared to physical simulations. The model, validated on both N-type and P-type 20 V trench MOS devices, outperforms conventional RNN and GRU architectures, reducing average relative errors by 1.78% through its gated memory mechanism. A BO-driven inverse optimization methodology is further introduced to navigate trade-offs between conflicting objectives (e.g., minimizing on-resistance while maximizing breakdown voltage), achieving recipe predictions with a maximum deviation of 8.3% from experimental data. Validation via TCAD-simulated extrapolation tests and SEM metrology confirms the framework’s robustness under extended operating ranges (e.g., 0–40 V drain voltage) and dimensional tolerances within industrial specifications. The proposed approach establishes a scalable, data-driven paradigm for semiconductor manufacturing, effectively bridging TCAD simulations with production realities while minimizing empirical trial-and-error iterations.
Differentiable models of physical systems provide a powerful platform for gradient-based algorithms, with particular impact on parameter estimation and optimal control. Quantum systems present a particular challenge for such characterisation … Differentiable models of physical systems provide a powerful platform for gradient-based algorithms, with particular impact on parameter estimation and optimal control. Quantum systems present a particular challenge for such characterisation and control, owing to their inherently stochastic nature and sensitivity to environmental parameters. To address this challenge, we present a versatile differentiable quantum master equation solver, and incorporate this solver into a framework for device characterisation. Our approach utilises gradient-based optimisation and Bayesian inference to provide estimates and uncertainties in quantum device parameters. To showcase our approach, we consider steady state charge transport through electrostatically defined quantum dots. Using simulated data, we demonstrate efficient estimation of parameters for a single quantum dot, and model selection as well as the capability of our solver to compute time evolution for a double quantum dot system. Our differentiable solver stands to widen the impact of physics-aware machine learning algorithms on quantum devices for characterisation and control.
The recently proposed machine learning-based physically-constrained nonlocal (MPN) kinetic energy density functional (KEDF) can be used for simple metals and their alloys [Phys. Rev. B 109, 115135 (2024)]. However, the … The recently proposed machine learning-based physically-constrained nonlocal (MPN) kinetic energy density functional (KEDF) can be used for simple metals and their alloys [Phys. Rev. B 109, 115135 (2024)]. However, the MPN KEDF does not perform well for semiconductors. Here we propose a multi-channel MPN (CPN) KEDF, which extends the MPN KEDF to semiconductors by integrating information collected from multiple channels, with each channel featuring a specific length scale in real space. The CPN KEDF is systematically tested on silicon and binary semiconductors. We find that the multi-channel design for KEDF is beneficial for machine-learning-based models in capturing the characteristics of semiconductors, particularly in handling covalent bonds. In particular, the CPN5 KEDF, which utilizes five channels, demonstrates excellent accuracy across all tested systems. These results offer a new path for generating KEDFs for semiconductors.
Substrate oxidation is inevitable when exposed to ambient atmosphere during semiconductor manufacturing, which is detrimental to the fabrication of state-of-the-art devices. Optimizing the deoxidation process in molecular beam epitaxy (MBE) … Substrate oxidation is inevitable when exposed to ambient atmosphere during semiconductor manufacturing, which is detrimental to the fabrication of state-of-the-art devices. Optimizing the deoxidation process in molecular beam epitaxy (MBE) for random substrates poses a multidimensional challenge and is sometimes controversial. Due to variations in substrates and growth processes, the determination of the deoxidation condition heavily relies on the individual's expertise, yielding inconsistent results. This study employs a machine learning model that integrates interpolation and vision transformer (Interpolation-ViT) techniques. The model utilizes reflection high-energy electron diffraction videos as input to predict the status of the substrate, enabling automated deoxidation within a controlled architecture for various substrates. Furthermore, we highlight the potential of models trained on data from specific MBE equipment to achieve high-accuracy deployment on different pieces of equipment. In contrast to traditional methods, our approach holds exceptional value, as it standardizes deoxidation temperatures across diverse equipment and substrates. This significantly advances the standardization of the semiconductor process. The concepts and methods presented are expected to revolutionize semiconductor manufacturing processes in the optoelectronic and microelectronic industries.
The semiconductor industry has prioritized automating repetitive tasks by closed-loop, autonomous experimentation which enables accelerated optimization of complex multi-step processes. The emergence of machine learning (ML) has ushered in automated … The semiconductor industry has prioritized automating repetitive tasks by closed-loop, autonomous experimentation which enables accelerated optimization of complex multi-step processes. The emergence of machine learning (ML) has ushered in automated process with minimal human intervention. In this work, we develop SemiEpi, a self-driving automation platform capable of executing molecular beam epitaxy (MBE) growth with multi-steps, continuous in-situ monitoring, and on-the-fly feedback control. By integrating standard hardware, homemade software, curve fitting, and multiple ML models, SemiEpi operates autonomously, eliminating the need for extensive expertise in MBE processes to achieve optimal outcomes. The platform actively learns from previous experimental results, identifying favorable conditions and proposing new experiments to achieve the desired results. We standardize and optimize growth for InAs/GaAs quantum dots (QDs) heterostructures to showcase the power of ML-guided multi-step growth. A temperature calibration was implemented to get the initial growth condition, and fine control of the process was executed using ML. Leveraging RHEED movies acquired during the growth, SemiEpi successfully identified and optimized a novel route for multi-step heterostructure growth. This work demonstrates the capabilities of closed-loop, ML-guided systems in addressing challenges in multi-step growth for any device. Our method is critical to achieve repeatable materials growth using commercially scalable tools. Our strategy facilitates the development of a hardware-independent process and enhancing process repeatability and stability, even without exhaustive knowledge of growth parameters.
Two-dimensional van der Waals (vdW) materials exhibit a broad palette of unique and superlative properties, including high electrical and thermal conductivities, paired with the ability to exfoliate or grow and … Two-dimensional van der Waals (vdW) materials exhibit a broad palette of unique and superlative properties, including high electrical and thermal conductivities, paired with the ability to exfoliate or grow and transfer single layers onto a variety of substrates thanks to the relatively weak vdW interlayer bonding. However, the same vdW bonds also lead to relatively low thermal boundary conductance (TBC) between the 2D layer and its 3D substrate, which is the main pathway for heat removal and thermal management in devices, leading to a potential thermal bottleneck and dissipation-driven performance degradation. Here we use first-principles phonon dispersion with our 2D-3D Boltzmann phonon transport model to compute the TBC of 156 unique 2D/3D interface pairs, many of which are not available in the literature. We then employ machine learning (ML) to develop streamlined predictive models, of which Neural Network and Gaussian process display the highest predictive accuracy (RMSE $<$ 5 MWm$^{-2}$K$^{-1}$ and $R^2>$0.99) on the complete descriptor set. Then we perform sensitivity analysis to identify the most impactful descriptors, consisting of the vdW spring coupling constant, 2D thermal conductivity, ZA phonon bandwidth, the ZA phonon resonance gap, and the frequency of the first van Hove singularity or Boson peak. On that reduced set, we find that a decision-tree algorithm can make accurate predictions (RMSE $<$ 20 MWm$^{-2}$K$^{-1}$ and $R^2>$0.9) on materials it has not been trained on by performing a transferability analysis. Our model allows optimal selection of 2D-substrate pairings to maximize heat transfer and will improve thermal management in future 2D nanoelectronics.
Quantum dots (QDs) defined with electrostatic gates are a leading platform for a scalable quantum computing implementation. However, with increasing numbers of qubits, the complexity of the control parameter space … Quantum dots (QDs) defined with electrostatic gates are a leading platform for a scalable quantum computing implementation. However, with increasing numbers of qubits, the complexity of the control parameter space also grows. Traditional measurement techniques, relying on complete or near-complete exploration via two-parameter scans (images) of the device response, quickly become impractical with increasing numbers of gates. Here we propose to circumvent this challenge by introducing a measurement technique relying on one-dimensional projections of the device response in the multidimensional parameter space. Dubbed the "ray-based classification (RBC) framework," we use this machine learning approach to implement a classifier for QD states, enabling automated recognition of qubit-relevant parameter regimes. We show that RBC surpasses the 82% accuracy benchmark from the experimental implementation of image-based classification techniques from prior work, while reducing the number of measurement points needed by up to 70%. The reduction in measurement cost is a significant gain for time-intensive QD measurements and is a step forward toward the scalability of these devices. We also discuss how the RBC-based optimizer, which tunes the device to a multiqubit regime, performs when tuning in the two-dimensional and three-dimensional parameter spaces defined by plunger and barrier gates that control the QDs. This work provides experimental validation of both efficient state identification and optimization with machine learning techniques for nontraditional measurements in quantum systems with high-dimensional parameter spaces and time-intensive measurements.Received 23 February 2021Accepted 12 May 2021Corrected 24 March 2022DOI:https://doi.org/10.1103/PRXQuantum.2.020335Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.Published by the American Physical SocietyPhysics Subject Headings (PhySH)Research AreasMachine learningQuantum controlQuantum information architectures & platformsQuantum information with solid state qubitsPhysical SystemsDouble quantum dotsSemiconductor compoundsQuantum Information
In this paper, we address the problem of compact model parameter extraction to simultaneously extract tens of parameters via derivative-free optimization. Traditionally, parameter extraction is performed manually by dividing the … In this paper, we address the problem of compact model parameter extraction to simultaneously extract tens of parameters via derivative-free optimization. Traditionally, parameter extraction is performed manually by dividing the complete set of parameters into smaller subsets, each targeting different operational regions of the device, a process that can take several days or even weeks. Our approach streamlines this process by employing derivative-free optimization to identify a good parameter set that best fits the compact model without performing an exhaustive number of simulations. We further enhance the optimization process to address critical issues in device modeling by carefully choosing a loss function that evaluates model performance consistently across varying magnitudes by focusing on relative errors (as opposed to absolute errors), prioritizing accuracy in key operational regions of the device above a certain threshold, and reducing sensitivity to outliers. Furthermore, we utilize the concept of train-test split to assess the model fit and avoid overfitting. This is done by fitting 80% of the data and testing the model efficacy with the remaining 20%. We demonstrate the effectiveness of our methodology by successfully modeling two semiconductor devices: a diamond Schottky diode and a GaN-on-SiC HEMT, with the latter involving the ASM-HEMT DC model, which requires simultaneously extracting 35 model parameters to fit the model to the measured data. These examples demonstrate the effectiveness of our approach and showcase the practical benefits of derivative-free optimization in device modeling.
Two-dimensional van der Waals (vdW) materials exhibit a broad palette of unique and superlative properties, including high electrical and thermal conductivities, paired with the ability to exfoliate or grow and … Two-dimensional van der Waals (vdW) materials exhibit a broad palette of unique and superlative properties, including high electrical and thermal conductivities, paired with the ability to exfoliate or grow and transfer single layers onto a variety of substrates thanks to the relatively weak vdW interlayer bonding. However, the same vdW bonds also lead to relatively low thermal boundary conductance (TBC) between the 2D layer and its 3D substrate, which is the main pathway for heat removal and thermal management in devices, leading to a potential thermal bottleneck and dissipation-driven performance degradation. Here we use first-principles phonon dispersion with our 2D-3D Boltzmann phonon transport model to compute the TBC of 156 unique 2D/3D interface pairs, many of which are not available in the literature. We then employ machine learning (ML) to develop streamlined predictive models, of which Neural Network and Gaussian process display the highest predictive accuracy (RMSE $<$ 5 MWm$^{-2}$K$^{-1}$ and $R^2>$0.99) on the complete descriptor set. Then we perform sensitivity analysis to identify the most impactful descriptors, consisting of the vdW spring coupling constant, 2D thermal conductivity, ZA phonon bandwidth, the ZA phonon resonance gap, and the frequency of the first van Hove singularity or Boson peak. On that reduced set, we find that a decision-tree algorithm can make accurate predictions (RMSE $<$ 20 MWm$^{-2}$K$^{-1}$ and $R^2>$0.9) on materials it has not been trained on by performing a transferability analysis. Our model allows optimal selection of 2D-substrate pairings to maximize heat transfer and will improve thermal management in future 2D nanoelectronics.
Abstract The recently proposed machine learning-based physically-constrained nonlocal (MPN) kinetic energy density functional (KEDF) can be used for simple metals and their alloys (Sun and Chen 2024 Phys. Rev. B … Abstract The recently proposed machine learning-based physically-constrained nonlocal (MPN) kinetic energy density functional (KEDF) can be used for simple metals and their alloys (Sun and Chen 2024 Phys. Rev. B 109 115135). However, the MPN KEDF does not perform well for semiconductors. Here we propose a multi-channel MPN (CPN) KEDF, which extends the MPN KEDF to semiconductors by integrating information collected from multiple channels, with each channel featuring a specific length scale in real space. The CPN KEDF is systematically tested on silicon and binary semiconductors. We find that the multi-channel design for KEDF is beneficial for machine-learning-based models in capturing the characteristics of semiconductors, particularly in handling covalent bonds. In particular, the CPN 5 KEDF, which utilizes five channels, demonstrates excellent accuracy across all tested systems. These results offer a new path for generating KEDFs for semiconductors.
In semiconductor manufacturing, the accurate quantification of critical dimensions (CD) remains a pivotal challenge. This study advances the existing Model-Based Library (MBL) method, integrating it with machine learning algorithms for … In semiconductor manufacturing, the accurate quantification of critical dimensions (CD) remains a pivotal challenge. This study advances the existing Model-Based Library (MBL) method, integrating it with machine learning algorithms for the precise measurement of CDs from Critical Dimension Scanning Electron Microscope (CD-SEM) images. Utilizing Monte Carlo simulations, secondary electron linescan profiles were generated for silicon (Si) and gold (Au) trapezoidal line structures under varying geometrical parameters, such as top CD, sidewall angle, and height. Machine learning techniques were then deployed to predict these linescan profiles based on a randomly chosen training set. The predicted profiles exhibited remarkable fidelity to the simulated results, with standard deviations of 0.1% and 6% in the relative error distributions for Si and Au, respectively. Our findings demonstrate that machine learning approaches significantly enhance the MBL method by reducing library size, expediting database construction, and enriching the content of existing MBL databases.
The ground state electron density - obtainable using Kohn-Sham Density Functional Theory (KS-DFT) simulations - contains a wealth of material information, making its prediction via machine learning (ML) models attractive. … The ground state electron density - obtainable using Kohn-Sham Density Functional Theory (KS-DFT) simulations - contains a wealth of material information, making its prediction via machine learning (ML) models attractive. However, the computational expense of KS-DFT scales cubically with system size which tends to stymie training data generation, making it difficult to develop quantifiably accurate ML models that are applicable across many scales and system configurations. Here, we address this fundamental challenge by employing transfer learning to leverage the multi-scale nature of the training data. Our ML models employ descriptors involving simple scalar products, comprehensively sample system configurations through thermalization, and quantify uncertainty in electron density predictions using Bayesian neural networks. We show that our models incur significantly lower data generation costs while allowing confident - and when verifiable, accurate - predictions for a wide variety of bulk systems well beyond training, including systems with defects, different alloy compositions, and at unprecedented, multi-million-atom scales.
Lattice thermal conductivity (TC) of semiconductors is crucial for various applications, ranging from microelectronics to thermoelectrics. Data-driven approach can potentially establish the critical composition-property relationship needed for fast screening of … Lattice thermal conductivity (TC) of semiconductors is crucial for various applications, ranging from microelectronics to thermoelectrics. Data-driven approach can potentially establish the critical composition-property relationship needed for fast screening of candidates with desirable TC, but the small number of available data remains the main challenge. TC can be efficiently calculated using empirical models, but they have inferior accuracy compared to the more resource-demanding first-principles calculations. Here, we demonstrate the use of transfer learning (TL) to improve the machine learning models trained on small but high-fidelity TC data from experiments and first-principles calculations, by leveraging a large but low-fidelity data generated from empirical TC models, where the trainings on high- and low-fidelity TC data are treated as different but related tasks. TL improves the model accuracy by as much as 23% in R2 and reduces the average factor difference by as much as 30%. Using the TL model, a large semiconductor database is screened, and several candidates with room temperature TC > 350 W/mK are identified and further verified using first-principles simulations. This study demonstrates that TL can leverage big low-fidelity data as a proxy task to improve models for the target task with high-fidelity but small data. Such a capability of TL may have important implications to materials informatics in general.
In this study, a novel AlGaN/GaN power rectifier with an integrated lateral composite buffer diode (IBD‐Rectifier) for reverse blocking capability improvement is proposed and investigated by Sentaurus simulations (this paper … In this study, a novel AlGaN/GaN power rectifier with an integrated lateral composite buffer diode (IBD‐Rectifier) for reverse blocking capability improvement is proposed and investigated by Sentaurus simulations (this paper includes only simulated data and no real experimental result). AlGaN buffer layer under the anode is adopted to realise great high reverse blocking capability. A minimum turn‐on voltage of 0.6 V and a maximum breakdown voltage (BV) &gt;1.3 kV are simultaneously obtained in the IBD‐Rectifier, resulting in a high Baliga's figure of merits BV 2 / R on,sp ( R on,sp is specific‐on resistance) of ∼3000 MW/cm 2 . In comparison with MIS‐gated hybrid anode diode and conventional schottky barrier diode, the IBD‐Rectifier delivers an excellent theoretical method to achieve superior performances in high‐efficiency GaN power applications.
A class of novel tunable light-emission-diode (LED)-compatible current regulator, including the reverse blocking and the reverse conducting device, is proposed by integrating the p-GaN cap with a voltage nanosensor on … A class of novel tunable light-emission-diode (LED)-compatible current regulator, including the reverse blocking and the reverse conducting device, is proposed by integrating the p-GaN cap with a voltage nanosensor on the AlGaN/GaN platform. Verified by the experimentally calibrated simulation, it is the feedback of the voltage sensor that stabilizes the depletion region of the reverse biased p-GaN/AlGaN/2-DEG junction, which contributes to clamping the voltage effectively. Compared with the proposed regulator, the devices only with the p-GaN cap or the sensor cannot perform the current regulation. Moreover, investigated by varying the p-type concentration of the p-GaN, the length of the sensor as well as the temperature, the proposed device featuring a ripple wave below 4 mA/mm and a temperature coefficient of 4.5 mA Ā· mm <sup xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">-1</sup> K <sup xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">-1</sup> exhibits its potential in LED industry and other related applications.
A machine learning (ML) model by combing two autoencoders and one linear regression model is proposed to avoid overfitting and to improve the accuracy of Technology Computer-Aided Design (TCAD)-augmented ML … A machine learning (ML) model by combing two autoencoders and one linear regression model is proposed to avoid overfitting and to improve the accuracy of Technology Computer-Aided Design (TCAD)-augmented ML for semiconductor structural variation identification and inverse design, without using domain expertise. TCAD-augmented ML utilizes TCAD simulations to generate sufficient data for ML model development when experimental data are inadequate. The ML model can then be used to identify semiconductor structural variation for given experimental electrical measurements. In this study, the variation of layer thicknesses in the p-i-n diode is used as a demonstration. An ML model is developed to predict the diode layer thicknesses based on a given Current-Voltage (IV) curve. Although the variations of interest can be incorporated easily in TCAD simulations to generate ML training data, the TCAD-augmented ML model generally is overfitted and cannot predict the variations in experiment well due to hidden variables which also alters the IV curves. We show that by using an autoencoder, this problem can be solved. To verify the effectiveness, another set of TCAD simulation data is generated with hidden variables (dopant concentration variation) to emulate experimental data. Testing on the second set of data shows that the proposed model can avoid overfitting and has up to 15 times improvement in accuracy in thickness prediction. Moreover, this model is used successfully to perform inverse design and can capture an underlying physics that cannot be described by a simple physical parameter.
In this work, we demonstrate the use of a machine learning (ML)-based statistical approach to model and analyze the impact of the fabrication processes on the threshold voltage in recessed … In this work, we demonstrate the use of a machine learning (ML)-based statistical approach to model and analyze the impact of the fabrication processes on the threshold voltage in recessed gate AlGaN/GaN metal-insulator-semiconductor high electron mobility transistors. First, we employ a ML-based Tikhonov regularization approach using the input of 19 different processing splits to generate a multivariable analytical equation that considers four critical processing parameters, such as the remaining AlGaN depth, ex-situ wet clean, in-situ plasma, and gate dielectric. Furthermore, the artificial neural network-based approach, which cannot be used to further analyze the impact of the process, is implemented for the comparison. Second, the results from this analytical equation show a nice agreement with the measured threshold voltage, indicating a successful correlation between the processing parameters and the threshold voltage. Finally, the impact of each process on the threshold voltage can be analyzed through the coefficient value related to each process, which can be a useful guidance for the device optimization toward the targeted performance.
Abstract The ever‐increasing power density and operation frequency in electrical power conversion systems require the development of power devices that can outperform conventional Si‐based devices. Gallium nitride (GaN) has been … Abstract The ever‐increasing power density and operation frequency in electrical power conversion systems require the development of power devices that can outperform conventional Si‐based devices. Gallium nitride (GaN) has been regarded as the candidate for next‐generation power devices to improve the conversion efficiency in high‐power electric systems. GaN‐based high electron mobility transistors (HEMTs) with normally‐off operation is an important device structure for different application scenarios. In this review, an overview of a series of effective approaches to improve the performance of GaN‐based power HEMT devices is given. Modified epistructures are presented to suppress defects and current leakage, and low‐damage recess‐free processes are discussed in fabricating normally‐off HEMTs. Possible effects of dielectrics on a metal–insulator–semiconductor (MIS) structure are also intensively introduced. Metal/semiconductor contact engineering is investigated, and fabrication of Au‐free ohmic contact and graphene insertion layer to enhance the device performance is emphasized. Finally, the effects of field plates are studied through the use of simulated and fabricated devices.
Gallium nitride (GaN) devices have been successfully commercialized due to their superior performance, especially their high-power transformation efficiency. To further reduce the power consumption of these devices, the optimization for … Gallium nitride (GaN) devices have been successfully commercialized due to their superior performance, especially their high-power transformation efficiency. To further reduce the power consumption of these devices, the optimization for the ohmic contacts is attracting more and more attention. In the light of the mature and powerful machine learning (ML) techniques, this work provides a novel method to evaluate the fabrication processes of the ohmic contacts in AlGaN/GaN heterojunction, n-type, and p-type GaN, by establishing a regression-based model. The proposed model can not only investigate the influence weight of each process but also predict the contact resistance by inputting the desired recipes. A website (http://ohmic.zeheng.wang/) containing the successfully trained model for the readers' interests is also provided, which, we believe, would benefit the society of the process development and optimization.
Kernel methods are a cornerstone of classical machine learning. The idea of using quantum computers to compute kernels has recently attracted attention. Quantum embedding kernels (QEKs), constructed by embedding data … Kernel methods are a cornerstone of classical machine learning. The idea of using quantum computers to compute kernels has recently attracted attention. Quantum embedding kernels (QEKs), constructed by embedding data into the Hilbert space of a quantum computer, are a particular quantum kernel technique that is particularly suitable for noisy intermediate-scale quantum devices. Unfortunately, kernel methods face three major problems: Constructing the kernel matrix has quadratic computational complexity in the number of training samples, choosing the right kernel function is nontrivial, and the effects of noise are unknown. In this work, we addressed the latter two. In particular, we introduced the notion of trainable QEKs, based on the idea of classical model optimization methods. To train the parameters of the QEK, we proposed the use of kernel-target alignment. We verified the feasibility of this method, and showed that for our experimental setup we could reduce the training error significantly. Furthermore, we investigated the effects of device and finite sampling noise, and we evaluated various mitigation techniques numerically on classical hardware. We took the best performing strategy and evaluated it on data from a real quantum processing unit. We found that using this mitigation strategy demonstrated an increased kernel matrix quality.
GaN has been widely used to develop devices for high-power and high-frequency applications owing to its higher breakdown voltage and high electron saturation velocity. The GaN HEMT radio frequency (RF) … GaN has been widely used to develop devices for high-power and high-frequency applications owing to its higher breakdown voltage and high electron saturation velocity. The GaN HEMT radio frequency (RF) power amplifier is the first commercialized product which is fabricated using the conventional Au-based III–V device manufacturing process. In recent years, owing to the increased applications in power electronics, and expanded applications in RF and millimeter-wave (mmW) power amplifiers for 5G mobile communications, the development of high-volume production techniques derived from CMOS technology for GaN electronic devices has become highly demanded. In this article, we will review the history and principles of each unit process for conventional HEMT technology with Au-based metallization schemes, including epitaxy, ohmic contact, and Schottky metal gate technology. The evolution and status of CMOS-compatible Au-less process technology will then be described and discussed. In particular, novel process techniques such as regrown ohmic layers and metal–insulator–semiconductor (MIS) gates are illustrated. New enhancement-mode device technology based on the p-GaN gate is also reviewed. The vertical GaN device is a new direction of development for devices used in high-power applications, and we will also highlight the key features of such kind of device technology.
There is a growing consensus that the physics-based model needs to be coupled with machine learning (ML) model relying on data or vice versa in order to fully exploit their … There is a growing consensus that the physics-based model needs to be coupled with machine learning (ML) model relying on data or vice versa in order to fully exploit their combined strengths to address scientific or engineering problems that cannot be solved separately. We propose several methodologies of bridging technology computer-aided design (TCAD) simulation and artificial intelligence (AI) with its application to the tasks for which traditional TCAD faces challenges in terms of simulation runtime, coverage, and so on. AI-emulator that learns fine-grained information from rigorous TCAD enables simulation of process technologies and device in real-time as well as large-scale simulation such as full-pattern analysis of stress without high demand on computational resource. To accelerate atomistic molecular dynamics (MD) simulation, we have done a comparison study of descriptor-based and graph-based neural net potential, and also show their capability with large-scale and long-time simulation of silicon oxidation. Finally, we discuss the use of hybrid modeling of AI- and physics-based model for the case where physical equations are either fully or partially unknown.
Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set, and subsequently making predictions on a testing data set (i.e., generalizing). In … Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set, and subsequently making predictions on a testing data set (i.e., generalizing). In this work, we provide a comprehensive study of generalization performance in QML after training on a limited number $N$ of training data points. We show that the generalization error of a quantum machine learning model with $T$ trainable gates scales at worst as $\sqrt{T/N}$. When only $K \ll T$ gates have undergone substantial change in the optimization process, we prove that the generalization error improves to $\sqrt{K / N}$. Our results imply that the compiling of unitaries into a polynomial number of native gates, a crucial application for the quantum computing industry that typically uses exponential-size training data, can be sped up significantly. We also show that classification of quantum states across a phase transition with a quantum convolutional neural network requires only a very small training data set. Other potential applications include learning quantum error correcting codes or quantum dynamical simulation. Our work injects new hope into the field of QML, as good generalization is guaranteed from few training data.
In this paper, two methodologies are used to speed up the maximization of the breakdown volt-age (BV) of a vertical GaN diode that has a theoretical maximum BV of ~2100V. … In this paper, two methodologies are used to speed up the maximization of the breakdown volt-age (BV) of a vertical GaN diode that has a theoretical maximum BV of ~2100V. Firstly, we demonstrated a 5X faster accurate simulation method in Technology Computer-Aided-Design (TCAD). This allows us to find 50% more numbers of high BV (>1400V) designs at a given simulation time. Secondly, a machine learning (ML) model is developed using TCAD-generated data and used as a surrogate model for differential evolution optimization. It can inversely design an out-of-the-training-range structure with BV as high as 1887V (89% of the ideal case) compared to ~1100V designed with human domain expertise.
Quantum computing with its inherent parallelism provides a quantum advantage over classical computing. Its potential to offer breakthrough advances in various areas of science and engineering is foreseen. Machine learning … Quantum computing with its inherent parallelism provides a quantum advantage over classical computing. Its potential to offer breakthrough advances in various areas of science and engineering is foreseen. Machine learning is one of the key areas where the power of quantum computing can be utilized. Though many machine learning algorithms have been successfully developed to solve a variety of problems in the past decades, these algorithms take a long time to train. Also working on today's colossal datasets makes these algorithms computationally intensive. Quantum machine learning by utilizing the concepts of superposition and entanglement promises a solution to this problem. Quantum machine learning algorithms are in surface for the past few years and majority of the current research has dealt with the two machine learning problems namely classification and clustering. In this paper, a brief review of the recent techniques and algorithms of quantum machine learning and its scope in solving real world problems is studied.
The small size and excellent integrability of silicon metal-oxide-semiconductor (SiMOS) quantum dot spin qubits make them an attractive system for mass-manufacturable, scaled-up quantum processors. Furthermore, classical control electronics can be … The small size and excellent integrability of silicon metal-oxide-semiconductor (SiMOS) quantum dot spin qubits make them an attractive system for mass-manufacturable, scaled-up quantum processors. Furthermore, classical control electronics can be integrated on-chip, in-between the qubits, if an architecture with sparse arrays of qubits is chosen. In such an architecture qubits are either transported across the chip via shuttling or coupled via mediating quantum systems over short-to-intermediate distances. This paper investigates the charge and spin characteristics of an elongated quantum dot-a so-called jellybean quantum dot-for the prospects of acting as a qubit-qubit coupler. Charge transport, charge sensing, and magneto-spectroscopy measurements are performed on a SiMOS quantum dot device at mK temperature and compared to Hartree-Fock multi-electron simulations. At low electron occupancies where disorder effects and strong electron-electron interaction dominate over the electrostatic confinement potential, the data reveals the formation of three coupled dots, akin to a tunable, artificial molecule. One dot is formed centrally under the gate and two are formed at the edges. At high electron occupancies, these dots merge into one large dot with well-defined spin states, verifying that jellybean dots have the potential to be used as qubit couplers in future quantum computing architectures.
A blueprint for exploiting symmetries in the construction of variational quantum learning models that can result in improved generalization performance is developed and demonstrated on practical problems. A blueprint for exploiting symmetries in the construction of variational quantum learning models that can result in improved generalization performance is developed and demonstrated on practical problems.
The semiconductors industry benefits greatly from the integration of machine learning (ML)-based techniques in technology computer-aided design (TCAD) methods. The performance of ML models, however, relies heavily on the quality … The semiconductors industry benefits greatly from the integration of machine learning (ML)-based techniques in technology computer-aided design (TCAD) methods. The performance of ML models, however, relies heavily on the quality and quantity of training datasets. They can be particularly difficult to obtain in the semiconductor industry due to the complexity and expense of the device fabrication. In this article, we propose a self-augmentation strategy for improving ML-based device modeling using variational autoencoder (VAE)-based techniques. These techniques require a small number of experimental data points and do not rely on TCAD tools. To demonstrate the effectiveness of our approach, we apply it to a deep neural network (DNN)-based prediction task for the ohmic resistance value in gallium nitride (GaN) devices. A 70% reduction in mean absolute error (MAE) when predicting experimental results is achieved. The inherent flexibility of our approach allows easy adaptation to various tasks, thus making it highly relevant to many applications of the semiconductor industry.
Quantum Machine Learning (QML) models promise to have some computational (or quantum) advantage for classifying supervised datasets (e.g., satellite images) over some conventional Deep Learning (DL) techniques due to their … Quantum Machine Learning (QML) models promise to have some computational (or quantum) advantage for classifying supervised datasets (e.g., satellite images) over some conventional Deep Learning (DL) techniques due to their expressive power via their local effective dimension. There are, however, two main challenges regardless of the promised quantum advantage: 1) Currently available quantum bits (qubits) are very small in number, while real-world datasets are characterized by hundreds of high-dimensional elements ( <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">i.e.</i> features). Additionally, there is not a single unified approach for embedding real-world high-dimensional datasets in a limited number of qubits. 2) Some real-world datasets are too small for training intricate QML networks. Hence, to tackle these two challenges for benchmarking and validating QML networks on real-world, small, and high-dimensional datasets in one-go, we employ quantum transfer learning comprising a classical VGG16 layer and a multi-qubit QML layer. We use real-amplitude and strongly-entangling N-layer QML networks with and without data re-uploading layers as a multi-qubit QML layer, and evaluate their expressive power quantified by using their local effective dimension; the lower the local effective dimension of a QML network, the better its performance on unseen data. As datasets, we utilize Eurosat and synthetic datasets ( <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">i.e.</i> easy-to-classify datasets), and an UC Merced Land Use dataset ( <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">i.e.</i> a hard-to-classify dataset). Our numerical results show that the strongly-entangling N-layer QML network has a lower local effective dimension than the real-amplitude QML network and outperforms it on the hard-to-classify datasets. In addition, quantum transfer learning helps tackle the two challenges mentioned above for benchmarking and validating QML networks on real-world, small, and high-dimensional datasets.
In most of physics, it is normal to obtain information by analyzing noisy data. The paradigm of quantum computing has been a simplified version of this: one measurement of a two-level system gives one bit of reliable information about the result of a computation. But real-world quantum computers do not work this way: the noisiness of quantum evolution also requires good strategies for extracting information. This review covers many error-mitigation strategies used in present-day quantum processors. These strategies make it much more feasible to obtain useful results before fault tolerance is achieved.
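As one concrete example of such a strategy, the toy snippet below implements zero-noise extrapolation, a widely used mitigation technique: the same observable is measured at artificially amplified noise levels and extrapolated back to the zero-noise limit. The measured values here are invented placeholders standing in for hardware readouts.

```python
# Toy zero-noise extrapolation: fit expectation values measured at amplified
# noise levels and evaluate the fit at zero noise.
import numpy as np

scale_factors = np.array([1.0, 2.0, 3.0])    # noise amplification factors
noisy_values = np.array([0.82, 0.67, 0.54])  # <Z> measured at each factor

# Fit a low-degree polynomial in the scale factor and evaluate it at zero.
coeffs = np.polyfit(scale_factors, noisy_values, deg=1)
mitigated = np.polyval(coeffs, 0.0)
print(f"zero-noise estimate: {mitigated:.3f}")  # ~0.957 for these numbers
```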
In this research, we address the urgent need for accurate prediction of in-hospital survival periods for patients diagnosed with pancreatic cancer (PC), a disease notorious for its late-stage diagnosis and dismal survival rates. Utilizing machine learning (ML) technologies, we focus on the application of variational autoencoders (VAE) for data augmentation and ensemble learning techniques for enhancing predictive accuracy. Our dataset comprises biochemical blood test (BBT) results from stage II/III PC patients; it is limited in size, making the VAE's capability for data augmentation particularly valuable. The study employs several ML models, including Elastic Net (EN), Decision Trees (DT), and radial basis function support vector machine (RBF-SVM), and evaluates their performance using metrics such as mean absolute error (MAE) and mean squared error (MSE). Our findings reveal that EN, DT, and RBF-SVM are the most effective models within a VAE-augmented framework, showing substantial improvements in predictive accuracy. An ensemble learning approach further optimized the results, reducing the MAE to approximately 10 days. These advancements hold significant implications for the field of precision medicine, enabling more targeted therapeutic interventions and optimizing healthcare resource allocation. The study can also serve as a foundational step toward more personalized and effective healthcare solutions for PC patients.
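The ensemble step might look like the following scikit-learn sketch, which simply averages predictions from the three model families named in the abstract; the hyperparameters and the upstream VAE augmentation are assumptions, not the study's reported settings.

```python
# Sketch of the ensemble step: average EN, DT, and RBF-SVM predictions.
from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import ElasticNet
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

ensemble = VotingRegressor([
    ("en", ElasticNet(alpha=0.1)),
    ("dt", DecisionTreeRegressor(max_depth=4)),
    ("svm", SVR(kernel="rbf", C=10.0)),
])
# ensemble.fit(X_augmented, y)   # X_augmented: real + VAE-generated samples
# predictions = ensemble.predict(X_test)
```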
Single electron spins bound to multi-phosphorus nuclear spin registers in silicon have demonstrated fast (0.8 ns) two-qubit …
Due to its excellent material performance, the AlGaN/GaN high-electron-mobility transistor (HEMT) provides a broad platform for biosensing. The high density and mobility of the two-dimensional electron gas (2DEG) at the AlGaN/GaN interface, induced by the polarization effect, together with the short distance between the 2DEG channel and the surface, can improve the sensitivity of biosensors. The high thermal and chemical stability also benefits HEMT-based biosensors operating under, for example, high temperatures and chemically harsh environments. This makes biosensors with excellent sensitivity, selectivity, reliability, and repeatability achievable using commercialized semiconductor materials. To synthesize the recent developments and advantages in this research field, we review the structures, operating mechanisms, and applications of various AlGaN/GaN HEMT-based biosensors. This review will help new researchers learn the basics of the topic and aid the development of the next generation of AlGaN/GaN HEMT-based biosensors.
Schizophrenia is a serious chronic mental disorder that significantly affects daily life. Electroencephalography (EEG), a method used to measure mental activity in the brain, is among the techniques employed in the diagnosis of schizophrenia. The symptoms of the disease typically begin in childhood and become more pronounced with age; however, it can be managed with specific treatments. Computer-aided methods can be used to achieve an early diagnosis of this illness. In this study, various machine learning algorithms, together with an emerging quantum machine learning algorithm, were used to detect schizophrenia from EEG signals. Principal component analysis (PCA) was applied to prepare the data for quantum systems: the dimensionality-reduced data were transformed into qubit form using various feature maps and provided as input to the quantum support vector machine (QSVM) algorithm. The QSVM algorithm was thus applied with different qubit numbers and different circuits, alongside classical machine learning algorithms. All analyses were conducted in the simulator environment of the IBM Quantum Platform. In classifying this EEG dataset, the QSVM algorithm demonstrated superior performance, reaching a 100% success rate when using Pauli X and Pauli Z feature maps. This study serves as evidence that quantum machine learning algorithms can be effectively utilized in the field of healthcare.
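A minimal sketch of such a PCA-to-QSVM pipeline in Qiskit, assuming the qiskit-machine-learning package (class paths can shift between versions); the qubit count is a placeholder and the EEG feature matrix is assumed to be loaded upstream.

```python
# Sketch: reduce features with PCA, encode them with a Pauli-Z feature map,
# and classify with a fidelity-quantum-kernel SVM.
from sklearn.decomposition import PCA
from qiskit.circuit.library import ZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

n_qubits = 4
# X_train, y_train: EEG feature matrix and labels, assumed loaded upstream.
# X_reduced = PCA(n_components=n_qubits).fit_transform(X_train)
# (one principal component per qubit)

feature_map = ZFeatureMap(feature_dimension=n_qubits, reps=2)  # Pauli-Z encoding
kernel = FidelityQuantumKernel(feature_map=feature_map)
qsvm = QSVC(quantum_kernel=kernel)
# qsvm.fit(X_reduced, y_train)
```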
The accumulation of physical errors [1–3] prevents the execution of large-scale algorithms on current quantum computers. Quantum error correction [4] promises a solution by encoding k logical qubits onto a larger number n of physical qubits, such that the physical errors are suppressed enough to allow running a desired computation with tolerable fidelity. Quantum error correction becomes practically realizable once the physical error rate is below a threshold value that depends on the choice of quantum code, syndrome measurement circuit, and decoding algorithm [5]. We present an end-to-end quantum error correction protocol that implements fault-tolerant memory based on a family of low-density parity-check codes [6]. Our approach achieves an error threshold of 0.7% for the standard circuit-based noise model, on par with the surface code [7–10], which for 20 years was the leading code in terms of error threshold. The syndrome measurement cycle for a length-n code in our family requires n ancillary qubits and a depth-8 circuit composed of CNOT gates, qubit initializations, and measurements. The required qubit connectivity is a degree-6 graph composed of two edge-disjoint planar subgraphs. In particular, we show that 12 logical qubits can be preserved for nearly 1 million syndrome cycles using 288 physical qubits in total, assuming a physical error rate of 0.1%, whereas the surface code would require nearly 3,000 physical qubits to achieve the same performance. Our findings bring demonstrations of a low-overhead fault-tolerant quantum memory within reach of near-term quantum processors.
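The general encode/measure-syndrome/decode pipeline can be illustrated on the 3-qubit repetition code, a deliberately tiny stand-in for the LDPC codes above (k = 1 logical bit on n = 3 physical bits, protecting against a single bit-flip):

```python
# Toy code/syndrome/decoder pipeline on the 3-qubit repetition code.
# Syndromes are parities of neighbouring bits; a lookup table corrects
# any single bit-flip.
import numpy as np

H = np.array([[1, 1, 0],   # parity check on bits 0,1
              [0, 1, 1]])  # parity check on bits 1,2

def decode(received: np.ndarray) -> np.ndarray:
    syndrome = tuple(H @ received % 2)
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]
    corrected = received.copy()
    if flip is not None:
        corrected[flip] ^= 1
    return corrected

print(decode(np.array([0, 1, 0])))  # single flip on bit 1 -> [0 0 0]
```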
Recent work has proposed solving the k-means clustering problem on quantum computers via the Quantum Approximate Optimization Algorithm (QAOA) and coreset techniques. Although the current method demonstrates the possibility of quantum k-means clustering, it does not ensure high accuracy and consistency across a wide range of datasets. Existing coreset techniques are designed for classical algorithms; no quantum-tailored coreset technique has been designed to boost the accuracy of quantum algorithms. This study proposes solving the k-means clustering problem with the variational quantum eigensolver (VQE) and a customized coreset method, the Contour coreset, which is formulated with a specific focus on quantum algorithms. Extensive simulations with synthetic and real-life data demonstrate that the VQE+Contour coreset approach outperforms existing QAOA+coreset k-means clustering approaches with higher accuracy and lower standard deviation. This research demonstrates that quantum-tailored coreset techniques can remarkably boost the performance of quantum algorithms compared to generic off-the-shelf coreset techniques.
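For intuition about what a coreset buys, here is a generic lightweight-coreset sampler in NumPy (a known classical construction, not the Contour coreset): a small importance-weighted subset of points that approximates the full k-means cost, so a quantum optimizer only needs to handle the subset.

```python
# Generic lightweight coreset for k-means: points far from the data mean are
# sampled more often and down-weighted, keeping the weighted cost unbiased.
import numpy as np

def lightweight_coreset(X: np.ndarray, m: int, rng=np.random.default_rng(0)):
    d2 = np.sum((X - X.mean(axis=0)) ** 2, axis=1)
    q = 0.5 / len(X) + 0.5 * d2 / d2.sum()  # mixed sampling distribution
    idx = rng.choice(len(X), size=m, p=q)
    weights = 1.0 / (m * q[idx])            # unbiased importance weights
    return X[idx], weights

X = np.random.default_rng(1).normal(size=(1000, 2))
coreset, w = lightweight_coreset(X, m=32)   # 32 weighted points stand in for 1000
```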
The rapid growth of Internet of Things (IoT) devices necessitates efficient data compression techniques to manage the vast amounts of data they generate. Chemiresistive sensor arrays (CSAs), a simple yet essential component of IoT systems, produce large datasets due to their simultaneous multi-sensor operation. Classical principal component analysis (cPCA), a widely used solution for dimensionality reduction, often struggles to preserve critical information in complex datasets. In this study, the self-adaptive quantum kernel (SAQK) PCA is introduced as a complementary approach to enhance information retention. The results show that SAQK PCA outperforms cPCA in various back-end machine-learning tasks, particularly in low-dimensional scenarios where quantum bit resources are constrained. Although the overall improvement is modest in some cases, SAQK PCA proves especially effective at preserving group structures within low-dimensional data. These findings underscore the potential of noisy intermediate-scale quantum (NISQ) computers to transform data processing in real-world IoT applications by improving the efficiency and reliability of CSA data compression and readout, despite current qubit limitations.
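A sketch of how a quantum kernel can feed classical kernel PCA, standing in for SAQK PCA (whose self-adaptive kernel construction is not reproduced here); it assumes qiskit-machine-learning's `FidelityQuantumKernel` and scikit-learn's precomputed-kernel `KernelPCA`, with the sensor data assumed loaded upstream.

```python
# Sketch: a quantum kernel supplies the Gram matrix consumed by classical
# kernel PCA, yielding a low-dimensional embedding of sensor readings.
from sklearn.decomposition import KernelPCA
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

n_qubits = 4
feature_map = ZZFeatureMap(feature_dimension=n_qubits, reps=1)
qkernel = FidelityQuantumKernel(feature_map=feature_map)

# X: sensor-array readings scaled to n_qubits features (assumed loaded).
# gram = qkernel.evaluate(x_vec=X)                  # quantum Gram matrix
# kpca = KernelPCA(n_components=2, kernel="precomputed")
# X_compressed = kpca.fit_transform(gram)           # low-dimensional embedding
```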