Radiation therapy of thoracic and abdominal tumors requires incorporating respiratory motion into treatment. To precisely account for a patient's respiratory motion and predict respiratory signals, a generalized model that predicts different types of patients' respiratory motion is desired. The aim of this study is to explore the feasibility of developing a long short-term memory (LSTM)-based generalized model for respiratory signal prediction. To achieve that, 1703 sets of real-time position management (RPM) data were collected from retrospective studies across three clinical institutions. These datasets were separated into training, internal validation, and external validation groups: 1187 datasets were used for model development and the remaining 516 were used to test the model's generalization power. Furthermore, an exhaustive grid search was implemented to find the optimal hyper-parameters of the LSTM model: the number of LSTM layers, the number of hidden units, the optimizer, the learning rate, the number of epochs, and the length of the time lags. The obtained model achieved superior accuracy over conventional artificial neural network (ANN) models: with a prediction window of 500 ms, the LSTM model achieved an average relative mean absolute error (MAE) of 0.037, an average root mean square error (RMSE) of 0.048, and a maximum error (ME) of 1.687 on the internal validation data, and an average relative MAE of 0.112, an average RMSE of 0.139, and an ME of 1.811 on the external validation data. Compared to an LSTM model trained with default hyper-parameters, the MAE of the optimized model decreased by 20%, indicating the importance of tuning the hyper-parameters of LSTM models to obtain superior accuracy. This study demonstrates the potential of deep LSTM models for respiratory signal prediction and illustrates the impact of the major hyper-parameters of LSTM models.
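The time-lag hyper-parameter above is the length of the input window the network sees at each step. As a minimal sketch (with a synthetic respiratory-like trace; the study used clinical RPM data and an LSTM, not the naive baseline shown here), the snippet below builds lagged supervised windows and computes the MAE/RMSE metrics the abstract reports:

```python
import numpy as np

def make_lagged_windows(signal, n_lags):
    """Build (X, y) pairs where each X row holds n_lags past samples
    and y is the next sample -- the supervised form an LSTM trains on."""
    X = np.stack([signal[i:i + n_lags] for i in range(len(signal) - n_lags)])
    y = signal[n_lags:]
    return X, y

def mae(pred, true):
    return float(np.mean(np.abs(pred - true)))

def rmse(pred, true):
    return float(np.sqrt(np.mean((pred - true) ** 2)))

# Hypothetical stand-in for an RPM trace: 60 s at 25 Hz, ~4 s breathing period.
t = np.linspace(0, 60, 1500)
trace = np.sin(2 * np.pi * t / 4.0)

X, y = make_lagged_windows(trace, n_lags=50)
# Naive persistence baseline: predict the last observed sample.
pred = X[:, -1]
print(X.shape, round(mae(pred, y), 4), round(rmse(pred, y), 4))
```

An LSTM would replace the persistence baseline: each row of X is a sequence fed to the recurrent layers, and y is the regression target.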
Summary Multiphysics simulations are used to solve coupled physics problems in a wide variety of natural and engineering systems. Combining models of different physical processes into one computational tool is the essence of multiphysics simulation. A general and widely used coupling approach is to combine several mature, well-developed ‘single-physics’ numerical solvers into an iteration-based computational package. This approach iterates over the constituent solvers at every time step to obtain globally converged solutions. During the simulation, each single-physics component is solved repeatedly until the feedback has been adequately resolved. However, each component's solution only needs to be as precise as the feedback it receives from the other components; computational effort expended to exceed that precision is wasted. This issue is usually called over‐solving. This paper proposes and discusses several methods that circumvent over‐solving. The residual balance, relaxed relative tolerance, alternating nonlinear, and solution interruption methods are described, and their performance is compared with Picard iteration. A steady-state problem with coupling along an interface and a transient problem with two fields coupled throughout the spatial domain are solved as examples. These problems demonstrate that the savings from eliminating over‐solving can reach at least 30% without any loss in accuracy. Copyright © 2017 John Wiley & Sons, Ltd.
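The over-solving idea can be seen on a toy coupled fixed-point problem: below, an outer Picard loop over two scalar "single-physics" solves compares a fixed tight inner tolerance against an inner tolerance relaxed in proportion to the outer residual (a minimal sketch with made-up equations, not the paper's solvers or its exact tolerance rules):

```python
import math

def inner_solve(g, x0, tol, counter):
    # Inner fixed-point "solver" standing in for a single-physics code;
    # counter[0] accumulates the total number of inner iterations.
    x = x0
    while True:
        x_new = g(x)
        counter[0] += 1
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

def picard(tol_rule, max_outer=200):
    # Outer Picard loop over two coupled (hypothetical) scalar equations:
    #   x = 0.3*cos(x + y),   y = 0.3*sin(x - y)
    x = y = 0.0
    counter = [0]
    res = 1.0
    for _ in range(max_outer):
        tol = tol_rule(res)  # inner tolerance chosen by the strategy
        x_new = inner_solve(lambda s: 0.3 * math.cos(s + y), x, tol, counter)
        y_new = inner_solve(lambda s: 0.3 * math.sin(x_new - s), y, tol, counter)
        res = max(abs(x_new - x), abs(y_new - y))
        x, y = x_new, y_new
        if res < 1e-8:
            break
    return (x, y), counter[0]

# Always solving the components tightly (over-solving) ...
sol_tight, inner_tight = picard(lambda r: 1e-12)
# ... versus an inner tolerance proportional to the outer residual.
sol_relaxed, inner_relaxed = picard(lambda r: max(0.1 * r, 1e-10))
print(inner_tight, inner_relaxed)
```

Both runs reach the same fixed point, but the relaxed strategy spends far fewer inner iterations early on, when the feedback is still crude.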
When the chord length sampling method is applied to radiation transport problems in random media where inclusions are randomly distributed in a background region, inaccuracy arises from two major factors: the memory effect and the boundary effect. In this article, by applying chord length sampling to fixed-source and eigenvalue problems in 1-D binary stochastic media, we investigate how and why these two effects give rise to inaccuracy in the final solutions. The investigation is based on a series of radiation transport simulations calculating the reflection rate, flux distribution, and effective multiplication factor.
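The sampling kernel at the core of chord length sampling draws material segment lengths from exponential distributions (Markovian mixing); the memory effect arises because the sampled geometry is not remembered between particle histories. A minimal sketch of that kernel, used here to build one 1-D binary-medium realization and check the implied volume fraction (all parameters are hypothetical):

```python
import random

random.seed(0)

def sample_realization(length, mean_chords):
    """Build one 1-D binary-medium realization by alternating material
    segments with exponentially distributed chord lengths."""
    segs, x, mat = [], 0.0, 0
    while x < length:
        chord = random.expovariate(1.0 / mean_chords[mat])
        segs.append((mat, min(chord, length - x)))  # truncate at the boundary
        x += chord
        mat = 1 - mat  # alternate background (0) and inclusion (1)
    return segs

mean_chords = (2.0, 0.5)  # hypothetical mean chord lengths for materials 0 and 1
segs = sample_realization(1000.0, mean_chords)
# Volume fraction of inclusions should approach 0.5 / (2.0 + 0.5) = 0.2.
frac1 = sum(l for m, l in segs if m == 1) / 1000.0
print(len(segs), round(frac1, 3))
```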
Radiation hardening of power MOSFETs (metal oxide semiconductor field effect transistors) is of the highest priority for sustaining high-power systems in the space radiation environment. Silicon carbide (SiC)-based power electronics are being investigated as a strong alternative for high-power spaceborne power electronic systems, but SiC MOSFETs have been shown to be most susceptible to single-event burnout (SEB) among space radiation effects. The current knowledge of SiC MOSFET device degradation and failure mechanisms is reviewed in this paper. Additionally, the viability of radiation-tolerant SiC MOSFET designs and the modeling methods for SEB phenomena are evaluated. A merit system is proposed that weighs radiation tolerance against nominal electrical performance. Criteria needed for high-fidelity SEB simulations are also reviewed. This paper stands as an analytical review intended to support the development of radiation-hardened power devices for space and extreme environment applications.
This research studies a self-supervised 3D clothing reconstruction method, which recovers the geometric shape and texture of human clothing from a single image. Compared with existing methods, we observe that three primary challenges remain: (1) 3D ground-truth meshes of clothing are usually inaccessible due to annotation difficulty and time cost; (2) conventional template-based methods are limited in modeling non-rigid objects, e.g., handbags and dresses, which are common in fashion images; (3) inherent ambiguity compromises model training, such as the dilemma between a large shape with a remote camera and a small shape with a close camera. To address the above limitations, we propose a causality-aware self-supervised learning method to adaptively reconstruct 3D non-rigid objects from 2D images without 3D annotations. In particular, to resolve the inherent ambiguity among four implicit variables, i.e., camera position, shape, texture, and illumination, we introduce an explainable structural causal map (SCM) to build our model. The proposed model structure follows the spirit of the causal map, explicitly considering the prior template in camera estimation and shape prediction. During optimization, the causality intervention tool, i.e., two expectation-maximization loops, is deeply embedded in our algorithm to (1) disentangle the four encoders and (2) facilitate the prior template. Extensive experiments on two 2D fashion benchmarks (ATR and Market-HQ) show that the proposed method yields high-fidelity 3D reconstruction. Furthermore, we verify the scalability of the proposed method on a fine-grained bird dataset, i.e., CUB. The code is available at https://github.com/layumi/3D-Magic-Mirror .
Relying on flexible-access interconnects to scalable storage and compute resources, data centers deliver critical communications connectivity among numerous servers to support the housed applications and services. To provide high-speed, long-distance communications, data centers have turned to fiber interconnections. With sharply increasing traffic volumes, data centers are expected to further deploy optical switches in their systems infrastructure to implement full optical switching. This paper first summarizes the topologies and traffic characteristics of data centers and analyzes the reasons for, and the importance of, moving to optical switching. Recent techniques related to optical switching and the main challenges limiting the practical deployment of optical switches in data centers are also summarized and reported.
Myocardial perfusion imaging using SPECT is widely utilized to diagnose coronary artery diseases, but image quality can be negatively affected in low-dose and few-view acquisition settings. Although various deep learning methods have been introduced to improve image quality from low-dose or few-view SPECT data, previous approaches often fail to generalize across different acquisition settings, limiting their applicability in practice. This work introduces DiffSPECT-3D, a diffusion framework for 3D cardiac SPECT imaging that effectively adapts to different acquisition settings without requiring further network re-training or fine-tuning. Using both image and projection data, a consistency strategy is proposed to ensure that diffusion sampling at each step aligns with the low-dose/few-view projection measurements, the image data, and the scanner geometry, thus enabling generalization to different low-dose/few-view settings. Incorporating anatomical spatial information from CT and a total variation constraint, we propose a 2.5D conditional strategy that allows DiffSPECT-3D to observe 3D contextual information from the entire image volume, addressing the memory issues of 3D diffusion models. We extensively evaluated the proposed method on 1,325 clinical 99mTc tetrofosmin stress/rest studies from 795 patients. Each study was reconstructed into 5 different low-count levels and 5 different few-view levels for model evaluation, ranging from 1% to 50% of full counts and from 1 to 9 views, respectively. Validated against cardiac catheterization results and diagnostic comments from nuclear cardiologists, the presented results show the potential to achieve low-dose and few-view SPECT imaging without compromising clinical performance. Additionally, DiffSPECT-3D can be directly applied to full-dose SPECT images to further improve image quality, especially in a low-dose stress-first cardiac SPECT imaging protocol.
Matrices over division rings have been applied in many fields, such as quantum physics and computer graphics. However, since multiplication in a division ring is non-commutative, matrix theory over division rings has received comparatively little attention. This paper studies additive operators that preserve idempotents over division rings, improving existing results on linear operators preserving idempotents over division rings.
Recently, hybridized monolayers consisting of hexagonal boron nitride (h-BN) phases inside a graphene layer have been synthesized and shown to be an effective way of opening a band gap in graphene monolayers [1]. In this letter, we report an ab initio density functional theory (DFT)-based study of the effect of h-BN domain size on the elastic properties of graphene/boron nitride hybrid monolayers (h-BNC). We found that both the in-plane stiffness and the longitudinal sound velocity of h-BNC decrease linearly with h-BN concentration.
Ramp merging is a typical application of cooperative intelligent transportation systems (C-ITS). Vehicle trajectories perceived by roadside sensors are an important complement to the limited field of view of on-board perception. A vehicle tracking and trajectory denoising algorithm is proposed in this paper to take full advantage of roadside cameras for vehicle trajectory and speed profile estimation. A dynamic speed guidance algorithm is proposed to help on-ramp vehicles merge into the mainline smoothly, even in a non-cooperative environment where mainline vehicles are not expected to slow down to accommodate on-ramp vehicles. On-site experiments were carried out in a merging area of the Hangzhou Belt Highway to test our prototype system, and simulation analysis shows the proposed algorithm can achieve significant fuel savings during the ramp merging process.
In this work, we present the Stellar Spectra Factory (SSF), a tool to generate empirically based stellar spectra from arbitrary stellar atmospheric parameters. Relative flux-calibrated empirical spectra can be predicted by SSF given arbitrary effective temperature, surface gravity, and metallicity. SSF constructs its interpolation approach on the SLAM, using the ATLAS-A library as the training dataset. SSF is composed of four data-driven sub-models to predict empirical stellar spectra. SSF-N can generate spectra from A to K type and some M giant stars, covering 3700 < Teff < 8700 K, 0 < logg < 6 dex, and -1.5 < [M/H] < 0.5 dex. SSF-gM is mainly used to predict M giant spectra with 3520 < Teff < 4000 K and -1.5 < [M/H] < 0.4 dex. SSF-dM generates M dwarf spectra with 3295 < Teff < 4040 K and -1.0 < [M/H] < 0.1 dex. And SSF-B can predict B-type spectra with 9000 < Teff < 24000 K and -5.2 < MG < 1.5 mag. The accuracy of the predicted spectra is validated by comparing their flux to spectra with the same stellar parameters selected from the known spectral libraries MILES and MaStar. The average flux difference over optical wavelengths between the predicted spectra and their counterparts in MILES and MaStar is less than 5%. Further verification compares the magnitudes calculated by integrating the predicted spectra with observations in the PS1 and APASS bands for stars with the same stellar parameters. No significant systematic difference is found between the predicted spectra and the photometric observations. The uncertainty is 0.08 mag in the r band for SSF-gM when compared with stars with the same stellar parameters selected from PS1, and 0.31 mag in the i band for SSF-dM when compared with stars selected from APASS.
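The magnitude check described above is synthetic photometry: integrating a spectrum through a filter transmission curve and converting to a magnitude. A minimal AB-magnitude sketch (the Gaussian filter curve and the grid here are hypothetical; the actual comparison uses the published PS1/APASS bandpasses):

```python
import numpy as np

def trapz(y, x):
    # Simple trapezoidal rule (avoids NumPy-version differences around np.trapz).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def synthetic_ab_mag(wave, f_lambda, trans):
    """AB magnitude of a spectrum f_lambda (erg/s/cm^2/A) integrated
    through a filter transmission curve, photon-counting convention."""
    c = 2.998e18  # speed of light in Angstrom/s
    f_nu = trapz(f_lambda * trans * wave, wave) / trapz(trans * c / wave, wave)
    return -2.5 * np.log10(f_nu) - 48.60

# Sanity check: a flat-f_nu reference source of 3631 Jy should give m_AB ~ 0.
wave = np.linspace(5500.0, 7000.0, 500)            # hypothetical r-like band
trans = np.exp(-0.5 * ((wave - 6250.0) / 300.0) ** 2)
f_lambda = 3.631e-20 * 2.998e18 / wave ** 2        # f_nu = 3631 Jy everywhere
mag = synthetic_ab_mag(wave, f_lambda, trans)
print(round(mag, 3))
```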
In this article, we demonstrate the novel use of Proper Generalized Decomposition (PGD) to separate the axial and, optionally, polar dimensions of neutron transport. In doing so, the resulting Reduced-Order Models (ROMs) can exploit the fact that nuclear reactors tend to be tall but geometrically simple in the axial direction $z$, so the 3D neutron flux distribution often admits a low-rank "2D/1D" approximation. Through PGD, this approximation is computed by alternately solving 2D and 1D sub-models, as in existing 2D/1D models of reactor physics. However, the present methodology is more general in that the decomposition is arbitrary-rank rather than rank-one, and no simplifying approximations of the transverse leakage are made. To begin, we derive two original models: axial PGD -- which separates only $z$ and the axial streaming direction $\vartheta\in\{-1,+1\}$ -- and axial-polar PGD -- which separates both $z$ and the polar angle $\mu$ from the radial domain. Additionally, we grant that the energy dependence $E$ may be ascribed to either radial or axial modes, or both, bringing the total number of candidate 2D/1D ROMs to six. To assess performance, these PGD ROMs are applied to two few-group benchmarks characteristic of Light Water Reactors. Therein, we find that both the axial and axial-polar ROMs are convergent and that the latter are often more economical than the former. Ultimately, given the popularity of 2D/1D methods in reactor physics, we expect a PGD ROM which achieves a similar effect, but perhaps with superior accuracy, a quicker runtime, and/or broader applicability, would be eminently useful, especially for full-core problems.
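The alternating radial/axial solve at the heart of PGD can be illustrated on a purely algebraic analogue: greedy rank-one enrichment of a matrix by alternating least-squares updates of a "radial" mode and an "axial" mode (a toy sketch; the paper's sub-models are transport equations, not matrices):

```python
import numpy as np

def pgd_modes(U, n_modes=2, sweeps=20):
    """Greedy rank-one enrichment in the spirit of PGD: each mode is found
    by alternating between a radial update R and an axial update Z that
    best fit the current residual, then the mode is subtracted."""
    residual = U.copy()
    modes = []
    for _ in range(n_modes):
        Z = np.ones(U.shape[1])
        R = residual @ Z / (Z @ Z)
        for _ in range(sweeps):
            Z = residual.T @ R / (R @ R)  # "axial" (1D) sub-problem
            R = residual @ Z / (Z @ Z)    # "radial" (2D) sub-problem
        modes.append((R.copy(), Z.copy()))
        residual = residual - np.outer(R, Z)
    return modes, residual

r = np.linspace(0.0, 1.0, 40)
z = np.linspace(0.0, 1.0, 60)
# Hypothetical flux-like field: a dominant separable mode plus a weak
# cross term, so the sampled matrix has exact rank two.
U = np.outer(np.cos(np.pi * r), np.sin(np.pi * z)) + 0.05 * np.outer(r, z ** 2)
modes, residual = pgd_modes(U, n_modes=2)
err = np.linalg.norm(residual) / np.linalg.norm(U)
print(len(modes), err)
```

Two modes recover the rank-two field essentially exactly, mirroring how a low-rank 2D/1D expansion can capture an axially simple flux with few terms.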
We obtain the existence, nonexistence, and multiplicity of positive solutions with prescribed mass for nonlinear Schr\"{o}dinger equations in bounded domains via a global bifurcation approach. The nonlinearities in this paper can be mass supercritical, critical, subcritical, or some mix of these cases, and the equation can be autonomous or non-autonomous. This generalizes a result of Noris, Tavares and Verzini [\emph{Anal. PDE}, 7 (8) (2014) 1807-1838], where the equation is autonomous with homogeneous nonlinearities. In addition, we prove some orbital stability and instability results.
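For context, prescribed-mass ("normalized solution") problems of this type are generically posed as follows (a schematic statement only; the paper's precise hypotheses on the nonlinearity and the domain are not reproduced here):

```latex
% Find (\lambda, u), with \lambda \in \mathbb{R} arising as a Lagrange multiplier:
\begin{cases}
-\Delta u + \lambda u = f(u) & \text{in } \Omega,\\
u = 0 & \text{on } \partial\Omega,\\
\int_\Omega u^2 \, dx = \rho^2, \quad u > 0 & \text{in } \Omega.
\end{cases}
```

The mass $\rho^2$ is prescribed a priori while the frequency $\lambda$ is unknown, which is what makes a global bifurcation analysis in $\lambda$ natural.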
This paper concerns the existence of positive and multiple normalized solutions to Schr\"{o}dinger equations with general nonlinearities in star-shaped bounded domains. We obtain a normalized ground state by searching for a local minimizer. Furthermore, relying on a monotonicity trick, we obtain another positive normalized solution via the Mountain Pass theorem, and multiple solutions by establishing some special linking structures. Moreover, we obtain the existence of nonradial normalized solutions to Schr\"{o}dinger equations in balls. The assumptions on the nonlinearities in this paper are (almost) optimal, and the results are, to our knowledge, the most recent for normalized solutions of Schr\"{o}dinger equations in bounded domains.
We present Step-Video-T2V, a state-of-the-art text-to-video pre-trained model with 30B parameters and the ability to generate videos up to 204 frames in length. A deep-compression Variational Autoencoder, Video-VAE, is designed for video generation tasks, achieving 16x16 spatial and 8x temporal compression ratios while maintaining exceptional video reconstruction quality. User prompts are encoded using two bilingual text encoders to handle both English and Chinese. A DiT with 3D full attention is trained using Flow Matching and is employed to denoise input noise into latent frames. A video-based DPO approach, Video-DPO, is applied to reduce artifacts and improve the visual quality of the generated videos. We also detail our training strategies and share key observations and insights. Step-Video-T2V's performance is evaluated on a novel video generation benchmark, Step-Video-T2V-Eval, demonstrating its state-of-the-art text-to-video quality compared with both open-source and commercial engines. Additionally, we discuss the limitations of the current diffusion-based model paradigm and outline future directions for video foundation models. We make both Step-Video-T2V and Step-Video-T2V-Eval available at https://github.com/stepfun-ai/Step-Video-T2V. The online version can be accessed from https://yuewen.cn/videos as well. Our goal is to accelerate the innovation of video foundation models and empower video content creators.
We present Step-Video-T2V, a state-of-the-art text-to-video pre-trained model with 30B parameters and the ability to generate videos up to 204 frames in length. A deep compression Variational Autoencoder, Video-VAE, is … We present Step-Video-T2V, a state-of-the-art text-to-video pre-trained model with 30B parameters and the ability to generate videos up to 204 frames in length. A deep compression Variational Autoencoder, Video-VAE, is designed for video generation tasks, achieving 16x16 spatial and 8x temporal compression ratios, while maintaining exceptional video reconstruction quality. User prompts are encoded using two bilingual text encoders to handle both English and Chinese. A DiT with 3D full attention is trained using Flow Matching and is employed to denoise input noise into latent frames. A video-based DPO approach, Video-DPO, is applied to reduce artifacts and improve the visual quality of the generated videos. We also detail our training strategies and share key observations and insights. Step-Video-T2V's performance is evaluated on a novel video generation benchmark, Step-Video-T2V-Eval, demonstrating its state-of-the-art text-to-video quality when compared with both open-source and commercial engines. Additionally, we discuss the limitations of current diffusion-based model paradigm and outline future directions for video foundation models. We make both Step-Video-T2V and Step-Video-T2V-Eval available at https://github.com/stepfun-ai/Step-Video-T2V. The online version can be accessed from https://yuewen.cn/videos as well. Our goal is to accelerate the innovation of video foundation models and empower video content creators.
Myocardial perfusion imaging using SPECT is widely utilized to diagnose coronary artery diseases, but image quality can be negatively affected in low-dose and few-view acquisition settings. Although various deep learning … Myocardial perfusion imaging using SPECT is widely utilized to diagnose coronary artery diseases, but image quality can be negatively affected in low-dose and few-view acquisition settings. Although various deep learning methods have been introduced to improve image quality from low-dose or few-view SPECT data, previous approaches often fail to generalize across different acquisition settings, limiting their applicability in reality. This work introduced DiffSPECT-3D, a diffusion framework for 3D cardiac SPECT imaging that effectively adapts to different acquisition settings without requiring further network re-training or fine-tuning. Using both image and projection data, a consistency strategy is proposed to ensure that diffusion sampling at each step aligns with the low-dose/few-view projection measurements, the image data, and the scanner geometry, thus enabling generalization to different low-dose/few-view settings. Incorporating anatomical spatial information from CT and total variation constraint, we proposed a 2.5D conditional strategy to allow the DiffSPECT-3D to observe 3D contextual information from the entire image volume, addressing the 3D memory issues in diffusion model. We extensively evaluated the proposed method on 1,325 clinical 99mTc tetrofosmin stress/rest studies from 795 patients. Each study was reconstructed into 5 different low-count and 5 different few-view levels for model evaluations, ranging from 1% to 50% and from 1 view to 9 view, respectively. Validated against cardiac catheterization results and diagnostic comments from nuclear cardiologists, the presented results show the potential to achieve low-dose and few-view SPECT imaging without compromising clinical performance. 
Additionally, DiffSPECT-3D could be directly applied to full-dose SPECT images to further improve image quality, especially in a low-dose stress-first cardiac SPECT imaging protocol.
This paper aims to the existence of positive and multiple normalized solutions to Schr\"{o}dinger equations with general nonlinearities in star-shaped bounded domains. We obtain a normalized ground state by searching … This paper aims to the existence of positive and multiple normalized solutions to Schr\"{o}dinger equations with general nonlinearities in star-shaped bounded domains. We obtain a normalized ground state by searching for a local minimizer. Furthermore, relying on a monotonicity trick, we get another positive normalized solution by Mountain Pass theorem, and multiple solutions by establishing some special links. Moreover, we obtain the existence of nonradial normalized solutions to Schr\"{o}dinger equations in balls. The assumptions on the nonlinearities in this paper are (almost) optimal, and the results in this paper are latest for the normalized solutions of Schr\"{o}dinger equations in bounded domains.
We obtain the existence, nonexistence and multiplicity of positive solutions with prescribed mass for nonlinear Schr\"{o}dinger equations in bounded domains via a global bifurcation approach. The nonlinearities in this paper … We obtain the existence, nonexistence and multiplicity of positive solutions with prescribed mass for nonlinear Schr\"{o}dinger equations in bounded domains via a global bifurcation approach. The nonlinearities in this paper can be mass supercritical, critical, subcritical or some mixes of these cases, and the equation can be autonomous or non-autonomous. This generalizes a result in Noris, Tavares and Verzini [\emph{Anal. PDE}, 7 (8) (2014) 1807-1838], where the equation is autonomous with homogeneous nonlinearities. Besides, we have proven some orbital stability or instability results.
In this article, we demonstrate the novel use of Proper Generalized Decomposition (PGD) to separate the axial and, optionally, polar dimensions of neutron transport. Doing so, the resulting Reduced-Order Models … In this article, we demonstrate the novel use of Proper Generalized Decomposition (PGD) to separate the axial and, optionally, polar dimensions of neutron transport. Doing so, the resulting Reduced-Order Models (ROMs) can exploit the fact that nuclear reactors tend to be tall, but geometrically simple, in the axial direction $z$, and so the 3D neutron flux distribution often admits a low-rank "2D/1D" approximation. Through PGD, this approximation is computed by alternately solving 2D and 1D sub-models, like in existing 2D/1D models of reactor physics. However, the present methodology is more general in that the decomposition is arbitrary-rank, rather than rank-one, and no simplifying approximations of the transverse leakage are made. To begin, we derive two original models: that of axial PGD -- which separates only $z$ and the axial streaming direction $\vartheta\in\{-1,+1\}$ -- and axial-polar PGD -- which separates both $z$ and polar angle $\mu$ from the radial domain. Additionally, we grant that the energy dependence $E$ may be ascribed to either radial or axial modes, or both, bringing the total number of candidate 2D/1D ROMs to six. To assess performance, these PGD ROMs are applied to two few-group benchmarks characteristic of Light Water Reactors. Therein, we find both the axial and axial-polar ROMs are convergent and that the latter are often more economical than the former. Ultimately, given the popularity of 2D/1D methods in reactor physics, we expect a PGD ROM which achieves a similar effect, but perhaps with superior accuracy, a quicker runtime, and/or broader applicability, would be eminently useful, especially for full-core problems.
Radiation hardening of power MOSFETs (metal oxide semiconductor field effect transistors) is of the highest priority for sustaining high-power systems in the space radiation environment. Silicon carbide (SiC)-based power electronics … Radiation hardening of power MOSFETs (metal oxide semiconductor field effect transistors) is of the highest priority for sustaining high-power systems in the space radiation environment. Silicon carbide (SiC)-based power electronics are being investigated as a strong alternative for high power spaceborne power electronic systems. SiC MOSFETs have been shown to be most prone to single-event burnout (SEB) from space radiation. The current knowledge of SiC MOSFET device degradation and failure mechanisms are reviewed in this paper. Additionally, the viability of radiation tolerant SiC MOSFET designs and the modeling methods of SEB phenomena are evaluated. A merit system is proposed to consider the performance of radiation tolerance and nominal electrical performance. Criteria needed for high-fidelity SEB simulations are also reviewed. This paper stands as a necessary analytical review to intercede the development of radiation-hardened power devices for space and extreme environment applications.
Relying on the flexible-access interconnects to the scalable storage and compute resources, data centers deliver critical communications connectivity among numerous servers to support the housed applications and services. To provide … Relying on the flexible-access interconnects to the scalable storage and compute resources, data centers deliver critical communications connectivity among numerous servers to support the housed applications and services. To provide the high-speeds and long-distance communications, the data centers have turned to fiber interconnections. With the stringently increased traffic volume, the data centers are then expected to further deploy the optical switches into the systems infrastructure to implement the full optical switching. This paper first summarizes the topologies and traffic characteristics in data centers and analyzes the reasons and importance of moving to optical switching. Recent techniques related to the optical switching, and main challenges limiting the practical deployments of optical switches in data centers are also summarized and reported.
In this work, we present Stellar Spectra Factory (SSF), a tool to generate empirically based stellar spectra from arbitrary stellar atmospheric parameters. Given arbitrary effective temperature, surface gravity, and metallicity, SSF predicts relative flux-calibrated empirical spectra. SSF constructs its interpolation approach based on SLAM, using the ATLAS-A library as the training dataset. SSF is composed of four data-driven sub-models to predict empirical stellar spectra. SSF-N can generate spectra from A to K type and some M giant stars, covering 3700 < Teff < 8700 K, 0 < logg < 6 dex, and -1.5 < [M/H] < 0.5 dex. SSF-gM is mainly used to predict M giant spectra with 3520 < Teff < 4000 K and -1.5 < [M/H] < 0.4 dex. SSF-dM generates M dwarf spectra with 3295 < Teff < 4040 K and -1.0 < [M/H] < 0.1 dex. And SSF-B predicts B-type spectra with 9000 < Teff < 24000 K and -5.2 < MG < 1.5 mag. The accuracy of the predicted spectra is validated by comparing their flux to spectra with the same stellar parameters selected from the known spectral libraries MILES and MaStar. The average flux difference over optical wavelengths between the predicted spectra and the corresponding ones in MILES and MaStar is less than 5%. Further verification compares the magnitudes calculated by integrating the predicted spectra against PS1 and APASS observations of stars with the same stellar parameters. No significant systematic difference is found between the predicted spectra and the photometric observations. The uncertainty is 0.08 mag in the r band for SSF-gM when compared with stars with the same stellar parameters selected from PS1.
The uncertainty is 0.31 mag in the i band for SSF-dM when compared with stars with the same stellar parameters selected from APASS.
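For context, the magnitude comparison described above follows the standard photon-counting prescription of synthetic photometry (a textbook relation, not quoted from the paper), where S(λ) is the filter transmission curve and F_ref a reference spectrum defining the zero point:

```latex
m = -2.5 \log_{10}
    \frac{\int \lambda\, F_\lambda(\lambda)\, S(\lambda)\, d\lambda}
         {\int \lambda\, F_\lambda^{\mathrm{ref}}(\lambda)\, S(\lambda)\, d\lambda}
```

Integrating a predicted SSF spectrum through the PS1 or APASS band transmission in this way yields the magnitudes that are compared to the catalog photometry.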
This research aims to study a self-supervised 3D clothing reconstruction method, which recovers the geometric shape and texture of human clothing from a single image. Compared with existing methods, we observe that three primary challenges remain: (1) 3D ground-truth meshes of clothing are usually inaccessible due to annotation difficulty and time cost; (2) conventional template-based methods are limited in modeling non-rigid objects, e.g., handbags and dresses, which are common in fashion images; (3) inherent ambiguity compromises model training, such as the dilemma between a large shape with a remote camera and a small shape with a close camera. To address the above limitations, we propose a causality-aware self-supervised learning method to adaptively reconstruct 3D non-rigid objects from 2D images without 3D annotations. In particular, to resolve the inherent ambiguity among four implicit variables, i.e., camera position, shape, texture, and illumination, we introduce an explainable structural causal map (SCM) to build our model. The proposed model structure follows the spirit of the causal map, explicitly considering the prior template in camera estimation and shape prediction. During optimization, the causality intervention tool, i.e., two expectation-maximization loops, is deeply embedded in our algorithm to (1) disentangle the four encoders and (2) facilitate the prior template. Extensive experiments on two 2D fashion benchmarks (ATR and Market-HQ) show that the proposed method yields high-fidelity 3D reconstruction. Furthermore, we also verify the scalability of the proposed method on a fine-grained bird dataset, i.e., CUB. The code is available at https://github.com/layumi/3D-Magic-Mirror .
Ramp merging is a typical application of cooperative intelligent transportation systems (C-ITS). Vehicle trajectories perceived by roadside sensors are an important complement to the limited visual field of on-board perception. A vehicle tracking and trajectory denoising algorithm is proposed in this paper to take full advantage of roadside cameras for vehicle trajectory and speed profile estimation. A dynamic speed guidance algorithm is proposed to help on-ramp vehicles merge into the mainline smoothly, even in a non-cooperative environment where mainline vehicles are not expected to slow down to accommodate on-ramp vehicles. On-site experiments were carried out in a merging area of the Hangzhou Belt Highway to test our prototype system, and simulation analysis shows that our proposed algorithm can achieve significant fuel savings during the ramp merging process.
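The trajectory denoising step can be pictured with a minimal sketch. Here a centered moving average stands in for the paper's (unspecified) denoising algorithm; the function name and window size are illustrative assumptions:

```python
# Hypothetical sketch of smoothing a noisy roadside-camera trajectory.
# A centered moving average is used purely for illustration; the paper's
# actual denoising algorithm may differ.
def smooth_trajectory(positions, window=5):
    """Centered moving average; the window is clipped near both ends."""
    half = window // 2
    smoothed = []
    for i in range(len(positions)):
        lo, hi = max(0, i - half), min(len(positions), i + half + 1)
        smoothed.append(sum(positions[lo:hi]) / (hi - lo))
    return smoothed
```

Speed profiles could then be estimated by differencing the smoothed positions over the camera frame interval.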
Radiation therapy of thoracic and abdominal tumors requires incorporating respiratory motion into treatments. To precisely account for patients' respiratory motion and predict respiratory signals, a generalized model for predicting different types of patients' respiratory motion is desired. The aim of this study is to explore the feasibility of developing a long short-term memory (LSTM)-based generalized model for respiratory signal prediction. To achieve that, 1703 sets of real-time position management (RPM) data were collected from retrospective studies across three clinical institutions. These datasets were divided into training, internal validation, and external validation groups. Among all the datasets, 1187 were used for model development and the remaining 516 were used to test the model's generalization ability. Furthermore, an exhaustive grid search was implemented to find the optimal hyper-parameters of the LSTM model. The hyper-parameters considered are the number of LSTM layers, the number of hidden units, the optimizer, the learning rate, the number of epochs, and the length of the time lags. The obtained model achieved superior accuracy over conventional artificial neural network (ANN) models: with a prediction window of 500 ms, the LSTM model achieved an average relative mean absolute error (MAE) of 0.037, an average root mean square error (RMSE) of 0.048, and a maximum error (ME) of 1.687 on the internal validation data, and an average relative MAE of 0.112, an average RMSE of 0.139, and an ME of 1.811 on the external validation data.
Compared to the LSTM model trained with default hyper-parameters, the optimized model's MAE decreased by 20%, indicating the importance of tuning the hyper-parameters of LSTM models to obtain superior accuracy. This study demonstrates the potential of deep LSTM models for respiratory signal prediction and illustrates the impact of the major hyper-parameters in LSTM models.
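As a rough illustration of the exhaustive grid search described above, the sketch below enumerates every hyper-parameter combination over lagged input windows. All names, grid values, and the evaluation callback are hypothetical, not the study's actual settings; a real search would train an LSTM at each grid point and score it on the validation data:

```python
# Hypothetical sketch: exhaustive grid search over LSTM hyper-parameters.
# `evaluate` stands in for "train an LSTM with these settings and return
# its validation MAE"; everything here is illustrative.
from itertools import product

def make_windows(signal, n_lags, horizon):
    """Turn a 1-D respiratory trace into (lagged inputs, future target) pairs."""
    pairs = []
    for t in range(n_lags, len(signal) - horizon + 1):
        pairs.append((signal[t - n_lags:t], signal[t + horizon - 1]))
    return pairs

# Illustrative grid; the study also searched optimizer and epoch count.
grid = {
    "layers": [1, 2],
    "hidden_units": [32, 64],
    "learning_rate": [1e-3, 1e-4],
    "n_lags": [10, 20],
}

def grid_search(signal, horizon, evaluate):
    """Try every combination; keep the one with the lowest validation MAE."""
    best = None
    for combo in product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        mae = evaluate(make_windows(signal, params["n_lags"], horizon), params)
        if best is None or mae < best[1]:
            best = (params, mae)
    return best
```

With an RPM sampling rate of roughly 25 Hz, a 500 ms prediction window would correspond to a horizon of about a dozen samples, though the exact figure depends on the acquisition settings.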
Summary Multiphysics simulations are used to solve coupled physics problems found in a wide variety of applications in natural and engineering systems. Combining models of different physical processes into one computational tool is the essence of multiphysics simulation. A general and widely used coupling approach is to combine several ‘single‐physics’ numerical solvers, each already well developed and mature, into an iteration‐based computational package. This approach iterates over the constituent solvers at every time step to obtain globally converged solutions. During the simulation, each single‐physics component is solved repeatedly until the feedback has been adequately resolved. However, each component problem's solution only needs to be as precise as the feedback it receives from the other component. Thus, computational effort expended to exceed that precision is wasted. This issue is usually called over‐solving. This paper proposes and discusses several methods that circumvent over‐solving. The residual balance, relaxed relative tolerance, alternating nonlinear, and solution interruption methods are described, and their performance is compared with Picard iteration. A steady‐state problem with coupling along an interface and a transient problem with two fields coupled throughout the spatial domain are solved as examples. These problems demonstrate that the savings associated with eliminating over‐solving can reach at least 30% without any loss in accuracy. Copyright © 2017 John Wiley & Sons, Ltd.
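The over-solving idea can be made concrete with a toy sketch: two scalar "physics" coupled by Picard iteration, where each inner solve is driven only as tight as the current coupling residual, in the spirit of the residual balance method. The problem, tolerances, and the factor gamma are all illustrative, not taken from the paper:

```python
# Toy sketch of avoiding over-solving in a Picard-coupled system.
# Two scalar fixed-point problems stand in for single-physics solvers;
# the inner tolerance tracks the outer coupling residual, so early outer
# iterations do not waste effort on needlessly tight inner solves.
import math

def inner_solve(f, x0, tol, max_it=100):
    """Cheap inner fixed-point solver, stopped at the requested tolerance."""
    x = x0
    for _ in range(max_it):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def picard_residual_balance(tol=1e-8, gamma=0.1, max_outer=50):
    x, y = 0.0, 0.0
    residual = 1.0
    for _ in range(max_outer):
        # Residual-balance-style rule: inner tolerance follows the residual.
        inner_tol = max(gamma * residual, tol)
        x = inner_solve(lambda u: math.cos(y), x, inner_tol)  # "physics 1"
        y = inner_solve(lambda v: 0.5 * x, y, inner_tol)      # "physics 2"
        residual = max(abs(x - math.cos(y)), abs(y - 0.5 * x))
        if residual < tol:
            break
    return x, y, residual
```

In a real multiphysics package the inner solves would be expensive PDE solves, which is where coarsening their tolerances pays off.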
Matrices over division rings have been applied in many fields, such as quantum physics and computer graphics. However, since multiplication in a division ring is non-commutative, research on matrices over division rings has attracted considerable attention. This paper studies additive operators that preserve idempotents over division rings, improving existing results on linear operators preserving idempotents over division rings.
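In standard notation (a textbook-style formulation, not quoted from the paper), the operators in question are the following:

```latex
Let $D$ be a division ring and $M_n(D)$ the ring of $n \times n$ matrices over $D$.
A map $\phi : M_n(D) \to M_n(D)$ is \emph{additive} if
\[
  \phi(A + B) = \phi(A) + \phi(B) \quad \text{for all } A, B \in M_n(D),
\]
and it \emph{preserves idempotents} if
\[
  A^2 = A \implies \phi(A)^2 = \phi(A).
\]
```

Additivity is strictly weaker than linearity over a non-commutative $D$, which is what makes improving the linear-operator results non-trivial.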
When the chord length sampling method is applied to solve radiation transport problems in random media where inclusions are randomly distributed in a background region, inaccuracy occurs due to two major factors: the memory effect and the boundary effect. In this article, by applying chord length sampling to solve fixed-source and eigenvalue problems in 1-D binary stochastic media, we investigate how and why these two effects give rise to inaccuracy in the final solutions. The investigation is based on a series of radiation transport simulations for the calculation of reflection rate, flux distribution, and effective multiplication factor.
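A minimal sketch of the sampling step in a 1-D binary stochastic medium may help: material segment lengths are drawn from exponential distributions whose means are the materials' mean chord lengths. The slab width and mean chord lengths below are made-up values; the memory effect mentioned above arises because a particle re-entering a region sees freshly sampled chords rather than the layout it left behind:

```python
# Illustrative chord length sampling for a 1-D binary stochastic medium.
# Segment lengths are exponential with the material's mean chord length;
# the two materials alternate along the slab. Parameters are made up.
import random

def sample_realization(slab_width, mean_chords, rng):
    """Generate one material layout: a list of (material_id, segment_length)."""
    segments, x = [], 0.0
    mat = rng.randint(0, 1)  # random starting material
    while x < slab_width:
        length = rng.expovariate(1.0 / mean_chords[mat])
        segments.append((mat, min(length, slab_width - x)))  # clip at boundary
        x += length
        mat = 1 - mat  # alternate the two materials
    return segments

rng = random.Random(42)
layout = sample_realization(10.0, (0.5, 2.0), rng)
```

A transport code would then track particles segment by segment through `layout`, resampling a new layout for each history, which is exactly where the memory and boundary effects enter.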
Recently, hybridized monolayers consisting of hexagonal boron nitride (h-BN) phases inside a graphene layer have been synthesized and shown to be an effective way of opening a band gap in graphene monolayers [1]. In this letter, we report an ab initio density functional theory (DFT)-based study of the effect of h-BN domain size on the elastic properties of graphene/boron nitride hybrid monolayers (h-BNC). We found that both the in-plane stiffness and the longitudinal sound velocity of h-BNC decrease linearly with h-BN concentration.
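For context, the two quantities are linked by the standard relation for a 2D sheet (with $C_{2D}$ the in-plane stiffness and $\rho_{2D}$ the areal mass density), which is why a drop in stiffness shows up directly in the sound velocity:

```latex
v_L = \sqrt{\frac{C_{2D}}{\rho_{2D}}}
```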