Image classification models tend to make decisions based on peripheral attributes of data items that have strong correlation with a target variable (i.e., dataset bias). These biased models suffer from poor generalization when evaluated on unbiased datasets. Existing approaches for debiasing often identify and emphasize samples exhibiting no such correlation (i.e., bias-conflicting samples) without defining the bias type in advance. However, such bias-conflicting samples are significantly scarce in biased datasets, limiting the debiasing capability of these approaches. This paper first presents an empirical analysis revealing that training with "diverse" bias-conflicting samples beyond a given training set is crucial for debiasing as well as for generalization. Based on this observation, we propose a novel feature-level data augmentation technique that synthesizes diverse bias-conflicting samples. To this end, our method learns disentangled representations of (1) the intrinsic attributes (i.e., those inherently defining a certain class) and (2) the bias attributes (i.e., the peripheral attributes causing the bias) from a large number of bias-aligned samples, whose bias attributes have strong correlation with the target variable. Using the disentangled representations, we synthesize bias-conflicting samples that contain the diverse intrinsic attributes of bias-aligned samples by swapping their latent features. By utilizing these diversified bias-conflicting features during training, our approach achieves superior classification accuracy and debiasing performance over existing baselines on synthetic and real-world datasets.
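The core augmentation step above reduces to re-pairing latent features. Below is a minimal PyTorch-style sketch of that idea under assumed names (swap_augment and the encoder outputs z_i, z_b are illustrative stand-ins, not the authors' code):

```python
# Minimal sketch of feature-level augmentation by swapping disentangled
# latent features (hypothetical names; not the authors' released code).
import torch

def swap_augment(intrinsic, bias):
    """Pair each sample's intrinsic features with another sample's bias
    features to synthesize bias-conflicting features.
    intrinsic, bias: (batch, dim) tensors from two disentangled encoders."""
    perm = torch.randperm(intrinsic.size(0))          # random re-pairing
    return torch.cat([intrinsic, bias[perm]], dim=1)  # swapped (conflicting) pair

# Usage: z_i = E_i(x) and z_b = E_b(x) would come from the two encoders;
# the swapped features are fed to the classifier alongside the originals.
z_i, z_b = torch.randn(8, 16), torch.randn(8, 16)
z_conflict = swap_augment(z_i, z_b)   # (8, 32) synthesized features
```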
Purpose: Nyquist ghost artifacts in echo planar imaging (EPI) originate from phase mismatch between the even and odd echoes. However, conventional correction methods using reference scans often produce erroneous results, especially in high-field MRI, due to the nonlinear and time-varying local magnetic field changes. Recently, it was shown that the ghost correction problem can be reformulated as a k-space interpolation problem that can be solved using structured low-rank Hankel matrix approaches. Another recent work showed that data-driven Hankel matrix decomposition can be reformulated to exhibit a structure similar to a deep convolutional neural network. By synergistically combining these findings, we propose a k-space deep learning approach that immediately corrects the phase mismatch without a reference scan in both accelerated and non-accelerated EPI acquisitions. Theory and Methods: To take advantage of the even- and odd-phase directional redundancy, the k-space data are divided into two channels configured with even and odd phase encodings. The redundancies between coils are also exploited by stacking the multi-coil k-space data into additional input channels. Then, our k-space ghost correction network is trained to learn the interpolation kernel to estimate the missing virtual k-space data. For accelerated EPI data, the same neural network is trained to directly estimate the interpolation kernels for k-space data missing from both ghosting and subsampling. Results: Reconstruction results using 3T and 7T in vivo data showed that the proposed method yields better image quality than the existing methods, with much faster computing time. Conclusions: The proposed k-space deep learning for EPI ghost correction is highly robust and fast, and can be combined with acceleration, so it can serve as a promising correction tool for high-field MRI without changing the current acquisition protocol.
Nyquist ghost artifacts in EPI originate from phase mismatch between the even and odd echoes. However, conventional correction methods using reference scans often produce erroneous results, especially in high-field MRI, due to the nonlinear and time-varying local magnetic field changes. Recently, it was shown that the ghost correction problem can be reformulated as a k-space interpolation problem that can be solved using structured low-rank Hankel matrix approaches. Another recent work showed that data-driven Hankel matrix decomposition can be reformulated to exhibit a structure similar to a deep convolutional neural network. By synergistically combining these findings, we propose a k-space deep learning approach that immediately corrects the phase mismatch without a reference scan in both accelerated and non-accelerated EPI acquisitions. To take advantage of the even- and odd-phase directional redundancy, the k-space data are divided into two channels configured with even and odd phase encodings. The redundancies between coils are also exploited by stacking the multi-coil k-space data into additional input channels. Then, our k-space ghost correction network is trained to learn the interpolation kernel to estimate the missing virtual k-space data. For accelerated EPI data, the same neural network is trained to directly estimate the interpolation kernels for k-space data missing from both ghosting and subsampling. Reconstruction results using 3T and 7T in vivo data showed that the proposed method yields better image quality than the existing methods, with much faster computing time. The proposed k-space deep learning for EPI ghost correction is highly robust and fast, and can be combined with acceleration, so it can serve as a promising correction tool for high-field MRI without changing the current acquisition protocol.
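As a rough illustration of the input layout described in these two abstracts, the sketch below splits k-space into even/odd phase-encoding channels and stacks coils (and real/imaginary parts) as extra channels; the shapes and the helper make_input_channels are assumptions for illustration, not the published implementation:

```python
# Sketch of the network input layout: even/odd phase-encode split x coils x
# (real, imag) stacked as channels (assumed shapes; not the authors' code).
import numpy as np

def make_input_channels(kspace):
    """kspace: complex array (coils, phase_encodes, readout).
    Returns real input of shape (4 * coils, phase_encodes // 2, readout)."""
    even, odd = kspace[:, 0::2, :], kspace[:, 1::2, :]   # echo-parity split
    chans = np.concatenate([even, odd], axis=0)          # stack as channels
    return np.concatenate([chans.real, chans.imag], axis=0)

k = np.random.randn(4, 64, 64) + 1j * np.random.randn(4, 64, 64)
print(make_input_channels(k).shape)  # (16, 32, 64)
```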
A satellite operating in low Earth orbit consumes significant fuel to counter atmospheric drag, and this consumption affects mission lifetime and launch mass, so predicting the drag on a given satellite configuration is important. In this paper, the drag force and drag coefficient of a low-Earth-orbit satellite carrying a parabolic antenna are analyzed with the direct simulation Monte Carlo (DSMC) method as functions of mission altitude and angle of attack. To verify that DSMC adequately reproduces the behavior of the rarefied gas in low Earth orbit, drag coefficients computed for varying altitude and gas-surface interaction models are compared against flight data from the Starshine satellite. Finally, the analysis yields drag coefficients suitable for precise orbit-lifetime calculations for low-Earth-orbit satellites.
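For orientation, the drag force discussed above follows the textbook relation $F = \frac{1}{2}\rho v^2 C_d A$; the sketch below evaluates it for a circular orbit with placeholder density and $C_d$ values (illustrative numbers, not results from the paper):

```python
# Illustrative drag computation (textbook relation F = 0.5 * rho * v^2 * Cd * A;
# the density and Cd below are placeholder values, not the paper's results).
import math

MU = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0        # mean Earth radius, m

def drag_force(alt_km, cd, area_m2, rho):
    """Drag on a satellite in a circular orbit at the given altitude."""
    r = R_EARTH + alt_km * 1e3
    v = math.sqrt(MU / r)                  # circular orbital speed
    return 0.5 * rho * v**2 * cd * area_m2

# e.g. a 1 m^2 cross-section with Cd ~ 2.2 at 400 km (rho ~ 1e-12 kg/m^3)
print(f"{drag_force(400, 2.2, 1.0, 1e-12):.2e} N")
```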
In this note we show that the strong spherical maximal function in $\mathbb{R}^d$ is bounded on $L^p$ if $p>2(d+1)/(d-1)$ for $d\ge 3$.
Let $\Gamma$ be a finite graph and let $\Gamma^{\mathrm{e}}$ be its extension graph. We inductively define a sequence $\{\Gamma_i\}$ of finite induced subgraphs of $\Gamma^{\mathrm{e}}$ through successive applications of an operation called "doubling along a star". Then we show that every finite induced subgraph of $\Gamma^{\mathrm{e}}$ is isomorphic to an induced subgraph of some $\Gamma_i$.
$L^p$ boundedness of the circular maximal function $\mathcal{M}_{\mathbb{H}^1}$ on the Heisenberg group $\mathbb{H}^1$ has received considerable attention. While the problem remains open, $L^p$ boundedness of $\mathcal{M}_{\mathbb{H}^1}$ on Heisenberg radial functions was recently shown for $p>2$ by Beltran, Guo, Hickman, and Seeger [2]. In this paper we extend their result by considering the local maximal operator $M_{\mathbb{H}^1}$, which is defined by taking the supremum over $1<t<2$.
We investigate $L^p$ boundedness of the maximal function defined by the averaging operator $f\to \mathcal{A}_t^s f$ over the two-parameter family of tori $\mathbb{T}_t^{s}:=\{ ( (t+s\cos\theta)\cos\phi,\,(t+s\cos\theta)\sin\phi,\, s\sin\theta ): \theta, \phi \in [0,2\pi) \}$ with $c_0t>s>0$ for some $c_0\in (0,1)$. We prove that the associated (two-parameter) maximal function is bounded on $L^p$ if and only if $p>2$. We also obtain $L^p$--$L^q$ estimates for the local maximal operator on a sharp range of $p,q$. Furthermore, sharp smoothing estimates are proved, including sharp local smoothing estimates for the operators $f\to \mathcal A_t^s f$ and $f\to \mathcal A_t^{c_0t} f$. For this purpose, we make use of Bourgain--Demeter's decoupling inequality for the cone and Guth--Wang--Zhang's local smoothing estimates for the two-dimensional wave operator.
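For readability, the averaging operator and the two-parameter maximal function above can be written out as follows; this is a standard rendering of a normalized surface average, assumed rather than quoted from the paper:

```latex
% Average over the torus \mathbb{T}_t^s and the associated two-parameter
% maximal function (standard normalized-surface-average notation; assumed).
\[
  \mathcal{A}_t^s f(x) = \int_{\mathbb{T}_t^s} f(x - y)\, d\sigma_t^s(y),
  \qquad
  \mathcal{M} f(x) = \sup_{c_0 t > s > 0} \bigl| \mathcal{A}_t^s f(x) \bigr|,
\]
where $d\sigma_t^s$ is the normalized surface measure on $\mathbb{T}_t^s$.
```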
We study the elliptic maximal functions defined by averages over ellipses and rotated ellipses, which are multi-parametric variants of the circular maximal function. We prove that these maximal functions are bounded on $L^p$ for some $p\neq \infty$. For this purpose, we obtain sharp multi-parameter local smoothing estimates.
Weighted inequalities for the Hardy-Littlewood maximal function are completely understood, while they are not well understood for the spherical maximal function. For the power weight $|x|^α$, it is known that the spherical maximal operator on $\mathbb{R}^d$ is bounded on $L^p(|x|^α)$ only if $1-d\leq α<(d-1)(p-1)-d$, and under this condition it is known to be bounded except at $α=1-d$. In this paper, we prove boundedness for the critical order $α=1-d$.
In this note we show that the strong spherical maximal function in $\mathbb{R}^d$ is bounded on $L^p$ if $p>2(d+1)/(d-1)$ for $d\ge 3$.
An unprecedented amount of SARS-CoV-2 data has been accumulated compared with previous infectious diseases, enabling insights into its evolutionary process and more thorough analyses. This study investigates SARS-CoV-2 features as it evolved in order to evaluate its infectivity. We examined viral sequences and identified the polarity of amino acids in the receptor binding motif (RBM) region. We detected an increased frequency of amino acid substitutions to lysine (K) and arginine (R) in variants of concern (VOCs). As the virus evolved to Omicron, commonly occurring mutations became fixed components of the new viral sequence. Furthermore, at specific positions of VOCs, only one type of amino acid substitution and a notable absence of mutations at D467 were detected. We found that the binding affinity of SARS-CoV-2 lineages to the ACE2 receptor was impacted by amino acid substitutions. Based on our discoveries, we developed APESS, a model that evaluates infectivity from biochemical and mutational properties. In silico evaluation using real-world sequences and in vitro viral entry assays validated the accuracy of APESS and our discoveries. Using machine learning, we predicted mutations that had the potential to become more prominent. We created AIVE, a web-based system accessible at https://ai-ve.org, to provide infectivity measurements of mutations entered by users. Ultimately, we established a clear link between specific viral properties and increased infectivity, enhancing our understanding of SARS-CoV-2 and enabling more accurate predictions of the virus.
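The K/R substitution count mentioned above can be illustrated with a toy comparison; the sequences and the helper kr_substitutions below are made-up stand-ins, and APESS itself combines additional biochemical terms beyond this count:

```python
# Hedged sketch: tally substitutions to lysine (K) or arginine (R) between a
# reference RBM segment and a variant (toy sequences, not real RBM data).
def kr_substitutions(ref, var):
    """Count positions where the variant replaced a residue with K or R."""
    assert len(ref) == len(var)
    return sum(1 for a, b in zip(ref, var) if a != b and b in "KR")

ref_rbm = "NSNNLDSKVGGNYNYLYRLFRKSN"   # toy stand-in for the RBM region
var_rbm = "NSNKLDSKVGGNYNYRYRLFRKSN"   # two substitutions, both to K/R
print(kr_substitutions(ref_rbm, var_rbm))  # 2
```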
Carbapenem-resistant Klebsiella pneumoniae (CRKP) poses a significant threat to public health owing to its multidrug resistance and rapid dissemination. This study analyzed CRKP isolates collected from bloodstream infections in nine regions of South Korea using the Kor-GLASS surveillance system between 2017 and 2021. A total of 3,941 K. pneumoniae isolates were collected. Among them, 119 (3%) isolates were identified as CRKP. Most CRKP (79.7%) belonged to sequence type 307 (ST307), followed by ST11 (6.8%). All CRKP isolates exhibited multidrug resistance, with 78.8% carrying the IncX3 plasmid encoding the KPC-2 gene. Phylogenetic and genomic analyses revealed that ST307 isolates exhibited low single nucleotide polymorphism (SNP) differences. SNP differences among ST307 strains ranged from a minimum of 1 to a maximum of 140, indicating close genetic relatedness. All ST307 strains harbored the KL102 and O1/O2v2 loci, and genomic analysis revealed high prevalence of key resistance genes such as KPC (91.5%) and CTX-M-15 (83.9%), alongside mutations in the QRDR (ParC-80I, GyrA-83I) and ompK genes. Two major clusters were identified, with cluster 1 harboring yersiniabactin lineage 16 (ICEkp12) and cluster 2 showing higher virulence, including the yersiniabactin lineage 17 (ICEkp10) and colibactin-associated genes. These findings underscore the dominance of ST307 among CRKP isolates in Korea, which is driven by clonal expansion and the critical role of mobile genetic elements. Therefore, enhanced genomic surveillance and targeted infection control measures are urgently required to address the spread of CRKP in clinical settings.
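The SNP-difference comparison underlying the relatedness claim can be sketched as a pairwise Hamming distance over aligned sequences (toy data below; the study works on whole-genome alignments):

```python
# Minimal sketch of pairwise SNP distances used to assess genetic relatedness
# (toy aligned sequences; real analyses run on whole-genome alignments).
from itertools import combinations

def snp_distance(a, b):
    """Number of differing sites between two equal-length aligned sequences."""
    return sum(x != y for x, y in zip(a, b))

isolates = {"ST307_a": "ACGTACGT", "ST307_b": "ACGTACCT", "ST11_a": "TCGAACGA"}
for (n1, s1), (n2, s2) in combinations(isolates.items(), 2):
    print(n1, n2, snp_distance(s1, s2))
```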
We define a crossing point $c$ as a point such that $f(x)\le g(x)$ for $x\le c$ and $f(x)\ge g(x)$ for $x>c$, where $f$ and $g$ are probability density functions. We may encounter such a situation when we compare two histograms from two independent observations; for example, two contingency tables in which initially admitted students and actually enrolled students are classified according to their high school ranking may show such a situation. In this paper we consider maximum likelihood estimation of cell probabilities when a crossing point exists. We first assume a known crossing point and find an estimator. The estimation procedure for the case of an unknown crossing point is a straightforward extension. A real data set is analyzed for illustrative purposes.
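A minimal sketch of the known-crossing-point estimation, assuming the likelihood is a product of two multinomials and that the constraint is $p_i \le q_i$ up to the crossing point and $p_i \ge q_i$ after it (the paper's exact formulation and constraint direction may differ):

```python
# Constrained MLE sketch for two multinomials with a known crossing point c
# (illustrative assumption of the setup, not the paper's derivation).
import numpy as np
from scipy.optimize import minimize

def crossing_mle(n, m, c):
    """n, m: observed cell counts; c: index of the known crossing point."""
    n, m = np.asarray(n, float), np.asarray(m, float)
    k = len(n)

    def negloglik(x):
        p, q = x[:k], x[k:]
        return -(n @ np.log(p) + m @ np.log(q))

    cons = [{"type": "eq", "fun": lambda x: x[:k].sum() - 1},
            {"type": "eq", "fun": lambda x: x[k:].sum() - 1}]
    for i in range(k):  # ordering constraints flip at the crossing point
        sign = 1.0 if i <= c else -1.0
        cons.append({"type": "ineq",
                     "fun": lambda x, i=i, s=sign: s * (x[k + i] - x[i])})

    x0 = np.concatenate([(n + 1) / (n + 1).sum(), (m + 1) / (m + 1).sum()])
    res = minimize(negloglik, x0, constraints=cons,
                   bounds=[(1e-9, 1)] * (2 * k), method="SLSQP")
    return res.x[:k], res.x[k:]

p_hat, q_hat = crossing_mle([5, 10, 20, 30], [15, 20, 10, 5], c=1)
```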
is bounded on $L^p(\mathbb{R}^n)$ if $p > n/(n-1)$. He also showed that no such result can hold for $p \le n/(n-1)$ if $n \ge 2$. Thus, the 2-dimensional case is more complicated, since the circular maximal operator corresponding to $n = 2$ is not bounded on $L^2$. Some 10 years passed before Bourgain [2] finally showed that the circular maximal function is bounded on $L^p(\mathbb{R}^2)$ for every $p > 2$. In either case, though, it seems certain that the techniques in [2] or [9] will not give sharp estimates of this type. Recently, though, Schlag [11] obtained bounds which are of the best possible nature. Specifically, if we set
Let $\mathscr{M}(f)(x)$ denote the supremum of the averages of $f$ taken over all (surfaces of) spheres centered at $x$. Then $f \to \mathscr{M}(f)$ is bounded on $L^p(\mathbb{R}^n)$ whenever $p > n/(n-1)$ and $n \ge 3$.
where $d\sigma_r$ is the normalized surface measure on $rS^1$. It is easy to see that $M$ is not bounded on $L^2$ (see Example 1.1 below). A well-known result of Bourgain [1] asserts that $M$ is bounded on $L^p$ for $2 < p \le \infty$. We will consider the question of boundedness of $M$ and $M_\delta$ from $L^p$ to $L^q$. Unless stated to the contrary, we will be dealing only with functions defined on $\mathbb{R}^2$. Absolute constants will be denoted by $C$, and the notation $\approx$ will mean $=$ up to a constant.
The purpose of this paper is to improve certain known regularity results for the wave equation and to give a simple proof of Bourgain's circular maximal theorem [1]. We use easy wave front analysis along with techniques previously used in proofs of the Carleson-Sjölin theorem (see [3], [5], [7]) and in the proof of sharp regularity properties of Fourier integral operators [13]. The circular means operators are defined by
We prove a sharp square function estimate for the cone in $\mathbb{R}^3$ and consequently the local smoothing conjecture for the wave equation in $2+1$ dimensions. (Sogge originally made the conjecture for $\alpha$ in the range $\alpha > \frac12 - \frac2p$, and Wolff confirmed it for $p \ge 74$ and $\alpha$ in this range; later, Heo, Nazarov, and Seeger [15] conjectured further that for $p > 4$ it should hold for $\alpha \ge \frac12 - \frac2p$.)
We consider the problem of endpoint estimates for the circular maximal function defined by
\[ Mf(x)=\sup_{1<t<2}\left|\int_{S^1} f(x-ty)\,d\sigma(y)\right| \]
where $d\sigma$ is the normalized surface area measure on $S^1$. Let $\Delta$ be the closed triangle with vertices $(0,0)$, $(1/2,1/2)$, $(2/5,1/5)$. We prove that for $(1/p,1/q)\in \Delta\setminus\{(1/2,1/2),(2/5,1/5)\}$ there is a constant $C$ such that $\|Mf\|_{L^q(\mathbb{R}^2)}\le C\|f\|_{L^p(\mathbb{R}^2)}$. Furthermore, $\|Mf\|_{L^{5,\infty}(\mathbb{R}^2)}\le C\|f\|_{L^{5/2,1}(\mathbb{R}^2)}$.
We prove the $\ell^2$ Decoupling Conjecture for compact hypersurfaces with positive definite second fundamental form and also for the cone. This has a wide range of important consequences. One of them is the validity of the Discrete Restriction Conjecture, which implies the full range of expected $L^p_{x,t}$ Strichartz estimates for both the rational and (up to $N^\varepsilon$ losses) the irrational torus. Another one is an improvement in the range for the discrete restriction theorem for lattice points on the sphere. Various applications to additive combinatorics, incidence geometry, and number theory are also discussed. Our argument relies on the interplay between linear and multilinear restriction theory.
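In its commonly quoted form for the paraboloid, the $\ell^2$ decoupling inequality reads as follows (a standard formulation included for orientation; not the paper's exact normalization):

```latex
% A commonly quoted form of the l^2 decoupling inequality for the elliptic
% paraboloid (standard formulation; not the paper's exact normalization).
\[
  \|f\|_{L^p(\mathbb{R}^n)}
  \le C_\varepsilon\, \delta^{-\varepsilon}
  \Bigl( \sum_{\theta} \|f_\theta\|_{L^p(\mathbb{R}^n)}^2 \Bigr)^{1/2},
  \qquad 2 \le p \le \tfrac{2(n+1)}{n-1},
\]
where $\widehat{f}$ is supported in a $\delta$-neighborhood of the paraboloid
and $f_\theta$ are the pieces of $f$ on $\delta^{1/2}$-caps $\theta$.
```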
Bilinear restriction estimates have appeared in work of Bourgain, Klainerman, and Machedon. In this paper we develop the theory of these estimates (together with the analogues for Kakeya estimates). As a consequence we improve the $(L^p,L^p)$ spherical restriction theorem of Wolff from $p > 42/11$ to $p > 34/9$, and also obtain a sharp $(L^p,L^q)$ spherical restriction theorem for $q > 4 - \frac{5}{27}$.
Let $M$ be an $n\alpha \times n$ matrix of rank $r \ll n$, and assume that a uniformly random subset $E$ of its entries is observed. We describe an efficient algorithm that reconstructs $M$ from $|E| = O(rn)$ observed entries with relative root mean square error $\mathrm{RMSE} \le C(\alpha)\,(nr/|E|)^{1/2}$. Further, if $r = O(1)$ and $M$ is sufficiently unstructured, then it can be reconstructed exactly from $|E| = O(n \log n)$ entries. This settles (in the case of bounded rank) a question left open by Candès and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is $O(|E| r \log n)$, which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemerédi and Feige-Ofek on the spectrum of sparse random matrices.
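As a rough illustration of the spectral step underlying such guarantees, the sketch below projects the rescaled zero-filled observation matrix onto rank $r$ with NumPy. It is a minimal sketch only: the trimming and local refinement stages of the full algorithm are omitted, and all names are ours.

```python
import numpy as np

def spectral_completion(M_obs, mask, r):
    """Rank-r spectral estimate from partially observed entries.

    M_obs : observed entries with zeros elsewhere, shape (m, n)
    mask  : boolean array, True where an entry was observed
    r     : target rank
    """
    p = mask.mean()  # fraction of observed entries
    # Rescaling by 1/p makes the zero-filled matrix an unbiased
    # estimate of the full matrix; then keep the top-r SVD factors.
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# Toy usage: rank-2 matrix, 30% of entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 200))
mask = rng.random(M.shape) < 0.3
M_hat = spectral_completion(np.where(mask, M, 0.0), mask, r=2)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))  # relative error
```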
This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible, but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank $r$ exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of $nr \log(n)$ samples are needed to recover a random $n \times n$ matrix of rank $r$ by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form $nr\,\mathrm{polylog}(n)$.
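The convex program described above is short enough to state directly in code. A minimal sketch using CVXPY (assuming it is available; the variable names are ours) finds, among all matrices agreeing with the observed entries, the one of minimum nuclear norm:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
obs = [(i, j) for i in range(n) for j in range(n) if rng.random() < 0.5]

X = cp.Variable((n, n))
# Constrain X to agree with M on the observed entries.
constraints = [X[i, j] == M[i, j] for (i, j) in obs]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
print(np.linalg.norm(X.value - M) / np.linalg.norm(M))  # relative error
```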
We prove that the elliptic maximal function maps the Sobolev space $W_{4,\eta}(\mathbb{R}^2)$ into $L^4(\mathbb{R}^2)$ for all $\eta > 1/6$. The main ingredients of the proof are an analysis of the intersection properties of elliptic annuli and a combinatorial method of Kolasa and Wolff.
While the recent theory of compressed sensing provides an opportunity to overcome the Nyquist limit in recovering sparse signals, a solution approach usually takes the form of an inverse problem of an unknown signal, which is crucially dependent on specific signal representation. In this paper, we propose a drastically different two-step Fourier compressive sampling framework in a continuous domain that can be implemented via measurement domain interpolation, after which signal reconstruction can be done using classical analytic reconstruction methods. The main idea originates from the fundamental duality between the sparsity in the primary space and the low-rankness of a structured matrix in the spectral domain, showing that a low-rank interpolator in the spectral domain can enjoy all of the benefits of sparse recovery with performance guarantees. Most notably, the proposed low-rank interpolation approach can be regarded as a generalization of recent spectral compressed sensing to recover large classes of finite rate of innovations (FRI) signals at a near-optimal sampling rate. Moreover, for the case of cardinal representation, we can show that the proposed low-rank interpolation scheme will benefit from inherent regularization and an optimal incoherence parameter. Using a powerful dual certificate and the golfing scheme, we show that the new framework still achieves a near-optimal sampling rate for a general class of FRI signal recovery, while the sampling rate can be further reduced for a class of cardinal splines. Numerical results using various types of FRI signals confirm that the proposed low-rank interpolation approach offers significantly better phase transitions than conventional compressive sampling approaches.
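The low-rankness behind this duality is easy to check numerically: uniform Fourier samples of a stream of $k$ Diracs form a sum of $k$ complex exponentials, so any sufficiently large Hankel matrix built from them has rank exactly $k$. A toy NumPy/SciPy verification (our own construction, not the paper's code):

```python
import numpy as np
from scipy.linalg import hankel

k, N = 3, 64
rng = np.random.default_rng(2)
t = rng.random(k)                     # Dirac locations in [0, 1)
a = rng.standard_normal(k)            # amplitudes
m = np.arange(N)
# Fourier samples: a sum of k complex exponentials.
x = (a[None, :] * np.exp(-2j * np.pi * m[:, None] * t[None, :])).sum(axis=1)

H = hankel(x[: N // 2], x[N // 2 - 1 :])
print(np.linalg.matrix_rank(H))       # prints 3 = number of Diracs
```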
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices $\{\boldsymbol{X}^k,\boldsymbol{Y}^k\}$, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix $\boldsymbol{Y}^k$. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates $\{\boldsymbol{X}^k\}$ is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which $1,000\times1,000$ matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for $\ell_1$ minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
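A minimal NumPy sketch of the iteration described above: soft-threshold the singular values of $\boldsymbol{Y}^{k-1}$, then take a step on the observed entries. The threshold and step size below are illustrative choices, not the tuned values from the paper:

```python
import numpy as np

def svt(P_M, mask, tau, delta, n_iter=300):
    """Singular value thresholding (sketch).

    P_M  : observed matrix with zeros at unobserved entries
    mask : 1.0 where observed, 0.0 elsewhere
    Iterates X^k = shrink(Y^{k-1}, tau), soft-thresholding the
    singular values, then Y^k = Y^{k-1} + delta * P_Omega(M - X^k).
    """
    Y = np.zeros_like(P_M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
        Y += delta * mask * (P_M - X)
    return X

rng = np.random.default_rng(3)
M = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 100))
mask = (rng.random(M.shape) < 0.4).astype(float)
X = svt(M * mask, mask, tau=5 * 100, delta=1.2 / mask.mean())
print(np.linalg.norm(X - M) / np.linalg.norm(M))  # relative error
```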
We prove that for a finite type curve in $\mathbb{R}^3$ the maximal operator generated by dilations is bounded on $L^p$ for sufficiently large $p$. We also show the endpoint $L^p \to L^p_{1/p}$ regularity result for the averaging operators for large $p$. The proofs make use of a deep result of Thomas Wolff about decompositions of cone multipliers.
Recently, deep learning approaches with various network architectures have achieved significant performance improvement over existing iterative reconstruction methods in various imaging problems. However, it is still unclear why these deep learning architectures work for specific inverse problems. Moreover, in contrast to the usual evolution of signal processing theory around the classical theories, the link between deep learning and the classical signal processing approaches, such as wavelets, nonlocal processing, and compressed sensing, is not yet well understood. To address these issues, here we show that the long-sought missing link is the convolution framelets for representing a signal by convolving local and nonlocal bases. The convolution framelets were originally developed to generalize the theory of low-rank Hankel matrix approaches for inverse problems, and this paper further extends this idea so as to obtain a deep neural network using multilayer convolution framelets with perfect reconstruction (PR) under rectified linear unit (ReLU) nonlinearity. Our analysis also shows that the popular deep network components such as residual blocks, redundant filter channels, and concatenated ReLU (CReLU) do indeed help to achieve PR, while the pooling and unpooling layers should be augmented with high-pass branches to meet the PR condition. Moreover, by changing the number of filter channels and bias, we can control the shrinkage behaviors of the neural network. This discovery reveals the limitations of many existing deep learning architectures for inverse problems, and leads us to propose a novel theory for a deep convolutional framelet neural network. Using numerical experiments with various inverse problems, we demonstrate that our deep convolutional framelets network shows consistent improvement over existing deep architectures. This discovery suggests that the success of deep learning stems not from a magical black box, but rather from the power of a novel signal representation using a nonlocal basis combined with a data-driven local basis, which is indeed a natural extension of classical signal processing theory.
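The role of the high-pass branch in the PR condition can be seen already in one dimension: average pooling alone discards the within-pair differences, but keeping a Haar difference channel alongside it restores the input exactly. A toy NumPy check (our own illustration, not the paper's network):

```python
import numpy as np

x = np.random.default_rng(4).standard_normal(16)

low = (x[0::2] + x[1::2]) / 2     # pooling (low-pass channel)
high = (x[0::2] - x[1::2]) / 2    # high-pass branch

x_rec = np.empty_like(x)          # unpooling from both channels
x_rec[0::2] = low + high
x_rec[1::2] = low - high
assert np.allclose(x_rec, x)      # perfect reconstruction holds
```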
We prove sharp smoothing properties of the averaging operator defined by convolution with a measure on a smooth nondegenerate curve $\gamma$ in $\mathbb{R}^d$, $d \ge 3$. Despite the simple geometric structure of such curves, the sharp smoothing estimates have remained largely unknown except for those in low dimensions. Devising a novel inductive strategy, we obtain the optimal $L^p$ Sobolev regularity estimates, which settle the conjecture raised by Beltran–Guo–Hickman–Seeger [1]. In addition, we show the sharp local smoothing estimates on a range of $p$ for every $d \ge 3$. As a result, we establish, for the first time, nontrivial $L^p$ boundedness of the maximal average over dilations of $\gamma$ for $d \ge 4$.
Purpose To correct line‐to‐line delays and phase errors in echo‐planar imaging (EPI). Theory and Methods EPI‐trajectory auto‐corrected image reconstruction (EPI‐TrACR) is an iterative maximum‐likelihood technique that exploits data redundancy provided by multiple receive coils between nearby lines of k‐space to determine and correct line‐to‐line trajectory delays and phase errors that cause ghosting artifacts. EPI‐TrACR was efficiently implemented using a segmented FFT and was applied to in vivo brain data acquired at 7 T across acceleration (1×–4×) and multishot factors (1–4 shots), and in a time series. Results EPI‐TrACR reduced ghosting across all acceleration factors and multishot factors, compared to conventional calibrated reconstructions and the PAGE method. It also achieved consistently lower ghosting in the time series. Averaged over all cases, EPI‐TrACR reduced root‐mean‐square ghosted signal outside the brain by 27% compared to calibrated reconstruction, and by 40% compared to PAGE. Conclusion EPI‐TrACR automatically corrects line‐to‐line delays and phase errors in multishot, accelerated, and dynamic EPI. While the method benefits from additional calibration data for initialization, it was not a requirement for most reconstructions. Magn Reson Med 79:3114–3121, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
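The artifact targeted here is easy to reproduce in a toy model: multiplying every other k-space line by a constant phase splits a 1-D object into itself plus a copy shifted by half the field of view, the Nyquist ghost. A NumPy demonstration (ours, not part of EPI-TrACR):

```python
import numpy as np

N = 128
x = np.zeros(N)
x[40:60] = 1.0                     # 1-D "object" along phase encoding
k = np.fft.fft(x)

phi = 0.3                          # even/odd phase mismatch (radians)
k[1::2] *= np.exp(1j * phi)        # odd lines acquire the extra phase

x_ghost = np.fft.ifft(k)
# A replica appears N/2 pixels away, with amplitude sin(phi/2) ~ 0.15:
print(np.abs(x_ghost[(np.arange(40, 60) + N // 2) % N]).max())
```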
Structured low-rank matrix models have previously been introduced to enable calibrationless MR image reconstruction from sub-Nyquist data, and such ideas have recently been extended to enable navigator-free echo-planar imaging (EPI) ghost correction. This paper presents a novel theoretical analysis which shows that, because of uniform subsampling, the structured low-rank matrix optimization problems for EPI data will always have either undesirable or non-unique solutions in the absence of additional constraints. This theory leads us to recommend and investigate problem formulations for navigator-free EPI that incorporate side information from either image-domain or k-space domain parallel imaging methods. The importance of using nonconvex low-rank matrix regularization is also identified. We demonstrate using phantom and in vivo data that the proposed methods are able to eliminate ghost artifacts for several navigator-free EPI acquisition schemes, obtaining better performance in comparison with the state-of-the-art methods across a range of different scenarios. Results are shown for both single-channel acquisition and highly accelerated multi-channel acquisition.
We prove $d$-linear analogues of the classical restriction and Kakeya conjectures in $\mathbb{R}^d$. Our approach involves obtaining monotonicity formulae pertaining to a certain evolution of families of gaussians, closely related to heat flow. We conclude by giving some applications to the corresponding variable-coefficient problems and the so-called "joints" problem, as well as presenting some $n$-linear analogues for $n < d$.
A technique is introduced to relate differentiation and covering properties of a basis. In particular, we find that the basis associated with a sparse set of directions differentiates integrals of functions locally in $L^2$.
We study the boundedness problem for maximal operators $\mathcal{M}$ associated with averages along smooth hypersurfaces $S$ of finite type in 3-dimensional Euclidean space. For $p > 2$, we prove that if no affine tangent plane to $S$ passes through the origin and $S$ is analytic, then the associated maximal operator is bounded on $L^p(\mathbb{R}^3)$ if and only if $p > h(S)$, where $h(S)$ denotes the so-called height of the surface $S$ (defined in terms of certain Newton diagrams). For non-analytic $S$ we obtain the same statement with the exception of the exponent $p = h(S)$. Our notion of height $h(S)$ is closely related to A. N. Varchenko's notion of height $h(\phi)$ for functions $\phi$ such that $S$ can be locally represented as the graph of $\phi$ after a rotation of coordinates. Several consequences of this result are discussed. In particular we verify a conjecture by E. M. Stein and its generalization by A. Iosevich and E. Sawyer on the connection between the decay rate of the Fourier transform of the surface measure on $S$ and the $L^p$-boundedness of the associated maximal operator $\mathcal{M}$, and a conjecture by Iosevich and Sawyer which relates the $L^p$-boundedness of $\mathcal{M}$ to an integrability condition on $S$ for the distance to tangential hyperplanes, in dimension 3. In particular, we also give essentially sharp uniform estimates for the Fourier transform of the surface measure on $S$, thus extending a result by V. N. Karpushkin from the analytic to the smooth setting and implicitly verifying a conjecture by V. I. Arnold in our context. As an immediate application of this, we obtain an $L^p(\mathbb{R}^3) - L^2(S)$ Fourier restriction theorem for $S$.
Selective manipulation of micrometric objects in a standard microscopy environment is possible with miniaturized acoustical tweezers.
The sharp Wolff-type decoupling estimates of Bourgain–Demeter are extended to the variable coefficient setting. These results are applied to obtain new sharp local smoothing estimates for wave equations on compact Riemannian manifolds, away from the endpoint regularity exponent. More generally, local smoothing estimates are established for a natural class of Fourier integral operators; at this level of generality the results are sharp in odd dimensions, both in terms of the regularity exponent and the Lebesgue exponent.
GISAID: Global initiative on sharing all influenza data – from vision to reality.
Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.
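The AdaIN operation itself amounts to a few lines. A NumPy sketch over (C, H, W) feature maps (the surrounding encoder and decoder are omitted):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Normalize each content channel, then rescale and shift it to
    match the per-channel mean and standard deviation of the style
    features, as the AdaIN layer described above does."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

rng = np.random.default_rng(5)
out = adain(rng.standard_normal((64, 32, 32)),   # content features
            rng.standard_normal((64, 32, 32)))   # style features
```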
Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $\chi^2$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stably during the learning process. We evaluate LSGANs on LSUN and CIFAR-10 datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.
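The least squares objectives are simple to write down. A sketch using 0-1 labels (one possible coding scheme; the paper discusses several), where `d_real` and `d_fake` stand for discriminator outputs on real and generated samples:

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Discriminator regresses real samples to 1 and fakes to 0.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def g_loss(d_fake):
    # Generator pushes the discriminator's output on fakes toward 1.
    return 0.5 * np.mean((d_fake - 1.0) ** 2)
```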
We improve the best known exponent for the restriction conjecture in $\mathbb{R}^6$, improving on the recent results of Bourgain and Guth. The proof is applicable to any dimension $n$ satisfying $n \equiv 0 \bmod 3$.
Extending the methods developed in the author's recent paper and using some techniques from a paper by Sogge and Stein in conjunction with various facts about adapted coordinate systems in two variables, an $L^p$ boundedness theorem is proven for maximal operators over hypersurfaces in $\mathbb{R}^3$ when $p > 2$. When the best possible $p$ is greater than $2$, the theorem typically provides sharp estimates. This gives another approach to the subject of recent work of Ikromov, Kempe, and Müller (2010).
Let $\{\theta_j\}$ be a lacunary sequence going to zero. Let [Formula: see text]. Define [Formula: see text]. We prove [Formula: see text].
For smooth curves $\Gamma$ in $\mathbf{R}^n$ with certain curvature properties it is shown that the composition of the Fourier transform in $\mathbf{R}^n$ followed by restriction to $\Gamma$ defines a bounded operator from $L^p(\mathbf{R}^n)$ to $L^q(\Gamma)$ for certain $p, q$. The curvature hypotheses are the weakest under which this could hold, and $p$ is optimal for a range of $q$. In the proofs the problem is reduced to the estimation of certain multilinear operators generalizing fractional integrals, and they are treated by means of rearrangement inequalities and interpolation between simple endpoint estimates.
The annihilating filter-based low-rank Hankel matrix approach (ALOHA) is one of the state-of-the-art compressed sensing approaches that directly interpolates the missing k-space data using low-rank Hankel matrix completion. The success of ALOHA is due to the concise signal representation in the k-space domain thanks to the duality between structured low-rankness in the k-space domain and the image domain sparsity. Inspired by the recent mathematical discovery that links convolutional neural networks to Hankel matrix decomposition using data-driven framelet basis, here we propose a fully data-driven deep learning algorithm for k-space interpolation. Our network can also be easily applied to non-Cartesian k-space trajectories by simply adding an additional regridding layer. Extensive numerical experiments show that the proposed deep learning method consistently outperforms the existing image-domain deep learning approaches.
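For intuition, interpolation via low-rank Hankel structure can be sketched with a simple Cadzow-style alternating projection; this is a stand-in for, not a reimplementation of, the convex completion that ALOHA solves:

```python
import numpy as np
from scipy.linalg import hankel

def hankel_interpolate(x, known, rank, n_iter=200):
    """Fill missing 1-D k-space samples by alternating between a
    rank-`rank` Hankel approximation and data consistency."""
    N = len(x)
    L = N // 2
    y = np.where(known, x, 0.0)
    for _ in range(n_iter):
        H = hankel(y[:L], y[L - 1:])
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        y = np.zeros(N, dtype=H.dtype)          # average anti-diagonals
        cnt = np.zeros(N)
        for i in range(H.shape[0]):
            for j in range(H.shape[1]):
                y[i + j] += H[i, j]
                cnt[i + j] += 1
        y /= cnt
        y[known] = x[known]                     # enforce known samples
    return y

# Toy usage: k-space of 3 Diracs with 40% of samples missing.
rng = np.random.default_rng(6)
t = rng.random(3)
m = np.arange(64)
x = np.exp(-2j * np.pi * m[:, None] * t[None, :]).sum(axis=1)
known = rng.random(64) < 0.6
x_hat = hankel_interpolate(x, known, rank=3)
print(np.abs(x_hat - x).max())
```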
We give a short proof of a slightly weaker version of the multilinear Kakeya inequality proven by Bennett, Carbery and Tao.