Access 250 million scholarly works, Sugaku's trained academic models and other top AI models.
Sugaku helps you make your research and mathematical problem solving more productive.

Trusted By Researchers at Leading Institutions

Caltech, University of Toronto, MIT, Stanford, USC, UC Berkeley, and many more...
Your Research Projects

Track your active projects with AI chat that retains your project context and focuses on the item you're working on.

Your Reading List

Stay up to date on research and bring in AI to collaborate, answer your questions, or pull references into your own projects.

Suggested Papers

The more you use Sugaku, the more personalized your feed becomes.

Abstract Understanding the late-time acceleration of the Universe is one of the major challenges in cosmology today. In this paper, we present a new scalar field model corresponding to a generalised axion-like potential. In fact, this model can be framed as a quintessence model based on physically motivated considerations. This potential is capable of alleviating the coincidence problem through a tracking regime. We will also prove that this potential allows for a late-time acceleration period induced by an effective cosmological constant, which is reached without fine-tuning the initial conditions of the scalar field. In our model, the generalised axion field fuels the late-time acceleration of the Universe rather than fuelling an early dark energy era. Additionally, we will show how the late-time transition to dark energy dominance could be favoured in this model, since the density parameter of the scalar field will rapidly grow in the late phase of the tracking regime.
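The generalised potential itself is not spelled out in the abstract; as background only, here is a sketch of the standard flat-FLRW quintessence equations this kind of model lives in, with the canonical (un-generalised) axion-like potential noted for reference. The notation is assumed, not taken from the paper.

```latex
% Background sketch (assumed notation): standard quintessence dynamics in a
% flat FLRW universe; the paper's generalised axion-like V(\phi) is not shown.
\[
\ddot{\phi} + 3H\dot{\phi} + \frac{dV}{d\phi} = 0, \qquad
H^{2} = \frac{1}{3M_{\mathrm{Pl}}^{2}}\left(\rho_{m} + \tfrac{1}{2}\dot{\phi}^{2} + V(\phi)\right), \qquad
w_{\phi} = \frac{\tfrac{1}{2}\dot{\phi}^{2} - V(\phi)}{\tfrac{1}{2}\dot{\phi}^{2} + V(\phi)}
\]
% Canonical axion-like special case: V(\phi) = \Lambda^{4}\left[1 + \cos(\phi/f)\right].
```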
This systematic review investigates recent empirical research on mathematical literacy, focusing on real-life problem-solving, instructional methods, influencing factors, and assessment practices. Drawing from 37 peer-reviewed studies published between 2015 and 2024, the review synthesizes findings related to frequently studied constructs, targeted mathematical domains, educational levels, and the effectiveness of various pedagogical interventions. The results indicate that mathematical literacy, achievement, and problem-solving are the most commonly examined outcomes, with algebra and geometry being the most frequently addressed content areas. Instructional approaches such as realistic mathematics education, problem-based learning, and STEM-integrated models consistently show positive impacts on students' mathematical literacy. However, students continue to struggle with context-based problem formulation and interpretation. Cognitive factors such as executive function and self-efficacy, along with contextual variables like socio-economic status and language proficiency, significantly influence student outcomes. The review also highlights gaps in assessment practices and the need for improved teacher training. Implications for policy, practice, and future research are discussed to support the development of mathematical literacy as a key 21st century competency.
 ABSTRACT We present a new framing of the seismic location problem using principles drawn from differential geometry. Our interpretation relies upon the common assumption that travel times observed across a network are continuous, differentiable functions of source location. In consequence, travel-time functions constitute a differentiable map between the source region and a Riemannian manifold. The manifold is said to be the image of the source region embedded in a generally high-dimension travel-time vector space. A cluster of events in the source region has an image of discrete points on the manifold, that, except in the simplest cases, cannot be viewed directly. However, it is possible to project the image of a cluster into a tangent space of the manifold for direct visualization. The projection operator can be computed directly from the data without a velocity model, but produces a distorted rendering of the cluster geometry. With a model we can predict the distortions and correct them to estimate cluster geometry. We develop these points with the simplest possible example, one for which direct visualization of the manifold is possible, using the example as an introduction to the relevant concepts from differential geometry in a familiar setting. The tangent space, a local linearization of the manifold, plays a key role. We develop a metric to estimate the limits of linearization, that is, to determine when the curvature of the manifold invalidates the linear assumption. We also examine the interplay of model error, inadequate network geometry, and pick error. We then generalize our results from the simple case to the general case of 3D source regions observed by general networks. Although we do suggest a new “project and correct” method for location, we do not develop it into a practical algorithm. Our intention rather is to highlight new analytical methods grounded in differential geometry.
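As a rough sketch of projecting a cluster's image into a tangent space directly from the data (an illustration on synthetic stand-in data, not the authors' "project and correct" procedure), one can take the leading right singular vectors of the centered travel-time matrix as a best-fit tangent-plane basis:

```python
import numpy as np

# Synthetic stand-in: travel-time vectors for n_events sources at n_stations.
rng = np.random.default_rng(0)
n_events, n_stations = 50, 12
T = rng.normal(loc=30.0, scale=0.5, size=(n_events, n_stations))

# Center on the cluster mean, then use the two leading right singular vectors
# as an orthonormal basis approximating the local tangent plane of the manifold.
T0 = T - T.mean(axis=0)
_, _, Vt = np.linalg.svd(T0, full_matrices=False)
tangent_basis = Vt[:2]            # 2-D basis vectors in travel-time space
coords = T0 @ tangent_basis.T     # projected (distorted) image of the cluster

print(coords.shape)               # (50, 2): one tangent-plane point per event
```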
Scholars trained in the use of factorial ANOVAs have increasingly begun using linear modelling techniques. When models contain interactions between continuous variables (or powers of them), it has long been argued that it is necessary to mean center prior to conducting the analysis. A review of the recommendations offered in statistical textbooks shows considerable disagreement, with some authors maintaining that centering is necessary, and others arguing that it is more trouble than it is worth. We also find errors in people's beliefs about how to interpret first-order regression coefficients in moderated regression. These coefficients do not index main effects, whether data have been centered or not, but mischaracterizing them is probably more likely after centering. In this study we review the recommendations, and then provide two demonstrations using ordinary least squares (OLS) regression models with continuous predictors. We show that mean centering has no effect on the numeric estimate, the confidence intervals, or the t- or p-values for main effects, interactions, or quadratic terms, provided one knows how to properly assess them. We also highlight some shortcomings of the standardized regression coefficient (β), and note some advantages of the semipartial correlation coefficient (sr). We demonstrate that some aspects of conventional wisdom were probably never correct; other concerns have been removed by advances in computer precision. In OLS models with continuous predictors, mean centering might or might not aid interpretation, but it is not necessary. We close with practical recommendations.
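A minimal numerical check of the centering claim, using simulated data and plain least squares (my example, not the authors' demonstrations): the interaction coefficient is unchanged by centering, and the centered first-order coefficient equals the uncentered model's conditional effect evaluated at the mean of the other predictor.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1 + 0.5*x1 + 0.8*x2 + 0.3*x1*x2 + rng.normal(size=n)

def ols(X, y):
    """OLS coefficients for a design matrix whose first column is the intercept."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Fit y ~ x1 + x2 + x1:x2 with raw and with mean-centered predictors.
X_raw = np.column_stack([np.ones(n), x1, x2, x1*x2])
c1, c2 = x1 - x1.mean(), x2 - x2.mean()
X_cen = np.column_stack([np.ones(n), c1, c2, c1*c2])
b_raw, b_cen = ols(X_raw, y), ols(X_cen, y)

# The interaction coefficient is identical under either parameterization.
print(b_raw[3], b_cen[3])
# The centered first-order coefficient for x1 is the uncentered model's
# conditional effect of x1 at the mean of x2 -- not a "main effect".
print(b_cen[1], b_raw[1] + b_raw[3]*x2.mean())
```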
Gary A. Smith | Chapman and Hall/CRC eBooks
Abstract Airborne infectious diseases are a significant threat to human beings. Nowadays, one of the deadliest airborne diseases, coronavirus (COVID-19), is resulting in a massive health crisis due to its rapid transmission. The World Health Organization has set several guidelines for protection against the spread of airborne diseases. According to the World Health Organization, the most effective preventive measure against airborne diseases is wearing masks in public places and crowded areas. It is challenging to monitor people manually in these areas. In this study, we collect data from public and local sources and present a deep learning-based occlusion-aware face mask detection model designed to identify both proper and improper mask usage, even under partial facial occlusions. A dataset of 4,820 images, including occlusions from hands, objects, and mask misuse, was used to train and evaluate three convolutional neural network models: InceptionV3, MobileNetV2, and DenseNet121. Among them, DenseNet121 achieved the highest accuracy of 96.3% on test data. Our study therefore demonstrates occlusion-aware face mask classification using deep learning.
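The paper's training pipeline and dataset are not reproduced here; as a hedged sketch of the general transfer-learning recipe such a model typically follows (TensorFlow/Keras assumed; the class count, image size, and dataset names are placeholders):

```python
import tensorflow as tf

# Assumed 3-class setup: mask worn properly, worn improperly, no mask.
NUM_CLASSES, IMG_SIZE = 3, (224, 224)

base = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze ImageNet features for an initial training phase

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# train_ds / val_ds would be tf.data pipelines built from the labeled images:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```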
Objectives: This study aims to demonstrate the application of multiple comparison tests in situations involving imprecise, incomplete, and uncertain observations. Such data arise in many practical situations. Methods: Five post hoc multiple comparison tests, viz. Fisher's LSD, Tukey's HSD, Bonferroni ($BT_N$), Scheffe, and Hochberg ($HB_N$), are utilized within a one-way design framework, with a significance level of 5%. Simulation studies are used for the empirical level and power calculations. Findings: The study shows that Tukey's HSD ($TK_N$) has the highest power in the determinate part, though not in the indeterminate part, compared to other tests. For example, for sample sizes (5, 5, 5, 5) with location parameters (0, .5, 1, 0), the power interval is [.3953, .3584] where $I_N \in [0, .05]$. Overall, $BT_N$ and $HB_N$ show prominent results and are preferable compared to other tests. In the application, NANOVA provided a P-value of [<.001, <.001], i.e., the tests are statistically significant. The P-values of the tests for pairs [1, 1] and [2, 2] are accepted (greater than .05), meaning the average daily ICU occupancy of corona patients aged over 55 years and those aged between 35 and 55 years are not equal. Similarly, other tests also show the same. Novelty: Neutrosophic multiple comparisons are still developing, and computer programs are still unavailable to support them. We have applied a simulation study to check the performance of the tests. Keywords: Multiple comparisons; NANOVA; Neutrosophic Statistics; Indeterminacy; Covid-19
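Neutrosophic versions of these tests are not available in standard software (as the abstract notes); a rough workaround sketch is to run a classical post hoc test separately at the lower and upper endpoints of interval-valued data, here with statsmodels' Tukey HSD on synthetic groups of size (5, 5, 5, 5):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# Interval-valued (neutrosophic) observations: a lower and an upper value per
# measurement, four groups of size 5 with location shifts (0, .5, 1, 0).
groups = np.repeat(["g1", "g2", "g3", "g4"], 5)
lower = np.concatenate([rng.normal(m, 1.0, 5) for m in (0.0, 0.5, 1.0, 0.0)])
upper = lower + rng.uniform(0.0, 0.05, size=lower.size)   # small indeterminacy

# Classical Tukey HSD at each endpoint yields an interval of conclusions.
for label, values in (("lower endpoint", lower), ("upper endpoint", upper)):
    print(label)
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```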
Background and Purpose: P-values and “statistical significance” (e.g., set α = 0.05, test if P < α) play a central role in modern medical research. However, an overreliance on significance has led to misinterpretation, questionable research practices, and oversimplified conclusions. Reformers have made mutually exclusive calls to abandon statistical significance, to redefine statistical significance (α = 0.005 instead of α = 0.05), or to develop decision rules based on the study context (balancing Type I Errors, α, against Type II Errors, β). Summary of Key Points: In light of these debates, we have revised the statistical reporting guidelines at JNPT. Across different methods, P-values should always be viewed as one piece of a larger statistical framework. 1. P-values: which indicate how surprising the data are assuming the null hypothesis is true. 2. Confidence Intervals: which show the range of values compatible with the data (i.e., uncertainty in our estimates). Careful consideration is required of all values within the interval, especially null values. 3. The uncertainty of the results with respect to the Study Context, as the same statistical results may hold different implications depending on the research phase or objectives. 4. The Cost of Different Errors: should be explicitly discussed, because false positives/negatives accrue different risks depending on the study context/goals. 5. The Probability of the Hypothesis: which is distinct from the probability of the data under the null hypothesis (the P-value); done formally, this shift requires Bayesian approaches. Recommendations: By integrating these principles, authors can provide nuanced, contextually relevant conclusions that go beyond binary significance thresholds.
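A small illustration (synthetic data, computed by hand with SciPy) of reporting a confidence interval alongside the p-value rather than a bare significance verdict:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
treatment = rng.normal(1.2, 2.0, 40)   # synthetic outcome scores
control   = rng.normal(0.5, 2.0, 40)

t, p = stats.ttest_ind(treatment, control)

# 95% CI for the mean difference (pooled-variance form, matching the t-test).
diff = treatment.mean() - control.mean()
n1, n2 = len(treatment), len(control)
sp2 = ((n1 - 1)*treatment.var(ddof=1) + (n2 - 1)*control.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1/n1 + 1/n2))
tcrit = stats.t.ppf(0.975, df=n1 + n2 - 2)
print(f"p = {p:.3f}, diff = {diff:.2f}, "
      f"95% CI = ({diff - tcrit*se:.2f}, {diff + tcrit*se:.2f})")
```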
Human immunodeficiency virus (HIV) and hepatitis B virus (HBV) co-infection is common due to their shared transmission routes. Understanding their interaction within host cells is key to improving treatment strategies. Mathematical models are crucial tools for analyzing within-host viral dynamics and informing therapeutic interventions. This study presents a mathematical framework designed to investigate the interactions and progression of HIV-HBV co-infection within a host. The model captures the distinct biological characteristics of the two viruses: HBV primarily infects liver cells (hepatocytes), while HIV targets CD4+ T cells and can also infect hepatocytes. A system of seven non-linear delay differential equations (DDEs) is formulated to represent the dynamic interactions among uninfected and virus-infected hepatocytes, uninfected and HIV-infected CD4+ T cells, as well as circulating HIV and HBV particles. The model incorporates two biologically significant time delays: the first represents the latency between the initial infection and the onset of productive infection in host cells, while the second accounts for the maturation duration of newly produced virions before they become infectious. The model's mathematical consistency is verified by showing that its solutions remain bounded and non-negative throughout the system's dynamics. Equilibrium points and their associated threshold parameters are identified, with conditions for existence and stability rigorously derived. Global stability of the equilibria is established through the application of carefully designed Lyapunov functionals in conjunction with the Lyapunov-LaSalle asymptotic stability theorem, ensuring a rigorous and comprehensive analysis of the system's long-term behavior. The theoretical findings are corroborated by numerical simulations. We conducted a sensitivity analysis of the basic reproduction numbers, $R_0$ for HIV and $R_1$ for HBV. The effects of antiviral treatment and time delays on the HIV-HBV co-dynamics are discussed. Minimum efficacy thresholds for anti-HIV and anti-HBV therapies are determined, and when drug effectiveness surpasses these levels, the model predicts the full elimination of both viruses from the host. Additionally, the length of the time delay interval plays a role similar to that of antiviral treatment, suggesting a potential strategy for developing drug therapies aimed at extending the time delay period. The results of this study highlight the importance of incorporating time delays in models of dual viral infection and support the development of treatment strategies that enhance therapeutic outcomes by extending these delays.
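The seven-equation system itself is not reproduced in the abstract; as an illustration only, a standard single-virus delayed target-cell model (assumed notation) shows how the two kinds of delay described above typically enter such equations:

```latex
% Illustration only (assumed notation), not the paper's seven-equation system:
% a standard delayed target-cell model for a single virus.
\[
\begin{aligned}
\dot{T}(t) &= s - d\,T(t) - \beta\,T(t)V(t),\\
\dot{I}(t) &= \beta\,e^{-m\tau_{1}}\,T(t-\tau_{1})\,V(t-\tau_{1}) - \delta\,I(t),\\
\dot{V}(t) &= p\,e^{-n\tau_{2}}\,I(t-\tau_{2}) - c\,V(t),
\end{aligned}
\]
% \tau_1: latency before a newly infected cell becomes productive;
% \tau_2: maturation delay before newly produced virions become infectious.
```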
 In this paper, we present a rigorous analysis for root-exponential convergence of Hermite approximations, including projection and interpolation methods, for functions that are analytic in an infinite strip containing the real axis and satisfy certain restrictions on the asymptotic behavior at infinity within this strip. The key ingredients of our analysis are some new and remarkable contour integral representations for the Hermite coefficients and the remainder of Hermite spectral interpolations with which sharp error estimates for Hermite approximations in the weighted and maximum norms are established. Further extensions to Gauss-Hermite quadrature and the scaling factor are also discussed. Particularly, we prove the root-exponential convergence of Gauss–Hermite quadrature under explicit conditions on the integrands. Numerical experiments confirm our theoretical results.
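A quick numerical illustration of Gauss–Hermite quadrature converging rapidly for an analytic integrand (my example, not one from the paper), using NumPy's nodes and weights for the weight $e^{-x^2}$:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Exact value of the integral of e^{-x^2} cos(x) over the real line.
exact = np.sqrt(np.pi) * np.exp(-0.25)

# Gauss-Hermite rule: sum_i w_i f(x_i) approximates the integral of e^{-x^2} f(x).
for n in (4, 8, 16, 32):
    x, w = hermgauss(n)
    approx = np.sum(w * np.cos(x))
    print(f"n = {n:2d}   error = {abs(approx - exact):.2e}")
```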
Abstract A new statistical distribution, termed the Size-Biased Poisson Xgamma (SBPXG) distribution, is introduced in this study to enhance modeling and interpretation of data arising from size-biased sampling schemes. In such settings, the probability of selecting a unit is proportional to its size, resulting in the overrepresentation of larger observations. In this work, we consider the specific case where the size of a unit is defined to be the observed value of the response variable, thereby focusing on size-biased distributions where selection is directly linked to outcome magnitude. We derive some structural features of the model, and a thorough reliability analysis is also carried out. For parameter estimation, maximum likelihood estimation (MLE) and the method of moments (MoM) are used. As the MLE is not available in closed form, the likelihood function is maximized numerically, and a simulation analysis based on MLE further validates the robustness of the model. Additionally, we extend the usefulness of the SBPXG model in predictive analytics by developing an associated regression model for applications with relevant covariate data. Furthermore, we present an INAR(1) process under the SBPXG model to handle count data scenarios. We investigate its special characteristics, including parameter estimation via the conditional maximum likelihood (CML) method. By examining three different datasets, we show that the SBPXG model consistently performs better in terms of goodness-of-fit than other models. Additionally, Vuong's likelihood ratio test is applied to formally compare non-nested models, and the results confirm that the SBPXG regression model offers a significantly better fit than conventional alternatives. The findings demonstrate the model's flexibility and strong potential for analyzing size-biased and overdispersed count data, particularly in healthcare and environmental domains.
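The SBPXG probability mass function is not given in the abstract, so as a hedged stand-in this sketch runs the same "no closed form, maximize the likelihood numerically" workflow on the simpler size-biased Poisson, for which a size-biased Poisson(λ) variable equals 1 + Poisson(λ) in distribution and the MLE has the closed form mean(y) − 1 to check against:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(4)
lam_true = 2.5
# Size-biased Poisson(lam) is distributed as 1 + Poisson(lam).
y = 1 + rng.poisson(lam_true, size=1000)

def neg_loglik(lam):
    # log P(Y = y) = -lam + (y - 1) log(lam) - log((y - 1)!)
    return -np.sum(-lam + (y - 1) * np.log(lam) - gammaln(y))

res = minimize_scalar(neg_loglik, bounds=(1e-6, 50), method="bounded")
print(res.x, y.mean() - 1)   # numerical MLE vs. the closed-form check
```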
This paper considers a nonlinear spherical pendulum whose suspension point performs high-frequency spatial vibrations. The dynamics of this pendulum can be described by averaging its Hamiltonian over the phases of the vibrations. The averaged Hamiltonian is studied under the assumption that the vibrations are rotationally symmetric. Under these conditions, a bifurcation diagram for the phase portraits of the averaged system is presented. Numerical simulations of different examples of vibrations are performed. Because this is a case of proper degeneration in KAM theory, the dynamical characteristics of the averaged system carry over to the exact system.
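For orientation, the classical planar special case of this kind of averaging (vertical pivot vibration only, the Kapitza pendulum, not the rotationally symmetric spatial setting of the paper) gives the familiar effective potential:

```latex
% Planar special case (pivot vibrating vertically as a\cos(\nu t), \nu \gg \sqrt{g/l}):
% averaging over the fast phase yields
\[
U_{\mathrm{eff}}(\theta) = -\,m g l \cos\theta + \frac{m a^{2}\nu^{2}}{4}\,\sin^{2}\theta ,
\]
% so the inverted equilibrium \theta = \pi becomes stable once a^{2}\nu^{2} > 2 g l.
```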
 Abstract The gambler’s ruin problem for correlated random walks (CRWs), both with and without delays, is addressed using the optional stopping theorem for martingales. We derive closed-form expressions for the ruin probabilities and the expected game duration for CRWs with increments $\{1,-1\}$ and for symmetric CRWs with increments $\{1,0,-1\}$ (CRWs with delays). Additionally, a martingale technique is developed for general CRWs with delays. The gambler’s ruin probability for a game involving bets on two arbitrary patterns is also examined.
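A Monte Carlo sketch (simulation only, not the paper's closed-form martingale derivation) that estimates the ruin probability of a correlated ±1 walk in which each step repeats the previous direction with probability p:

```python
import numpy as np

def ruin_probability(start, target, p_repeat=0.6, n_sims=20000, seed=5):
    """Estimate P(hit 0 before `target`) for a correlated +/-1 random walk whose
    steps repeat the previous direction with probability `p_repeat`."""
    rng = np.random.default_rng(seed)
    ruins = 0
    for _ in range(n_sims):
        x = start
        step = rng.choice((-1, 1))        # first step is unbiased
        while 0 < x < target:
            x += step
            if rng.random() > p_repeat:   # otherwise switch direction
                step = -step
        ruins += (x == 0)
    return ruins / n_sims

print(ruin_probability(start=3, target=10))
```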
Derandomization techniques are often used within advanced randomized algorithms. In particular, pseudorandom objects, such as hash families and expander graphs, are key components of such algorithms, but their verification presents a challenge. This work shows how such algorithms can be expressed and verified in Isabelle and presents a pseudorandom objects library that abstracts away the deep algebraic/analytic results involved. Moreover, it presents examples that show how the library eases and enables the verification of advanced randomized algorithms. The value of this framework is highlighted by its recent use to verify the space-optimal distinct elements algorithm of Blasiok (2018), which relies on the combination of many derandomization techniques to achieve its optimality.
For positive Hankel matrices, an interval containing all eigenvalues is obtained. Under a stronger condition, we also construct two sharper intervals for eigenvalue localization. The total positivity of positive Hankel matrices is analyzed, as is the relationship between totally positive (TP) Hankel matrices and certain sign-regular (SR) Toeplitz matrices.
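A small numerical illustration (my example, not the paper's localization interval): the Hilbert matrix is a positive Hankel matrix and a classical totally positive example, and its eigenvalues can be computed directly for comparison with any proposed interval.

```python
import numpy as np
from scipy.linalg import hankel

# The n x n Hilbert matrix H[i, j] = 1/(i + j + 1) (0-indexed) is a positive
# Hankel matrix (constant anti-diagonals) and a classical totally positive matrix.
n = 5
first_col = 1.0 / np.arange(1, n + 1)       # 1, 1/2, ..., 1/5
last_row  = 1.0 / np.arange(n, 2 * n)       # 1/5, 1/6, ..., 1/9
H = hankel(first_col, last_row)

eigs = np.linalg.eigvalsh(H)                # symmetric, so eigvalsh applies
print(eigs)                                 # all positive, widely spread in magnitude
```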
The arithmetic average of the first n primes, $\bar{p}_n = \frac{1}{n}\sum_{i=1}^{n} p_i$, exhibits very many interesting and subtle properties. Since the transformation from $p_n \to \bar{p}_n$ is extremely easy to invert, $p_n = n\bar{p}_n - (n-1)\bar{p}_{n-1}$, it is clear that these two sequences $p_n \longleftrightarrow \bar{p}_n$ must ultimately carry exactly the same information. But the averaged sequence $\bar{p}_n$, while very closely correlated with the primes ($\bar{p}_n \sim \tfrac{1}{2} p_n$), is much "smoother" and much better behaved. Using extensions of various standard results, I shall demonstrate that the prime-averaged sequence $\bar{p}_n$ satisfies prime-averaged analogues of the Cramer, Andrica, Legendre, Oppermann, Brocard, Fourges, Firoozbakht, Nicholson, and Farhadian conjectures. (So these prime-averaged analogues are not conjectures; they are theorems.) The crucial key to enabling this pleasant behaviour is the "smoothing" process inherent in averaging. While the asymptotic behaviour of the two sequences is very closely correlated, the local fluctuations are quite different.
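A quick numerical check of the inversion identity and of the rough $\bar{p}_n \sim \tfrac{1}{2} p_n$ correlation, using primes below $10^5$ (SymPy assumed for prime generation):

```python
import numpy as np
from sympy import primerange

primes = np.array(list(primerange(2, 10**5)), dtype=float)
n = np.arange(1, len(primes) + 1)
pbar = np.cumsum(primes) / n                   # running average of the primes

# Inversion: p_n = n*pbar_n - (n-1)*pbar_{n-1}
recovered = n[1:] * pbar[1:] - (n[1:] - 1) * pbar[:-1]
print(np.allclose(recovered, primes[1:]))      # True

# The averaged sequence tracks roughly half the n-th prime.
print(pbar[-1] / primes[-1])                   # about 0.47 here, tending slowly to 1/2
```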
Drug resistance in cancer is shaped not only by evolutionary processes but also by eco-evolutionary interactions between tumor subpopulations. These interactions can support the persistence of resistant cells even in the absence of treatment, undermining standard aggressive therapies and motivating drug holiday-based approaches that leverage ecological dynamics. A key challenge in implementing such strategies is efficiently identifying interactions between drug-sensitive and drug-resistant subpopulations. Evolutionary game theory provides a framework for characterizing these interactions. We investigate whether spatial patterns in single time-point images of cell populations can reveal the underlying game-theoretic interactions between sensitive and resistant cells. To achieve this goal, we develop an agent-based model in which cell reproduction is governed by local game-theoretic interactions. We compute a suite of spatial statistics on single time-point images from the agent-based model under a range of games being played between cells. We quantify the informativeness of each spatial statistic and demonstrate that a simple machine learning model can classify the type of game being played. Our findings suggest that spatial structure contains sufficient information to infer ecological interactions. This work represents a step toward clinically viable tools for identifying cell-cell interactions in tumors, supporting the development of ecologically informed cancer therapies.
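A minimal sketch of the kind of agent-based model described (my own toy payoffs and update rule, not the authors' model or their spatial statistics): cells of two types occupy a lattice, and a randomly chosen site is refilled with a type chosen in proportion to the game payoffs each type earns against the local neighbourhood.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 2x2 payoff matrix A[i, j]: payoff to a type-i cell against a type-j neighbour
# (0 = drug-sensitive, 1 = drug-resistant). Entries are illustrative only.
A = np.array([[1.0, 0.6],
              [0.9, 0.8]])
N, STEPS = 60, 5000
grid = rng.integers(0, 2, size=(N, N))          # random initial mix of the two types

def neighbour_fractions(grid, i, j):
    """Fraction of each type in the 8-cell Moore neighbourhood (periodic edges)."""
    idx = [((i + di) % N, (j + dj) % N)
           for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    types = np.array([grid[a, b] for a, b in idx])
    return np.array([(types == 0).mean(), (types == 1).mean()])

for _ in range(STEPS):
    i, j = rng.integers(0, N, size=2)           # random focal site
    payoffs = A @ neighbour_fractions(grid, i, j)
    grid[i, j] = rng.choice(2, p=payoffs / payoffs.sum())

# A single end-point snapshot like `grid` is the kind of image on which spatial
# statistics and a game classifier would then be computed.
print("resistant fraction:", (grid == 1).mean())
```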
Ensuring consistent and complete data is a fundamental requirement in any research study involving data analysis, as it forms the basis for producing reliable, meaningful, and actionable results. Missing or censored data pose a major challenge for analysis and may even lead to unreasonable and misleading results. Omitting such data leads to information loss, which may affect the precision of the results. Hence, using such data requires analytical techniques designed explicitly to address this problem. Many researchers have dealt with the topic of missing data since the 1960s. This paper reviews key statistical methods that have been developed to address the challenges of missing data.
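As a tiny concrete contrast between two of the simplest strategies such reviews compare (synthetic data, pandas assumed): listwise deletion shrinks the sample, while mean imputation keeps the sample size but deflates the variance.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
df = pd.DataFrame({"age": rng.normal(40, 10, 200),
                   "score": rng.normal(70, 12, 200)})
df.loc[rng.choice(200, size=40, replace=False), "score"] = np.nan   # 20% missing

# Listwise deletion: drop incomplete rows (information loss, smaller n).
complete = df.dropna()

# Mean imputation: fill gaps with the observed mean (keeps n, shrinks variance).
imputed = df.fillna({"score": df["score"].mean()})

print(len(complete), round(complete["score"].std(), 2))
print(len(imputed),  round(imputed["score"].std(), 2))
```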