Mathematics Statistics and Probability

Statistics Education and Methodologies

Description

This cluster of papers focuses on advancing statistical literacy and reasoning in education, with a particular emphasis on addressing statistics anxiety, promoting data science education, improving teaching methods for statistical concepts, and enhancing students' quantitative skills. The cluster also explores research methods, probability literacy, statistical thinking, and the need for educational reform to foster inferential reasoning.

Keywords

Statistics Anxiety; Data Science; Statistical Reasoning; Teaching Methods; Quantitative Skills; Research Methods; Probability Literacy; Statistical Thinking; Educational Reform; Inferential Reasoning

This seventh edition of Statistical Methods for Psychology, like the previous editions, surveys statistical techniques commonly used in the behavioral and social sciences, especially psychology and education. Although it is designed for advanced undergraduates and graduate students, it does not assume that students have had either a previous course in statistics or a course in mathematics beyond high-school algebra. Those students who have had an introductory course will find that the early material provides a welcome review. The book is suitable for either a one-term or a full-year course, and I have used it successfully for both. Since I have found that students, and faculty, frequently refer back to the book from which they originally learned statistics when they have a statistical problem, I have included material that will make the book a useful reference for future use. The instructor who wishes to omit this material will have no difficulty doing so. I have cut back on that material, however, to include only what is still likely to be useful. The idea of including every interesting idea had led to a book that was beginning to be daunting.
Aiming to remedy what they see as the fragmentary nature of texts on statistics, the authors of this textbook explore both design and analytic questions, and analytic and measurement issues. Commentaries are offered on inputs and outputs of computer programs in the context of the topics presented.
In this paper, we explore the rules that determine intuitive predictions and judgments of confidence and contrast these rules to the normative principles of statistical prediction. Two classes of prediction are discussed: category prediction and numerical prediction. In a categorical case, the prediction is given in nominal form, for example, the winner in an election, the diagnosis of a patient, or a person's future occupation. In a numerical case, the prediction is given in numerical form, for example, the future value of a particular stock or of a student's grade point average. In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors (Kahneman & Tversky, 1972b, 3; Tversky & Kahneman, 1971, 2; 1973, 11). The present paper is concerned with the role of one of these heuristics – representativeness – in intuitive predictions. Given specific evidence (e.g., a personality sketch), the outcomes under consideration (e.g., occupations or levels of achievement) can be ordered by the degree to which they are representative of that evidence. The thesis of this paper is that people predict by representativeness, that is, they select or order outcomes by the degree to which the outcomes represent the essential features of the evidence.
PART I: LOOKING AT DATA Looking at Data-Distributions Looking at Data-Relationships Producing Data PART II: PROBABILITY AND INFERENCE Probability: The Study of Randomness Sampling Distributions Introduction to Inference Inference for Distributions Inference for Proportions PART III: TOPICS IN INFERENCE Analysis of Two-Way Tables Inference for Regression Multiple Regression One-Way Analysis of Variance Two-Way Analysis of Variance Additional chapters available on the CD-ROM and online: Logistic Regression Nonparametric Tests Bootstrap Methods and Permutation Tests Statistics for Quality: Control and Capability
Wilson, E. B. (1927). Probable Inference, the Law of Succession, and Statistical Inference. Journal of the American Statistical Association: Vol. 22, No. 158, pp. 209-212.
1. Introduction 1.1 Introduction to statistical methodology 1.2 Descriptive statistics and inferential statistics 1.3 The role of computers in statistics 1.4 Chapter summary 2. Sampling and Measurement 2.1 Variables and their measurement 2.2 Randomization 2.3 Sampling variability and potential bias 2.4 Other probability sampling methods* 2.5 Chapter summary 3. Descriptive Statistics 3.1 Describing data with tables and graphs 3.2 Describing the center of the data 3.3 Describing variability of the data 3.4 Measures of position 3.5 Bivariate descriptive statistics 3.6 Sample statistics and population parameters 3.7 Chapter summary 4. Probability Distributions 4.1 Introduction to probability 4.2 Probability distributions for discrete and continuous variables 4.3 The normal probability distribution 4.4 Sampling distributions describe how statistics vary 4.5 Sampling distributions of sample means 4.6 Review: Probability, sample data, and sampling distributions 4.7 Chapter summary 5. Statistical Inference: Estimation 5.1 Point and interval estimation 5.2 Confidence interval for a proportion 5.3 Confidence interval for a mean 5.4 Choice of sample size 5.5 Confidence intervals for median and other parameters* 5.6 Chapter summary 6. Statistical Inference: Significance Tests 6.1 Steps of a significance test 6.2 Significance test for a mean 6.3 Significance test for a proportion 6.4 Decisions and types of errors in tests 6.5 Limitations of significance tests 6.6 Calculating P(Type II error)* 6.7 Small-sample test for a proportion: the binomial distribution* 6.8 Chapter summary
7. Comparison of Two Groups 7.1 Preliminaries for comparing groups 7.2 Categorical data: comparing two proportions 7.3 Quantitative data: comparing two means 7.4 Comparing means with dependent samples 7.5 Other methods for comparing means* 7.6 Other methods for comparing proportions* 7.7 Nonparametric statistics for comparing groups 7.8 Chapter summary 8. Analyzing Association between Categorical Variables 8.1 Contingency tables 8.2 Chi-squared test of independence 8.3 Residuals: Detecting the pattern of association 8.4 Measuring association in contingency tables 8.5 Association between ordinal variables* 8.6 Inference for ordinal associations* 8.7 Chapter summary 9. Linear Regression and Correlation 9.1 Linear relationships 9.2 Least squares prediction equation 9.3 The linear regression model 9.4 Measuring linear association: the correlation 9.5 Inference for the slope and correlation 9.6 Model assumptions and violations 9.7 Chapter summary 10. Introduction to Multivariate Relationships 10.1 Association and causality 10.2 Controlling for other variables 10.3 Types of multivariate relationships 10.4 Inferential issues in statistical control 10.5 Chapter summary 11. Multiple Regression and Correlation 11.1 Multiple regression model 11.2 Example with multiple regression computer output 11.3 Multiple correlation and R-squared 11.4 Inference for multiple regression and coefficients 11.5 Interaction between predictors in their effects 11.6 Comparing regression models 11.7 Partial correlation* 11.8 Standardized regression coefficients* 11.9 Chapter summary
12. Comparing Groups: Analysis of Variance (ANOVA) Methods 12.1 Comparing several means: One-way analysis of variance 12.2 Multiple comparisons of means 12.3 Performing ANOVA by regression modeling 12.4 Two-way analysis of variance 12.5 Two-way ANOVA and regression 12.6 Repeated measures analysis of variance* 12.7 Two-way ANOVA with repeated measures on one factor* 12.8 Effects of violations of ANOVA assumptions 12.9 Chapter summary 13. Combining Regression and ANOVA: Quantitative and Categorical Predictors 13.1 Comparing means and comparing regression lines 13.2 Regression with quantitative and categorical predictors 13.3 Permitting interaction between quantitative and categorical predictors 13.4 Inference for regression with quantitative and categorical predictors 13.5 Adjusted means* 13.6 Chapter summary 14. Model Building with Multiple Regression 14.1 Model selection procedures 14.2 Regression diagnostics 14.3 Effects of multicollinearity 14.4 Generalized linear models 14.5 Nonlinearity: polynomial regression 14.6 Exponential regression and log transforms* 14.7 Chapter summary 15. Logistic Regression: Modeling Categorical Responses 15.1 Logistic regression 15.2 Multiple logistic regression 15.3 Inference for logistic regression models 15.4 Logistic regression models for ordinal variables* 15.5 Logistic models for nominal responses* 15.6 Loglinear models for categorical variables* 15.7 Model goodness-of-fit tests for contingency tables* 15.8 Chapter summary 16. Introduction to Advanced Topics 16.1 Longitudinal data analysis* 16.2 Multilevel (hierarchical) models* 16.3 Event history analysis* 16.4 Path analysis* 16.5 Factor analysis* 16.6 Structural equation models* 16.7 Markov chains* Appendix: SAS and SPSS for Statistical Analyses Tables Answers to selected odd-numbered problems Index
This classic text and reference introduces probability theory for both advanced undergraduate students of statistics and scientists in related fields, drawing on real applications in the physical and biological sciences. "The book makes probability exciting." (Journal of the American Statistical Association)
This is a main text for an upper-level undergraduate or graduate-level introductory statistics course in departments of psychology, educational psychology, education and related areas.
The Probable Error of a Mean. But, as we decrease the number of experiments, the value of the standard deviation found from the sample of experiments becomes itself subject to an increasing error, until judgments reached in this way may become altogether misleading. In routine work there are two ways of dealing with this difficulty: (1) an experiment may be repeated many times, until such a long series is obtained that the standard deviation is determined once and for all with sufficient accuracy. This value can then be used for subsequent shorter series of similar experiments. (2) Where experiments are done in duplicate in the natural course of the work, the mean square of the difference between corresponding pairs is equal to the standard deviation of the population multiplied by √2. We can thus combine together several series of experiments for the purpose of determining the standard deviation. Owing however to secular change, the value obtained is nearly always too low, successive experiments being positively correlated. There are other experiments, however, which cannot easily be repeated very often; in such cases it is sometimes necessary to judge of the certainty of the results from a very small sample, which itself affords the only indication of the variability. Some chemical, many biological, and most agricultural and large scale experiments belong to this class, which has hitherto been almost outside the range of statistical enquiry. Again, although it is well known that the method of using the normal curve is only trustworthy when the sample is "large," no one has yet told us very clearly where the limit between "large" and "small" samples is to be drawn. The aim of the present paper is to determine the point at which we may use the tables of the probability integral in judging of
the significance of the mean of a series of experiments, and to furnish alternative tables for use when the number of experiments is too few. The paper is divided into the following nine sections: I. The equation is determined of the curve which represents the frequency distribution of standard deviations of samples drawn from a normal population. II. There is shown to be no kind of correlation between the mean and the standard deviation of such a sample. III. The equation is determined of the curve representing the frequency distribution of a quantity z, which is obtained by dividing the distance between the mean of a sample and the mean of the population by the standard deviation of the sample. IV. The curve found in I. is discussed. V. The curve found in III. is discussed. VI. The two curves are compared with some actual distributions. VII. Tables of the curves found in III. are given for samples of different size. VIII and IX. The tables are explained and some instances are given of their use. X. Conclusions. (As pointed out in Section V, the normal curve gives too large a value for p when the probability is large. I find the true value in this case to be p = 0.9976. It matters little however to a conclusion of this kind whether the odds in its favour are 1,660:1 or merely 416:1.)
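Student's central point — that the normal curve understates uncertainty when the standard deviation must itself be estimated from a small sample — can be illustrated numerically. A minimal sketch using SciPy's t and normal distributions (the sample sizes are arbitrary choices, not from the paper):

```python
from scipy import stats

# Two-sided 95% cutoffs: with sigma estimated from the sample, the
# t distribution demands a wider margin than the normal curve, and
# the gap closes as the number of observations grows.
z_crit = stats.norm.ppf(0.975)  # normal-curve cutoff, about 1.96
for n in (4, 10, 30, 100):
    t_crit = stats.t.ppf(0.975, df=n - 1)
    print(f"n = {n:3d}: t cutoff {t_crit:.3f} vs z cutoff {z_crit:.3f}")
```

For n = 4 the t cutoff is roughly 3.18, more than 60% wider than the normal 1.96, which is exactly the "small sample" regime the paper addresses.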
Several reasons have contributed to the prolonged neglect into which the study of statistics, in its theoretical aspects, has fallen. In spite of the immense amount of fruitful labour which … Several reasons have contributed to the prolonged neglect into which the study of statistics, in its theoretical aspects, has fallen. In spite of the immense amount of fruitful labour which has been expended in its practical applications, the basic principles of this organ of science are still in a state of obscurity, and it cannot be denied that, during the recent rapid development of practical methods, fundamental problems have been ignored and fundamental paradoxes left unresolved. This anomalous state of statistical science is strikingly exemplified by a recent paper entitled "The Fundamental Problem of Practical Statistics," in which one of the most eminent of modern statisticians presents what purports to be a general proof of BAYES' postulate, a proof which, in the opinion of a second statistician of equal eminence, "seems to rest upon a very peculiar -- not to say hardly supposable -- relation."
Most medical researchers, whether clinical or non-clinical, receive some background in statistics as undergraduates. However, it is most often brief, a long time ago, and largely forgotten by the time it is needed. Furthermore, many introductory texts fall short of adequately explaining the underlying concepts of statistics, and often are divorced from the reality of conducting and assessing medical research. Practical Statistics for Medical Research is a problem-based text for medical researchers, medical students, and others in the medical arena who need to use statistics but have no specialized mathematics background. The author draws on twenty years of experience as a consulting medical statistician to provide clear explanations of key statistical concepts, with a firm emphasis on practical aspects of designing and analyzing medical research. The text gives special attention to the presentation and interpretation of results and the many real problems that arise in medical research.
1. Introduction to Statistics and Data Analysis 2. Probability 3. Random Variables and Probability Distributions 4. Mathematical Expectations 5. Some Discrete Probability Distributions 6. Some Continuous Probability Distributions 7. Functions of Random Variables (optional) 8. Fundamental Distributions and Data Description 9. One and Two Sample Estimation Problems 10. One and Two Sided Tests of Hypotheses 11. Simple Linear Regression 12. Multiple Linear Regression 13. One Factor Experiments: General 14. Factorial Experiments (Two or More Factors) 15. 2^k Factorial Experiments and Fractions 16. Nonparametric Statistics 17. Statistical Quality Control 18. Bayesian Statistics
Proven material for a course on the introduction to the theory and/or the applications of classical nonparametric methods. Since its first publication in 1971, Nonparametric Statistical Inference has been widely regarded as the source for learning about nonparametric statistics. The fifth edition carries on this tradition while thoroughly revising at least 50 percent of the material. New to the fifth edition: updated and revised contents based on recent journal articles in the literature; a new section in the chapter on goodness-of-fit tests; a new chapter that offers practical guidance on how to choose among the various nonparametric procedures covered; additional problems and examples; improved computer figures. This classic, best-selling statistics book continues to cover the most commonly used nonparametric procedures. The authors carefully state the assumptions, develop the theory behind the procedures, and illustrate the techniques using realistic research examples from the social, behavioral, and life sciences. For most procedures, they present the tests of hypotheses, confidence interval estimation, sample size determination, power, and comparisons of other relevant procedures. The text also gives examples of computer applications based on Minitab, SAS, and StatXact and compares these examples with corresponding hand calculations. The appendix includes a collection of tables required for solving the data-oriented problems. Nonparametric Statistical Inference, Fifth Edition provides in-depth yet accessible coverage of the theory and methods of nonparametric statistical inference procedures. It takes a practical approach that draws on scores of examples and problems and minimizes the theorem-proof format.
For interval estimation of a proportion, coverage probabilities tend to be too large for "exact" confidence intervals based on inverting the binomial test and too small for the interval based on inverting the Wald large-sample normal test (i.e., sample proportion ± z-score × estimated standard error). Wilson's suggestion of inverting the related score test with null rather than estimated standard error yields coverage probabilities close to nominal confidence levels, even for very small sample sizes. The 95% score interval has similar behavior as the adjusted Wald interval obtained after adding two "successes" and two "failures" to the sample. In elementary courses, with the score and adjusted Wald methods it is unnecessary to provide students with awkward sample size guidelines.
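The three intervals compared in this abstract can be sketched in a few lines. This is an illustrative implementation from the formulas as usually stated, not code from the paper; the 95% z value is hard-coded for simplicity:

```python
import math

Z = 1.96  # two-sided 95% normal quantile

def wald(x, n, z=Z):
    # Wald: sample proportion ± z * estimated standard error.
    p = x / n
    m = z * math.sqrt(p * (1 - p) / n)
    return p - m, p + m

def wilson(x, n, z=Z):
    # Wilson score interval: invert the score test (null standard error).
    p = x / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    m = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - m, centre + m

def adjusted_wald(x, n, z=Z):
    # "Add two successes and two failures", then apply the Wald formula.
    p = (x + 2) / (n + 4)
    m = z * math.sqrt(p * (1 - p) / (n + 4))
    return p - m, p + m
```

With x = 0 successes out of n = 20, the Wald interval collapses to a single point at zero, while the score interval keeps a positive width; the adjusted Wald interval can dip slightly below zero and is usually truncated at 0 in practice.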
We show that social scientists often do not take full advantage of the information available in their statistical results and thus miss opportunities to present quantities that could shed the greatest light on their research questions. In this article we suggest an approach, built on the technique of statistical simulation, to extract the currently overlooked information and present it in a reader-friendly manner. More specifically, we show how to convert the raw results of any statistical procedure into expressions that (1) convey numerically precise estimates of the quantities of greatest substantive interest, (2) include reasonable measures of uncertainty about those estimates, and (3) require little specialized knowledge to understand. The following simple statement satisfies our criteria: "Other things being equal, an additional year of education would increase your annual income by $1,500 on average, plus or minus about $500." Any smart high school student would understand that sentence, no matter how sophisticated the statistical model and powerful the computers used to produce it. The sentence is substantively informative because it conveys a key quantity of interest in terms the reader wants to know. At the same time, the sentence indicates how uncertain the researcher is about the estimated quantity of interest. Inferences are never certain, so any honest presentation of statistical results must include some qualifier, such as "plus or minus $500" in the present example. (From "Making the Most of Statistical Analyses: Improving Interpretation and Presentation.")
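The simulation recipe described here — draw parameter vectors from their estimated sampling distribution, then summarize the quantity of interest in plain units — can be sketched with synthetic data. The education-income numbers below are invented to echo the article's example, not taken from any study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: income (in $1000s) vs. years of education.
n = 500
educ = rng.uniform(8, 20, n)
income = 5.0 + 1.5 * educ + rng.normal(0, 8, n)  # true slope: $1,500/year

# Ordinary least squares fit and its estimated covariance matrix.
X = np.column_stack([np.ones(n), educ])
beta_hat, *_ = np.linalg.lstsq(X, income, rcond=None)
resid = income - X @ beta_hat
sigma2 = resid @ resid / (n - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)

# Simulation step: draw many plausible parameter vectors from the
# estimated sampling distribution, then summarize the quantity of
# interest in reader-friendly units.
draws = rng.multivariate_normal(beta_hat, cov, size=10_000)
slope_draws = draws[:, 1] * 1000  # convert to dollars per year
lo, hi = np.percentile(slope_draws, [2.5, 97.5])
print(f"One more year of education: about ${slope_draws.mean():.0f}, "
      f"plus or minus roughly ${(hi - lo) / 2:.0f}")
```

The final print statement is the point: the raw regression output has been converted into exactly the kind of "estimate plus or minus uncertainty" sentence the abstract advocates.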
Mathematical Methods of Statistics. By Harold Cramér. Princeton University Press (London: Geoffrey Cumberlege), 1946. Pp. xvi + 575. 33s. 6d. Reviewed by R. C. Geary (Department of Applied Economics, Cambridge), The Economic Journal, Volume 57, Issue 226, 1 June 1947, pp. 200–202. https://doi.org/10.2307/2226151
This article described three heuristics that are employed in making judgments under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgments and decisions in situations of uncertainty.
The standard rules of probability can be interpreted as uniquely valid principles in logic. In this book, E. T. Jaynes dispels the imaginary distinction between 'probability theory' and 'statistical inference', leaving a logical unity and simplicity, which provides greater technical power and flexibility in applications. This book goes beyond the conventional mathematics of probability theory, viewing the subject in a wider context. New results are discussed, along with applications of probability theory to a wide variety of problems in physics, mathematics, economics, chemistry and biology. It contains many exercises and problems, and is suitable for use as a textbook on graduate level courses involving data analysis. The material is aimed at readers who are already familiar with applied mathematics at an advanced undergraduate level or higher. The book will be of interest to scientists working in any area where inference from incomplete information is necessary.
Computing mathematical expectation for an experiment involving a finite number of numerical outcomes is straightforward. Let X denote the random variable having n possible values x_1, x_2, …, x_n. Letting p_k denote the probability of x_k, the expected value of X is $$E(X) = \sum_{k=1}^{n} x_k p_k,$$ which can be interpreted as a weighted average of all x_k, where the weight of each outcome is represented by its probability. But caution is required when interpreting the sum if there are infinitely many outcomes and the series fails to converge absolutely.
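A direct translation of the finite formula, plus a numerical hint at why absolute convergence matters in the infinite case. The St. Petersburg payoffs used below are a standard textbook example of a divergent expectation, not taken from this article:

```python
from fractions import Fraction

def expected_value(outcomes, probs):
    # E(X) = sum of x_k * p_k: a probability-weighted average.
    assert sum(probs) == 1
    return sum(x * p for x, p in zip(outcomes, probs))

# A fair die: E(X) = (1 + 2 + ... + 6) / 6 = 7/2.
die = expected_value([1, 2, 3, 4, 5, 6], [Fraction(1, 6)] * 6)

# Infinite case: St. Petersburg payoffs 2**k with probability 2**-k.
# Every term contributes exactly 1, so the partial sums grow without
# bound and no finite expected value exists.
partial_sum = sum((2 ** k) * Fraction(1, 2 ** k) for k in range(1, 51))
```

Using exact `Fraction` arithmetic sidesteps floating-point rounding, which keeps the "probabilities sum to 1" check honest.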
This article is a short summary of the report of survey team 3, presented to the 15th International Congress on Mathematical Education (ICME-15) in Sydney in July 2024.

Probability

2025-06-19
| Cambridge University Press eBooks
The statistical investigation process consists of the stages of formulating a statistical research question, planning data collection in line with that question, analyzing the data, and interpreting the resulting findings. This process is important for enabling individuals to make sense of data and to view findings critically. This study aimed to support the professional development of middle school mathematics teachers working at science and art centers with respect to the statistical investigation process. The study group consisted of 3 middle school mathematics teachers working at a science and art center. Within the scope of the study, goals for supporting the teachers' professional development were set based on their prior experience with the statistical investigation process, and a roadmap and tools for professional development were designed in line with those goals. Through this roadmap and these tools, the teachers experienced the statistical investigation process firsthand. The professional development program was observed to contribute to achieving the goals set for supporting the teachers' professional development.

Statistics

2025-06-19
| Cambridge University Press eBooks
This study examined the effectiveness of the CALTECH Seminar in enhancing the computational thinking skills of Grade 10 Junior High School students in statistical measures at Sto. Niño National High School. The researchers proposed an intervention plan named the CALTECH (Scientific Calculator) Seminar. In this seminar, the researchers identified learning competencies related to statistical measures from the Department of Education's K-12 curriculum guide for Junior High School – Mathematics – 10. The researchers implemented the CALTECH Seminar intervention in the subject Mathematics – 10, which deals with statistical measures. Employing a quantitative-descriptive method supplemented with interviews, the study provided a comprehensive understanding of the computational thinking skills and the effectiveness of the CALTECH Seminar intervention. Twenty-nine Grade 10 Banahaw students were purposefully selected and categorized into instructional and frustration groups based on their pretest scores. The intervention included using the Survey Questionnaire to evaluate computational thinking skills before and after implementing the CALTECH Seminar. Findings revealed a substantial difference in the scores before and after the intervention. The mean pre-test score was 23.28% (needs improvement), while the mean post-test score was 79.48% (highly satisfactory). Moreover, results showed that students performed better in the post-test than the pre-test, implying a significant difference between the two test scores, t(28) = 17.5, p < .001, with a standardized effect size of Cohen's d = 3.23. To provide a comprehensive understanding of the students' experiences, the researchers conducted in-depth interviews with selected participants.
From their responses, five (5) themes were identified from the insights of students: 1) difficulties in applying statistical concepts pre-intervention; 2) enhancing problem-solving skills; 3) improving computational thinking; 4) building confidence in solving statistical problems; and 5) incorporating real-world applications to enhance learning. In conclusion, the CALTECH Seminar was effective in enhancing the computational thinking skills in statistical measures of the students. Keywords: CALTECH Seminar, Computational Thinking Skills, Interactive Learning Approaches, Quantitative-Descriptive, Philippines
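The reported statistics follow the usual paired-samples recipe. A sketch with invented pre/post scores (not the study's data) shows how t(df), p, and Cohen's d relate:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post scores for 29 students, built to show a
# large gain; these numbers are illustrative, not the study's data.
rng = np.random.default_rng(1)
pre = rng.normal(23, 5, 29)
post = pre + rng.normal(15, 4, 29)

# Paired t-test on the gains, and Cohen's d as the standardized
# mean of the paired differences.
t_stat, p_val = stats.ttest_rel(post, pre)
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)
print(f"t({len(pre) - 1}) = {t_stat:.1f}, p = {p_val:.3g}, d = {cohens_d:.2f}")
```

For paired designs, t = d·√n, which gives a quick consistency check on reported values: 3.23 × √29 ≈ 17.4, close to the reported t(28) = 17.5.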
Statistical anxiety is a condition characterized by intense experiences of fear, tension, and discomfort when faced with tasks that require the use of specialized software or interpreting statistical information. This often leads many students to avoid performing statistical tasks, which not only affects their academic education but also their continued professional development. The current study aims to present basic statistical concepts (descriptive statistics) in an understandable manner and to propose familiar software for their calculation that does not require specialized training. The presented model for entering data into Microsoft Excel, processing it, calculating, and describing descriptive statistics aims to show the reader that this is not a difficult and time-consuming task and is entirely achievable using software that „is lying around on our computers anyway" (Thiagarajan, B., 2023: 8).
Ensuring statistical productivity is essential for enhancing research efficiency and quality. This study examined the correlation of Artificial Intelligence (AI) adoption, research skills, and statistical knowledge with statistical productivity among graduate students in Misamis Occidental during the academic year 2023–2024. Employing a descriptive-correlational design, 192 graduate students were selected through stratified random sampling. Data were collected using researcher-developed questionnaires and analysed using the mean, standard deviation, Pearson correlation, and stepwise multiple regression analysis. The findings indicated high levels of AI adoption, research skills, and statistical knowledge, all of which were positively correlated with statistical productivity. AI engagement, research proficiency, and statistical expertise were significant predictors of statistical productivity. Active use of AI and a strong understanding of statistical concepts notably enhanced students' ability to analyse and interpret data. To further improve productivity, it is recommended that school administrators establish research centres equipped with appropriate software and tools to support research and statistical analysis.
Mehdi Ghayoumi | Chapman and Hall/CRC eBooks
This study aimed to compare manual and software-based techniques for analyzing quantitative data in educational research. Specifically, it investigated whether the Chi-Square test, paired sample t-test, one-way ANOVA, and Pearson correlation would yield different results when computed manually versus with the Statistical Package for the Social Sciences (SPSS) version 20. A comparative research design was employed, and datasets generated by the researcher were analyzed through both methods. Findings showed that manual and SPSS analyses produced identical statistical results. However, the software-based method proved significantly faster than manual analysis. While manual analysis offers greater flexibility and potentially deeper understanding, it is more time-consuming and susceptible to human error. In contrast, statistical software provides quicker and more accurate results and can handle complex computations, though it requires technical knowledge, may take time to learn the syntax, and may involve installation costs. The study concluded that both manual and software-based techniques are accurate, but software-based methods offer greater efficiency. Researchers are encouraged to use either method for key statistical tests such as the t-test, Chi-Square, ANOVA, and Pearson correlation, depending on context and resources. Additionally, learning manual techniques may strengthen a researcher's understanding of statistical concepts and improve interpretation skills.
Tolga Demirhan | Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi
Although predicting student achievement is something every educational institution wants, it is quite difficult because many factors influence achievement and the data are typically imbalanced. In this challenging setting, machine learning approaches are used to predict student achievement. In machine learning studies, which take an experience-based approach, obtaining the dataset under investigation is an important step. However, performing operations and analyses on the collected data is difficult, and it can also be time-consuming and costly. Free Python libraries such as Scikit-learn can help overcome this problem. This study aims to analyze student performance prediction, which is of great importance for educational datasets, using Scikit-learn, and to contribute to the literature and to researchers by sharing the process together with the code scripts used. In this context, the Naive Bayes (NB), Random Forest (RF), Support Vector Machine (SVM), and Decision Tree (DT) algorithms were applied in turn. To express the success of the developed prediction models, the accuracy, precision, recall, F1-score, kappa, and confusion matrix metrics commonly used in academic research were employed.
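The evaluation metrics the abstract lists (accuracy, precision, recall, F1-score, confusion matrix) all derive from the same four confusion counts. A minimal pure-Python sketch on hypothetical pass/fail predictions, equivalent to what `sklearn.metrics` would report:

```python
def confusion_counts(y_true, y_pred, positive=1):
    """True/false positives and negatives for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from the confusion counts."""
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical labels: 1 = student passes, 0 = student fails
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = metrics(y_true, y_pred)
```

In practice one would call `sklearn.metrics.classification_report` on the NB, RF, SVM, and DT predictions; the hand computation above just makes explicit what those numbers mean.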
Tomi Apra Santosa | RIGGS Journal of Artificial Intelligence and Digital Business
This study examined the influence of the deep learning model in science learning. It is a quantitative study with a meta-analysis approach. The data sources were 12 national journals indexed by SINTA, DOAJ, or EBSCO. The eligibility criteria required that each study use a quantitative, experimental method; be published in a journal between 2022 and 2025; be relevant; and involve participants from elementary, junior high, high school, or college. Data were analyzed quantitatively with the help of the JASP application. The results show a significant influence of the deep learning model in science learning, with an effect size of 0.915, which falls in the high category. These findings suggest that the deep learning model can be applied effectively from the elementary school level through university.
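A meta-analytic summary like the 0.915 reported above is typically an inverse-variance weighted average of per-study effect sizes (JASP offers fixed- and random-effects variants; the abstract does not say which was used). A fixed-effect sketch on hypothetical study inputs:

```python
def fixed_effect_pooled(effects, variances):
    """Inverse-variance (fixed-effect) pooled effect size and its SE.

    Each study is weighted by 1/variance, so precise studies
    contribute more to the pooled estimate.
    """
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    return pooled, se

# Hypothetical per-study effect sizes and sampling variances:
effects = [0.8, 1.0, 0.9, 1.1]
variances = [0.04, 0.05, 0.03, 0.06]
pooled, se = fixed_effect_pooled(effects, variances)
```

A random-effects model would additionally estimate between-study heterogeneity (tau-squared) and fold it into the weights.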
Psychologists often take statistical hypothesis testing to be a procedure that detects whether effects do or do not exist; the output of this procedure can then be used to inform … Psychologists often take statistical hypothesis testing to be a procedure that detects whether effects do or do not exist; the output of this procedure can then be used to inform theories, which should predict the effects that do exist, and not those effects that do not. Call this the modular view of statistics: Statistics can do their job relatively separately from the theories that are being tested, just so long as the right effects are being tested. The interface between statistics and science can be called the pragmatics of statistical inference. The modular view declares that interface is simple for hypothesis testing: It consists of science telling statistics what to test for; and statistics telling science whether it exists or not. One reason why there has been a credibility crisis may be that the modular view is dominant yet unhelpful. If statistics cannot in an atheoretical way declare what does and does not exist, then psychologists have not been genuinely testing their theories. In this chapter I reject the modular view. One might reject the whole concept of getting evidence of whether or not there is an effect; instead, I accept this aspect of current practice but argue one can only get evidence for a relevant effect being there or not given a theory of what size effect could be expected on that theory.
Abstract To date, few standardized tests measuring students' performance with regard to statistics exist; only four have been proposed for college or university students. The goal of the present study is to investigate these tests. University professors and instructors experienced in teaching statistics were asked to list the concepts they think are assessed by each item of the tests. A total of 708 responses were obtained from 42 participants. Using thematic analysis, 18 fundamental statistical concepts were identified. Unplanned analyses of participants' scores were also computed. The results suggest that there is no consensus on some of the items' correct answers. The study has practical implications for teaching statistics, from learning goals to assessment methods.
This study applies descriptive and exploratory data analysis techniques to examine the performance of students from 59 undergraduate programs in Early Childhood Education and Pedagogy in Colombia on the 2022 … This study applies descriptive and exploratory data analysis techniques to examine the performance of students from 59 undergraduate programs in Early Childhood Education and Pedagogy in Colombia on the 2022 Saber Pro standardized test. Official datasets from ICFES were processed and analyzed using SPAD 5.6 and SPSS 28, focusing on five generic competencies: critical reading, quantitative reasoning, written communication, English, and citizenship competencies. Performance levels were examined in relation to institutional, sectoral, and regional variables. The findings reveal consistently low performance in quantitative reasoning, advantages associated with accredited institutions, and regional disparities. This research highlights the potential of data science tools—such as data mining and statistical visualization—for guiding evidence-based educational strategies and policy-making. It underscores the value of educational data systems as foundations for improving academic outcomes and reducing inequality through informed decision-making.
Yixuan Li, Alex Endert, Crystal N. Wise +5 more | Computer-Supported Collaborative Learning / The Computer-Supported Collaborative Learning Conference
This article addresses the topic of solving multiplication problems through modeling as a means of fostering mathematical thinking at the Basic General Education (EGB) level. Its general objective is to design a didactic strategy based on the mathematical modeling of multiplicative problems that contributes to the development of calculation skills. The identified scientific problem lies in students' limited ability to model multiplication situations, which hinders the development of calculation in Basic General Education. The research is descriptive-exploratory in type, with a mixed qualitative and quantitative approach. To gather information, a survey was administered to experts in the development of mathematical skills and thinking, specifically in solving multiplicative problems through modeling in EGB. The results show that this strategy fosters the development of mathematical competencies and logical thinking, although some experts caution that its applicability may be limited in certain educational contexts.