Business, Management and Accounting / Management Information Systems

Big Data and Business Intelligence

Description

This cluster of papers explores the impact of big data analytics on business performance, with a focus on supply chain management, predictive analytics, data science, and decision support systems. It delves into the challenges and opportunities of leveraging big data for firm performance and sustainability, as well as the integration of business intelligence and knowledge management. The research also examines the role of big data in innovation, risk mitigation, and operational transparency within organizations.

Keywords

Big Data; Analytics; Business Intelligence; Supply Chain Management; Predictive Analytics; Data Science; Firm Performance; Decision Support Systems; Data Warehousing; Sustainability

The issue of whether IS positivist researchers were sufficiently validating their instruments was initially raised fifteen years ago and rigor in IS research is still one of the most critical scientific issues facing the field. Without solid validation of the instruments that are used to gather data on which findings and interpretations are based, the very scientific basis of the profession is threatened. This study builds on four prior retrospectives of IS research that conclude that IS positivist researchers continue to face major barriers in instrument, statistical, and other forms of validation. It goes beyond these studies by offering analyses of the state-of-the-art of research validities and deriving specific heuristics for research practice in the validities. Some of these heuristics will, no doubt, be controversial. But we believe that it is time for the IS academic profession to bring such issues into the open for community debate. This article is a first step in that direction. Based on our interpretation of the importance of a long list of validities, this paper suggests heuristics for reinvigorating the quest for validation in IS research via content/construct validity, reliability, manipulation validity, and statistical conclusion validity. New guidelines for validation and new research directions are offered.
High-definition televisions should, by now, be a huge success. Philips, Sony, and Thompson invested billions of dollars to develop TV sets with astonishing picture quality. From a technology perspective, they've succeeded: Console manufacturers have been ready for the mass market since the early 1990s. Yet the category has been an unmitigated failure, not because of deficiencies, but because critical complements such as studio production equipment were not developed or adopted in time. Under-performing complements have left console producers in the position of offering a Ferrari in a world without gasoline or highways--an admirable engineering feat, but not one that creates value for customers. The HDTV story exemplifies the promise and peril of innovation ecosystems--the collaborative arrangements through which firms combine their individual offers into a coherent, customer-facing solution. When they work, innovation ecosystems allow companies to create value that no one firm could have created alone. The benefits of these systems are real. But for many organizations the attempt at ecosystem innovation has been a costly failure. This is because, along with new opportunities, innovation ecosystems also present a new set of risks that can brutally derail a firm's best efforts. Innovation ecosystems are characterized by three fundamental types of risk: initiative risks--the familiar uncertainties of managing a project; interdependence risks--the uncertainties of coordinating with complementary innovators; and integration risks--the uncertainties presented by the adoption process across the value chain. Firms that assess ecosystem risks holistically and systematically will be able to establish more realistic expectations, develop a more refined set of environmental contingencies, and arrive at a more robust innovation strategy. Collectively, these actions will lead to more effective implementation and more profitable innovation.
An innovation classic. From Steve Jobs to Jeff Bezos, Clay Christensen's work continues to underpin today's most innovative leaders and organizations. A seminal work on disruption--for everyone confronting the growth paradox. For readers of the bestselling The Innovator's Dilemma--and beyond--this definitive work will help anyone trying to transform their business right now. In The Innovator's Solution, Clayton Christensen and Michael Raynor expand on the idea of disruption, explaining how companies can and should become disruptors themselves. This classic work shows just how timely and relevant these ideas continue to be in today's hyper-accelerated business environment. Christensen and Raynor give advice on the business decisions crucial to achieving truly disruptive growth and propose guidelines for developing your own disruptive growth engine. The authors identify the forces that cause managers to make bad decisions as they package and shape new ideas--and offer new frameworks to help create the right conditions, at the right time, for a disruption to succeed. This is a must-read for all senior managers and business leaders responsible for innovation and growth, as well as members of their teams. Based on in-depth research and theories tested in hundreds of companies across many industries, The Innovator's Solution is a necessary addition to any innovation library--and an essential read for entrepreneurs and business builders worldwide.
This tutorial explains in detail what factorial validity is and how to run its various aspects in PLS. The tutorial is written as a teaching aid for doctoral seminars that may cover PLS and for researchers interested in learning PLS. An annotated example with data is provided as an additional tool to assist the reader in reconstructing the detailed example.
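As a rough illustration of the kind of checks such a tutorial walks through, the sketch below computes composite reliability and average variance extracted from a set of hypothetical standardized outer loadings; the loading values are invented and are not taken from the tutorial's annotated example.

```python
# Hedged sketch: given hypothetical standardized outer loadings from a PLS run,
# compute composite reliability (CR) and average variance extracted (AVE),
# two quantities commonly used to judge factorial (convergent) validity.
import numpy as np

# Hypothetical loadings of four indicators on one latent construct.
loadings = np.array([0.81, 0.76, 0.88, 0.72])

error_var = 1.0 - loadings**2                                    # indicator error variances
cr = loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())   # composite reliability
ave = (loadings**2).mean()                                       # average variance extracted

print(f"CR  = {cr:.3f}   (common heuristic: > 0.70)")
print(f"AVE = {ave:.3f}   (common heuristic: > 0.50)")
```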
This working paper has been thoroughly revised and superseded by two distinct articles. The first is a revised and peer-reviewed version of the original article: Okoli, Chitu (2015), A Guide to Conducting a Standalone Systematic Literature Review. Communications of the Association for Information Systems (37:43), November 2015, pp. 879-910. This article presents a methodology for conducting a systematic literature review with many examples from IS research and references to guides with further helpful details. The article is available from Google Scholar or from the author's website. The second extension article focuses on developing theory with literature reviews: Okoli, Chitu (2015), The View from Giants’ Shoulders: Developing Theory with Theory-Mining Systematic Literature Reviews. SSRN Working Paper Series, December 8, 2015. This article identifies theory-mining reviews, which are literature reviews that extract and synthesize theoretical concepts from the source primary studies. The article demonstrates by citation analysis that, in information systems research, this kind of literature review is more highly cited than other kinds of literature review. The article provides detailed guidelines to writing a high-quality theory-mining review.
The growing interest in Structural Equation Modeling (SEM) techniques and recognition of their importance in IS research raises the need to compare and contrast the different types of SEM techniques so that research designs can be selected appropriately. After assessing the extent to which these techniques are currently being used in IS research, the article presents a running example which analyzes the same dataset via three very different statistical techniques. It then compares two classes of SEM: covariance-based SEM and partial-least-squares-based SEM. Finally, the article discusses linear regression models and suggests guidelines as to when SEM techniques and when regression techniques should be used. The article concludes with heuristics and rule-of-thumb thresholds to guide practice, and a discussion of the extent to which practice is in accord with these guidelines.
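For the regression end of the spectrum discussed above, a minimal sketch on synthetic data is shown below using ordinary least squares in statsmodels; covariance-based SEM and PLS-SEM would instead be fit with dedicated tools (e.g., lavaan, semopy, or SmartPLS) when latent constructs are involved. The variable names and effect sizes are assumptions made only for illustration.

```python
# Hedged sketch: an ordinary linear regression baseline on observed (non-latent)
# variables, the simplest of the three techniques the article compares.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
usefulness = rng.normal(size=n)      # hypothetical observed predictor
ease_of_use = rng.normal(size=n)     # hypothetical observed predictor
intention = 0.5 * usefulness + 0.3 * ease_of_use + rng.normal(scale=0.7, size=n)

X = sm.add_constant(np.column_stack([usefulness, ease_of_use]))
model = sm.OLS(intention, X).fit()
print(model.summary())
```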
A company's capability to conceive and design quality prototypes, and bring a product to market quicker than its competitors, is increasingly the focal point of competition, according to the authors of this book. At the core of a successful new product launch is management's ability to integrate the marketing, manufacturing, and design functions for problem solving and fast action, particularly during the critical design-build-test cycles of prototype creation. Companies that consistently design it right the first time have a formidable edge in the crucial race to market.
PART ONE: INTRODUCTION TO QUALITATIVE RESEARCH Learning Aims Preparatory Thoughts Considering Quantitative or Qualitative Research Diversity in Qualitative Research Defining and Delineating Qualitative Research in This Book Overview of the Qualitative Research Process Readings I Learnt Much From Doing Your Own Qualitative Research Project: Step 1 PART TWO: RESEARCH DESIGN Learning Aims Planning a Research Project Literature Review Research Question and Purpose Legitimizing the Choice for Qualitative Research Sampling, Recruitment and Access Readings I Learnt Much From Doing Your Own Qualitative Research Project: Step 2 PART THREE: ETHICS IN QUALITATIVE RESEARCH Learning Aims Ethics in Social Research Sensitive Topics Balancing Risk and Benefits for Participants Researcher's Stress Ethical Issues in Analysis Readings I Learnt Much From Other Resources Doing Your Own Qualitative Research Project: Step 3 PART FOUR: DATA COLLECTION Learning Aims Data Qualitative Data Collection Instruments Used Writing Memos Preparing Data for Analysis Readings I Learnt Much From Doing Your Own Qualitative Research Project: Step 4 PART FIVE: PRINCIPLES OF QUALITATIVE ANALYSIS Learning Aims What is Analysis? Segmenting Data Reassembling Data Three Starting Points A Stepwise Approach: The Spiral of Analysis Readings I Learnt Much From Doing Your Own Qualitative Research Project: Step 5 PART SIX: DOING QUALITATIVE ANALYSIS Learning Aims Introduction to Coding Open Coding Axial Coding Selective Coding Reflections on Analysis Readings I Learnt Much From Doing Your Own Qualitative Research Project: Step 6 PART SEVEN: INTEGRATIVE PROCEDURES Learning Aims Heuristics for Discovery Think Aloud Analysis Special Devices: Focus Groups and Visuals Using the Computer in Analysis Reflections on Computer-assisted Data Analysis Readings I Learnt Much From Doing Your Own Qualitative Research Project: Step 7 PART EIGHT: FINDINGS Learning Aims Results of Qualitative Research Meta-syntheses of Qualitative Studies Muddling Qualitative Methods Mixed Methods Research Practical Use Readings I Learnt Much From Doing Your Own Qualitative Research Project: Step 8 PART NINE: QUALITY OF THE RESEARCH Learning Aims Thinking About Quality 'Doing' Quality Checklists for Asserting Quality of the Analysis External Validity or Generalizability Quality Challenges Readings I Learnt Much From Doing Your Own Qualitative Research Project: Step 9 PART TEN: WRITING THE RESEARCH REPORT Learning Aims Writing: The Last Part of the Analysis The Researcher as Writer The Position of Participants Structure of the Final Report Publishing Readings I Learnt Much From Doing Your Own Qualitative Research Project: Step 10
(1987). Numerical Recipes: The Art of Scientific Computing. Technometrics: Vol. 29, No. 4, pp. 501-502.
This paper defines an information system design theory (ISDT) to be a prescriptive theory which integrates normative and descriptive theories into design paths intended to produce more effective information systems. The nature of ISDTs is articulated using Dubin's concept of theory building and Simon's idea of a science of the artificial. An example of an ISDT is presented in the context of Executive Information Systems (EIS). Despite the increasing awareness of the potential of EIS for enhancing executive strategic decision-making effectiveness, there exists little theoretical work which directly guides EIS design. We contend that the underlying theoretical basis of EIS can be addressed through a design theory of vigilant information systems. Vigilance denotes the ability of an information system to help an executive remain alertly watchful for weak signals and discontinuities in the organizational environment relevant to emerging strategic threats and opportunities. Research on managerial information scanning and emerging issue tracking as well as theories of open loop control are synthesized to generate vigilant information system design theory propositions. Transformation of the propositions into testable empirical hypotheses is discussed.
An easily-applied technique for measuring attribute importance and performance can further the development of effective marketing programs.
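A minimal sketch of that technique, importance-performance analysis, is given below: each attribute's mean importance and performance ratings are computed and the attribute is placed into one of the four classic quadrants. The attribute names and ratings are hypothetical.

```python
# Hedged sketch of importance-performance analysis on made-up survey means.
import numpy as np

attributes = ["price", "reliability", "support", "delivery speed"]
importance = np.array([4.6, 4.4, 3.1, 3.9])     # mean importance ratings (1-5 scale)
performance = np.array([3.2, 4.5, 4.1, 3.0])    # mean performance ratings (1-5 scale)

imp_cut, perf_cut = importance.mean(), performance.mean()   # grid crosshairs
for attr, imp, perf in zip(attributes, importance, performance):
    if imp >= imp_cut and perf < perf_cut:
        quadrant = "Concentrate here"
    elif imp >= imp_cut and perf >= perf_cut:
        quadrant = "Keep up the good work"
    elif imp < imp_cut and perf < perf_cut:
        quadrant = "Low priority"
    else:
        quadrant = "Possible overkill"
    print(f"{attr:15s} importance={imp:.1f} performance={perf:.1f} -> {quadrant}")
```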
(2002). Monte Carlo Strategies in Scientific Computing. Technometrics: Vol. 44, No. 4, pp. 403-404.
This paper revisits the data-information-knowledge-wisdom (DIKW) hierarchy by examining the articulation of the hierarchy in a number of widely read textbooks, and analysing their statements about the nature of data, information, knowledge, and wisdom. The hierarchy referred to variously as the ‘Knowledge Hierarchy’, the ‘Information Hierarchy’ and the ‘Knowledge Pyramid’ is one of the fundamental, widely recognized and ‘taken-for-granted’ models in the information and knowledge literatures. It is often quoted, or used implicitly, in definitions of data, information and knowledge in the information management, information systems and knowledge management literatures, but there has been limited direct discussion of the hierarchy. After revisiting Ackoff’s original articulation of the hierarchy, definitions of data, information, knowledge and wisdom as articulated in recent textbooks in information systems and knowledge management are reviewed and assessed, in pursuit of a consensus on definitions and transformation processes. This process brings to the surface the extent of agreement and dissent in relation to these definitions, and provides a basis for a discussion as to whether these articulations present an adequate distinction between data, information, and knowledge. Typically information is defined in terms of data, knowledge in terms of information, and wisdom in terms of knowledge, but there is less consensus in the description of the processes that transform elements lower in the hierarchy into those above them, leading to a lack of definitional clarity. In addition, there is limited reference to wisdom in these texts.
PART I: INTRODUCTION 1. WHAT IS STATISTICS? Introduction / Why Study Statistics? / Some Current Applications of Statistics / What Do Statisticians Do? / Quality and Process Improvement / A Note to the Student / Summary / Supplementary Exercises PART II: COLLECTING THE DATA 2. USING SURVEYS AND SCIENTIFIC STUDIES TO COLLECT DATA Introduction / Surveys / Scientific Studies / Observational Studies / Data Management: Preparing Data for Summarization and Analysis / Summary PART III: SUMMARIZING DATA 3. DATA DESCRIPTION Introduction / Describing Data on a Single Variable: Graphical Methods / Describing Data on a Single Variable: Measures of Central Tendency / Describing Data on a Single Variable: Measures of Variability / The Box Plot / Summarizing Data from More Than One Variable / Calculators, Computers, and Software Systems / Summary / Key Formulas / Supplementary Exercises PART IV: TOOLS AND CONCEPTS 4. PROBABILITY AND PROBABILITY DISTRIBUTIONS How Probability Can Be Used in Making Inferences / Finding the Probability of an Event / Basic Event Relations and Probability Laws / Conditional Probability and Independence / Bayes's Formula / Variables: Discrete and Continuous / Probability Distributions for Discrete Random Variables / A Useful Discrete Random Variable: The Binomial / Probability Distributions for Continuous Random Variables / A Useful Continuous Random Variable: The Normal Distribution / Random Sampling / Sampling Distributions / Normal Approximation to the Binomial / Summary / Key Formulas / Supplementary Exercises PART V: ANALYZING DATA: CENTRAL VALUES, VARIANCES, AND PROPORTIONS 5. INFERENCES ON A POPULATION CENTRAL VALUE Introduction and Case Study / Estimation of μ / Choosing the Sample Size for Estimating μ / A Statistical Test for μ / Choosing the Sample Size for Testing μ / The Level of Significance of a Statistical Test / Inferences about μ for a Normal Population, σ Unknown / Inferences about the Population Median / Summary / Key Formulas / Supplementary Exercises 6. COMPARING TWO POPULATION CENTRAL VALUES Introduction and Case Study / Inferences about μ1 - μ2: Independent Samples / A Nonparametric Alternative: The Wilcoxon Rank Sum Test / Inferences about μ1 - μ2: Paired Data / A Nonparametric Alternative: The Wilcoxon Signed-Rank Test / Choosing Sample Sizes for Inferences about μ1 - μ2 / Summary / Key Formulas / Supplementary Exercises 7. INFERENCES ABOUT POPULATION VARIANCES Introduction and Case Study / Estimation and Tests for a Population Variance / Estimation and Tests for Comparing Two Population Variances / Tests for Comparing k > 2 Population Variances / Summary / Key Formulas / Supplementary Exercises 8. INFERENCES ABOUT POPULATION CENTRAL VALUES Introduction and Case Study / A Statistical Test About More Than Two Population Means / Checking on the Assumptions / Alternative When Assumptions are Violated: Transformations / A Nonparametric Alternative: The Kruskal-Wallis Test / Summary / Key Formulas / Supplementary Exercises 9. MULTIPLE COMPARISONS Introduction and Case Study / Planned Comparisons Among Treatments: Linear Contrasts / Which Error Rate Is Controlled / Multiple Comparisons with the Best Treatment / Comparison of Treatments to a Control / Pairwise Comparison on All Treatments / Summary / Key Formulas / Supplementary Exercises 10.
CATEGORICAL DATA Introduction and Case Study / Inferences about a Population Proportion p / Comparing Two Population Proportions p1 - p2 / Probability Distributions for Discrete Random Variables / The Multinomial Experiment and Chi-Square Goodness-of-Fit Test / The Chi-Square Test of Homogeneity of Proportions / The Chi-Square Test of Independence of Two Nominal Level Variables / Fisher's Exact Test, a Permutation Test / Measures of Association / Combining Sets of Contingency Tables / Summary / Key Formulas / Supplementary Exercises PART VI: ANALYZING DATA: REGRESSION METHODS, MODEL BUILDING 11. SIMPLE LINEAR REGRESSION AND CORRELATION Linear Regression and the Method of Least Squares / Transformations to Linearize Data / Correlation / A Look Ahead: Multiple Regression / Summary of Key Formulas. Supplementary Exercises. 12. INFERENCES RELATED TO LINEAR REGRESSION AND CORRELATION Introduction and Case Study / Diagnostics for Detecting Violations of Model Conditions / Inferences about the Intercept and Slope of the Regression Line / Inferences about the Population Mean for a Specified Value of the Explanatory Variable / Predictions and Prediction Intervals / Examining Lack of Fit in the Model / The Inverse Regression Problem (Calibration): Predicting Values for x for a Specified Value of y / Summary / Key Formulas / Supplementary Exercises 13. MULTIPLE REGRESSION AND THE GENERAL LINEAR MODEL Introduction and Case Study / The General Linear Model / Least Squares Estimates of Parameters in the General Linear Model / Inferences about the Parameters in the General Linear Model / Inferences about the Population Mean and Predictions from the General Linear Model / Comparing the Slope of Several Regression Lines / Logistic Regression / Matrix Formulation of the General Linear Model / Summary / Key Formulas / Supplementary Exercises 14. BUILDING REGRESSION MODELS WITH DIAGNOSTICS Introduction and Case Study / Selecting the Variables (Step 1) / Model Formulation (Step 2) / Checking Model Conditions (Step 3) / Summary / Key Formulas / Supplementary Exercises PART VII: ANALYZING DATA: DESIGN OF EXPERIMENTS AND ANOVA 15. DESIGN CONCEPTS FOR EXPERIMENTS AND STUDIES Experiments, Treatments, Experimental Units, Blocking, Randomization, and Measurement Units / How Many Replications? / Studies for Comparing Means versus Studies for Comparing Variances / Summary / Key Formulas / Supplementary Exercises 16. ANALYSIS OF VARIANCE FOR STANDARD DESIGNS Introduction and Case Study / Completely Randomized Design with Single Factor / Randomized Block Design / Latin Square Design / Factorial Experiments in a Completely Randomized Design / The Estimation of Treatment Differences and Planned Comparisons in the Treatment Means / Checking Model Conditions / Alternative Analyses: Transformation and Friedman's Rank-Based Test / Summary / Key Formulas / Supplementary Exercises 17. ANALYSIS OF COVARIANCE Introduction and Case Study / A Completely Randomized Design with One Covariate / The Extrapolation Problem / Multiple Covariates and More Complicated Designs / Summary / Key Formulas / Supplementary Exercises 18. ANALYSIS OF VARIANCE FOR SOME UNBALANCED DESIGNS Introduction and Case Study / A Randomized Block Design with One or More Missing Observations / A Latin Square Design with Missing Data / Incomplete Block Designs / Summary / Key Formulas / Supplementary Exercises 19. 
ANALYSIS OF VARIANCE FOR SOME FIXED EFFECTS, RANDOM EFFECTS, AND MIXED EFFECTS MODELS Introduction and Case Study / A One-Factor Experiment with Random Treatment Effects / Extensions of Random-Effects Models / A Mixed Model: Experiments with Both Fixed and Random Treatment Effects / Models with Nested Factors / Rules for Obtaining Expected Mean Squares / Summary / Key Formulas / Supplementary Exercises 20. SPLIT-PLOT DESIGNS AND EXPERIMENTS WITH REPEATED MEASURES Introduction and Case Study / Split-Plot Designs / Single-Factor Experiments with Repeated Measures / Two-Factor Experiments with Repeated Measures on One of the Factors / Crossover Design / Summary / Key Formulas / Supplementary Exercises PART VIII: COMMUNICATING AND DOCUMENTING THE RESULTS OF A STUDY OR EXPERIMENT 21. COMMUNICATING AND DOCUMENTING THE RESULTS OF A STUDY OR EXPERIMENT Introduction / The Difficulty of Good Communication / Communication Hurdles: Graphical Distortions / Communication Hurdles: Biased Samples / Communication Hurdles: Sample Size / The Statistical Report / Documentation and Storage of Results / Summary / Supplementary Exercises
An international association advancing the multidisciplinary study of informing systems. Founded in 1998, the Informing Science Institute (ISI) is a global community of academics shaping the future of informing science.
This study presents a method of testing causal hypotheses, called regression-discontinuity analysis, in situations where the investigator is unable to randomly assign Ss to experimental and control groups. The Ss were selected from near winners--5126 students who received certificates of merit and 2
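A minimal sketch of the regression-discontinuity logic on synthetic data, rather than the certificate-of-merit sample, is shown below: the jump in the fitted outcome at the assignment cutoff serves as the causal-effect estimate. The cutoff, effect size, and noise level are assumptions chosen for illustration.

```python
# Hedged sketch of a sharp regression-discontinuity analysis on simulated data:
# units scoring at or above a cutoff receive the "treatment", and the discontinuity
# in the outcome at the cutoff estimates the treatment effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
score = rng.uniform(0, 100, n)             # assignment (running) variable
cutoff = 70.0
treated = (score >= cutoff).astype(float)  # e.g., receiving an award
outcome = 10 + 0.05 * score + 2.0 * treated + rng.normal(scale=1.0, size=n)

centered = score - cutoff
X = sm.add_constant(np.column_stack([centered, treated, centered * treated]))
fit = sm.OLS(outcome, X).fit()
print(f"Estimated discontinuity at the cutoff: {fit.params[2]:.2f}")
```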
Big Data (BD), with their potential to ascertain valued insights for enhanced decision-making processes, have recently attracted substantial interest from both academics and practitioners. Big Data Analytics (BDA) is increasingly becoming a trending practice that many organizations are adopting with the purpose of constructing valuable information from BD. The analytics process, including the deployment and use of BDA tools, is seen by organizations as a means to improve operational efficiency, though it also has the strategic potential to drive new revenue streams and gain competitive advantages over business rivals. However, there are different types of analytic applications to consider. Therefore, prior to hasty use and buying costly BD tools, there is a need for organizations to first understand the BDA landscape. Given the significant nature of BD and BDA, this paper presents a state-of-the-art review that offers a holistic view of the BD challenges and BDA methods theorized/proposed/employed by organizations, to help others understand this landscape with the objective of making robust investment decisions. In doing so, the authors systematically analyse and synthesize the extant research published in the BD and BDA area. More specifically, the authors seek to answer the following two principal questions: Q1 – What are the different types of BD challenges theorized/proposed/confronted by organizations? and Q2 – What are the different types of BDA methods theorized/proposed/employed to overcome BD challenges? This systematic literature review (SLR) is carried out through observing and understanding the past trends and extant patterns/themes in the BDA research area, evaluating contributions, summarizing knowledge, and thereby identifying limitations, implications and potential further research avenues to support the academic community in exploring research themes/patterns. Thus, to trace the implementation of BD strategies, a profiling method is employed to analyze articles (published in English-speaking peer-reviewed journals between 1996 and 2015) extracted from the Scopus database. The analysis presented in this paper has identified relevant BD research studies that have contributed both conceptually and empirically to the expansion and accrual of intellectual wealth in the BDA in technology and organizational resource management discipline.
Business research refers to any type of research done when starting or running any kind of business. All business research is done to learn information that could make the company more successful. Business research methods vary depending on the size of the company and the type of information needed. Whether large or small, business research helps a company analyze its strengths and weaknesses by learning what customers are looking for in terms of the products or services the business is offering. A company can then use the business research information to adjust itself to better serve customers, gain an edge over the competition, and have a better chance of staying in business. This book integrates the theory of research methods with practical business examples, and gives readers the opportunity to link the theory to their own business experience, enabling them to use the knowledge gained in their future careers. It is, thus, primarily intended for use as a reference for teaching business research methods at the undergraduate and postgraduate level. It is also intended for current and prospective business managers.
Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Slides and additional exercises (with solutions for lecturers) are also available through the book's supporting website to help course instructors prepare their lectures.
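As one small example of the ranked-retrieval ideas such a course covers, the sketch below indexes a toy document collection with tf-idf weights and ranks documents against a query by cosine similarity; the documents and query are made up for illustration.

```python
# Hedged sketch of tf-idf ranked retrieval over a tiny, invented document collection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "big data analytics for supply chain management",
    "business intelligence dashboards and data warehousing",
    "machine learning methods for text classification",
]
query = ["text classification with machine learning"]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)    # index the collection
query_vec = vectorizer.transform(query)        # weight the query in the same space

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. score={scores[idx]:.3f}  {docs[idx]}")
```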
In the rapidly evolving digital economy, organizations face increasingly complex and dynamic risks that challenge traditional approaches to Enterprise Risk Management (ERM). Artificial Intelligence (AI) has emerged as a transformative force, offering novel capabilities for data-driven risk prediction, assessment, and mitigation. This paper explores the integration of AI into ERM frameworks, positioning it as a strategic enabler for predictive and preventive decision-making. We examine how machine learning algorithms, natural language processing, and big data analytics empower organizations to anticipate risks with higher accuracy and respond proactively to emerging threats. The study synthesizes insights from current industry practices, case studies, and academic literature to outline a framework for AI-enhanced ERM. It further discusses implementation challenges, ethical concerns, and governance mechanisms essential for successful adoption. The research concludes with actionable recommendations for leveraging AI as a value-creating asset in enterprise risk strategy, thereby fostering resilience, agility, and sustainable competitive advantage in an increasingly uncertain global environment.
To fully leverage Google Analytics and derive actionable insights, web analytics practitioners must go beyond standard implementation and customize the setup for specific functional requirements, which involves additional web development efforts. Previous studies have not provided solutions for estimating web analytics development efforts, and practitioners must rely on ad hoc practices for time and budget estimation. This study presents a COSMIC-based measurement framework to measure the functional size of Google Analytics implementations, including two examples. Next, a set of 50 web analytics projects were sized in COSMIC Function Points and used as inputs to various machine learning (ML) effort estimation models. A comparison of predicted effort values with actual values indicated that Linear Regression, Extra Trees, and Random Forest ML models performed well in terms of low Root Mean Square Error (RMSE), high Testing Accuracy, and strong Standard Accuracy (SA) scores. These results demonstrate the feasibility of applying functional size for web analytics and its usefulness in predicting web analytics project efforts. This study contributes to enhancing rigor in web analytics project management, thereby enabling more effective resource planning and allocation.
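A hedged sketch of the modelling step described above is shown below: effort is predicted from functional size with the three model families the study reports as performing well, and the models are compared by RMSE. The dataset here is synthetic, not the 50-project dataset used in the study.

```python
# Hedged sketch: effort estimation from COSMIC Function Points on simulated projects.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
size_cfp = rng.uniform(5, 80, 50).reshape(-1, 1)              # functional size in CFP
effort_hours = 6 * size_cfp.ravel() + rng.normal(0, 20, 50)   # hypothetical effort

X_train, X_test, y_train, y_test = train_test_split(size_cfp, effort_hours, random_state=0)

models = {
    "LinearRegression": LinearRegression(),
    "ExtraTrees": ExtraTreesRegressor(random_state=0),
    "RandomForest": RandomForestRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name:17s} RMSE = {rmse:.1f} hours")
```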
The study examines decisive elements affecting Business Intelligence (BI) acceptance within e-banking operations in Jordan by analyzing how IT infrastructure, data analytics competency, cybersecurity preparedness, and regulatory alignment impact the process. A series of questions were asked of 374 staff members from IT departments and compliance teams, as well as data analysis divisions representing Islamic and commercial banks in Amman, Jordan. The research period spanned from March to July 2024, while SEM was the analysis method. The study found that IT infrastructure (β = 0.38, p < 0.001) and data analytics capability (β = 0.45, p < 0.001), along with cybersecurity (β = 0.41, p < 0.001), positively influence BI adoption. The impact of regulatory compliance failed to reach statistical significance with a coefficient of 0.22 (p = 0.07). The success of BI implementation in Jordanian banks relies mainly on technology hardware capabilities, together with internal company resources, yet strict regulations pose obstacles unless they support a bank’s mission. The business solutions identified through this research will help banking leaders and software developers, together with government officials, to improve data-driven decision systems in Jordan’s financial market.
Background of the study: Discussions on RDM have grown rapidly in scholarly platforms, emerging as a key topic within library and information science (LIS). While existing studies have reviewed and analyzed RDM literature, their scope is often limited to specific areas or timeframes. A detailed and current analysis of the RDM literature is therefore needed to provide deeper insights into its complexities, evolution, and future directions. Purpose: The study presents mapping knowledge domains as a method to uncover the thematic landscape, identify significant clusters, and provide a structured understanding of interconnected concepts within the field of RDM. Method: Data were retrieved from Elsevier’s Scopus database as of August 2023. The study conducts bibliometric analysis to examine geographical distribution, publication outlets, authorship trends, and performance metrics within the field. Findings: The dataset spans from 1977 to 2023, with an increase in publications exceeding ten per year from 2012 onwards, amounting to 684 documents in various languages and reference types. The study identifies four research clusters derived from these documents, highlighting key themes, namely RDM services, data sharing, information systems, and data management. Conclusion: The findings underscore the growth of RDM-related research and contribute to a deeper understanding of the underlying structure of RDM for researchers, practitioners, and policymakers, enabling them to address current challenges and anticipate future developments in this rapidly evolving field.
This article examines the fundamental concepts, architectural distinctions, and strategic implications of data warehouses and data lakes in contemporary enterprise data management. As organizations face exponential growth in data volume and diversity, traditional siloed approaches prove increasingly insufficient to address the full spectrum of analytical requirements. The article provides a comprehensive technical analysis of data warehouse structures—characterized by subject-orientation, integration, time-variance, and non-volatility—alongside the defining features of data lakes, including schema-on-read flexibility, support for heterogeneous data types, and horizontal scalability. Through comparative assessment, the article explores how these paradigms differ in structure, query performance, governance requirements, and optimal use cases. Further examination reveals emerging convergence trends, particularly the lakehouse architecture that combines warehouse performance with lake flexibility, multi-tier processing workflows, and event-driven systems enabling real-time analytics. The article extends beyond technical implementation to address strategic considerations in enterprise data architecture design, governance implementation, and organizational structure, offering guidance on selecting appropriate technologies based on data characteristics, analytical maturity, technical capabilities, and resource constraints.
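The schema-on-write versus schema-on-read distinction can be made concrete with a short PySpark sketch, shown below; the file paths, field names, and table layout are hypothetical and a local PySpark installation is assumed.

```python
# Hedged sketch contrasting the two paradigms the article compares.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, DateType

spark = SparkSession.builder.appName("warehouse-vs-lake").getOrCreate()

# Warehouse-style (schema-on-write): a fixed schema is declared up front and
# applied when the curated sales table is read.
sales_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("order_date", DateType()),
])
warehouse_df = spark.read.schema(sales_schema).parquet("/data/warehouse/sales/")

# Lake-style (schema-on-read): raw, semi-structured events are landed first and
# their structure is inferred only when the data is queried.
lake_df = spark.read.json("/data/lake/raw/clickstream/")
lake_df.createOrReplaceTempView("clickstream")
spark.sql("SELECT count(*) AS events FROM clickstream").show()
```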
Pega Systems has emerged as a leader in the field of Business Process Management (BPM), leveraging artificial intelligence to automate workflows, enhance decision-making, and improve customer experiences. This article investigates the integration of Pega AI with BPM workflows and its impact on operational efficiency, decision accuracy, and process automation across organizations. Through a mixed methods approach combining case studies, performance metrics, and expert interviews, the article evaluates how Pega AI drives innovation in various industries, including banking, insurance, healthcare, and telecommunications. The article covers key capabilities such as predictive analytics, decision management, natural language processing, and robotic process automation, along with how these features contribute to smarter, more agile business processes. The article's findings provide insights into practical applications, implementation challenges, return on investment considerations, and emerging trends in AI-powered BPM solutions.
Purpose: This study aims to understand the complexities and nuances surrounding artificial intelligence (AI) integration within the context of Malaysian public sector accounting. By exploring the perspectives and experiences of key stakeholders, this research seeks to contribute to the development of strategies for successful AI implementation in the sector. Design/ Methodology/ Approach: A qualitative research approach, involving semi-structured interviews with 18 public sector accountants, was employed to comprehensively explore their experiences, perceptions and challenges of AI adoption. Findings: Findings indicate a foundational understanding of AI's potential amongst public sector accountants, with its application envisioned for automating repetitive tasks, reducing manual processes, and performing data analytics. While challenges such as system integration, network disruptions, and resistance from senior accountants hinder implementation, participants express optimism about AI's role in enhancing efficiency and decision-making. This positive outlook is coupled with an expectation of reallocating staff towards higher-value functions like internal controls and auditing. Research Limitations/ Implications: The study's limitations arise primarily from its sample size of 18 participants and the focused selection of public departments. This restricted scope may constrain the generalisability of the findings to the broader spectrum of Malaysian public sector accounting services. Practical Implications: By identifying key challenges and opportunities, policymakers can establish guidelines and frameworks for a phased approach, prioritizing departments with the necessary infrastructure and human capital. This strategic approach will facilitate the seamless integration of AI, maximizing its benefits while mitigating potential risks. Originality/ Values: As an early exploration in this field, this study offers valuable insights that may assist policymakers in developing a strategic roadmap for AI implementation within the Malaysian public sector accounting domain. Keywords: Public sector accounting, AI adoption, accounting information systems (AIS), Malaysia
Big Data pipelines and Generative Artificial Intelligence (GenAI) have enabled new approaches to financial risk prediction. This paper deals with the Cloud-centric data engineering framework, where massive Big Data technologies are merged with GenAI to allow a more accurate, faster, and dependable financial risk assessment. The proposed concept utilizes distributed computing paradigms to acquire, process, and analyze high-velocity financial data sourced from multiple environments, including transactional datasets, market feeds, and social sentiment data. Due to the usage of GenAI within this framework, this system can detect complex patterns, simulate various stress scenarios, and provide insightful early warnings, which the conventional models did not highlight. The discussion also involves Cloud-centric designs to guarantee proper elasticity and fault tolerance with seamless integration into the modern DevOps toolchains. In this case, the outcome is capable of reactive analytics and adaptive model deployment on a massive scale. The contributions are highlighted by the development of dynamic preprocessing, feature, and model selection steps for Big Data engineering and GenAI on the Apache Spark, Kafka, and Kubernetes frameworks. The validation process is associated with the experimental demonstration of the superior early warning signal detection and loss avoidance rate. The resulting system might be viewed as a novel approach that merges the capabilities of Big Data engineering and GenAI in the Cloud setup to form a practical step for proactiveness and data-drivenness in the given field, which is particularly important with the current complexity and velocity of financial data.
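A minimal sketch of the ingestion layer such a framework might use is given below, consuming transaction events from Kafka with Spark Structured Streaming and maintaining a windowed per-account exposure aggregate; the broker address, topic name, schema, and window sizes are assumptions, and the GenAI scoring stage is not shown.

```python
# Hedged sketch: stream transaction events from Kafka and compute rolling exposure.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("risk-ingest").getOrCreate()

event_schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "kafka:9092")   # hypothetical broker address
       .option("subscribe", "transactions")               # hypothetical topic
       .load())

# Parse the Kafka message value as JSON into typed columns.
events = raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e")).select("e.*")

# Windowed per-account exposure as a simple early-warning aggregate.
exposure = (events
            .withWatermark("event_time", "10 minutes")
            .groupBy(F.window("event_time", "5 minutes"), "account_id")
            .agg(F.sum("amount").alias("exposure")))

query = exposure.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```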
In the face of accelerating digital transformation and AI-driven innovations in the post-COVID-19 era, the effectiveness of Human Resource Information Systems (HRIS) is critical to organizational resilience and sustainable digital transformation in highly regulated sectors. This study examines how information quality, executive innovativeness, and staff IT capabilities influence HRIS effectiveness and evaluates the mediating role of Information System (IS) Ambidexterity, defined as an organization’s ability to explore and exploit its IS resources concurrently. By confirming the impact of organizational enablers on HRIS effectiveness, the study provides theoretical grounding for digital transformation strategies rooted in Resource-Based View (RBV) and Dynamic Capabilities Theory (DCT). Partial Least Squares Structural Equation Modeling (PLS-SEM) using SmartPLS was employed for its strength in modeling complex relationships and validating latent constructs in organizational contexts. Empirical data were gathered from 157 HR leaders across financial institutions in Pakistan. The results confirm that the identified enablers significantly impact both IS Ambidexterity and HRIS effectiveness and also emerge as strategic levers for building resilient, data-driven HRIS frameworks. IS Ambidexterity, a relatively underexplored construct in information systems research, enhances the strategic contribution of HRIS by serving as a dynamic capability that enables organizations to adapt and create sustained value in evolving digital environments. HRIS effectiveness contributes to efficiency, agility, strategic responsiveness, and cost optimization in financial institutions. The findings contribute to theory by integrating IS enablers with dynamic capability mediation, enriching the RBV-DCT interplay. This study provides evidence-based insights for developing economies pursuing sustainable digital transformations.
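The mediation logic the study tests can be illustrated with a simplified stand-in: observed composite scores, ordinary-least-squares paths, and a bootstrapped indirect effect, rather than full PLS-SEM as run in SmartPLS. All variables below are simulated; only the sample size mirrors the study.

```python
# Hedged sketch of enabler -> IS Ambidexterity -> HRIS effectiveness mediation,
# using simulated composite scores and OLS paths (not the study's PLS-SEM model).
import numpy as np

rng = np.random.default_rng(1)
n = 157  # matches the study's sample size; the data itself is simulated
info_quality = rng.normal(size=n)                                    # enabler (X)
ambidexterity = 0.6 * info_quality + rng.normal(scale=0.8, size=n)   # mediator (M)
hris_effect = 0.5 * ambidexterity + 0.2 * info_quality + rng.normal(scale=0.8, size=n)  # outcome (Y)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                       # X -> M path
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # M -> Y path, controlling for X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)   # resample cases with replacement
    boot.append(indirect_effect(info_quality[idx], ambidexterity[idx], hris_effect[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Bootstrapped indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```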
This article explores the formation of a unified methodological framework for data management within modern enterprises. The relevance of the topic stems from ongoing changes in economic and social structures, driven by digital transformation and the increasing importance of information as a critical resource. Data is now recognized as one of the most valuable assets in contemporary business. Companies striving not only to achieve their business objectives but also to ensure long-term, sustainable market performance face the urgent need to develop tools and workflows for managing data of varying quality and formats. The aim of this study is to examine both fundamental and forward-looking methodological aspects of data handling and to identify the key characteristics of this process. Data management — across its many forms, as discussed in the article — is conceptualized as a comprehensive system based on a digital model that supports the effective operation of businesses in the face of new technological implementations. The authors identify the structural elements of such a system and highlight the interdependence between the development of methodological approaches to data management and the broader digital transformation of companies. The research methodology includes analysis, deduction, and analogy. The findings may be of interest to both domestic and international researchers for further study in the field of enterprise data management methodology, to business professionals seeking to optimize their data management strategies, and to students and postgraduate scholars studying enterprise data management.
The article presents the results of a study aimed at identifying the current and potential roles of digital platforms in supply chain management. The research reviews existing theoretical approaches and developments in this area, as well as analyzes case studies of successful digitalization. The authors demonstrate how the transition of digital platforms to working with Big Data sources is transforming their role in the management process. Previously, at the stage of digital reporting, there was a gap between operational logistics functional management and expert analytical work with aggregated indicators. However, new systems that provide for the processing, storage, and analysis of data, as well as the visualization of this information and the results of its analysis, make it possible to link strategic goal-setting, KPIs, and supply chain resilience management directly with data flows about functional business processes, creating a “digital twin” of the enterprise. Using structural-functional analysis and the case study method, the study examines management practices and synthesizes a comprehensive approach to exploring the interrelation between technological and managerial innovations during the implementation of digital platforms. As a result, the paper proposes a model that outlines the role of digital platforms at different levels of management activity. It distinguishes between technological innovations (the development of digital platforms tailored to organizational needs) and managerial innovations (adaptive development of data-driven organizational management systems). The findings may be of interest both to academic researchers studying management challenges amid digital transformation and to practitioners conducting applied scientific and analytical studies of digital transformation outcomes in the express delivery industry and related sectors.
Knowledge extraction and sharing is one of the biggest challenges organizations face to ensure successful and long-lasting knowledge repositories. The North Carolina Department of Transportation (NCDOT) commissioned a web-based knowledge management program called Communicate Lessons, Exchange Advice, Record (CLEAR) for end-users to promote employee-generated innovation and to institutionalize organizational knowledge. Reusing knowledge from an improperly managed database is problematic and potentially causes substantial financial loss and reduced productivity for an organization. Poorly managed databases can hinder effective knowledge dissemination across the organization. Data-driven dashboards offer a promising solution by facilitating evidence-driven decision-making through increased information access to disseminate, understand and interpret datasets. This paper describes an effort to create data visualizations in Tableau for CLEAR’s gatekeeper to monitor content within the knowledge repository. Through the three web-based strategic dashboards relating to lessons learned and best practices, innovation culture index, and website analytics, the information displays will aid in disseminating useful information to facilitate decision-making and execute appropriate time-critical interventions. Particular emphasis is placed on utility-related issues, as data from the NCDOT indicate that approximately 90% of projects involving utility claims experienced one or two such incidents. These claims contributed to an average increase in project costs of approximately 2.4% and schedule delays averaging 70 days. The data dashboards provide key insights into all 14 NCDOT divisions, supporting the gatekeeper in effectively managing the CLEAR program, especially relating to project performance, cost savings, and schedule improvements. The chronological analysis of the CLEAR program trends demonstrates sustained progress, validating the effectiveness of the dashboard framework. Ultimately, these data dashboards will promote organizational innovation in the long run by encouraging end-user participation in the CLEAR program.
This comprehensive article explores the strategic integration of Artificial Intelligence, Internet of Things, and Big Data technologies as a framework for public sector transformation. It examines how these convergent technologies create opportunities for governments to shift from reactive to proactive service models while enhancing operational efficiency. The analysis investigates implementation pathways across multiple domains including predictive service delivery, infrastructure maintenance, and emergency response systems. It addresses critical governance challenges related to data privacy, cybersecurity vulnerabilities, and organizational adaptation requirements. The article presents a forward-looking perspective on the evolution of technology-enabled public administration, emphasizing the potential for more responsive, transparent and citizen-centric governance models that leverage data-driven insights to improve decision-making and resource allocation across all levels of government.
This paper aims to explore the challenges of maintaining and modernizing legacy systems, particularly COBOL-based platforms, the backbone of many financial and administrative systems. By exploring the DOGE team’s initiative to modernize government IT systems on a relevant case study, the author analyzes the pros and cons of AI and Agile methodologies in addressing the limitations of static and highly resilient legacy architectures. A systematic literature review was conducted to assess the state of the art about legacy system modernization, AI integration, and Agile methodologies. Then, the gray literature was analyzed to provide practical insights into how government agencies can modernize their IT infrastructures while addressing the growing shortage of COBOL experts. Findings suggest that AI may support interoperability, automation, and knowledge abstraction, but also introduce new risks related to cybersecurity, workforce disruption, and knowledge retention. Furthermore, the transition from Waterfall to Agile approaches poses significant epistemological and operational challenges. The results highlight the importance of adopting a hybrid human–AI model and structured governance strategies to ensure sustainable and secure system evolution. This study offers valuable insights for organizations that are facing the challenge of balancing the desire for modernization with the need to ensure their systems remain functional and manage tacit knowledge transfer.
This is the publication of the micro-course Aplicação Prática de Analytics e Business Intelligence para Saúde (Practical Application of Analytics and Business Intelligence for Healthcare). In this learning journey, students gain an overview of the main technical tools used in the market, of the importance of mathematical and statistical concepts for extracting relevant information, and of how to present results clearly and objectively through dashboards. Ethics and information security are also addressed. The book covers the main topics required for training a data analyst.
In marketing’s changing landscape, it is essential to prioritize customers effectively to boost engagement and get the most out of investments. This research delves into how cutting-edge data capture and networking technologies can elevate customer prioritization tactics. Through a dataset analysis, the study combines the Analytic Hierarchy Process with the Term Frequency-Inverse Document Frequency approach to assess and rank customers according to various factors. The Analytic Hierarchy Process breaks decision-making tasks into manageable hierarchical structures, whereas the Term Frequency-Inverse Document Frequency technique examines text data to pinpoint important customer characteristics. Using both methods together allows an evaluation of customer value and behavior trends. The results demonstrate that these approaches substantially boost the precision and effectiveness of customer prioritization compared with existing practices. The findings also point out elements that influence customer interaction and show how detailed data examination can reveal previously unidentified customer groups. The study stresses the significance of data gathering and networking tools in guiding marketing choices. By embracing these methods, firms can improve their marketing strategies, reach the right audiences, and in turn boost customer satisfaction, leading to long-term growth. The study contributes a structure for prioritizing customers and shows how blending analytical methods can transform marketing tactics in a data-focused setting.
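The combination described above can be sketched in a few lines. The following is a minimal, illustrative example rather than the paper's exact pipeline: the customer notes, criteria, and pairwise judgments are hypothetical, with TF-IDF supplying one criterion of a small AHP model whose weights come from the principal eigenvector of a comparison matrix.

```python
# Minimal sketch: TF-IDF signal from hypothetical customer notes feeds one
# criterion of a small AHP model; all data and judgments below are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical customer interaction notes (one document per customer).
notes = [
    "frequent repeat purchases, requested premium support plan",
    "one small order, complained about delivery delay",
    "large enterprise contract, interested in analytics add-on",
]

# Criterion 1: text-derived engagement signal = mean TF-IDF weight per customer.
tfidf = TfidfVectorizer().fit_transform(notes)
text_signal = np.asarray(tfidf.mean(axis=1)).ravel()

# Criteria 2-3: hypothetical structured metrics (revenue, recency), min-max scaled.
revenue = np.array([12_000, 500, 80_000], dtype=float)
recency = np.array([0.9, 0.3, 0.7])          # higher = more recent activity

def scale(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

scores = np.column_stack([scale(text_signal), scale(revenue), recency])

# AHP: pairwise comparison matrix over the three criteria (Saaty scale, hypothetical).
A = np.array([
    [1.0, 1/3, 1/2],   # text signal vs. (text, revenue, recency)
    [3.0, 1.0, 2.0],   # revenue judged most important in this example
    [2.0, 1/2, 1.0],
])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()                         # principal-eigenvector criterion weights

priority = scores @ weights                   # higher = prioritize this customer first
print("criterion weights:", np.round(weights, 3))
print("customer priority ranking:", np.argsort(-priority))
```

In a real deployment the pairwise matrix would come from expert judgments and a consistency ratio check, and the text signal would be replaced by richer TF-IDF-derived features.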
Industrial revolutions have continually redefined how production systems sense, decide, and act. However, much of the literature remains concentrated on Industry 4.0, offering limited insight into the Decision Support Systems required for the emerging paradigms of Industries 5.0, 6.0, and 7.0. This survey traces the progressive evolution of decision intelligence across these stages, examining both computational foundations and socio-ethical dimensions. In Industry 4.0, decision-making is guided by rule-based automation and data-driven analytics. Industry 5.0 introduces human-centric frameworks that emphasize explainability, fairness, and collaborative intelligence. Industry 6.0 integrates biological, cognitive, and computational feedback, demanding systems that adapt to neural and physiological signals. Looking ahead, Industry 7.0 envisions self-organizing, anticipatory ecosystems where Natural Organic Artificial Intelligence systems (NOAI-systems) enable self-sustaining decision-making in autonomous systems aligned with environmental and societal dynamics. The survey identifies enabling paradigms such as machine learning, explainable AI, quantum optimization, neuromorphic computing, and bio-neural interfaces. It explores the risks emerging from diminishing human oversight, including transparency, cognitive safety, and value alignment. A maturity model and comparative matrix are presented to illustrate the shift in decision models, human roles, system adaptability, and industrial contexts. Ultimately, this study emphasizes that the future of decision support is not merely a technological challenge but a systemic transformation. Advancing toward resilient and ethically aligned industrial ecosystems requires cross-disciplinary collaboration spanning computer science, engineering, ethics, neuroscience, sustainability, and public policy.
Artificial Intelligence (AI) for the personalization of teaching emerges as an innovative solution to meet contemporary educational demands. The choice of this topic is justified by the growing need for pedagogical methods that adapt to the particularities of each student, promoting more effective and meaningful learning. The main objective of the study is to investigate the effects of AI-mediated personalization of teaching on students' academic performance. The methodology consists of a bibliographic review of studies and data from educational platforms that implement AI technologies for personalization. The main results indicate that personalizing teaching through AI leads to a significant improvement in academic performance, since it allows students to learn at their own pace and according to their learning styles. The most relevant conclusions point out that integrating AI into the educational environment not only facilitates the adaptation of content but also increases student engagement, promoting more active and autonomous learning. The research highlights the importance of investing in AI technologies as a strategy to transform education and meet students' individual needs.
Purpose
This paper aims to identify the critical success factors for technology adoption in procurement and to develop a strategic roadmap to facilitate this process.
Design/methodology/approach
The research employed a three-phase methodology combining a systematic literature review using the probabilistic composition of preferences, field research using the critical incident technique with procurement experts, and Grounded Theory for data analysis. The resulting roadmap was compared with established innovation adoption theories.
Findings
Twelve critical success factors were identified, with structured process optimization emerging as the foundation for successful technology adoption in procurement. The strategic roadmap integrates these factors in a sequential implementation process. The Diffusion of Innovation Theory, Institutional Theory and Actor-Network Theory provide theoretical support for different stages of the adoption process.
Research limitations/implications
The study focused on the procurement area with a sample of ten experienced professionals, potentially limiting generalizability. The literature review primarily utilized the Scopus database, which may have excluded some relevant research.
Practical implications
The strategic roadmap offers procurement professionals a structured approach to technology adoption with specific, actionable implementation steps for each phase. It can improve success rates in the adoption of new technologies within the procurement areas of organizations, leading to increased productivity.
Originality/value
This paper uniquely integrates practitioner experience with theoretical frameworks to create a process-first, rather than technology-first, approach to procurement digitalization, providing both temporal sequencing of success factors and empirically derived implementation guidance.
Anusha Joodala | International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences
With the growing movement towards cloud-based services, it is essential for companies to comply with strict regulatory standards like SOC 2, HIPAA, and GDPR to secure data, privacy, and trust. In this paper, we discuss a cloud-native approach that utilizes AWS microservices to help customers deal with the intricate compliance mandates of these regulations. The paper explores how adopting microservices in the cloud can produce a system that is secure, scalable, and auditable. In particular, it describes how AWS services including AWS Identity and Access Management (IAM), Amazon S3, AWS Lambda, and Amazon CloudWatch help customers meet the demands of SOC 2, HIPAA, and GDPR. We then walk through multiple architectural patterns and practices for automating compliance processes, enforcing security standards, and enabling ongoing monitoring so that an organization remains in a state of compliance. The findings affirm the significance of cloud-native infrastructure for compliance, allowing organizations to scale more efficiently with enhanced security.
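As a concrete illustration of this compliance-as-code idea, the sketch below applies two controls commonly mapped to SOC 2, HIPAA, and GDPR requirements: default encryption and public-access blocking on an S3 bucket, plus a retention policy on a CloudWatch Logs group. It is a minimal example, not the paper's reference architecture; the bucket and log-group names are hypothetical, and the AWS calls assume appropriate credentials and that the resources already exist.

```python
# Minimal compliance-as-code sketch using boto3 (names below are hypothetical).
import boto3

BUCKET = "example-compliance-data-bucket"        # hypothetical bucket
LOG_GROUP = "/aws/lambda/compliance-audit"       # hypothetical log group

s3 = boto3.client("s3")

# Require server-side encryption (SSE-KMS) for every object written to the bucket.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Block all forms of public access to the bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Retain audit logs for a fixed period to support evidence collection.
boto3.client("logs").put_retention_policy(
    logGroupName=LOG_GROUP, retentionInDays=365
)
```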
Sweta Tiwari | International Journal of Data Science and Machine Learning
Ensuring that financial technology solutions are effective, safe, and compliant with regulations is necessary in the changing technology field. The research introduces a new Quality Assurance framework that ensures FinTech systems follow strict rules, process transactions instantly, and remain as secure as possible. Using up-to-date automated testing, optimization techniques, and CI/CD practices, the approach boosts the system’s reliability, scalability, and responsiveness. Research shows that using this approach improves defect detection, speeds up development, and reduces risks, setting a new high standard for QA in FinTech. This study provides useful information for both experts and academics working on improving software quality and system dependability in high-stakes finance.
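The kind of automated checks such a framework relies on can be sketched as ordinary pytest tests run in a CI/CD pipeline. The example below is hypothetical (the process_payment function and its rules are invented for illustration, and the latency budget is arbitrary); it only shows the pattern of pairing a correctness test with a performance budget.

```python
# Hypothetical QA sketch: a correctness test plus a latency-budget test,
# intended to run under pytest inside a CI/CD pipeline.
import time
from decimal import Decimal

def process_payment(amount: Decimal, balance: Decimal) -> Decimal:
    """Debit a payment; reject non-positive amounts and overdrafts."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_rejects_overdraft():
    # Overdrafts must be refused with a descriptive error.
    try:
        process_payment(Decimal("150.00"), Decimal("100.00"))
    except ValueError as exc:
        assert "insufficient" in str(exc)
    else:
        raise AssertionError("overdraft was not rejected")

def test_latency_budget():
    # Near-real-time processing: assert a generous 50 ms budget for one
    # in-memory transaction (the threshold is illustrative only).
    start = time.perf_counter()
    assert process_payment(Decimal("10.00"), Decimal("100.00")) == Decimal("90.00")
    assert time.perf_counter() - start < 0.05
```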
Data workflows are an important component of modern analytical systems, enabling structured data extraction, transformation, integration, and delivery across diverse applications. Despite their importance, these workflows are often developed using ad hoc approaches, leading to scalability and maintenance challenges. This paper proposes a structured, three-level methodology (conceptual, logical, and physical) for modeling data workflows using Business Process Model and Notation (BPMN). A custom BPMN metamodel is introduced, along with a tool built on BPMN.io, that enforces modeling constraints and supports translation from high-level workflow designs to executable implementations. Logical models are further enriched through blueprint definitions, specified in a formal, implementation-agnostic JSON schema. The methodology is validated through a case study, demonstrating its applicability across ETL and machine learning domains, promoting clarity, reuse, and automation in data pipeline development.
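To make the blueprint idea concrete, the sketch below validates a small, implementation-agnostic task description against a JSON Schema before it would be translated to an executable step. The schema and blueprint are hypothetical stand-ins, not the paper's actual metamodel, and the jsonschema package is assumed to be installed.

```python
# Hypothetical blueprint validation sketch (not the paper's metamodel).
from jsonschema import validate   # pip install jsonschema

BLUEPRINT_SCHEMA = {
    "type": "object",
    "required": ["task", "inputs", "outputs", "operation"],
    "properties": {
        "task": {"type": "string"},
        "inputs": {"type": "array", "items": {"type": "string"}, "minItems": 1},
        "outputs": {"type": "array", "items": {"type": "string"}, "minItems": 1},
        "operation": {"enum": ["extract", "transform", "load", "train", "score"]},
        "parameters": {"type": "object"},
    },
}

# One logical-level workflow step, described independently of any engine.
blueprint = {
    "task": "clean_customer_records",
    "inputs": ["raw.customers"],
    "outputs": ["staging.customers_clean"],
    "operation": "transform",
    "parameters": {"drop_duplicates": True},
}

validate(instance=blueprint, schema=BLUEPRINT_SCHEMA)   # raises if invalid
print(f"blueprint '{blueprint['task']}' conforms to the schema")
```

A physical-level generator could then map each validated blueprint to, for example, an Airflow task or a scikit-learn pipeline step, which is the kind of translation the methodology describes.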
Asha Singh, Divya Beri | International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences
The integration of Artificial Intelligence (AI) into investment management is transforming the industry by providing advanced tools and algorithms that enhance the efficiency and effectiveness of investment strategies. This white paper explores the significant impact of AI on investment management, emphasizing its role in enhancing data analysis, processing big data, and recognizing patterns and anomalies. AI's ability to analyze vast amounts of market data in real time, leverage machine learning, and perform predictive analytics helps investment managers make more informed decisions. By integrating data from various sources, such as financial reports, news articles, and social media, AI-powered systems offer a comprehensive view of the market, facilitating improved decision-making. This paper examines the benefits, challenges, and future prospects of AI in the investment management sector.
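One of the patterns the paper mentions, recognizing anomalies in market data, can be illustrated with a short unsupervised sketch. The example below is not taken from the white paper: the return series is synthetic and the model choice (an isolation forest) is simply one common way to flag outlying observations.

```python
# Illustrative anomaly-detection sketch on a synthetic daily-return series.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=500)     # synthetic daily returns
returns[100] = -0.12                              # injected shock ("anomaly")

X = returns.reshape(-1, 1)
labels = IsolationForest(contamination=0.01, random_state=0).fit_predict(X)

anomalous_days = np.where(labels == -1)[0]        # -1 marks flagged observations
print("days flagged as anomalous:", anomalous_days)
```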
The rapid development and deployment of artificial intelligence (AI) technologies have sparked intense public interest and debate. While these innovations promise to revolutionise various aspects of human life, it is crucial to understand the complex emotional responses they elicit from potential adopters and users. Such findings can offer crucial guidance for stakeholders involved in the development, implementation, and governance of AI technologies like OpenAI’s ChatGPT, a large language model (LLM) that garnered significant attention upon its release, enabling more informed decision-making regarding potential challenges and opportunities. While previous studies have employed data-driven approaches towards investigating public reactions to emerging technologies, they often relied on sentiment polarity analysis, which categorises responses as positive or negative. However, this binary approach fails to capture the nuanced emotional landscape surrounding technological adoption. This paper overcomes this limitation by presenting a comprehensive analysis for investigating the emotional landscape surrounding technology adoption, using the launch of ChatGPT as a case study. In particular, a large corpus of social media texts containing references to ChatGPT was compiled. Text mining techniques were applied to extract emotions, capturing a more nuanced and multifaceted representation of public reactions. This approach allows the identification of specific emotions such as excitement, fear, surprise, and frustration, providing deeper insights into user acceptance, integration, and potential adoption of the technology. By analysing this emotional landscape, we aim to provide a more comprehensive understanding of the factors influencing ChatGPT’s reception and potential long-term impact. Furthermore, we employ topic modelling to identify and extract the common themes discussed across the dataset. This additional layer of analysis allows us to understand the specific aspects of ChatGPT driving different emotional responses. By linking emotions to particular topics, we gain a more contextual understanding of public reaction, which can inform decision-making processes in the development, deployment, and regulation of AI technologies.
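The two analysis layers the abstract describes, emotion extraction beyond polarity and topic modelling, can be sketched as follows. This is a toy illustration rather than the study's actual pipeline: the posts and the tiny emotion lexicon are hypothetical (real work typically uses resources such as the NRC emotion lexicon), and the topic model is a plain scikit-learn LDA.

```python
# Toy sketch: lexicon-based emotion counting plus LDA topic modelling.
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "ChatGPT is amazing, so excited to use it for coding help",
    "Worried ChatGPT will replace jobs, this is scary",
    "Surprised how well ChatGPT writes essays, but frustrated by wrong answers",
]

# Tiny hypothetical emotion lexicon mapping cue words to emotion labels.
lexicon = {"excited": "excitement", "amazing": "excitement",
           "worried": "fear", "scary": "fear",
           "surprised": "surprise", "frustrated": "frustration"}

emotions = Counter(
    lexicon[w] for p in posts for w in (t.strip(".,!?") for t in p.lower().split())
    if w in lexicon
)
print("emotion counts:", dict(emotions))

# Topic modelling: bag-of-words features followed by a two-topic LDA.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, comp in enumerate(lda.components_):
    top = [terms[j] for j in comp.argsort()[-4:][::-1]]
    print(f"topic {i}: {top}")
```

Linking the per-post emotion counts to the dominant topic of each post is then a simple join, which is the step that yields the emotion-by-theme view the paper argues for.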