Knowability as Continuity: A Topological Account of Informational Dependence

Abstract

We study knowable informational dependence between empirical questions, modeled as continuous functional dependence between variables in a topological setting. We also investigate epistemic independence in topological terms and show that it is compatible with functional (but non-continuous) dependence. We then proceed to study a stronger notion of knowability based on uniformly continuous dependence. On the technical logical side, we determine the complete logics of languages that combine general functional dependence, continuous dependence, and uniformly continuous dependence.

Summary

This paper introduces a sophisticated framework for understanding and modeling informational dependence and knowability in empirical settings, moving beyond traditional logical approaches that often assume idealized, sharp values. It achieves this by leveraging the mathematical tools of general topology and metric spaces.

The central insight is to interpret empirical “variables” (such as physical quantities like position or velocity) as functions mapping states of the world to topological spaces, where open sets represent observable, imprecise measurements or “approximations.” Similarly, “questions” are formalized as topologies on the state space, with open neighborhoods representing partial answers.

The paper’s significance lies in bridging the gap between abstract logical notions of dependence and the realities of empirical inquiry, where observations are inherently imprecise. It proposes that knowability can be understood as continuity. Specifically:
* Exact dependence (as studied in prior work like Logic of Functional Dependence, LFD) means that knowing the precise value of one variable (X) completely determines the precise value of another (Y).
* Continuous dependence (referred to as “epistemic dependence”) is introduced as a more realistic notion. If Y depends continuously on X, it means that an observer can come to know the value of Y to any desired accuracy, provided they obtain sufficiently accurate approximations of X. This is formalized as the global continuity of the function mapping X-values to Y-values in a topological setting. A locally continuous version captures “knowable epistemic dependence,” allowing for this inference within a specific neighborhood.
* Uniformly continuous dependence (referred to as “strong epistemic dependence”) represents an even stronger form of knowability, particularly relevant in metric spaces. It captures the idea of “epistemic know-how,” where the observer knows in advance precisely how accurate their measurement of X needs to be to achieve a desired accuracy for Y, uniformly across the domain.
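
The contrast between the last two notions can be made concrete with the textbook example f(x) = 1/x on (0, 1), which is continuous but not uniformly continuous, versus the Lipschitz map g(x) = 2x (a sketch under these assumptions; the helper names are hypothetical, not the paper's formalism):

```python
def delta_for(eps, x0):
    """For f(x) = 1/x near x0 > 0: a delta that works *at x0* but shrinks as
    x0 approaches 0. If |x - x0| < delta then |1/x - 1/x0| < eps."""
    return min(x0 / 2, eps * x0 ** 2 / 2)

def uniform_delta(eps):
    """For g(x) = 2x: one delta works across the whole domain (Lipschitz),
    so the required measurement accuracy is known in advance."""
    return eps / 2

eps = 0.1
# Epistemic dependence: the needed accuracy for X depends on where you are.
assert delta_for(eps, 0.5) > delta_for(eps, 0.01)
# Strong epistemic dependence ("know-how"): one accuracy suffices everywhere.
d = uniform_delta(eps)
for x0 in (0.001, 0.5, 0.999):
    assert abs(2 * (x0 + 0.9 * d) - 2 * x0) < eps
```

For f, an observer near 0 must already know roughly where they are before they can know how accurately to measure; for g, the accuracy can be prescribed once and for all, which is the "know-how" reading of uniform continuity.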

Key innovations include:
1. Topological Semantics for Dependence: Formalizing empirical variables and questions as topological maps and spaces, respectively, allowing for the representation of imprecise observations and approximations.
2. Graded Notions of Knowability: Introducing and formally distinguishing between mere functional dependence, continuous dependence (which implies information flow and knowability), and uniformly continuous dependence (which captures prescriptive “know-how”).
3. Analysis of Independence: Defining “topological independence,” where observations of X yield no new information about Y. Crucially, the paper demonstrates the counterintuitive finding that functional dependence and topological independence can coexist: Y might be completely determined by X, yet no approximate measurement of X can reveal anything new about Y.
4. Logical Systems: The development of two new logical systems, LCD (Logic of Continuous Dependence) and LUD (Logic of Uniform Dependence). These logics are shown to be sound, complete, and decidable for their respective classes of models (topological dependence models for LCD, and metric/pseudo-locally Lipschitz models for LUD), providing rigorous tools for reasoning about these new epistemic concepts.
5. Epistemic Opacity of Point-Continuity: The paper highlights that mere point-continuity (a very local form of dependence) does not guarantee knowability in a robust sense, deeming it “epistemically opaque” due to its extreme sensitivity to small errors in observation.
6. “Epistemic Bootstrapping”: The observation that in “favorable” environments (like locally compact spaces, which include Euclidean spaces), epistemic dependence automatically “bootstraps” to locally strong (uniformly continuous) dependence, simplifying the path from scientific understanding to practical know-how.
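
The coexistence of functional dependence and topological independence in item 3 can be illustrated with the classic Dirichlet-style example (a standard illustration, not necessarily the paper's own): let Y = 1 if X is rational and Y = 0 otherwise. Encoding a real as q + r·√2 with q, r rational makes rationality checkable in the sketch:

```python
import math
from fractions import Fraction

def Y(q, r):
    """Dependent variable for the point x = q + r*sqrt(2) (q, r rational):
    Y = 1 iff x is rational, which holds exactly when r == 0."""
    return 1 if r == 0 else 0

# Y is a function of X, yet any open interval (a, b) of X-observations
# contains points with Y = 1 and points with Y = 0, so no imprecise
# measurement of X carries any information about Y.
a, b = 1.41, 1.42
q = Fraction(1415, 1000)           # a rational point inside (a, b): Y = 1
assert a < float(q) < b and Y(q, 0) == 1
r = Fraction(1, 2 ** 20)           # nudge by an irrational amount: Y = 0
assert a < float(q) + float(r) * math.sqrt(2) < b and Y(q, r) == 0
```

Since every open set of X-values mixes both Y-values, the function determining Y from X is discontinuous everywhere, which is exactly the dependence-without-knowability phenomenon.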

The main prior ingredients upon which this work builds are:
* Dependence Logic (DL) and Logic of Functional Dependence (LFD): These prior logical systems established the formal study of exact, set-theoretic functional dependence between variables and questions, providing the foundational concepts that this paper then extends topologically.
* General Topology: Concepts such as topological spaces, open sets, neighborhoods, continuity, interior operators, and product spaces are fundamental for defining the empirical setting and notions of knowability.
* Metric Space Theory: Crucial for formalizing the stronger notion of uniform continuity and the concept of “know-how,” which require notions of distance and error.
* Modal Logic: The logical framework used for axiomatization, including S4 modal axioms for knowledge/knowability, and standard techniques for proving completeness and decidability (e.g., canonical models, p-morphisms).
* Epistemic Logic: Provides the philosophical and logical context for reasoning about knowledge, belief, and information, informing the epistemic interpretations of the mathematical concepts.
* Situation Theory: An older theory that viewed information flow through “constraints” between situations, providing a philosophical precursor to the idea of dependence as an information-carrying relation.

In recent work, Stalnaker proposes a logical framework in which belief is realized as a weakened form of knowledge. Building on Stalnaker's core insights, and using frameworks developed by Bjorndahl and Baltag et al., we employ topological tools to refine and, we argue, improve on this analysis. The structure of topological subset spaces allows for a natural distinction between what is known and (roughly speaking) what is knowable; we argue that the foundational axioms of Stalnaker's system rely intuitively on both of these notions. More precisely, we argue that the plausibility of the principles Stalnaker proposes relating knowledge and belief relies on a subtle equivocation between an "evidence-in-hand" conception of knowledge and a weaker "evidence-out-there" notion of what could come to be known. Our analysis leads to a trimodal logic of knowledge, knowability, and belief interpreted in topological subset spaces in which belief is definable in terms of knowledge and knowability. We provide a sound and complete axiomatization for this logic as well as its uni-modal belief fragment. We then consider weaker logics that preserve suitable translations of Stalnaker's postulates, yet do not allow for any reduction of belief. We propose novel topological semantics for these irreducible notions of belief, generalizing our previous semantics, and provide sound and complete axiomatizations for the corresponding logics.
Topological models of empirical and formal inquiry are increasingly prevalent. They have emerged in such diverse fields as domain theory [1, 16], formal learning theory [18], epistemology and philosophy of science [10, 15, 8, 9, 2], statistics [6, 7] and modal logic [17, 4]. In those applications, open sets are typically interpreted as hypotheses deductively verifiable by true propositional information that rules out relevant possibilities. However, in statistical data analysis, one routinely receives random samples logically compatible with every statistical hypothesis. We bridge the gap between propositional and statistical data by solving for the unique topology on probability measures in which the open sets are exactly the statistically verifiable hypotheses. Furthermore, we extend that result to a topological characterization of learnability in the limit from statistical data.
Common knowledge and only knowing capture two intuitive and natural notions that have proven to be useful in a variety of settings, for example to reason about coordination or agreement between agents, or to analyse the knowledge of knowledge-based agents. While these two epistemic operators have been extensively studied in isolation, the approaches made to encode their complex interplay failed to capture some essential properties of only knowing. We propose a novel solution by defining a notion of $\mu$-biworld for countable ordinals $\mu$, which approximates not only the worlds that an agent deems possible, but also those deemed impossible. This approach allows us to define a multi-agent epistemic logic with common knowledge and only knowing operators, and a three-valued model semantics for it. Moreover, we show that we only really need biworlds of depth at most $\omega^2+1$. Based on this observation, we define a Kripke semantics on a canonical Kripke structure and show that this semantics coincides with the model semantics. Finally, we discuss issues arising when combining negative introspection or truthfulness with only knowing and show how positive introspection can be integrated into our logic.
We introduce the concepts of dependence and independence in a very general framework. We use a concept of rank to study dependence and independence. By means of the rank we identify (total) dependence with inability to create more diversity, and (total) independence with the presence of maximum diversity. We show that our theory of dependence and independence covers a variety of dependence concepts, for example the seemingly unrelated concepts of linear dependence in algebra and dependence of variables in logic.
We introduce an atomic formula intuitively saying that given variables are independent from given other variables if a third set of variables is kept constant. We contrast this with dependence logic. We show that our independence atom gives rise to a natural logic capable of formalizing basic intuitions about independence and dependence.
We give an overview of some developments in dependence and independence logic. This is a tiny selection, intended for a newcomer, from a rapidly growing literature on the topic. Furthermore, we discuss conditional independence atoms and we prove that conditional and non-conditional independence logic are equivalent. Finally, we briefly discuss an application of our logics to belief representation.
We study the learning power of iterated belief revision methods. Successful learning is understood as convergence to correct, i.e., true, beliefs. We focus on the issue of universality: whether or not a particular belief revision method is able to learn everything that in principle is learnable. We provide a general framework for interpreting belief revision policies as learning methods. We focus on three popular cases: conditioning, lexicographic revision, and minimal revision. Our main result is that conditioning and lexicographic revision can drive a universal learning mechanism, provided that the observations include only and all true data, and provided that a non-standard, i.e., non-well-founded prior plausibility relation is allowed. We show that a standard, i.e., well-founded belief revision setting is in general too narrow to guarantee universality of any learning method based on belief revision. We also show that minimal revision is not universal. Finally, we consider situations in which observational errors (false observations) may occur. Given a fairness condition, which says that only finitely many errors occur, and that every error is eventually corrected, we show that lexicographic revision is still universal in this setting, while the other two methods are not.
This book develops a view of logic as a theory of information-driven agency and intelligent interaction between many agents - with conversation, argumentation and games as guiding examples. It provides one uniform account of dynamic logics for acts of inference, observation, questions and communication, that can handle both update of knowledge and revision of beliefs. It then extends the dynamic style of analysis to include changing preferences and goals, temporal processes, group action and strategic interaction in games. Throughout, the book develops a mathematical theory unifying all these systems, and positioning them at the interface of logic, philosophy, computer science and game theory. A series of further chapters explores repercussions of the 'dynamic stance' for these areas, as well as cognitive science.
We propose a number of powerful dynamic-epistemic logics for multi-agent information sharing and acts of publicly or privately accessing other agents’ information databases. The static base of our logics is obtained by adding to standard epistemic logic comparative epistemic assertions for groups or individuals, as well as a common distributed knowledge operator (that combines features of both common knowledge and distributed knowledge). On the dynamic side, we introduce actions by which epistemic superiority can be acquired: “sharing all one knows” (by e.g. giving access to one’s information database to all or some of the other agents), as well as more complex informational events, such as hacking. We completely axiomatize several such logics and prove their decidability.
This paper presents a simple decidable logic of functional dependence LFD, based on an extension of classical propositional logic with dependence atoms plus dependence quantifiers treated as modalities, within the setting of generalized assignment semantics for first order logic. The expressive strength, complete proof calculus and meta-properties of LFD are explored. Various language extensions are presented as well, up to undecidable modal-style logics for independence and dynamic logics of changing dependence models. Finally, more concrete settings for dependence are discussed: continuous dependence in topological models, linear dependence in vector spaces, and temporal dependence in dynamical systems and games.
This is an advanced 2001 textbook on modal logic, a field which caught the attention of computer scientists in the late 1970s. Researchers in areas ranging from economics to computational linguistics have since realised its worth. The book is for novices and for more experienced readers, with two distinct tracks clearly signposted at the start of each chapter. The development is mathematical; prior acquaintance with first-order logic and its semantics is assumed, and familiarity with the basic mathematical notions of set theory is required. The authors focus on the use of modal languages as tools to analyze the properties of relational structures, including their algorithmic and algebraic aspects, and applications to issues in logic and computer science such as completeness, computability and complexity are considered. Three appendices supply basic background information and numerous exercises are provided. Ideal for anyone wanting to learn modern modal logic.
Dependence is a common phenomenon, wherever one looks: ecological systems, astronomy, human history, stock markets - but what is the logic of dependence? This book is the first to carry out a systematic logical study of this important concept, giving on the way a precise mathematical treatment of Hintikka's independence friendly logic. Dependence logic adds the concept of dependence to first order logic. Here the syntax and semantics of dependence logic are studied, dependence logic is given an alternative game theoretic semantics, and results about its complexity are proven. This is a graduate textbook suitable for a special course in logic in mathematics, philosophy and computer science departments, and contains over 200 exercises, many of which have a full solution at the end of the book. It is also accessible to readers, with a basic knowledge of logic, interested in new phenomena in logic.
There are many proposed aims for scientific inquiry--to explain or predict events, to confirm or falsify hypotheses, or to find hypotheses that cohere with our other beliefs in some logical or probabilistic sense. This book is devoted to a different proposal--that the logical structure of the scientist’s method should guarantee eventual arrival at the truth given the scientist’s background assumptions. Interest in this methodological property, called “logical reliability,” stems from formal learning theory, which draws its insights not from the theory of probability, but from the theory of computability. Kelly first offers an accessible explanation of formal learning theory, then goes on to develop and explore a systematic framework in which various standard learning theoretic results can be seen as special cases of simpler and more general considerations. This approach answers such important questions as whether there are computable methods more reliable than Bayesian updating or Popper’s method of conjectures and refutations. Finally, Kelly clarifies the relationship between the resulting framework and other standard issues in the philosophy of science, such as probability, causation, and relativism. His work is a major contribution to the literature and will be essential reading for scientists, logicians, and philosophers.
Let us begin with the problems which gave rise to domain theory: Least fixpoints as meanings of recursive definitions. Recursive definitions of procedures, data structures and other computational entities abound in programming languages. Indeed, recursion is the basic effective mechanism for describing infinite computational behaviour in finite terms.