Some Notes on Temporal Justification Logic

Samuel Bucheli

arXiv:1510.07247v1 [cs.LO] 25 Oct 2015

October 25, 2015

Abstract

Justification logics are modal-like logics with the additional capability of recording the reason, or justification, for modalities in syntactic structures, called justification terms. Justification logics can be seen as explicit counterparts to modal logic. The behavior and interaction of agents in distributed systems is often modeled using logics of knowledge and time. In this paper, we sketch some preliminary ideas on how the modal knowledge part of such logics of knowledge and time could be replaced with an appropriate justification logic.

1 Introduction

Justification logics [AF12] are epistemic logics that feature explicit reasons for an agent's knowledge and belief. Originally, Artemov developed justification logic to provide a constructive semantics for intuitionistic logic. Later this type of logic was introduced into formal epistemology, where it provides a novel approach to several epistemic puzzles and problems of multi-agent systems [Art06, Art08, Art10, BKS11a, BKS11b, KS12, AK14, BKS14, KMOS15]. Instead of an implicit statement □ϕ, which stands for "the agent knows ϕ", justification logics include explicit statements of the form [t]ϕ, which mean "t justifies the agent's knowledge of ϕ".

A common approach to modeling distributed systems of interacting agents is to use logics of knowledge and time, with the interplay between these two modalities leading to interesting properties and questions [FHMV95, vdMW03, HvdMV04]. While knowledge in such systems has typically been modeled using the modal logic S5, it is a natural question to ask what happens when we model knowledge in such logics using a justification logic. In the following, we sketch some preliminary ideas towards such a logic and indicate further necessary work with appropriate questions.

After briefly introducing the syntax in Section 2, we propose an axiomatization in Section 3, including possible additional principles. The resulting logic is illustrated with the proof of some simple properties in Section 4. Finally, we introduce interpreted systems as the chosen semantics in Section 5 and we show soundness in Section 6, where the question of completeness is also briefly addressed. The paper concludes with additional questions and remarks regarding further directions of research in Section 7.

2 Syntax

In the following, let h be a fixed number of agents and, for each agent 1 ≤ i ≤ h, let Consti be a given set of proof constants and Vari a given set of proof variables; further, let Prop be a given set of atomic propositions. The set of justification terms Tmi for agent 1 ≤ i ≤ h is defined inductively by

    ti ::= ci | xi | !ti | ?ti | ti + ti | ti · ti ,

where ci ∈ Consti and xi ∈ Vari. The set of formulae Fml is inductively defined by

    ϕ ::= P | ⊥ | ϕ → ϕ | ○ϕ | □ϕ | ϕ U ϕ | [ti]i ϕ ,

where 1 ≤ i ≤ h, ti ∈ Tmi and P ∈ Prop. Here ○ is the next-time operator and □ the always operator. We use the following usual abbreviations:

    ¬ϕ := ϕ → ⊥ ,    ⊤ := ¬⊥ ,    ϕ ∨ ψ := ¬ϕ → ψ ,    ϕ ∧ ψ := ¬(¬ϕ ∨ ¬ψ) ,
    ϕ ↔ ψ := (ϕ → ψ) ∧ (ψ → ϕ) ,    ♦ϕ := ¬□¬ϕ .

Associativity and precedence of connectives, as well as the corresponding omission of brackets, are handled in the usual manner.
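To make the grammar concrete, the following Python sketch, which is not part of the paper, renders justification terms and formulae as abstract syntax trees; all class and field names are illustrative choices of ours, and nothing in the later sections depends on this representation.

    # A minimal sketch (not from the paper) of the syntax of Section 2 as Python
    # dataclasses; all class and field names are illustrative choices of ours.
    from dataclasses import dataclass
    from typing import Union

    # Justification terms for agent i.
    @dataclass
    class Const: name: str; agent: int          # c_i
    @dataclass
    class Var: name: str; agent: int            # x_i
    @dataclass
    class Bang: sub: "Term"                     # !t
    @dataclass
    class Query: sub: "Term"                    # ?t
    @dataclass
    class Sum: left: "Term"; right: "Term"      # t + s
    @dataclass
    class App: left: "Term"; right: "Term"      # t · s

    Term = Union[Const, Var, Bang, Query, Sum, App]

    # Formulae.
    @dataclass
    class Atom: name: str                       # P
    @dataclass
    class Bottom: pass                          # ⊥
    @dataclass
    class Imp: left: "Fml"; right: "Fml"        # ϕ → ψ
    @dataclass
    class Next: sub: "Fml"                      # ○ϕ
    @dataclass
    class Always: sub: "Fml"                    # □ϕ
    @dataclass
    class Until: left: "Fml"; right: "Fml"      # ϕ U ψ
    @dataclass
    class Just: term: Term; agent: int; sub: "Fml"   # [t]_i ϕ

    Fml = Union[Atom, Bottom, Imp, Next, Always, Until, Just]

    # Example: [x]_1 P → P, an instance of the reflexivity axiom of Section 3.3.
    example = Imp(Just(Var("x", 1), 1, Atom("P")), Atom("P"))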
3 Axioms

The axiom system for temporal justification logic consists of three parts, namely propositional logic, temporal logic, and justification logic.

3.1 Propositional Logic

For propositional logic, we take

0. all propositional tautologies    (Prop)

as axioms, together with the usual rule of modus ponens:

(MP) from ϕ and ϕ → ψ infer ψ.

3.2 Temporal Logic

For the temporal part, we use [Gor99] (see also Section A), with axioms

1. ○(ϕ → ψ) → (○ϕ → ○ψ)    (○-k)
2. □(ϕ → ψ) → (□ϕ → □ψ)    (□-k)
3. ○¬ϕ ↔ ¬○ϕ    (fun)
4. □ϕ → (ϕ ∧ ○□ϕ)    (mix)
5. □(ϕ → ○ϕ) → (ϕ → □ϕ)    (ind)
6. ϕ U ψ → ♦ψ    (U1)
7. ϕ U ψ ↔ ψ ∨ (ϕ ∧ ○(ϕ U ψ))    (U2)

and rules

(○-nec) from ϕ infer ○ϕ,
(□-nec) from ϕ infer □ϕ.

We use LTL to denote the Hilbert system given by the axioms and rules for temporal logic above, plus the axioms and rules for propositional logic.

3.3 Justification Logic

Finally, for the justification logic, we use the counterpart to the multi-agent version of the modal logic S5, i.e., J5h (cf. [Rub06]), with axioms

8. [t]i (ϕ → ψ) → ([s]i ϕ → [t · s]i ψ)    (application)
9. [t]i ϕ → [t + s]i ϕ,   [s]i ϕ → [t + s]i ϕ    (sum)
10. [t]i ϕ → ϕ    (reflexivity)
11. [t]i ϕ → [!t]i [t]i ϕ    (positive introspection)
12. ¬[t]i ϕ → [?t]i ¬[t]i ϕ    (negative introspection)

and rule

(const-nec) from [c]i ϕ ∈ CS infer [c]i ϕ,

where the constant specification CS is a set of formulae [c]i ϕ such that c ∈ Consti is a proof constant and ϕ is an axiom. We call a constant specification CS axiomatically appropriate if for every axiom ϕ and every agent i there is a constant c ∈ Consti such that [c]i ϕ ∈ CS.

For a given constant specification CS, we use J5LTLCS to denote the Hilbert system given by the axioms and rules for propositional logic, temporal logic, and justification logic as presented above.

Question 1. What can be done using the following variant of constant necessitation: from [c]i ϕ ∈ CS infer □[c]i ϕ?

3.4 Additional Principles

In J5LTL, epistemic and temporal properties do not interact. It is therefore a natural question to consider some of the following principles, which create a connection between time and knowledge. We assume the language for terms to be augmented in the obvious way (a possible rendering of this augmentation, in the notation of the sketch after Section 2, is given at the end of this subsection).

1. [t]i □ϕ → □[⇓t]i ϕ    (□-access)
2. □[t]i ϕ → [⇑t]i □ϕ    (generalize)
3. [t]i □ϕ → [↓t]i ○ϕ    (○-access)
4. [t]i ○ϕ → ○[⇛t]i ϕ    (○-right)
5. ○[t]i ϕ → [⇚t]i ○ϕ    (○-left)

Remark.

1. This is very plausible: if you have evidence that something always is true, then at every point in time you should be able to access this information.
2. Using evidence, this seems more plausible than with plain knowledge, as one requires the evidence to be the same at every point in time.
3. This is similar to (□-access). One would expect it to be provable from (□-access) (see below).
4. This seems plausible: agents do not forget evidence once they have gathered it and can "take it with them".
5. This one seems less plausible, as it implies some form of premonition.

When writing (principles) ⊢CS ϕ we mean that ϕ is provable in J5LTLCS with the principles (principles) treated as additional axioms; in particular, the principles then count as axioms for the purposes of constant specifications and constant necessitation. If the constant specification CS is clear from the context or not relevant, we omit the corresponding subscript.

Question 2. What other principles connecting temporal and epistemic properties might be of interest? See also Section B.
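The "obvious" augmentation of the term language can be spelled out as follows. This continues our illustrative sketch from Section 2 and is not notation from the paper; in particular, the ASCII constructor names standing for ⇓, ⇑, ↓, ⇛ and ⇚ are hypothetical choices of ours.

    # Continuation of the illustrative sketch from Section 2 (not the paper's
    # notation): one unary constructor per new term operation. The names standing
    # for the symbols ⇓, ⇑, ↓, ⇛ and ⇚ are hypothetical choices of ours.
    from dataclasses import dataclass

    @dataclass
    class BoxDown: sub: "Term"     # ⇓t, used in (□-access)
    @dataclass
    class BoxUp: sub: "Term"       # ⇑t, used in (generalize)
    @dataclass
    class NextDown: sub: "Term"    # ↓t, used in (○-access)
    @dataclass
    class Fwd: sub: "Term"         # ⇛t, used in (○-right)
    @dataclass
    class Back: sub: "Term"        # ⇚t, used in (○-left)

    # The Term union of the earlier sketch would be extended by these five classes.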
4 Some Properties

In the following, we illustrate the logic by giving two simple (purely temporal) deductions, namely of □ϕ → ○ϕ and □ϕ → □□ϕ, and then show how the connecting principles from the previous section link knowledge and time. We start with these two typical deductions in LTL.

Lemma 1.
1. ⊢ □ϕ → ○ϕ
2. ⊢ □ϕ → □□ϕ

Proof.
1. From (mix) and propositional reasoning we get

    □ϕ → ϕ    (1)
    □ϕ → ○□ϕ    (2)

From (1) and (○-nec) we get ○(□ϕ → ϕ), which, in turn, using (○-k) and propositional reasoning gives ○□ϕ → ○ϕ. By propositional reasoning and using (2) we obtain the desired □ϕ → ○ϕ from this.

2. The following is a valid instance of (ind):

    □(□ϕ → ○□ϕ) → (□ϕ → □□ϕ) .

Using (2) from the first item, (□-nec), and modus ponens we immediately get the desired result.

Now we show how (○-access) can be proved using (□-access) and (○-left).

Lemma 2. For every agent i, formula ϕ, and term t there is a term s(t) such that

    (□-access), (○-left) ⊢ [t]i □ϕ → [s(t)]i ○ϕ .

Proof. Using propositional logic, we can combine the two given principles

    [t]i □ϕ → □[⇓t]i ϕ ,
    ○[⇓t]i ϕ → [⇚⇓t]i ○ϕ ,

and the following instance of the principle proved in the first item of the previous lemma

    □[⇓t]i ϕ → ○[⇓t]i ϕ

in order to obtain [t]i □ϕ → [⇚⇓t]i ○ϕ, and we are done.

Question 3. Can we prove (○-access) from (□-access) without using (○-left)?

In contrast to the previous deductions, in the following we require our constant specifications to be axiomatically appropriate.

Lemma 3. Let CS be an axiomatically appropriate constant specification. For every agent i, formula ϕ, and term t there is a term s(t) such that

    (generalize) ⊢CS [t]i □ϕ → [s(t)]i □□ϕ .

Proof. From (const-nec) we get

    [c1]i (□(□ϕ → ○□ϕ) → (□ϕ → □□ϕ)) ,
    [c2]i ((□ϕ → ϕ ∧ ○□ϕ) → (□ϕ → ○□ϕ)) ,
    [c3]i (□ϕ → ϕ ∧ ○□ϕ) ,

as these are valid instances of (ind), a propositional tautology, and (mix), respectively. Using (application) and modus ponens, we obtain

    [c2 · c3]i (□ϕ → ○□ϕ) .

Using (□-nec), this gives

    □[c2 · c3]i (□ϕ → ○□ϕ) .

From (generalize) and modus ponens, we obtain

    [⇑(c2 · c3)]i □(□ϕ → ○□ϕ) .

Using (application) and modus ponens two more times, we finally get

    [t]i □ϕ → [(c1 · ⇑(c2 · c3)) · t]i □□ϕ .

Note that the previous proof relies on the fact that agents can "internalize" deductions. This so-called internalization theorem holds in general and is a typical and fundamental property of justification logics; the term construction behind it is made explicit in the sketch at the end of this section.

Theorem 1 (Internalization). Let CS be an axiomatically appropriate constant specification. If

    (generalize), (○-access) ⊢CS ϕ ,

then, for every 1 ≤ i ≤ h, there is a term ti such that

    (generalize), (○-access) ⊢CS [ti]i ϕ .

Proof. We proceed by induction on the derivation of ϕ.

In case ϕ is an axiom, the claim is immediate by (const-nec).

In case ϕ was derived by modus ponens from ψ → ϕ and ψ, then, by induction hypothesis, there are terms s1 and s2 such that [s1]i (ψ → ϕ) and [s2]i ψ are provable. Using (application) and modus ponens, we obtain [s1 · s2]i ϕ.

In case ϕ is [c]i ψ, derived using (const-nec), we can use (positive introspection) and modus ponens in order to obtain [!c]i [c]i ψ.

In case ϕ is □ψ, derived using (□-nec), then, by induction hypothesis, there is a term s such that [s]i ψ is provable. Now, we can use (□-nec) in order to obtain □[s]i ψ and then (generalize) and modus ponens to get [⇑s]i □ψ.

Finally, if ϕ is ○ψ, derived using (○-nec), then, as above, we obtain [⇑s]i □ψ and then use (○-access) and modus ponens to get [↓⇑s]i ○ψ.

Corollary 1. Let CS be an axiomatically appropriate constant specification. If

    (generalize), (□-access), (○-left) ⊢CS ϕ ,

then, for every 1 ≤ i ≤ h, there is a term ti such that

    (generalize), (□-access), (○-left) ⊢CS [ti]i ϕ .

Question 4. Is internalization provable without these additional principles?
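The proof of Theorem 1 is constructive: it describes how to read off the term ti from a given derivation. The following small Python sketch, which is ours and not from the paper, makes this recursion explicit for a fixed agent i; derivations are represented as plain trees, terms as strings, and "up" and "down" are hypothetical ASCII names for the term operations ⇑ and ↓.

    # A sketch (ours, not the paper's) of the term construction implicit in the
    # proof of Theorem 1. A derivation is a tree of rule applications; for a fixed
    # agent i, internalize() returns a term t such that [t]_i ϕ is provable, where
    # ϕ is the conclusion of the derivation. Terms are built as strings; "up" and
    # "down" are hypothetical ASCII names for the term operations ⇑ and ↓.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Step:
        rule: str                      # "axiom", "const-nec", "MP", "box-nec", "next-nec"
        premises: List["Step"] = field(default_factory=list)
        constant: str = ""             # the constant c used in "axiom"/"const-nec" steps

    def internalize(d: Step) -> str:
        if d.rule == "axiom":          # ϕ is an axiom: use (const-nec)
            return d.constant
        if d.rule == "const-nec":      # ϕ = [c]_i ψ: use (positive introspection)
            return "!" + d.constant
        if d.rule == "MP":             # ϕ from ψ → ϕ and ψ: use (application)
            s1, s2 = (internalize(p) for p in d.premises)
            return f"({s1}·{s2})"
        if d.rule == "box-nec":        # ϕ = □ψ: use (□-nec) and (generalize)
            return "up(" + internalize(d.premises[0]) + ")"
        if d.rule == "next-nec":       # ϕ = ○ψ: as for □ψ, then (○-access)
            return "down(up(" + internalize(d.premises[0]) + "))"
        raise ValueError(f"unknown rule: {d.rule}")

    # Example: deriving ○ψ from an axiom ψ (justified by constant c) via (○-nec)
    # internalizes to the term down(up(c)).
    print(internalize(Step("next-nec", [Step("axiom", constant="c")])))   # down(up(c))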
5 Semantics

Let L be some set of local states. A global state is an (h+1)-tuple ⟨le, l1, . . . , lh⟩ ∈ L^(h+1). A run r is a function from N to global states, i.e., r : N → L^(h+1). Given a run r and n ∈ N, the pair (r, n) is called a point. A system is a set R of runs. Let CS be a constant specification.

An interpreted system I for CS is a tuple (R, E, ν) where
• R is a system,
• E = (E1, . . . , Eh), where each Ei : R × N × Tmi → P(Fml) is a CS-admissible evidence function (defined below) for agent 1 ≤ i ≤ h,
• ν : R × N → P(Prop) is a valuation.

Given two points (r, n) and (r′, n′) with r(n) = ⟨le, l1, . . . , lh⟩ and r′(n′) = ⟨le′, l1′, . . . , lh′⟩, we define (r, n) ∼i (r′, n′) if and only if li = li′.

A CS-admissible evidence function Ei is a function satisfying the following conditions for all terms t, s ∈ Tmi and all points (r, n) and (r′, n′):

1. Ei(r, n, t) ⊆ Ei(r′, n′, t), whenever (r, n) ∼i (r′, n′)    (monotonicity)
2. if [c]i ϕ ∈ CS, then ϕ ∈ Ei(r, n, c)    (constant specification)
3. if ϕ → ψ ∈ Ei(r, n, t) and ϕ ∈ Ei(r, n, s), then ψ ∈ Ei(r, n, t · s)    (application)
4. Ei(r, n, s) ∪ Ei(r, n, t) ⊆ Ei(r, n, s + t)    (sum)
5. if ϕ ∈ Ei(r, n, t), then [t]i ϕ ∈ Ei(r, n, !t)    (positive introspection)
6. if ϕ ∉ Ei(r, n, t), then ¬[t]i ϕ ∈ Ei(r, n, ?t)    (negative introspection)

Given an interpreted system I = (R, E, ν) for CS, a run r ∈ R, and n ∈ N, we define validity of a formula ϕ in I at the point (r, n) inductively by

    (I, r, n) ⊨ P iff P ∈ ν(r, n) ,
    (I, r, n) ⊭ ⊥ ,
    (I, r, n) ⊨ ϕ → ψ iff (I, r, n) ⊭ ϕ or (I, r, n) ⊨ ψ ,
    (I, r, n) ⊨ ○ϕ iff (I, r, n + 1) ⊨ ϕ ,
    (I, r, n) ⊨ □ϕ iff (I, r, n + i) ⊨ ϕ for all i ≥ 0 ,
    (I, r, n) ⊨ ϕ U ψ iff there is some m ≥ 0 such that (I, r, n + m) ⊨ ψ and (I, r, n + i) ⊨ ϕ for all 0 ≤ i < m ,
    (I, r, n) ⊨ [t]i ϕ iff ϕ ∈ Ei(r, n, t) and (I, r′, n′) ⊨ ϕ for all r′ ∈ R and n′ ∈ N such that (r, n) ∼i (r′, n′) .

We call an interpreted system strong if it has the following additional property:

• if ϕ ∈ Ei(r, n, t), then (I, r, n) ⊨ [t]i ϕ    (strong evidence).

As usual, we write I ⊨ ϕ if (I, r, n) ⊨ ϕ for all points (r, n), and we write ⊨CS ϕ if I ⊨ ϕ for all strong interpreted systems I for CS. A sketch of how the temporal clauses of this definition can be evaluated over a single run is given at the end of this section.

Question 5. While in modal epistemic logic S5 is typically used, i.e., the accessibility relation is usually an equivalence relation (as ∼i is here), justification logic is more at home with the justification counterpart of S4. In order to achieve this, would it be possible to extend interpreted systems with additional, explicit accessibility relations Ri ⊆ L × L that are transitive and reflexive? In particular, this would allow dropping the strong evidence requirement.
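As a reading aid for the clauses above, the following Python sketch, which is ours and not part of the paper, evaluates the propositional and temporal clauses over a single ultimately periodic run given as a "lasso" (a finite prefix followed by a loop that repeats forever). The clause for [t]i ϕ is omitted, since it quantifies over all ∼i-related points of a whole system and over an evidence function; the class and function names are our own.

    # A minimal sketch (ours, not the paper's) of how the propositional and
    # temporal clauses of the validity definition can be evaluated over a single
    # ultimately periodic run, given as a "lasso": the positions of `prefix`
    # followed by `loop` repeated forever. The clause for [t]_i ϕ is omitted.
    from dataclasses import dataclass

    @dataclass
    class Atom: name: str
    @dataclass
    class Imp: left: object; right: object
    @dataclass
    class Next: sub: object            # ○ϕ
    @dataclass
    class Always: sub: object          # □ϕ
    @dataclass
    class Until: left: object; right: object

    class LassoRun:
        """r(n) is prefix[n] for n < len(prefix); afterwards loop repeats forever.
        States are represented only by their valuations (sets of atoms)."""
        def __init__(self, prefix, loop):
            assert loop, "the loop must be non-empty"
            self.states = prefix + loop
            self.back = len(prefix)                # successor of the last position

        def nxt(self, n):
            return n + 1 if n + 1 < len(self.states) else self.back

        def future(self, n):
            """The finitely many positions reachable from n, in path order."""
            path, m = [], n
            while m not in path:
                path.append(m)
                m = self.nxt(m)
            return path

    def holds(run, n, phi):
        if isinstance(phi, Atom):
            return phi.name in run.states[n]
        if isinstance(phi, Imp):
            return (not holds(run, n, phi.left)) or holds(run, n, phi.right)
        if isinstance(phi, Next):
            return holds(run, run.nxt(n), phi.sub)
        if isinstance(phi, Always):                # ϕ at every future position
            return all(holds(run, m, phi.sub) for m in run.future(n))
        if isinstance(phi, Until):                 # ψ somewhere, ϕ strictly before
            path = run.future(n)
            for k, m in enumerate(path):
                if holds(run, m, phi.right):
                    return all(holds(run, j, phi.left) for j in path[:k])
            return False
        raise ValueError(f"unsupported formula: {phi!r}")

    # Example: p holds from position 2 on; (p → p) U □p ("eventually always p")
    # is therefore true at position 0.
    run = LassoRun(prefix=[set(), {"q"}], loop=[{"p"}, {"p", "q"}])
    print(holds(run, 0, Until(Imp(Atom("p"), Atom("p")), Always(Atom("p")))))   # True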
6 Soundness

Theorem 2 (Soundness). Let CS be a constant specification. If ⊢CS ϕ, then ⊨CS ϕ.

Proof. We proceed by induction on the derivation of ϕ. Let I be a strong interpreted system for CS and (r, n) a point.

If ϕ is a propositional axiom or derived using modus ponens, the result follows as usual.

In the case of (○-k), assume (I, r, n) ⊨ ○(ϕ → ψ) and (I, r, n) ⊨ ○ϕ. Then we have (I, r, n + 1) ⊨ ϕ → ψ and (I, r, n + 1) ⊨ ϕ. Thus, (I, r, n + 1) ⊨ ψ and we are done.

In the case of (□-k), assume (I, r, n) ⊨ □(ϕ → ψ) and (I, r, n) ⊨ □ϕ. Then we have (I, r, n + i) ⊨ ϕ → ψ and (I, r, n + i) ⊨ ϕ for all i ≥ 0. Thus, (I, r, n + i) ⊨ ψ for all i ≥ 0 and we are done.

In the case of (fun), we have (I, r, n) ⊨ ○¬ϕ if and only if (I, r, n + 1) ⊨ ¬ϕ if and only if (I, r, n + 1) ⊭ ϕ if and only if (I, r, n) ⊭ ○ϕ if and only if (I, r, n) ⊨ ¬○ϕ.

In the case of (mix), assume (I, r, n) ⊨ □ϕ. Then we have (I, r, n + i) ⊨ ϕ for all i ≥ 0. In particular, we have (I, r, n) ⊨ ϕ. Furthermore, we also have (I, r, n + 1 + j) ⊨ ϕ for all j ≥ 0. Thus, (I, r, n + 1) ⊨ □ϕ, which means (I, r, n) ⊨ ○□ϕ and we are done.

In the case of (ind), assume (I, r, n) ⊨ □(ϕ → ○ϕ) and (I, r, n) ⊨ ϕ. Then we have (I, r, n + i) ⊨ ϕ → ○ϕ for all i ≥ 0. By induction on i, we can prove (I, r, n + i) ⊨ ϕ, using (I, r, n) ⊨ ϕ for the induction basis and (I, r, n + i) ⊨ ϕ → ○ϕ for all i ≥ 0 for the induction step. This yields the desired result.

In the case of (U1), assume (I, r, n) ⊨ ϕ U ψ. Thus, there is an m ≥ 0 such that (I, r, n + m) ⊨ ψ and (I, r, n + i) ⊨ ϕ for all 0 ≤ i < m. In particular, (I, r, n + m) ⊨ ψ and thus (I, r, n) ⊨ ♦ψ.

In the case of (U2), for the direction from left to right, assume (I, r, n) ⊨ ϕ U ψ. Thus, there is an m ≥ 0 such that (I, r, n + m) ⊨ ψ and (I, r, n + i) ⊨ ϕ for all 0 ≤ i < m. If m = 0, we have (I, r, n) ⊨ ψ and we are done. If m > 0, we have (I, r, n) ⊨ ϕ, (I, r, n + 1 + (m − 1)) ⊨ ψ, and (I, r, n + 1 + j) ⊨ ϕ for all 0 ≤ j < m − 1. Thus (I, r, n + 1) ⊨ ϕ U ψ and, in turn, (I, r, n) ⊨ ○(ϕ U ψ).

In the case of (U2), for the direction from right to left, assume (I, r, n) ⊨ ψ ∨ (ϕ ∧ ○(ϕ U ψ)). If (I, r, n) ⊨ ψ, the result follows immediately. If (I, r, n) ⊨ ϕ ∧ ○(ϕ U ψ), then there is an m ≥ 0 such that (I, r, n + 1 + m) ⊨ ψ and (I, r, n + 1 + i) ⊨ ϕ for all 0 ≤ i < m. Thus, (I, r, n + (m + 1)) ⊨ ψ and (I, r, n + i) ⊨ ϕ for all 0 ≤ i < m + 1 and we are done.

In the case of (○-nec), by induction hypothesis we have ⊨CS ϕ. In particular, this means (I, r, n + 1) ⊨ ϕ and we are done.

In the case of (□-nec), by induction hypothesis we have ⊨CS ϕ. In particular, this means (I, r, n + i) ⊨ ϕ for all i ≥ 0 and we are done.

In the case of (application), assume (I, r, n) ⊨ [t]i (ϕ → ψ) and (I, r, n) ⊨ [s]i ϕ. Thus, we have ϕ → ψ ∈ Ei(r, n, t) and ϕ ∈ Ei(r, n, s). This gives us ψ ∈ Ei(r, n, t · s) and the result follows from the strong evidence condition.

In the first case of (sum), assume (I, r, n) ⊨ [t]i ϕ. Thus, we have ϕ ∈ Ei(r, n, t) ⊆ Ei(r, n, t + s). This gives us (I, r, n) ⊨ [t + s]i ϕ by the strong evidence condition. The second case follows analogously.

In the case of (reflexivity), assume (I, r, n) ⊨ [t]i ϕ. Thus we have (I, r′, n′) ⊨ ϕ for all (r′, n′) with (r, n) ∼i (r′, n′). In particular, (r, n) ∼i (r, n), and therefore (I, r, n) ⊨ ϕ and we are done.

In the case of (positive introspection), assume (I, r, n) ⊨ [t]i ϕ. Thus we have ϕ ∈ Ei(r, n, t). From the closure conditions on evidence functions we get [t]i ϕ ∈ Ei(r, n, !t). The strong evidence condition then gives us the desired (I, r, n) ⊨ [!t]i [t]i ϕ.

In the case of (negative introspection), assume (I, r, n) ⊨ ¬[t]i ϕ. By the strong evidence condition, ϕ ∉ Ei(r, n, t). Thus, ¬[t]i ϕ ∈ Ei(r, n, ?t). Now, strong evidence again gives us (I, r, n) ⊨ [?t]i ¬[t]i ϕ.

Finally, the case of (const-nec) is immediate by the corresponding closure condition on evidence functions and strong evidence.

Lemma 4.
1. (□-access) is sound for interpreted systems I satisfying

    if □ϕ ∈ Ei(r, n, t), then ϕ ∈ Ei(r, n + k, ⇓t) for all k ≥ 0 ,

for all points (r, n), agents i, formulae ϕ, and terms t.

2. (generalize) is sound for interpreted systems I satisfying

    if ϕ ∈ Ei(r, n + k, t) for all k ≥ 0, then □ϕ ∈ Ei(r, n, ⇑t) ,

for all points (r, n), agents i, formulae ϕ, and terms t.

3. (○-access) is sound for interpreted systems I satisfying

    if □ϕ ∈ Ei(r, n, t), then ○ϕ ∈ Ei(r, n, ↓t) ,

for all points (r, n), agents i, formulae ϕ, and terms t.

4. (○-right) is sound for interpreted systems I satisfying

    if ○ϕ ∈ Ei(r, n, t), then ϕ ∈ Ei(r, n + 1, ⇛t) ,

for all points (r, n), agents i, formulae ϕ, and terms t.

5. (○-left) is sound for interpreted systems I satisfying

    if ϕ ∈ Ei(r, n + 1, t), then ○ϕ ∈ Ei(r, n, ⇚t) ,

for all points (r, n), agents i, formulae ϕ, and terms t.
Proof.
1. Assume (I, r, n) ⊨ [t]i □ϕ. Then □ϕ ∈ Ei(r, n, t). Thus, ϕ ∈ Ei(r, n + k, ⇓t) for all k ≥ 0. By strong evidence, we get (I, r, n + k) ⊨ [⇓t]i ϕ for all k ≥ 0 and the result follows.

2. Assume (I, r, n) ⊨ □[t]i ϕ. Then (I, r, n + k) ⊨ [t]i ϕ for all k ≥ 0. Thus, ϕ ∈ Ei(r, n + k, t) for all k ≥ 0. Therefore, □ϕ ∈ Ei(r, n, ⇑t) and we obtain the result by strong evidence.

3. Assume (I, r, n) ⊨ [t]i □ϕ. Then □ϕ ∈ Ei(r, n, t). Thus, ○ϕ ∈ Ei(r, n, ↓t) and we obtain the result by strong evidence.

4. Assume (I, r, n) ⊨ [t]i ○ϕ. Then ○ϕ ∈ Ei(r, n, t). Thus, ϕ ∈ Ei(r, n + 1, ⇛t). By strong evidence, (I, r, n + 1) ⊨ [⇛t]i ϕ, i.e., (I, r, n) ⊨ ○[⇛t]i ϕ.

5. Assume (I, r, n) ⊨ ○[t]i ϕ. Then ϕ ∈ Ei(r, n + 1, t). Thus, ○ϕ ∈ Ei(r, n, ⇚t) and we obtain the result by strong evidence.

Question 6. Do models satisfying these additional conditions exist at all?

Question 7. Are there any (less obvious) semantic conditions guaranteeing soundness for these principles?

Question 8. How can one show completeness? Adapting the proof from [HvdMV04], where interpreted systems are obtained from a (finite) canonical model construction, might be a feasible route, but the presence of □ might make it more cumbersome. Using infinite canonical models might require some form of model surgery, e.g., filtrations, as we are dealing with fixed points.

7 Conclusion

We have sketched an axiomatization for a justification logic of knowledge and time, discussed connecting principles between knowledge and time, illustrated the logic with sample derivations, and shown the internalization theorem and soundness. In the course of the presentation, we have raised questions indicating further directions of work. Most prominently, completeness proofs are currently missing. Besides the questions posed above, there are various further routes of research such a logic might open. We outline these questions in the following, in no particular order.

Question 9. Can one build Mkrtychev-style [Mkr97] interpreted systems? These would be models which do not require the accessibility relation ∼i, but rely solely on the evidence function for determining knowledge.

Question 10. How can the typical examples, e.g., protocols related to message transmission, be formalized in the logic presented above? See, e.g., [HZ92]. For example, one might consider principles such as
• [t]i ϕ → ○[sentij(t)]j ϕ, or
• [t]i ϕ → ♦[sentij(t)]j ϕ.

Question 11. What happens if we require operations on justification terms to take time, e.g.,

    [t]i ϕ → ○[!t]i [t]i ϕ ?

This might also relate to the logical omniscience problem [AK14].

Question 12. What does a justification logic for knowledge and branching time look like? See also [vdMW03].

Question 13. Can dynamic justification logic be translated into temporal justification logic akin to [vDvdHR13]? See also [BKS14].

Finally, one might also wonder whether the temporal modalities themselves can be justified. This question is independent of the presentation above; more information can be found in Section C.

Question 14. What would a justified temporal logic look like?
References

[AF12] Sergei N. Artemov and Melvin Fitting. Justification logic. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Fall 2012 edition, 2012.

[AK14] Sergei N. Artemov and Roman Kuznets. Logical omniscience as infeasibility. Annals of Pure and Applied Logic, 165(1):6–25, January 2014. Published online August 2013.

[Art06] Sergei N. Artemov. Justified common knowledge. Theoretical Computer Science, 357(1–3):4–22, July 2006.

[Art08] Sergei N. Artemov. The logic of justification. The Review of Symbolic Logic, 1(4):477–513, December 2008.

[Art10] Sergei N. Artemov. Tracking evidence. In Andreas Blass, Nachum Dershowitz, and Wolfgang Reisig, editors, Fields of Logic and Computation, Essays Dedicated to Yuri Gurevich on the Occasion of His 70th Birthday, volume 6300 of Lecture Notes in Computer Science, pages 61–74. Springer, 2010.

[BKS11a] Samuel Bucheli, Roman Kuznets, and Thomas Studer. Justifications for common knowledge. Journal of Applied Non-Classical Logics, 21(1):35–60, January–March 2011.

[BKS11b] Samuel Bucheli, Roman Kuznets, and Thomas Studer. Partial realization in dynamic justification logic. In Lev D. Beklemishev and Ruy de Queiroz, editors, Logic, Language, Information and Computation, 18th International Workshop, WoLLIC 2011, Philadelphia, PA, USA, May 18–20, 2011, Proceedings, volume 6642 of Lecture Notes in Artificial Intelligence, pages 35–51. Springer, 2011.

[BKS14] Samuel Bucheli, Roman Kuznets, and Thomas Studer. Realizing public announcements by justifications. Journal of Computer and System Sciences, 80(6):1046–1066, 2014.

[FHMV95] Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi. Reasoning about Knowledge. MIT Press, 1995.

[Gor99] Rajeev Goré. Tableau methods for modal and temporal logics. In Marcello D'Agostino, Dov M. Gabbay, Reiner Hähnle, and Joachim Posegga, editors, Handbook of Tableau Methods, pages 297–396. Springer Netherlands, 1999.

[HvdMV04] Joseph Y. Halpern, Ron van der Meyden, and Moshe Y. Vardi. Complete axiomatizations for reasoning about knowledge and time. SIAM Journal on Computing, 33(3):674–703, 2004.

[HZ92] Joseph Y. Halpern and Lenore D. Zuck. A little knowledge goes a long way: Knowledge-based derivations and correctness proofs for a family of protocols. Journal of the ACM, 39(3):449–478, July 1992.

[KMOS15] Ioannis Kokkinis, Petar Maksimović, Zoran Ognjanović, and Thomas Studer. First steps towards probabilistic justification logic. Logic Journal of the IGPL, 2015.

[KS12] Roman Kuznets and Thomas Studer. Justifications, ontology, and conservativity. In Thomas Bolander, Torben Braüner, Silvio Ghilardi, and Lawrence Moss, editors, Advances in Modal Logic, Volume 9, pages 437–458. College Publications, 2012.

[Mkr97] Alexey Mkrtychev. Models for the logic of proofs. In Sergei Adian and Anil Nerode, editors, Logical Foundations of Computer Science, 4th International Symposium, LFCS'97, Yaroslavl, Russia, July 6–12, 1997, Proceedings, volume 1234 of Lecture Notes in Computer Science, pages 266–275. Springer, 1997.

[Rub06] Natalia M. Rubtsova. Evidence-based knowledge for S5. In 2005 Summer Meeting of the Association for Symbolic Logic, Logic Colloquium '05, Athens, Greece, July 28–August 3, 2005, volume 12(2) of Bulletin of Symbolic Logic, pages 344–345. Association for Symbolic Logic, June 2006. Abstract.

[vdMW03] Ron van der Meyden and Ka-shu Wong. Complete axiomatizations for reasoning about knowledge and branching time. Studia Logica, 75(1):93–123, 2003.

[vDvdHR13] Hans van Ditmarsch, Wiebe van der Hoek, and Ji Ruan. Connecting dynamic epistemic and temporal epistemic logics. Logic Journal of the IGPL, 21(3):380–403, 2013.

A An Alternative Presentation of Temporal Logic

Linear temporal logic is often presented using an induction rule as follows, see, e.g., [HvdMV04], with axioms

• ○(ϕ → ψ) → (○ϕ → ○ψ)    (○-k)
• ○¬ϕ ↔ ¬○ϕ    (fun)
• ϕ U ψ ↔ ψ ∨ (ϕ ∧ ○(ϕ U ψ))    (U2)

and rules

(U-ind) from χ → ¬ψ ∧ ○χ infer χ → ¬(ϕ U ψ),
(○-nec) from ϕ infer ○ϕ.
Here, ♦ and □ are defined by

    ♦ϕ := ⊤ U ϕ ,    □ϕ := ¬♦¬ϕ .

We use LTLalt to denote the Hilbert system given by the alternative axioms and rules for temporal logic above, plus the axioms and rules for propositional logic.

A.1 Relationship between Temporal Logic and Alternative Presentation of Temporal Logic

We will now show that this presentation is equivalent to the one previously given. We start by showing some auxiliary results.

Lemma 5. The following rules are derivable in LTL:
1. from ϕ → ○ϕ infer ϕ → □ϕ ,
2. from χ → ϕ ∧ ○χ infer χ → □ϕ ,
3. from χ → □¬ψ infer χ → ¬(ϕ U ψ) .

Proof.
1. Assume ϕ → ○ϕ. By (□-nec) we obtain □(ϕ → ○ϕ). Using (ind) and (MP), we get the desired ϕ → □ϕ.

2. Assume χ → ϕ ∧ ○χ. By propositional reasoning, we get both

    χ → ○χ    (3)
    χ → ϕ    (4)

From (3) we get

    χ → □χ    (5)

by using item 1 above. From (4) we get □(χ → ϕ) by (□-nec) and from this we obtain

    □χ → □ϕ    (6)

by using (□-k) and (MP). Using propositional reasoning, we can combine (5) and (6) in order to obtain the desired χ → □ϕ.

3. Immediate by using propositional reasoning and the contrapositive of (U1), which is □¬ψ → ¬(ϕ U ψ).

Combining these results, we obtain the following:

Lemma 6. The rule (U-ind) is derivable in LTL.

Proof. Assume χ → ¬ψ ∧ ○χ. By Lemma 5, item 2, we get χ → □¬ψ. Using Lemma 5, item 3, we get χ → ¬(ϕ U ψ) and we are done.

For the other direction, we have:

Lemma 7. The following axioms and rules are derivable in LTLalt:
1. (mix)
2. (U1)
3. (□-nec)
4. (□-k)
5. (ind)

Proof. Remember that □ is a defined connective in LTLalt, i.e., □ϕ := ¬(⊤ U ¬ϕ).

1. The following is an instance of (U2):

    ¬ϕ ∨ (⊤ ∧ ○(⊤ U ¬ϕ)) → ⊤ U ¬ϕ .

Using propositional reasoning, this is equivalent to

    ¬ϕ ∨ ○(⊤ U ¬ϕ) → ⊤ U ¬ϕ .

Taking the contrapositive of this and using propositional reasoning and (fun), we obtain

    ¬(⊤ U ¬ϕ) → (ϕ ∧ ○¬(⊤ U ¬ϕ)) ,

which is the desired □ϕ → (ϕ ∧ ○□ϕ).

2. The following is an instance of (U2):

    ψ ∨ (⊤ ∧ ○(⊤ U ψ)) → ⊤ U ψ .

Taking the contrapositive, using propositional reasoning and (fun), we obtain

    ¬(⊤ U ψ) → ¬ψ ∧ ○¬(⊤ U ψ) .

Now we can use (U-ind) in order to obtain

    ¬(⊤ U ψ) → ¬(ϕ U ψ) ,

whose contrapositive

    ϕ U ψ → ⊤ U ψ

is the desired ϕ U ψ → ♦ψ.

3. Assume ϕ. By propositional reasoning and (○-nec) we get ⊤ → ¬¬ϕ ∧ ○⊤. Using (U-ind), we get

    ⊤ → ¬(⊤ U ¬ϕ) ,

which is equivalent to ¬(⊤ U ¬ϕ), which is the desired □ϕ.

4. By Lemma 7, item 1, the following instances of (mix) are provable:

    □ϕ → (ϕ ∧ ○□ϕ) ,
    □(ϕ → ψ) → ((ϕ → ψ) ∧ ○□(ϕ → ψ)) .

Using propositional reasoning to combine these, we obtain

    (□ϕ ∧ □(ϕ → ψ)) → (ϕ ∧ (ϕ → ψ) ∧ ○□ϕ ∧ ○□(ϕ → ψ)) .

Using propositional reasoning and (○-k), we get

    (□ϕ ∧ □(ϕ → ψ)) → (ψ ∧ ○(□ϕ ∧ □(ϕ → ψ))) .

Note that this has the form χ → (ψ ∧ ○χ), where χ = □ϕ ∧ □(ϕ → ψ). Now we can use (U-ind) in order to obtain

    χ → ¬(⊤ U ¬ψ) ,

which is

    (□ϕ ∧ □(ϕ → ψ)) → □ψ ,

which in turn is propositionally equivalent to the desired (□-k).

5. Using Lemma 7, item 1, as above and propositional reasoning we have

    ϕ ∧ □(ϕ → ○ϕ) → (ϕ ∧ (ϕ → ○ϕ) ∧ ○□(ϕ → ○ϕ)) ,

which, by using further propositional reasoning and (○-k), can be turned into

    ϕ ∧ □(ϕ → ○ϕ) → (ϕ ∧ ○(ϕ ∧ □(ϕ → ○ϕ))) .

This has the form χ → ϕ ∧ ○χ, where χ = ϕ ∧ □(ϕ → ○ϕ). Hence we can use (U-ind) in order to obtain

    χ → ¬(⊤ U ¬ϕ) ,

which is

    ϕ ∧ □(ϕ → ○ϕ) → □ϕ ,

which is trivially equivalent to the desired (ind).

Finally, putting everything together, we obtain the desired equivalence.

Theorem 3. LTL ⊢ ϕ if and only if LTLalt ⊢ ϕ.

Proof. Immediate by induction on the derivation and using Lemma 7 and Lemma 6.
B Some Connecting Principles in Temporal Modal Logic

The following definitions and facts are taken from [HvdMV04]. Here we adapt the language and use Ki ϕ for the modality "agent i knows ϕ" and its dual Li ϕ := ¬Ki ¬ϕ. We first need the following preliminary definitions.

Definition 1. For agent i and point (r, n), we define
1. the local state sequence LSSi(r, n) to be the sequence of local states of agent i in run r up to and including time n, but with consecutive repetitions omitted,
2. the future local state sequence FLSSi(r, n) to be the sequence of local states of agent i in run r starting from time n, but with consecutive repetitions omitted.

Using these definitions, we can define the following notions.

Definition 2.
1. A system has a unique initial state if r(0) = r′(0) for all runs r, r′ ∈ R.
2. A system is synchronous if for all agents i and all points (r, n) and (r′, n′), (r, n) ∼i (r′, n′) implies n = n′, i.e., agents know the (global) time.
3. Agent i has perfect recall in a system if for all points (r, n) and (r′, n′), (r, n) ∼i (r′, n′) implies LSSi(r, n) = LSSi(r′, n′), i.e., if the agent considers r′ possible, it must have considered it possible at all points in the past.
4. Agent i does not learn in a system if for all points (r, n) and (r′, n′), (r, n) ∼i (r′, n′) implies FLSSi(r, n) = FLSSi(r′, n′), i.e., if the agent considers r′ possible, it will do so at all points in the future.
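The two sequences of Definition 1 are easy to compute for concretely given finite initial segments of runs. The following small Python sketch is ours and not from [HvdMV04] or the paper; for FLSSi it only lists the finite part determined by the given segment, which suffices if the run is assumed constant afterwards.

    # A small sketch (ours) of the sequences of Definition 1: the local state
    # sequence of agent i up to time n, and the future local state sequence from
    # time n, in both cases with consecutive repetitions omitted. A run is given
    # here as a finite list of global states (tuples with the environment at
    # index 0 and agent i's local state at index i); for flss we assume the run
    # stays constant after the listed part, so the returned sequence is complete.

    def absorb(seq):
        """Drop consecutive repetitions, e.g. [a, a, b, a] becomes [a, b, a]."""
        out = []
        for x in seq:
            if not out or out[-1] != x:
                out.append(x)
        return out

    def lss(run, i, n):
        """LSS_i(r, n): local states of agent i up to and including time n."""
        return absorb([run[k][i] for k in range(n + 1)])

    def flss(run, i, n):
        """FLSS_i(r, n): local states of agent i from time n on (finite part)."""
        return absorb([run[k][i] for k in range(n, len(run))])

    # Example: agent 1's local state changes at time 2 only.
    run = [("e0", "a"), ("e1", "a"), ("e2", "b"), ("e3", "b")]
    print(lss(run, 1, 3))    # ['a', 'b']
    print(flss(run, 1, 1))   # ['a', 'b']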
Corresponding to these semantic notions, we have the following principles.

Definition 3.
(KT3) Ki ϕ ∧ ○(Ki ψ ∧ ¬Ki χ) → Li ((Ki ϕ) U ((Ki ψ) U ¬χ))    (pr)
(KT1) Ki □ϕ → □Ki ϕ    (notquitepr)
(KT2) Ki ○ϕ → ○Ki ϕ    (prsync)
(KT4) Ki ϕ U Ki ψ → Ki (Ki ϕ U Ki ψ)    (nl)
(KT5) ○Ki ϕ → Ki ○ϕ    (nlsync)
Ki ϕ ↔ K1 ϕ    (knowexch)

Finally, the following relationships hold between these principles and semantic notions:
• (pr) (strictly) implies (notquitepr),
• (pr) gives a sound and complete axiomatization for systems with perfect recall (with or without unique initial state),
• (prsync) gives a sound and complete axiomatization for synchronous systems with perfect recall (with or without unique initial state),
• (nl) gives a sound and complete axiomatization for systems with no learning (without unique initial state),
• (nlsync) gives a sound and complete axiomatization for synchronous systems with no learning without unique initial state,
• (pr) and (nl) give a sound and complete axiomatization for systems with perfect recall and no learning without unique initial state,
• (pr) and (nl) give a sound and complete axiomatization for single-agent (i.e., h = 1) systems with perfect recall and no learning with unique initial state,
• (prsync) and (nlsync) give a sound and complete axiomatization for synchronous systems with perfect recall and no learning without unique initial state,
• (prsync), (nlsync), and (knowexch) give a sound and complete axiomatization for synchronous systems with no learning and with unique initial state (with or without perfect recall),
• systems with no learning and with unique initial state with more than one agent (i.e., h ≥ 2) do not have a recursive axiomatic characterization, since the validity problem is co-r.e.-complete.

C A Justified Temporal Logic?

Looking at LTL, one might also be tempted to create a justified temporal logic, where we introduce justifications for the temporal modalities. This might look as follows.

C.1 Syntax

We have three different types of terms, corresponding to the three different temporal modalities:

    t○ ::= c○ | x○ | t○ · t○ | ahead(t□) | uhead(tU) ,
    t□ ::= c□ | x□ | t□ · t□ | aind(t□, t○) | atail(t□) ,
    tU ::= cU | xU | uappend(t○, tU) | utail(tU) .

Formulae are as usual, but with the temporal modalities replaced by terms:

    ϕ ::= p | ⊥ | ϕ → ϕ | [t○]○ ϕ | [t□]□ ϕ | [tU]U (ϕ, ϕ) .

C.2 Axiomatisation

0. all propositional tautologies
1. [t]○ (ϕ → ψ) → ([s]○ ϕ → [t · s]○ ψ)    (○-application)
2. [t]□ (ϕ → ψ) → ([s]□ ϕ → [t · s]□ ψ)    (□-application)
3. [t]○ ¬ϕ ↔ ¬[t]○ ϕ    (fun)
4. [t]□ ϕ → (ϕ ∧ [ahead(t)]○ [atail(t)]□ ϕ)    (□-mix)
5. [t]□ (ϕ → [s]○ ϕ) → (ϕ → [aind(t, s)]□ ϕ)    (□-ind)
6. [t]U (ϕ, ψ) → ¬[s]□ ¬ψ    (U)
7. ψ ∨ (ϕ ∧ [t1]○ ([t2]U (ϕ, ψ))) → [uappend(t1, t2)]U (ϕ, ψ)    (U-ind)
8. [t]U (ϕ, ψ) → ψ ∨ (ϕ ∧ [uhead(t)]○ ([utail(t)]U (ϕ, ψ)))    (U-mix)

and rules

(MP) from ϕ and ϕ → ψ infer ψ,
(○-nec) from [c]○ ϕ ∈ CS infer [c]○ ϕ,
(□-nec) from [c]□ ϕ ∈ CS infer [c]□ ϕ.

Roughly speaking, one could consider justifications for □ and U as lists of terms that are either generated by induction (in the case of □) or by appending elements to a given list (in the case of U). However, this is just a preliminary sketch and it is not clear what the actual semantics of such a system would be. Whether and in what sense such a system is meaningful has to be left for future work.