gidti's Issues

[2.1] logic introduction truth table concerns

In Definition 3, the truth table:

  • Using the top and bottom symbols for every entry makes everything blend together visually. I might try colouring them differently, or something.
  • I also don't love how the table lines up; it seems like the values should be centred, not left-justified.
  • I think there should be a vertical line (or similar) after the a and b columns and before a /\ b, to indicate that a and b are assigned values on the left while the values on the right follow from them.
  • It might even be a good idea to break this table up into several tables, one per connective. That would also let you say a bit more about what each connective actually means.

If somebody is delving into logic for the first time, I think this section is somewhat unclear. There are no plain-English definitions of the logical connectives, just a giant truth table that's fairly hard to read. The "in other words" section is a compressed truth table with no text to motivate why these connectives behave the way they do. Even something simple like "and means that both a and b have to be true in order for a /\ b to be true" would really help to solidify this in a beginner's mind.
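For example, a one-connective table for conjunction might look something like this (just a sketch of the sort of thing I mean, not a patch against the book's actual source):

\begin{tabular}{c c | c}
  $a$    & $b$    & $a \land b$ \\ \hline
  $\top$ & $\top$ & $\top$ \\
  $\top$ & $\bot$ & $\bot$ \\
  $\bot$ & $\top$ & $\bot$ \\
  $\bot$ & $\bot$ & $\bot$
\end{tabular}

Then the accompanying text can say in one sentence what the table shows: a /\ b is true exactly when both a and b are.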

[2.1] MU puzzle example issues

In your inference rules, I think it's somewhat unclear what is a variable and what isn't. Maybe you should mention that lowercase letters are variables. It's also unclear whether they stand for single characters or for whole strings!
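For example, take the doubling rule Mx -> Mxx (assuming that's roughly how it appears in the book):

  reading x as a string:    MIU matches Mx with x = IU, giving MIUIU
  reading x as a character: MIU doesn't match Mx at all

The two readings generate different sets of theorems, so it's worth spelling out.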

Hyphen encoding issue in acknowledgements

The hyphen in the paragraph on Neil Mitchell is getting mis-encoded somehow in the preview PDF (it comes out as a garbled character). I'm not sure what tools you're using to build the PDF, but if it's pandoc, I've had to fiddle with options to prevent "smart" conversions like this before. :) The fix might be as simple as using two hyphens instead of one.


[4.1.4] Iterative process -> tail recursion

I think the missing phrase in 4.1.4 is "tail recursion". The key idea is not that a tail recursive function is an iterative loop, but that a smart enough compiler can pretend that it is and evaluate it using constant function stack space.

The "tail call" in a function body is just the last/outermost function call, and if there's only one recursive call in the body of a function and it occurs in tail position then the definition is tail recursive.

3.1 logical connectives commutativity

Why do we mention that the connectives are commutative? That seems like something you could prove later instead. Also, introducing commutativity with oplus (a different operator) would be confusing to beginners, since it's not clear whether it stands for /\ or \/. If you have to have this comment, then I'd probably just include both a /\ b = b /\ a and a \/ b = b \/ a to keep things simple for the reader.

Chapter 2: More info for quantifiers

Section 2.1.2 on quantifiers is a little light on examples and details, particularly for the existential quantifier. This is an important and tricky topic that probably warrants some more fleshing out.

For instance, the negation rules for quantifiers are what they are for a good and intuitive reason.

But we also need to be careful with the quantifiers in intuitionistic logic: $\lnot \forall x . P \Rightarrow \exists x . \lnot P$ is not intuitionistically valid. This makes sense if we remember that intuitionistic truth means constructible: we can imagine having evidence refuting the claim that P holds for every x, while being unable to exhibit any particular x for which ~P(x) holds. Similarly, De Morgan's laws are intuitionistically complicated. The bottom of page 6 in this paper says more about this.
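For reference, these are the standard facts (not specific to the book). Intuitionistically we keep

$\exists x . \lnot P \Rightarrow \lnot \forall x . P$
$\lnot \exists x . P \Leftrightarrow \forall x . \lnot P$

but the remaining direction

$\lnot \forall x . P \Rightarrow \exists x . \lnot P$

is only valid classically.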

[3] ZFC and Peano introductions

Hmmm, I'm not sure how I feel about introducing ZFC and Peano numbers, and then immediately switching to a different section, and introducing several other definitions... Maybe bring these up where they are actually used? Otherwise the reader will think these are irrelevant details at the time, and then will have forgotten them when they are actually needed.

I'm also not sure that the ZFC definition is all that useful. It doesn't really add much, and I also wonder if bringing up the axiom of choice is anything more than a distraction. I understand that you're delving into the ZFC name, but I don't even know if it's worth bringing up "ZFC" as opposed to just saying "set theory", especially since you probably aren't actually going to rigorously introduce ZFC?

Chapter 2: table vs column proofs

(Putting on my novice hat)

Table proofs look a lot easier than column proofs. You just plug in truth values and simplify, whereas column proofs require planning ahead. Why would we bother with column proofs?

(Taking off novice hat)

Table proofs only work for propositional (zeroth order) theorems - the table method is essentially the decidability algorithm for zeroth order logic. That's why they are easy (if verbose) and always work, and why column proofs become necessary once we're using quantifiers.

It's probably worth pointing this out - there are two methods of proof, but they are not equivalent in power.
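To make the "decidability algorithm" point concrete, here's a toy Haskell sketch (mine, not the book's): the table method is just exhaustive checking, which only works because there are finitely many assignments.

isTautology :: Int -> ([Bool] -> Bool) -> Bool
isTautology n prop = all prop assignments
  where
    -- all 2^n truth assignments to n propositional variables
    assignments = sequence (replicate n [False, True])

-- isTautology 2 (\[a, b] -> (a && b) == (b && a))  ==>  True

Once quantifiers range over an infinite domain there is no analogous finite enumeration, which is exactly where column proofs come in.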

Chapter 3.4: note on note

Re. this footnote:

Dependent types are the reason why Idris can formally prove mathematical statements, compared to other programming languages.

Actually it's even cooler than this. Dependent types are not what make proofs possible. What's happening here is that there's a deep correspondence between type systems and logical formalisms: the Curry-Howard-Lambek correspondence.

  • Simply typed lambda calculus corresponds to propositional logic with natural deduction (zeroth order).
  • Dependently typed lambda calculus corresponds to predicate logic with natural deduction (first order).

What makes dependent types cool is that they allow proofs of statements involving first-order predicates. I think this helps explain why they are so much more powerful than simple types - they are a step up in power (literally) analogous to the step from propositional logic to predicate logic.
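A tiny Haskell illustration of the propositional level (my example): a total program is a proof of its own type, read as a formula.

-- Reading (->) as implication, this term proves (A => B) => A => B,
-- i.e. modus ponens, at the propositional level.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

A statement like "for every n, n + 0 = n" mentions the value n inside the proposition itself, and that's precisely what simple types can't express and dependent types can.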

Chapter 2: bound variables

In section 2.3 on substitution, since quantifiers were introduced a few sections prior, I think it makes sense to talk about the difference between free and bound variables. (Since the audience may not have seen this before.)
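Something as small as the standard example would do: in $\forall x . P(x, y)$, the variable $x$ is bound by the quantifier while $y$ is free, and substitution only makes sense for free variables (and has to avoid capturing them under a binder).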

[2] classical logic too fast?

The classical logic section seems to be going a bit too fast. For instance, Definition 4 seems very sparse, and I don't think there's enough groundwork yet to talk about a \/ b and ~b together implying a. That assumes the reader is already very familiar with the logical connective symbols, and I don't think the introduction currently gives the reader enough.

Before this point I think it would be good to have a few examples of using inference rules (like in the MI to MIIU example) to actually show things like this? Maybe even an exercise or two?

Probably makes sense to do something like that before starting first-order logic?
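For instance, even a three-line worked example would go a long way (my sketch):

  1. a \/ b    (premise)
  2. ~b        (premise)
  3. a         (from 1 and 2: b is ruled out, so the disjunction forces a)

Naming the pattern (disjunctive syllogism) would also give the reader something to hold onto.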

[4.1.1] Pattern matching

In section 4.1.1, the introduction of pattern matching is a bit out of place. Has Z been mentioned yet?

More generally, pattern matching really only makes sense in the context of algebraic types - it's literally a shorthand for the universal mappings on products and coproducts. I would wait to mention it until after talking about sum and product types in 4.1.2.

Edit- I see you do say more about pattern matching in 4.1.2. In that case I'd just leave it out of 4.1.1.
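For what it's worth, here's the connection spelled out in Haskell (my example): pattern matching on a sum type is case analysis, i.e. the copairing of a function out of each summand.

describe :: Either Int String -> String
describe (Left n)  = "a number: " ++ show n
describe (Right s) = "a string: " ++ s

-- The library function `either` packages the same universal property:
-- describe = either (\n -> "a number: " ++ show n) ("a string: " ++)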

Chapter 2: functions

Small comment about the discussion on functions.

Functions are defined in terms of binary relations, and they have a nice characterization that is dual to the concepts of "one-to-one" and "onto".

A function is a binary relation $f \subseteq A \times B$ that is also

  • total, meaning that for every $a \in A$ there exists a $b \in B$ such that $(a,b) \in f$ (compare to onto), and
  • well-defined, meaning that if $(a,b_1) \in f$ and $(a,b_2) \in f$ then $b_1 = b_2$ (compare to one-to-one).
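A quick pair of non-examples makes the two conditions vivid (my examples): with $A = \{1, 2\}$ and $B = \{x, y\}$, the relation $\{(1,x)\}$ fails totality (2 is never mapped), $\{(1,x),(1,y)\}$ fails well-definedness (1 is mapped to two values), and $\{(1,x),(2,x)\}$ satisfies both, so it is a function.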

Chapter 1: consistency and completeness

This is a pretty minor issue and fully hashing it out may not be worth the time, but I think there's a subtle detail that is glossed over in the definitions of "consistent" and "complete" for formal systems, having to do with the interpretation of the word "true".

In a formal system it doesn't really make sense to say that a given statement is "true", at least not in the way people usually think of that word. The best a formal system can do by itself is say that a statement is "provable" -- follows from the inference rules. We can think of this as syntactic truth. Then we can take a particular model of the formal system -- a thing where the formal system has a concrete interpretation -- and ask whether some statements are true about the model. If it's a bona fide model, then every provable statement should be true. But there may be true statements in that model that aren't provable in the formal system.

This gets sticky in the definition of incomplete:

"the system is incomplete if there are statements that are true but which cannot be proved to be true inside that system."

The problem is: what does it mean to say that a statement in a formal system is true? There are two concepts at play here: a statement can be (1) derivable from the inference rules (syntactically true), or it can be (2) true in every possible interpretation of the formal system (semantically true). Soundness (which is what's really doing the work behind "consistency" here) means that we can't prove a statement that turns out to be false in some model. Completeness means that if a statement is true in every model, then we can find a proof for it.

Fully fleshing out the distinction between the two would probably require talking about models, and that's its own can of worms.

But here's the thing - and this is a pedagogical opinion that you're free to disregard - I'm not sure that it's necessary to understand consistency and completeness in order to use formal logic to do useful work. In practice we use well studied formal systems where these issues have been fleshed out already and the dragons tamed; we choose consistency, and incompleteness doesn't really present any problems.

I haven't gotten into the later chapters, but if consistency/completeness is not used again, I would think about leaving it out.

[4.1.3] :let vs let

I can imagine a novice reader getting confused about the difference between let and :let. I've never used Idris before, but have used Haskell for a decade, so I'll take a stab at it.

let is a gentle extension of the usual lambda calculus syntax. The expression let x = y in e is equivalent to (\x . e) y. There are a couple of good reasons for introducing this syntax. First, it makes it easy to shorten a lambda expression by factoring out common subexpressions. Second, it has nice implications for the type inference algorithm: in Hindley-Milner style inference, let-bound names can be generalized to polymorphic types, so some let expressions typecheck even though the equivalent lambda form does not.

At any rate, because let is a first-class part of the lambda expression grammar, we can do weird things like

f (let x = p a in if q then x else g x)

Not to say that deeply nested lets are the best way to define something, but you can do it.
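Concretely, in Haskell syntax (my examples; the same ideas should carry over to Idris):

viaLet :: Int
viaLet = let x = 2 + 3 in x * x    -- 25

viaLambda :: Int
viaLambda = (\x -> x * x) (2 + 3)  -- also 25; the two forms are equivalent

-- The type inference point: this typechecks, because the let-bound
-- id' is generalized to a polymorphic type...
ok :: (Int, Bool)
ok = let id' = \x -> x in (id' 1, id' True)
-- ...while the "equivalent" lambda form is rejected, since a
-- lambda-bound id' has to stay monomorphic:
-- bad = (\id' -> (id' 1, id' True)) (\x -> x)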

If I understand correctly, :let is different: it's a REPL command used to bind new names at the top level of an interactive Idris session. I think the equivalent Haskell idiom would be let used in GHCi, which happens to be an instance of the special let form for monads (since the Haskell REPL is essentially an iteratively defined value in IO ()).

Chapter 3: sums are tagged unions

In chapter 3, definition 3 and the subsequent examples, I think it'd be helpful to note that for values of a sum type $A \mid B$ it is also possible to tell which summand the value came from. I've heard sums described as tagged unions for this reason -- each value has a tag that tells us which summand it belongs to. (In Haskell these are the constructors Left and Right.)

This is especially important for sums like $A \mid A$. If $a : A$ and we think of $a$ as having type $A \mid A$ as well, how do we tell which copy of $A$ $a$ came from?
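In Haskell notation the tags are the constructors, and they're exactly what answers this question (my example):

whichCopy :: Either a a -> String
whichCopy (Left _)  = "the first copy of A"
whichCopy (Right _) = "the second copy of A"

-- A bare value a : A carries no tag, so without Left/Right there'd be
-- no way to decide.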

Chapter 3: (another) note on note

Extremely minor quibble re. this footnote:

A Turing machine can express any algorithm.

This is only really true because we usually define an algorithm to be that which is computable by a Turing machine or an equivalent model. It makes sense to talk about algorithms that run on Turing machines with oracles attached, but these may not be computable by a Turing machine alone.

I'd express this more like:

"A Turing machine is a very simple abstract machine designed to capture our intuitive understanding of computation in the most general sense. Any formal system that can simulate a Turing machine, and thus also perform arbitrary computations, is called Turing complete. (Untyped) Lambda calculus is Turing complete."

Chapter 2: multicharacter identifiers

I don't have the PDF of chapter 2, so I'm not sure how this got typeset. But usually LaTeX like

apple \in \{ apple, banana \}

will treat e.g. "apple" as a list of variables being multiplied together, which is spaced differently than the italicized word "apple" would be. Personally I'd wrap identifiers like this in \mathrm{}.
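That is, something like:

\mathrm{apple} \in \{ \mathrm{apple}, \mathrm{banana} \}

(\text{...} from amsmath would also work and picks up the surrounding font.)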
