In Unix, everything is a file; in set theory, everything is a set.


This abstraction is very cool, and it lets us prove facts that would not be provable if sets could contain things that were not themselves sets and therefore could not be broken down further.

Here's a nice example:

I was asked to prove both of the following statements from Naive Set Theory:

1. $\bigcup \mathcal{P}(X) = X$
2. $X \subseteq \mathcal{P}(\bigcup X)$

When I try to prove something, especially in set theory, I try to visualize it in my head to form an intuitive proof, and then write the mathematical proof afterward. I was able to easily visualize the first statement: since the power set is just all of the subsets of a set, surely unioning all of the subsets of a set back together would give the set itself.
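Here's a minimal sanity check of that intuition in Python, using frozensets as stand-ins for sets (the powerset and big_union helpers are just illustrative names I made up, not anything from the book):

```python
# Sanity-checking the first statement, U P(X) = X, on a small example,
# with Python frozensets standing in for sets.
from itertools import chain, combinations

def powerset(s):
    """P(s): the set of all subsets of s, each as a frozenset."""
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

def big_union(family):
    """U family: the union of all the members of a family of sets."""
    return frozenset(chain.from_iterable(family))

X = frozenset({1, 2, 3})
assert big_union(powerset(X)) == X   # U P(X) = X holds for this X
```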

The second one was much more confusing and I couldn't visualize it. It's basically saying: take a set, union all of its elements together, then take the power set of that, and the original set will be a subset of that power set. I got lost at "union all of its elements together". How does this work? For example, how do we do $\bigcup \{1, 2, 3\}$? The elements are just numbers, not sets! … Oh wait, the beginning of the book said that in set theory all we have are sets. So they actually are sets!
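In fact, the book constructs the natural numbers as von Neumann ordinals: $0 = \emptyset$ and $n + 1 = n \cup \{n\}$, so every number literally is a set. Here's a sketch of that encoding in Python (frozensets stand in for sets again; the von_neumann helper is just an illustrative name):

```python
# Numbers are sets: the von Neumann construction defines 0 = {} and
# n + 1 = n u {n}, so e.g. 3 = {0, 1, 2}.
def von_neumann(n):
    """Encode the natural number n as a von Neumann ordinal."""
    num = frozenset()           # 0 is the empty set
    for _ in range(n):
        num = num | {num}       # successor: n + 1 = n u {n}
    return num

one, two, three = von_neumann(1), von_neumann(2), von_neumann(3)

# U{1, 2, 3} now makes sense: union the *sets* 1, 2, and 3 together.
assert one | two | three == three   # U{1, 2, 3} = 1 u 2 u 3 = 3
```

Each number contains exactly the smaller numbers, which is why the union of $\{1, 2, 3\}$ collapses back down to $3$.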

For each item in $X$, there are two possibilities of what it could be.

It could either be the empty set ($\emptyset$), or a set with items in it (literally everything else, including numbers, ordered pairs, functions, etc.). If it is the empty set, no elements will be added to the union, but the empty set is always an element of any power set, because it is a subset of all sets. So that element of $X$ is taken care of.

For the rest of the elements of $X$ that were not the empty set, their underlying elements will get merged together in $\bigcup X$. For example, let's say $X = \{\{1, 2\}, \{3\}\}$. Then $\bigcup X = \{1, 2, 3\}$. The key thing is that taking the power set of this will re-create all the sets that were originally in $X$, and more: $\mathcal{P}(\bigcup X) = \{\emptyset, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}, \{1, 2, 3\}\}$. This contains all of the sets that were originally in $X$, plus a few more. Once we have the everything-is-a-set mindset, it becomes visually intuitive why $X \subseteq \mathcal{P}(\bigcup X)$: we are re-building all of the sets that made up $X$, and more. A simple change of perspective turned an unintuitive problem into an intuitive one! I love this.
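If you want to see the whole thing checked mechanically, here's the example above in Python (a self-contained sketch with frozensets; powerset is the same illustrative helper as before, repeated so the snippet stands alone):

```python
# Checking the second statement, X subset of P(U X), on the example above.
from itertools import chain, combinations

def powerset(s):
    """P(s): the set of all subsets of s, each as a frozenset."""
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

X = frozenset({frozenset({1, 2}), frozenset({3})})
union_X = frozenset(chain.from_iterable(X))   # U X = {1, 2, 3}

assert union_X == frozenset({1, 2, 3})
assert frozenset() in powerset(union_X)  # the empty-set case from above
assert X <= powerset(union_X)            # X subset of P(U X): every piece rebuilt
assert len(powerset(union_X)) == 8       # ... plus a few more sets
```

The assert on $\emptyset$ is the empty-set case handled above; the subset assert is the whole claim.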


Why am I writing about this?

When I find a cool idea that challenges my previous thinking, I like to blog about it so that I can think even more clearly about it. I also like to do it because I think more people should know that everything is a set!


I actually dislike making everything a set; it feels similar to programming in Brainfuck. Sure, it's Turing complete, but the way programs are structured doesn't map cleanly to how a human would conceptualize them, and you need to write a lot of boilerplate for things you ordinarily don't even think about.

In practice, this leads to confusing notation like using "subset of" or "element of" for "less than", which makes it harder to see whether to think of something as a number or just a generic set. Here, since X is not "typed", it is hard to see that it should be thought of as a set of sets rather than just a generic set.
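Concretely, under the von Neumann encoding the order relation really is overloaded onto membership and inclusion. A small sketch (frozensets again, with the same illustrative von_neumann helper as in the post):

```python
# Under the von Neumann encoding (0 = {}, n + 1 = n u {n}), "less than"
# coincides with both "element of" and "proper subset of".
def von_neumann(n):
    num = frozenset()
    for _ in range(n):
        num = num | {num}
    return num

two, three = von_neumann(2), von_neumann(3)
assert two in three   # 2 is an element of 3, i.e. 2 < 3
assert two < three    # 2 is a proper subset of 3, also meaning 2 < 3
```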

Also you get weird pathological stuff like {1,2} being a topological space.

As a formalism for mathematics, I much prefer type theory which not only more cleanly maps onto how humans think, but also uses simpler axioms. It also has connections to logic, computer science, and category theory (and by extension many other fields of math).

After thinking more about it, I think I understand your thought process. I agree that set theory has lots of pathological stuff (the book even points out that some of its own constructions are quite pathological). However, it seems to me that, similar to how you should understand how a Turing-machine-like language such as Brainfuck works before doing advanced programming, you should understand how the foundations of math work before doing advanced math. This is the main reason why I am studying set theory (and will do real analysis soon enough).

 

Interestingly, there are also multiple formulations of computing, some more popular than others. The languages that I like to use are mainly based on Turing machines (C, Zig, etc.), but some others (JavaScript) are a mix and can be formulated like a lambda calculus if you really want. Yet it seems to me that since Turing machines are the most popular formulation of computing, we should learn them (even if we like to use lambda calculus later on). From what I've read, it seems that real analysis is also based upon sets. Actually, after looking this up, it seems you can do analysis in type theory, but that this is off the beaten path. So maybe I should learn set theory because it is the most popular, but keep in mind that type theory might be more elegant.

you should understand how the foundations of math work before doing advanced math

Is this merely something that set theorists believe, or do mathematicians who are experts in other branches of math actually find set theory useful for their work?

Can you in practice use set theory to discover something new in other branches of math, or does it merely provide a different (and less convenient) way to express things that were already discovered otherwise?

Many statements are undecidable in ZFC; what impact does that have on using set theory as a foundation for other branches of math?

Can you in practice use set theory to discover something new in other branches of math, or does it merely provide a different (and less convenient) way to express things that were already discovered otherwise?

The value of set theory as a foundation comes more from being a widely agreed-upon language that is also powerful enough to express pretty much everything mathematicians can think up, rather than from being a tool for making new discoveries. I think it's worth learning at least at a shallow level for this reason, if you want to learn advanced math.

I'd add that set theory gives you tools (like Zorn's lemma and transfinite induction) that aren't particularly exciting themselves, but you do need them to prove results elsewhere (e.g. Tychonoff's theorem, or that every vector space has a basis).

That said, there are some examples of results from formalizing math/logic being used to prove nontrivial things elsewhere. My favourite example is that the compactness theorem of first-order logic can be used to prove the Ax–Grothendieck theorem (which states that injective polynomial maps from $\mathbb{C}^n$ to $\mathbb{C}^n$ are bijective). I find this pretty cool.

I don't know (or think) that set theory is special. I just wanted to start at the very beginning. Another reason why I chose to start with set theory is that it's what Soares and TurnTrout did, and I just wanted somewhere to start (and I needed an easy-ish environment to level up in proofs). The foundations of math seemed like a good place. I plan to do linear algebra next because I think I need better linear algebra intuition for pretty much everything. It seems like it helps with a lot.

Tbh I don't think what specific foundation you use for math matters all that much, so I was mostly bikeshedding in that comment. 99% of things in math are agnostic to foundations. Even Zorn's lemma can be reformulated in type theory. Really, the only thing you'd need to learn the axioms of set theory for is mathematical logic. Also, I don't think type theory is inherently more advanced than set theory; it's just historical happenstance that we teach set theory.

You'll also be delighted to learn that I think that, in general, it's better to learn a "close to the metal" language like C at the same time as a lambda-calculus/type-theory-based language like Haskell, starting as early as possible :3 (although the best first language is something multiparadigm like Scheme or Pyret).

Thank you! When I finish learning set theory and linear algebra, I'll look into type theory. Do you have any recommendations for resources to learn it from?

I'm not metachirality, but I would recommend any introduction to simple type theory (basically: higher-order logic). If you already know first-order logic, it's a natural extension. This is a short one: https://www.sciencedirect.com/science/article/pii/S157086830700081X

Church's simple theory of types is pretty much the "basic" type theory; other type theories extend it in various ways. It's used, e.g., in computer science (some formal proof checkers), linguistics (formal semantics), and philosophy (metaphysics). It can also be used in mathematics as an alternative to ZFC set theory, which is axiomatized in first-order logic.