# College Math Teaching

## March 17, 2015

### Compact Spaces and the Tychonoff Theorem IV: conclusion

Filed under: topology — collegemathteaching @ 9:14 pm

We are finishing up a discussion of the Tychonoff Theorem: an arbitrary product of compact spaces is compact (in the product topology, of course). The genesis of this discussion comes from this David Wright article.

In the first post in this series, we gave an introduction to “compactness”.

In the second post, we gave a proof that the finite product of compact spaces is compact.

In the third post, we gave some equivalent definitions of compactness.

In particular, we showed that:

1. A space is compact if and only if the space has the following property: if $A \subset X$ is the union of a collection of open sets with no finite subcover, then $A$ is a proper subset of $X$; that is, $X - A \neq \emptyset$; and

2. A space is compact if and only if the space has the following property: every infinite subset $E$ has a perfect limit point. Note: a perfect limit point for a set $E$ is a point $x \in X$ such that, for every open $U$ with $x \in U$, $|U \cap E| = |E|$ (the intersection of every open neighborhood of a perfect limit point with $E$ has the same cardinality as $E$).

Note the following about these two facts: each of these facts promises the existence of a specific point rather than the existence/non-existence of a cover of a particular type. Fact 1 promises the existence of an excluded point, and fact 2 promises the existence of a perfect limit point.

When it comes to a point in an infinite product of topological spaces, constructing a point is really like constructing a sequence of points (in the case of countable products) or a net of points (in the case of uncountable products). That is, if one wants to construct a point in an infinite product of spaces, one can assume some well ordering of the index set used in the product, then construct the first coordinate of the point from the first factor space, the second coordinate from the second factor space, and so on.

We’ll use fact 1, the excluded point property, to prove Tychonoff’s Theorem.

Proof. Assume that $X = \Pi_{\alpha \in I} X_{\alpha}$ and that $I$ is well ordered. We start out by showing that the product of two compact spaces is compact, and use (transfinite) recursion to get the general result.

Let $\mathscr{O}$ be an infinite collection of open sets in $X_1 \times X_2$ with no finite subcover. First, we show that there is some $a \in X_1$ such that for each open set $U$ with $a \in U$, no finite subcollection of $\mathscr{O}$ covers $U \times X_2$. Now if there is some open $U \subset X_1$ where $U$ is disjoint from every $\pi_1 (O), O \in \mathscr{O}$, we are done with this step. So assume not; assume that every such $U$ meets the first factor of some $O \in \mathscr{O}$. Suppose no such point $a$ exists: then for each $x \in X_1$ there is some open $U_x \subset X_1$ with $x \in U_x$ and a finite number of elements of $\mathscr{O}$ that covers $U_x \times X_2$. Now since $X_1$ is compact, a finite number of the $U_x$ covers $X_1$, hence a finite subcollection of $\mathscr{O}$ covers ALL of $X_1 \times X_2$, a contradiction. Hence some point $a \in X_1$ exists such that no finite subcollection of $\mathscr{O}$ covers $U \times X_2$ for any open $U \subset X_1$ with $a \in U$.

Similarly, we can find $b \in X_2$ so that for all open $V \subset X_2$ with $b \in V$, no finite subcollection of $\mathscr{O}$ covers $U \times V$, where $U$ is a basic open set in $X_1$ that contains $a$. This shows that $(a,b) \notin \cup_{O \in \mathscr{O}} O$: if it were in the union, this single point would lie in some basic open set $U \times V$ contained in a single $O \in \mathscr{O}$, and that one set would be a finite subcollection of $\mathscr{O}$ covering $U \times V$.

Now given an arbitrary product with a well ordered index set $I$, we assume that there is some collection $\mathscr{O}$ of open sets that lacks a finite subcover and inductively define $a_{\gamma} \in X_{\gamma}$ so that, if $U$ is any basic open set containing $\Pi_{\alpha \leq \gamma} \{a_{\alpha} \} \times \Pi_{\alpha > \gamma} X_{\alpha}$, then no finite subcollection of $\mathscr{O}$ covers $U$. The point $(a_{\gamma})$ thus constructed lies in no element of $\mathscr{O}$.

Note: if you are wondering why this “works”, note that we assumed NOTHING about the compactness of the remaining product space factors $\Pi_{\alpha > \gamma} X_{\alpha}$.
And remember that we are using the product topology: an open set in this topology has the entire space as factors for all but a finite number of indices. So we only exploit the compactness of the leading factors.

### Compact Spaces and the Tychonoff Theorem III

Filed under: topology — collegemathteaching @ 2:49 am

We continue on our quest to prove the Tychonoff Theorem: an arbitrary product of compact spaces is compact. So far we have shown that this is true for the FINITE product of compact spaces.

It is our goal to do this by using elementary tools and avoiding things like nets (for example, Willard uses ultranets).

We will basically be adding background and commentary to David Wright’s excellent 1994 paper, which appeared in the Proceedings of the American Mathematical Society. We will use a bit of cardinal arithmetic and facts about ordinals at times.

Yes, we do need some background, but the background we are providing is necessary for the understanding of any mathematics that uses topology anyway.

Conditions which are equivalent to compactness

1. If $X \subset R^n$ in the usual topology, then $X$ is compact if and only if $X$ is closed and bounded.
Proof: Let $X$ be compact. Then $X$ is closed because the usual topology for $R^n$ is Hausdorff. $X$ is bounded as well: if it weren’t, then for each $x \in X$ write $d(x,0) = M_x$ and cover $X$ by $\cup_{x \in X} B_x(\frac{1}{M_x})$. This open cover has no finite subcover: any finite union of such balls is bounded, but the $M_x$ are unbounded.

Now let $X$ be closed and bounded. Then $X \subset \Pi^n_{i=1} [a,b]$ for some real $a, b$, which is compact by our “junior” Tychonoff Theorem. So $X$ is a closed subset of a compact set and therefore compact.

2. I’ll call this the excluded point condition: Let $U = \cup_{\alpha \in I} U_{\alpha}$, where each $U_{\alpha}$ is open. We say that $U$ lacks a finite subcover if there is no finite subcollection of the $U_{\alpha}$ that covers $U$. A topological space $X$ is said to have the excluded point condition if any subset $U$ which has an open cover which lacks a finite subcover must exclude at least one point of $X$; that is, any set which has an open cover with no finite subcover must be a proper subset of $X$.

Example of an open cover which lacks a finite subcover: consider $[0,1]$ as a subset of $R^1$ in the usual topology, and let $U = \cup^{\infty}_{n=3}(\frac{1}{n}, \frac{n-1}{n})$; here $\{0, 1 \}$ are the excluded points from this open cover.
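This example can be checked with exact rational arithmetic; here is a quick sketch (the helper name `in_U` is just for illustration):

```python
from fractions import Fraction

# The open sets U_n = (1/n, (n-1)/n) for n = 3, 4, ... cover (0,1):
# any x in (0,1) lies in U_n once n is large enough.
def in_U(n, x):
    return Fraction(1, n) < x < Fraction(n - 1, n)

x = Fraction(1, 1000)
assert any(in_U(n, x) for n in range(3, 3000))

# But any finite subcollection U_3, ..., U_N only covers (1/N, (N-1)/N)
# (the sets are nested), so points near 0 and 1 are missed,
# and the points 0 and 1 themselves are excluded from the whole union.
N = 100
missed = Fraction(1, N + 1)   # 1/(N+1) <= 1/N, so missed by U_3, ..., U_N
assert not any(in_U(n, missed) for n in range(3, N + 1))
```

Of course this only samples the behavior at rational points; the nestedness of the $U_n$ is what makes the finite-subcover failure obvious.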

Theorem: a space is compact if and only if it has the excluded point condition for open covers.
Proof. If $X$ is compact then any open cover of $X$ has a finite subcover, hence any subset of $X$ which has an open cover which lacks a finite subcover cannot be all of $X$.
Now assume that $X$ has the excluded point condition. Let $U$ be any open cover of $X$. Since $U$ covers all of $X$ (it excludes no point), $U$ cannot lack a finite subcover.

3. The finite intersection property condition: let $\mathscr{C}$ be any collection of closed sets. We say that $\mathscr{C}$ has the finite intersection property if the following holds: the intersection of any finite subcollection of elements of $\mathscr{C}$ is non-empty.

Example: in $R^1$, $\mathscr{C} = \{ [1 - \frac{1}{n}, 1+ \frac{1}{n} ], n \in \{1, 2, ...\} \}$ has the finite intersection property. On the other hand, $\{ [n, n+1], n \in \{ ..., -2, -1, 0, 1, 2, ... \} \}$ does not have this property, as there are finite subcollections of this set that have an empty intersection (for example, $[0,1] \cap [2,3] = \emptyset$).
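The two examples above can be spot-checked numerically; a minimal sketch (the helper `interval` is an illustrative name, not anything from the post):

```python
from fractions import Fraction
from itertools import combinations

# C_n = [1 - 1/n, 1 + 1/n]: closed intervals shrinking down around 1.
def interval(n):
    return (1 - Fraction(1, n), 1 + Fraction(1, n))

# Any finite subcollection has non-empty intersection: it contains 1.
for subset in combinations(range(1, 8), 3):
    lo = max(interval(n)[0] for n in subset)
    hi = min(interval(n)[1] for n in subset)
    assert lo <= 1 <= hi          # the intersection [lo, hi] contains 1

# In contrast, [0,1] and [2,3] from the second family are disjoint:
# the "intersection" [max(0,2), min(1,3)] = [2,1] is empty (hi < lo).
assert min(1, 3) < max(0, 2)
```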

Theorem: $X$ is compact if and only if the following holds: if $\mathscr{C}$ is a collection of closed sets with the finite intersection property, then an arbitrary intersection of elements of $\mathscr{C}$ is non-empty.

Proof: Let $X$ be compact. Let $\mathscr{C}$ be an infinite collection of closed sets with the finite intersection property. This means that no finite collection of the complements of these sets can cover $X$. Now $\cup_{C \in \mathscr{C}}(X - C) = X - \cap_{C \in \mathscr{C}} C$ is a union of open sets; if it covered $X$, compactness would produce a finite subcover, that is, a finite collection of the complements covering $X$, which we just ruled out. Hence $X - \cup_{C \in \mathscr{C}}(X - C)$ is non-empty, and therefore so is $\cap_{C \in \mathscr{C}} C$.

Now let the finite intersection condition hold for $X$. Let $\mathscr{U}$ be any open cover for $X$. This means that $X - \cup_{U \in \mathscr{U}} U = \cap_{U \in \mathscr{U}} (X - U)$ is empty. Hence the collection of closed sets $\{ X - U, U \in \mathscr{U} \}$ cannot have the finite intersection property, which means that there is some finite subcollection $F \subset \mathscr{U}$ where $\cap_{U \in F} (X - U)$ is empty, which means $\cup_{U \in F} U$ covers $X$.

4. Limit point compactness: part I. Theorem: a space $X$ is compact if and only if every infinite subset $E \subset X$ has a limit point.
Note: we can actually prove a bit more; that will be in part II. This is a “junior theorem” which can lead the beginner to understanding the “varsity theorem”.

Proof. Let $X$ be compact and let $E$ be an infinite subset of $X$. Consider the cover $\cup_{x \in X} U_x$, where $U_x$ is some open set containing $x$. If $E$ has no limit point, we can assume that the $U_x$ are chosen so that $|E \cap U_x|$ is finite for each $x$. Now a finite subcollection of the $U_x$ covers $X$, say $\cup_{i=1}^k U_{x_i}$, and $E = \cup_{i=1}^k (U_{x_i} \cap E)$. This is impossible, as each $|U_{x_i} \cap E|$ is finite but $E$ is infinite.

Now assume that $X$ is limit point compact, in that every infinite subset has a limit point. Let $\mathscr{U}$ be an open cover which has no finite subcover. We may assume that this open cover is efficient in that for each $U \in \mathscr{U}$, $U \not \subset \cup_{V \in \mathscr{U}, V \neq U} V$; that is, any $U$ in the open cover contains at least one point $x_U$ not contained in the union of the other open cover sets. Then $\{ x_U : U \in \mathscr{U} \}$ is an infinite set with no limit point, a contradiction.

5. Let $E$ be a set with cardinality $c$. We say that $x$ is a perfect limit point of $E$ if for all open sets $U_x$ containing $x$, $|U_x \cap E| = c$. Example: $[0,1]$ has every point as a perfect limit point (usual topology) as $[0,1]$ has the cardinality of the real numbers and if $U$ is open in $R^1$ and contains any point of $[0,1]$ then $U \cap [0,1]$ has the cardinality of the real numbers.

Now the stronger theorem is this: a topological space $X$ is compact if and only if every infinite subset $E$ has a perfect limit point.

Proof. First, assume that $X$ is compact. Let $E$ be an infinite subset with cardinality $c$. Cover $X$ by open sets $\cup_{x \in X} U_x$ where $x \in U_x$. Suppose that for all $U_x, |U_x \cap E| < c$. Now this open cover of $X$ has a finite subcover $U_1, U_2, ...U_k$ and so we have $E = \cup^k_{i=1} U_i \cap E$ and so $|E| \leq |U_1 \cap E| + |U_2 \cap E| + ...+|U_k \cap E|$ where each $|U_j \cap E| < c$. This is impossible because $c$ is an infinite cardinal (a limit cardinal) and it is impossible to reach a limit cardinal by a finite sum of strictly smaller cardinals.

If you are new to this and are a bit confused, start by assuming that $c$ is, say, the first countably infinite cardinal. ALL lesser cardinals are finite cardinals, and it is impossible for a finite sum of finite cardinals to add up to any infinite cardinal. Then, imagine $c$ being the first uncountable cardinal. One can not reach any uncountable cardinal by the finite sum of countable (or smaller) cardinals (the finite sum of countable cardinals is still countable). That is more or less what is going on here.

Now, suppose that every infinite set has a perfect limit point. Let $\cup_{\alpha \in I} U_{\alpha}$ be an open cover which has no finite subcover. We can assume that $I$ is an index set of smallest cardinality for which this is true, and that the cover is efficient: $U_{\beta} \not \subset \cup_{\alpha < \beta}U_{\alpha}$; that is, the open cover is built by adding open sets which contain at least one point not contained in the previously added open sets. Also, we put a well ordering on $I$ for which $|\{ \alpha \in I | \alpha < \beta \}| < |I|$ for each $\beta \in I$. If this confuses you a bit, think of a countable index set where every initial segment is finite, or of an index set of the first uncountable cardinality where every initial segment is countable.

So, for each $\beta$ let $x_{\beta} \in U_{\beta} - \cup_{\alpha < \beta} U_{\alpha}$. Now let $E = \{ x_{\alpha} : \alpha \in I \}$ and note $|E| = |I|$ by design.

Now if $x \in X$, there is some $\alpha \in I$ where $x \in U_{\alpha}$, but $E \cap U_{\alpha} \subset \{ x_{\beta} : \beta \leq \alpha \}$, so $|E \cap U_{\alpha}| < |I| = |E|$, since every initial segment of $I$ has smaller cardinality than $I$. Therefore $E$ has no perfect limit point, a contradiction.

Again, the person new to topology can run through this with $I$ first being the countable ordinal (and every previous ordinal having finite cardinality) or $I$ being the first uncountable ordinal with every previous ordinal having at most countable cardinality.

We now have the background to give a simple proof of the full strength Tychonoff Theorem, which we will do in the next post.

## March 16, 2015

### Compact Spaces and Tychonoff’s Theorem II

Filed under: advanced mathematics, topology — collegemathteaching @ 6:10 pm

Ok, now let’s prove the following: If $X, Y$ are compact spaces, then $X \times Y$ is compact (in the usual product topology). Note: this effectively proves that the finite product of compact spaces is compact. One might call this a “junior” Tychonoff Theorem.

Proof. We will prove this theorem a couple of times; the first proof is the more elementary but less elegant proof. It can NOT be easily extended to show that the arbitrary product of compact spaces is compact (which is the full Tychonoff Theorem).

We will show that an open covering of $X \times Y$ by basis elements of the form $U \times V$, $U$ open in $X$ and $V$ open in $Y$ has a finite subcover.

So let $\mathscr{U}$ be an open cover of $X \times Y$. Now fix $x_{\beta} \in X$ and consider the subset $x_{\beta} \times Y$. This subset is homeomorphic to $Y$ and is therefore compact; therefore there is a finite subcollection of $\mathscr{U}$ which covers $x_{\beta} \times Y$, say $\cup^{k_{\beta}}_{i=1} U_{\beta, i} \times V_{\beta, i}$. Note that each $U_{\beta, i}$ is an open set in $X$ which contains $x_{\beta}$, and there are only a finite number of these. Hence $\cap^{k_{\beta}}_{i=1} U_{\beta, i} = U_{\beta}$ is also an open set which contains $x_{\beta}$. Also note that $U_{\beta} \times Y \subset \cup^{k_{\beta}}_{i=1} U_{\beta, i} \times V_{\beta, i}$.

We can do this for each $x_{\beta} \in X$ and so obtain an open cover of $X$ by $\cup_{x_{\beta} \in X} U_{\beta}$, and because $X$ is compact, a finite subcollection of these covers $X$. Call these $U_1, U_2, U_3, \ldots, U_m$. For each one of these, we have $U_j \times Y \subset \cup^{k_j}_{i=1} U_{j, i} \times V_{j, i}$.

So, our finite subcover of $X \times Y$ is $\cup^m_{j=1}\cup^{k_j}_{i=1} U_{j, i} \times V_{j, i}$.

Now while this proof is elementary, it doesn’t extend to the arbitrary infinite product case.

So, to set up such an extension, we’ll give some “equivalent” definitions of compactness. Note: at some point, we’ll use some elementary cardinal arithmetic.

## March 15, 2015

### Compact spaces and Tychonoff’s Theorem I

Note to the reader: when I first started this post, I thought it would be a few easy paragraphs. The size began to swell, so I am taking this post in several bite sized parts. This is part I.

Pretty much everyone knows what a compact space is. But not everyone is up on equivalent definitions and on how to prove that the arbitrary product of compact spaces is compact.
The genesis of this blog post is David Wright’s Proceedings of the American Mathematical Society paper (1994) on Tychonoff’s Theorem.

Since I am also writing for my undergraduate topology class, I’ll keep things elementary where possible and perhaps put in more detail than a professional mathematician would have patience for.

I should start by saying why this topic is dear to me: my research area is knot theory; in particular I studied embeddings of the circle into the 3-dimensional sphere $S^3$, which can be thought of as the “compactification” of $R^3$; basically one starts with $R^3$ and then adds a point $\infty$ and declares that the neighborhoods of this new point will be generated by sets of the following form: $\{ (x,y,z) | x^2 + y^2 + z^2 > M^2 \}$ The reason we do this: we often study the complement of the embedded circle, and it is desirable to have a compact set as the complement.

I’ve also studied (in detail) certain classes of embeddings of the real line into non-compact manifolds; to make this study a bit more focused, I insist that such embeddings be “proper” in that the inverse image of a compact set be compact. Hence compactness comes up again, even when the objects of study are not compact.

So, what do we mean by “compact”?

Instead of just blurting out the definition and equivalent formulations, I’ll try to give some intuition. If we are talking about a subset of a metric space, a compact subset is one that is both closed and bounded. Now that is NOT the definition of compactness, though it is true that:

Given a set $X \subset R^n$, $X$ is compact if and only if $X$ is both closed (as a topological subset) and bounded (in that it fits within a sufficiently large closed ball). In $R^1$ compact subsets can be thought of as selected finite unions and arbitrary intersections of closed intervals. In the higher dimensions, think of the finite union and arbitrary intersections of things like closed balls.

Now it is true that if $f:X \rightarrow Y$ is continuous and $X$ is a compact topological space, then $f(X)$ is compact (either as a space, or in the subspace topology, if $f$ is not onto).

This leads to a big theorem of calculus: the Extreme Value Theorem: if $f:R^n \rightarrow R$ is continuous over a compact subset $D \subset R^n$ then $f$ attains both a maximum and a minimum value over $D$.

Now in calculus, we rarely use the word “compact” but instead say something about $D$ being a closed, bounded subset. In the case where $n = 1$ we usually say that $D =[a,b]$, a closed interval.

So, in terms of intuition, if one is thinking about subsets of $R^n$, one can think of a compact space as a domain on which any continuous real valued function always attains both a minimum and a maximum value.

Now for the definition
We need some terminology: a collection of open sets $U_{\beta}, \beta \in I$, is said to be an open cover of a space $X$ if $\cup_{\beta \in I } U_{\beta} = X$, and if $A \subset X$, a collection of open sets is said to be an open cover of $A$ if $A \subset \cup_{\beta \in I } U_{\beta}$. A finite subcover is a finite subcollection $U_1, \ldots, U_k$ of the open sets such that $X = \cup^k_{i=1} U_i$ (or, for a subset, $A \subset \cup^k_{i=1} U_i$).

Here is an example: $(\frac{3}{4}, 1] \cup (\cup^{\infty}_{n=1} [0, \frac{n}{n+1}))$ is an open cover of $[0,1]$ in the subspace topology. A finite subcover (from this collection) would be $[0, \frac{4}{5}) \cup (\frac{3}{4}, 1]$.
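One can sanity-check that the two sets of the finite subcover really do cover $[0,1]$ with no gap; a small sketch with exact rationals (the helper `covered` is just an illustrative name):

```python
from fractions import Fraction

# Finite subcover of [0,1]: [0, 4/5) together with (3/4, 1].
def covered(x):
    return (0 <= x < Fraction(4, 5)) or (Fraction(3, 4) < x <= 1)

# Check on a fine grid of rationals in [0,1]; the two pieces overlap
# on (3/4, 4/5), so nothing falls through the crack.
for k in range(0, 1001):
    assert covered(Fraction(k, 1000))
```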

Let $X$ be a topological space. We say that $X$ is a compact topological space if any open cover of $X$ has a finite subcover. If $C \subset X$ we say that $C$ is a compact subset of $X$ if any open cover of $C$ has a finite subcover.

Prior to going through examples, I think that it is wise to mention something. One logically equivalent definition is this: A space (or a subset) is compact if every cover by open basis elements has a finite subcover. Here is why: if $X$ is compact, then ANY open cover has a finite subcover, and an open cover by basis elements is an open cover. On the other hand, assume the “every open cover by open basis elements has a finite subcover” condition. If $\mathscr{U}$ is an open cover, write each open $U_{\beta} \in \mathscr{U}$ as a union of basis elements; the collection of all of these basis elements is an open cover by basis elements, which has a finite subcover, say $B_1, B_2, \ldots, B_k$. Then for each basis element $B_i$, choose a single $U_i \in \mathscr{U}$ for which $B_i \subset U_i$. That gives the required finite subcover.

Now, when convenient, we can assume that the open cover in question (during a proof) consists of basic open sets. That will simplify things at times.

So, what are some compact spaces and sets, and what are some basic facts?

Let’s see some compact sets, some non compact sets and see some basic facts.

1. Let $X$ be any topological space and $A \subset X$ a finite subset. Then $A$ is a compact subset. Proof: given any open cover of $A$ choose one open set per element of $A$ which contains said element.

2. Let $R$ have the usual topology. Then the set of integers $Z \subset R^1$ is not a compact subset: the open cover $\cup^{\infty}_{n = -\infty} (n - \frac{1}{4}, n+ \frac{1}{4})$ is an infinite cover with no finite subcover (each open set contains exactly one integer). In fact, ANY unbounded subset $A \subset R^n$ in the usual metric topology fails to be compact: for $a \in A$ with $d(a,0) \geq n$ choose $B_a(\frac{1}{n})$; this open cover can have no finite subcover, as any finite union of such balls is bounded while $A$ is not.
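The cover of the integers can be checked directly: each interval $(n - \frac{1}{4}, n + \frac{1}{4})$ catches exactly one integer, so any finite subcollection covers only finitely many integers. A quick sketch (the helper `members` is an illustrative name):

```python
from fractions import Fraction

# Each integer m lies in exactly one of the sets (n - 1/4, n + 1/4).
def members(n):
    # integers inside (n - 1/4, n + 1/4): just n itself
    return [m for m in range(n - 2, n + 3)
            if n - Fraction(1, 4) < m < n + Fraction(1, 4)]

assert all(members(n) == [n] for n in range(-10, 11))

# So any finite subcollection of the cover contains only finitely
# many integers, never all of Z: the cover has no finite subcover.
```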

3. The finite union of compact subsets is compact (easy exercise).

4. If $C \subset X$ is compact and $X$ is a Hausdorff topological space ($T_2$), then $C$ is closed. Here is why: let $x \notin C$ and for every $c \in C$ choose disjoint open sets $U_c, V_c$ where $x \in U_c, c \in V_c$. Now $\cup_{c \in C}V_c$ is an open set which contains $C$ and, as $C$ is compact, has a finite subcover $\cup_{i=1}^k V_i$. Note that each $U_i$ is an open set which contains $x$, and now we have only a finite number of these. Hence $x \in \cap^k_{i=1} U_i$, which is disjoint from $\cup_{i=1}^k V_i$, which contains $C$. Because $x$ was an arbitrary point in $X - C$, $X-C$ is open, which means that $C$ is closed. Note: this proof, with one minor addition, shows that a compact Hausdorff space is regular ($T_3$); we need only show that a closed subset of a compact Hausdorff space is compact. That is easy enough to do: let $\mathscr{U}$ be an open cover for $C$; then the collection $\mathscr{U} \cup (X-C)$ is an open cover for $X$, which has a finite subcover. Let that be $\cup^k_{i=1} U_i \cup (X-C)$ where each $U_i \in \mathscr{U}$. Now since $X-C$ contains no point of $C$, $\cup^k_{i=1} U_i$ covers $C$.

So we have proved that a closed subset of a compact set is compact.

5. Let $R$ (or any infinite set) be given the finite complement topology (that is, the open sets are the empty set together with sets whose complements consist of a finite number of points). Then ANY subset is compact! Here is why: let $C$ be any set and let $\mathscr{U}$ be any open cover. Choose $U_1 \in \mathscr{U}$. Since $X -U_1$ is a finite set of points, only a finite number of them can be in $C$, say $c_1, c_2, \ldots, c_k$. Then for each of these, select one open set in the open cover that contains the point; these, together with $U_1$, form the finite subcover.

Note: this shows that non-closed sets can be compact sets, if the underlying topology is not Hausdorff.

6. If $f: X \rightarrow Y$ is continuous and onto and $X$ is compact, then so is $Y$. Proof: let $\cup_{\beta \in I} U_{\beta}$ cover $Y$ and note that $\cup_{\beta}f^{-1}(U_{\beta})$ covers $X$, hence a finite number of these open sets cover: $X = \cup^{k}_{i=1}f^{-1}(U_i)$. Therefore $\cup^k_{i=1}U_i$ covers $Y$. Note: this shows that being compact is a topological invariant; that is, if two spaces are homeomorphic, then either both spaces are compact or neither one is.

7. Ok, let’s finally prove something. Let $R^1$ have the usual topology. Then $[0, 1]$ (and therefore any closed interval) is compact. This is (part) of the famous Heine-Borel Theorem. The proof uses the least upper bound axiom of the real numbers.

Let $\mathscr{U}$ be any open cover for $[0,1]$. If no finite subcover exists, let $x$ be the least upper bound of the subset $F$ of $[0,1]$ that CAN be covered by a finite subcollection of $\mathscr{U}$. Now $x > 0$ because at least one element of $\mathscr{U}$ contains $0$ and therefore contains $[0, \delta )$ for some $\delta > 0$. Assume that $x < 1$. Now suppose $x \in F$, that is, $x$ is part of the subset that can be covered by a finite subcover. Then because $x \in U_{\beta}$ for some $U_{\beta} \in \mathscr{U}$, we have $(x-\delta, x + \delta) \subset U_{\beta}$ for some $\delta > 0$, which means that $x + \frac{\delta}{2} \in F$, which means that $x$ isn’t an upper bound for $F$.

Now suppose $x \notin F$; then because $x < 1$ there is still some $U_{\beta}$ where $(x-\delta, x+ \delta) \subset U_{\beta}$. But since $x = \mathrm{lub}(F)$, there is some $y \in F$ with $y > x - \delta$, so $[0, y]$ can be covered by some finite subcollection $\cup^k_{i=1} U_i$. Then $\cup^k_{i=1} U_i \cup U_{\beta}$ is a finite subcover of $[0, x + \delta )$, which means that $x$ was not an upper bound. It follows that $x = 1$, and repeating the argument at $x = 1$ (some element of $\mathscr{U}$ contains $(1 - \delta, 1]$) shows that the unit interval is compact.

Now what about the closed ball in $R^n$? The traditional way is to show that the closed ball is a closed subset of a closed hypercube in $R^n$ and so if we show that the product of compact spaces is compact, we would be done. That is for later.

8. Now endow $R^1$ with the lower limit topology. That is, the open sets are generated by basis elements $[a, b)$. Note that the lower limit topology is strictly finer than the usual topology. Now in this topology, $[0,1]$ is not compact. (Note: none of $(0,1), [0,1), (0, 1]$ are compact in the coarser usual topology, so there is no need to consider these.) To see this, cover $[0,1]$ by $(\cup ^{\infty}_{n=1} [0, \frac{n}{n+1})) \cup [1, \frac{3}{2})$; it is easy to see that this open cover has no finite subcover. In fact, with a bit of work, one can show that every compact subset is at most countable and nowhere dense; indeed, if $A$ is compact in the lower limit topology and $a \in A$, there exists some $y_a < a$ where $(y_a, a) \cap A = \emptyset$.
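Why this cover has no finite subcover can be seen concretely: a finite subcollection $[0, \frac{1}{2}), \ldots, [0, \frac{N}{N+1}), [1, \frac{3}{2})$ misses every point of $[\frac{N}{N+1}, 1)$. A small sketch with exact rationals (the helper `in_cover_element` is an illustrative name):

```python
from fractions import Fraction

# Cover of [0,1] by lower-limit basis elements:
# [0, n/(n+1)) for n = 1, 2, ... together with [1, 3/2).
def in_cover_element(n, x):
    return 0 <= x < Fraction(n, n + 1)

# A finite subcollection [0, 1/2), ..., [0, N/(N+1)), [1, 3/2)
# misses every point of [N/(N+1), 1):
N = 50
x = Fraction(N, N + 1)              # a point that is missed
assert not any(in_cover_element(n, x) for n in range(1, N + 1))
assert not (1 <= x)                 # and x is not in [1, 3/2) either
```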

## March 12, 2015

### Radial plane: interesting topology for the plane

Filed under: advanced mathematics, topology — collegemathteaching @ 4:14 pm

Willard (in the book General Topology) defines something called the “radial plane”: the set of points is $R^2$, and a set $U$ is declared open if it meets the following property: for all $\vec{x} \in U$ and each unit vector $\vec{u}_{\theta} = \langle \cos(\theta), \sin(\theta) \rangle$ there is some $\epsilon_{\theta} > 0$ such that $\{ \vec{x} + t \vec{u}_{\theta} : 0 \leq t < \epsilon_{\theta} \} \subset U$.

In words: a set is open if, for every point in the set, there is an open line segment in every direction from the point that stays within the set; note the line segments do NOT have to be of the same length in every direction.

Of course, a set that is open in the usual topology for $R^2$ is open in the radial topology.

It turns out that the radial topology is strictly finer than the usual topology.

I am not going to prove that here but I am going to show a very curious closed set.

Consider the following set $C = \{(x, x^4), x > 0 \}$. In the usual topology, this set is neither closed (it lacks the limit point $(0,0)$ ) nor open. But in the radial topology, $C$ is a closed set.

To see this we need only show that there is an open set $U$ that misses $C$ and contains the origin (it is easy to find an open set that shields other points in the complement from $C$. )

First note that the line $x = 0$ contains $(0,0)$ and is disjoint from $C$, as is the line $y = 0$. Now what about the line $y = mx$ for $m \neq 0$? $mx = x^4 \rightarrow x^4-mx = x(x^3-m) = 0$, and so the set $\{(x, mx) \}$ meets $C$ at $x = m^{\frac{1}{3}}, y = m^{\frac{4}{3}}$ and at no other points. So every line through the origin meets $C$ in at most one point, and by choosing each segment short enough to avoid that point we see that, by definition, $R^2 - C$ is an open set which contains $(0,0)$.
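The little computation $x^4 = mx \Rightarrow x = m^{1/3}$ (for $x > 0$) is easy to verify exactly for a cube value of $m$; a quick sketch (the helper `intersections` is an illustrative name):

```python
from fractions import Fraction

# C = {(x, x^4) : x > 0}. The line y = m*x meets C where x^4 = m*x,
# i.e. x*(x^3 - m) = 0, so (for x > 0) only at x = m^(1/3).
def intersections(m, xs):
    return [x for x in xs if x > 0 and x**4 == m * x]

m = 8                                # m^(1/3) = 2, chosen to stay exact
xs = [Fraction(k, 10) for k in range(-50, 51)]
assert intersections(m, xs) == [2]   # only the point x = 2 on this grid
```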

Of course, we can do that at ANY point on the usual graph of $f(x) = x^4$; the graphs of such “curvy” functions have no limit points in the radial topology.

Therefore such a graph, in the subspace topology…has the discrete topology.

On the other hand, the lattice of rational points in the plane forms a countable, dense set (a line segment from a rational lattice point with a rational slope will intercept another rational lattice point).

So we have a separable topological space that lacks a countable basis (it contains an uncountable discrete subspace): $R^2$ with the radial topology is not metrizable. In particular the radial topology is not the usual (metric) topology, and therefore it is strictly finer than the usual topology.

PS: I haven’t checked the above carefully, but I am reasonably sure it is right; a reader who spots an error is encouraged to point it out in the comments. I’ll have to think about this a bit.

## February 16, 2015

### Topologist’s Sine Curve: connected but not path connected.

Filed under: student learning, topology — collegemathteaching @ 1:01 am

I wrote the following notes for my elementary topology class here. Note: the students know about metric spaces but not about general topological spaces; we just covered “connected sets”.

I’d like to make one concession to practicality (relatively speaking). When it comes to showing that a space is path connected, we need only show that, given any points $x,y \in X$ there exists $f: [a,b] \rightarrow X$ where $f$ is continuous and $f(a) = x, f(b) = y$. Here is why: $s: [0,1] \rightarrow [a,b]$ by $s(t) = a + (b-a)t$ maps $[0,1]$ to $[a,b]$ homeomorphically provided $b \neq a$ and so $f \circ s$ provides the required continuous function from $[0,1]$ into $X$.
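The reparametrization argument can be sketched as a tiny piece of code; everything here (the names `reparametrize`, the sample path $f$) is purely illustrative:

```python
# Reparametrize a path f: [a,b] -> X to the standard domain [0,1]
# via s(t) = a + (b - a)*t; then g = f o s is the path on [0,1].
def reparametrize(f, a, b):
    assert b != a                     # s is a homeomorphism only if b != a
    return lambda t: f(a + (b - a) * t)

# Example with the hypothetical path f(x) = (x, x**2) on [2, 5]:
f = lambda x: (x, x**2)
g = reparametrize(f, 2, 5)
assert g(0) == f(2) and g(1) == f(5)  # endpoints match
```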

Now let us discuss the topologist’s sine curve. As usual, we use the standard metric in $R^2$ and the subspace topology.

Let $S = \{ (t, \sin(\frac{1}{t})) \mid t \in (0, \frac{1}{\pi}] \}$. See the above figure for an illustration. $S$ is path connected: given any two points $(x_1, \sin(\frac{1}{x_1})), (x_2, \sin(\frac{1}{x_2}))$ in $S$, then $f(x) = (x, \sin(\frac{1}{x}))$ is the required continuous function $[x_1, x_2] \rightarrow S$. Therefore $S$ is connected as well.

Note that $(0,0)$ is a limit point for $S$ though $(0,0) \notin S$.

Exercise: what other limit points does $S$ have that are not in $S$?

Now let $T = S \cup \{ (0,0) \}$, that is, we add in the point at the origin.

Fact: $T$ is connected. This follows from a result that we proved earlier but here is how a “from scratch” proof goes: if there were open sets $U, V$ in $R^2$ that separated $T$ in the subspace topology, every point of $S$ would have to lie in one of these, say $U$ because $S$ is connected. So the only point of $T$ that could lie in $V$ would be $(0,0)$ which is impossible, as every open set containing $(0,0)$ hits a point (actually, uncountably many) of $S$.

Now we show that $T$ is NOT path connected. To do this, we show that there can be no continuous function $f: [0, \frac{1}{\pi}] \rightarrow T$ where $f(0) = (0,0), f(\frac{1}{\pi}) = (\frac{1}{\pi}, 0 )$.

One should be patient with this proof. It will go in the following stages: first we show that any such function $f$ must include EVERY point of $S$ in its image, and then we show that such a function cannot be continuous at $0$.

First step: for every $(z, \sin(\frac{1}{z})) \in S$, there exists $x \in (0,\frac{1}{\pi} ]$ where $f(x) = (z, \sin(\frac{1}{z}))$. Suppose one point were missed; let $z_0$ denote the least upper bound of all $x$ coordinates of points that are not in the image of $f$. By design $z_0 \neq \frac{1}{\pi}$ (why: continuity and the fact that $f(\frac{1}{\pi}) = (\frac{1}{\pi}, 0)$). So $(z_0, \sin(\frac{1}{z_0}))$ cuts the image of $T$ into two disjoint open sets $U_1, V_1$ (in the subspace topology): that part with x-coordinate less than $z_0$ and that part with x-coordinate greater than $z_0$. So $f^{-1}(U_1)$ and $f^{-1}(V_1)$ form separating open sets for $[0,\frac{1}{\pi}]$, which is impossible.

Note: if you don’t see the second open set in the picture, note that for all $(w, \sin(\frac{1}{w})), w > z_0$, one can find an open disk that misses the part of the graph that occurs “before” the $x$ coordinate $z_0$. The union of these open disks (an uncountable union) plus an open disk around $(0,0)$ forms $V_1$; remember that an arbitrary union of open sets is open.

Second step: now we know that every point of $S$ is hit by $f$. So we can find sequences $a_n \in f^{-1}((\frac{1}{n \pi}, 0))$ and $b_n \in f^{-1}((\frac{2}{(4n+1) \pi}, 1))$; note that $sin(\frac{(4n+1)\pi}{2}) = 1$. By compactness of $[0, \frac{1}{\pi}]$ we can pass to subsequences and assume $a_n \rightarrow a$ and $b_n \rightarrow b$. If $f$ were continuous, then $f(a) = \lim f(a_n) = (0,0)$, which is fine, but also $f(b) = \lim f(b_n) = (0,1)$, which is not even a point of $T$. That is impossible if $f$ is continuous.
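A quick numerical sanity check of the two families of points on $S$ used above (a minimal Python sketch; it illustrates the image points, not the abstract preimage sequences): the $x$-coordinates $\frac{1}{n\pi}$ and $\frac{2}{(4n+1)\pi}$ both shrink to $0$, yet the corresponding $y$-coordinates $sin(\frac{1}{x})$ sit at $0$ and $1$ respectively.

```python
import math

# x-coordinates of the two families of points on S.
def x_n(n):
    return 1.0 / (n * math.pi)            # sin(1/x_n) = sin(n*pi) = 0

def y_n(n):
    return 2.0 / ((4 * n + 1) * math.pi)  # sin(1/y_n) = sin(pi/2 + 2n*pi) = 1

for n in (1, 10, 100):
    print(x_n(n), math.sin(1.0 / x_n(n)))  # second entry ~ 0
    print(y_n(n), math.sin(1.0 / y_n(n)))  # second entry ~ 1
```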

This gives us another classification result: $T$ and $[0,1]$ are not topologically equivalent as $T$ is not path connected.

## February 2, 2015

### The challenge of teaching undergraduate topology

Filed under: pedagogy, topology — Tags: — collegemathteaching @ 10:44 pm

I mentioned that I’d be teaching undergraduate topology for the first time ever. Yes, I’ve taught “proof required” courses before: on two separate occasions between 1991 and now. But in each case, I taught a senior level abstract algebra course in which every student came in having taken a course that required proof.

Technically speaking, this course doesn’t require a previous “proof” course.

Then there is how long I’ve been doing this; my first published paper appeared in 1991 (I did the work in 1989). That was well before any student in my class was born!!!

So, this process is going to be a bit like a native language speaker trying to teach someone else a new language.

This means: I’ll have to slow down; this book I’ve chosen won’t be too easy after all. 🙂 I am going to have to back way off of the pace. After all, I am teaching both the material AND proof writing.

## January 25, 2015

### An interesting topological space

Filed under: advanced mathematics, topology — Tags: , — collegemathteaching @ 2:46 pm

I am teaching an undergraduate topology course this semester. While we are still going through basic set theory, I’ve been racing ahead and looking at examples. Yes, Counterexamples in Topology (Steen and Seebach) is an excellent reference. In fact, I have two copies! 🙂

One space that caught my eye is the Alexandroff Square. Take the usual closed $[0,1] \times [0,1]$ square in the plane. Now if $(x,y)$ is a non-diagonal point, let a local basis be open segments of the form $\{x \} \times (y - \epsilon, y + \epsilon )$, that is, small open vertical line segments that miss the diagonal. Open sets containing diagonal points $(x,x)$ are open horizontal strips $[0,1] \times (x - \epsilon, x + \epsilon)$ MINUS a finite number of vertical line segments $\{x_i \} \times (x - \epsilon, x + \epsilon)$.
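As a concrete illustration of the two kinds of basic open sets, here is a minimal Python sketch of the membership tests (the function names are my own, purely illustrative):

```python
def in_diagonal_basic(p, x, eps, removed):
    """Basic open set about the diagonal point (x, x): the horizontal strip
    [0,1] x (x - eps, x + eps) minus the finitely many vertical segments
    {x_i} x (x - eps, x + eps) for x_i in `removed`."""
    a, b = p
    return 0 <= a <= 1 and x - eps < b < x + eps and a not in removed

def in_off_diagonal_basic(p, x, y, eps):
    """Basic open set about a non-diagonal point (x, y): the vertical
    segment {x} x (y - eps, y + eps), taken small enough to miss the diagonal."""
    a, b = p
    return a == x and y - eps < b < y + eps

print(in_diagonal_basic((0.5, 0.55), 0.5, 0.1, {0.25}))   # True
print(in_diagonal_basic((0.25, 0.55), 0.5, 0.1, {0.25}))  # False: on a removed segment
```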

This topological space is compact, Hausdorff, regular (that is, $T_3$) and normal (that is, $T_4$).

(quick reminder: Hausdorff (or $T_2$) means that any two points lie in disjoint open sets, Regular means that a point and a disjoint closed set can be separated by mutually disjoint open sets, and normal means that two disjoint closed sets can be separated by mutually disjoint open sets.)

One unusual aspect (to me) about this topology is how different the open sets are; there should be a way of characterizing this property. In a sense, the collection of open sets isn’t homogeneous.

I’ve decided to play with a simpler example that is based on the Alexandroff square:

consider the closed interval $[-1,1]$. For all $x \neq 0$, use the discrete topology. For $0$, use the entire interval minus any finite set as a local basis. Then one obtains many of the same features of the Alexandroff square, for similar reasons.
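To see how convergence to $0$ works in this simpler space, here is a hedged Python sketch (the helper name `tail_index` is hypothetical): a sequence with distinct terms, like $1/n$, is eventually inside every basic neighborhood of $0$, since each such neighborhood excludes only finitely many points.

```python
from fractions import Fraction

def tail_index(seq, excluded, horizon=10_000):
    """Brute-force (up to `horizon`) the index after which seq(n) avoids the
    finite set `excluded`, i.e. after which the sequence sits inside the
    basic neighborhood of 0 given by [-1, 1] minus `excluded`."""
    bad = [n for n in range(1, horizon) if seq(n) in excluded]
    return (max(bad) + 1) if bad else 1

# 1/n converges to 0 in this topology: any finite excluded set is
# eventually avoided because the terms are all distinct.
print(tail_index(lambda n: Fraction(1, n), {Fraction(1, 2), Fraction(1, 5)}))  # 6
```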

So, what do I mean by “different types” of open sets for different points?
This might work: let $x, y \in X$. Now I’d say that the open neighborhoods for $x, y$ are similar if, for every open neighborhood $U_x$ containing $x$, there is a map $f: U_x \rightarrow X$ with $f(x) = y$ such that $f(U_x)$ is an open neighborhood of $y$ and $f$ is a homeomorphism onto its image.

Let’s turn to the Alexandroff square for a minute. If $p$ is not a diagonal point, choose $U_p$ to be a vertical open line segment; in the subspace topology this is just an open interval, which is second countable. Now let $q$ be a diagonal point; every open $U_q$ contains uncountably many pairwise disjoint nonempty open sets (pieces of the vertical segments it meets), so it is not second countable. Hence no local homeomorphism from $U_p$ onto a neighborhood of $q$ can exist.

## January 17, 2015

### Convergence of functions and nets (from advanced calculus)

Sequences are a very useful tool but they are inadequate in some astonishingly elementary applications.

Take the case of pointwise convergence of functions (studied earlier in this blog here and here).

Let’s look at the very simple example: $f(x) = 0$ for all $x \in R$. Yes, this is just the constant function.

Now consider a set $A$ which consists of all functions $g(x) = 0$ if $x \in F$, where $F$ is some finite set of points, and $g(x) = 1$ otherwise. Remember $|F| = k < \infty$. Recall what pointwise convergence means: $h_k \rightarrow h$ pointwise if, for each $x$ and each $\epsilon > 0$, there exists $m$ such that for all $k \geq m$ we have $|h_k(x) - h(x)| < \epsilon$. Note: the $m$ can vary with $x$.

Now recall that $A \subset R^R$. What topology are we using on $R^R$, since this is a product space? For pointwise convergence, we use the product topology, in which a basic open set restricts a finite number of coordinates to ordinary open subsets of the real line and puts no restriction (the entire real line) on the remaining coordinates.

If this seems strange, consider the easier case $R^N$ where $N$ represents the natural numbers. Then we can view elements of $R^N$ as sequences, each of which takes a real value. The basic open sets here are the “sequences” $(U_1, U_2, U_3, \ldots, U_k, \ldots)$ where the $U_i$ are copies of the real line, except for a finite number of indices, for which the $U_i$ can be arbitrary open sets in the real line.
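A basic open set in this product topology can be modeled directly, since only finitely many coordinates are constrained. A minimal Python sketch (the names are mine, purely illustrative):

```python
def in_basic_open(seq, constraints):
    """Membership of a sequence (given as a function of n) in a basic open
    set of R^N: `constraints` maps finitely many indices to open intervals
    (a, b); every unmentioned coordinate is unrestricted (the whole line)."""
    return all(a < seq(n) < b for n, (a, b) in constraints.items())

# The sequence 1/n lies in the basic open set constraining coordinates 1 and 10:
print(in_basic_open(lambda n: 1 / n, {1: (0.5, 1.5), 10: (0.0, 0.2)}))  # True
print(in_basic_open(lambda n: 1 / n, {1: (2.0, 3.0)}))                  # False
```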

For $R^R$, we index by the real numbers instead of by $N$.

So, in terms of pointwise convergence, this means: for every $x \in R$, the $\epsilon_x$ is associated with the open set in the product topology whose factor at that value of $x$ is an ordinary open set in the real line, while the remaining factors of $R$ are just the whole real line; in short, they don’t really matter when figuring out whether this function converges AT THIS VALUE OF $x$.

So with the product topology in place, look at our $f(x) = 0$ for all $x$. Given ANY open set $U \subset R^R, f \in U$, we see that $U \cap A \ne \emptyset$. For example, if $h(x) = 1$ for $x \ne 0$, $h(0) = 0$ and $U$ is the open set which corresponds to $(-\delta, \delta)$ in the $0$ factor and the real line elsewhere, then $h \in U \cap A$. So, we conclude that $f$ is in the topological closure of $A$.

But THERE IS NO SEQUENCE IN $A$ which converges to $f$. In fact, it is relatively easy to see that if $g_i$ is any sequence in $A$ and if $g_i \rightarrow g$, then $g$ is zero in at most a countable number of points. That is, if we use the Lebesgue integral, $\int^b_a g(x) dx = b-a.$ (of course, $g$ might not be Riemann integrable).

So sequences cannot reach a point in the closure. For the experts: this shows that $R^R$ in the product topology is not a first countable topological space; that is, its points do not have countable neighborhood bases. This also implies that it is not a metric space (or, more precisely, can’t be made into a metric space).

But I am digressing. The point is that, in situations like this, we want another tool to take the place of a sequence. That will be called a net.

Nets and Directed Sets
Roughly, a net is a “sequence like” thing that can be indexed by an uncountable set. But this indexing set needs to have a “direction like” quality to it. So, what works are “directed sets”.

A directed set is a collection of objects $a_{I}$ with a relation $\preccurlyeq$ that satisfy the following properties:

1) $a_I \preccurlyeq a_I$ (reflexive property)

2) If $a_I \preccurlyeq a_J$ and $a_J \preccurlyeq a_K$ then $a_I \preccurlyeq a_K$ (transitive property)

3) Given $a_I$ and $a_J$ there exists $a_K$ where $a_J \preccurlyeq a_K$ and $a_I \preccurlyeq a_K$ (direction)

Note: though a directed set could be an ordered set (say, $R$ with the usual order relation) or a partially ordered set (say, subsets ordered by inclusion), they don’t have to be. For example: one can form a directed set out of the complex numbers by declaring $w \preccurlyeq z$ if $|w| \leq |z|$. Then note $i \preccurlyeq 1$ and $1 \preccurlyeq i$ but $1 \ne i$.

Now a net is a map from a directed set into a space. It is often denoted by $x_I$ ($I$ is an element in the index set, which is a directed set). So, a real valued net indexed by the reals is, well, an object in $R^R$.

Now given a set $U$ in a topological space, we say that a net $x_I$ is eventually in $U$ if there is an index $J$ such that, for all $K \succcurlyeq J$, $x_K \in U$. We say that a net $x_I \rightarrow x$ if, for all open sets $U$ with $x \in U$, the net $x_I$ is eventually in $U$.

Now getting back to our function example: we CAN come up with a net in $A$ that converges to our function $f$; we merely have to be clever about how we choose our index set. One way: make a directed set out of the $g_I \in A$ by declaring $g_I \preccurlyeq g_J$ if $g^{-1}_I (0) \subset g^{-1}_J(0)$. Now take any basic neighborhood of $f$ in the product topology (remember that this consists of the product of a finite number of the usual open sets in the real line with an infinite number of copies of the real line); elements of this net are eventually in this open set, namely the functions which are zero at the finitely many values of $R$ that correspond to those open sets. (see here for a couple of ways of doing this)
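The directed set and the “eventually in” check can be sketched in a few lines of Python (a toy model under my own naming; the index set is the collection of finite subsets of $R$, ordered by inclusion):

```python
import math

def g(F):
    """The element of A indexed by the finite set F: zero exactly on F, one elsewhere."""
    return lambda x: 0 if x in F else 1

def in_basic_nbhd_of_f(h, constrained, eps=0.5):
    """Basic product-topology neighborhood of f = 0: require |h(x)| < eps at
    the finitely many constrained coordinates; no restriction elsewhere."""
    return all(abs(h(x)) < eps for x in constrained)

# Past the index J = the constrained set itself, every g_F with F containing J
# lies in the neighborhood -- this is exactly "the net is eventually in U".
J = frozenset({0.0, math.pi, 42.0})
for F in (J, J | {1.5}, J | {-7.0, 2.25}):
    print(in_basic_nbhd_of_f(g(F), J))  # True each time
```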

This demonstrates the usefulness of nets. Note that trying to use a “sequence idea” by starting with a function that is zero at exactly one point and then going to a function that is zero at two points, three points, … can only get you to a function that is zero at a countable number of points, which is NOT in $A$. That is, one “leaves” $A$ prior to getting to where one wants to go, which is a function that is zero at all points of the real line.

On the other hand, a directed set can “start” at an uncountable number of elements of $A$ to begin with and get to being eventually in any basic open set containing $f$ in a finite number of steps. Of course, one must allow for an uncountable number of sequence like paths to get into any of the uncountable number of basic open sets, but each path consists of only a finite number of steps.

## January 16, 2015

### Power sets, Function spaces and puzzling notation

I’ll probably be posting point-set topology stuff due to my being excited about teaching the course…finally.

Power sets and exponent notation
If $A$ is a set, then the power set of $A$, often denoted by $2^A$, is a set that consists of all subsets of $A$.

For example, if $A = \{1, 2, 3 \}$, then $2^A = \{ \emptyset , \{1 \}, \{ 2 \}, \{3 \}, \{1, 2 \}, \{1,3 \}, \{2, 3 \}, \{1, 2, 3 \} \}$. Now it is no surprise that if the set $A$ is finite and has $n$ elements, then $2^A$ has $2^n$ elements.

However, there is another helpful way of listing $2^A$. A subset of $A$ can be defined by which elements of $A$ that it has. So, if we order the elements of $A$ as $1, 2, 3$ then the power set of $A$ can be identified as follows: $\emptyset = (0, 0, 0), \{1 \} = (1, 0, 0), \{ 2 \} = (0,1,0), \{ 3 \} = (0, 0, 1), \{1,2 \} = (1, 1, 0), \{1,3 \} = (1, 0, 1), \{2,3 \} = (0, 1, 1), \{1, 2, 3 \} = (1, 1, 1)$

So there is a natural correspondence between the elements of a power set and sequences of binary digits. Of course, this makes the counting much easier.
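This correspondence is easy to see computationally; a minimal Python sketch using binary masks, one bit per element:

```python
def power_set(elems):
    """All subsets of `elems`: bit i of the mask records whether the
    i-th element belongs to the subset."""
    elems = list(elems)
    n = len(elems)
    return [{elems[i] for i in range(n) if mask >> i & 1}
            for mask in range(2 ** n)]

subsets = power_set([1, 2, 3])
print(len(subsets))       # 8, i.e. 2^3
print(set() in subsets)   # True
```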

The binary notation might seem like an unnecessary complication at first, but now consider the power set of the natural numbers: $2^N$. Of course, listing its elements would be cumbersome at best, if not impossible! But here the binary notation really shows its value. Remember that the binary notation is a sequence of 0’s and 1’s, where a 0 in the i’th slot means that the i’th element is not in the subset and a 1 means that it is.

Since a subset of the natural numbers is defined by its list of elements, every subset has an infinite binary sequence associated with it. We can order the slots in the usual order $1, 2, 3, 4, \ldots$; then the sequence $1, 0, 0, 0, \ldots$ corresponds to the set with just 1 in it, the sequence $1, 0, 1, 0, 1, 0, \ldots$ corresponds to the set of all odd positive integers, etc.

Then, of course, one can use Cantor’s Diagonal Argument to show that $2^N$ is uncountable; in fact, if one uses the fact that every non-negative real number has a binary expansion (possibly infinite), one then shows that $2^N$ has the same cardinality as the real numbers.
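The diagonal argument itself is mechanical enough to sketch in Python (a toy finite-prefix version; `diagonal_flip` is my own name): flip the $n$-th digit of the $n$-th listed sequence, and the resulting sequence differs from every sequence in the list.

```python
def diagonal_flip(rows):
    """Given a finite list of binary sequences (each a function of its
    index), return a prefix of the diagonal sequence: it flips the n-th
    digit of the n-th row, so it disagrees with row n at position n."""
    return [1 - rows[n](n) for n in range(len(rows))]

# An attempted "enumeration": row n is the sequence i -> (i + n) mod 2.
rows = [lambda i, n=n: (i + n) % 2 for n in range(5)]
d = diagonal_flip(rows)
print(d)                                           # [1, 1, 1, 1, 1]
print(all(d[n] != rows[n](n) for n in range(5)))   # True
```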

Power notation
We can expand on this power notation. Remember that $2^A$ can be thought of as setting up a “slot” or an “index” for each element of $A$ and then assigning a $1$ or $0$ for every element of $A$. One can then think of this in an alternate way: $2^A$ can be thought of as the set of ALL functions from the elements of $A$ to the set $\{ 0, 1 \}$. This coincides with the “power set” concept, as set membership is determined by being either “in” or “not in”. So, the set in the exponent can be thought of either as the indexing set, with the base as the value each indexed slot can take on (sequences, in the case that the exponent set is either finite or countably infinite), OR this can be thought of as the set of all functions where the exponent set is the domain and the base set is the range.

Remember, we are talking about ALL possible functions and not all “continuous” functions, or all “morphisms”, etc.

So, $N^N$ can be thought of as either the set of all possible sequences of positive integers, or, equivalently, the set of all functions from $N$ to $N$.

Then $R^N$ is the set of all real number sequences (i.e., the types of sequences we study in calculus), or, equivalently, the set of all real valued functions of the positive integers.

Now it is awkward to try to assign an ordering to the reals, so when we consider $R^R$ it is best to think of this as the set of all functions $f: R \rightarrow R$, or equivalently, the set of all strings which are indexed by the real numbers and have real values.

Note that sequences don’t really seem to capture $R^R$ in the way that they capture, say, $R^N$. But there is another concept that does, and that concept is the concept of the net, which I will talk about in a subsequent post.