Optimization, Step 4: Compactness


The Wikipedia article for compact spaces is large, but extremely informative. I want to distill the important points of what it means to be compact in the context of optimization.

The motivation for compactness is to generalize the notion of being closed and bounded. Indeed, any subset A \subseteq \mathbb R that is both closed and bounded is compact. Take this as a working definition for now, and we’ll expand on it and re-adjust our thinking. Recall that \mathbb R is both closed and open, but it is definitely not bounded, as it has no upper or lower bound. For concreteness, we will only consider closed intervals, i.e. [a,b]. Being closed means that A contains its endpoints; more generally, it means that any convergent sequence with terms in A has its limit in A as well. We would like it if compactness meant just that – that sequences with terms in A have their limit point in A. Is the ‘definition’ of compactness we used at the beginning of this paragraph consistent with this logic? Yes and no, and we’ll delve into that now.

Compactness was studied from two different fronts at the same time: Bolzano and Weierstrass in terms of sequences, Borel and Lebesgue from the point of view of something called open covers. If you have the time, this is what you want to read about the history of compactness. When math students are asked what compactness is, they normally spit out the open-cover version, because it is the more general one. As with continuity, there are multiple equivalent definitions.

3.11a Definition: compact
A topological space is called compact if each of its open covers has a finite subcover.

Here’s an equivalent definition.

3.11b Definition: compact
A topological space A is compact if every net in A has a convergent subnet.

This sounds like the terminology for sequences, but the statement becomes false if you replace nets with sequences. I only want to tease you with what nets are, because they’re cool. Sequences are treated as functions from the natural numbers to a set A, i.e. (x_n)_{n\in\mathbb N} = \{x_1,x_2,x_3,...\} with each x_n \in A. With sequences we treat x as the label of the function, recalling that x(n)=x_n and hence x:\mathbb N \rightarrow A. The natural numbers are countably infinite, but it can make sense to allow the domain of the sequence to be uncountable, as long as it has an ordering similar to that of the natural numbers. The domain needs to be what is called a directed set, and functions from directed sets into other sets are called nets. E.g. for a directed set D, x: D \rightarrow A is a net. See the Wikipedia page on nets for more. Finally, it is worth checking out this page to get an understanding of the open-cover definition, amongst other things.
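As a tiny aside, the “sequence as a function” viewpoint can be made literal in code (a toy sketch of my own; the function name and the particular sequence are arbitrary):

```python
# A sequence is just a function x: N -> A. Here A = R and x_n = 1/(n+1).
def x(n: int) -> float:
    return 1.0 / (n + 1)

# The subscript notation x_n is just function application x(n), and
# listing the first few terms recovers the familiar {x_1, x_2, ...} view.
first_terms = [x(n) for n in range(4)]
print(first_terms)  # [1.0, 0.5, 0.3333333333333333, 0.25]
```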

We want Definition 3.11b to work with sequences instead of nets, if that is possible. Unfortunately it’s not: in a general topological space, the open-cover definition and the sequence version are not equivalent – neither one implies the other – so the sequence version gets a slightly different name.

3.12 Definition: sequentially compact
A topological space A is called sequentially compact if every sequence in A has a convergent subsequence whose limit is in A.
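To see why the “limit is in A” clause matters in Definition 3.12, here is a toy numerical sketch of my own (the sequence is my arbitrary choice): x_n = 1/(n+1) lives entirely inside the open interval (0,1), but it – and every subsequence of it – converges to 0, which lies outside the set. So (0,1) is not sequentially compact, while [0,1] keeps the limit.

```python
# x_n = 1/(n+1): every term lies in the open interval (0, 1),
# but the sequence (and every subsequence) converges to 0.
terms = [1.0 / (n + 1) for n in range(1, 1000)]
limit = 0.0

in_open = 0.0 < limit < 1.0      # False: the limit escapes (0, 1)
in_closed = 0.0 <= limit <= 1.0  # True:  [0, 1] contains the limit
print(in_open, in_closed)  # False True
```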

Does being compact imply sequential compactness? Nope, unfortunately not.
Does being sequentially compact imply compactness? Nope, unfortunately not.
Definition 3.12 is what we would like to focus our attention on, and we’d like it to be the full version of compactness, but it’s not. Two things are important to note, however:

Thing 1: In a metric space, compactness and sequential compactness are the same thing. More precisely, a metric space is compact iff it is sequentially compact. Recall that all metric spaces are topological spaces, i.e. metric spaces \subset topological spaces (though not all topological spaces are metric spaces). That little bit of extra structure that metric spaces afford lets the two completely different definitions mean the same thing for metric spaces.

Thing 2: Borel proved that a subset of \mathbb R is compact iff it is closed and bounded. This result is now called the Heine-Borel theorem. A mathematician named Cousin proved it for subsets of \mathbb R^2 in the same year, and Borel then provided an argument for n dimensions three years later. Wikipedia uses the n-dimensional case. Note that it says for a subset of Euclidean space \mathbb R^n; Euclidean space carries the Euclidean metric and is thus a metric space. We are still only doing things in one dimension, but when referring to the Heine-Borel theorem I will probably say something like “in finite-dimensional Euclidean spaces, closed and bounded \Leftrightarrow compact”.
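In one dimension the Heine-Borel test is easy to mechanize. Here is a toy sketch of my own (the interval representation and the function name are made up for illustration):

```python
import math

# Represent an interval of R as (a, b, closed_left, closed_right),
# with a = -inf or b = +inf allowed for unbounded intervals.
def is_compact(a: float, b: float, closed_left: bool, closed_right: bool) -> bool:
    """Heine-Borel in R: an interval is compact iff it is closed and bounded."""
    bounded = math.isfinite(a) and math.isfinite(b)
    # A bounded interval is closed iff it contains both of its endpoints.
    closed = closed_left and closed_right
    return bounded and closed

print(is_compact(0, 1, True, True))          # [0, 1]   -> True
print(is_compact(0, 1, False, True))         # (0, 1]   -> False (not closed)
print(is_compact(0, math.inf, True, False))  # [0, inf) -> False (not bounded)
```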

Our train of logic goes something like this. Say I have a closed and bounded interval [a,b]\subset \mathbb R. We will call the interval A, i.e. A:=[a,b]. Because A is bounded, the infimum exists. Because A is also closed, the infimum is attained, i.e. the minimum exists (\inf (a,b) = a but \min (a,b) is undefined, whereas \inf [a,b) = \min [a,b) = a). By the definition of the infimum I can create a sequence converging to it, i.e. a sequence (x_n)_{n\in\mathbb N} exists such that \begin{aligned}\lim_{n\to \infty} x_n = \inf A = a  \end{aligned}. Even better, we know because of closedness that the sequence goes to the minimum of A, i.e. \begin{aligned}\lim_{n\to \infty} x_n = \min A = a  \end{aligned}. Such a minimizing sequence exists because the set has an infimum (a minimizing sequence is defined to be any sequence that tends to the infimum of a set). Equip the space with the Euclidean metric. By the Heine-Borel theorem, A is a closed and bounded subset of \mathbb R (i.e. a Euclidean space), hence A is compact. Because A is a compact metric space it is also sequentially compact, meaning that every minimizing sequence has a convergent subsequence whose limit is in A. Continuous functions map convergent sequences to convergent sequences, by definition: \forall (x_n)_{n\in \mathbb N} \subseteq A : \begin{aligned} \lim_{n \to \infty} x_n = a \Rightarrow \lim_{n\to\infty} f(x_n) = f(a)\end{aligned}. Now we have a sort-of proof that a continuous function f: A \rightarrow B attains its minimum, so long as A is a compact metric space.
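A numerical illustration of that chain of logic (the function f and the interval are my own toy choices, not from anywhere in particular): take a minimizing sequence for f(x) = (x - 0.3)^2 on the compact set [0, 1]; its limit stays in the set, and continuity carries the function values down to the minimum.

```python
# f is continuous on the compact set A = [0, 1]; inf_A f = 0, attained at x = 0.3.
def f(x: float) -> float:
    return (x - 0.3) ** 2

# A minimizing sequence: x_n = 0.3 + 0.5/(n+1) -> 0.3, every term inside [0, 1].
xs = [0.3 + 0.5 / (n + 1) for n in range(10_000)]
values = [f(x) for x in xs]

# The sequence converges to 0.3, which lies in [0, 1] (closedness),
# and continuity gives f(x_n) -> f(0.3) = 0, so the infimum is attained.
print(abs(xs[-1] - 0.3) < 1e-3, abs(values[-1]) < 1e-6)  # True True
```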

Continuous functions have many equivalent definitions; they’re very interesting objects when viewed topologically. The “convergent sequences” definition (as opposed to the \begin{aligned}\lim_{x\to x_0} f(x) = f(x_0) \end{aligned} one) is the one that is useful here. The chain of logic above told us that the minimizer exists.

Look at this paragraph in the Wikipedia article on the Bolzano-Weierstrass theorem. It brings the ideas of the Heine-Borel and Bolzano-Weierstrass theorems together. Terence Tao has a great exposition on compactness here.


We like dealing with compact spaces because the image of a compact set under a continuous function is compact. If the image B is compact and a metric space, then it has a minimizing sequence with a convergent subsequence.

Or, you can think of it as:

If the space A is compact and a metric space, it is also sequentially compact, and any minimizing sequence has a convergent subsequence. A continuous function maps convergent sequences to convergent sequences: i.e. x_n \to x implies f(x_n) \to f(x).
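The sequential-continuity statement is easy to watch happen numerically. A small sketch of my own (the choice of f = exp and the oscillating sequence are arbitrary):

```python
import math

# Sequential continuity: if x_n -> x, then f(x_n) -> f(x).
f = math.exp  # any continuous function would do

x = 1.0
# An oscillating sequence converging to x = 1.
xs = [x + (-1) ** n / (n + 1) for n in range(1, 200_000)]
gap = abs(f(xs[-1]) - f(x))
print(gap < 1e-4)  # True: the image sequence converges to f(1) = e
```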
