Question 3

Clearly the trivial subspace \{(0,0)\} is a subspace: it contains the additive identity of \mathbb R^2, and \lambda (0,0) + \mu (0,0) = (0,0) \in \{(0,0)\} for all \lambda, \mu \in \mathbb R, so it is closed under linear combinations.

Note that the straight lines through the origin in \mathbb R^2 are all proper subspaces of \mathbb R^2. We propose that these are the only non-trivial proper subspaces. We do this by showing that if a subspace V contains points on two distinct straight lines through the origin, then we can take linear combinations of these points to obtain any point in \mathbb R^2, so said subspace must be \mathbb R^2 itself.

The two approaches below are equivalent; the first simply relies on a few lemmas about bases and dimension.

With knowledge of bases:

Let V be a subspace of \mathbb R^2. Note that if (\alpha, \beta), (\gamma, \delta) \in V lie on distinct straight lines through the origin, then (\alpha, \beta) is not a scalar multiple of (\gamma, \delta), so (\alpha, \beta) and (\gamma, \delta) are linearly independent. Since \dim \mathbb R^2 = 2, the set \{(\alpha, \beta), (\gamma, \delta)\} therefore forms a basis for \mathbb R^2, i.e. every element of \mathbb R^2 can be written as a linear combination of these two vectors. By closure under linear combinations, V = \mathbb R^2.
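As a side check (not part of the proof), "lying on distinct straight lines through the origin" is equivalent to the standard determinant test for linear independence in \mathbb R^2. A minimal sketch, with an illustrative helper name `independent` (not from the text):

```python
# Determinant test: (alpha, beta) and (gamma, delta) are linearly
# independent in R^2 iff alpha*delta - beta*gamma != 0, i.e. iff they
# lie on distinct straight lines through the origin (and neither is 0).
def independent(v, w):
    alpha, beta = v
    gamma, delta = w
    return alpha * delta - beta * gamma != 0

assert independent((0, 1), (1, 0))      # the two axes
assert independent((1, 2), (1, 3))      # slopes 2 and 3
assert not independent((1, 2), (2, 4))  # same line, slope 2
```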

Without knowledge of bases:

Let V be a subspace of \mathbb R^2. Suppose (\alpha, \beta), (\gamma, \delta) \in V lie on distinct straight lines through the origin. Suppose first that \alpha = 0; then \beta \ne 0 (otherwise (\alpha, \beta) would be the origin) and \gamma \ne 0 (otherwise both points would lie on the y-axis). Aiming to write an arbitrary point (x, y) \in \mathbb R^2 as a linear combination, set:

t(0, \beta) + s(\gamma, \delta) = (x,y)

This is the case iff s = \dfrac x \gamma and \displaystyle t\beta + \frac {x\delta} \gamma = y, so \displaystyle t = \frac 1 \beta \left(y - \frac {x\delta} \gamma\right). Hence, for all (x, y) \in \mathbb R^2, we can write:

\displaystyle (x, y) = \frac 1 \beta \left(y - \frac {x\delta} \gamma\right) (0, \beta) + \frac x \gamma (\gamma, \delta)

so by closure under linear combinations \mathbb R^2 \subseteq V in this case, and therefore \mathbb R^2 = V.
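The coefficient formulas for the \alpha = 0 case can be spot-checked numerically; the vectors and target point below are arbitrary example values, not from the text:

```python
# alpha = 0 case: write (x, y) = t*(0, beta) + s*(gamma, delta)
# using the coefficients derived above (requires beta, gamma != 0).
beta, gamma, delta = 2.0, 3.0, 1.0   # example vectors (0, 2) and (3, 1)
x, y = 5.0, 7.0                      # arbitrary target point

s = x / gamma
t = (y - x * delta / gamma) / beta

assert abs(t * 0 + s * gamma - x) < 1e-12     # first coordinate matches
assert abs(t * beta + s * delta - y) < 1e-12  # second coordinate matches
```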

The case \gamma = 0 follows similarly. Suppose now that \alpha, \gamma \ne 0. Then (\alpha, \beta) = \alpha (1, m_1) and (\gamma, \delta) = \gamma (1, m_2), where m_1 = \beta/\alpha and m_2 = \delta/\gamma; since the two lines are distinct, m_1 \ne m_2. Note also that (1, m_1), (1, m_2) \in V, since V is closed under scalar multiplication.

Then write:

t (1, m_1) + s (1, m_2) = (x,y)

This is the case iff t + s = x, giving s = x - t, so that t m_1 + (x - t)m_2 = y. Rearranging, t(m_1 - m_2) = y - xm_2, so that:

\displaystyle t = \frac {y - xm_2} {m_1 - m_2}

and then:

\displaystyle s = x - t = \frac {xm_1 - xm_2 - y + xm_2} {m_1 - m_2} = \frac {xm_1 - y} {m_1 - m_2}

So for any (x,y) \in \mathbb R^2 we can write:

\displaystyle (x,y) = \frac {y - xm_2} {m_1 - m_2} (1, m_1) + \frac {xm_1 - y} {m_1 - m_2} (1, m_2)

That is, any element of \mathbb R^2 can be written as a linear combination of (1, m_1) and (1, m_2), and hence of (\alpha, \beta) and (\gamma, \delta), so again V = \mathbb R^2. In summary, the subspaces of \mathbb R^2 are exactly the trivial subspace \{(0,0)\}, the straight lines through the origin, and \mathbb R^2 itself.
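Again as a side check, the closed-form coefficients in the slope form can be verified numerically; the slopes and target point below are arbitrary example values:

```python
# General case: write (x, y) = t*(1, m1) + s*(1, m2) with m1 != m2,
# using the closed-form coefficients derived above.
m1, m2 = 2.0, -0.5   # example distinct slopes
x, y = 5.0, 7.0      # arbitrary target point

t = (y - x * m2) / (m1 - m2)
s = (x * m1 - y) / (m1 - m2)

assert abs((t + s) - x) < 1e-12            # first coordinate matches
assert abs((t * m1 + s * m2) - y) < 1e-12  # second coordinate matches
```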