Mathematics of Machine Learning
Although our vector spaces contain infinitely many vectors, we can reduce the complexity by finding special subsets that can express any other vector.
To make this idea precise, let's consider our recurring example ℝⁿ. There, we have a special set of vectors

e₁ = (1, 0, …, 0),
e₂ = (0, 1, …, 0),
⋮
eₙ = (0, 0, …, 1),
which can be used to express each vector x = (x₁, …, xₙ) as

x = x₁e₁ + x₂e₂ + ⋯ + xₙeₙ = ∑ᵢ₌₁ⁿ xᵢeᵢ.
For instance, e₁ = (1, 0) and e₂ = (0, 1) in ℝ².
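As a quick numerical sketch (not from the book, using NumPy), here is the decomposition above in ℝ⁴: the standard basis vectors are the rows of the identity matrix, and summing xᵢeᵢ recovers x exactly. The particular vector chosen is arbitrary.

```python
import numpy as np

n = 4
x = np.array([3.0, -1.0, 2.0, 5.0])  # an arbitrary example vector

# The standard basis vectors e_1, ..., e_n are the rows of the identity.
basis = np.eye(n)  # basis[i] is e_{i+1}

# Reconstruct x as the linear combination sum_i x_i * e_i.
reconstructed = sum(x[i] * basis[i] for i in range(n))

print(np.allclose(reconstructed, x))  # → True
```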
What we have just seen feels extremely trivial, and it seems to only complicate things. Why would we need to write vectors in the form x = ∑ᵢ₌₁ⁿ xᵢeᵢ instead of simply using the coordinates (x₁, …, xₙ)? Because, in fact, the coordinate notation depends on the underlying set of vectors ({e₁, …, eₙ} in our case) used to express the others.
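To make this dependence concrete, here is a small sketch (my own illustration, not the book's) of one vector in ℝ² receiving different coordinates with respect to two different bases. The alternative basis vectors b₁, b₂ are chosen purely for illustration.

```python
import numpy as np

x = np.array([2.0, 3.0])

# With respect to the standard basis {e1, e2}, the coordinates of x
# are simply its entries: (2, 3).
standard_coords = x

# An alternative basis {b1, b2}, chosen here for illustration.
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
B = np.column_stack([b1, b2])

# Solving B @ c = x gives the coordinates c of x in the new basis.
new_coords = np.linalg.solve(B, x)

print(new_coords)  # → [ 2.5 -0.5], i.e. x = 2.5*b1 - 0.5*b2
```

The same vector x, two different coordinate tuples: (2, 3) versus (2.5, −0.5).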
A vector is not the same as its coordinates! A single vector can have multiple different coordinates in different systems, and switching between...
