In part 7 of this series, one of the examples for the use of inner product of two vectors was that of a final score based on a series of tests with different weights.
We have a vector of weights, one per test, and a vector of Jesse’s scores, one per test, and we combine them to get a final score. The way we combine these two vectors is called the inner product.
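Since the actual numbers live in the figures, here is a minimal sketch of the inner product in plain Python, with made-up weights and scores (the values of `weights` and `jesse` are illustrative assumptions, not the figures from the example):

```python
# Inner product: multiply matching entries, then add everything up.
# The numbers below are made up for illustration only.
weights = [0.1, 0.2, 0.3, 0.4]  # one weight per test (Sep..Dec)
jesse = [80, 90, 70, 85]        # Jesse's score on each test

final_score = sum(w * s for w, s in zip(weights, jesse))
print(final_score)
```

Each weight is paired with the matching score, the pairs are multiplied, and the products are summed into a single number.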
However, a teacher rarely computes the final score for just a single student. Each student has their own test scores, for example:
All these vectors have four entries, one for September, one for October, one for November, and one for December. Yet clearly, one of these vectors is not like the others. It plays a different role, and its numbers have a different meaning. The weight vector contains weights, the others all contain test scores.
A representation that reflects the similarity between the score vectors could look like this:
but since this is both hard to draw and hard to read, we could turn it sideways, so to speak, and arrive at the following representation:
We now have one weight vector, five score vectors (one for each student), and one final score vector. The five score vectors are arranged in what is known as a matrix. The word matrix is an old word, and its use here has nothing to do with the movie. Though we can view a matrix as a collection of vectors, we can equally view a vector as a special kind of matrix, a matrix that is especially skinny. A matrix is said to have rows and columns: each row is a vector (called a row vector) and each column is a vector (called a column vector). In our example, the matrix contains all the test scores; its second column is the score vector for Jordan, and its third row is the row vector for the November test.
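A matrix of this kind can be sketched as a list of rows in Python. The scores below are made-up placeholders, laid out as described above: tests as rows (September through December) and students as columns, with Jordan assumed to be the second student:

```python
# Made-up score matrix: each row is one test (Sep..Dec),
# each column is one student.
scores = [
    [80, 70, 90, 60, 100],  # September
    [90, 80, 70, 100, 60],  # October
    [70, 90, 80, 80, 90],   # November
    [85, 75, 95, 70, 80],   # December
]

november = scores[2]                 # third row: the November row vector
jordan = [row[1] for row in scores]  # second column: Jordan's score vector
```

A row is stored directly, while a column has to be collected entry by entry across the rows, which already hints that rows and columns play different roles in this representation.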
We can see that the final score vector was obtained from the test score matrix by applying the weight vector to each of its columns independently. The resulting final score vector carries forward the Jesse-ness and Jordan-ness of the score matrix, but loses any November-ness.
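This column-by-column application can be sketched in plain Python (the weights and scores are made-up placeholders; a linear-algebra library would hide the loop, so the inner products are written out here):

```python
# Matrix-vector multiplication by hand: take the inner product of the
# weight vector with each column of the (made-up) score matrix.
weights = [0.1, 0.2, 0.3, 0.4]
scores = [
    [80, 70, 90, 60, 100],  # September
    [90, 80, 70, 100, 60],  # October
    [70, 90, 80, 80, 90],   # November
    [85, 75, 95, 70, 80],   # December
]

num_students = len(scores[0])
final = [
    sum(w * row[j] for w, row in zip(weights, scores))  # inner product with column j
    for j in range(num_students)
]
print(final)
```

The result has one entry per student (column) and none per test (row): exactly the "loses any November-ness" observation above.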
You have now seen our first matrix multiplication, though we still haven’t quite arrived at the accepted standard notation for it. We’ll expand on this in the next post.