epicartificials blog

Vectors & Matrices

18.04.2020

For this week's homework we got to implement matrices and vectors. We will skip over the basic operations with vectors and matrices, since they can easily be found on Wikipedia and other tutorial sites. What I would like to look at is the time complexity of the matrix operations and how to do them better.

Our school assignment:

In this assignment, we will implement a few bits of basic linear
algebra on top of the ‹number› class from assignment 3. We will add
2 additional data types: ‹vector› and ‹matrix› (please try to avoid
confusing ‹vector› with ‹std::vector›).

Implement the following basic operations:

 • vector addition and subtraction (operators ‹+› and ‹-›),
 • multiplication of a vector by a scalar (from both sides, ‹*›),
 • dot product of two vectors (operator ‹*›),
 • matrix addition (operator ‹+›),
 • multiplication of a vector by a matrix (again ‹*›, both sides),
 • equality on both vectors and matrices.

Implement these additional operations (‹m› is a ‹matrix›):

 • ‹m.gauss()› performs in-place Gaussian elimination: after the
   call, ‹m› should be in a reduced row echelon form
 • ‹m.rank()› returns the rank of the matrix (as ‹int›)
 • ‹m.det()› returns the determinant of the matrix
 • ‹m.inv()› returns the inverse of the matrix
 • ‹m.row( int n )› returns the ‹n›-th row as a ‹vector›
 • ‹m.col( int n )› returns the ‹n›-th column as a ‹vector›

With the exception of ‹gauss›, all methods should be ‹const›.

Object construction:

 • ‹vector› from a single ‹int› (dimension) to get a zero vector
 • ‹vector› from an ‹std::vector› of ‹number›
 • ‹matrix› from 2 ‹int› values (columns and rows, gives zero matrix)
 • ‹matrix› from an ‹std::vector› of ‹vector› (rows)

The one-argument constructor variants should be ‹explicit›.

The behaviour is undefined if the ‹vector› instances passed to a
‹matrix› constructor are not all of the same dimension and when
‹det› or ‹inv› are called on a non-square matrix or ‹inv› on a
singular matrix. Likewise, operations on dimensionally mismatched
arguments are undefined. All dimensions must be positive.
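Before diving in, here is a declaration-only sketch of the interface the assignment describes. This is not the submitted code: number is aliased to double just to keep the sketch self-contained (the real thing uses the class from assignment 3), and everything is wrapped in a namespace so that vector does not collide with std::vector.

#include <vector>

using number = double; // stand-in for the ‹number› class from assignment 3

namespace la {

struct vector
{
    explicit vector( int dim );                  // zero vector of given dimension
    explicit vector( std::vector< number > v );

    friend vector operator+( const vector &, const vector & );
    friend vector operator-( const vector &, const vector & );
    friend vector operator*( const number &, const vector & );
    friend vector operator*( const vector &, const number & );
    friend number operator*( const vector &, const vector & ); // dot product
    friend bool operator==( const vector &, const vector & );

    std::vector< number > data;
};

struct matrix
{
    matrix( int cols, int rows );                // zero matrix
    explicit matrix( std::vector< vector > rows );

    void   gauss();                              // in-place reduced row echelon form
    int    rank() const;
    number det() const;
    matrix inv() const;
    vector row( int n ) const;
    vector col( int n ) const;

    friend matrix operator+( const matrix &, const matrix & );
    friend vector operator*( const matrix &, const vector & );
    friend vector operator*( const vector &, const matrix & );
    friend bool operator==( const matrix &, const matrix & );

    std::vector< vector > rows_;
};

} // namespace la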

We know that a vector is an array of numbers and a matrix is a 2D array of numbers, or better yet, an array of row (or column) vectors.

$$\begin{pmatrix} a & b & c \\ d & e & f \end{pmatrix}$$

Implementing the basic operations should be quite simple. All of them can be found here on Wikipedia. The problem we need to focus on is the Big-O complexity, or in the test's case the execution time, which are correlated.

vector addition and subtraction : $\mathcal{O}(n)$
multiplication of a vector by a scalar : $\mathcal{O}(n)$
dot product of two vectors : $\mathcal{O}(n)$
matrix addition : $\mathcal{O}(rows \cdot columns)$
multiplication of a vector by a matrix : $\mathcal{O}(rows \cdot columns)$
equality : $\mathcal{O}(n)$ for vectors, $\mathcal{O}(rows \cdot columns)$ for matrices

This was quite simple: every operation is linear in the number of stored elements, and since each element has to be touched at least once, I don't think we can do better than that. Let's see how to solve the rest.
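But first, to make the linear-time claims concrete, here is a minimal free-standing sketch of a few of these operations, again with double standing in for the number class and bare std::vectors as storage.

#include <cassert>
#include <cstddef>
#include <vector>

using number = double;                // stand-in for the ‹number› class
using vec    = std::vector< number >; // bare storage, just for the sketch

// Vector addition: one pass over the elements, O(n).
vec add( const vec &a, const vec &b )
{
    assert( a.size() == b.size() );
    vec out( a.size() );
    for ( std::size_t i = 0; i < a.size(); ++i )
        out[ i ] = a[ i ] + b[ i ];
    return out;
}

// Dot product: one pass, O(n).
number dot( const vec &a, const vec &b )
{
    assert( a.size() == b.size() );
    number sum = 0;
    for ( std::size_t i = 0; i < a.size(); ++i )
        sum += a[ i ] * b[ i ];
    return sum;
}

// Matrix-vector product: every stored entry is read exactly once,
// hence O(rows * columns).
vec mul( const std::vector< vec > &m, const vec &v )
{
    vec out( m.size() );
    for ( std::size_t r = 0; r < m.size(); ++r )
        out[ r ] = dot( m[ r ], v );
    return out;
}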

Gaussian elimination

We can see on Wikipedia that Gaussian elimination proceeds column by column: pick a pivot, swap its row into position, and use elementary row operations to zero out every entry below it.

There are up to $\min(rows, columns)$ pivots, and eliminating the entries below each pivot costs about $\frac{n^2}{2}$ arithmetic operations, so on an $n \times n$ matrix the total complexity in the end is $\mathcal{O}(n \cdot \frac{n^2}{2}) = \mathcal{O}(\frac{n^3}{2})$.

To obtain reduced row echelon form we need:

 • the matrix in row echelon form,
 • the leading entry in each nonzero row is a 1,
 • each column containing a leading 1 has zeros in all its other entries.

This means that we need to fill the whole column with 0 (except the pivot itself), above the pivot as well as below it. The elimination work per pivot roughly doubles: instead of $\mathcal{O}(\frac{n^2}{2})$ we do the full $\mathcal{O}(n^2)$, and therefore the final complexity will be $\mathcal{O}(n^3)$.
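A sketch of what such a gauss() could look like on plain std::vector storage (not the submitted code). The exact comparisons with zero are fine for an exact number type; with floating point one would pivot on the largest magnitude instead.

#include <cstddef>
#include <utility>
#include <vector>

using row    = std::vector< double >;
using matrix = std::vector< row >;

// In-place Gauss-Jordan elimination to reduced row echelon form, O(n^3).
// Assumes a non-empty matrix (the assignment requires positive dimensions).
void gauss( matrix &m )
{
    std::size_t rows = m.size(), cols = m[ 0 ].size();
    std::size_t pivot_row = 0;

    for ( std::size_t col = 0; col < cols && pivot_row < rows; ++col )
    {
        // Find a row with a non-zero entry in this column.
        std::size_t sel = pivot_row;
        while ( sel < rows && m[ sel ][ col ] == 0 )
            ++sel;
        if ( sel == rows )
            continue;                         // the column is all zeros, skip it
        std::swap( m[ sel ], m[ pivot_row ] );

        // Scale the pivot row so that the leading entry is 1.
        double p = m[ pivot_row ][ col ];
        for ( std::size_t j = col; j < cols; ++j )
            m[ pivot_row ][ j ] /= p;

        // Eliminate the column in every other row -- above *and* below the
        // pivot; the rows above are the extra work that full RREF costs.
        for ( std::size_t r = 0; r < rows; ++r )
        {
            if ( r == pivot_row || m[ r ][ col ] == 0 )
                continue;
            double f = m[ r ][ col ];
            for ( std::size_t j = col; j < cols; ++j )
                m[ r ][ j ] -= f * m[ pivot_row ][ j ];
        }
        ++pivot_row;
    }
}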

Rank

This corresponds to the maximal number of linearly independent columns (equivalently, rows) of the matrix.

My first approach was to use Gaussian elimination and then count the number of non-zero rows (a row is non-zero exactly when it still contains a pivot). This of course has $\mathcal{O}(n^3 + \min(columns, rows)) = \mathcal{O}(n^3)$ complexity.
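Counting the pivots is then all that remains. A sketch, reusing the matrix, row and gauss definitions from the Gaussian elimination snippet above:

// Rank = number of non-zero rows once the matrix is in RREF.
int rank( matrix m )                  // taken by value: gauss() runs on a copy
{
    gauss( m );
    int r = 0;
    for ( const row &rw : m )
        for ( double x : rw )
            if ( x != 0 )
            {
                ++r;                  // a non-zero entry means a pivot row
                break;
            }
    return r;
}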

Determinant

I first tried to implement the method that we learned in math class a few years back, which was Laplace expansion. After implementing it, I found out that this method unfortunately has a complexity of $\mathcal{O}(n!)$.
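For the curious, the abandoned cofactor expansion looks roughly like this (same matrix and row aliases as before). Each call makes $n$ recursive calls on $(n-1) \times (n-1)$ minors, so $T(n) = n \cdot T(n-1)$, which is exactly where the $n!$ comes from.

// Determinant by Laplace expansion along the first row -- O(n!),
// usable only for very small matrices.
double det_laplace( const matrix &m )
{
    std::size_t n = m.size();
    if ( n == 1 )
        return m[ 0 ][ 0 ];

    double result = 0;
    for ( std::size_t col = 0; col < n; ++col )
    {
        matrix minor;                 // m without row 0 and column `col`
        for ( std::size_t r = 1; r < n; ++r )
        {
            row mr;
            for ( std::size_t c = 0; c < n; ++c )
                if ( c != col )
                    mr.push_back( m[ r ][ c ] );
            minor.push_back( mr );
        }
        double sign = col % 2 == 0 ? 1.0 : -1.0;
        result += sign * m[ 0 ][ col ] * det_laplace( minor );
    }
    return result;
}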

After some searching I found a method called LU decomposition, where we split our matrix as $A = P^{-1}LU$ and compute the determinant as $det(A) = (-1)^S det(L) det(U)$, where $S$ is the number of row exchanges in the decomposition.

This turns out to have a complexity of $\mathcal{O}(n^3)$, which is much better.

I have used this algorithm to obtain the $L$ and $U$ matrices. Since it performs $0$ row exchanges, $S = 0$ and we are left with just $det(A) = det(L) det(U)$.

(Figure: pseudocode of the LU decomposition algorithm.)

We can now easily compute the determinants of both $L$ and $U$, since for a triangular matrix we only need to multiply the entries on the diagonal. We can also notice that with this algorithm the diagonal of $L$ is always made of $1$s, therefore $det(L) = 1$.

There is only one problem: this pseudocode cannot compute $det(A) = 0$, because a zero pivot causes a division by zero. To solve this we need to add a check for a singular matrix. I found a solution here, where the following check is added to the elimination loop; when the matrix is singular, $det(A) = 0$, and we handle that case separately.

p = 0
for i = k to n:
    if |A[i][k]| > p then p = |A[i][k]|

if p = 0 then STOP "singular matrix"
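Putting the pieces together, a det() along these lines might look as follows. This is only a sketch, not the submitted code, and unlike the no-exchange variant above it does partial pivoting (so $S$ can be non-zero): the elimination happens in place, the pivots of $U$ are multiplied at the end, and we return $0$ as soon as the singularity check fires.

#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Determinant via in-place LU-style elimination with partial pivoting, O(n^3).
double det( std::vector< std::vector< double > > a ) // by value: we destroy it
{
    std::size_t n = a.size();
    int swaps = 0;                    // S, the number of row exchanges

    for ( std::size_t k = 0; k < n; ++k )
    {
        // Pivot search -- this doubles as the singular-matrix check.
        std::size_t best = k;
        for ( std::size_t i = k; i < n; ++i )
            if ( std::fabs( a[ i ][ k ] ) > std::fabs( a[ best ][ k ] ) )
                best = i;
        if ( a[ best ][ k ] == 0 )
            return 0;                 // singular matrix: det(A) = 0
        if ( best != k )
        {
            std::swap( a[ best ], a[ k ] );
            ++swaps;
        }

        // Eliminate below the pivot; L's diagonal of 1s stays implicit.
        for ( std::size_t i = k + 1; i < n; ++i )
        {
            double f = a[ i ][ k ] / a[ k ][ k ];
            for ( std::size_t j = k; j < n; ++j )
                a[ i ][ j ] -= f * a[ k ][ j ];
        }
    }

    // det(A) = (-1)^S * product of U's diagonal (det(L) = 1).
    double d = swaps % 2 == 0 ? 1.0 : -1.0;
    for ( std::size_t k = 0; k < n; ++k )
        d *= a[ k ][ k ];
    return d;
}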

Inverse matrix

To find the inverse matrix I thought of using the way I was taught in school: augment our matrix with the identity matrix and, through row operations, turn the left side into the identity; the right side then becomes the inverse.

Turns out the only thing that needed to be done was to use a slightly modified Gauss-Jordan elimination. Instead of applying each row operation only to the original square matrix, we apply it to both matrices at once, in other words to the whole augmented row.

The algorithm that I used can be found here.

As it uses Gaussian elimination plus a constant number of extra passes over the matrix, the complexity is $\mathcal{O}(n^3)$.
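A sketch of this augmented-matrix approach, reusing the gauss routine from the Gaussian elimination snippet: glue the identity onto the right half, reduce, and read the inverse off the right (the behaviour on singular input is undefined, which the assignment permits).

// Inverse via Gauss-Jordan elimination of the augmented matrix [A | I], O(n^3).
matrix inv( const matrix &a )
{
    std::size_t n = a.size();

    // Build the augmented matrix [A | I].
    matrix aug = a;
    for ( std::size_t r = 0; r < n; ++r )
        for ( std::size_t c = 0; c < n; ++c )
            aug[ r ].push_back( r == c ? 1.0 : 0.0 );

    gauss( aug );          // left half becomes I, right half becomes A^-1

    // Read off the right half.
    matrix result( n, row( n ) );
    for ( std::size_t r = 0; r < n; ++r )
        for ( std::size_t c = 0; c < n; ++c )
            result[ r ][ c ] = aug[ r ][ c + n ];
    return result;
}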

#math #C++ #algorithm