
Commit 9a8b41e

Clément authored and committed
Added note on matrix multiplication.
1 parent 18d15f5 commit 9a8b41e

2 files changed: +84 -8 lines changed
Lines changed: 66 additions & 0 deletions
@@ -0,0 +1,66 @@
+using System;
+
+class Program
+{
+    // This method computes the product of two matrices.
+    static int[,] MatrixMultiplication(int[,] matrix1, int[,] matrix2)
+    {
+        int row1 = matrix1.GetLength(0);
+        int column1 = matrix1.GetLength(1);
+        int row2 = matrix2.GetLength(0);
+        int column2 = matrix2.GetLength(1);
+
+        if (column1 != row2)
+        {
+            throw new ArgumentException("These matrices cannot be multiplied!");
+        }
+        else
+        {
+            int temp = 0;
+            int[,] matrixproduct = new int[row1, column2];
+
+            for (int i = 0; i < row1; i++)
+            {
+                for (int j = 0; j < column2; j++)
+                {
+                    temp = 0;
+                    for (int k = 0; k < column1; k++)
+                    {
+                        temp += matrix1[i, k] * matrix2[k, j];
+                    }
+                    matrixproduct[i, j] = temp;
+                }
+            }
+
+            return matrixproduct;
+        }
+    }
+
+    // Useful to display matrices.
+    static void Display(int[,] matP)
+    {
+        for (int row = 0; row < matP.GetLength(0); row++)
+        {
+            for (int col = 0; col < matP.GetLength(1); col++)
+            {
+                Console.Write(
+                    String.Format("|{0,4} ", matP[row, col])
+                );
+            }
+            Console.WriteLine(" |");
+        }
+    }
+
+    static void Main()
+    {
+        int[,] A = { { 12, 10, 3 }, { 11, 9, 7 } };
+        int[,] B = { { 2, 3 }, { 4, 5 }, { 6, 8 } };
+        int[,] AxB = MatrixMultiplication(A, B);
+
+        Display(A);
+        Console.WriteLine("times");
+        Display(B);
+        Console.WriteLine(" is ");
+        Display(AxB);
+    }
+}
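As a quick sanity check on the example in `Main` (a hand computation, not part of the commit), the top-left entry of the product is $12 \cdot 2 + 10 \cdot 4 + 3 \cdot 6 = 82$, and the full product of the $2 \times 3$ and $3 \times 2$ matrices is:

```latex
A \times B =
\begin{pmatrix} 12 & 10 & 3 \\ 11 & 9 & 7 \end{pmatrix}
\begin{pmatrix} 2 & 3 \\ 4 & 5 \\ 6 & 8 \end{pmatrix}
=
\begin{pmatrix} 82 & 110 \\ 100 & 134 \end{pmatrix}
```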

source/docs/programming_and_computer_usage/complexity.md

Lines changed: 18 additions & 8 deletions
@@ -136,8 +136,8 @@ We will ignore [how much time](https://cs.stackexchange.com/a/21799) is required
 
 Base | Size of $n$ |
 --- | ------------ |
-Base $b$ | $\log_2(n) \ \log_2(b)$ |
-Base $b'$ | $\log_2(n) \ \log_2(b')$ |
+Base $b$ | $\log_2(n) / \log_2(b)$ |
+Base $b'$ | $\log_2(n) / \log_2(b')$ |
 
 Hence, converting a number $n$ in base $b$ into a number in base $b'$ results in a number that uses $\log_{2}(b) / \log_{2}(b')$ more (or less!) space.
 Notice, and this is very important, that this expression *does not depend on $n$, but only on $b$ and $b'$*, hence the "constant factor" property of the big O notation tells us that we do not care about such a change.
@@ -148,7 +148,7 @@ This corresponds intuitively to 32 bits being able to store at most a 10-digit n
 If our program in base $b$ uses memory of order $O(g(n))$, it means that a program performing the same task, with the same algorithm, but using integers in base $b'$, would have its memory usage bounded by $O((\log_2(b) / \log_2(b')) \times g(n))$.
 By adapting the constant factor principle of the big O notation, we can see that this is a negligible factor that can be omitted.
 
-However, if the $b'$ base is 1, then the new program will use $O(n \times g(n))$: if $g(n)$ is greater than linear, this will make a difference^[This can already be seen by observing that $\log_{2}(b) / \log_{2}(b')$, if $b' = 1$, is impossible, since we cannot divide by $\log_2(1) = 0$.]
+However, if the $b'$ base is 1, then the new program will use $O(n \times g(n))$: if $g(n)$ is greater than linear, this will make a difference^[This can already be seen by observing that $\log_{2}(b) / \log_{2}(b')$, if $b' = 1$, is impossible, since we cannot divide by $\log_2(1) = 0$.].
 Of course, unary representation is *not* reasonable, so we will always assume that our representations are related by some constant, making the function order of magnitude insensible to such details.
 
 You can have a look at [the complexity of various arithmetic functions](https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Arithmetic_functions) and see that the representation is not even discussed, as those results are insensible to them, provided they are "reasonable".
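As a concrete instance of the table's formula (an illustrative example, not part of the commit): the number $n = 255$ takes $\lceil \log_2(255) \rceil = 8$ digits in base $2$ but only $2$ digits in base $16$ (it is `FF`), and the ratio matches the constant factor predicted by the table:

```latex
\frac{\text{size in base } 2}{\text{size in base } 16}
= \frac{8}{2} = 4
= \frac{\log_2(16)}{\log_2(2)}
```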
@@ -209,23 +209,33 @@ Note, also, that both algorithms have the same worst case and average case compl
 
 The [binary search algorithm](./lectures/data/search#binary-search) looks for a particular value in a *sorted* array by leveraging this additional information: it "jumps" in the middle of the array, and if the value is found, it terminates; if the value is less than the target value, it keeps looking in the right half of the array, and it keeps looking in the left half of the array otherwise.
 
-What is the time complexity of such an algorithm? It halves the array at every step, and we know that if the array is of size $1$, then it will terminate (either because the value was found, or because it was not in the array).
+What is the (worst case) time complexity of such an algorithm? It halves the array at every step, and we know that if the array is of size $1$, then it will terminate (either because the value was found, or because it was not in the array).
 That means that, if the array is of size $n$, in the worst case,
 
 - after $1$ step, we have an array of size $n / 2$ left to explore,
 - after $2$ steps, we have an array of size $n / 4$ left to explore,
 - after $3$ steps, we have an array of size $n / 8$ left to explore,
-- … after $k$ steps, we have an array of size $n / (2^k) left to explore.
+- … after $k$ steps, we have an array of size $n / (2^k)$ left to explore.
 
-Hence, we need to determine what is a $k$ such that $n / (2^k) \leqslant 1$ (since we terminate when the array is of size $1$)
+Hence, we need to determine what is a $k$ such that
 
+$$n / (2^k) \leqslant 1$$
 
+since we terminate when the array is of size $1$. It is easy to see that if $2^k \geqslant n$, then this will be true, hence $k = \log_2(n)$, as $2^{\log_2(n)} = n$.
 
+#### Matrix Multiplication
 
+Consider the ["schoolbook algorithm for multiplication"](https://en.wikipedia.org/wiki/Computational_complexity_of_matrix_multiplication#Schoolbook_algorithm):
 
+```{download="./code/projects/MatrixMultiplication.zip"}
+!include`snippetStart="// This method computes the product of two matrices.",snippetEnd="// Useful to display matrices."` code/projects/MatrixMultiplication/MatrixMultiplication/Program.cs
+```
 
 
+We can see that
 
-#### Matrix Multiplication
+- The first loop iterates `row1` times,
+- The second loop iterates `column2` times,
+- The third loop iterates `column1` times,
 
-Consider the ["schoolbook algorithm for multiplication"](https://en.wikipedia.org/wiki/Computational_complexity_of_matrix_multiplication#Schoolbook_algorithm)
+If we multiply square matrices, then `row1`, `column2` and `column1` are all equal to the same value, $n$, that we take as input of the problem: then we can see by the product rule above that this algorithm requires time $O(n^3)$ to complete.
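To make the $O(n^3)$ bound concrete, the three nested loops of the snippet perform one multiplication and one addition per innermost iteration, so the total count (using the variable names from the code; the worked numbers are an illustration, not part of the commit) is:

```latex
\text{multiplications}
= \mathtt{row1} \times \mathtt{column2} \times \mathtt{column1}
```

For the $2 \times 3$ by $3 \times 2$ example of the `Main` method this gives $2 \times 2 \times 3 = 12$ multiplications; for two $n \times n$ matrices it gives $n \times n \times n = n^3$, matching the product rule.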

0 commit comments
