Euclidean Algorithm

Introduction

Many modern cryptosystems were developed based on the fact that factoring large composite numbers is believed to be intractable on classical computers; the best known classical algorithms take super-polynomial time. Therefore, breaking these cryptosystems has been extremely hard. With a quantum computer, Shor’s algorithm is able to factorize large composite numbers in polynomial time. However, Shor’s algorithm does contain a step that uses a classical algorithm to find the greatest common divisor of two integers. The most efficient classical algorithms for finding the greatest common divisor of two integers are the Euclidean algorithm and its variants.

In this blog post, I would like to discuss the Euclidean algorithm and systematically prove its correctness and efficiency.

Prerequisites

Greatest Common Divisor Reduction

Let $a = bq + r$, where $a$, $b$, $q$, and $r$ are integers. Then $\text{gcd}(a,b) = \text{gcd}(b,r)$.

Proof

Suppose $d | a$ and $d | b$. Then we have

$$
\begin{align}
a &= d k_1 \\
b &= d k_2 \\
\end{align}
$$

for some $k_1, k_2 \in \mathbb{Z}$.

$$
\begin{align}
r &= a - bq \\
&= d k_1 - d k_2 q \\
&= d (k_1 - k_2 q) \\
\end{align}
$$

This means that any common divisor of $a$ and $b$ must also be a divisor of $r$, and therefore any common divisor of $a$ and $b$ must also be a common divisor of $b$ and $r$. In particular, $\text{gcd}(a,b)$, being a common divisor of $a$ and $b$, is a common divisor of $b$ and $r$. Since the greatest common divisor of $b$ and $r$ is $\text{gcd}(b,r)$, we have $\text{gcd}(a,b) \leq \text{gcd}(b,r)$.

Similarly, because $a = bq + r$, we could show that any common divisor of $b$ and $r$ must also be a divisor of $a$, and therefore any common divisor of $b$ and $r$ must also be a common divisor of $a$ and $b$. In particular, $\text{gcd}(b,r)$, being a common divisor of $b$ and $r$, is a common divisor of $a$ and $b$. Since the greatest common divisor of $a$ and $b$ is $\text{gcd}(a,b)$, we have $\text{gcd}(b,r) \leq \text{gcd}(a,b)$.

Because $\text{gcd}(a,b) \leq \text{gcd}(b,r)$ and $\text{gcd}(b,r) \leq \text{gcd}(a,b)$, we must have $\text{gcd}(a,b) = \text{gcd}(b,r)$.

This concludes the proof.
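As a concrete example of this reduction, take $a = 15$ and $b = 6$:

$$
\begin{align}
15 &= 2 \cdot 6 + 3 \\
\text{gcd}(15, 6) &= \text{gcd}(6, 3) = 3 \\
\end{align}
$$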

Fibonacci Sequence

The numbers in the Fibonacci sequence, $f_0, f_1, f_2, f_3, \cdots$, with $f_0 = 0$ and $f_1 = 1$, are defined by

$$
f_n = f_{n-1} + f_{n-2}
$$

for integer $n \geq 2$.

The Fibonacci sequence has a property that is useful for analyzing the efficiency of the Euclidean algorithm.

$f_n > \alpha^{n-2}$ for $n \geq 3$ where $\alpha = \frac{1+\sqrt{5}}{2}$.

Proof

We could prove this property using strong induction. Let $P(n)$ be the statement that $f_n > \alpha^{n-2}$. We would like to show that $P(n)$ is true for $n \geq 3$.

Basis Step: because $f_3 = 2 > \alpha \approx 1.618$ and $f_4 = 3 > \alpha^2 = \alpha + 1 \approx 2.618$, both $P(3)$ and $P(4)$ are true.

Induction Step: Assume $P(j)$, namely $f_j > \alpha^{j-2}$, is true for all integers $j$ with $3 \leq j \leq k$, where $k \geq 4$. We must show that $P(k+1)$ is true.

Because $\alpha^2 = \alpha + 1$,

$$
\begin{align}
\alpha^{k-1} &= \alpha^2 \alpha^{k-3} \\
&= (\alpha + 1) \alpha^{k-3} \\
&= \alpha^{k-2} + \alpha^{k-3} \\
\end{align}
$$

Because of the induction hypothesis, we have

$$
\begin{align}
f_{k-1} &> \alpha^{k-3} \\
f_k &> \alpha^{k-2} \\
\end{align}
$$

Therefore,

$$
\begin{align}
f_{k+1} &= f_{k} +f_{k-1} \\
&> \alpha^{k-2} + \alpha^{k-3} \\
&= \alpha^{k-1} \\
\end{align}
$$

Therefore, $f_{k+1} > \alpha^{k-1}$, so $P(k+1)$ is true. This concludes the proof.
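The bound can also be spot-checked numerically. The following is a minimal sketch (the file name fibonacci_bound.cpp is only for illustration) that computes the first few Fibonacci numbers and compares them against $\alpha^{n-2}$; it is a sanity check, not part of the proof.

fibonacci_bound.cpp

#include <cmath>
#include <iostream>

int main()
{
    // alpha = (1 + sqrt(5)) / 2, the golden ratio.
    const double alpha = (1.0 + std::sqrt(5.0)) / 2.0;
    long long f_prev = 1; // f_2
    long long f_curr = 2; // f_3
    for (int n = 3; n <= 20; ++n)
    {
        const double bound = std::pow(alpha, n - 2);
        std::cout << "f_" << n << " = " << f_curr << ", alpha^(n-2) = " << bound
                  << ", f_n > alpha^(n-2): " << std::boolalpha
                  << (f_curr > bound) << std::endl;
        const long long f_next = f_curr + f_prev;
        f_prev = f_curr;
        f_curr = f_next;
    }
}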

Euclidean Algorithm

Based on the greatest common divisor reduction property in the prerequisites, the greatest common divisor problem could be solved recursively. Instead of using a recursive function, we implement the function in an iterative manner.

Euclidean Algorithm Pseudocode

When the remainder $r$ becomes 0, $\text{gcd}(b,r) = \text{gcd}(b,0) = b$, and we finish the greatest common divisor computation.

function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a

It does not matter if $a < b$. If $a < b$, the values of $a$ and $b$ will be swapped after the first iteration; for example, computing $\text{gcd}(6, 15)$ becomes computing $\text{gcd}(15, 6)$ after one iteration.

One might ask why the remainder $r$ in the Euclidean algorithm will eventually become 0 so that the iteration eventually stops. This will become clearer in our proof of Lamé’s theorem.
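Since the reduction $\text{gcd}(a,b) = \text{gcd}(b, a \bmod b)$ is naturally recursive, a recursive implementation would work equally well. The following is a minimal C++ sketch of the recursive form (the name gcd_recursive is only for illustration); because the recursive call is in tail position, a compiler will typically turn it into the same loop as the iterative version.

// Recursive form of the same reduction:
// gcd(a, b) = gcd(b, a mod b), with gcd(a, 0) = a as the base case.
int gcd_recursive(int a, int b)
{
    if (b == 0)
    {
        return a;
    }
    return gcd_recursive(b, a % b);
}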

Euclidean Algorithm Implementation

gcd.cpp
#include <iostream>

// Iterative Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)
// until the remainder becomes 0.
int gcd(int a, int b)
{
    int t;
    while (b != 0)
    {
        t = b;
        b = a % b;
        a = t;
    }
    return a;
}

int main()
{
    int a = 6;
    int b = 15;
    std::cout << gcd(a, b) << std::endl;
}
$ g++ gcd.cpp -o gcd
$ ./gcd
3
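As a quick sanity check, the result can also be compared against std::gcd from the C++17 <numeric> header, which computes the same greatest common divisor. The following is a minimal sketch; the file name check_gcd.cpp is only for illustration, and a C++17-capable compiler is assumed.

check_gcd.cpp

#include <iostream>
#include <numeric>

int main()
{
    // std::gcd (C++17, <numeric>) should agree with the implementation above.
    std::cout << std::gcd(6, 15) << std::endl;
}

$ g++ -std=c++17 check_gcd.cpp -o check_gcd
$ ./check_gcd
3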

Lamé’s Theorem

Let $a$ and $b$ be positive integers with $a \geq b$. Then the number of divisions used by the Euclidean algorithm to find $\text{gcd}(a,b)$ is less than or equal to five times the number of decimal digits in $b$.

Proof

Suppose the Euclidean algorithm is applied to find $\text{gcd}(a,b)$ with $a \geq b$. In practice, even if $a < b$, because $a = 0 \cdot b + a$, we have $\text{gcd}(a,b) = \text{gcd}(b,a)$ by the reduction above, so computing $\text{gcd}(a,b)$ is equivalent to computing $\text{gcd}(b,a)$ with $b \geq a$.

We initialize $r_0 = a$ and $r_1 = b$.

$$
\begin{align}
r_0 &= r_1 q_1 + r_2 & 0 \leq & r_2 < r_1 \\
r_1 &= r_2 q_2 + r_3 & 0 \leq & r_3 < r_2 \\
&\vdots & &\vdots \\
r_{n-2} &= r_{n-1} q_{n-1} + r_{n} & 0 \leq & r_{n} < r_{n-1} \\
r_{n-1} &= r_{n} q_{n} & & \\
\end{align}
$$

There must exist an ending condition where $r_{n-1} = r_{n} q_{n}$, i.e., the remainder eventually becomes 0. If there were no such ending condition, the remainders would form an infinite strictly decreasing sequence of positive integers $r_1 > r_2 > \cdots > r_{k} > \cdots > 0$. However, because $r_1$ is finite, a strictly decreasing sequence of positive integers starting from $r_1$ can have at most $r_1$ terms, which is a contradiction. Therefore, there must exist an ending condition where $r_{n-1} = r_{n} q_{n}$. This also answers the earlier question of why the remainder $r$ in the Euclidean algorithm will eventually become 0 and the iteration will eventually stop.

Note that the quotients $q_1, q_2, \cdots, q_{n-1} \geq 1$ because $r_{i-1} \geq r_i > r_{i+1}$ for $i \in [1, n-1]$. In addition, $q_{n} \geq 2$ because $r_{n-1} > r_n$ and $r_n$ divides $r_{n-1}$. Moreover, $r_i \geq 1$ for $i \in [1, n]$. This implies that

$$
\begin{align}
r_n &\geq 1 = f_2 \\
r_{n-1} &\geq 2 r_n \geq 2 f_2 = 2 = f_3 \\
r_{n-2} &\geq r_{n-1} + r_n \geq f_3 + f_2 = f_4 \\
&\vdots \\
r_2 &\geq r_3 + r_4 \geq f_{n-1} + f_{n-2} = f_n \\
b = r_1 &\geq r_2 + r_3 \geq f_{n} + f_{n-1} = f_{n + 1}\\
\end{align}
$$

where $f_i$ is the $i$th number in the Fibonacci sequence and $n$ is the number of divisions used by the Euclidean algorithm. We are interested in finding an upper bound on $n$ in terms of $b$.
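As a concrete illustration of this chain, consider $a = 13 = f_7$ and $b = 8 = f_6$, two consecutive Fibonacci numbers, for which every quotient except the last one equals 1:

$$
\begin{align}
13 &= 1 \cdot 8 + 5 \\
8 &= 1 \cdot 5 + 3 \\
5 &= 1 \cdot 3 + 2 \\
3 &= 1 \cdot 2 + 1 \\
2 &= 2 \cdot 1 \\
\end{align}
$$

Here $n = 5$ divisions are used and $b = 8 = f_6 = f_{n+1}$, so the bound $b \geq f_{n+1}$ is attained with equality, which shows that consecutive Fibonacci numbers are the worst case.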

Using the property of the Fibonacci sequence we have just proved in the prerequisites section, $f_n > \alpha^{n-2}$ for $n \geq 3$ where $\alpha = \frac{1+\sqrt{5}}{2}$, and assuming $n \geq 2$ (for $n = 1$ the theorem holds trivially since $b$ has at least one decimal digit), we have

$$
\begin{align}
b &\geq f_{n + 1} > \alpha^{n - 1} \\
\end{align}
$$

Furthermore, because $\log_{10} \alpha \approx 0.208 > \frac{1}{5}$, we have

$$
\log_{10} b > (n-1) \log_{10} \alpha > \frac{n-1}{5}
$$

Thus, we have

$$
n < 5 \log_{10} b + 1
$$

Because $\log_{10} b < \lfloor\log_{10} b\rfloor + 1$, it follows that $n < 5(\lfloor\log_{10} b\rfloor + 1) + 1$, and since $n$ is an integer, we further have

$$
n \leq 5(\lfloor\log_{10} b\rfloor + 1)
$$

where $\lfloor\log_{10} b\rfloor + 1$ is the number of decimal digits in $b$, and $n$ is the number of divisions used by the Euclidean algorithm to find $\text{gcd}(a,b)$. This concludes the proof of Lamé’s theorem.
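As an empirical illustration of the theorem (a quick check rather than part of the proof; the file name lame_check.cpp and the function names are only illustrative), the following sketch counts the divisions performed by the Euclidean algorithm and compares the count against five times the number of decimal digits in $b$.

lame_check.cpp

#include <iostream>

// Count the number of divisions used by the Euclidean algorithm.
int count_divisions(int a, int b)
{
    int divisions = 0;
    while (b != 0)
    {
        const int t = b;
        b = a % b; // one division step
        a = t;
        ++divisions;
    }
    return divisions;
}

// Number of decimal digits in a positive integer b.
int decimal_digits(int b)
{
    int digits = 1;
    while (b >= 10)
    {
        b /= 10;
        ++digits;
    }
    return digits;
}

int main()
{
    const int a = 13;
    const int b = 8;
    std::cout << "divisions: " << count_divisions(a, b)
              << ", Lame bound: " << 5 * decimal_digits(b) << std::endl;
    // Prints "divisions: 5, Lame bound: 5" for the worst case a = 13, b = 8.
}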

Asymptotic Complexity

Based on Lamé’s theorem, the Euclidean algorithm computes $\text{gcd}(a,b)$ with $a \geq b$ using $O(h)$ divisions, where $h$ is the number of decimal digits in $b$.

Euclidean Algorithm Variants

There are variants of the Euclidean algorithm, such as the extended Euclidean algorithm, which are often used in practice instead of the vanilla Euclidean algorithm because they compute additional useful quantities or run faster in practice. However, since they do not improve the asymptotic complexity compared to the vanilla Euclidean algorithm, we are not going into the details for the moment.
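For reference, the following is a minimal sketch of the extended Euclidean algorithm (the function name extended_gcd and its interface are only illustrative). In addition to $\text{gcd}(a,b)$, it computes Bézout coefficients $x$ and $y$ with $a x + b y = \text{gcd}(a,b)$, which is what applications such as computing modular inverses actually need.

extended_gcd.cpp

#include <iostream>

// Returns gcd(a, b) and writes Bezout coefficients x, y with a * x + b * y = gcd(a, b).
int extended_gcd(int a, int b, int& x, int& y)
{
    if (b == 0)
    {
        // gcd(a, 0) = a = a * 1 + 0 * 0.
        x = 1;
        y = 0;
        return a;
    }
    int x1, y1;
    const int g = extended_gcd(b, a % b, x1, y1);
    // From b * x1 + (a % b) * y1 = g and a % b = a - (a / b) * b:
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

int main()
{
    int x, y;
    const int g = extended_gcd(6, 15, x, y);
    std::cout << g << " = 6 * " << x << " + 15 * " << y << std::endl;
    // Prints "3 = 6 * -2 + 15 * 1".
}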
