\section{One Bit}
Before we discuss quantum computation, we first need to construct a few tools. \par
To keep things simple, we'll use regular (usually called \textit{classical}) bits for now.

\definition{}
$\mathbb{B}$ is the set of binary digits. In other words, $\mathbb{B} = \{\texttt{0}, \texttt{1}\}$. \par
\note[Note]{We've seen $\mathbb{B}$ before: It's $(\mathbb{Z}_2, +)$, the addition group mod 2.}
\vspace{2mm}

Multiplication in $\mathbb{B}$ works just as you'd expect: \par
$
\texttt{0} \times \texttt{0} =
\texttt{0} \times \texttt{1} =
\texttt{1} \times \texttt{0} = \texttt{0}
$; and $\texttt{1} \times \texttt{1} = \texttt{1}$.

\vspace{2mm}

We'll treat addition a bit differently: \par
$\texttt{0} + \texttt{0} = \texttt{0}$ and $\texttt{0} + \texttt{1} = \texttt{1}$, but $\texttt{1} + \texttt{1}$, for our purposes, is undefined.
\definition{}
Let $A$ and $B$ be sets. \par
The \textit{Cartesian product} $A \times B$ is the set of all pairs $(a, b)$ where $a \in A$ and $b \in B$. \par
As usual, we can write $A \times A \times A$ as $A^3$. \par

\vspace{2mm}

In this handout, we'll often see the following sets:
\begin{itemize}
\item $\mathbb{R}^2$, a two-dimensional plane
\item $\mathbb{R}^n$, an $n$-dimensional space
\item $\mathbb{B}^2$, the set $\{\texttt{00}, \texttt{01}, \texttt{10}, \texttt{11}\}$
\item $\mathbb{B}^n$, the set of all possible states of $n$ bits.
\end{itemize}
\problem{}
What is the size of $\mathbb{B}^n$?

\vfill
\pagebreak
\generic{Remark:}
Consider a single classical bit. It takes states in $\{\texttt{0}, \texttt{1}\}$, picking one at a time. \par
The states \texttt{0} and \texttt{1} are fully independent. They are completely disjoint; they share no parts. \par
We'll therefore say that \texttt{0} and \texttt{1} are \textit{orthogonal} (or equivalently, \textit{perpendicular}). \par

\vspace{2mm}

We can draw $\vec{0}$ and $\vec{1}$ as perpendicular axes on a plane to represent this:
\begin{center}
\begin{tikzpicture}[scale=1.5]
\fill[color = black] (0, 0) circle[radius=0.05];
\draw[->] (0, 0) -- (1.5, 0);
\node[right] at (1.5, 0) {$\vec{0}$ axis};
\fill[color = oblue] (1, 0) circle[radius=0.05];
\node[below] at (1, 0) {\texttt{0}};
\draw[->] (0, 0) -- (0, 1.5);
\node[above] at (0, 1.5) {$\vec{1}$ axis};
\fill[color = oblue] (0, 1) circle[radius=0.05];
\node[left] at (0, 1) {\texttt{1}};
\end{tikzpicture}
\end{center}
The point marked \texttt{1} is at $[0, 1]$. It is no parts $\vec{0}$, and all parts $\vec{1}$. \par
Of course, we can say something similar about the point marked \texttt{0}: \par
It is at $[1, 0] = (1 \times \vec{0}) + (0 \times \vec{1})$. In other words, all $\vec{0}$ and no $\vec{1}$. \par

\vspace{2mm}

Naturally, the coordinates $[0, 1]$ and $[1, 0]$ denote how much of each axis a point \say{contains.} \par
We could, of course, mark the point \texttt{x} at $[1, 1]$, which is equal parts $\vec{0}$ and $\vec{1}$: \par
\note[Note]{
We could also write $\texttt{x} = \vec{0} + \vec{1}$ explicitly. \\
I've drawn \texttt{x} as a point on the left, and as a sum on the right.
}
\null\hfill
\begin{minipage}{0.48\textwidth}
\begin{center}
\begin{tikzpicture}[scale=1.5]
\fill[color = black] (0, 0) circle[radius=0.05];
\draw[->] (0, 0) -- (1.5, 0);
\node[right] at (1.5, 0) {$\vec{0}$ axis};
\draw[->] (0, 0) -- (0, 1.5);
\node[above] at (0, 1.5) {$\vec{1}$ axis};
\fill[color = oblue] (1, 0) circle[radius=0.05];
\node[below] at (1, 0) {\texttt{0}};
\fill[color = oblue] (0, 1) circle[radius=0.05];
\node[left] at (0, 1) {\texttt{1}};
\draw[dashed, color = gray, ->] (0, 0) -- (0.9, 0.9);
\fill[color = oblue] (1, 1) circle[radius=0.05];
\node[above right] at (1, 1) {\texttt{x}};
\end{tikzpicture}
\end{center}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\begin{center}
\begin{tikzpicture}[scale=1.5]
\fill[color = black] (0, 0) circle[radius=0.05];
\fill[color = oblue] (1, 0) circle[radius=0.05];
\node[below] at (1, 0) {\texttt{0}};
\fill[color = oblue] (0, 1) circle[radius=0.05];
\node[left] at (0, 1) {\texttt{1}};
\draw[dashed, color = gray, ->] (0, 0) -- (0.9, 0.0);
\draw[dashed, color = gray, ->] (1, 0.1) -- (1, 0.9);
\fill[color = oblue] (1, 1) circle[radius=0.05];
\node[above right] at (1, 1) {\texttt{x}};
\end{tikzpicture}
\end{center}
\end{minipage}
\hfill\null

\vspace{4mm}
But \texttt{x} isn't a member of $\mathbb{B}$; it's not a valid state. \par
Our bit is fully $\vec{0}$ or fully $\vec{1}$. There's nothing in between.

\vspace{8mm}
\definition{}
The unit vectors $\vec{0}$ and $\vec{1}$ form an \textit{orthonormal basis} of the plane $\mathbb{R}^2$. \par
\note{
\say{Ortho-} means \say{orthogonal,} and \say{-normal} means \say{normalized}: each basis vector has length $= 1$. \\
}{
Note that $\vec{0}$ and $\vec{1}$ are orthonormal by \textit{definition}. \\
We don't have to prove anything; we simply defined them as such.
} \par
\vspace{2mm}

There's much more to say about basis vectors, but we don't need all the tools of linear algebra here. \par
We just need to understand that a set of $n$ orthogonal unit vectors defines an $n$-dimensional space. \par
This is fairly easy to think about: each vector corresponds to an axis of the space, and every point
in that space can be written as a \textit{linear combination} (i.e., a weighted sum) of these basis vectors.

\vspace{2mm}

For example, the set $\{[1,0,0], [0,1,0], [0,0,1]\}$ (which we usually call $\{x, y, z\}$)
forms an orthonormal basis of $\mathbb{R}^3$. Every element of $\mathbb{R}^3$ can be written as a linear combination of these vectors:

\begin{equation*}
\left[\begin{smallmatrix} a \\ b \\ c \end{smallmatrix}\right]
=
a \left[\begin{smallmatrix} 1 \\ 0 \\ 0 \end{smallmatrix}\right] +
b \left[\begin{smallmatrix} 0 \\ 1 \\ 0 \end{smallmatrix}\right] +
c \left[\begin{smallmatrix} 0 \\ 0 \\ 1 \end{smallmatrix}\right]
\end{equation*}

The tuple $[a,b,c]$ gives the \textit{coordinates} of a point with respect to this basis.
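For instance, a point with the (arbitrarily chosen) coordinates $[2, -1, 3]$ is the weighted sum

\begin{equation*}
\left[\begin{smallmatrix} 2 \\ -1 \\ 3 \end{smallmatrix}\right]
=
2 \left[\begin{smallmatrix} 1 \\ 0 \\ 0 \end{smallmatrix}\right]
- 1 \left[\begin{smallmatrix} 0 \\ 1 \\ 0 \end{smallmatrix}\right]
+ 3 \left[\begin{smallmatrix} 0 \\ 0 \\ 1 \end{smallmatrix}\right]
\end{equation*}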
\vfill
\pagebreak
\definition{}
This brings us to what we'll call the \textit{vectored representation} of a bit. \par
Instead of writing our bits as just \texttt{0} and \texttt{1}, we'll break them into their components: \par

\null\hfill
\begin{minipage}{0.48\textwidth}
\[ \ket{0} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} = (1 \times \vec{0}) + (0 \times \vec{1}) \]
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\[ \ket{1} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} = (0 \times \vec{0}) + (1 \times \vec{1}) \]
\end{minipage}
\hfill\null

\vspace{2mm}

This may seem needlessly complex---and it is, for classical bits. \par
We'll see why this is useful soon enough.

\generic{One more thing:}
The $\ket{~}$ you see in the two expressions above is called a \say{ket,} and denotes a column vector. \par
$\ket{0}$ is pronounced \say{ket zero,} and $\ket{1}$ is pronounced \say{ket one.} \par
This is called bra-ket notation. $\bra{0}$ is called a \say{bra,} but we won't worry about that for now.
\problem{}
Write \texttt{x} and \texttt{y} in the diagram below in terms of $\ket{0}$ and $\ket{1}$. \par

\begin{center}
\begin{tikzpicture}[scale=1.5]
\fill[color = black] (0, 0) circle[radius=0.05];
\draw[->] (0, 0) -- (1.5, 0);
\node[right] at (1.5, 0) {$\vec{0}$ axis};
\draw[->] (0, 0) -- (0, 1.5);
\node[above] at (0, 1.5) {$\vec{1}$ axis};
\fill[color = oblue] (1, 0) circle[radius=0.05];
\node[below] at (1, 0) {$\ket{0}$};
\fill[color = oblue] (0, 1) circle[radius=0.05];
\node[left] at (0, 1) {$\ket{1}$};
\draw[dashed, color = gray, ->] (0, 0) -- (0.9, 0.9);
\fill[color = ored] (1, 1) circle[radius=0.05];
\node[above right] at (1, 1) {\texttt{x}};
\draw[dashed, color = gray, ->] (0, 0) -- (-0.9, 0.9);
\fill[color = ored] (-1, 1) circle[radius=0.05];
\node[above right] at (-1, 1) {\texttt{y}};
\end{tikzpicture}
\end{center}

\vfill
\pagebreak
\section{Two Bits}
How do we represent multi-bit states using vectors? \par
Unfortunately, this is hard to visualize---but the idea is simple.

\problem{}
What is the set of possible states of two bits (i.e., $\mathbb{B}^2$)?

\vspace{2cm}

\generic{Remark:}
When we have two bits, we have four orthogonal states:
$\overrightarrow{00}$, $\overrightarrow{01}$, $\overrightarrow{10}$, and $\overrightarrow{11}$. \par
We need four dimensions to draw all of these vectors, so I can't provide a picture... \par
but the idea here is the same as before.
\problem{}
Write $\ket{00}$, $\ket{01}$, $\ket{10}$, and $\ket{11}$ as column vectors \par
with respect to the orthonormal basis $\{\overrightarrow{00}, \overrightarrow{01}, \overrightarrow{10}, \overrightarrow{11}\}$.

\vfill
\generic{Remark:}
So, we represent each possible state as an axis in a higher-dimensional space. \par
A set of $n$ bits gives us $2^n$ possible states, which form a basis in $2^n$ dimensions.

\vspace{1mm}

Say we now have two separate bits: $\ket{a}$ and $\ket{b}$. \par
How do we represent their compound state? \par

\vspace{4mm}

If we return to our usual notation, this is very easy:
$a$ is in $\{\texttt{0}, \texttt{1}\}$ and $b$ is in $\{\texttt{0}, \texttt{1}\}$, \par
so the possible compound states of $ab$ are
$\{\texttt{0}, \texttt{1}\} \times \{\texttt{0}, \texttt{1}\} = \{\texttt{00}, \texttt{01}, \texttt{10}, \texttt{11}\}$

\vspace{1mm}

The same is true of any other state set: if $a$ takes values in $A$ and $b$ takes values in $B$, \par
the compound state $(a,b)$ takes values in $A \times B$. This is trivial.

\vspace{4mm}

We would like to do the same in vector notation. Given $\ket{a}$ and $\ket{b}$, \par
how should we represent the state of $\ket{ab}$?

\vfill
\pagebreak
\definition{}
The \textit{tensor product} between two vectors is defined as follows:
\begin{equation*}
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}
\otimes
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}
=
\begin{bmatrix}
x_1
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}
\\[4mm]
x_2
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}
\end{bmatrix}
=
\begin{bmatrix}
x_1y_1 \\[1mm]
x_1y_2 \\[1mm]
x_2y_1 \\[1mm]
x_2y_2 \\[0.5mm]
\end{bmatrix}
\end{equation*}

That is, we take our first vector, multiply the second
vector by each of its components, and stack the result.
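For example, with arbitrarily chosen numeric entries:
\begin{equation*}
\begin{bmatrix} 1 \\ 2 \end{bmatrix}
\otimes
\begin{bmatrix} 3 \\ 4 \end{bmatrix}
=
\begin{bmatrix}
1 \begin{bmatrix} 3 \\ 4 \end{bmatrix}
\\[4mm]
2 \begin{bmatrix} 3 \\ 4 \end{bmatrix}
\end{bmatrix}
=
\begin{bmatrix}
3 \\[1mm] 4 \\[1mm] 6 \\[1mm] 8
\end{bmatrix}
\end{equation*}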
You could think of this as a generalization of scalar
multiplication, where scalar multiplication is a
tensor product with a vector in $\mathbb{R}^1$:
\begin{equation*}
a
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}
=
\begin{bmatrix}
a
\end{bmatrix}
\otimes
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}
=
\begin{bmatrix}
a
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}
\end{bmatrix}
=
\begin{bmatrix}
ay_1 \\[1mm]
ay_2
\end{bmatrix}
\end{equation*}
\vspace{2mm}

Also, note that the tensor product is very similar to the
Cartesian product: if we take $x$ and $y$ as sets, with
$x = \{x_1, x_2\}$ and $y = \{y_1, y_2\}$, the Cartesian product
contains the same elements as the tensor product---every possible
pairing of an element in $x$ with an element in $y$:
\begin{equation*}
x \times y = \{~(x_1,y_1), (x_1,y_2), (x_2,y_1), (x_2,y_2)~\}
\end{equation*}

In fact, these two operations are (in a sense) essentially identical. \par
Let's quickly demonstrate this.
\problem{}
Say $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$. \par
What is the dimension of $x \otimes y$?

\vfill
\problem{}<basistp>
What is the pairwise tensor product
$
\Bigl\{
\left[
\begin{smallmatrix}
1 \\ 0 \\ 0
\end{smallmatrix}
\right],
\left[
\begin{smallmatrix}
0 \\ 1 \\ 0
\end{smallmatrix}
\right],
\left[
\begin{smallmatrix}
0 \\ 0 \\ 1
\end{smallmatrix}
\right]
\Bigr\}
\otimes
\Bigl\{
\left[
\begin{smallmatrix}
1 \\ 0
\end{smallmatrix}
\right],
\left[
\begin{smallmatrix}
0 \\ 1
\end{smallmatrix}
\right]
\Bigr\}
$?

\note{in other words, distribute the tensor product between every pair of vectors.}

\vfill
\problem{}
The vectors we found in \ref{basistp} are a basis of what space? \par

\vfill
\pagebreak
\problem{}
The compound state of two vector-form bits is their tensor product. \par
Compute the following. Is the result what we'd expect?
\begin{itemize}
\item $\ket{0} \otimes \ket{0}$
\item $\ket{0} \otimes \ket{1}$
\item $\ket{1} \otimes \ket{0}$
\item $\ket{1} \otimes \ket{1}$
\end{itemize}
\hint{
Remember that the coordinates of
$\ket{0}$ are $\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$,
and the coordinates of
$\ket{1}$ are $\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]$.
}
\vfill

\problem{}<fivequant>
Of course, writing $\ket{0} \otimes \ket{1}$ is a bit excessive. \par
We'll shorten this notation to $\ket{01}$. \par
Thus, the two-bit kets we saw on the previous page are, by definition, tensor products.

\vspace{2mm}

In fact, we could go further: if we wanted to write the set of bits $\ket{1} \otimes \ket{1} \otimes \ket{0} \otimes \ket{1}$, \par
we could write $\ket{1101}$---but a shorter alternative is $\ket{13}$, since $13$ is \texttt{1101} in binary.

\vspace{2mm}

Write $\ket{5}$ as a three-bit state vector. \par

\begin{solution}
$\ket{5} = \ket{101} = \ket{1} \otimes \ket{0} \otimes \ket{1} = [0,0,0,0,0,1,0,0]^T$ \par
Notice how we're counting from the top, with $\ket{000} = [1,0,...,0]$ and $\ket{111} = [0, ..., 0, 1]$.
\end{solution}

\vfill
\problem{}
Write the three-bit states $\ket{0}$ through $\ket{7}$ as column vectors. \par
What do you see?

\vfill
\pagebreak
\pagebreak |