Quantum edits

Mark 2024-02-11 10:09:30 -08:00
parent 858be3dcb4
commit 1f213d9673
Signed by: Mark
GPG Key ID: C6D63995FE72FD80
7 changed files with 156 additions and 139 deletions


@ -45,8 +45,6 @@
\input{parts/04 two halves}
\input{parts/05 logic gates}
\input{parts/06 quantum gates}
%\input{parts/03.00 logic gates}
%\input{parts/03.01 quantum gates}
%\section{Superdense Coding}
%TODO


@ -1,4 +1,4 @@
\section*{Part 0: Vector Basics}
\definition{Vectors}
An $n$-dimensional \textit{vector} is an element of $\mathbb{R}^n$. In this handout, we'll write vectors as columns. \par
@ -90,6 +90,103 @@ What is the dot product of two orthogonal vectors?
\vfill
\pagebreak
\definition{Linear combinations}
A \textit{linear combination} of two or more vectors $v_1, v_2, ..., v_k$ is the weighted sum
\begin{equation*}
a_1v_1 + a_2v_2 + ... + a_kv_k
\end{equation*}
where $a_i$ are arbitrary real numbers.
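For instance, taking $a_1 = 2$ and $a_2 = 3$, one linear combination of
$\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$ and
$\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]$ is
\begin{equation*}
2 \left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]
+ 3 \left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]
= \left[\begin{smallmatrix} 2 \\ 3 \end{smallmatrix}\right]
\end{equation*}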
\definition{Linear dependence}
We say a set of vectors $\{v_1, v_2, ..., v_k\}$ is \textit{linearly dependent} if we can write $0$ as a nontrivial
linear combination of these vectors, and \textit{linearly independent} otherwise. For example, the following set is linearly dependent:
\begin{equation*}
\Bigl\{
\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right],
\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right],
\left[\begin{smallmatrix} 0.5 \\ 0.5 \end{smallmatrix}\right]
\Bigr\}
\end{equation*}
Indeed, $
\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right] +
\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right] -
2 \left[\begin{smallmatrix} 0.5 \\ 0.5 \end{smallmatrix}\right]
= 0
$. A graphical representation of this is shown below.
\null\hfill
\begin{minipage}{0.48\textwidth}
\begin{center}
\begin{tikzpicture}[scale=1]
\fill[color = black] (0, 0) circle[radius=0.05];
\node[right] at (1, 0) {$\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$};
\node[above] at (0, 1) {$\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]$};
\draw[->] (0, 0) -- (1, 0);
\draw[->] (0, 0) -- (0, 1);
\draw[->] (0, 0) -- (0.5, 0.5);
\node[above right] at (0.5, 0.5) {$\left[\begin{smallmatrix} 0.5 \\ 0.5 \end{smallmatrix}\right]$};
\end{tikzpicture}
\end{center}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\begin{center}
\begin{tikzpicture}[scale=1]
\fill[color = black] (0, 0) circle[radius=0.05];
\node[below] at (0.5, 0) {$\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$};
\node[right] at (1, 0.5) {$\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]$};
\draw[->] (0, 0) -- (0.95, 0);
\draw[->] (1, 0) -- (1, 0.95);
\draw[->] (1, 1) -- (0.55, 0.55);
\draw[->] (0.5, 0.5) -- (0.05, 0.05);
\node[above left] at (0.5, 0.5) {$-2\left[\begin{smallmatrix} 0.5 \\ 0.5 \end{smallmatrix}\right]$};
\end{tikzpicture}
\end{center}
\end{minipage}
\hfill\null
\problem{}
Find a linearly independent set of vectors in $\mathbb{R}^3$.
\vfill
\definition{Coordinates}
Say we have a set of linearly independent vectors $B = \{b_1, ..., b_k\}$. \par
We can write linear combinations of $B$ as \textit{coordinates} with respect to this set:
\vspace{2mm}
If we have a vector $v = x_1b_1 + x_2b_2 + ... + x_kb_k$, we can write $v = (x_1, x_2, ..., x_k)$ with respect to $B$.
\vspace{4mm}
For example, take
$B = \biggl\{
\left[\begin{smallmatrix} 1 \\ 0 \\ 0 \end{smallmatrix}\right],
\left[\begin{smallmatrix} 0 \\ 1 \\ 0\end{smallmatrix}\right],
\left[\begin{smallmatrix} 0 \\ 0 \\ 1 \end{smallmatrix}\right]
\biggr\}$ and $v = \left[\begin{smallmatrix} 8 \\ 3 \\ 9 \end{smallmatrix}\right]$. \par
The coordinates of $v$ with respect to $B$ are, of course, $(8, 3, 9)$.
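In general, finding the coordinates of $v$ with respect to $B$ means finding the scalars $x_1, x_2, ..., x_k$ that satisfy
\begin{equation*}
x_1b_1 + x_2b_2 + ... + x_kb_k = v
\end{equation*}
which is a system of linear equations in $x_1, x_2, ..., x_k$.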
\problem{}
What are the coordinates of $v$ with respect to the basis
$B = \biggl\{
\left[\begin{smallmatrix} 1 \\ 0 \\ 1 \end{smallmatrix}\right],
\left[\begin{smallmatrix} 0 \\ 1 \\ 0\end{smallmatrix}\right],
\left[\begin{smallmatrix} 0 \\ 0 \\ -1 \end{smallmatrix}\right]
\biggr\}$?
%For example, the set $\{[1,0,0], [0,1,0], [0,0,1]\}$ (which we usually call $\{x, y, z\})$
%forms an orthonormal basis of $\mathbb{R}^3$. Every element of $\mathbb{R}^3$ can be written as a linear combination of these vectors:


@ -55,14 +55,9 @@ What is the size of $\mathbb{B}^n$?
% NOTE: this is time-travelled later in the handout.
% if you edit this, edit that too.
\generic{Remark:}
Consider a single classical bit. It takes states in $\{\texttt{0}, \texttt{1}\}$, picking one at a time. \par
The states \texttt{0} and \texttt{1} are fully independent. They are completely disjoint; they share no parts. \par
We'll therefore say that \texttt{0} and \texttt{1} are \textit{orthogonal} (or equivalently, \textit{perpendicular}). \par
\vspace{2mm}
We can draw $\vec{e}_0$ and $\vec{e}_1$ as perpendicular axes on a plane to represent this:
\begin{center}
\begin{tikzpicture}[scale=1.5]
@ -84,27 +79,21 @@ We can draw $\vec{e}_0$ and $\vec{e}_1$ as perpendicular axis on a plane to repr
The point marked $1$ is at $[0, 1]$. It is no parts $\vec{e}_0$, and all parts $\vec{e}_1$. \par
Of course, we can say something similar about the point marked $0$: \par
It is at $[1, 0] = (1 \times \vec{e}_0) + (0 \times \vec{e}_1)$, and is thus all $\vec{e}_0$ and no $\vec{e}_1$. \par
\note[Note]{$[0, 1]$ and $[1, 0]$ are coordinates in the basis $\{\vec{e}_0, \vec{e}_1\}$}
\vspace{2mm}
Naturally, the coordinates $[0, 1]$ and $[1, 0]$ denote how much of each axis a point \say{contains.} \par
We could, of course, mark the point \texttt{x} at $[1, 1]$, which is equal parts $\vec{e}_0$ and $\vec{e}_1$: \par
\note[Note]{
We could also write $\texttt{x} = \vec{e}_0 + \vec{e}_1$ explicitly. \\
I've drawn \texttt{x} as a point on the left, and as a sum on the right.
}
\null\hfill
\begin{minipage}{0.48\textwidth}
\begin{center}
\begin{tikzpicture}[scale=1.5]
\fill[color = black] (0, 0) circle[radius=0.05];
\draw[->] (0, 0) -- (1.5, 0);
\node[right] at (1.5, 0) {$\vec{e}_0$};
\draw[->] (0, 0) -- (0, 1.5);
\node[above] at (0, 1.5) {$\vec{e}_1$};
\fill[color = oblue] (1, 0) circle[radius=0.05];
\node[below] at (1, 0) {\texttt{0}};
@ -117,38 +106,14 @@ We could, of course, mark the point \texttt{x} at $[1, 1]$, which is equal parts
\node[above right] at (1, 1) {\texttt{x}};
\end{tikzpicture}
\end{center}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\begin{center}
\begin{tikzpicture}[scale=1.5]
\fill[color = black] (0, 0) circle[radius=0.05];
\fill[color = oblue] (1, 0) circle[radius=0.05];
\node[below] at (1, 0) {\texttt{0}};
\fill[color = oblue] (0, 1) circle[radius=0.05];
\node[left] at (0, 1) {\texttt{1}};
\draw[dashed, color = gray, ->] (0, 0) -- (0.9, 0.0);
\draw[dashed, color = gray, ->] (1, 0.1) -- (1, 0.9);
\fill[color = oblue] (1, 1) circle[radius=0.05];
\node[above right] at (1, 1) {\texttt{x}};
\end{tikzpicture}
\end{center}
\end{minipage}
\hfill\null
\vspace{4mm}
But \texttt{x} isn't a member of $\mathbb{B}$---it's not a state that a classical bit can take. \par
By our current definitions, the \textit{only} valid states of a bit are $\texttt{0} = [1, 0]$ and $\texttt{1} = [0, 1]$.
\note{
Note that the unit vectors $\vec{e}_0$ and $\vec{e}_1$ form an \textit{orthonormal basis} of the plane $\mathbb{R}^2$.
}
\vfill
\pagebreak
@ -169,7 +134,7 @@ Our bit is fully $\vec{e}_0$ or fully $\vec{e}_1$. By our current definitions, t
\definition{Vectored Bits}
This brings us to what we'll call the \textit{vectored representation} of a bit. \par
Instead of writing our bits as just \texttt{0} and \texttt{1}, we'll break them into their $\vec{e}_0$ and $\vec{e}_1$ components: \par
\null\hfill
\begin{minipage}{0.48\textwidth}
@ -186,10 +151,11 @@ Instead of writing our bits as just \texttt{0} and \texttt{1}, we'll break them
This may seem needlessly complex---and it is, for classical bits. \par
We'll see why this is useful soon enough.
\vspace{4mm}
The $\ket{~}$ you see in the two expressions above is called a \say{ket,} and denotes a column vector. \par
$\ket{0}$ is pronounced \say{ket zero,} and $\ket{1}$ is pronounced \say{ket one.} This is called bra-ket notation. \par
\note[Note]{$\bra{0}$ is called a \say{bra,} but we won't worry about that for now.}


@ -1,74 +1,44 @@
\section{Two Bits}
How do we represent multi-bit states using vectors? \par
Unfortunately, this is hard to visualize---but the idea is simple.
\problem{}<compoundclassicalbits>
As we already know, the set of states a single bit can take is $\mathbb{B} = \{\texttt{0}, \texttt{1}\}$. \par
What is the set of compound states \textit{two} bits can take? How about $n$ bits? \par
\hint{Cartesian product.}
\vspace{5cm}
Of course, \ref{compoundclassicalbits} is fairly easy: \par
If $a$ is in $\{\texttt{0}, \texttt{1}\}$ and $b$ is in $\{\texttt{0}, \texttt{1}\}$,
the values $ab$ can take are
$\{\texttt{0}, \texttt{1}\} \times \{\texttt{0}, \texttt{1}\} = \{\texttt{00}, \texttt{01}, \texttt{10}, \texttt{11}\}$.
\vspace{2mm}
\problem{}
What is the set of possible states of two bits (i.e., $\mathbb{B}^2$)?
\vspace{2cm}
\cgeneric{Remark}
When we have two bits, we have four orthogonal states:
$\overrightarrow{00}$, $\overrightarrow{01}$, $\overrightarrow{10}$, and $\overrightarrow{11}$. \par
We need four dimensions to draw all of these vectors, so I can't provide a picture... \par
but the idea here is the same as before.
\problem{}
Write $\ket{00}$, $\ket{01}$, $\ket{10}$, and $\ket{11}$ as column vectors \par
with respect to the orthonormal basis $\{\overrightarrow{00}, \overrightarrow{01}, \overrightarrow{10}, \overrightarrow{11}\}$.
\vfill
\cgeneric{Remark}
So, we represent each possible state as an axis in an $n$-dimensional space. \par
A set of $n$ bits gives us $2^n$ possible states, which forms a basis in $2^n$ dimensions.
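For example, a set of three bits has $2^3 = 8$ possible states, so we would need eight perpendicular axes to draw them all.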
\vspace{1mm}
Say we now have two separate bits: $\ket{a}$ and $\ket{b}$. \par
How do we represent their compound state? \par
\vspace{4mm}
If we return to our usual notation, this is very easy:
$a$ is in $\{\texttt{0}, \texttt{1}\}$ and $b$ is in $\{\texttt{0}, \texttt{1}\}$, \par
so the possible compound states of $ab$ are
$\{\texttt{0}, \texttt{1}\} \times \{\texttt{0}, \texttt{1}\} = \{\texttt{00}, \texttt{01}, \texttt{10}, \texttt{11}\}$.
\vspace{1mm}
The same is true of any other state set: if $a$ takes values in $A$ and $b$ takes values in $B$, \par
the compound state $(a,b)$ takes values in $A \times B$.
\vspace{2mm}
We would like to do the same in vector notation. Given bits $\ket{a}$ and $\ket{b}$,
how should we represent the state of $\ket{ab}$? We'll spend the rest of this section solving this problem.
\problem{}
When we have two bits, we have four orthogonal states:
$\overrightarrow{00}$, $\overrightarrow{01}$, $\overrightarrow{10}$, and $\overrightarrow{11}$. \par
\vspace{2mm}
Write $\ket{00}$, $\ket{01}$, $\ket{10}$, and $\ket{11}$ as column vectors \par
with respect to the orthonormal basis $\{\overrightarrow{00}, \overrightarrow{01}, \overrightarrow{10}, \overrightarrow{11}\}$.
\vfill
\pagebreak
@ -83,8 +53,7 @@ how should we represent the state of $\ket{ab}$?
\definition{Tensor Products}
The \textit{tensor product} of two vectors is defined as follows:
\begin{equation*}
\begin{bmatrix}
x_1 \\ x_2
@ -233,9 +202,12 @@ $?
\problem{}
What is the \textit{span} of the vectors we found in \ref{basistp}? \par
In other words, what is the set of vectors that can be written as linear combinations of the vectors above?
\vfill
Look through the above problems and convince yourself of the following fact: \par
If $a$ is a basis of $A$ and $b$ is a basis of $B$, $a \otimes b$ is a basis of $A \times B$.
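As a concrete reminder of how the tensor product works (using the usual ordering, where the left vector's first entry scales the whole right vector, then its second entry does):
\begin{equation*}
\left[\begin{smallmatrix} 1 \\ 2 \end{smallmatrix}\right]
\otimes
\left[\begin{smallmatrix} 3 \\ 4 \end{smallmatrix}\right]
=
\left[\begin{smallmatrix} 1 \times 3 \\ 1 \times 4 \\ 2 \times 3 \\ 2 \times 4 \end{smallmatrix}\right]
=
\left[\begin{smallmatrix} 3 \\ 4 \\ 6 \\ 8 \end{smallmatrix}\right]
\end{equation*}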
\pagebreak
@ -275,9 +247,7 @@ Compute the following. Is the result what we'd expect?
\problem{}<fivequant>
Of course, writing $\ket{0} \otimes \ket{1}$ is a bit excessive. We'll shorten this notation to $\ket{01}$. \par
Thus, the two-bit kets we saw on the previous page are, by definition, tensor products.
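Under this shorthand, for example, $\ket{1} \otimes \ket{0}$ is written $\ket{10}$, and $\ket{1} \otimes \ket{1} \otimes \ket{0}$ is written $\ket{110}$.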
\vspace{2mm}
@ -302,10 +272,7 @@ Write $\ket{5}$ as three-bit state vector. \par
\problem{}
Write the three-bit states $\ket{0}$ through $\ket{7}$ as column vectors. \par
\hint{You do not need to compute every tensor product. Do a few and find the pattern.}


@ -1,17 +1,5 @@
\section{Half a Qubit}
First, a pair of definitions. We've used both these terms implicitly in the previous section,
but we'll need to introduce proper definitions before we continue.
\definition{}
A \textit{linear combination} of two vectors $u$ and $v$ is the sum $au + bv$ for scalars $a$ and $b$. \par
\note[Note]{In other words, a linear combination is exactly what it sounds like.}
\definition{}
A \textit{normalized vector} (also called a \textit{unit vector}) is a vector with length 1.
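For example, $\left[\begin{smallmatrix} 3/5 \\ 4/5 \end{smallmatrix}\right]$ is normalized, since $\sqrt{(3/5)^2 + (4/5)^2} = 1$.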
\vspace{4mm}
\begin{tcolorbox}[
enhanced,
breakable,
@ -38,7 +26,7 @@ A \textit{normalized vector} (also called a \textit{unit vector}) is a vector wi
\end{tcolorbox}
\generic{Remark:}
Just like a classical bit, a \textit{quantum bit} (or \textit{qubit}) can take the values $\ket{0}$ and $\ket{1}$. \par
However, \texttt{0} and \texttt{1} aren't the only states a qubit may have.
@ -47,7 +35,7 @@ However, \texttt{0} and \texttt{1} aren't the only states a qubit may have.
We'll make sense of quantum bits by extending the \say{vectored} bit representation we developed in the previous section.
First, let's look at a diagram we drew a few pages ago:
\begin{ORMCbox}{Time Travel (Page 5)}{black!10!white}{black!65!white}
A classical bit takes states in $\{\texttt{0}, \texttt{1}\}$, picking one at a time. \par
We'll represent \texttt{0} and \texttt{1} as perpendicular unit vectors $\ket{0}$ and $\ket{1}$,
shown below.
@ -169,7 +157,7 @@ In addition, $\ket{\psi}$ \textit{collapses} when it is measured: it instantly c
leaving no trace of its previous state. \par
If we measure $\ket{\psi}$ and get $\ket{1}$, $\ket{\psi}$ becomes $\ket{1}$---and
it will remain in that state until it is changed.
Quantum bits cannot be measured without their state collapsing. \par
\pagebreak
@ -187,7 +175,7 @@ Quantum bits \textit{cannot} be measured without their state collapsing. \par
\problem{}
\begin{itemize}
\item What is the probability we get $\ket{0}$ when we measure $\ket{\psi_0}$? \par
\item What outcomes can we get if we measure it a second time? \par
\item What are these probabilities for $\ket{\psi_1}$?
\end{itemize}


@ -105,7 +105,7 @@ Find a matrix $A$ so that $A\ket{\texttt{ab}}$ works as expected. \par
\vfill
\pagebreak
\generic{Remark:}
The way a quantum circuit handles information is a bit different than the way a classical circuit does.
We usually think of logic gates as \textit{functions}: they consume one set of bits, and return another:
@ -275,7 +275,7 @@ Find the matrix that corresponds to the above transformation. \par
\vfill
\generic{Remark:}
We could draw the above transformation as a combination of an $X$ gate and an $I$ (identity) gate:
\begin{center}
\begin{tikzpicture}[scale=0.8]


@ -127,7 +127,7 @@ If we measure the result of \ref{applycnot}, what are the probabilities of getti
\vfill
\generic{Remark:}
As we just saw, a quantum gate is fully defined by where it maps our basis states $\ket{0}$ and $\ket{1}$ \par
(or $\ket{00...0}$ through $\ket{11...1}$ for multi-qubit gates). This directly follows from \ref{qgateislinear}.
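In other words, once we know where a gate $G$ sends each basis state, linearity determines where it sends everything else:
\begin{equation*}
G \bigl( a\ket{0} + b\ket{1} \bigr) = a \, G\ket{0} + b \, G\ket{1}
\end{equation*}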
@ -217,7 +217,8 @@ The \textit{Hadamard Gate} is given by the following matrix: \par
\end{bmatrix}
\end{equation*}
This is exactly the first column of the matrix product. \par
Also, note that each element of $Ac_0$ is the dot product of a row of $A$ with $c_0$.
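As a quick check with a generic $2 \times 2$ matrix:
\begin{equation*}
\begin{bmatrix} a & b \\ c & d \end{bmatrix}
\begin{bmatrix} 1 \\ 0 \end{bmatrix}
=
\begin{bmatrix} (a \times 1) + (b \times 0) \\ (c \times 1) + (d \times 0) \end{bmatrix}
=
\begin{bmatrix} a \\ c \end{bmatrix}
\end{equation*}
which is indeed the first column.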
\end{ORMCbox}