\section{Probabilistic Bits}
\definition{}
As we already know, a \textit{classical bit} may take the values \texttt{0} and \texttt{1}. \par
We can model this with a two-sided coin, one face of which is labeled \texttt{0}, and the other, \texttt{1}. \par
\vspace{2mm}
Of course, if we toss such a \say{bit-coin,} we'll get either \texttt{0} or \texttt{1}. \par
We'll denote the probability of getting \texttt{0} as $p_0$, and the probability of getting \texttt{1} as $p_1$. \par
As with all probabilities, $p_0 + p_1$ must be equal to 1.
\vfill
\definition{}
Say we toss a \say{bit-coin} and don't observe the result. We now have a \textit{probabilistic bit}, with a probability $p_0$
of being \texttt{0}, and a probability $p_1$ of being \texttt{1}.
\vspace{2mm}
We'll represent this probabilistic bit's \textit{state} as a vector:
$\left[\begin{smallmatrix}
p_0 \\ p_1
\end{smallmatrix}\right]$ \par
We do \textbf{not} assume this coin is fair, and thus $p_0$ might not equal $p_1$.
\note{
This may seem a bit redundant: since $p_0 + p_1 = 1$, we can always calculate one probability given the other. \\
We'll still include both probabilities in the state vector, since this provides a clearer analogy to quantum bits.
}
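\vspace{2mm}
For example, a fair coin that has been tossed but not yet observed is in the state
$\left[\begin{smallmatrix} \nicefrac{1}{2} \\ \nicefrac{1}{2} \end{smallmatrix}\right]$,
and a coin that always lands on \texttt{0} is in the state
$\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$.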
\vfill
\definition{}
The simplest probabilistic bit states are of course $[0]$ and $[1]$, defined as follows:
\begin{itemize}
\item $[0] = \left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$
\item $[1] = \left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]$
\end{itemize}
That is, $[0]$ represents a bit that we know to be \texttt{0}, \par
and $[1]$ represents a bit we know to be \texttt{1}.
\vfill
\definition{}
$[0]$ and $[1]$ form a \textit{basis} for all possible probabilistic bit states: \par
Every probabilistic bit state can be written as a \textit{linear combination} of $[0]$ and $[1]$:
\begin{equation*}
\begin{bmatrix} p_0 \\ p_1 \end{bmatrix}
=
p_0 \begin{bmatrix} 1 \\ 0 \end{bmatrix} +
p_1 \begin{bmatrix} 0 \\ 1 \end{bmatrix}
=
p_0 [0] + p_1 [1]
\end{equation*}
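For example, $\left[\begin{smallmatrix} \nicefrac{1}{3} \\ \nicefrac{2}{3} \end{smallmatrix}\right] = \nicefrac{1}{3} \, [0] + \nicefrac{2}{3} \, [1]$.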
\vfill
\pagebreak
\problem{}
Every possible state of a probabilistic bit is a two-dimensional vector. \par
Draw all possible states on the axis below.
\begin{center}
\begin{tikzpicture}[scale = 2.0]
\fill[color = black] (0, 0) circle[radius=0.05];
\node[below left] at (0, 0) {$\left[\begin{smallmatrix} 0 \\ 0 \end{smallmatrix}\right]$};
\draw[->] (0, 0) -- (1.2, 0);
\node[right] at (1.2, 0) {$p_0$};
\fill[color = oblue] (1, 0) circle[radius=0.05];
\node[below] at (1, 0) {$[0]$};
\draw[->] (0, 0) -- (0, 1.2);
\node[above] at (0, 1.2) {$p_1$};
\fill[color = oblue] (0, 1) circle[radius=0.05];
\node[left] at (0, 1) {$[1]$};
\end{tikzpicture}
\end{center}
\begin{solution}
\begin{center}
\begin{tikzpicture}[scale = 2.0]
\fill[color = black] (0, 0) circle[radius=0.05];
\node[below left] at (0, 0) {$\left[\begin{smallmatrix} 0 \\ 0 \end{smallmatrix}\right]$};
\draw[ored, -, line width = 2] (0, 1) -- (1, 0);
\draw[->] (0, 0) -- (1.2, 0);
\node[right] at (1.2, 0) {$p_0$};
\fill[color = oblue] (1, 0) circle[radius=0.05];
\node[below] at (1, 0) {$[0]$};
\draw[->] (0, 0) -- (0, 1.2);
\node[above] at (0, 1.2) {$p_1$};
\fill[color = oblue] (0, 1) circle[radius=0.05];
\node[left] at (0, 1) {$[1]$};
\end{tikzpicture}
\end{center}
\end{solution}
\vfill
\pagebreak
\section{Measuring Probabilistic Bits}
\definition{}
As we noted before, a probabilistic bit represents a coin we've tossed but haven't looked at. \par
We do not know whether the bit is \texttt{0} or \texttt{1}, but we do know the probability of each of these outcomes. \par
\vspace{2mm}
If we \textit{measure} (or \textit{observe}) a probabilistic bit, we see either \texttt{0} or \texttt{1}---and thus our
knowledge of its state is updated to either $[0]$ or $[1]$, since we now know for certain which face the coin landed on.
\vspace{2mm}
Since measurement changes what we know about a probabilistic bit, it changes the probabilistic bit's state. \par
When we measure a bit, its state \textit{collapses} to either $[0]$ or $[1]$, and the original state of the
bit vanishes. We \textit{cannot} recover the state $[x_0, x_1]$ from a measured probabilistic bit.
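For example, if we measure a bit in the state $[\nicefrac{2}{3}, \nicefrac{1}{3}]$ and observe \texttt{0}, its state
collapses to $[0] = [1, 0]$, and nothing about this new state tells us that $p_0$ used to be $\nicefrac{2}{3}$.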
\definition{Multiple bits}
Say we have two probabilistic bits, $x$ and $y$, \par
with states
$[x]=[x_0, x_1]$
and
$[y]=[y_0, y_1]$.
\vspace{2mm}
The \textit{compound state} of $[x]$ and $[y]$ is exactly what it sounds like: \par
it is the probabilistic two-bit state $[xy]$, where the probabilities of the first bit are
determined by $[x]$, and the probabilities of the second are determined by $[y]$.
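For example, if $[x]$ and $[y]$ are both fair coins, each of the outcomes \texttt{00}, \texttt{01}, \texttt{10},
and \texttt{11} occurs with probability $\nicefrac{1}{4}$, since the two bits are independent.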
\problem{}<firstcompoundstate>
Say $[x] = [\nicefrac{2}{3}, \nicefrac{1}{3}]$ and $[y] = [\nicefrac{3}{4}, \nicefrac{1}{4}]$. \par
\begin{itemize}[itemsep = 1mm]
\item If we measure $x$ and $y$ simultaneously, \par
what is the probability of getting each of \texttt{00}, \texttt{01}, \texttt{10}, and \texttt{11}?
\item If we measure $y$ first and observe \texttt{1}, \par
what is the probability of getting each of \texttt{00}, \texttt{01}, \texttt{10}, and \texttt{11}?
\end{itemize}
\note[Note]{$[x]$ and $[y]$ are column vectors, but I've written them horizontally to save space.}
\vfill
\problem{}
With $x$ and $y$ defined as above, find the probability of measuring each of \texttt{00}, \texttt{01}, \texttt{10}, and \texttt{11}.
\vfill
\problem{}
Say $[x] = [\nicefrac{2}{3}, \nicefrac{1}{3}]$ and $[y] = [\nicefrac{3}{4}, \nicefrac{1}{4}]$. \par
What is the probability that $x$ and $y$ produce different outcomes?
\vfill
\pagebreak
\section{Tensor Products}
\definition{Tensor Products}
The \textit{tensor product} of two vectors is defined as follows:
\begin{equation*}
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}
\otimes
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}
=
\begin{bmatrix}
x_1
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}
\\[4mm]
x_2
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}
\end{bmatrix}
=
\begin{bmatrix}
x_1y_1 \\[1mm]
x_1y_2 \\[1mm]
x_2y_1 \\[1mm]
x_2y_2 \\[0.5mm]
\end{bmatrix}
\end{equation*}
That is, we take our first vector, multiply the second
vector by each of its components, and stack the results.
You could think of this as a generalization of scalar
multiplication, where scalar multiplication is a
tensor product with a vector in $\mathbb{R}^1$:
\begin{equation*}
a
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}
=
\begin{bmatrix}
a
\end{bmatrix}
\otimes
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}
=
\begin{bmatrix}
a
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}
\end{bmatrix}
=
\begin{bmatrix}
ax_1 \\[1mm]
ax_2
\end{bmatrix}
\end{equation*}
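For example, here is a tensor product with concrete values:
\begin{equation*}
\begin{bmatrix}
1 \\ 2
\end{bmatrix}
\otimes
\begin{bmatrix}
3 \\ 4
\end{bmatrix}
=
\begin{bmatrix}
1 \cdot 3 \\[1mm]
1 \cdot 4 \\[1mm]
2 \cdot 3 \\[1mm]
2 \cdot 4
\end{bmatrix}
=
\begin{bmatrix}
3 \\ 4 \\ 6 \\ 8
\end{bmatrix}
\end{equation*}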
\problem{}
Say $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$. \par
What is the dimension of $x \otimes y$?
\vfill
\problem{}<basistp>
What is the pairwise tensor product
$
\Bigl\{
\left[
\begin{smallmatrix}
1 \\ 0 \\ 0
\end{smallmatrix}
\right],
\left[
\begin{smallmatrix}
0 \\ 1 \\ 0
\end{smallmatrix}
\right],
\left[
\begin{smallmatrix}
0 \\ 0 \\ 1
\end{smallmatrix}
\right]
\Bigr\}
\otimes
\Bigl\{
\left[
\begin{smallmatrix}
1 \\ 0
\end{smallmatrix}
\right],
\left[
\begin{smallmatrix}
0 \\ 1
\end{smallmatrix}
\right]
\Bigr\}
$?
\note{in other words, distribute the tensor product between every pair of vectors.}
\vfill
\problem{}
What is the \textit{span} of the vectors we found in \ref{basistp}? \par
In other words, what is the set of vectors that can be written as linear combinations of the vectors above?
\vfill
Look through the above problems and convince yourself of the following fact: \par
If $a$ is a basis of $A$ and $b$ is a basis of $B$, then $a \otimes b$ is a basis of $A \otimes B$. \par
\note{If you don't understand what this says, ask an instructor. \\ This is the reason we did the last few problems!}
\begin{instructornote}
\textbf{The idea here is as follows:}
If $a$ is in $\{\texttt{0}, \texttt{1}\}$ and $b$ is in $\{\texttt{0}, \texttt{1}\}$,
the values $ab$ can take are
$\{\texttt{0}, \texttt{1}\} \times \{\texttt{0}, \texttt{1}\} = \{\texttt{00}, \texttt{01}, \texttt{10}, \texttt{11}\}$.
\vspace{2mm}
The same is true of any other state set: if $a$ takes values in $A$ and $b$ takes values in $B$, \par
the compound state $(a,b)$ takes values in $A \times B$.
\vspace{2mm}
We would like to do the same with probabilistic bits. \par
Given bits $[a]$ and $[b]$, how should we represent the state of the compound bit $[ab]$?
\end{instructornote}
\pagebreak
\problem{}
Say $[x] = [\nicefrac{2}{3}, \nicefrac{1}{3}]$ and $[y] = [\nicefrac{3}{4}, \nicefrac{1}{4}]$. \par
What is $[x] \otimes [y]$? How does this relate to \ref{firstcompoundstate}?
\vfill
\problem{}
The compound state of two vector-form bits is their tensor product. \par
Compute the following. Is the result what we'd expect?
\begin{itemize}
\item $[0] \otimes [0]$
\item $[0] \otimes [1]$
\item $[1] \otimes [0]$
\item $[1] \otimes [1]$
\end{itemize}
\hint{
Remember that
$[0] = \left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$
and
$[1] = \left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]$.
}
\vfill
\problem{}<fivequant>
Of course, writing $[0] \otimes [1]$ is a bit excessive. We'll shorten this notation to $[01]$. \par
\vspace{2mm}
In fact, we can go further: if we wanted to write the four-bit state $[1] \otimes [1] \otimes [0] \otimes [1]$, \par
we could write $[1101]$---but a shorter alternative is $[13]$, since $13$ is \texttt{1101} in binary.
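\vspace{2mm}
For example, the two-bit state $[2]$ is $[10] = [1] \otimes [0] = [0, 0, 1, 0]^T$.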
\vspace{2mm}
Write $[5]$ as a three-bit probabilistic state. \par
\begin{solution}
$[5] = [101] = [1] \otimes [0] \otimes [1] = [0,0,0,0,0,1,0,0]^T$ \par
Notice how we're counting from the top, with $[000] = [1,0,...,0]$ and $[111] = [0, ..., 0, 1]$.
\end{solution}
\vfill
\problem{}
Write the three-bit states $[0]$ through $[7]$ as column vectors. \par
\hint{You do not need to compute every tensor product. Do a few and find the pattern.}
\vfill
\pagebreak
\section{Operations on Probabilistic Bits}
Now that we can write probabilistic bits as vectors, we can represent operations on these bits
with linear transformations---in other words, as matrices.
\definition{}
Consider the NOT gate, which operates as follows: \par
\begin{itemize}
\item $\text{NOT}[0] = [1]$
\item $\text{NOT}[1] = [0]$
\end{itemize}
What should NOT do to a probabilistic bit $[x_0, x_1]$? \par
If we return to our coin analogy, we can think of the NOT operation as
flipping a coin we have already tossed, without looking at its state.
Thus,
\begin{equation*}
\text{NOT} \begin{bmatrix}
x_0 \\ x_1
\end{bmatrix} = \begin{bmatrix}
x_1 \\ x_0
\end{bmatrix}
\end{equation*}
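For example, $\text{NOT}[\nicefrac{2}{3}, \nicefrac{1}{3}] = [\nicefrac{1}{3}, \nicefrac{2}{3}]$: the outcomes \texttt{0} and \texttt{1} simply trade probabilities.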
\begin{ORMCbox}{Review: Matrix Multiplication}{black!10!white}{black!65!white}
Matrix multiplication works as follows:
\begin{equation*}
AB =
\begin{bmatrix}
1 & 2 \\
3 & 4 \\
\end{bmatrix}
\begin{bmatrix}
a_0 & b_0 \\
a_1 & b_1 \\
\end{bmatrix}
=
\begin{bmatrix}
1a_0 + 2a_1 & 1b_0 + 2b_1 \\
3a_0 + 4a_1 & 3b_0 + 4b_1 \\
\end{bmatrix}
\end{equation*}
Note that this is very similar to multiplying each column of $B$ by $A$. \par
The product $AB$ is simply $Ac$ for every column $c$ in $B$:
\begin{equation*}
Ac_0 =
\begin{bmatrix}
1 & 2 \\
3 & 4 \\
\end{bmatrix}
\begin{bmatrix}
a_0 \\ a_1
\end{bmatrix}
=
\begin{bmatrix}
1a_0 + 2a_1 \\
3a_0 + 4a_1
\end{bmatrix}
\end{equation*}
This is exactly the first column of the matrix product. \par
Also, note that each element of $Ac_0$ is the dot product of a row in $A$ and a column in $c_0$.
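For example, with concrete values:
\begin{equation*}
\begin{bmatrix}
1 & 2 \\
3 & 4 \\
\end{bmatrix}
\begin{bmatrix}
5 \\ 6
\end{bmatrix}
=
\begin{bmatrix}
1 \cdot 5 + 2 \cdot 6 \\
3 \cdot 5 + 4 \cdot 6
\end{bmatrix}
=
\begin{bmatrix}
17 \\ 39
\end{bmatrix}
\end{equation*}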
\end{ORMCbox}
\problem{}
Compute the following product:
\begin{equation*}
\begin{bmatrix}
1 & 0.5 \\ 0 & 1
\end{bmatrix}
\begin{bmatrix}
3 \\ 2
\end{bmatrix}
\end{equation*}
\vfill
\generic{Remark:}
Also, recall that every matrix is a linear map, and that every linear map may be written as a matrix. \par
We often use the terms \textit{matrix}, \textit{transformation}, and \textit{linear map} interchangeably.
\pagebreak
\problem{}
Find the matrix that represents the NOT operation on one probabilistic bit.
\begin{solution}
\begin{equation*}
\begin{bmatrix}
0 & 1 \\ 1 & 0
\end{bmatrix}
\end{equation*}
\end{solution}
\vfill
\problem{Extension by linearity}
Say we have an arbitrary operation $M$. \par
If we know how $M$ acts on $[1]$ and $[0]$, can we compute $M[x]$ for an arbitrary state $[x]$? \par
Say $[x] = [x_0, x_1]$.
\begin{itemize}
\item What is the probability we observe \texttt{0} when we measure $[x]$?
\item What is the probability that we observe $M[0]$ when we measure $M[x]$?
\end{itemize}
\vfill
\problem{}<linearextension>
Write $M[x_0, x_1]$ in terms of $M[0]$, $M[1]$, $x_0$, and $x_1$.
\begin{solution}
\begin{equation*}
M \begin{bmatrix}
x_0 \\ x_1
\end{bmatrix}
=
x_0 M \begin{bmatrix}
1 \\ 0
\end{bmatrix}
+
x_1 M \begin{bmatrix}
0 \\ 1
\end{bmatrix}
=
x_0 M [0] +
x_1 M [1]
\end{equation*}
\end{solution}
\vfill
\generic{Remark:}
Every matrix represents a \textit{linear} map, so the following is always true:
\begin{equation*}
A(px + qy) = pAx + qAy
\end{equation*}
\ref{linearextension} is just a special case of this fact.
\pagebreak