\section{Two Bits} How do we represent multi-bit states using vectors? \par Unfortunately, this is hard to visualize---but the idea is simple. \problem{} What is the set of possible states of two bits (i.e., $\mathbb{B}^2$)? \vspace{2cm} \generic{Remark:} When we have two bits, we have four orthogonal states: $\overrightarrow{00}$, $\overrightarrow{01}$, $\overrightarrow{10}$, and $\overrightarrow{11}$. \par We need four dimensions to draw all of these vectors, so I can't provide a picture... \par but the idea here is the same as before. \problem{} Write $\ket{00}$, $\ket{01}$, $\ket{10}$, and $\ket{11}$ as column vectors \par with respect to the orthonormal basis $\{\overrightarrow{00}, \overrightarrow{01}, \overrightarrow{10}, \overrightarrow{11}\}$. \vfill \generic{Remark:} So, we represent each possible state as an axis in an $n$-dimensional space. \par A set of $n$ bits gives us $2^n$ possible states, which form a basis of a $2^n$-dimensional space. \vspace{1mm} Say we now have two separate bits: $\ket{a}$ and $\ket{b}$. \par How do we represent their compound state? \par \vspace{4mm} If we return to our usual notation, this is very easy: $a$ is in $\{\texttt{0}, \texttt{1}\}$ and $b$ is in $\{\texttt{0}, \texttt{1}\}$, \par so the possible compound states of $ab$ are $\{\texttt{0}, \texttt{1}\} \times \{\texttt{0}, \texttt{1}\} = \{\texttt{00}, \texttt{01}, \texttt{10}, \texttt{11}\}$. \vspace{1mm} The same is true of any other state set: if $a$ takes values in $A$ and $b$ takes values in $B$, \par then the compound state $(a,b)$ takes values in $A \times B$. \vspace{4mm} We would like to do the same in vector notation. Given $\ket{a}$ and $\ket{b}$, \par how should we represent the state of $\ket{ab}$?
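As a quick sanity check (an illustrative Python sketch, not part of the exercises), the Cartesian product of the two single-bit state sets enumerates exactly the four compound states:

```python
from itertools import product

# Possible states of a single bit.
bit = ["0", "1"]

# The compound states of two bits are the Cartesian product B x B.
two_bits = ["".join(pair) for pair in product(bit, bit)]
print(two_bits)  # ['00', '01', '10', '11']
```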
\pagebreak \definition{Tensor Products} The \textit{tensor product} of two vectors is defined as follows: \begin{equation*} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \otimes \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} x_1 \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \\[4mm] x_2 \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} x_1y_1 \\[1mm] x_1y_2 \\[1mm] x_2y_1 \\[1mm] x_2y_2 \\[0.5mm] \end{bmatrix} \end{equation*} That is, we multiply the second vector by each component of the first, and stack the results. You could think of this as a generalization of scalar multiplication, where scalar multiplication is a tensor product with a vector in $\mathbb{R}^1$: \begin{equation*} a \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} a \end{bmatrix} \otimes \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} a \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} ax_1 \\[1mm] ax_2 \end{bmatrix} \end{equation*} \vspace{2mm} Also, note that the tensor product is very similar to the Cartesian product: if we take $x$ and $y$ as sets, with $x = \{x_1, x_2\}$ and $y = \{y_1, y_2\}$, the Cartesian product contains the same elements as the tensor product---every possible pairing of an element in $x$ with an element in $y$: \begin{equation*} x \times y = \{~(x_1,y_1), (x_1,y_2), (x_2,y_1), (x_2,y_2)~\} \end{equation*} In fact, these two operations are (in a sense) essentially identical. \par Let's quickly demonstrate this. \problem{} Say $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$. \par What is the dimension of $x \otimes y$?
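The definition above translates directly into a few lines of code. Here is an illustrative Python sketch (the helper name \texttt{tensor} is our own) that computes the tensor product of two vectors represented as plain lists:

```python
def tensor(x, y):
    """Tensor product of two vectors given as lists:
    multiply y by each component of x and stack the results."""
    return [xi * yj for xi in x for yj in y]

# The 2x2 case from the definition, with x = [1, 2] and y = [3, 4]:
print(tensor([1, 2], [3, 4]))  # [1*3, 1*4, 2*3, 2*4] = [3, 4, 6, 8]

# Scalar multiplication as a tensor product with a vector in R^1:
print(tensor([5], [3, 4]))  # [15, 20]
```

Note that the result of an $n$-dimensional vector tensored with an $m$-dimensional vector has $n \times m$ components, one per pairing---the same count as the Cartesian product of two sets of sizes $n$ and $m$.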
\vfill \problem{} \label{basistp} What is the pairwise tensor product $ \Bigl\{ \left[ \begin{smallmatrix} 1 \\ 0 \\ 0 \end{smallmatrix} \right], \left[ \begin{smallmatrix} 0 \\ 1 \\ 0 \end{smallmatrix} \right], \left[ \begin{smallmatrix} 0 \\ 0 \\ 1 \end{smallmatrix} \right] \Bigr\} \otimes \Bigl\{ \left[ \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \right], \left[ \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right] \Bigr\} $? \note{in other words, distribute the tensor product between every pair of vectors.} \vfill \problem{} The vectors we found in \ref{basistp} are a basis of what space? \par \vfill \pagebreak \problem{} The compound state of two vector-form bits is their tensor product. \par Compute the following. Is the result what we'd expect? \begin{itemize} \item $\ket{0} \otimes \ket{0}$ \item $\ket{0} \otimes \ket{1}$ \item $\ket{1} \otimes \ket{0}$ \item $\ket{1} \otimes \ket{1}$ \end{itemize} \hint{ Remember that the coordinates of $\ket{0}$ are $\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$, and the coordinates of $\ket{1}$ are $\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]$. } \vfill \problem{} Of course, writing $\ket{0} \otimes \ket{1}$ is a bit excessive. \par We'll shorten this notation to $\ket{01}$. \par Thus, the two-bit kets we saw on the previous page are, by definition, tensor products. \vspace{2mm} In fact, we could go further: if we wanted to write the compound state $\ket{1} \otimes \ket{1} \otimes \ket{0} \otimes \ket{1}$, \par we could write $\ket{1101}$---but a shorter alternative is $\ket{13}$, since $13$ is \texttt{1101} in binary. \vspace{2mm} Write $\ket{5}$ as a three-bit state vector. \par \begin{solution} $\ket{5} = \ket{101} = \ket{1} \otimes \ket{0} \otimes \ket{1} = [0,0,0,0,0,1,0,0]^T$ \par Notice how we're counting from the top, with $\ket{000} = [1,0,...,0]$ and $\ket{111} = [0, ..., 0, 1]$. \end{solution} \vfill \problem{} Write the three-bit states $\ket{0}$ through $\ket{7}$ as column vectors.
\par \hint{ You do not need to compute every tensor product. \\ Do a few; the pattern should quickly become clear. } \vfill \pagebreak
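The pattern the hint refers to can also be checked mechanically: for $k$ bits, $\ket{n}$ is the standard basis vector of $\mathbb{R}^{2^k}$ with a $1$ in position $n$, counting from the top. An illustrative Python sketch (the helpers \texttt{tensor} and \texttt{ket} are our own, not standard names):

```python
def tensor(x, y):
    """Tensor product of two vectors given as lists."""
    return [xi * yj for xi in x for yj in y]

KET0 = [1, 0]  # coordinates of |0>
KET1 = [0, 1]  # coordinates of |1>

def ket(n, k):
    """|n> as a k-bit state vector: tensor together the single-bit
    kets given by the binary digits of n."""
    v = [1]  # a vector in R^1, the identity for the tensor product
    for digit in format(n, f"0{k}b"):
        v = tensor(v, KET1 if digit == "1" else KET0)
    return v

for n in range(8):
    print(n, ket(n, 3))
# Each |n> is the length-8 standard basis vector with a 1 at index n,
# e.g. |5> = |101> = [0, 0, 0, 0, 0, 1, 0, 0].
```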