Incomplete port

This commit is contained in:
2025-10-02 07:56:09 -07:00
parent 121780df6c
commit df91fd9f96
9 changed files with 1367 additions and 0 deletions

#import "@local/handout:0.1.0": *
#import "@preview/cetz:0.4.2"
= Probabilistic Bits
#definition()
As we already know, a _classical bit_ may take the values `0` and `1`.
We can model this with a two-sided coin, one face of which is labeled `0`, and the other, `1`.
#v(2mm)
Of course, if we toss such a "bit-coin," we'll get either `0` or `1`.
We'll denote the probability of getting `0` as $p_0$, and the probability of getting `1` as $p_1$.
As with all probabilities, $p_0 + p_1$ must be equal to 1.
#v(1fr)
#definition()
Say we toss a "bit-coin" and don't observe the result. We now have a _probabilistic bit_, with a probability $p_0$ of being `0`, and a probability $p_1$ of being `1`.
#v(2mm)
We'll represent this probabilistic bit's _state_ as a vector: $mat(p_0; p_1)$
We do *not* assume this coin is fair, and thus $p_0$ might not equal $p_1$.
#note[This may seem a bit redundant: since $p_0 + p_1 = 1$, we can always calculate one probability given the other. We'll still include both probabilities in the state vector, since this provides a clearer analogy to quantum bits.]
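#v(2mm)
For example, a bit produced by a fair coin toss has the state $mat(1/2; 1/2)$, while a bit produced by a coin that lands on `0` three-quarters of the time has the state $mat(3/4; 1/4)$.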
#v(1fr)
#definition()
The simplest probabilistic bit states are of course $[0]$ and $[1]$, defined as follows:
- $[0] = mat(1; 0)$
- $[1] = mat(0; 1)$
That is, $[0]$ represents a bit that we know to be `0`, and $[1]$ represents a bit we know to be `1`.
#v(1fr)
#definition()
$[0]$ and $[1]$ form a _basis_ for all possible probabilistic bit states:
Every probabilistic bit state can be written as a _linear combination_ of $[0]$ and $[1]$:
$ mat(p_0; p_1) = p_0 mat(1; 0) + p_1 mat(0; 1) = p_0 [0] + p_1 [1] $
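#v(2mm)
For example, the state $mat(3/4; 1/4)$ decomposes as
$ mat(3/4; 1/4) = 3/4 mat(1; 0) + 1/4 mat(0; 1) = 3/4 [0] + 1/4 [1] $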
#v(1fr)
#pagebreak()
#problem()
Every possible state of a probabilistic bit is a two-dimensional vector.
Draw all possible states on the axes below.
#table(
columns: (1fr,),
align: center,
stroke: none,
align(center, cetz.canvas({
import cetz.draw: *
set-style(content: (frame: "rect", stroke: none, fill: none, padding: .25))
scale(200%)
line(
(0, 1.5),
(0, 0),
(1.5, 0),
stroke: black + 0.25mm,
)
mark((0, 1.5), (0, 2), symbol: ")>", fill: black)
mark((1.5, 0), (2, 0), symbol: ")>", fill: black)
content((0, 1.5), $p_1$, anchor: "south")
content((1.5, 0), $p_0$, anchor: "west")
circle((0, 0), radius: 0.6mm, fill: black, name: "origin")
content("origin.south", $mat(0; 0)$, anchor: "north")
circle((0, 1), radius: 0.6mm, fill: oblue, stroke: oblue, name: "one")
content("one.west", $[1]$, anchor: "east")
circle((1, 0), radius: 0.6mm, fill: oblue, stroke: oblue, name: "zero")
content("zero.south", $[0]$, anchor: "north")
})),
)
#solution[
#table(
columns: (1fr,),
align: center,
stroke: none,
align(center, cetz.canvas({
import cetz.draw: *
set-style(content: (
frame: "rect",
stroke: none,
fill: none,
padding: .25,
))
scale(200%)
line(
(0, 1.5),
(0, 0),
(1.5, 0),
stroke: black + 0.25mm,
)
mark((0, 1.5), (0, 2), symbol: ")>", fill: black)
mark((1.5, 0), (2, 0), symbol: ")>", fill: black)
content((0, 1.5), $p_1$, anchor: "south")
content((1.5, 0), $p_0$, anchor: "west")
line(
(1, 0),
(0, 1),
stroke: ored + 1mm,
)
circle((0, 0), radius: 0.6mm, fill: black, name: "origin")
content("origin.south", $mat(0; 0)$, anchor: "north")
circle((0, 1), radius: 0.6mm, fill: oblue, stroke: oblue, name: "one")
content("one.west", $[1]$, anchor: "east")
circle((1, 0), radius: 0.6mm, fill: oblue, stroke: oblue, name: "zero")
content("zero.south", $[0]$, anchor: "north")
})),
)
]
#v(1fr)
#pagebreak()
= Measuring Probabilistic Bits
#definition()
As we noted before, a probabilistic bit represents a coin we've tossed but haven't looked at.
We do not know whether the bit is `0` or `1`, but we do know the probability of both of these outcomes.
#v(2mm)
If we _measure_ (or _observe_) a probabilistic bit, we see either `0` or `1`—and thus our knowledge of its state is updated to either $[0]$ or $[1]$, since we now certainly know what face the coin landed on.
#v(2mm)
Since measurement changes what we know about a probabilistic bit, it changes the probabilistic bit's state. When we measure a bit, its state _collapses_ to either $[0]$ or $[1]$, and the original state of the bit vanishes. We _cannot_ recover the state $[x_0, x_1]$ from a measured probabilistic bit.
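#v(2mm)
For example, if we measure a bit in the state $mat(3/4; 1/4)$ and observe `0`, its state collapses to $[0] = mat(1; 0)$; the original probabilities $3/4$ and $1/4$ are lost.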
#definition("Multiple bits")
Say we have two probabilistic bits, $x$ and $y$, with states $[x] = [x_0, x_1]$ and $[y] = [y_0, y_1]$.
#v(2mm)
The _compound state_ of $[x]$ and $[y]$ is exactly what it sounds like: it is the probabilistic two-bit state $[x y]$, where the probabilities of the first bit are determined by $[x]$, and the probabilities of the second are determined by $[y]$.
#problem(label: "firstcompoundstate")
Say $[x] = [2/3, 1/3]$ and $[y] = [3/4, 1/4]$.
- If we measure $x$ and $y$ simultaneously, what is the probability of getting each of `00`, `01`, `10`, and `11`?
- If we measure $y$ first and observe `1`, what is the probability of getting each of `00`, `01`, `10`, and `11`?
#note[$[x]$ and $[y]$ are column vectors, but I've written them horizontally to save space.]
#v(1fr)
#problem()
Say $[x] = [2/3, 1/3]$ and $[y] = [3/4, 1/4]$.
What is the probability that $x$ and $y$ produce different outcomes?
#v(1fr)
#pagebreak()
= Tensor Products
#definition("Tensor Products")
The _tensor product_ of two vectors is defined as follows:
$
mat(x_1; x_2) times.circle mat(y_1; y_2) = mat(x_1 mat(y_1; y_2); x_2 mat(y_1; y_2)) = mat(x_1 y_1; x_1 y_2; x_2 y_1; x_2 y_2)
$
That is, we take our first vector, multiply the second vector by each of its components, and stack the result. You could think of this as a generalization of scalar multiplication, where scalar multiplication is a tensor product with a vector in $RR^1$:
$
a mat(x_1; x_2) = mat(a) times.circle mat(x_1; x_2) = mat(a mat(x_1; x_2)) = mat(a x_1; a x_2)
$
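For example,
$ mat(2; 3) times.circle mat(1; 4) = mat(2 mat(1; 4); 3 mat(1; 4)) = mat(2; 8; 3; 12) $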
#problem()
Say $x in RR^n$ and $y in RR^m$.
What is the dimension of $x times.circle y$?
#v(1fr)
#problem(label: "basistp")
What is the following pairwise tensor product?
#v(4mm)
$
{mat(1; 0; 0), mat(0; 1; 0), mat(0; 0; 1)}
times.circle
{mat(1; 0), mat(0; 1)}
$
#v(4mm)
#hint[Distribute the tensor product between every pair of vectors.]
#v(1fr)
#problem()
What is the _span_ of the vectors we found in @basistp?
In other words, what is the set of vectors that can be written as linear combinations of the vectors above?
#v(1fr)
#pagebreak()
#problem()
Say $[x] = [2/3, 1/3]$ and $[y] = [3/4, 1/4]$.
What is $[x] times.circle [y]$? How does this relate to @firstcompoundstate?
#v(1fr)
#problem()
The compound state of two vector-form bits is their tensor product.
Compute the following. Is the result what we'd expect?
- $[0] times.circle [0]$
- $[0] times.circle [1]$
- $[1] times.circle [0]$
- $[1] times.circle [1]$
#hint[Remember that $[0] = mat(1; 0)$ and $[1] = mat(0; 1)$.]
#v(1fr)
#problem(label: "fivequant")
Writing $[0] times.circle [1]$ is a bit tedious. We'll shorten this notation to $[01]$.
In fact, we could go further: if we wanted to write the four-bit state $[1] times.circle [1] times.circle [0] times.circle [1]$, \
we could write $[1101]$, but a shorter alternative is $[13]$, since $13$ is `1101` in binary.
#v(2mm)
Write $[5]$ as a three-bit probabilistic state.
#solution[
$[5] = [101] = [1] times.circle [0] times.circle [1] = [0,0,0,0,0,1,0,0]^T$ \
Notice how we're counting from the top, with $[000] = [1,0,...,0]$ and $[111] = [0, ..., 0, 1]$.
]
#v(1fr)
#problem()
Write the three-bit states $[0]$ through $[7]$ as column vectors.
#hint[You do not need to compute every tensor product. Do a few and find the pattern.]
#v(1fr)
#pagebreak()
= Operations on Probabilistic Bits
Now that we can write probabilistic bits as vectors, we can represent operations on these bits with linear transformations—in other words, as matrices.
#definition()
Consider the NOT gate, which operates as follows:
- $"NOT"[0] = [1]$
- $"NOT"[1] = [0]$
What should NOT do to a probabilistic bit $[x_0, x_1]$?
If we return to our coin analogy, we can think of the NOT operation as flipping a coin we have already tossed, without looking at its state. Thus,
$ "NOT" mat(x_0; x_1) = mat(x_1; x_0) $
#review_box("Review: Multiplying vectors by matrices")[
#v(2mm)
$
A v = mat(1, 2; 3, 4) mat(v_0; v_1) = mat(1 v_0 + 2 v_1; 3 v_0 + 4 v_1)
$
#v(2mm)
Note that each entry of $A v$ is the dot product of a row of $A$ with the column vector $v$.
]
#problem()
Compute the following product:
$ mat(1, 0.5; 0, 1) mat(3; 2) $
#v(1fr)
#remark()
Also, recall that every matrix is a linear map, and that every linear map may be written as a matrix. We often use the terms _matrix_, _transformation_, and _linear map_ interchangeably.
#pagebreak()
#problem()
Find the matrix that represents the NOT operation on one probabilistic bit.
#solution[
$
mat(0, 1; 1, 0)
$
]
#v(1fr)
#problem("Extension by linearity")
Say we have an arbitrary operation $M$.
If we know how $M$ acts on $[1]$ and $[0]$, can we compute $M[x]$ for an arbitrary state $[x]$?
Say $[x] = [x_0, x_1]$.
- What is the probability that we observe `0` when we measure $x$?
- What is the probability that we observe $M[0]$ when we measure $M[x]$?
#v(1fr)
#problem(label: "linearextension")
Write $M[x_0, x_1]$ in terms of $M[0]$, $M[1]$, $x_0$, and $x_1$.
#solution[
$
M mat(x_0; x_1) = x_0 M mat(1; 0) + x_1 M mat(0; 1) = x_0 M[0] + x_1 M[1]
$
]
#v(1fr)
#remark()
Every matrix represents a _linear_ map, so the following is always true:
$ A (p x + q y) = p A x + q A y $
@linearextension is just a special case of this fact.
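#v(2mm)
For example, applying this to NOT recovers the rule we stated earlier:
$ "NOT" mat(x_0; x_1) = x_0 "NOT"[0] + x_1 "NOT"[1] = x_0 [1] + x_1 [0] = mat(x_1; x_0) $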