Minor edits

Mark 2024-09-29 13:05:38 -07:00
parent 9a531509ca
commit 89425270b9
Signed by: Mark
GPG Key ID: C6D63995FE72FD80
5 changed files with 19 additions and 97 deletions


@ -117,7 +117,7 @@ Show that $\mathcal{E}(\mathcal{A} + \mathcal{B}) = \mathcal{E}(\mathcal{A}) + \
\definition{}
Let $A$ and $B$ be events on a sample space $\Omega$. \par
We say that $A$ and $B$ are \textit{independent} if $\mathcal{P}(A \cap B) = \mathcal{P}(A) \times \mathcal{P}(B)$. \par
Intuitively, events $A$ and $B$ are independent if the outcome of one does not affect the other.
\definition{}


@ -11,42 +11,45 @@ tosses of this die contain exactly one six? \par
\hint{Start with small $l$.}
\begin{solution}
$\mathcal{P}(\text{last } l \text{ tosses have exactly one 6}) = (\nicefrac{1}{6})(\nicefrac{5}{6})^{l-1} \times l$
\end{solution}
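The closed form above can be sanity-checked numerically. Below is a quick sketch (helper names `one_six_prob` and `brute_force` are mine, not from the worksheet) that compares the formula against direct enumeration of all $6^l$ equally likely outcomes for small $l$:

```python
from itertools import product

def one_six_prob(l: int) -> float:
    # P(exactly one six in l fair-die tosses):
    # choose which toss shows the six (l ways), times (1/6)(5/6)^(l-1).
    return l * (1 / 6) * (5 / 6) ** (l - 1)

def brute_force(l: int) -> float:
    # Enumerate every sequence of l tosses and count those
    # containing exactly one six.
    hits = sum(1 for rolls in product(range(1, 7), repeat=l)
               if rolls.count(6) == 1)
    return hits / 6 ** l

for l in range(1, 6):
    assert abs(one_six_prob(l) - brute_force(l)) < 1e-12
```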
\vfill
\problem{}
For what value of $l$ is the probability in \ref{lastl} maximal? \par
The following table may help. \par
\note{We only care about integer values of $l$.}
\begin{center}
\begin{tabular}{|| c | c | c ||}
\hline
\rule{0pt}{3.5mm} % Bonus height for exponent
$l$ & $(\nicefrac{5}{6})^l$ & $(\nicefrac{1}{6})(\nicefrac{5}{6})^{l}$ \\
\hline\hline
0 & 1.00 & 0.167 \\
\hline
1 & 0.83 & 0.139 \\
\hline
2 & 0.69 & 0.116 \\
\hline
3 & 0.58 & 0.096 \\
\hline
4 & 0.48 & 0.080 \\
\hline
5 & 0.40 & 0.067 \\
\hline
6 & 0.33 & 0.056 \\
\hline
7 & 0.28 & 0.047 \\
\hline
8 & 0.23 & 0.039 \\
\hline
\end{tabular}
\end{center}
\begin{solution}
$(\nicefrac{1}{6})(\nicefrac{5}{6})^{l-1} \times l$ is maximal at $l = 5.48$, so $l = 5$. \par
$l = 6$ gives exactly the same probability, since $5 \times (\nicefrac{5}{6})^4 = 6 \times (\nicefrac{5}{6})^5$.
\end{solution}
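A short numeric check of the integer maximum (the function name `p` is mine). It also confirms the tie between $l = 5$ and $l = 6$ noted in the solution:

```python
def p(l: int) -> float:
    # Probability of exactly one six in the last l tosses.
    return (1 / 6) * (5 / 6) ** (l - 1) * l

values = {l: p(l) for l in range(1, 13)}
best = max(values, key=values.get)

# l = 5 and l = 6 tie exactly:
# 5 * (1/6) * (5/6)^4 == 6 * (1/6) * (5/6)^5 == 5^5 / 6^5.
assert abs(p(5) - p(6)) < 1e-12
assert p(5) > p(4) and p(6) > p(7)
```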


@ -69,7 +69,7 @@ Come up with a strategy that produces better odds.
The remark from the previous solution still holds: \par
When we're looking at the first applicant, we have no information; \par
when we're looking at the last, we have no choices.
\vspace{2mm}
@ -178,7 +178,8 @@ if we use the \say{look-then-leap} strategy detailed above? \par
\vspace{2mm}
Unraveling our previous logic, we find that the probability we are interested in is also $\frac{k-1}{x-1}$. \par
\note{Assuming that $x \geq k$. Of course, this probability is 0 otherwise.}
\end{solution}
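The $\frac{k-1}{x-1}$ count can be verified exhaustively for small $n$. A sketch (function names `formula` and `brute_force` are mine) comparing the look-then-leap success probability $\sum_{x=k}^{n} \frac{1}{n}\cdot\frac{k-1}{x-1}$ against enumeration of every interview order, using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def formula(n: int, k: int) -> Fraction:
    # P(success) = sum_{x=k}^{n} (1/n) * (k-1)/(x-1): the best applicant
    # sits at position x with probability 1/n, and is selected iff the
    # best of the first x-1 lies among the k-1 we rejected.
    return sum((Fraction(k - 1, n * (x - 1)) for x in range(k, n + 1)),
               Fraction(0))

def brute_force(n: int, k: int) -> Fraction:
    # Enumerate every interview order; rank 1 is the overall best.
    wins = 0
    for perm in permutations(range(1, n + 1)):
        threshold = min(perm[:k - 1])           # best among the rejected
        pick = next((r for r in perm[k - 1:] if r < threshold), None)
        wins += (pick == 1)
    return Fraction(wins, factorial(n))

assert formula(6, 3) == brute_force(6, 3)
```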
\vfill
@ -229,7 +230,7 @@ Let $r = \frac{k-1}{n}$, the fraction of applicants we reject. Show that
\vfill
\problem{}
With a bit of fairly unpleasant calculus, we can show that the following is true for large $n$:
\begin{equation*}
\sum_{x=k}^{n}\frac{1}{x-1}
~\approx~ \ln\Bigl(\frac{n}{k}\Bigr)


@ -88,7 +88,6 @@ Given some $y$, what is the probability that all five $\mathcal{X}_i$ are smalle
Say we have a random variable $\mathcal{X}$ which we observe $n$ times. \note{(for example, we repeatedly roll a die)}
We'll arrange these observations in increasing order, labeled $x_1 < x_2 < ... < x_n$. \par
Under this definition, $x_i$ is called the \textit{$i^\text{th}$ order statistic}---the $i^\text{th}$ smallest sample of $\mathcal{X}$.
\problem{}<ostatone>
Say we have a random variable $\mathcal{X}$ uniformly distributed on $[0, 1]$, of which we take $5$ observations. \par
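Since the five observations are independent, $\mathcal{P}(\text{all five} < y) = y^5$. A sketch of this fact with a Monte Carlo check, assuming the uniform model in the problem (the name `p_all_below` is mine):

```python
import random

def p_all_below(y: float, n: int = 5) -> float:
    # For n independent uniform[0,1] observations,
    # P(all n are below y) = P(one is below y)^n = y**n.
    return y ** n

# Monte Carlo sanity check.
random.seed(0)
trials = 100_000
y = 0.7
hits = sum(all(random.random() < y for _ in range(5))
           for _ in range(trials))
assert abs(hits / trials - p_all_below(y)) < 0.01
```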


@ -1,81 +0,0 @@
\section{The Secretary, Again}
Now, let's solve the secretary problem as a stopping rule problem. \par
The first thing we need to do is re-write it into the form we discussed in the previous section. \par
Namely, we need...
\begin{itemize}
\item A sequence of random variables $\mathcal{X}_1, \mathcal{X}_2, ..., \mathcal{X}_t$
\item A sequence of reward functions $y_0, y_1(\sigma_1), ..., y_t(\sigma_t)$.
\end{itemize}
\vspace{2mm}
For convenience, I've summarized the secretary problem below:
\begin{itemize}
\item We have exactly one position to fill, and we must fill it with one of $n$ applicants.
\item These $n$ applicants, if put together, can be ranked unambiguously from \say{best} to \say{worst}.
\item We interview applicants in a random order, one at a time.
\item After each interview, we reject the applicant and move on, \par
or select the applicant and end the process.
\item We cannot return to an applicant we've rejected.
\item Our goal is to select the \textit{overall best} applicant.
\end{itemize}
\definition{}
First, we'll define a sequence of $\mathcal{X}_i$ that fits this problem. \par
Each $\mathcal{X}_i$ gives us the \textit{relative rank} of the $i^\text{th}$ applicant. \par
For example, if $\mathcal{X}_i = 1$, the $i^\text{th}$ applicant is the best of the first $i$. \par
If $\mathcal{X}_i = 3$, two applicants better than $i$ came before $i$.
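The definition above can be sketched in a few lines (the helper name `relative_ranks` is mine; applicants are encoded by true quality rank, with 1 = best overall):

```python
def relative_ranks(applicants):
    # applicants[i] is the true quality rank of the i-th interviewee
    # (1 = best overall).  X_i is applicant i's rank among the first i
    # seen: X_i = 1 means "best so far", X_i = 3 means two better
    # applicants came earlier.
    return [1 + sum(earlier < current for earlier in applicants[:i])
            for i, current in enumerate(applicants)]

# Interview order: third-best, best, worst, second-best (n = 4).
assert relative_ranks([3, 1, 4, 2]) == [1, 1, 3, 2]
```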
\problem{}
What values can $\mathcal{X}_1$ take, and what are their probabilities? \par
How about $\mathcal{X}_2$, $\mathcal{X}_3$, and $\mathcal{X}_4$?
\vfill
\remark{}
Now we need to define $y_n(\sigma_n)$. Intuitively, it may make sense to set $y_n = 1$ if the $n^\text{th}$
applicant is the best, and $y_n = 0$ otherwise---but this doesn't work.
\vspace{2mm}
As defined in the previous section, $y_n$ can only depend on $\sigma_n = [x_1, x_2, ..., x_n]$, the previous $n$ observations.
We cannot define $y_n$ as specified above because, having seen $\sigma_n$, we \textit{cannot} know whether or not the $n^\text{th}$
applicant is the best.
\vspace{2mm}
To work around this, we'll define our reward for selecting the $n^\text{th}$ applicant as the \textit{probability}
that this applicant is the best.
\problem{}
Define $y_n$.
\begin{solution}
\begin{itemize}
\item An applicant should only be selected if $\mathcal{X}_i = 1$
\item If we accept the $j^\text{th}$ applicant, the probability we select the absolute best is equal to \par
the probability that the best of the first $j$ candidates is the best overall. \par
\vspace{1mm}
This is just the probability that the best candidate overall appears among the first $j$, \par
and is thus $\nicefrac{j}{n}$.
\end{itemize}
So,
\begin{equation*}
y_j(\sigma_j) =
\begin{cases}
\nicefrac{j}{n} & x_j = 1 \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
\vspace{2mm}
Note that $y_0 = 0$, and that $y_n$ depends only on $x_n$.
\end{solution}
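The reward function from the solution is simple to state in code. A sketch using exact rationals (the name `reward` is mine):

```python
from fractions import Fraction

def reward(x_j: int, j: int, n: int) -> Fraction:
    # y_j(sigma_j): the probability that the j-th applicant is the
    # overall best, given we select them.  Nonzero only when x_j = 1,
    # i.e. the applicant is the best seen so far.
    return Fraction(j, n) if x_j == 1 else Fraction(0)

assert reward(1, 3, 10) == Fraction(3, 10)   # best-so-far at j = 3
assert reward(2, 3, 10) == 0                 # not best-so-far: reward 0
```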
\vfill
\pagebreak