\section{The Secretary, Again}
Now, let's solve the secretary problem as a stopping rule problem. \par
The first thing we need to do is rewrite it into the form we discussed in the previous section. \par
Namely, we need...
\begin{itemize}
\item A sequence of random variables $\mathcal{X}_1, \mathcal{X}_2, ..., \mathcal{X}_t$
\item A sequence of reward functions $y_0, y_1(\sigma_1), ..., y_t(\sigma_t)$.
\end{itemize}
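\vspace{2mm}

As a reminder of the rough idea: we observe the $\mathcal{X}_i$ one at a time, and after the $n^\text{th}$ observation we may either stop and collect the reward $y_n(\sigma_n)$, where $\sigma_n = [x_1, x_2, ..., x_n]$ is everything we've seen so far, or continue to the next observation. We want to stop at a time that maximizes our expected reward. \par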
\vspace{2mm}

For convenience, I've summarized the secretary problem below:
\begin{itemize}
\item We have exactly one position to fill, and we must fill it with one of $n$ applicants.
\item These $n$ applicants, if put together, can be ranked unambiguously from \say{best} to \say{worst}.
\item We interview applicants in a random order, one at a time.
\item After each interview, we either reject the applicant and move on, \par
or select the applicant and end the process.
\item We cannot return to an applicant we've rejected.
\item Our goal is to select the \textit{overall best} applicant.
\end{itemize}

\definition{}
First, we'll define a sequence of $\mathcal{X}_i$ that fits this problem. \par
Each $\mathcal{X}_i$ gives us the \textit{relative rank} of the $i^\text{th}$ applicant among those seen so far. \par
For example, if $\mathcal{X}_i = 1$, the $i^\text{th}$ applicant is the best of the first $i$. \par
If $\mathcal{X}_i = 3$, exactly two applicants better than the $i^\text{th}$ came before it.
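\vspace{2mm}

For instance, suppose $n = 4$ and the applicants arrive in the order $3, 1, 4, 2$, where each number is an applicant's \textit{overall} rank ($1$ being the best). Then $\mathcal{X}_1 = 1$, $\mathcal{X}_2 = 1$, $\mathcal{X}_3 = 3$, and $\mathcal{X}_4 = 2$. \par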
\problem{}
What values can $\mathcal{X}_1$ take, and what are their probabilities? \par
How about $\mathcal{X}_2$, $\mathcal{X}_3$, and $\mathcal{X}_4$?
\vfill
\remark{}
Now we need to define $y_n(\sigma_n)$. Intuitively, it may make sense to set $y_n = 1$ if the $n^\text{th}$
applicant is the best, and $y_n = 0$ otherwise---but this doesn't work.

\vspace{2mm}

As defined in the previous section, $y_n$ can only depend on $\sigma_n = [x_1, x_2, ..., x_n]$, the first $n$ observations.
We cannot define $y_n$ as specified above because, having seen only $\sigma_n$, we \textit{cannot} know whether or not the $n^\text{th}$
applicant is the best: a better applicant may still appear later.

\vspace{2mm}

To work around this, we'll define our reward for selecting the $n^\text{th}$ applicant as the \textit{probability}
that this applicant is the best.
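\vspace{2mm}

In other words, writing $\text{Pr}$ for probability, we'd like $y_n(\sigma_n) = \text{Pr}\bigl(\text{the $n^\text{th}$ applicant is the best overall} \mid \sigma_n\bigr)$. \par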
\problem{}
Define $y_n$.
\begin{solution}
\begin{itemize}
\item An applicant should only be selected if $\mathcal{X}_j = 1$, \par
since an applicant that isn't the best seen so far cannot be the best overall.
\item If we select the $j^\text{th}$ applicant, the probability we select the absolute best is equal to \par
the probability that the best of the first $j$ candidates is the best overall. \par

\vspace{1mm}

This is just the probability that the best candidate overall appears among the first $j$. \par
Since applicants arrive in a uniformly random order, the best candidate is equally likely to appear \par
in any of the $n$ positions, so this probability is $\nicefrac{j}{n}$.
\end{itemize}
So,
\begin{equation*}
y_j(\sigma_j) =
\begin{cases}
\nicefrac{j}{n} & x_j = 1 \\
0 & \text{otherwise}
\end{cases}
\end{equation*}

\vspace{2mm}
Note that $y_0 = 0$, and that $y_j$ depends only on $x_j$.
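\vspace{2mm}

For instance, if $n = 10$ and $x_4 = 1$, selecting the $4^\text{th}$ applicant earns a reward of $y_4 = \nicefrac{4}{10}$, since the best of the first four applicants is the best overall with probability $\nicefrac{4}{10}$.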
\end{solution}
\vfill
\pagebreak