\section{Background}
% what other people did that is closely related to yours.
Our work is heavily based on the research done by Mnih et al.\ in \textit{Human-Level Control through Deep Reinforcement Learning} \cite{humanlevel}. The algorithm we developed to solve \textit{Celeste Classic} uses deep Q-learning supported by replay memory, with a modified reward system and explore-exploit probability. This closely mirrors the architecture presented by Mnih et al.
\vspace{2mm}
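The replay memory mentioned above is a fixed-size buffer of past transitions, sampled uniformly at random to decorrelate training batches. A minimal sketch (the capacity and transition values here are illustrative, not taken from our implementation):

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-size buffer of (state, action, reward, next_state, done) tuples."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Uniformly sample a minibatch for a Q-learning update."""
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

memory = ReplayMemory(capacity=10_000)
memory.push((0.0, 0.0), 1, -1.0, (8.0, 0.0), False)
```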
The greatest difference between our approach and that of \textit{Human-Level Control} is the input space and neural network type. Mnih et al.\ use a convolutional neural network that takes the game screen as input. This requires a significant number of training epochs and considerable computation time, and was thus an unreasonable approach for us. We instead used a plain linear neural network with two inputs: player x and player y.
\vspace{2mm}
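The two-input network and the explore-exploit rule can be sketched as follows. This is a minimal NumPy illustration, not our implementation: the action count, hidden width, and epsilon value are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 6   # hypothetical number of discrete actions
HIDDEN = 32     # hypothetical hidden-layer width

# Small fully-connected network: (player_x, player_y) -> one Q-value per action.
W1 = rng.normal(0.0, 0.1, (HIDDEN, 2))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_ACTIONS, HIDDEN))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Forward pass; state is the pair (player_x, player_y)."""
    h = np.maximum(0.0, W1 @ np.asarray(state, dtype=float) + b1)  # ReLU
    return W2 @ h + b2

def select_action(state, epsilon):
    """Epsilon-greedy explore-exploit rule: random with prob. epsilon,
    otherwise the action with the highest estimated Q-value."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

action = select_action((64.0, 96.0), epsilon=0.1)
```

Because the input is only the player's position rather than raw pixels, a forward pass is cheap and training converges far faster than with a convolutional network.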
Another project similar to ours is AiSpawn's \textit{AI Learns to Speedrun Celeste} \cite{aispawn}. AiSpawn completes the same task we do---solving \textit{Celeste Classic}---but with a completely different, evolution-based approach.
\vfill
\pagebreak