Paperspace/DinoRunTutorial
Accompanying code for Paperspace tutorial "Build an AI to play Dino Run"
This project shows how an artificial intelligence can learn to play a simple game without being explicitly programmed: it takes the visual input of the Dino Run game and produces game actions (jump or duck). It is aimed at educators, hobbyists, and students curious about reinforcement learning and game AI.
326 stars. No commits in the last 6 months.
Use this if you want to understand how a computer can learn to play a game just by looking at the screen, using a reinforcement learning approach.
Not ideal if you're looking for a general-purpose gaming AI framework or a deep-dive into complex game theory strategies.
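The screen-to-action loop described above can be sketched in a few lines. This is a minimal illustration, not the tutorial's actual code: the frame shape, preprocessing, action set, and Q-values are assumptions, and a real agent would produce the Q-values with a convolutional network rather than a stub.

```python
import numpy as np

def preprocess(frame):
    """Downsample an RGB game frame to a small, normalized grayscale image."""
    gray = frame.mean(axis=2)          # crude grayscale conversion
    return gray[::4, ::4] / 255.0      # 4x downsample, scale to [0, 1]

def select_action(q_values, epsilon, rng):
    """Epsilon-greedy: explore with probability epsilon, else exploit."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

# Hypothetical 80x320 RGB frame and stubbed Q-values for two actions
# (0 = keep running, 1 = jump).
frame = np.zeros((80, 320, 3), dtype=np.uint8)
state = preprocess(frame)                      # shape (20, 80)
q_values = np.array([0.1, 0.7])                # stub: a real agent gets these from a CNN
action = select_action(q_values, epsilon=0.0,  # epsilon=0 forces the greedy choice
                       rng=np.random.default_rng(0))
```

With epsilon set to 0 the agent always picks the highest-valued action; during training, epsilon typically starts near 1 and decays so the agent explores before exploiting.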
Stars: 326
Forks: 101
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Jun 15, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Paperspace/DinoRunTutorial"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
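The same endpoint can be called from Python's standard library. A minimal sketch, using only the URL shown above; the response schema is not documented here, so the fetch is left commented out:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def build_url(owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

url = build_url("Paperspace", "DinoRunTutorial")

# Uncomment to fetch live data (subject to the 100 requests/day keyless limit):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```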
Related frameworks
utay/dino-ml
🦎 Simple AI to teach Google Chrome's offline dino to jump obstacles
simply-TOOBASED/dino-bot-3000
A simple bot to play Google's dinosaur game using neural networks and genetic algorithms to get smarter.
Dewep/T-Rex-s-neural-network
T-Rex's neural network (AI for the game T-Rex / Dinosaur on Google Chrome)
kilian-kier/Dino-Game-AI
A simple pygame dino game which can also be trained and played by a NEAT AI
GuglielmoCerri/pytorch-dino-ai-game
Repository for training an AI model to play the Chrome Dino Game using PyTorch and EfficientNet