tanvirtin/snake-neural-networks

Neural Network learning algorithm comparison using a classic game of Snake!

Score: 21 / 100 (Experimental)

This project compares how different artificial intelligence techniques learn to play the classic game of Snake. Each technique takes the game's state as input (e.g. nearby obstacles and the apple's position) and outputs the snake's next move. The goal is to see which method can learn to play as well as or better than an average human, making the repo useful for anyone interested in game AI development or in comparing learning algorithms.
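To make the state-in, move-out idea concrete, here is a minimal illustrative sketch (not the repo's actual architecture): a single-layer network that scores each of four moves against a hand-picked, hypothetical state vector and picks the highest-scoring one.

```python
import random

# Illustrative only: one weight vector per move, dot-producted with the state.
MOVES = ["up", "down", "left", "right"]

def decide_move(state, weights):
    """state: list of floats describing the game (e.g. obstacle flags,
    apple direction); weights: one row of floats per move.
    Returns the move with the highest score."""
    scores = [sum(w * s for w, s in zip(row, state)) for row in weights]
    return MOVES[scores.index(max(scores))]

# Hypothetical 6-feature state: [obstacle_left, obstacle_front,
# obstacle_right, apple_dx, apple_dy, bias]
random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(6)] for _ in MOVES]
state = [0.0, 1.0, 0.0, 0.5, -0.5, 1.0]
print(decide_move(state, weights))
```

In the genetic-algorithm setting, the `weights` matrix is what evolution would mutate and select on; the feature names above are assumptions for illustration.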

No commits in the last 6 months.

Use this if you are exploring how different AI learning paradigms, specifically genetic algorithms and brute-force reinforcement, perform in a controlled, simple game environment.

Not ideal if you need a plug-and-play AI for a complex, real-world application, or if you're looking for advanced game-playing AI in environments with adversaries or highly dynamic rule sets.

game-AI machine-learning-comparison genetic-algorithms reinforcement-learning-basics neural-network-training
Flags: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 10
Forks: —
Language: Python
License: MIT
Category: snake-game-ai
Last pushed: Jan 07, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tanvirtin/snake-neural-networks"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
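For programmatic use, the curl command above can be reproduced with the Python standard library. This is a hedged sketch: the response field names are assumptions, so inspect the actual JSON before relying on them.

```python
import json
import urllib.request

# Endpoint shown above; no API key is required for up to 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/tanvirtin/snake-neural-networks")

def fetch_quality(url=URL):
    """Fetch and parse the quality-score JSON for the repository."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Uncomment to call the live API; field names like "score" are assumed:
# data = fetch_quality()
# print(data.get("score"))
```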