I’m designing a browser-based game that uses socket.io and p5.js, and it explores group dynamics through the lens of cooperation/competition when players have a shared enemy. In this case, the universal enemy would (in theory) be an ML model that is constantly reacting to the players and working to bring them down. It would either be something that learns and reacts on the fly (a system more akin to genetic-algorithm-based evolution strategies operating on hard-coded directive parameters) or a pre-trained model that uses its history from hundreds of previous games to inform what it should do in this one (a reinforcement-learning-based model).
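To make the on-the-fly option concrete for myself, here’s a minimal sketch of the evolution-strategy idea: the enemy’s behaviour is just a handful of directive parameters, and each round the server mutates a few candidates and keeps whichever scored best. The parameter names and the fitness function here are placeholders, not the real game logic.

```javascript
// Minimal (1+λ) evolution strategy over hard-coded "directive" parameters.
// Parameter names and the fitness function are placeholders for whatever the
// real game actually tracks (e.g. damage the enemy dealt to players that round).

const PARAM_KEYS = ['aggression', 'pursuitRange', 'splitChance'];

// Nudge every parameter by a little noise, clamped to [0, 1].
function mutate(params, sigma = 0.1) {
  const child = { ...params };
  for (const key of PARAM_KEYS) {
    child[key] = Math.min(1, Math.max(0, child[key] + (Math.random() * 2 - 1) * sigma));
  }
  return child;
}

// Run once per round: try a few mutants of the current enemy, keep the best.
function evolve(current, evaluateFitness, lambda = 4) {
  let best = { params: current, fitness: evaluateFitness(current) };
  for (let i = 0; i < lambda; i++) {
    const candidate = mutate(current);
    const fitness = evaluateFitness(candidate);
    if (fitness > best.fitness) best = { params: candidate, fitness };
  }
  return best.params; // becomes the enemy's behaviour for the next round
}
```

The appeal of this path is that there’s no separate training phase at all; the enemy just keeps adapting while people play.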
Sooooo there’s a lot flying around my brain and I’m not sure how to simplify it just yet into something that’s deliverable by the final deadline. The options I perceive for the final are as follows:
1. Make the final project the AI enemy in the game, just to demo as a proof of concept. One of three paths:
   - Focus solely on the evolutionary approach, using the tensorflow.js demo I’ve been playing around with for ml5.
   - Develop a training environment for an RL model so I can then use that model in live games (a rough sketch of what that environment might look like is after this list).
   - Instead of using something that isn’t currently well documented, use an image classifier on the game map (since it will be a simple array of pixels) to determine a general “direction” for the AI enemy (also sketched after this list).
2. Take a more exploratory approach to implementing RL with tensorflow.js:
   - concurrent with ongoing research by Aidan Nelson into this topic with regard to the ml5 library
   - there are lots of good similar projects with OpenAI’s Gym environments and Andrej Karpathy’s ConvNetJS
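For the “training environment” path, I’m imagining something shaped like OpenAI Gym’s reset()/step() loop, just written in JavaScript so the same simulation can run headless for training and then sit behind the live socket.io game. Everything in this sketch (state, actions, reward) is a stand-in, not the real game logic:

```javascript
// Gym-style environment wrapper so the enemy can be trained headless over
// lots of simulated rounds, then dropped into the live game afterwards.

class EnemyEnv {
  constructor() {
    this.actions = ['up', 'down', 'left', 'right'];
  }

  // Start a fresh simulated round and return the initial observation.
  reset() {
    this.enemy = { x: 0.5, y: 0.5 };
    this.player = { x: Math.random(), y: Math.random() };
    return this._observe();
  }

  // Apply one enemy action, advance the simulation one tick,
  // and return the usual (observation, reward, done) triple.
  step(actionIndex) {
    const a = this.actions[actionIndex];
    if (a === 'up') this.enemy.y -= 0.05;
    if (a === 'down') this.enemy.y += 0.05;
    if (a === 'left') this.enemy.x -= 0.05;
    if (a === 'right') this.enemy.x += 0.05;

    // Reward the enemy for closing the distance to the player.
    const d = Math.hypot(this.player.x - this.enemy.x, this.player.y - this.enemy.y);
    return { observation: this._observe(), reward: -d, done: d < 0.05 };
  }

  _observe() {
    return [this.enemy.x, this.enemy.y, this.player.x, this.player.y];
  }
}
```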
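And this is roughly what I mean by the image-classifier path: treat the game map as a tiny grayscale image and have a small convnet in tensorflow.js output one of four movement directions. The map size, layer sizes, and four-way output here are all assumptions, and the model would obviously need to be trained on labelled maps before its predictions mean anything:

```javascript
import * as tf from '@tensorflow/tfjs';

const MAP_SIZE = 32;            // game map as a 32x32 grayscale grid (placeholder size)
const DIRECTIONS = ['up', 'down', 'left', 'right'];

// Small convnet: map pixels in, softmax over four movement directions out.
const model = tf.sequential({
  layers: [
    tf.layers.conv2d({ inputShape: [MAP_SIZE, MAP_SIZE, 1], filters: 8, kernelSize: 3, activation: 'relu' }),
    tf.layers.maxPooling2d({ poolSize: 2 }),
    tf.layers.flatten(),
    tf.layers.dense({ units: DIRECTIONS.length, activation: 'softmax' }),
  ],
});

// mapPixels: flat array of MAP_SIZE * MAP_SIZE values in [0, 1].
function pickDirection(mapPixels) {
  return tf.tidy(() => {
    const input = tf.tensor(mapPixels, [1, MAP_SIZE, MAP_SIZE, 1]);
    const scores = model.predict(input).dataSync();
    return DIRECTIONS[scores.indexOf(Math.max(...scores))];
  });
}
```

What I like about this one is that image classification is the best-documented corner of the tensorflow.js/ml5 world, so I’d be leaning on tooling that actually has examples instead of fighting an undocumented RL setup.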