Here will be a collection of notes and lessons as we learn AI together.
I thought it would be best to keep everything as basic as possible and learn some fundamentals of AI as applied to the VEX Change Up Game. Later we will use more advanced techniques with the sensors, such as image recognition, but this start is reduced to some very basic actions of interacting with goals to produce a simulated game.
This is very much a work in progress and an experiment in learning AI together, so contribute where you can: add information that you have and correct any mistakes or inconsistencies that I may have made.
We will develop and train a neural network to connect inputs to outputs. The inputs indicate the state of the game; the outputs are the choices we have as our moves in the game. The network assigns a probability value to each output, and the output with the highest value is the move we take before moving on to the next action with new inputs. Each cycle we let the opponent make a move. Initially the opponent will just choose randomly, with no knowledge of the game state.
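The "highest probability wins" selection described above can be sketched in a few lines of Python. This is a minimal illustration, not the trained network itself: the raw output values are made up, and `softmax` / `choose_move` are hypothetical helper names.

```python
import math

def softmax(values):
    # Convert raw network outputs into probabilities that sum to 1.
    exps = [math.exp(v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def choose_move(outputs):
    # Pick the index of the highest-probability output (the move to take).
    probs = softmax(outputs)
    return max(range(len(probs)), key=lambda i: probs[i])

# Example: three raw output values, one per possible move.
print(choose_move([0.2, 1.5, -0.3]))  # -> 1 (the largest raw value wins)
```

Because softmax is monotonic, picking the highest probability is the same as picking the highest raw output, but having probabilities will be useful later if we want to sample moves instead of always taking the maximum.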
The Red Alliance is the AI player; here is a game in pseudocode:
Calculate Game State
Use neural network to choose an output (or move)
Update Game State
Allow Opponent to make a move using a random choice
Update Game State
Repeat for the number of moves in the time period
Calculate Final Score
Record Win - Loss - Tie
Repeat for many matches
To consider something simple but not too trivial, we can use the 3 goals on the center line. We will be making decisions about scoring or descoring in these 3 goals, and there is an opponent doing the same thing. In this first model, ignore everything about the robot: we can score or descore any goal, and there is no shortage of balls. The time it takes to do something is captured by how many moves fit in a match.