Recently, DeepMind published a paper in Nature on their deep Q-learning algorithm (DQN). Along with it, they also released the code for the algorithm. Unfortunately, they released it without graphics. However, with a few more steps, it is possible to enable graphics and watch the AI play.
In the following text I assume you already have the released code running. Note that you need Linux with apt-get (DeepMind released the code for Linux only). To get it running, download the zip file and extract it, then go to the extracted directory and run install_dependencies.sh.
Also copy your Atari ROMs into the roms folder. Make sure the file name of each ROM is all lower case, otherwise you'll get an error.
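If you have many ROMs, renaming them by hand gets tedious. Here is a small sketch that lower-cases every file name in a folder. Note that the demo below runs on a temporary directory with a made-up sample file (BREAKOUT.BIN), so you can try it safely; point ROMDIR at your real roms folder instead.

```shell
# Demo: lower-case every file name in a folder.
# ROMDIR and the sample file are stand-ins for this demo;
# set ROMDIR to your actual roms folder to use it for real.
ROMDIR="$(mktemp -d)"
touch "$ROMDIR/BREAKOUT.BIN"

for f in "$ROMDIR"/*; do
  # Build the lower-cased target name and rename if it differs.
  lower="$ROMDIR/$(basename "$f" | tr '[:upper:]' '[:lower:]')"
  [ "$f" = "$lower" ] || mv "$f" "$lower"
done

ls "$ROMDIR"
```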
Also note that enabling graphics slows down the simulation a lot, since it then plays the game in real time. So you should really train a network without graphics first, save it, and then load it with graphics enabled. See further below for instructions on how to do that.
I just want it to work!
I wrote a script for that.
Download the script and run it in the same folder as install_dependencies.sh:

$ chmod +x dqn-graphics.sh
$ ./dqn-graphics.sh

This turns on graphics, but remember to train your network before you watch it play, otherwise there isn't much to see.

If you want to revert the changes, run the script with the -r option:

$ ./dqn-graphics.sh -r

Note that this reverts the display settings, but does not uninstall qtlua.
Run it with run_cpu or run_gpu (I haven't been able to test run_gpu, though). Have fun watching it play.
Getting it to work manually
First, install the qttorch package:

$ torch/bin/luarocks install qttorch
Run qlua instead of luajit. To do that, change line 46 in run_cpu (or run_gpu, but I haven't tested that) from

../torch/bin/luajit train_agent.lua $args

to

../torch/bin/qlua train_agent.lua $args
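If you prefer to script this edit, a sed one-liner can do it. The demo below applies the substitution to a stand-in file (run_cpu_demo) containing only the line quoted above, so you can see the effect first; run the same sed command against the real run_cpu once you've confirmed the line matches.

```shell
# Stand-in for the relevant line of run_cpu (from the post above).
printf '../torch/bin/luajit train_agent.lua $args\n' > run_cpu_demo

# Swap luajit for qlua in place (GNU sed, as on the Linux systems
# this code targets). Apply the same command to the real run_cpu.
sed -i 's|bin/luajit |bin/qlua |' run_cpu_demo

cat run_cpu_demo   # ../torch/bin/qlua train_agent.lua $args
```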
Next, enable the display in alewrap. Change line 52 in torch/share/lua/5.1/alewrap/AleEnv.lua so that it reads

display=true,

Now run run_cpu (or run_gpu) as usual and enjoy watching it play.
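This edit can also be scripted. The sketch below assumes the line in AleEnv.lua currently reads display=false, (check your copy before running it against the real file); the demo operates on a stand-in file so nothing is touched until you point the sed command at the actual path.

```shell
# Stand-in for line 52 of AleEnv.lua; the display=false, default is
# an assumption -- verify it in your copy of the file.
printf '        display=false,\n' > AleEnv_demo.lua

# Flip the flag in place. Apply the same command to
# torch/share/lua/5.1/alewrap/AleEnv.lua for the real change.
sed -i 's/display=false,/display=true,/' AleEnv_demo.lua

cat AleEnv_demo.lua
```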
How to train your network
NOTE: I can't fully test this part since I don't have enough RAM. So I can't guarantee that it works.
As mentioned above, enabling graphics slows down the training process a lot. This means that it takes a very long time to train a network with graphics enabled. So the best way to train a network is to train it with graphics disabled, save it and then load it again with graphics enabled.
So how can we save the network? It actually does so automatically and prints a message when it does, but the save_freq parameter is set to quite a big value by default, so you might want to make it save more often. To do that, change the save_freq value in run_cpu and/or run_gpu to something smaller. After running it for a while, you should find the saved file in the dqn folder.
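As a sketch, the save_freq change can also be done with sed. The exact default value in run_cpu may differ from the 125000 used in the stand-in file below, and 50000 is just an arbitrary smaller example; the demo edits a stand-in so you can verify the substitution before applying it to the real script.

```shell
# Stand-in run_cpu fragment; the real default may differ.
printf 'save_freq=125000\n' > run_cpu_demo2

# Replace whatever number follows save_freq= with a smaller one.
# Apply the same command to the real run_cpu / run_gpu.
sed -i 's/save_freq=[0-9]*/save_freq=50000/' run_cpu_demo2

cat run_cpu_demo2   # save_freq=50000
```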
To load the network, change the netfile parameter in run_cpu or run_gpu to the corresponding file name, but with properly escaped quotes. For example, change

netfile="\"convnet_atari3\""

so that it names your saved network file instead. Remember to change it back to "\"convnet_atari3\"" whenever you want to create a new network instead of loading an existing one.
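To make the escaped-quote syntax concrete, here is what the assignment looks like with a placeholder file name (my_network.t7 is hypothetical; substitute the file the training run actually saved in the dqn folder):

```shell
# Hypothetical example of the netfile line in run_cpu / run_gpu.
# The backslash-escaped quotes are required, so the variable's value
# ends up containing literal double quotes: "my_network.t7"
netfile="\"my_network.t7\""
```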
Did it work for you? Tell me in the comments.