Run Run Shaw Library, City University of Hong Kong

Please use this identifier to cite or link to this item: http://dspace.cityu.edu.hk/handle/2031/9493
Full metadata record
DC Field | Value | Language
dc.contributor.author | Cheung, Yiu Chung Jeffrey | en_US
dc.date.accessioned | 2021-11-17T04:08:44Z | -
dc.date.available | 2021-11-17T04:08:44Z | -
dc.date.issued | 2021 | en_US
dc.identifier.other | 2021eecycj115 | en_US
dc.identifier.uri | http://dspace.cityu.edu.hk/handle/2031/9493 | -
dc.description.abstract | Playing against other players in games is often considered better than playing against AI, largely because traditional AI behaves in repetitive, predictable ways. The core problem is that traditional AI follows fixed rules defined in code: it does not learn from mistakes or try new approaches the way a real player would. Playing against AI is nonetheless a legitimate choice, so it is worth investigating how to improve it. This study aims to address the repetitiveness and predictability of traditional game AI by applying machine learning algorithms, using Q-learning and a neural network to train a robot in Robocode. Robocode is a programming game in which the goal is to develop a robot battle tank that fights other tanks. The tanks in this study were trained against SpinBot, one of the sample tanks included with Robocode. Features such as the player's coordinates and the distance and angle to the enemy tank were used to form a Q-table, which was then trained against SpinBot with an epsilon-greedy Q-learning algorithm; given sufficient data, the Q-table selects the best action for any given state. However, because a Q-table must store every possible state, the number of inputs is limited. To avoid storing all possible state combinations, this study also replaced the Q-table with a neural network trained against SpinBot. The network takes the same inputs as the Q-table and has a single output node indicating the Q-value. After training with Q-learning and the neural network, an increase in score can be observed when comparing untrained and trained agents. The trained agents are also harder to predict, which is difficult to achieve with traditional hard-coded AI. This project can act as a starting point for further studies of machine learning AI in games. | en_US
dc.rights | This work is protected by copyright. Reproduction or distribution of the work in any format is prohibited without written permission of the copyright owner. | en_US
dc.rights | Access is restricted to CityU users. | en_US
dc.title | Machine Learning AI in Computer Games | en_US
dc.contributor.department | Department of Electrical Engineering | en_US
dc.description.supervisor | Supervisor: Prof. Leung, Andrew C S; Assessor: Dr. Yuen, Kelvin S Y | en_US
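The abstract above describes training with an epsilon-greedy, tabular Q-learning algorithm. The following is a minimal illustrative sketch of that technique, not the project's actual code: it uses a hypothetical toy environment (a 5-state line world) in place of the project's real Robocode features (player coordinates, distance and angle to the enemy tank), and the hyperparameter values are assumptions.

```python
import random

def train_q_table(n_states, n_actions, step, episodes=500,
                  alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular epsilon-greedy Q-learning over a toy environment."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: explore a random action with probability epsilon,
            # otherwise exploit the current best-known action
            if rng.random() < epsilon:
                action = rng.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[state][a])
            next_state, reward, done = step(state, action)
            # Q-learning update toward reward plus discounted best next value
            best_next = max(q[next_state])
            q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
            state = next_state
    return q

# Hypothetical toy environment: 5 states in a line; action 1 moves right,
# action 0 moves left; reaching state 4 yields reward 1 and ends the episode.
def step(state, action):
    next_state = min(4, state + 1) if action == 1 else max(0, state - 1)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4

q = train_q_table(n_states=5, n_actions=2, step=step)
```

As the abstract notes, the same inputs can instead feed a small neural network with a single output node estimating the Q-value, which removes the need to enumerate and store every possible state in a table.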
Appears in Collections: Electrical Engineering - Undergraduate Final Year Projects

Files in This Item:
File | Size | Format
fulltext.html | 149 B | HTML


Items in Digital CityU Collections are protected by copyright, with all rights reserved, unless otherwise indicated.
