DC Field | Value | Language
dc.contributor.author | Runarsson, TP | |
dc.contributor.author | Lucas, SM | |
dc.date.accessioned | 2020-12-17T17:48:55Z | |
dc.date.available | 2020-12-17T17:48:55Z | |
dc.date.issued | 2014-09-01 | |
dc.identifier.issn | 1943-068X | |
dc.identifier.uri | https://qmro.qmul.ac.uk/xmlui/handle/123456789/69412 | |
dc.description.abstract | © 2013 IEEE. This paper investigates the use of preference learning as an approach to move prediction and evaluation function approximation, using the game of Othello as a test domain. Using the same sets of features, we compare our approach with least-squares temporal difference learning, direct classification, and the Bradley-Terry model fitted using minorization-maximization (MM). The results show that the exact way in which preference learning is applied is critical to achieving high performance. The best results were obtained using a combination of board inversion and pairwise preference learning. This combination significantly outperformed the others under test, both in move prediction accuracy and in the level of play achieved when using the learned evaluation function as a move selector during game play. | en_US
dc.format.extent | 300 - 313 | |
dc.relation.ispartof | IEEE Transactions on Computational Intelligence and AI in Games | |
dc.title | Preference learning for move prediction and evaluation function approximation in Othello | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/TCIAIG.2014.2307272 | |
pubs.issue | 3 | en_US |
pubs.notes | Not known | en_US |
pubs.publication-status | Published | en_US |
pubs.volume | 6 | en_US |
rioxxterms.funder | Default funder | en_US |
rioxxterms.identifier.project | Default project | en_US |