Simple item record

dc.contributor.author: Runarsson, TP
dc.contributor.author: Lucas, SM
dc.date.accessioned: 2020-12-17T17:48:55Z
dc.date.available: 2020-12-17T17:48:55Z
dc.date.issued: 2014-09-01
dc.identifier.issn: 1943-068X
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/69412
dc.description.abstract: © 2013 IEEE. This paper investigates the use of preference learning as an approach to move prediction and evaluation function approximation, using the game of Othello as a test domain. Using the same sets of features, we compare our approach with least squares temporal difference learning, direct classification, and with the Bradley-Terry model, fitted using minorization-maximization (MM). The results show that the exact way in which preference learning is applied is critical to achieving high performance. Best results were obtained using a combination of board inversion and pair-wise preference learning. This combination significantly outperformed the others under test, both in terms of move prediction accuracy and in the level of play achieved when using the learned evaluation function as a move selector during game play. [en_US]
dc.format.extent: 300 - 313
dc.relation.ispartof: IEEE Transactions on Computational Intelligence and AI in Games
dc.title: Preference learning for move prediction and evaluation function approximation in Othello [en_US]
dc.type: Article [en_US]
dc.identifier.doi: 10.1109/TCIAIG.2014.2307272
pubs.issue: 3 [en_US]
pubs.notes: Not known [en_US]
pubs.publication-status: Published [en_US]
pubs.volume: 6 [en_US]
rioxxterms.funder: Default funder [en_US]
rioxxterms.identifier.project: Default project [en_US]
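
The abstract describes combining board inversion with pair-wise preference learning and then using the learned evaluation function as a move selector. The sketch below is one plausible reading of those two ingredients, not the authors' implementation: the flat one-weight-per-square feature map, the hinge-loss SGD learner, the board encoding (+1 for the player to move, -1 for the opponent, 0 for empty), and all function names are illustrative assumptions.

```python
import numpy as np

def invert(board):
    # Board inversion: re-sign the 8x8 array so the position is always
    # seen from the player to move; one shared evaluation function then
    # serves both colours. (Encoding is an assumption for this sketch.)
    return -board

def features(board):
    # Minimal feature map: one weight per square. The paper compares
    # methods using the same, richer feature sets; this flat encoding
    # is for illustration only.
    return board.flatten().astype(float)

def pairwise_pairs(successors, expert_index):
    # Pair-wise preferences: the successor reached by the expert's move
    # is preferred over every other legal successor from the same
    # position. Successors have the opponent to move, so invert() views
    # each from the perspective of the player who just moved.
    best = invert(successors[expert_index])
    return [(best, invert(s))
            for i, s in enumerate(successors) if i != expert_index]

def train(preference_pairs, dim=64, epochs=10, lr=0.01):
    # One reasonable preference-learning objective (the paper's exact
    # learner may differ): hinge-loss SGD pushing
    # w . phi(better) - w . phi(worse) above a unit margin.
    w = np.zeros(dim)
    for _ in range(epochs):
        for better, worse in preference_pairs:
            diff = features(better) - features(worse)
            if w @ diff < 1.0:   # margin violated
                w += lr * diff
    return w

def select_move(successors, w):
    # Learned evaluation as a move selector: play the move whose
    # successor board scores highest from the mover's perspective.
    return int(np.argmax([w @ features(invert(s)) for s in successors]))

# Toy usage with random positions; the expert is taken to prefer index 0.
rng = np.random.default_rng(0)
succs = [rng.integers(-1, 2, size=(8, 8)) for _ in range(4)]
w = train(pairwise_pairs(succs, expert_index=0))
print(select_move(succs, w))
```

In this framing, training pairs come only from positions with an expert move, and the learner never needs absolute position values, which is what distinguishes the pair-wise preference setup from direct classification or temporal difference targets.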


Files in this item


There are no files associated with this item.
