Show simple item record

dc.contributor.author      Sasso, R
dc.contributor.author      Conserva, M
dc.contributor.author      Rauber, P
dc.date.accessioned        2024-07-11T13:15:08Z
dc.date.available          2024-07-11T13:15:08Z
dc.date.issued             2023-01-01
dc.identifier.uri          https://qmro.qmul.ac.uk/xmlui/handle/123456789/98021
dc.description.abstract    Despite remarkable successes, deep reinforcement learning algorithms remain sample inefficient: they require an enormous amount of trial and error to find good policies. Model-based algorithms promise sample efficiency by building an environment model that can be used for planning. Posterior Sampling for Reinforcement Learning is such a model-based algorithm that has attracted significant interest due to its performance in the tabular setting. This paper introduces Posterior Sampling for Deep Reinforcement Learning (PSDRL), the first truly scalable approximation of Posterior Sampling for Reinforcement Learning that retains its model-based essence. PSDRL combines efficient uncertainty quantification over latent state space models with a specially tailored continual planning algorithm based on value-function approximation. Extensive experiments on the Atari benchmark show that PSDRL significantly outperforms previous state-of-the-art attempts at scaling up posterior sampling while being competitive with a state-of-the-art (model-based) reinforcement learning method, both in sample efficiency and computational efficiency.   en_US
dc.format.extent           30042 - 30061
dc.title                   Posterior Sampling for Deep Reinforcement Learning   en_US
dc.type                    Conference Proceeding   en_US
pubs.notes                 Not known   en_US
pubs.publication-status    Published   en_US
pubs.volume                202   en_US
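
The abstract above describes scaling up tabular Posterior Sampling for Reinforcement Learning (PSRL). As a rough illustration of the idea being scaled, here is a minimal sketch of tabular PSRL with a Dirichlet posterior over transitions and a Gaussian posterior over rewards. The `env` interface (integer states, `reset()`/`step()` returning `(next_state, reward, done)`), the hyperparameters, and all names are illustrative assumptions, not code or details from the paper.

import numpy as np

def value_iteration(P, R, gamma=0.95, iters=200):
    """Solve the sampled MDP: P has shape [S, A, S], R has shape [S, A]."""
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)          # greedy state values under current Q
        Q = R + gamma * (P @ V)    # Bellman optimality backup; P @ V has shape [S, A]
    return Q

def psrl(env, num_states, num_actions, episodes=100, horizon=50, gamma=0.95):
    """Sketch of tabular PSRL (assumed env interface: reset() -> int state,
    step(a) -> (next_state, reward, done))."""
    # Conjugate posteriors: Dirichlet(1) prior over transitions,
    # Gaussian over mean rewards with unit pseudo-count.
    trans_counts = np.ones((num_states, num_actions, num_states))
    rew_sum = np.zeros((num_states, num_actions))
    rew_n = np.ones((num_states, num_actions))

    for _ in range(episodes):
        # 1. Sample one environment model from the current posterior.
        P = np.array([[np.random.dirichlet(trans_counts[s, a])
                       for a in range(num_actions)] for s in range(num_states)])
        R = np.random.normal(rew_sum / rew_n, 1.0 / np.sqrt(rew_n))

        # 2. Plan in the sampled model.
        Q = value_iteration(P, R, gamma)

        # 3. Act greedily w.r.t. the sampled model for one episode, updating the posterior.
        s = env.reset()
        for _ in range(horizon):
            a = int(Q[s].argmax())
            s_next, r, done = env.step(a)
            trans_counts[s, a, s_next] += 1
            rew_sum[s, a] += r
            rew_n[s, a] += 1
            s = s_next
            if done:
                break
    return Q  # Q-values of the last sampled model, for inspection

Sampling a single model per episode and acting greedily against it is what drives exploration in posterior sampling; according to the abstract, PSDRL replaces the tabular posterior with uncertainty quantification over latent state space models and replaces the exact planning step with continual value-function approximation.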

