
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and we hope you find this listing of citing articles useful!
Clicking an article title takes you to the article as listed in CrossRef; clicking an Open Access link takes you to its "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
Requested Article:
Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization
Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 48
Showing 1-25 of 48 citing articles:
Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
Mingzhe Chen, Deniz Gündüz, Kaibin Huang, et al.
IEEE Journal on Selected Areas in Communications (2021) Vol. 39, Iss. 12, pp. 3579-3605
Open Access | Times Cited: 347
Deep learning, reinforcement learning, and world models
Yutaka Matsuo, Yann LeCun, Maneesh Sahani, et al.
Neural Networks (2022) Vol. 152, pp. 267-275
Open Access | Times Cited: 263
A Minimalist Approach to Offline Reinforcement Learning
Scott Fujimoto, Shixiang Gu
arXiv (Cornell University) (2021)
Open Access | Times Cited: 134
COMBO: Conservative Offline Model-Based Policy Optimization
Tianhe Yu, Aviral Kumar, Rafael Rafailov, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 21
Model-Based Offline Planning
Arthur Argenson, Gabriel Dulac-Arnold
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 21
Online and Offline Reinforcement Learning by Planning with a Learned Model
Julian Schrittwieser, Thomas Hubert, Amol Mandhane, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 20
Offline Reinforcement Learning with Reverse Model-based Imagination
Jianhao Wang, Wenzhe Li, Haozhe Jiang, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 18
A Survey of Demonstration Learning
André Correia, Luís A. Alexandre
(2023)
Open Access | Times Cited: 7
Policy-Adaptive Estimator Selection for Off-Policy Evaluation
Takuma Udagawa, Haruka Kiyohara, Yusuke Narita, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2023) Vol. 37, Iss. 8, pp. 10025-10033
Open Access | Times Cited: 7
A survey of demonstration learning
André Correia, Luís A. Alexandre
Robotics and Autonomous Systems (2024), pp. 104812-104812
Open Access | Times Cited: 2
Model-Based Offline Planning
Arthur Argenson, Gabriel Dulac-Arnold
International Conference on Learning Representations (2021)
Closed Access | Times Cited: 14
Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning
Tengyang Xie, Nan Jiang, Huan Wang, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 12
OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning
Anurag Ajay, Aviral Kumar, Pulkit Agrawal, et al.
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 12
Demonstration-Bootstrapped Autonomous Practicing via Multi-Task Reinforcement Learning
Abhishek Gupta, Corey Lynch, Brandon Kinman, et al.
(2023)
Open Access | Times Cited: 4
Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization
Michael R. Zhang, Thomas Paine, Ofir Nachum, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 10
Representation Matters: Offline Pretraining for Sequential Decision Making.
Mengjiao Yang, Ofir Nachum
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 9
Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL
Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 9
Representation Balancing Offline Model-based Reinforcement Learning
Byung-Jun Lee, Jongmin Lee, Kee-Eung Kim
(2021)
Closed Access | Times Cited: 8
OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning
Anurag Ajay, Aviral Kumar, Pulkit Agrawal, et al.
International Conference on Learning Representations (2021)
Closed Access | Times Cited: 8
Mitigating Covariate Shift in Imitation Learning via Offline Data Without Great Coverage
Jonathan Chang, Masatoshi Uehara, Dhruv Sreenivas, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 8
Offline Reinforcement Learning from Images with Latent Space Models
Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, et al.
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 8
MUSBO: Model-based Uncertainty Regularized and Sample Efficient Batch Optimization for Deployment Constrained Reinforcement Learning
DiJia Su, Jason D. Lee, John M. Mulvey, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 6
Online Tuning for Offline Decentralized Multi-Agent Reinforcement Learning
Jiechuan Jiang, Zongqing Lu
Proceedings of the AAAI Conference on Artificial Intelligence (2023) Vol. 37, Iss. 7, pp. 8050-8059
Open Access | Times Cited: 2
Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage
Masatoshi Uehara, W. Sun
arXiv (Cornell University) (2021)
Open Access | Times Cited: 5
Revisiting Design Choices in Model-Based Offline Reinforcement Learning
Cong Lu, Philip Ball, Jack Parker-Holder, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 4