
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!
Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location". Clicking a citation count opens this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
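If you'd rather pull a listing like this programmatically, the same data is exposed by the public OpenAlex REST API. Below is a minimal sketch in Python (using the `requests` library) that fetches the first page of works citing a given article via the documented `filter=cites:` query; the work ID shown is a placeholder, so substitute the actual OpenAlex ID of the requested article.

```python
# Minimal sketch: fetch the first page of works citing a given OpenAlex work.
# The work ID below is a placeholder; look up the real ID for the requested
# article (e.g. via a title search) on openalex.org before running this.
import requests

OPENALEX_WORKS = "https://api.openalex.org/works"
CITED_WORK_ID = "W0000000000"  # placeholder OpenAlex work ID

params = {
    "filter": f"cites:{CITED_WORK_ID}",
    "sort": "cited_by_count:desc",  # most-cited citing articles first
    "per-page": 25,                 # matches the 25-per-page listing shown here
}
resp = requests.get(OPENALEX_WORKS, params=params, timeout=30)
resp.raise_for_status()
data = resp.json()

print(f"{data['meta']['count']} citing articles found")
for work in data["results"]:
    oa = work.get("open_access") or {}
    access = "Open Access" if oa.get("is_oa") else "Closed Access"
    print(f"{work['display_name']} ({work.get('publication_year')}) "
          f"| {access} | Times Cited: {work.get('cited_by_count')}")
```

For listings longer than one page, the API also supports cursor pagination (`cursor=*` on the first request, then the `meta.next_cursor` value on subsequent ones), which is the usual way to walk through all 210 citing works.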
Requested Article:
MOPO: Model-based Offline Policy Optimization
Tianhe Yu, Garrett Thomas, Lantao Yu, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 210
Showing 1-25 of 210 citing articles:
On the Opportunities and Risks of Foundation Models
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 1546
Challenges of real-world reinforcement learning: definitions, benchmarks and analysis
Gabriel Dulac-Arnold, Nir Levine, Daniel J. Mankowitz, et al.
Machine Learning (2021) Vol. 110, Iss. 9, pp. 2419-2468
Open Access | Times Cited: 320
Mastering Atari with Discrete World Models
Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 171
MOReL : Model-Based Offline Reinforcement Learning
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 147
QPLEX: Duplex Dueling Multi-Agent Q-Learning
Jianhao Wang, Zhizhou Ren, Terry Z. Liu, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 131
Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism
Paria Rashidinejad, Banghua Zhu, Cong Ma, et al.
IEEE Transactions on Information Theory (2022) Vol. 68, Iss. 12, pp. 8156-8196
Open Access | Times Cited: 40
Comparative study of model-based and model-free reinforcement learning control performance in HVAC systems
Cheng Gao, Dan Wang
Journal of Building Engineering (2023) Vol. 74, pp. 106852-106852
Closed Access | Times Cited: 40
Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation
Chongming Gao, Kexin Huang, Jiawei Chen, et al.
Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (2023), pp. 238-248
Open Access | Times Cited: 25
What Matters in Learning from Offline Human Demonstrations for Robot Manipulation
Ajay Mandlekar, Danfei Xu, Josiah Wong, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 52
DeepThermal: Combustion Optimization for Thermal Power Generating Units Using Offline Reinforcement Learning
Xianyuan Zhan, Haoran Xu, Yue Zhang, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2022) Vol. 36, Iss. 4, pp. 4680-4688
Open Access | Times Cited: 31
Finite-Sample Guarantees for Wasserstein Distributionally Robust Optimization: Breaking the Curse of Dimensionality
Rui Gao
Operations Research (2022) Vol. 71, Iss. 6, pp. 2291-2306
Open Access | Times Cited: 28
An empirical investigation of the challenges of real-world reinforcement learning.
Gabriel Dulac-Arnold, Nir Levine, Daniel J. Mankowitz, et al.
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 48
Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization
Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 48
Safe chance constrained reinforcement learning for batch process control
Max Mowbray, Panagiotis Petsagkourakis, Ehecatl Antonio del Rio‐Chanona, et al.
Computers & Chemical Engineering (2021) Vol. 157, pp. 107630-107630
Open Access | Times Cited: 33
Overcoming model bias for robust offline deep reinforcement learning
Phillip Swazinna, Steffen Udluft, Thomas A. Runkler
Engineering Applications of Artificial Intelligence (2021) Vol. 104, pp. 104366-104366
Open Access | Times Cited: 33
Learning Risk-Aware Costmaps via Inverse Reinforcement Learning for Off-Road Navigation
Samuel Triest, Mateo Guaman Castro, Parv Maheshwari, et al.
(2023)
Open Access | Times Cited: 13
Real-world robot applications of foundation models: a review
Kento Kawaharazuka, Tatsuya Matsushima, Andrew Gambardella, et al.
Advanced Robotics (2024) Vol. 38, Iss. 18, pp. 1232-1254
Open Access | Times Cited: 5
COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning
Avi Singh, Albert S. Yu, T. Jonathan Yang, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 33
Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification
Youngseog Chung, Ian Char, Han Guo, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 31
Pessimistic Reward Models for Off-Policy Learning in Recommendation
Olivier Jeunen, Bart Goethals
(2021), pp. 63-74
Closed Access | Times Cited: 28
A model-based hybrid soft actor-critic deep reinforcement learning algorithm for optimal ventilator settings
Shaotao Chen, Xihe Qiu, Xiaoyu Tan, et al.
Information Sciences (2022) Vol. 611, pp. 47-64
Closed Access | Times Cited: 22
A Review of Uncertainty for Deep Reinforcement Learning
Owen Lockwood, Mei Si
Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (2022) Vol. 18, Iss. 1, pp. 155-162
Open Access | Times Cited: 21
Constraints Penalized Q-learning for Safe Offline Reinforcement Learning
Haoran Xu, Xianyuan Zhan, Xiangyu Zhu
Proceedings of the AAAI Conference on Artificial Intelligence (2022) Vol. 36, Iss. 8, pp. 8753-8760
Open Access | Times Cited: 20
Efficient Preference-Based Reinforcement Learning Using Learned Dynamics Models
Yi Liu, Gaurav Datta, Ellen Novoseller, et al.
(2023), pp. 2921-2928
Open Access | Times Cited: 11
Conservative reward enhancement through the nearest neighbor integration in model-based Offline Policy Optimization
Xue Li, Bangjun Wang, Xinghong Ling
Expert Systems with Applications (2025), pp. 126888-126888
Closed Access