OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to its "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.

Requested Article:

MOReL: Model-Based Offline Reinforcement Learning
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 147

Showing 1-25 of 147 citing articles:

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 733

Challenges of real-world reinforcement learning: definitions, benchmarks and analysis
Gabriel Dulac-Arnold, Nir Levine, Daniel J. Mankowitz, et al.
Machine Learning (2021) Vol. 110, Iss. 9, pp. 2419-2468
Open Access | Times Cited: 320

MOPO: Model-based Offline Policy Optimization
Tianhe Yu, Garrett Thomas, Lantao Yu, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 210

Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism
Paria Rashidinejad, Banghua Zhu, Cong Ma, et al.
IEEE Transactions on Information Theory (2022) Vol. 68, Iss. 12, pp. 8156-8196
Open Access | Times Cited: 40

Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation
Chongming Gao, Kexin Huang, Jiawei Chen, et al.
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (2023), pp. 238-248
Open Access | Times Cited: 25

What Matters in Learning from Offline Human Demonstrations for Robot Manipulation
Ajay Mandlekar, Danfei Xu, Josiah Wong, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 52

DeepThermal: Combustion Optimization for Thermal Power Generating Units Using Offline Reinforcement Learning
Xianyuan Zhan, Haoran Xu, Yue Zhang, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2022) Vol. 36, Iss. 4, pp. 4680-4688
Open Access | Times Cited: 31

Reinforcement learning and bandits for speech and language processing: Tutorial, review and outlook
Baihan Lin
Expert Systems with Applications (2023) Vol. 238, pp. 122254-122254
Open Access | Times Cited: 17

Federated Offline Reinforcement Learning
Doudou Zhou, Yufeng Zhang, Aaron Sonabend-W, et al.
Journal of the American Statistical Association (2024), pp. 1-12
Open Access | Times Cited: 6

An empirical investigation of the challenges of real-world reinforcement learning
Gabriel Dulac-Arnold, Nir Levine, Daniel J. Mankowitz, et al.
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 48

Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization
Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 48

Safe chance constrained reinforcement learning for batch process control
Max Mowbray, Panagiotis Petsagkourakis, Ehecatl Antonio del Rio‐Chanona, et al.
Computers & Chemical Engineering (2021) Vol. 157, pp. 107630-107630
Open Access | Times Cited: 33

Overcoming model bias for robust offline deep reinforcement learning
Phillip Swazinna, Steffen Udluft, Thomas A. Runkler
Engineering Applications of Artificial Intelligence (2021) Vol. 104, pp. 104366-104366
Open Access | Times Cited: 33

AlphaPortfolio for Investment and Economically Interpretable AI
Lin William Cong, Ke Tang, Jingyuan Wang, et al.
SSRN Electronic Journal (2020)
Closed Access | Times Cited: 34

A Survey of Sim-to-Real Transfer Techniques Applied to Reinforcement Learning for Bioinspired Robots
Wei Zhu, Xian Guo, Dai Owaki, et al.
IEEE Transactions on Neural Networks and Learning Systems (2021) Vol. 34, Iss. 7, pp. 3444-3459
Closed Access | Times Cited: 32

Pessimistic Reward Models for Off-Policy Learning in Recommendation
Olivier Jeunen, Bart Goethals
(2021), pp. 63-74
Closed Access | Times Cited: 28

Constraints Penalized Q-learning for Safe Offline Reinforcement Learning
Haoran Xu, Xianyuan Zhan, Xiangyu Zhu
Proceedings of the AAAI Conference on Artificial Intelligence (2022) Vol. 36, Iss. 8, pp. 8753-8760
Open Access | Times Cited: 20

Conservative reward enhancement through the nearest neighbor integration in model-based Offline Policy Optimization
Xue Li, Bangjun Wang, Xinghong Ling
Expert Systems with Applications (2025), pp. 126888-126888
Closed Access

Trajectory self-correction and uncertainty estimation for enhanced model-based policy optimization
Shan Zhong, Xin Du, Kaijian Xia, et al.
Expert Systems with Applications (2025), pp. 126993-126993
Closed Access

Pessimistic policy iteration with bounded uncertainty
Zhiyong Peng, Changlin Han, Yadong Liu, et al.
Expert Systems with Applications (2025), pp. 127651-127651
Closed Access

Automatic Tuning for Data-driven Model Predictive Control
William R. Edwards, Gao Tang, Giorgos Mamakoukas, et al.
(2021), pp. 7379-7385
Closed Access | Times Cited: 22

COMBO: Conservative Offline Model-Based Policy Optimization
Tianhe Yu, Aviral Kumar, Rafael Rafailov, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 21

Doubly constrained offline reinforcement learning for learning path recommendation
Yun Yue, Huan Dai, Rui An, et al.
Knowledge-Based Systems (2023) Vol. 284, pp. 111242-111242
Closed Access | Times Cited: 8

Is Pessimism Provably Efficient for Offline RL?
Ying Jin, Zhuoran Yang, Zhaoran Wang
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 23

Model-Based Offline Planning
Arthur Argenson, Gabriel Dulac-Arnold
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 21
