OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location". Clicking a citation count opens this listing for that article. Finally, basic pagination options appear at the bottom of the page.
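A listing like this can be reproduced against the public OpenAlex API, which supports a `cites:` filter on the works endpoint. The sketch below builds such a query URL and pulls the fields shown in each entry here (title, year, citation count) out of a result record; the work ID used in the example is illustrative, not the actual OpenAlex ID of the requested article.

```python
from urllib.parse import urlencode

OPENALEX_API = "https://api.openalex.org/works"

def citing_works_url(work_id, per_page=25, page=1):
    """Build an OpenAlex query URL for works that cite `work_id`."""
    params = {"filter": f"cites:{work_id}", "per-page": per_page, "page": page}
    return f"{OPENALEX_API}?{urlencode(params)}"

def summarize(result):
    """Extract the fields shown in this listing from one API result record."""
    return (result.get("title"),
            result.get("publication_year"),
            result.get("cited_by_count"))

# Illustrative work ID; fetch the URL with any HTTP client and
# apply `summarize` to each entry of the JSON response's "results" list.
print(citing_works_url("W3034217928"))
```

Paging through the full set of 38 citing articles is a matter of incrementing the `page` parameter, which matches the pagination controls at the bottom of this page.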

Requested Article:

RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning
Çağlar Gülçehre, Ziyu Wang, Alexander Novikov, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 38

Showing 1-25 of 38 citing articles:

A Survey of Zero-shot Generalisation in Deep Reinforcement Learning
Robert Kirk, Amy Zhang, Edward Grefenstette, et al.
Journal of Artificial Intelligence Research (2023) Vol. 76, pp. 201-264
Open Access | Times Cited: 73

A Survey of Generalisation in Deep Reinforcement Learning
Robert Kirk, Amy Zhang, Edward Grefenstette, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 58

Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism
Paria Rashidinejad, Banghua Zhu, Cong Ma, et al.
IEEE Transactions on Information Theory (2022) Vol. 68, Iss. 12, pp. 8156-8196
Open Access | Times Cited: 40

Hyperparameter Selection for Offline Reinforcement Learning
Tom Le Paine, Cosmin Păduraru, Andrea Michi, et al.
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 40

Is Pessimism Provably Efficient for Offline RL?
Ying Jin, Zhuoran Yang, Zhaoran Wang
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 23

Model-Based Offline Planning
Arthur Argenson, Gabriel Dulac-Arnold
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 21

Online and Offline Reinforcement Learning by Planning with a Learned Model
Julian Schrittwieser, Thomas Hubert, Amol Mandhane, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 20

A survey of demonstration learning
André Correia, Luís A. Alexandre
Robotics and Autonomous Systems (2024), Art. 104812
Open Access | Times Cited: 2

Scalable Reinforcement Learning Framework for Traffic Signal Control under Communication Delays
Aoyu Pang, Maonan Wang, Yirong Chen, et al.
IEEE Open Journal of Vehicular Technology (2024) Vol. 5, pp. 330-343
Open Access | Times Cited: 1

CUDC: A Curiosity-Driven Unsupervised Data Collection Method with Adaptive Temporal Distances for Offline Reinforcement Learning
Chenyu Sun, Hangwei Qian, Chunyan Miao
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 13, pp. 15145-15153
Open Access | Times Cited: 1

Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL
Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 9

Vector Quantized Models for Planning
Sherjil Ozair, Yazhe Li, Ali Razavi, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 9

Value Iteration in Continuous Actions, States and Time
Michael Lutter, Shie Mannor, Jan Peters, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 8

Offline Reinforcement Learning from Images with Latent Space Models
Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, et al.
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 8

Benchmarks for Deep Off-Policy Evaluation
Justin Fu, Mohammad Norouzi, Ofir Nachum, et al.
(2021)
Closed Access | Times Cited: 6

Provable Representation Learning for Imitation with Contrastive Fourier Features
Ofir Nachum, Mengjiao Yang
arXiv (Cornell University) (2021)
Open Access | Times Cited: 6

S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning
Samarth Sinha, Animesh Garg
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 5

Regularized Behavior Value Estimation
Çağlar Gülçehre, Sergio Gómez Colmenarejo, Ziyu Wang, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 4

Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning
Siyuan Zhang, Nan Jiang
arXiv (Cornell University) (2021)
Open Access | Times Cited: 4

Continuous-Time Fitted Value Iteration for Robust Policies
Michael Lutter, Boris Belousov, Shie Mannor, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2022), pp. 1-15
Open Access | Times Cited: 3

A Study of Model Based and Model Free Offline Reinforcement Learning
Indu Shukla, Haley Dozier, Althea C. Henslee
2021 International Conference on Computational Science and Computational Intelligence (CSCI) (2022), pp. 315-316
Closed Access | Times Cited: 3

Real-Data-Driven Offline Reinforcement Learning for Autonomous Vehicle Speed Decision Making
Jiachen Hao, Shuyuan Xu, Chen Xue-mei, et al.
2022 34th Chinese Control and Decision Conference (CCDC) (2024), pp. 2504-2511
Closed Access

Using Genetic Programming to Improve Data Collection for Offline Reinforcement Learning
D. Halder, Fernando Bação, Georgios Douzas
(2024)
Closed Access

REALab: An Embedded Perspective on Tampering
Ramana Kumar, Jonathan Uesato, Richard Ngo, et al.
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 3

Implicit Behavioral Cloning
Pete Florence, Corey Lynch, Andy Zeng, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 3

Page 1 - Next Page