OpenAlex Citation Counts

OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the ancient Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to that article as listed in CrossRef. If you click an Open Access link, you'll navigate to the article's "best Open Access location". Clicking a citation count opens this listing for that article. Finally, you'll find basic pagination options at the bottom of the page.
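
If you'd like to reproduce a listing like this programmatically, the same data is exposed by the public OpenAlex REST API. The sketch below (Python, using the requests library) fetches one page of works that cite a given article; the work ID shown is a placeholder, not the actual OpenAlex identifier of the requested article, and the field names reflect the OpenAlex works schema as I understand it.

```python
# Minimal sketch: fetch a page of citing articles from the OpenAlex API.
# W0000000000 is a placeholder work ID, not the real ID of
# "Critic Regularized Regression".
import requests

WORK_ID = "W0000000000"  # placeholder OpenAlex work ID

resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": f"cites:{WORK_ID}",  # works that cite the requested article
        "per-page": 25,                # page size matching this listing
        "page": 2,                     # second page => results 26-50
    },
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    is_oa = work.get("open_access", {}).get("is_oa", False)
    print(
        work["display_name"],
        "| Open Access" if is_oa else "| Closed Access",
        "| Times Cited:", work.get("cited_by_count", 0),
    )
```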

Requested Article:

Critic Regularized Regression
Ziyu Wang, Alexander Novikov, Konrad Żołna, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 88

Showing 26-50 of 88 citing articles:

Demonstration-Bootstrapped Autonomous Practicing via Multi-Task Reinforcement Learning
Abhishek Gupta, Corey Lynch, Brandon Kinman, et al.
(2023)
Open Access | Times Cited: 4

Learning eco-driving strategies from human driving trajectories
Xiaoyu Shi, Jian Zhang, Xia Jiang, et al.
Physica A: Statistical Mechanics and its Applications (2023) Vol. 633, pp. 129353-129353
Closed Access | Times Cited: 4

Pessimistic value iteration for multi-task data sharing in Offline Reinforcement Learning
Chenjia Bai, Lingxiao Wang, Jianye Hao, et al.
Artificial Intelligence (2023) Vol. 326, pp. 104048-104048
Open Access | Times Cited: 4

EasyRL4Rec: An Easy-to-use Library for Reinforcement Learning Based Recommender Systems
Yuanqing Yu, Chongming Gao, Jiawei Chen, et al.
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (2024), pp. 977-987
Open Access | Times Cited: 1

A multi-agent curiosity reward model for task-oriented dialogue systems
Jingtao Sun, Jiayin Kou, Wenyan Hou, et al.
Pattern Recognition (2024) Vol. 157, pp. 110884-110884
Closed Access | Times Cited: 1

ALR-HT: A fast and efficient Lasso regression without hyperparameter tuning
Yuhang Wang, Bin Zou, Jie Xu, et al.
Neural Networks (2024) Vol. 181, pp. 106885-106885
Closed Access | Times Cited: 1

Offline Learning from Demonstrations and Unlabeled Experience
Konrad Żołna, Alexander Novikov, Ksenia Konyushkova, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 11

Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization
Michael R. Zhang, Thomas Paine, Ofir Nachum, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 10

Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning
Andrea Zanette, Martin J. Wainwright, Emma Brunskill
arXiv (Cornell University) (2021)
Open Access | Times Cited: 10

How to Spend Your Robot Time: Bridging Kickstarting and Offline Reinforcement Learning for Vision-based Robotic Manipulation
Alex X. Lee, Coline Devin, Jost Tobias Springenberg, et al.
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2022)
Open Access | Times Cited: 7

Batch-Constrained Distributional Reinforcement Learning for Session-based Recommendation
D. Garg, Priyanka Gupta, Pankaj Malhotra, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 9

Offline Reinforcement Learning with Soft Behavior Regularization
Haoran Xu, Xianyuan Zhan, Jianxiong Li, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 9

The Importance of Pessimism in Fixed-Dataset Policy Optimization
Jacob Buckman, Carles Gelada, Marc G. Bellemare
(2021)
Closed Access | Times Cited: 8

Real World Offline Reinforcement Learning with Realistic Data Source
Gaoyue Zhou, Liyiming Ke, Siddhartha S Srinivasa, et al.
(2023) Vol. 33, pp. 7176-7183
Open Access | Times Cited: 3

Medical Dead-ends and Learning to Identify High-risk States and Treatments
Mehdi Fatemi, Taylor W. Killian, Jayakumar Subramanian, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 7

Benchmarks for Deep Off-Policy Evaluation
Justin Fu, Mohammad Norouzi, Ofir Nachum, et al.
(2021)
Closed Access | Times Cited: 6

Offline Quantum Reinforcement Learning in a Conservative Manner
Zhihao Cheng, Kaining Zhang, Li Shen, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2023) Vol. 37, Iss. 6, pp. 7148-7156
Open Access | Times Cited: 2

OER: Offline Experience Replay for Continual Offline Reinforcement Learning
Sibo Gai, Donglin Wang, Li He
Frontiers in Artificial Intelligence and Applications (2023)
Open Access | Times Cited: 2

Context-Aware Language Modeling for Goal-Oriented Dialogue Systems
Charlie Snell, Sherry X. Yang, Justin Fu, et al.
Findings of the Association for Computational Linguistics: NAACL 2022 (2022), pp. 2351-2366
Open Access | Times Cited: 4

Continuous Doubly Constrained Batch Reinforcement Learning
Rasool Fakoor, Jonas Mueller, Kavosh Asadi, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 5

Offline Reinforcement Learning Hands-On
Louis Monier, Jakub Kmec, Alexandre Laterre, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 4

Regularized Behavior Value Estimation.
Çağlar Gülçehre, Sergio Gómez Colmenarejo, Ziyu Wang, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 4

Identifying Co-Adaptation of Algorithmic and Implementational Innovations in Deep Reinforcement Learning: A Taxonomy and Case Study of Inference-based Algorithms.
Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 4

Critic-Guided Decision Transformer for Offline Reinforcement Learning
Yuanfu Wang, Chao Yang, Ying Wen, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 14, pp. 15706-15714
Open Access
