
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!
If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count opens this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
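If you prefer to pull the same data programmatically, OpenAlex exposes it through its works API: filtering on cites: returns the citing articles, and the per-page and page parameters handle pagination (page 2 at 25 results per page corresponds to articles 26-50). Below is a minimal, unofficial Python sketch; the OpenAlex work ID is a placeholder you would need to look up first, and the output formatting simply mirrors the rows on this page.

import requests

# Placeholder OpenAlex work ID for the requested article -- look it up first, e.g. via
# https://api.openalex.org/works?search=Implicit%20Q-Learning (the ID below is NOT real).
WORK_ID = "W0000000000"

# Page 2 at 25 results per page corresponds to "citing articles 26-50".
resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"cites:{WORK_ID}", "per-page": 25, "page": 2},
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    # First three authors, matching the "et al." style used on this page.
    names = [a["author"]["display_name"] for a in work["authorships"]]
    authors = ", ".join(names[:3]) + (", et al." if len(names) > 3 else "")
    source = (work.get("primary_location") or {}).get("source") or {}
    access = "Open Access" if work["open_access"]["is_oa"] else "Closed Access"
    print(work["display_name"])
    print(authors)
    print(f"{source.get('display_name', '')} ({work['publication_year']})")
    print(f"{access} | Times Cited: {work['cited_by_count']}")
    print()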
Requested Article:
Offline Reinforcement Learning with Implicit Q-Learning
Ilya Kostrikov, Ashvin Nair, Sergey Levine
arXiv (Cornell University) (2021)
Open Access | Times Cited: 107
Showing 26-50 of 107 citing articles:
ACT: Empowering Decision Transformer with Dynamic Programming via Advantage Conditioning
Chen-Xiao Gao, Chenyang Wu, Mingjun Cao, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 11, pp. 12127-12135
Open Access | Times Cited: 1
An Implicit Trust Region Approach to Behavior Regularized Offline Reinforcement Learning
Zhe Zhang, Xiaoyang Tan
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 15, pp. 16944-16952
Open Access | Times Cited: 1
An Off-Policy Reinforcement Learning Algorithm Customized for Multi-Task Fusion in Large-Scale Recommender Systems
Peng Liu
SSRN Electronic Journal (2024)
Open Access | Times Cited: 1
Modeling Bellman-error with logistic distribution with applications in reinforcement learning
Outongyi Lv, Bingxin Zhou, Lin F. Yang
Neural Networks (2024) Vol. 177, pp. 106387-106387
Closed Access | Times Cited: 1
A Comparative Study of Data-driven Offline Reinforcement Learning for Fed-batch Process Control
Omid Sobhani, Furkan Elmaz, Michiel Robeyn, et al.
Computer Aided Chemical Engineering (2024), pp. 3157-3162
Closed Access | Times Cited: 1
Guided Online Distillation: Promoting Safe Reinforcement Learning by Offline Demonstration
Jinning Li, Xinyi Liu, Banghua Zhu, et al.
(2024), pp. 7447-7454
Open Access | Times Cited: 1
Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning
Jingyun Yang, Max Sobol Mark, Brandon Vu, et al.
(2024), pp. 4804-4811
Open Access | Times Cited: 1
Offline reinforcement learning based feeding strategy of ethylene cracking furnace
Haojun Zhong, Zhenlei Wang, Yuzhe Hao
Computers & Chemical Engineering (2024) Vol. 192, pp. 108864-108864
Closed Access | Times Cited: 1
Reference RL: Reinforcement learning with reference mechanism and its application in traffic signal control
Yunxue Lu, Andreas Hegyi, A. Maria Salomons, et al.
Information Sciences (2024), pp. 121485-121485
Closed Access | Times Cited: 1
Offline reward shaping with scaling human preference feedback for deep reinforcement learning
Jinfeng Li, Biao Luo, Xiaodong Xu, et al.
Neural Networks (2024) Vol. 181, pp. 106848-106848
Closed Access | Times Cited: 1
How to Spend Your Robot Time: Bridging Kickstarting and Offline Reinforcement Learning for Vision-based Robotic Manipulation
Alex X. Lee, Coline Devin, Jost Tobias Springenberg, et al.
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2022)
Open Access | Times Cited: 7
StARformer: Transformer With State-Action-Reward Representations for Robot Learning
Jinghuan Shang, Xiang Li, Kumara Kahatapitiya, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2022) Vol. 45, Iss. 11, pp. 12862-12877
Closed Access | Times Cited: 6
Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space
Kuan Fang, Patrick Yin, Ashvin Nair, et al.
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2022), pp. 4076-4083
Open Access | Times Cited: 6
Adaptive Policy Learning for Offline-to-Online Reinforcement Learning
Han Zheng, Xufang Luo, Pengfei Wei, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2023) Vol. 37, Iss. 9, pp. 11372-11380
Open Access | Times Cited: 3
Real World Offline Reinforcement Learning with Realistic Data Source
Gaoyue Zhou, Liyiming Ke, Siddhartha S Srinivasa, et al.
(2023) Vol. 33, pp. 7176-7183
Open Access | Times Cited: 3
Cherry-Picking with Reinforcement Learning
Yunchu Zhang, Liyiming Ke, A. Deshpande, et al.
(2023)
Open Access | Times Cited: 3
Manipulate by Seeing: Creating Manipulation Controllers from Pre-Trained Representations
Jianren Wang, Sudeep Dasari, Mohan Kumar Srirama, et al.
2021 IEEE/CVF International Conference on Computer Vision (ICCV) (2023), pp. 3836-3845
Open Access | Times Cited: 3
Learning Human-Inspired Force Strategies for Robotic Assembly
Stefan Scherzinger, Arne Roennau, Rüdiger Dillmann
2022 IEEE 18th International Conference on Automation Science and Engineering (CASE) (2023), pp. 1-8
Open Access | Times Cited: 2
OER: Offline Experience Replay for Continual Offline Reinforcement Learning
Sibo Gai, Donglin Wang, Li He
Frontiers in artificial intelligence and applications (2023)
Open Access | Times Cited: 2
Safe Reinforcement Learning With Dead-Ends Avoidance and Recovery
Xiao Zhang, Hai Zhang, Hongtu Zhou, et al.
IEEE Robotics and Automation Letters (2023) Vol. 9, Iss. 1, pp. 491-498
Open Access | Times Cited: 2
Offline Reinforcement Learning for Wireless Network Optimization with Mixture Datasets
Kun Yang, Cong Shen, Jing Yang, et al.
Asilomar Conference on Signals, Systems and Computers (2023)
Open Access | Times Cited: 2
Context-Aware Language Modeling for Goal-Oriented Dialogue Systems
Charlie Snell, Sherry X. Yang, Justin Fu, et al.
Findings of the Association for Computational Linguistics: NAACL 2022 (2022), pp. 2351-2366
Open Access | Times Cited: 4
Diffusion Policies for Out-of-Distribution Generalization in Offline Reinforcement Learning
Suzan Ece Ada, Erhan Öztop, Emre Uğur
IEEE Robotics and Automation Letters (2024) Vol. 9, Iss. 4, pp. 3116-3123
Open Access
Offline Model-Based Optimization via Policy-Guided Gradient Search
Yassine Chemingui, Aryan Deshwal, Trong Nghia Hoang, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 10, pp. 11230-11239
Open Access
Optimistic Model Rollouts for Pessimistic Offline Policy Optimization
Yuanzhao Zhai, Yiying Li, Zijian Gao, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 15, pp. 16678-16686
Open Access