OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location", and clicking a citation count opens this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
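Listings like this one can also be retrieved programmatically from the OpenAlex API, which exposes citing works through the `cites:` filter on its `/works` endpoint. The sketch below only builds the query URL for one page of results; the work ID used is a placeholder for illustration, not the actual OpenAlex ID of the requested article.

```python
# Sketch: building an OpenAlex query for articles that cite a given work.
# "W4323049101" is a PLACEHOLDER work ID, used only for illustration.
from urllib.parse import urlencode

OPENALEX_WORKS = "https://api.openalex.org/works"

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Return the OpenAlex URL listing works that cite `work_id`,
    paginated 25 at a time (matching this page's layout) and sorted
    by citation count, most-cited first."""
    params = {
        "filter": f"cites:{work_id}",   # works whose references include work_id
        "page": page,                   # 1-based page number
        "per-page": per_page,           # results per page
        "sort": "cited_by_count:desc",  # most-cited citing articles first
    }
    return f"{OPENALEX_WORKS}?{urlencode(params)}"

# First page of citing works for the placeholder ID:
url = citing_works_url("W4323049101")
```

Fetching that URL (e.g. with `urllib.request` or `requests`) returns JSON whose `results` array holds the citing works, each with its own `cited_by_count` — the same number shown next to each entry below.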

Requested Article:

Scaling Robot Learning with Semantically Imagined Experience
Tianhe Yu, Ted Xiao, Jonathan Tompson, et al.
(2023)
Open Access | Times Cited: 29

Showing 1-25 of 29 citing articles:

DALL-E-Bot: Introducing Web-Scale Diffusion Models to Robotics
Ivan Kapelyukh, Vitalis Vosylius, Edward Johns
IEEE Robotics and Automation Letters (2023) Vol. 8, Iss. 7, pp. 3956-3963
Open Access | Times Cited: 39

RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking
Homanga Bharadhwaj, Jay Vakil, Mohit Sharma, et al.
(2024), pp. 4788-4795
Open Access | Times Cited: 15

Language Models as Zero-Shot Trajectory Generators
Teyun Kwon, Norman Di Palo, Edward Johns
IEEE Robotics and Automation Letters (2024) Vol. 9, Iss. 7, pp. 6728-6735
Closed Access | Times Cited: 6

Grasp-Anything: Large-scale Grasp Dataset from Foundation Models
An Dinh Vuong, Minh‐Ngoc Vu, Huy Quoc Le, et al.
(2024), pp. 14030-14037
Open Access | Times Cited: 5

EquivAct: SIM(3)-Equivariant Visuomotor Policies beyond Rigid Object Manipulation
Jingyun Yang, Congyue Deng, Jimmy Wu, et al.
(2024), pp. 9249-9255
Open Access | Times Cited: 5

Real-world robot applications of foundation models: a review
Kento Kawaharazuka, Tatsuya Matsushima, Andrew Gambardella, et al.
Advanced Robotics (2024) Vol. 38, Iss. 18, pp. 1232-1254
Open Access | Times Cited: 5

SpawnNet: Learning Generalizable Visuomotor Skills from Pre-trained Network
Xingyu Lin, John So, Sashwat Mahalingam, et al.
(2024), pp. 4781-4787
Closed Access | Times Cited: 4

What’s the Move? Hybrid Imitation Learning via Salient Points
Priya Sundaresan, Hengyuan Hu, Quan Vuong, et al.
(2025)
Open Access

Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models
Gabriel Sarch, Yue Wu, Michael J. Tarr, et al.
(2023)
Open Access | Times Cited: 7

Learning Playing Piano with Bionic-Constrained Diffusion Policy for Anthropomorphic Hand
Yiming Yang, Zechang Wang, Dengpeng Xing, et al.
Cyborg and Bionic Systems (2024) Vol. 5
Open Access | Times Cited: 2

Learning Keypoints for Robotic Cloth Manipulation Using Synthetic Data
Thomas Lips, Victor-Louis De Gusseme, Francis wyffels
IEEE Robotics and Automation Letters (2024) Vol. 9, Iss. 7, pp. 6528-6535
Open Access | Times Cited: 2

Everyday Objects Rearrangement in a Human-Like Manner via Robotic Imagination and Learning From Demonstration
Alberto Méndez, Adrián Prados, Elisabeth Menéndez, et al.
IEEE Access (2024) Vol. 12, pp. 92098-92119
Open Access | Times Cited: 2

Unlocking Robotic Autonomy: A Survey on the Applications of Foundation Models
Dae-Sung Jang, Doo-Hyun Cho, Woo-Cheol Lee, et al.
International Journal of Control Automation and Systems (2024) Vol. 22, Iss. 8, pp. 2341-2384
Closed Access | Times Cited: 2

Dream2Real: Zero-Shot 3D Object Rearrangement with Vision-Language Models
Ivan Kapelyukh, Yifei Ren, Ignacio Alzugaray, et al.
(2024), pp. 4796-4803
Open Access | Times Cited: 2

Crossway Diffusion: Improving Diffusion-based Visuomotor Policy via Self-supervised Learning
Xiang Li, Varun Belagali, Jinghuan Shang, et al.
(2024), pp. 16841-16849
Open Access | Times Cited: 2

SUGAR: Pre-training 3D Visual Representations for Robotics
Shizhe Chen, Ricardo García, Ivan Laptev, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 18049-18060
Open Access | Times Cited: 2

CyberDemo: Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation
Jun Wang, Yuzhe Qin, Kaiming Kuang, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 17952-17963
Closed Access | Times Cited: 2

SPRINT: Scalable Policy Pre-Training via Language Instruction Relabeling
Jesse Zhang, Karl Pertsch, Jiahui Zhang, et al.
(2024), pp. 9168-9175
Open Access | Times Cited: 1

Closing the Visual Sim-to-Real Gap with Object-Composable NeRFs
Nikhil Mishra, Maximilian Sieb, Pieter Abbeel, et al.
(2024), pp. 11202-11208
Open Access | Times Cited: 1

Towards Generalizable Zero-Shot Manipulation via Translating Human Interaction Plans
Homanga Bharadhwaj, Abhinav Gupta, Vikash Kumar, et al.
(2024), pp. 6904-6911
Open Access | Times Cited: 1

Seeing the Unseen: Visual Common Sense for Semantic Placement
Ram Ramrakhya, Aniruddha Kembhavi, Dhruv Batra, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 16273-16283
Closed Access | Times Cited: 1

Diffusion Reward: Learning Rewards via Conditional Video Diffusion
Tao Huang, Guangqi Jiang, Yanjie Ze, et al.
Lecture notes in computer science (2024), pp. 478-495
Closed Access

Learning Instruction-Guided Manipulation Affordance via Large Models for Embodied Robotic Tasks
Dayou Li, Chenkun Zhao, Shuo Yang, et al.
International Conference on Advanced Robotics and Mechatronics (ICARM) (2024), pp. 662-667
Open Access

Page 1 - Next Page
