OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.

Requested Article:

RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking
Homanga Bharadhwaj, Jay Vakil, Mohit Sharma, et al.
(2024), pp. 4788-4795
Open Access | Times Cited: 15

Showing 15 citing articles:

RH20T: A Comprehensive Robotic Dataset for Learning Diverse Skills in One-Shot
Haoshu Fang, Hongjie Fang, Zhenyu Tang, et al.
(2024), pp. 653-660
Open Access | Times Cited: 11

CoPAL: Corrective Planning of Robot Actions with Large Language Models
Frank Joublin, Antonello Ceravola, А. В. Смирнов, et al.
(2024), pp. 8664-8670
Open Access | Times Cited: 5

Real-world robot applications of foundation models: a review
Kento Kawaharazuka, Tatsuya Matsushima, Andrew Gambardella, et al.
Advanced Robotics (2024) Vol. 38, Iss. 18, pp. 1232-1254
Open Access | Times Cited: 5

What’s the Move? Hybrid Imitation Learning via Salient Points
Priya Sundaresan, Hengyuan Hu, Quan Vuong, et al.
(2025)
Open Access

Exploring Embodied Multimodal Large Models: Development, datasets, and future directions
Shoubin Chen, Zehao Wu, Kai Zhang, et al.
Information Fusion (2025), pp. 103198-103198
Closed Access

Envisioning Recommendations on an LLM-Based Agent Platform
Jizhi Zhang, Keqin Bao, Wenjie Wang, et al.
Communications of the ACM (2025)
Closed Access

Everyday Objects Rearrangement in a Human-Like Manner via Robotic Imagination and Learning From Demonstration
Alberto Méndez, Adrián Prados, Elisabeth Menéndez, et al.
IEEE Access (2024) Vol. 12, pp. 92098-92119
Open Access | Times Cited: 2

Open X-Embodiment: Robotic Learning Datasets and RT-X Models
A. O'Neill, Abdul Rehman, Abhiram Maddukuri, et al.
(2024), pp. 6892-6903
Closed Access | Times Cited: 2

SUGAR: Pre-training 3D Visual Representations for Robotics
Shizhe Chen, Ricardo García, Ivan Laptev, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 18049-18060
Open Access | Times Cited: 2

CyberDemo: Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation
Jun Wang, Yuzhe Qin, Kaiming Kuang, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 17952-17963
Closed Access | Times Cited: 2

Track2Act: Predicting Point Tracks from Internet Videos Enables Generalizable Robot Manipulation
Homanga Bharadhwaj, Roozbeh Mottaghi, Abhinav Gupta, et al.
Lecture notes in computer science (2024), pp. 306-324
Closed Access | Times Cited: 2

Kinematic-aware Prompting for Generalizable Articulated Object Manipulation with LLMs
Wenke Xia, Dong Wang, Xincheng Pang, et al.
(2024), pp. 2073-2080
Open Access | Times Cited: 1

Towards Generalizable Zero-Shot Manipulation via Translating Human Interaction Plans
Homanga Bharadhwaj, Abhinav Gupta, Vikash Kumar, et al.
(2024), pp. 6904-6911
Open Access | Times Cited: 1

QUAR-VLA: Vision-Language-Action Model for Quadruped Robots
Pengxiang Ding, Han Zhao, Wenjie Zhang, et al.
Lecture notes in computer science (2024), pp. 352-367
Closed Access

Semantically controllable augmentations for generalizable robot learning
Zoey Qiuyu Chen, Zhao Mandi, Homanga Bharadhwaj, et al.
The International Journal of Robotics Research (2024)
Open Access
