OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. The Open Access links lead to the article's "best Open Access location", and clicking a citation count opens this listing for that article. Basic pagination options appear at the bottom of the page.
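For readers who want the underlying data, a listing like this one can be reproduced with the public OpenAlex works endpoint and its `cites:` filter. A minimal sketch follows; the work ID `W3014248540` is a placeholder for illustration, not verified as the ID of the requested article:

```python
# Sketch: building an OpenAlex API query for works that cite a given work.
# Endpoint and filter syntax per the public OpenAlex REST API; the work ID
# used below is a placeholder, not a verified identifier.
from urllib.parse import urlencode

OPENALEX_WORKS = "https://api.openalex.org/works"

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build the URL listing works that cite `work_id`, `per_page` at a time."""
    params = {
        "filter": f"cites:{work_id}",   # works whose references include work_id
        "page": page,                    # 1-based page index
        "per-page": per_page,            # results per page (max 200)
        "sort": "cited_by_count:desc",   # most-cited first
    }
    return f"{OPENALEX_WORKS}?{urlencode(params)}"

# Page 2 at 25 per page corresponds to results 26-50 of the listing:
url = citing_works_url("W3014248540", page=2)
```

Fetching that URL (e.g. with `requests.get(url).json()`) returns a JSON payload whose `results` array holds the citing works, including each work's title, authors, venue, and open-access location.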

Requested Article:

Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks
Michelle A. Lee, Yuke Zhu, Peter Zachares, et al.
IEEE Transactions on Robotics (2020) Vol. 36, Iss. 3, pp. 582-596
Open Access | Times Cited: 161

Showing 26-50 of 161 citing articles:

Assembly strategy for inclined-holes based on vision and force
Lu Yang, Chong Xie, Hongchao Yang, et al.
Engineering Research Express (2025) Vol. 7, Iss. 1, Art. 015233
Closed Access

SoftGrasp: Adaptive grasping for dexterous hand based on multimodal imitation learning
Yihong Li, Ce Guo, Junkai Ren, et al.
Biomimetic Intelligence and Robotics (2025), Art. 100217
Open Access

A Geometric Framework for Quasi-Static Manipulation of a network of elastically connected rigid bodies
Domenico Campolo, Franco Cardin
Applied Mathematical Modelling (2025), Art. 116003
Closed Access

Learning Vision-Guided Quadrupedal Locomotion End-to-End with Cross-Modal Transformers
Ruihan Yang, Minghao Zhang, Nicklas Hansen, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 24

Generalization of orientation trajectories and force–torque profiles for learning human assembly skill
Boyang Ti, Yongsheng Gao, Ming Shi, et al.
Robotics and Computer-Integrated Manufacturing (2022) Vol. 76, Art. 102325
Closed Access | Times Cited: 18

Vision-based interaction force estimation for robot grip motion without tactile/force sensor
Dae-Kwan Ko, Kang-Won Lee, Dong Han Lee, et al.
Expert Systems with Applications (2022) Vol. 211, Art. 118441
Closed Access | Times Cited: 18

IndustReal: Transferring Contact-Rich Assembly Tasks from Simulation to Reality
Bingjie Tang, Michael Lin, Iretiayo Akinola, et al.
(2023)
Open Access | Times Cited: 10

Intra- and Inter-Modal Curriculum for Multimodal Learning
Yuwei Zhou, Xin Wang, Hong Chen, et al.
(2023), pp. 3724-3735
Open Access | Times Cited: 9

Offline Reinforcement Learning of Robotic Control Using Deep Kinematics and Dynamics
Xiang Li, Weiwei Shang, Shuang Cong
IEEE/ASME Transactions on Mechatronics (2024) Vol. 29, Iss. 4, pp. 2428-2439
Closed Access | Times Cited: 3

Multimodal information bottleneck for deep reinforcement learning with multiple sensors
Bang You, Huaping Liu
Neural Networks (2024) Vol. 176, Art. 106347
Closed Access | Times Cited: 3

In-Hand Object Pose Tracking via Contact Feedback and GPU-Accelerated Robotic Simulation
Jacky Liang, Ankur Handa, Karl Van Wyk, et al.
(2020)
Open Access | Times Cited: 25

Bottom-Up Skill Discovery From Unsegmented Demonstrations for Long-Horizon Robot Manipulation
Yifeng Zhu, Peter Stone, Yuke Zhu
IEEE Robotics and Automation Letters (2022) Vol. 7, Iss. 2, pp. 4126-4133
Open Access | Times Cited: 16

A residual reinforcement learning method for robotic assembly using visual and force information
Zhuangzhuang Zhang, Yizhao Wang, Zhinan Zhang, et al.
Journal of Manufacturing Systems (2023) Vol. 72, pp. 245-262
Closed Access | Times Cited: 8

Detect, Reject, Correct: Crossmodal Compensation of Corrupted Sensors
Michelle A. Lee, Matthew Tan, Yuke Zhu, et al.
(2021)
Open Access | Times Cited: 20

DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations
Yiwei Lyu, Paul Pu Liang, Zihao Deng, et al.
(2022), pp. 455-467
Open Access | Times Cited: 13

Learning Goal-Oriented Non-Prehensile Pushing in Cluttered Scenes
Nils Dengler, David Großklaus, Maren Bennewitz
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2022)
Open Access | Times Cited: 13

Vision-force-fused curriculum learning for robotic contact-rich assembly tasks
Piaopiao Jin, Yinjie Lin, Yaoxian Song, et al.
Frontiers in Neurorobotics (2023) Vol. 17
Open Access | Times Cited: 7

Integrating a Pipette Into a Robot Manipulator With Uncalibrated Vision and TCP for Liquid Handling
Junbo Zhang, Weiwei Wan, Nobuyuki Tanaka, et al.
IEEE Transactions on Automation Science and Engineering (2023) Vol. 21, Iss. 4, pp. 5503-5522
Closed Access | Times Cited: 7

Learning Robust Skills for Tightly Coordinated Arms in Contact-Rich Tasks
Yaowei Fan, Xinge Li, Kaihang Zhang, et al.
IEEE Robotics and Automation Letters (2024) Vol. 9, Iss. 3, pp. 2973-2980
Closed Access | Times Cited: 2

On the Role of the Action Space in Robot Manipulation Learning and Sim-to-Real Transfer
Elie Aljalbout, F. Frank, Maximilian Karl, et al.
IEEE Robotics and Automation Letters (2024) Vol. 9, Iss. 6, pp. 5895-5902
Open Access | Times Cited: 2

Multimodal fusion-powered English speaking robot
Rong Pan
Frontiers in Neurorobotics (2024) Vol. 18
Open Access | Times Cited: 2

Learning Precise 3D Manipulation from Multiple Uncalibrated Cameras
Iretiayo Akinola, Jacob Varley, Dmitry Kalashnikov
(2020), pp. 4616-4622
Open Access | Times Cited: 18

Excavation Reinforcement Learning Using Geometric Representation
Qingkai Lu, Yifan Zhu, Liangjun Zhang
IEEE Robotics and Automation Letters (2022) Vol. 7, Iss. 2, pp. 4472-4479
Open Access | Times Cited: 11
