Weihang Guo

I am a first-year Ph.D. student at Rice University, advised by Prof. Lydia Kavraki. I am also honored to collaborate with Prof. Zak Kingston and Prof. Kaiyu Hang.

My research interests include robot safety, high-performance robotic systems, and multi-robot planning.

I am a developer of The Open Motion Planning Library (OMPL).


Honors and Awards

Publications

Python Bindings for a Large C++ Robotics Library: The Case of OMPL

Abstract: Python bindings are a critical bridge between high-performance C++ libraries and the flexibility of Python, enabling rapid prototyping, reproducible experiments, and integration with simulation and learning frameworks in robotics research. Yet, generating bindings for large codebases is a tedious process that creates a heavy burden for a small group of maintainers. In this work, we investigate the use of Large Language Models (LLMs) to assist in generating nanobind wrappers, with human experts kept in the loop. Our workflow mirrors the structure of the C++ codebase, scaffolds empty wrapper files, and employs LLMs to fill in binding definitions. Experts then review and refine the generated code to ensure correctness, compatibility, and performance. Through a case study on a large C++ motion planning library, we document common failure modes, including mismanaging shared pointers, overloads, and trampolines, and show how in-context examples and careful prompt design improve reliability. Experiments demonstrate that the resulting bindings achieve runtime performance comparable to legacy solutions. Beyond this case study, our results provide general lessons for applying LLMs to binding generation in large-scale C++ projects.

@misc{guo2026pythonbindingslargec,
  title={Python Bindings for a Large C++ Robotics Library: The Case of OMPL}, 
  author={Weihang Guo and Theodoros Tyrovouzis and Lydia E. Kavraki},
  year={2026},
  eprint={2603.04668},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2603.04668}, 
}
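The scaffolding step the abstract describes, mirroring the C++ tree with one empty wrapper file per header, can be pictured with a short sketch. The layout and the `scaffold` helper below are illustrative assumptions, not the paper's actual tooling:

```python
from pathlib import Path

def scaffold(src_root, out_root):
    """Create one wrapper stub per C++ header, mirroring the source tree.

    Illustrative sketch only: file layout and stub contents are made up,
    not the workflow's real output.
    """
    src_root, out_root = Path(src_root), Path(out_root)
    created = []
    for header in sorted(src_root.rglob("*.h")):
        rel = header.relative_to(src_root)
        stub = (out_root / rel).with_suffix(".cpp")
        stub.parent.mkdir(parents=True, exist_ok=True)
        # Each stub starts empty except for a note and the nanobind
        # include; an LLM fills in the binding definitions, which a
        # human expert then reviews.
        stub.write_text(
            f"// Bindings for {rel} (LLM-generated, expert-reviewed).\n"
            "#include <nanobind/nanobind.h>\n"
        )
        created.append(stub)
    return created
```

Keeping a one-to-one mapping between headers and wrapper files makes each LLM generation task small and keeps expert review localized.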
Efficient Multi-Robot Motion Planning for Manifold-Constrained Manipulators by Randomized Scheduling and Informed Path Generation

Abstract: Multi-robot motion planning for high-degree-of-freedom manipulators in shared, constrained, and narrow spaces is a complex problem, essential for many scenarios such as construction and surgery. Traditional coupled methods plan directly in the composite configuration space, which scales poorly; decoupled methods plan separately for each robot but lack completeness. Hybrid methods, which compose solutions from paths obtained for individual robots, must enumerate many paths before they can find valid composite solutions. This paper introduces Scheduling to Avoid Collisions (StAC), a hybrid approach that composes individual robots' paths more effectively by scheduling (adding stops and coordination motions along all paths) and generates paths that are likely to be feasible by using bidirectional feedback between the scheduler and the motion planner for informed sampling. On challenging manipulation problems, StAC uses 10 to 100 times fewer paths from the low-level planner than state-of-the-art hybrid baselines.

@ARTICLE{guo2024efficient,
  author={Guo, Weihang and Kingston, Zachary and Hang, Kaiyu and Kavraki, Lydia E.},
  journal={IEEE Robotics and Automation Letters}, 
  title={Efficient Multi-Robot Motion Planning for Manifold-Constrained Manipulators by Randomized Scheduling and Informed Path Generation}, 
  year={2026},
  volume={11},
  number={4},
  pages={4385-4392},
  keywords={Robot kinematics;Collision avoidance;Schedules;Manipulators;Manifolds;System recovery;Multi-robot systems;End effectors;Trajectory;Timing;constrained motion planning;motion and path planning;collision avoidance},
  doi={10.1109/LRA.2026.3662639}
}
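The scheduling idea, adding stops so robots avoid each other along fixed paths, can be sketched on a toy discrete example. This is a minimal illustration with made-up inputs (cells as integers, priority by insertion order), not StAC's actual algorithm:

```python
def schedule(paths):
    """Insert waits so no two robots occupy a cell at the same step.

    `paths` maps robot name -> list of cells; earlier entries have
    priority. Assumes the priority order admits a solution (no deadlock
    handling). Illustrative sketch only.
    """
    scheduled = {}
    for robot, path in paths.items():
        timed = [path[0]]
        i = 1
        while i < len(path):
            t = len(timed)      # step at which the next move would land
            nxt = path[i]
            # Wait in place if a higher-priority robot occupies the
            # target cell at step t, or would swap cells with us.
            conflict = any(
                other[min(t, len(other) - 1)] == nxt
                or (other[min(t, len(other) - 1)] == timed[-1]
                    and other[min(t - 1, len(other) - 1)] == nxt)
                for other in scheduled.values()
            )
            if conflict:
                timed.append(timed[-1])   # stop: stay put one step
            else:
                timed.append(nxt)
                i += 1
        scheduled[robot] = timed
    return scheduled
```

Here the lower-priority robot simply pauses until the contested cell is free; the paper's contribution lies in coordinating this scheduling with informed path generation rather than fixing paths up front.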
CaStL: Constraints as Specifications Through LLM Translation for Long-Horizon Task and Motion Planning

Abstract: Large Language Models (LLMs) have demonstrated remarkable ability in long-horizon Task and Motion Planning (TAMP) by translating clear and straightforward natural language problems into formal specifications such as the Planning Domain Definition Language (PDDL). However, real-world problems are often ambiguous and involve many complex constraints. In this paper, we introduce Constraints as Specifications through LLMs (CaStL), a framework that identifies constraints such as goal conditions, action ordering, and action blocking from natural language in multiple stages. CaStL translates these constraints into PDDL and Python scripts, which are solved using a custom PDDL solver. Tested across three PDDL domains, CaStL significantly improves constraint handling and planning success rates from natural-language specifications in complex scenarios.

@INPROCEEDINGS{guo2025castl,
  author={Guo, Weihang and Kingston, Zachary and Kavraki, Lydia E.},
  booktitle={2025 IEEE International Conference on Robotics and Automation (ICRA)}, 
  title={CaStL: Constraints as Specifications Through LLM Translation for Long-Horizon Task and Motion Planning}, 
  year={2025},
  pages={11957-11964},
  keywords={Constraint handling;Translation;Uncertainty;Large language models;Planning;Formal specifications;Robotics and automation},
  doi={10.1109/ICRA55743.2025.11127555}
}
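The constraint categories CaStL extracts (ordering, blocking) can be pictured as simple checks over a candidate action sequence. The encoding below is a hypothetical sketch, not CaStL's actual representation, which compiles constraints into PDDL and Python scripts:

```python
def satisfies(plan, ordering=(), blocked=()):
    """Check a candidate plan against two constraint types.

    plan     -- list of action names, in execution order
    ordering -- pairs (a, b): a's first occurrence must precede b's
    blocked  -- action names that must never appear
    Illustrative only; names and encoding are made up.
    """
    if any(action in blocked for action in plan):
        return False
    for a, b in ordering:
        if a in plan and b in plan and plan.index(a) > plan.index(b):
            return False
    return True
```

In a framework like CaStL, such checks would be enforced by the planner itself rather than applied after the fact, which is what makes the PDDL translation step necessary.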

Preprints

Using VLM Reasoning to Constrain Task and Motion Planning

Abstract: In task and motion planning, high-level task planning is done over an abstraction of the world to enable efficient search in long-horizon robotics problems. However, the feasibility of these task-level plans relies on the downward refinability of the abstraction into continuous motion. When a domain's refinability is poor, task-level plans that appear valid may ultimately fail during motion planning, requiring replanning and resulting in slower overall performance. Prior works mitigate this by encoding refinement issues as constraints to prune infeasible task plans. However, these approaches only add constraints upon refinement failure, expending significant search effort on infeasible branches. We propose VIZ-COAST, a method of leveraging the common-sense spatial reasoning of large pretrained Vision-Language Models to identify issues with downward refinement a priori, bypassing the need to fix these failures during planning. Experiments on two challenging TAMP domains show that our approach is able to extract plausible constraints from images and domain descriptions, drastically reducing planning times and, in some cases, eliminating downward refinement failures altogether, generalizing to a diverse range of instances from the broader domain.

@misc{yan2025using,
  title={Using VLM Reasoning to Constrain Task and Motion Planning}, 
  author={Muyang Yan and Miras Mengdibayev and Ardon Floros and Weihang Guo and Lydia E. Kavraki and Zachary Kingston},
  year={2025},
  eprint={2510.25548},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2510.25548}, 
}
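One simple form the a-priori constraints could take is a set of action pairs known to be unrefinable. As a toy sketch (action names and the pairwise encoding are invented for illustration, not VIZ-COAST's actual constraint format), candidate task plans containing such a pair can be discarded before any motion planning is attempted:

```python
def prune(candidate_plans, infeasible_pairs):
    """Discard plans containing any consecutive action pair known to be
    unrefinable, so no motion-planning effort is spent on them.
    Illustrative sketch only."""
    def ok(plan):
        return all((a, b) not in infeasible_pairs
                   for a, b in zip(plan, plan[1:]))
    return [plan for plan in candidate_plans if ok(plan)]
```

Pruning up front, rather than on refinement failure, is exactly the shift the abstract describes: the search never expands branches the VLM has already flagged as infeasible.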

Invited Talks

Reviewing