Tianshuang (Ethan) Qiu

I am currently a Research Engineer at General Robotics, where I am working on general AI models for robot manipulation.
I graduated with Master's and Bachelor's degrees in EECS from the University of California, Berkeley, where I was a researcher at BAIR, advised by Ken Goldberg.
In my spare time, I like to go bird watching.

Email  /  CV  /  Github  /  LinkedIn


Projects

Beyond the Black Box: How Agentic Architectures Unlock Truly General Robotics
General Robotics, Redmond, WA
Blog Post / Paper

A framework for general-purpose robot intelligence through agentic architectures. Rather than relying on monolithic end-to-end models, the system maintains a repository of modular skills unified under consistent Python interfaces and orchestrated by LLM-based agents. Demonstrated across manipulators, humanoids, quadrupeds, and drones, with most behaviors emerging without retraining.

Manifold: Learning How to Be Lovable
Exploratorium, San Francisco, CA
Official Video / Artist Interview

Manifold is an interactive installation at the Exploratorium featuring a robotic hand that learns how to be lovable. The robot navigates its space using motion-sensing cameras, learning in real time how different gestures influence visitors' emotional responses. The hand can bend bidirectionally, performing movements impossible for human hands, and continuously adapts its behavior through machine learning.

Research

Omni-Scan: Creating Visually-Accurate Digital Twin Object Models Using a Bimanual Robot with Handoff and Gaussian Splat Merging
Tianshuang Qiu, Zehan Ma, Karim El-Refai, Hiya Shah, Justin Kerr, Chung Min Kim, Ken Goldberg
IROS, 2025
Project Page / arXiv

Omni-Scan is a pipeline for producing high-quality 3D Gaussian Splat models using a bimanual robot that grasps an object with one gripper and rotates it with respect to a stationary camera. The 3DGS training pipeline is modified to support concatenated datasets with gripper occlusion, producing an omni-directional model of the object. Omni-Scan can perform part defect inspection, identifying visual and geometric defects in a range of industrial and household objects.

Made-ya-look: Using Diffusion Models to Modify Gaze
Alonso Martinez, Tianshuang Qiu, Alexei Efros

Project Page

Made-ya-look is a project that uses diffusion models to subtly edit images so that they direct the viewer's gaze toward a specific region.

Blox-Net: Generative Design-for-Robot-Assembly using VLM Supervision, Physics Simulation, and A Robot with Reset
Andrew Goldberg, Kavish Kondap, Tianshuang Qiu, Zehan Ma, Letian Fu, Justin Kerr, Huang Huang, Kaiyuan Chen, Kuan Fang, Ken Goldberg
ICRA, 2025
Project Page / arXiv

Blox-Net iteratively prompts a VLM with a physics simulation in the loop to generate block-assembly designs from a text prompt, then assembles them with a real robot. The robot can reset the environment to automatically retry failed attempts.

Breathless: An 8-hour Performance Contrasting Human and Robot Expressiveness
Catie Cuan, Tianshuang Qiu, Shreya Ganti, Ken Goldberg
ISRR, 2024
Project Page / arXiv

Breathless is a performance that pairs a human dancer (Cuan) with an industrial robot arm for an eight-hour dance that unfolds over the timespan of an American workday. The resulting performance contrasts the expressivity of the human body with the precision of robot machinery.

BOMP: Bin Optimized Motion Planning
Zachary Tam, Karthik Dharmarajan, Tianshuang Qiu, Yahav Avigal, Jeffrey Ichnowski, Ken Goldberg
IROS, 2024
Project Page / arXiv

A system that learns from synthetic depth images to rapidly plan collision-free trajectories. It continues the GOMP line of work, extending it to highly complex environments such as bin picking, where the obstacle configuration is not known in advance and an obstacle may itself be the next object to be picked.

Miscellanea

Course Projects: Computer Vision

Source code taken from Jon Barron's website.