I am currently a Research Engineer at General Robotics, where I work on general AI models for robot manipulation.
I graduated with a Master's and Bachelor's degree in EECS from the University of California, Berkeley.
I was a researcher at BAIR, advised by Ken Goldberg.
I love working at the intersection of technology and art. I was on the team at the Exploratorium that built Manifold: Learning How to Be Lovable.
Omni-Scan is a pipeline for producing high-quality 3D Gaussian Splat models using a bimanual robot that grasps an object with one gripper and rotates it in front of a stationary camera. The 3DGS training pipeline is modified to support concatenated datasets with gripper occlusion, producing an omni-directional model of the object. Omni-Scan can perform part defect inspection, identifying visual and geometric defects in industrial and household objects.
Made-ya-look is a project that uses diffusion models to modify human gaze in images. The model subtly manipulates an image to direct the viewer's attention toward a specific region.
Blox-Net iteratively prompts a VLM with simulation in the loop to generate designs from a text prompt using a set of blocks, then assembles them with a real robot. The robot can reset the environment to automatically retry failed attempts.
Breathless is a performance that pairs a human dancer (Cuan) with an industrial robot arm for an eight-hour dance that unfolds over the timespan of an American workday. The resulting performance contrasts the expressivity of the human body with the precision of robot machinery.
A system that learns from synthetic depth images to rapidly plan collision-free trajectories. It continues the GOMP line of work, extending it to highly complex environments such as bin picking, where the obstacle configuration is not known in advance and an obstacle may itself be the next object to be picked.