Cyber-Physical Platform for Distributed Fabrication

In collaboration with August Lehrecke and Xiliu Yang

Published in SCF ’22: Proceedings of the 7th Annual ACM Symposium on Computational Fabrication

full paper here

The M.Sc. thesis presents a collaborative multi-robot strategy for the distributed fabrication of Spatial Lacing - a novel system of lightweight, multi-topology fiber structures enabled by parallel manipulation of filament materials.

The parallelized fabrication logic, which takes inspiration from textile production methods, is inherently different from existing construction techniques and poses new challenges for fabrication. The research proposes a distributed cyber-physical platform of mobile robots that can overcome the size and flexibility limitations of industrial machinery.

A hybrid behavior-based control schema is developed in which robotic behaviors are abstracted from the traditional textile craft of bobbin lace-making and adapted for robotic execution through coordinated, collaborative action sequences, creating a new robotic action space.

Parallel task execution, real-time sensor feedback, and the coordination of multiple distributed agents are achieved through a multi-threaded software architecture.
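The full source is linked below; as a minimal illustration of this coordination pattern (not the project's actual implementation), the sketch below runs each robot as a worker thread that pulls behavior commands from a thread-safe queue and reports feedback to a supervisor. All names (robot_agent, the behavior strings) are hypothetical.

```python
import threading
import queue
import time

# Illustrative sketch of the multi-threaded coordination pattern:
# a supervisor dispatches behavior commands to per-robot worker
# threads through thread-safe queues and collects their feedback.

feedback = queue.Queue()  # robots -> supervisor (sensor events)

def robot_agent(name: str, commands: queue.Queue) -> None:
    """Worker thread for one mobile robot."""
    while True:
        behavior = commands.get()          # block until a command arrives
        if behavior is None:               # sentinel: shut down cleanly
            break
        time.sleep(0.1)                    # stand-in for executing the behavior
        feedback.put((name, behavior, "done"))

# One command queue (and one thread) per distributed agent.
queues = {n: queue.Queue() for n in ("robot_a", "robot_b")}
threads = [threading.Thread(target=robot_agent, args=(n, q), daemon=True)
           for n, q in queues.items()]
for t in threads:
    t.start()

# Supervisor: issue one coordinated action sequence in parallel.
queues["robot_a"].put("tension_filament")
queues["robot_b"].put("cross_bobbins")
for _ in range(2):
    print(feedback.get())                  # wait for both agents to report

for q in queues.values():                  # signal shutdown
    q.put(None)
for t in threads:
    t.join()
```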

github source


Computer Vision for Calibration and Adaptive Pick and Place

In collaboration with Nils Opgenorth

Developed as part of the ITECH ’21 Studio “Performative Morphologies”

In order to integrate a new building system within existing timber manufacturing facilities, a transportable robotic fabrication platform was investigated. A key challenge came from the need to fabricate panels up to 3 meters in length, which is beyond the reach of a stationary 6-axis robot and therefore requires a unique localization process.

We developed a computer vision calibration process based on the eye-in-hand calibration method, which provides a transformation matrix between coordinates in the camera frame and the robot frame.
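OpenCV ships a solver for exactly this step; the sketch below shows how such a calibration might be computed with cv2.calibrateHandEye, assuming recorded gripper poses and marker detections. The project's actual tooling is not documented here, so treat the variable names and pose lists as placeholders to be filled from recorded calibration data.

```python
import cv2
import numpy as np

# Hypothetical eye-in-hand calibration sketch with OpenCV.
# For each of N robot poses we record:
#   - the gripper pose in the robot base frame (from the controller)
#   - the calibration target pose in the camera frame (from marker detection)
R_gripper2base, t_gripper2base = [], []   # fill from recorded robot poses
R_target2cam, t_target2cam = [], []       # fill from recorded camera detections

# ... populate the four lists from the calibration run ...

# Solve AX = XB for the camera-to-gripper transform.
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base,
    R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI,
)

# Assemble the 4x4 homogeneous transformation matrix.
T_cam2gripper = np.eye(4)
T_cam2gripper[:3, :3] = R_cam2gripper
T_cam2gripper[:3, 3] = t_cam2gripper.ravel()
print(T_cam2gripper)
```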

First, fiducial markers are scanned repeatedly and the detected marker positions are averaged to reduce measurement error. The averaged coordinates are then translated from the camera frame to the robot frame using the homogeneous transformation matrix obtained during the initial calibration.
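In numpy terms, the averaging and frame change reduce to a mean over the repeated detections followed by one matrix multiplication in homogeneous coordinates; the values below are made up for illustration.

```python
import numpy as np

# Illustrative sketch: average repeated marker detections, then map
# the result from the camera frame into the robot frame.
# `detections` holds one 3D position per scan of the same marker.
detections = np.array([
    [0.412, 0.103, 0.550],   # scan 1 (metres, camera frame)
    [0.414, 0.101, 0.548],   # scan 2
    [0.411, 0.104, 0.551],   # scan 3
])
p_cam = detections.mean(axis=0)            # averaged marker position

# T_cam2robot: 4x4 homogeneous matrix from the initial calibration
# (identity used here as a stand-in for the calibrated transform).
T_cam2robot = np.eye(4)

p_robot = T_cam2robot @ np.append(p_cam, 1.0)   # homogeneous coordinates
print(p_robot[:3])                              # position in the robot frame
```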

The difference between the expected and measured positions of the fiducial markers is computed and applied to the digital model to correct the target positions in the KRL code. The updated positions are sent to the robot in real time as global variables via KukaVarProxy.
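One common way to write such global variables from Python is the py_openshowvar client for KukaVarProxy; the snippet below is a sketch of that pattern, not the project's actual client code. The IP address, variable name, and frame values are placeholders, and CORR_POS is assumed to be a GLOBAL FRAME declared in the KRL program.

```python
# Hypothetical sketch of streaming corrected positions to the KUKA
# controller through KukaVarProxy, using the py_openshowvar client.
from py_openshowvar import openshowvar

client = openshowvar("192.168.1.50", 7000)   # controller IP, KukaVarProxy port
if not client.can_connect:
    raise ConnectionError("KukaVarProxy not reachable")

# Write a corrected frame into a global variable declared in the KRL
# program; the running KRL code reads CORR_POS before each motion.
client.write("CORR_POS", "{X 412.3, Y 101.2, Z 550.1, A 0.0, B 90.0, C 0.0}")
print(client.read("CORR_POS"))               # read back to verify
```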


Human Multi-robot Collaboration for Timber Assembly

In collaboration with Kiril Bejoulev, Takwa ElGammal, Pei-Ye Huang, Xiliu Yang, and Max Zorn

Developed as part of the ITECH ’21 Studio “Performative Morphologies”

This project explores two options for the assembly of timber objects when joining must occur from the blind side.

In this demonstration, a timber web must be nailed to a timber plate from the plate's blind side, which requires a high level of accuracy to align the parts and to ensure the nails are placed in the correct locations. One option uses two 6-axis industrial robot arms to perform the blind assembly task; the other explores a human-robot collaborative process using AR.

In the multi-robot demonstration, one robot holds the timber web in its predesignated place while the second robot performs the nailing operation from the blind side of the plate. An accurately aligned digital model enables the correct placement of both the web and the nails.

In the human-robot example, one robot holds the timber web in place while a worker, assisted by AR, identifies the correct joinery locations and attaches the web from the blind side.


Q-Learning for Generalized A to B Path Planning

Coursework for Computational Design and Digital Fabrication

This project explores reinforcement learning as a method for robotic path planning that does not require explicitly programming each robotic movement. This evolved from the need to pick and place many objects with varying lengths, each requiring significant manual effort for path planning.

The goal was to learn an optimal policy capable of generalizing robot movement across the unique web sizes. Q-Learning was selected because it allows an agent (the 6-axis robot arm) to learn from interactions with the virtual environment, estimating the value of each state-action pair in order to determine the best action to take without significant user input or pre-existing data.
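At its core this is the standard tabular Q-learning update; the sketch below shows that update together with an epsilon-greedy policy. The state/action discretization used for the arm in the project is not reproduced here, so the table sizes and hyperparameters are placeholders.

```python
import numpy as np

# Minimal tabular Q-learning sketch (illustrative; the project's actual
# state and action encoding for the robot arm is not reproduced here).
n_states, n_actions = 100, 6          # hypothetical discretization
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1

def q_update(s: int, a: int, reward: float, s_next: int) -> None:
    """One Q-learning step: Q(s,a) += alpha * (TD target - Q(s,a))."""
    td_target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

def choose_action(s: int) -> int:
    """Epsilon-greedy policy over the current Q estimates."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)   # explore
    return int(Q[s].argmax())                 # exploit
```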

The process is implemented as a custom environment for OpenAI Gym that uses PyBullet as the physics engine for simulation. For all approaches, the simulations were run locally in a Python environment, and data (axis values) was sent to the Grasshopper environment for visualization.
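A skeleton of such an environment under the classic Gym API might look like the following; the URDF, action encoding, and reward are illustrative stand-ins rather than the project's actual definitions.

```python
import gym
import numpy as np
import pybullet as p
import pybullet_data

class ReachEnv(gym.Env):
    """Hypothetical A-to-B reaching environment (classic Gym API)."""

    def __init__(self):
        p.connect(p.DIRECT)                       # headless physics
        p.setAdditionalSearchPath(pybullet_data.getDataPath())
        self.robot = p.loadURDF("kuka_iiwa/model.urdf", useFixedBase=True)
        self.n_joints = p.getNumJoints(self.robot)
        # One discrete action per joint direction: +/- a small step.
        self.action_space = gym.spaces.Discrete(2 * self.n_joints)
        self.observation_space = gym.spaces.Box(
            low=-np.pi, high=np.pi, shape=(self.n_joints,), dtype=np.float32)
        self.target = np.zeros(self.n_joints, dtype=np.float32)  # goal axes

    def _axis_values(self):
        states = p.getJointStates(self.robot, list(range(self.n_joints)))
        return np.array([s[0] for s in states], dtype=np.float32)

    def reset(self):
        for j in range(self.n_joints):
            p.resetJointState(self.robot, j, 0.0)
        return self._axis_values()

    def step(self, action):
        joint, direction = divmod(action, 2)
        delta = 0.05 if direction else -0.05
        q = self._axis_values()
        p.setJointMotorControl2(self.robot, joint, p.POSITION_CONTROL,
                                targetPosition=q[joint] + delta)
        p.stepSimulation()
        obs = self._axis_values()
        dist = float(np.linalg.norm(obs - self.target))
        return obs, -dist, dist < 0.05, {}        # negative distance as reward
```

The axis values returned by _axis_values() are the data that would be streamed out (e.g. over a local socket) to Grasshopper for visualization.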

Start of Training

End of Training



Virtual Presence Robot

In collaboration with Xiliu Yang

Coursework for Computational Design and Digital Fabrication

This mobile robot interacts with its physical surroundings by representing the "virtual presence" of the person controlling it remotely. It can be accessed by anyone, anywhere in the world, to dispense treats and play with you (or your pet!).

The work was developed by two people (one in Germany and one in the US) as an attempt to move beyond traditional means of internet-based communication by creating a physical interface for remote interaction. During the height of the COVID-19 pandemic, when everyone was responsible for limiting physical exposure to one another, we tried to bring back the tangible connection that is part of physical interaction.

The project has four main parts - a Mobile Robot Car, a Treat Dispenser, a Joystick, and a Network Communication Setup. The Network Setup receives the video stream and sends commands to the robot from anywhere in the world through a connection handler on the local network and a public server.
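The full build is documented on Instructables (linked below); as a rough illustration of the command path only, a relay on the local network might forward joystick commands to the robot as JSON over UDP. All addresses, port numbers, and message fields here are made up.

```python
import json
import socket

# Illustrative sketch of the command path: forward joystick commands
# as JSON over UDP to the robot car on the local network.
ROBOT_ADDR = ("192.168.0.42", 5005)   # placeholder host and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_drive_command(left: float, right: float) -> None:
    """Send normalised wheel speeds (-1..1) to the robot car."""
    packet = json.dumps({"cmd": "drive", "left": left, "right": right})
    sock.sendto(packet.encode("utf-8"), ROBOT_ADDR)

send_drive_command(0.5, 0.5)    # drive forward
send_drive_command(0.0, 0.0)    # stop
```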

instructables source
