Verity AG
Where engineering meets creativity
Initially, I applied for a pure control engineering position at Verity. During my interview, I was lucky enough to learn about another opening they were considering at the time: the drone choreographer. Given my interest in music, I was quickly persuaded to continue my application for that position, and it turned out to be a perfect match. On the one hand, my work consists of creating the choreographies, which requires translating artistic shapes, motions, and visions into mathematical descriptions of paths and trajectories in 3D. On the other hand, I work on the underlying C++ choreography framework and on automatic trajectory-generation algorithms that enable safe transitions for a fleet of drones between figures.



Recently, due to the pandemic, I have taken on a more classical control/software engineering role within Verity's industrial drone branch. My responsibilities include generating insights from the data gathered by the drones in warehouses (barcode and occupancy detection) using their various sensors, as well as creating and validating warehouse maps. Finally, I am also evaluating different estimators both in a simulation environment and on real-world data.
Within the past three years I have learned many different skills across various technical topics, such as code integration, code reviewing, modern C++, and common software design patterns, but I have also strengthened my competence in talking to non-technical people from the live events industry, such as directors, production managers, and artists. The various on-site experiences around the world not only allowed me to work with different cultural mindsets but also required me to be very solution-oriented in a high-pressure environment: if Celine Dion is ready to start her song, with thousands of people in the audience, there is very little room for error.
​
You can have a look at Verity's current showreel, which features various projects I worked on, or you can check out Verity's new industrial product.

If you want to learn more about my job at Verity, you can also read my Verity Team Spotlight.
​
Master Thesis
Parsing blocksworld scenes with an RGB-D sensor
My Master Thesis at ETH Zurich was part of a collaboration between the ASL (Autonomous Systems Lab) and DRZ (Disney Research Zurich). The goal was to create a human-robot interaction setup in which both parties work towards a common goal: building a structure out of ordinary children's building blocks. This requires parsing a blocksworld scene, which is done with the help of an RGB-D sensor. Such sensors have gained a lot of attention in research in recent years and have become accessible to the consumer market (Xbox Kinect, Primesense, iPhone X). The full pipeline that was developed detects the different instances of blocks in the scene and performs a full pose estimation for each of them. If it is the robot's turn, a decision logic chooses the next block needed to complete the goal structure and instructs the robot to move it accordingly.
If it is the human's turn, the pipeline detects which block was moved where and updates the orientation and position of the goal structure if needed. In the thesis I touched on topics like 3D feature descriptors, geometric consistency matching methods, ICP, intrinsic camera calibration, and hand-eye calibration. The implementation used PCL (Point Cloud Library) and ROS (Robot Operating System), and the code was written entirely in C++.
Semester Project
Supervised learning of the safe set in autonomous driving
My semester project was part of the research done in the ORCA (Optimal RC Autonomous Racing) project at the Automatic Control Laboratory (IfA). The goal of the ORCA project is to find new control strategies for autonomous agents in competitive racing environments.
The main goal of my semester thesis was to investigate the application of machine learning tools, in particular SVMs, to learn the so-called "viability kernel" of the autonomous car. It involved working with LibSVM, one of the fastest SVM libraries out there, written in C++. For ease of use, however, my project was done in MATLAB using the MATLAB interface of LibSVM. Furthermore, the project required deep knowledge of control systems, especially MPC (model predictive control), and of system dynamics in general. For those interested, I'll give a short summary of what the project was all about in the following.




When considering the dynamic model of the car as well as its constraints (like staying on the track or the non-slipping constraints of the tires), the viability kernel describes the subset of the state space in which there exists a sequence of control inputs that recursively guarantees satisfaction of the aforementioned constraints. For example, imagine a car that is currently oriented perpendicular to the track at high speed. This specific state is not contained in the viability kernel, since the car will almost certainly leave the track. On the other hand, a state in which the car is oriented along the track direction at an appropriate speed is considered safe and is therefore part of the viability kernel. This is of course a big simplification of the problem, but figure 1 depicts the discretised viability kernel, or safe set (red points), for a certain speed (v = 1.2 m/s) and orientation (along the y axis). The main reason to compute the viability kernel for a given model and its constraints is that, when doing optimal model predictive control, the kernel is needed to guarantee constraint satisfaction for all future time instances. This is obviously one of the highest goals in autonomous driving research.
​
Up until now, the viability kernel could only be calculated iteratively for a specific track, which would take multiple days. The idea was therefore to train an SVM (support vector machine) on a set of precomputed turns and track segments and then use it to predict the kernel for the whole track. The benefit would be the ability to compute an upcoming turn in a few milliseconds, which is essential for real-time optimal control.
​
We managed to show that this is possible, albeit only for a subspace of the state space due to the high dimensionality. This means that one has to train several SVMs and perform a lookup-like call to the right SVM depending on which subspace the car is currently in. You can see some of the results in the figures on the side.
​
For more information, feel free to download the full thesis and have a look at the details of the methods and results. As a reference for my contributions to the project, you can also contact my supervisor:
​
Alex Liniger: liniger@control.ee.ethz.ch
Bachelor Thesis
New training strategies and experimental tasks to enhance motor learning
My Bachelor Thesis was done in the Sensory-Motor Systems Lab (SMS) at ETH Zurich. It consisted of creating a computer game to be used in rehabilitation training for stroke patients using ARMin, a 7-DOF robot arm. It included the design of a game concept that would allow for the right training methods while keeping the patient motivated and entertained. Furthermore, I implemented an assisted-as-needed controller which would run on the robot arm. The latter would serve as the interface between the subject and the game, replacing the classical joystick. The thesis required the use of Unity (programming was done in C#) for the game, Siemens NX and Blender for modelling, texturing, and animation, and MATLAB Simulink for the controller design. In the following I give a more detailed summary of the methods used.

The screenshot shows the elliptic movement of the dispenser



The underlying hypothesis of the project was that, in order to achieve optimal progress during motor learning, the subject has to train in the sweet spot of his or her abilities. This means that, instead of helping the subject with a constant force from the robot arm, providing just as much help as needed should yield better results. For this purpose, we wanted to create an environment in which these training conditions are met while keeping the subject interested in the training. Hence we had to come up with a game that would be interesting, fun, and challenging at the same time. Initially we came up with three concepts but decided to go with “The Ice Cream Maker”. In this sequential game, the subject is faced with a customer who orders a cup of ice cream, which in turn has to be prepared by the subject using different dispensers. The dispensers move along different spatial and temporal trajectories. Nearly all assets in the game were self-made, using Siemens NX for the more structural objects (dispensers, houses, etc.) and Blender for the more organic assets (the child customer, the arm, etc.). Texturing and all predefined animations, like the movement of the customer, were also done in Blender.
The assisted-as-needed controller, on the other hand, was designed around a performance measure from the game. It adaptively changes the motor gains so that the subject always gets just as much help as needed from the robot. The reference position is given by the current game task and consists of either an elliptic or a linear motion. Since temporal aspects of a movement are important during motor learning, the aforementioned motions were executed with different velocity profiles. The final control currents for each axis were further calculated using three force sensors, so the resulting controller was a combination of position and impedance control.
The learning setup consists of a host PC running the game and a controller PC running the real-time controller. The two communicate via the UDP protocol, exchanging data such as performance measures for the controller and joint positions for the visualisation in the game.
​
Two papers on which I am a co-author were published based on the methods developed in this thesis. For further references, feel free to contact my supervisor:
Prof. Dr. Laura Marchal Crespo: laura.marchal@artorg.unibe.ch
Engineering class
An info talk about engineering at my former high school
After completing the first three semesters of the mechanical engineering bachelor programme, one of my fellow students and I were astonished by how little we had known about engineering before our studies. Since we went to the same high school (LNW), we reflected on how little we had been told about the actual use of maths, physics, and electronics. We thought it would be a great idea to hold an event at our old school, where we would talk about what we had learned in those three semesters and show the students of the graduating classes how interesting science can be once it is applied properly. We contacted our old physics teacher, who was always open to new education concepts, and he promptly agreed to organise the event by bringing the graduating classes together for an afternoon in a classroom. My colleague and I prepared a few slides and organised one or two experiments to demonstrate some principles. We mostly talked about control theory with its vast universe of complex mathematics, as well as about the structural analysis of train wheels and the physics of helicopters.



The self-titled “Engineering Class” was a great success and received a lot of positive feedback, which is why we repeated the event the following year. We were able to reuse some material from the first edition and could focus on going into more detail, for example by creating simulations of control systems which we could show during the talk. I had a lot of fun organising these lectures, and even more giving the talks. I always enjoy teaching somebody something new and exciting, just as much as I enjoy learning something I did not know.
For further information or references, you can contact my old physics teacher at LNW:
Jean-Paul Larbière: jean-paul.larbiere@education.lu
Semantic Text Clock
A watch that displays the time in Swiss German
This project originated from the idea of making a self-made Christmas present for my girlfriend. I knew that she was fascinated by semantic clocks, which show the time in text form. She also really liked Swiss German, so I decided to combine those two facts and designed a semantic text clock in Swiss German. All hardware design was done in SolidWorks. The main body consists of two chambers. The back chamber contains all the electronics, namely an Arduino Nano, a clock module, and the passive components needed for button presses and the power supply. The front chamber consists of a light-shroud grid that holds an LED in each shroud section. This way, when the LEDs for a specific word light up, only the related letters are illuminated and there is no leakage to neighbouring letters. The main body was printed on my Ultimaker 2+ 3D printer, and the top cover containing the letters was laser cut in a nearby makerspace. The LEDs I used were addressable RGB LEDs, which allow for multiple colours as well as a single data wire to the Arduino. The C++ code running on the Arduino checks for input from the user via button presses, looks up the current time on the clock module via I2C, and finally lights up the needed LEDs accordingly. In the final product, the user can change the colours, change the intensity, and set a new time, all accomplished with only two physical buttons.
​
The code can be found on my GitHub page, and the models for printing are on Thingiverse.




Inverted Pendulum
A self-made platform to test different control strategies
Since the focus of my studies was dynamics and control systems, I always looked for opportunities to apply the gathered theoretical knowledge to real systems. At ETH, only a few courses focused on practical applications, so I decided to take matters into my own hands. The idea was to create a platform for testing different modelling and control strategies, and I decided that an inverted pendulum would do the job. On the one hand, the simple dynamic system can be modelled using only two degrees of freedom; on the other hand, the system is unstable enough that one should see a difference in performance between simple PID and more complex control strategies.
I started by modelling a rail system, as well as a sleigh that would ride on those rails and carry a pivot-mounted pendulum. At this point I had to decide which sensors to use, since their placement and type would heavily influence the controllability of the system. Theoretically, the position of the sleigh alone would be enough to control the inverted pendulum.

However, this only works in an ideal world, where there is no friction and the sleigh can slide freely on the rails. Since the sleigh is attached to a motor via a belt, the tolerances of my 3D printer are not perfect, and the other hardware used was low budget, controlling the system using only the position information would be nearly impossible. Therefore, for robustness reasons, I opted for the position of the sleigh and the angle of the pendulum relative to the rails as sensor outputs. The best way to realise this would be to use a sleigh motor with an optical encoder for the position and a second encoder for the angle of the pendulum. However, since accurate motor-encoder combinations are quite expensive and this is a hobby build, I opted for an ultrasound sensor, which delivers sub-millimetre accuracy in the needed range; with some filtering of the signal this should be enough. Passive encoders were reasonably affordable, which is why I used one for the angle of the pendulum.
The project is currently a work in progress. So far I have managed to control the sleigh by itself with a simple PID controller, using the ultrasonic sensor after applying a moving-average filter. I encountered some cyclic friction produced by the motor-belt coupling, which induces periodic noise in the system, and I still have to investigate whether this poses a major problem for the control. The next steps are to design and print the pendulum holder and do some system analysis.
3D printing
A passion, useful for making and tinkering of all kinds
I have always been fascinated by designing 3D models on a 2D screen using modern CAD programs. In my eyes, the design process is very satisfying since you can immediately see results. This is even more true if you have the possibility to print your models using modern additive manufacturing processes, which let you hold the model you designed an hour ago in your own hands. I loved the thought of having this technology at my fingertips, so I decided to buy an FDM 3D printer for home use. I opted for an Ultimaker 2+, which is on the expensive side but delivers very good prints and reliably produces models even after a pause of multiple weeks. From the day the printer arrived, 3D printing became a real passion of mine. It allows me to design things that I can actually use in everyday life. The models I have designed so far range from little things like keychains or fan objects, to useful objects like personalised shower drains, to bigger builds like some of the projects on this site. From time to time I also like to probe the possibilities of my printer, for example by designing screws and bolts to see which tolerances are needed, or by making springs to investigate the behaviour of the rather stiff plastic in specific shapes.




Neural Net
A multilayer perceptron, just for fun!
Before my Master Thesis, I knew that a big part of it would consist of implementing C++ code in a ROS environment. Although I had done some classes and projects in C++, it had been some time and I could not remember a lot of the syntax. Furthermore, since I was new to ROS, I thought it would be a good idea to do a standalone C++ project first. This way I could refresh my memory of the syntax while eliminating any external influence of the ROS interface, which would later help me during my thesis when troubleshooting, since ROS and C++ were both possible error sources.
​
Since my semester thesis was about machine learning, which sparked my interest in the topic, I decided to implement a multilayer perceptron neural network from the ground up. I used the online book by Michael Nielsen to get familiar with the topic. This included a C# implementation which was very useful for getting first ideas about the code.
This is an ongoing project which I expand from time to time, just for fun. To test the implementation, I created a dataset consisting of two boolean inputs and their logical XOR as output; the network learns this relationship within seconds. A next step would be to use the MNIST dataset, which consists of pictures of handwritten digits, and see whether this simple network is capable of classifying it. As a further project I am also considering looking into convolutional neural networks, which I already know in theory from other classes.