Coding with the Robot Operating System

This past week I worked on the code for the Robot Operating System (ROS). We were able to load our object detection model onto the Jetson computer and run it through ROS. I then created another ROS package that we will use to write the code for the drone's movements. I linked the two packages (movement control and computer vision) and was able to launch them together. This coming week we will get the drone underwater and test its movements so that I can later start coding some of the drone's tasks.
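For readers unfamiliar with ROS, launching two packages together is typically done with a single launch file. Below is a hedged sketch in ROS 2's XML launch format; the package and executable names are hypothetical, not our actual code.

```xml
<!-- Hypothetical package/executable names, shown only to illustrate the idea -->
<launch>
  <!-- Node running the object detection model on the Jetson -->
  <node pkg="auv_vision" exec="object_detection_node" name="object_detection"/>
  <!-- Node that will hold the drone's movement code -->
  <node pkg="auv_movement" exec="movement_node" name="movement_control"/>
</launch>
```

Running `ros2 launch` on a file like this starts both nodes with one command instead of launching each package separately.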


Leak Testing the Drone

This past week we were able to put the drone together with all the components required to operate it. Before putting the drone underwater we pressure tested it with a vacuum pump, which gave us an unsatisfactory result. Later we tested it underwater, before putting the electronics inside, to see if it actually leaked. Sadly, the drone ended up leaking. The team was able to fix the cable penetrators that were leaking, but it seemed like the metal plate used to enclose the tube was also leaking. This upcoming week we will try to fix the issue with the metal plate so that the acrylic tube will be waterproof.

Improving the Computer Vision

This week I started writing the behavioral tree code in C++, using the diagrams I created last week as a reference. After writing the behavioral tree code for two of the tasks, I worked on improving the object detection model. We recorded new footage underwater after we received the final prints of the objects; I annotated each image and we started training the object detection model. As a team, we expect the drone to be fully assembled next week, before our next pool test. During that pool test, we plan to use a tether to control the movements of the drone and also to test how well the object detection model works.

Coding for Autonomous Task Sequencing

As the drone is nearing completion mechanically and electrically, I thought it was a good time to start the coding process. This year we decided to use a behavioral tree to implement autonomous task sequencing. A behavioral tree is a more advanced alternative to a finite state machine (FSM): it allows a robot or a game agent to be more reactive and to handle corner cases, which is something an FSM is not capable of doing. Using a behavioral tree also results in more robust and modular code, which will improve the quality of the coding process. To create the behavioral tree I used a drag-and-drop interface called Groot 2. This interface let me draw the diagrams, and it also generated the XML code that represents the structure of the trees I created.

Other than the behavioral tree, Xinhao and I also created and tested a beta computer vision model that we trained with the data we collected last week. The model seemed to work as expected when we tested it with live footage. The next step for me as the software lead is to write the C++ code needed to implement the behavioral tree. Later steps will be writing the ROS 2 (Robot Operating System 2) code and integrating the computer vision model with the behavioral tree code. Even though we have a working computer vision model, it is incomplete: we need a new one covering all the objects we want the drone to detect, so we will be recording more training data next week and training a new model.

Below is an image of a behavioral tree diagram I’ve created for detecting a marker that is at the bottom of the pool and then advancing toward the direction the marker is pointing.
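Groot 2 serializes a diagram like this as XML. The fragment below is a hypothetical reconstruction of that tree's structure in the style of the generated XML; all node IDs are invented for illustration, not taken from our actual file.

```xml
<!-- Hypothetical reconstruction of the diagram; node IDs are invented -->
<root main_tree_to_execute="FollowMarker">
  <BehaviorTree ID="FollowMarker">
    <Sequence>
      <Action ID="DiveToSearchDepth"/>  <!-- descend until the bottom is in view -->
      <Action ID="DetectMarker"/>       <!-- run the object detection model -->
      <Action ID="AlignWithMarker"/>    <!-- yaw toward the direction it points -->
      <Action ID="AdvanceForward"/>     <!-- move along that heading -->
    </Sequence>
  </BehaviorTree>
</root>
```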

Recording Training Data for the Computer Vision

This week I mainly worked on creating a test tube that we can use to record videos underwater. After completing the test tube, which consists of a small computer, a camera, and a battery, we took it to the Morrissey Pool to record training data for the computer vision of our main drone. We used SSH, a network protocol, to communicate with the test tube, which was watertight and sealed off.

Building the Drone

This week I worked on the circuitry of the underwater robot. As a team, we first mounted the thrusters on the chassis, then we completed the circuitry to power the thrusters, and last but not least we tested the thrusters using a controller. I also worked on a separate device that consists of a small computer and a camera that we will use next week to record video underwater to train our computer vision for object detection.