Sheffield Field Robotics Challenge

[Image: allSegmentation]

Last week I had the absolute pleasure of being invited to compete in the Sheffield Field Robotics Challenge as part of the annual TAROS conference. 

In short, I had a great time. It was a bit scary being (as far as I was aware) the only undergraduate in attendance, as everyone else was a master's student or above.

I arrived early outside one of the University of Sheffield's buildings to meet a group of highly accomplished robotics professionals. Once we had all met up, we quickly boarded minivans and headed for the Peak District.

We proceeded to a little lab in the middle of nowhere, with nothing more than a few small rooms and a handful of workbenches. WiFi and internet connection in general were scarce. This was extremely worrying for me, as I had been hoping that my lack of knowledge could be augmented by some quick-fire Googling.

We worked in teams of five or six to complete four tasks:

1. Stitch together a set of aerial shots to create a 2D map of the area, highlighting areas of interest on the map, including trees and patches of foliage.
2. Navigate an X-Copter, guided by data from the stitched map, to take more local and specialised images.
3. Get a small ground vehicle to navigate some rough terrain and reach an area of interest, with the aim of doing so autonomously.
4. Use image analysis on the data coming back from the robot to work out what was in each image, and hopefully interact with the objects found.

I've never had the opportunity to work with ROS, so I decided not to put myself forward for any of the robot navigation tasks. I had, however, just taken a course on using Matlab/Octave for image analysis, so that seemed the obvious area for me to lend a hand in.

I spent a lot of my time waiting for new images from the actual robot camera, as these would be completely different from anything I could take with my own camera; I didn't want to focus too much on my own shots, since any testing against them would be almost useless.

I made some test pictures like these:

[Image: testing]

Our little robot was to recognise these coloured balls hanging in a tree, and then reach out and grab them with its robotic arm (to emulate fruit picking).

I spent the majority of my weekend working on these images, trying to segment out the colours dynamically so that the result wouldn't care too much about light levels, shadows, or camera movement.
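My actual scripts aren't in this post, but one common way to get that kind of lighting tolerance is to threshold in HSV space rather than raw RGB, since hue barely moves when the brightness changes. Here's a minimal sketch of that idea in Matlab/Octave; the image name and the threshold values are just placeholders rather than my real tuned numbers, and imshow assumes the Image Processing Toolbox (or Octave's image package):

```matlab
% Segment red regions by thresholding hue in HSV space. Hue is largely
% independent of brightness, so this copes better with shadows and
% changing light than thresholding raw RGB values would.
img = im2double(imread('tree.jpg'));  % placeholder filename
hsv = rgb2hsv(img);                   % H, S, V each in [0, 1]

h = hsv(:,:,1);
s = hsv(:,:,2);
v = hsv(:,:,3);

% Red hue wraps around 0, so accept both ends of the hue circle.
% These thresholds are illustrative guesses, not my tuned values.
redHue = (h < 0.05) | (h > 0.95);
vivid  = (s > 0.4) & (v > 0.2);       % ignore washed-out / very dark pixels
mask   = redHue & vivid;

imshow(mask);                         % white where the ball (hopefully) is
```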

When I finally got my hands on some robot-captured images my heart sank: they were much lower resolution and of absolutely terrible quality:

[Image: original]

After spending quite a bit of time editing my Matlab scripts, I turned out this:

[Image: segmentCircle]

I struggled to pick up the blue ball in the original image, but grabbed the red one, which helped give my team several extra points.
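For the curious, the kind of edit that helps with noisy, low-resolution frames like these is cleaning up the thresholded mask before picking the biggest blob and handing its centre to the arm. A rough sketch of that stage, continuing from the `mask` in the snippet above; the structuring-element size and area cutoff are placeholder guesses, and imopen, imfill, bwareaopen, and regionprops come from the Image Processing Toolbox or Octave's image package:

```matlab
% Clean up a noisy binary mask and locate the largest red blob,
% which (fingers crossed) is the ball the arm should reach for.
mask = imopen(mask, strel('disk', 2));  % erode-then-dilate to drop speckle
mask = imfill(mask, 'holes');           % fill bright highlights inside the ball
mask = bwareaopen(mask, 30);            % discard any remaining tiny regions

stats = regionprops(mask, 'Area', 'Centroid');
if isempty(stats)
    disp('No ball found in this frame.');
else
    [~, idx] = max([stats.Area]);       % keep only the largest region
    target = stats(idx).Centroid;       % [x y] pixel coordinates
    fprintf('Ball centre at (%.1f, %.1f)\n', target(1), target(2));
end
```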

In the end my team came 3rd overall, which I was incredibly pleased with!

A lot of my code has since been uploaded to my GitHub. It's a rushed mess, but feel free to have a look if you're interested.
