Before graduating from the Self-Driving Car Nanodegree, I was selected by a Udacity content developer to be a beta tester for the new Robotics Nanodegree. This meant I got early access to the projects and was able to provide feedback to help polish the content. However, in order to pass each term I still had to complete all of the projects, which are shown below.

Project 2 - Pick and Place

The first project in the Robotics Nanodegree was a simple computer vision project, mainly an introduction to writing Python and interacting with the simulators. Since it wasn't particularly interesting, I've decided to leave it out of this portfolio.

(Animation: pick-place.gif - the pick and place operation described below.)

The goal of this project was to solve the inverse kinematics problem. For a robot arm like the one seen on the left, if all the joint angles are known it's quite easy to calculate the resulting position of the end effector (gripper); this is called forward kinematics. If instead you are given only the end effector position (and orientation), solving for all the joint angles that produce it is known as inverse kinematics.
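
To make the distinction concrete, here is a minimal forward kinematics sketch for a hypothetical two-link planar arm. The arm in the project is a full 3D manipulator with more joints, so its math is longer, but the idea is the same; the link lengths and angles below are just illustrative values.

```python
import numpy as np

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
    """Planar 2-link arm: given the joint angles, return the end effector (x, y)."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# Known joint angles -> end effector position is a direct calculation.
print(forward_kinematics(np.pi / 4, -np.pi / 6))
```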

Once we created a function that could determine all the joint angles required to reach a given end effector position, a path of end effector positions would be generated, and a controller would be responsible for moving the joints to the angles produced by the inverse kinematics. If there was any significant error in the calculated joint angles, the pick and place operation seen to the left would fail.
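
Here is the corresponding inverse kinematics sketch for the same hypothetical two-link arm, recovering the joint angles from a target position with the law of cosines. The project's actual solution is a longer geometric derivation, so treat this purely as an illustration of the idea.

```python
import numpy as np

def inverse_kinematics(x, y, l1=1.0, l2=0.8):
    """Planar 2-link arm: given a target (x, y), solve for joint angles (elbow-down)."""
    cos_t2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    cos_t2 = np.clip(cos_t2, -1.0, 1.0)          # guard against numerical overshoot
    t2 = np.arccos(cos_t2)
    t1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(t2), l1 + l2 * np.cos(t2))
    return t1, t2

# Round trip: plug the angles back into the forward kinematics to check the target.
t1, t2 = inverse_kinematics(1.2, 0.5)
x = 1.0 * np.cos(t1) + 0.8 * np.cos(t1 + t2)
y = 1.0 * np.sin(t1) + 0.8 * np.sin(t1 + t2)
print(round(x, 3), round(y, 3))                  # ~ 1.2 0.5
```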

For an in-depth writeup of all the technical details of the project, see the README of my GitHub Repo.


Project 3 - Perception

In this project we are given a PR2 robot equipped with an RGB-D camera. An RGB-D camera provides not only a color value for each pixel in an image, but also its depth. That allows the robot to build a 3D view of its environment, as shown to the left.

The goal for this project was to take in RGB-D camera data and use it to classify and locate distinct objects in the environment.

The first step was to filter the raw data: an outlier filter removes noisy points that don't have enough neighbors within a set distance, and a passthrough filter crops the cloud down to the region of interest. The second step was to remove the background, which in this case is the table the objects are placed on. This was accomplished using the Random Sample Consensus (RANSAC) algorithm, which finds the points that best fit a particular model so they can be removed. Using a plane as the reference model, RANSAC identified all the points belonging to the largest plane in the scene, which was the table.
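
To show the core idea, here is a rough numpy sketch of RANSAC plane fitting. In practice you would use a point cloud library's built-in segmenter rather than anything hand-rolled, but the loop below is what that segmenter is doing under the hood; the iteration count and distance threshold are illustrative.

```python
import numpy as np

def ransac_plane(points, n_iters=100, dist_thresh=0.01):
    """Fit a plane to an (N, 3) point cloud with RANSAC; return a boolean inlier mask."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Pick 3 random points and compute the plane they define.
        p1, p2, p3 = points[np.random.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample, skip it
            continue
        normal /= norm
        # Distance of every point to the candidate plane.
        dists = np.abs((points - p1) @ normal)
        inliers = dists < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Remove the dominant plane (the table) and keep everything else (the objects).
# cloud = ...  # (N, 3) array from the RGB-D camera
# table_mask = ransac_plane(cloud)
# objects = cloud[~table_mask]
```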

Euclidean clustering is used to identify points as belonging to the same object if they are within a set distance of each other. With all the separate objects appropriately clustered, the objects are then classified using a support vector machine.
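
The sketch below shows the general flow using scikit-learn stand-ins: DBSCAN plays the role of Euclidean clustering (both group points by distance), and the feature vectors fed to the SVM are placeholders for whatever per-object features the classifier is actually trained on. The data, thresholds, and feature sizes are all illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

# Placeholder point cloud: two tight blobs standing in for two objects on the table.
rng = np.random.default_rng(0)
cloud = np.vstack([
    rng.normal([0.5, 0.0, 0.7], 0.01, size=(200, 3)),
    rng.normal([0.8, 0.2, 0.7], 0.01, size=(200, 3)),
])

# Group points that sit within 2 cm of each other into object clusters.
labels = DBSCAN(eps=0.02, min_samples=10).fit_predict(cloud)

# Train an SVM on per-object feature vectors (placeholders for real features here).
X_train = rng.random((40, 64))
y_train = rng.integers(0, 3, 40)
clf = SVC(kernel='linear').fit(X_train, y_train)

# For each detected cluster, build its feature vector and classify it.
for cluster_id in set(labels) - {-1}:        # -1 is DBSCAN's "noise" label
    cluster_points = cloud[labels == cluster_id]
    features = rng.random((1, 64))           # stand-in for the cluster's real features
    print(cluster_id, clf.predict(features))
```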

To see the full implementation details of this project, check out the GitHub Repo.


Project 4 - Deep Learning

For this project we designed a fully convolutional network (FCN) that had to identify which pixels in a camera image belonged to people in general and which belonged to one specific person (labeled as "The Hero"). I already cover FCNs on my Term 3 SDC Nanodegree page, so I won't dive any deeper here. The goal of this project was to create an FCN that could identify the Hero and other people so that a quadcopter patrolling the area could follow the Hero.
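
For a rough sense of what such a network looks like, here is a minimal encoder-decoder FCN sketch in Keras. The layer sizes, input resolution, and class count are illustrative assumptions, not the exact network I trained for the project.

```python
from tensorflow.keras import layers, Model

def build_fcn(input_shape=(160, 160, 3), num_classes=3):
    """Minimal FCN sketch: one class label per pixel
    (e.g. background, other people, the Hero)."""
    inputs = layers.Input(shape=input_shape)

    # Encoder: strided separable convolutions shrink the image while adding depth.
    e1 = layers.SeparableConv2D(32, 3, strides=2, padding='same', activation='relu')(inputs)
    e2 = layers.SeparableConv2D(64, 3, strides=2, padding='same', activation='relu')(e1)

    # 1x1 convolution keeps the spatial information a dense layer would throw away.
    mid = layers.Conv2D(128, 1, padding='same', activation='relu')(e2)

    # Decoder: upsample back to the input size, with skip connections from the encoder.
    d1 = layers.UpSampling2D(2)(mid)
    d1 = layers.Concatenate()([d1, e1])
    d1 = layers.SeparableConv2D(64, 3, padding='same', activation='relu')(d1)
    d2 = layers.UpSampling2D(2)(d1)
    d2 = layers.Concatenate()([d2, inputs])
    d2 = layers.SeparableConv2D(32, 3, padding='same', activation='relu')(d2)

    outputs = layers.Conv2D(num_classes, 1, activation='softmax')(d2)
    return Model(inputs, outputs)

model = build_fcn()
model.compile(optimizer='adam', loss='categorical_crossentropy')
```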

Below are images showing the model's performance. The left image is the raw camera data, the middle is the ground truth, and the right image is my model's prediction.