Deep Learning

Learning Artificial Intelligence with Udacity

Recently I wrote about my experience with Udacity’s Self Driving Car Nanodegree (SDCND).

While pursuing this Nanodegree, I was so thrilled by the course material that, at the end of Term 2 of the SDCND, I decided to enroll in another Udacity Nanodegree: the Artificial Intelligence Nanodegree. The first two terms of the SDCND had helped me master the basics of Deep Learning, and I wanted to explore its applications in other domains like Natural Language Processing (think IBM Watson) and Voice User Interfaces (think Amazon Alexa). The AI-ND seemed like the perfect place to achieve this, partly due to my fantastic experience with the previous Udacity Nanodegrees.

The Artificial Intelligence Nanodegree is a bit different from the other Nanodegrees. There are a total of four terms, and you need to pay for and complete two of them in order to graduate. If you wish, you can enroll in and complete the other terms as well.

The first term is common and compulsory for all. It teaches the foundations of AI: game playing, search, optimization, probabilistic AI, and Hidden Markov Models. The topics are taught by some of the pioneers of the field, like Prof. Sebastian Thrun, Prof. Peter Norvig, and Prof. Thad Starner. All the topics are covered in detail, with links to research papers and book chapters for further study.

The course begins with an interesting project: creating a program to solve Sudoku using the concepts of search and constraint propagation. You get the opportunity to play with various heuristics as you try to design an optimal strategy for the game.
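To give a flavour of constraint propagation, here is a minimal sketch of one such strategy, elimination, in Python (the `values` and `peers` structures are illustrative, not the project’s actual starter code):

```python
# Minimal sketch of the "eliminate" constraint-propagation strategy for Sudoku.
# `values` maps each box (e.g. 'A1') to a string of candidate digits;
# `peers` maps each box to the set of boxes sharing its row, column, or 3x3 square.

def eliminate(values, peers):
    """If a box is already solved, remove its digit from all of its peers."""
    solved = [box for box in values if len(values[box]) == 1]
    for box in solved:
        digit = values[box]
        for peer in peers[box]:
            values[peer] = values[peer].replace(digit, '')
    return values
```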

Game Playing example

The next project builds on this by implementing an adversarial search agent to play the game of Isolation. The topics covered include Minimax, Alpha-Beta pruning, Iterative Deepening, and more. The project also required an analysis of a research paper; I reviewed the famous AlphaGo paper, and my review can be found on my GitHub project page.
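For readers unfamiliar with these algorithms, here is a minimal sketch of depth-limited minimax with alpha-beta pruning; the `game` interface (legal_moves, forecast, score) is hypothetical and not the project’s actual API:

```python
import math

def alphabeta(game, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Depth-limited minimax with alpha-beta pruning (the game interface is illustrative)."""
    if depth == 0 or not game.legal_moves():
        return game.score(), None          # heuristic evaluation at the search frontier
    best_move = None
    if maximizing:
        value = -math.inf
        for move in game.legal_moves():
            score, _ = alphabeta(game.forecast(move), depth - 1, alpha, beta, False)
            if score > value:
                value, best_move = score, move
            alpha = max(alpha, value)
            if alpha >= beta:              # beta cut-off: the opponent will avoid this branch
                break
    else:
        value = math.inf
        for move in game.legal_moves():
            score, _ = alphabeta(game.forecast(move), depth - 1, alpha, beta, True)
            if score < value:
                value, best_move = score, move
            beta = min(beta, value)
            if beta <= alpha:              # alpha cut-off
                break
    return value, best_move
```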

From game-playing agents we moved on to the domain of planning problems. I experimented with various automatically generated heuristics, including planning graph heuristics, to solve the problems. Like the previous project, this one also required a research review.

From planning, we moved to the domain of probabilistic inference. The final project of Term 1 required an understanding of Hidden Markov Models to design a sign-language recognizer. You also learn about different model selection techniques, such as log-likelihood with cross-validation folds, the Bayesian Information Criterion (BIC), and the Discriminative Information Criterion (DIC).
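As an illustration of one of these criteria, here is a rough sketch of BIC-based model selection using hmmlearn’s GaussianHMM; the parameter count `p` follows the usual convention for diagonal-covariance Gaussian HMMs, and your exact bookkeeping may differ:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def best_hmm_by_bic(X, lengths, n_features, min_states=2, max_states=10):
    """Pick the number of hidden states that minimises BIC = -2*logL + p*log(N).

    A sketch only: p counts free parameters (transitions, initial probabilities,
    means, diagonal variances); the exact count depends on your conventions.
    """
    best_model, best_bic = None, np.inf
    for n in range(min_states, max_states + 1):
        try:
            model = GaussianHMM(n_components=n, covariance_type='diag',
                                n_iter=1000).fit(X, lengths)
            log_l = model.score(X, lengths)
            p = n * n + 2 * n * n_features - 1
            bic = -2 * log_l + p * np.log(len(X))
            if bic < best_bic:
                best_model, best_bic = model, bic
        except ValueError:
            continue  # some state counts may fail to fit on short sequences
    return best_model
```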

The next term focused on the concepts and applications of Deep Learning. It covered the basics, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and semi-supervised learning, and then moved on to the latest developments in the field, like Generative Adversarial Networks (GANs). At the end of the module there was an option to choose a specialization from three tracks: Computer Vision, Natural Language Processing, and Voice User Interfaces. Since the SDCND had already exposed me to computer vision, and I had worked on some NLP projects and gone through parts of Stanford’s CS224d, I decided to pursue the Voice User Interfaces specialization. The project involved building a deep neural network that functions as part of an end-to-end automatic speech recognition (ASR) pipeline: it accepts raw audio as input and returns a predicted transcription of the spoken language. Some of the network architectures I experimented with were RNN; RNN + TimeDistributed Dense; CNN + RNN + TimeDistributed Dense; Deeper RNN + TimeDistributed Dense; and Bidirectional RNN + TimeDistributed Dense.
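As a taste of what the simpler of these architectures look like, here is a minimal Keras sketch of an RNN + TimeDistributed Dense acoustic model; the input and output dimensions are assumptions (161 spectrogram features in, 29 characters out), and the CTC loss that sits on top of the softmax is omitted:

```python
from keras.models import Model
from keras.layers import (Input, GRU, TimeDistributed, Dense,
                          Activation, BatchNormalization)

def simple_rnn_model(input_dim=161, units=200, output_dim=29):
    """Spectrogram frames in, per-timestep character probabilities out (a sketch)."""
    input_data = Input(name='the_input', shape=(None, input_dim))
    rnn = GRU(units, return_sequences=True, name='rnn')(input_data)
    bn = BatchNormalization(name='bn_rnn')(rnn)
    logits = TimeDistributed(Dense(output_dim), name='logits')(bn)
    predictions = Activation('softmax', name='softmax')(logits)
    return Model(inputs=input_data, outputs=predictions)
```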

One of the major features of the projects was the research component. To pass any project, you had to give detailed scientific reasoning and empirical evidence for your implementations. This helped me develop critical thinking and efficient problem solving. As with any Nanodegree, this course was full of interactions with people from around the world and from all parts of the industry. It was also heavily focused on applications, which kept me excited for the entire six-month duration.

I have continued my learning from this course by following the books “Artificial Intelligence: A Modern Approach” by Stuart Russell and Peter Norvig and “Deep Learning” by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. I still have a long way to go before I master this interesting field of AI, but the Nanodegree has definitely shown me the way forward.

Udacity’s Self Driving Car Engineer Nanodegree

Around September 2016, Udacity announced a one-of-its-kind program. It spanned almost 10 months and promised to teach you the basics of one of the most interesting and exciting technologies in the industry. It was designed by some of the pioneers in the field, like Prof. Sebastian Thrun, and was offered online, from the comfort and convenience of your home. The course also had industry partnerships with Nvidia and Mercedes, among others. The program was the Self Driving Car Engineer Nanodegree, and it required proficiency in the basics of programming and machine learning to be eligible for enrollment.

A snapshot from my final capstone project

Without wasting a minute, I logged into my Udacity account and registered for the course. I had already completed a lot of online courses on various topics of interest, and the Nanodegree seemed like a great place not only to learn about the amazing technologies behind autonomous vehicles, but also to gain experience designing my own self-driving car. The course promised students an opportunity to run their final project on a real vehicle by implementing functionality like drive-by-wire control, traffic light detection and classification, steering, and path planning. I was selected for the November cohort and officially received access on November 29, 2016.

My Advanced Lane Detection Project from Term 1

Today, three months after completing my Nanodegree, I look back at the course as one of the best investments of my time and money. The lectures were very well designed and structured, and the three terms were meticulously planned. The first term introduced the concepts of Computer Vision and Deep Learning. The projects involved a lot of work with Python and TensorFlow to solve problems like lane and curvature detection, vehicle detection, and steering angle prediction. The application-oriented nature of the projects made it even more interesting.

My Vehicle Detection Project from Term 1

Term 2 focused on the control side of things. It covered Sensor Fusion, Localization, and Control, and was heavily dominated by C++ and linear algebra. The projects included implementing Extended and Unscented Kalman Filters for tracking non-linear motion, localization using Markov localization and Particle Filters, and Model Predictive Control to drive the vehicle around the track. I learnt many new things in this term, from C++ programming to the mathematics behind Kalman Filters, Particle Filters, and MPC, and their algorithmic implementations.
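The Term 2 code was all C++, but the heart of every Kalman filter project is the same predict/update cycle. Here is a minimal Python sketch of the linear case, assuming the matrices F, H, Q, and R are given:

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Project the state estimate x and covariance P forward one time step."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kalman_update(x, P, z, H, R):
    """Correct the prediction with a new measurement z."""
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P
```

The Extended and Unscented variants replace the linear prediction and measurement steps with linearized or sigma-point approximations, but follow the same structure.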

My Model Predictive Controller project from Term 2

The final term focused on stitching together the various topics that were taught and applying them to create your own autonomous vehicle. The topics included path planning, semantic segmentation (scene understanding), functional safety, and finally the capstone project.

My Path Planning project from Term 3

What set the entire Nanodegree apart from other courses was its novelty. There is no other course out there that can teach you so much in such a short amount of time and in so much depth. The course also provided me with a collated set of resources for learning. Apart from the well-designed lecture videos, quizzes, and projects, one of the most rewarding experiences was interacting with people from around the world. Everyone taking the course was excited and eager to share their knowledge and help others. The Slack channels and the Udacity discussion forums were full of activity. I interacted with people from around the world, from the USA to Germany to Japan, and discussed the projects and lectures with people from different academic and professional backgrounds, from a freshman to a Vice President of Engineering. These interactions not only helped me build a worldwide network but also opened my eyes to the opportunities around me. I also got an opportunity to explore some open courses like Stanford’s CS231n, the materials for which are freely available online. The amazing support of my peers and mentors played a huge role in helping me master the material.

The Nanodegree took a lot of time and effort to complete. Since I also pursued the optional material, which was mostly research papers, it took me more than the average time to finish. However, the effect of the course was so profound that I still go back to the material for revision, interact with new students on Slack, and discuss the projects over WhatsApp. The course changed the way I approach problems and provided me with a solid base for future research. I hope Udacity launches a more advanced version of the course soon.

My implementation for one of the Term 3 optional projects — Object Detection with R-FCN

 

Self Driving Vehicles: Traffic Light Detection and Classification with TensorFlow Object Detection API

With the recent launch of self-driving cars and trucks, the field of autonomous navigation has never been more exciting. What were once research projects in laboratories are now commercially available products. One of the main tasks that any such vehicle must perform well is following the rules of the road, and identifying traffic lights in the midst of everything else is one of the most important parts of that. Thankfully, recent advancements in Deep Learning, along with easy-to-use frameworks like Caffe and TensorFlow that can harness the immense power of GPUs to speed up computation, have made this task much simpler. In this article I will show how anyone can train their own model for traffic light detection and classification using openly available data-sets and tools. I used Udacity’s openly available data: the Self Driving Car Engineer Nanodegree provides a simulator and some ROS bag files. The model I developed was part of the final capstone project submission, in which we first need to pass the tests on the simulator and then pass a test drive around an actual track on a real vehicle.


Step 1: Gather the data

As with any machine learning exercise, we first need to gather the data on which we will train the model. The simulator images look something like this:

Data the simulator’s camera captures

While the actual images from the track look something like this:

Data the real car’s camera captured from the track

Step 2: Label and annotate the images

The next step is to manually annotate the images for the network. There are many open-source tools available for this, like LabelImg, Sloth, etc. The annotation tools create a YAML file that looks something like this:

Output after manual image annotations
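The exact keys depend on the labelling tool, but to give a concrete (and hypothetical) idea of the structure, here is a small example parsed with PyYAML:

```python
import yaml

# Hypothetical annotation entries; the real keys depend on the labelling tool used.
example = """
- filename: images/sim_0001.jpg
  annotations:
    - {class: Green, xmin: 318, ymin: 172, x_width: 22, y_height: 54}
    - {class: Green, xmin: 520, ymin: 175, x_width: 21, y_height: 52}
"""

records = yaml.safe_load(example)
for record in records:
    print(record['filename'], len(record['annotations']), 'boxes')
```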

This step was the most time-consuming of all. When I started, it took me almost 3 hours to understand how the tools work, install the dependencies, and then annotate the simulator data-set. Luckily, one of the main advantages of the Nanodegree is the immense amount of support you get from discussions with your peers from around the world. One of my peers, Anthony Sarkis, has graciously made his annotated data-set openly available for all to use. Thank you, Anthony Sarkis 🙂


Step 3: Train the model

To train the model with the TensorFlow Object Detection API, we first need to convert our data into the TFRecord format. This format combines your images and the YAML annotations into a single file that can be given as input for training. Starter code is provided on TensorFlow’s GitHub page.
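As a rough sketch of what that conversion produces, here is how a single tf.train.Example can be assembled, loosely following the API’s create_*_tf_record starter scripts (the `boxes` structure passed in is illustrative):

```python
import tensorflow as tf
from object_detection.utils import dataset_util

def create_tf_example(filename, encoded_jpeg, width, height, boxes):
    """Build one tf.train.Example; `boxes` is a list of dicts with pixel
    coordinates and class info (a sketch, not the project's exact code)."""
    xmins = [b['xmin'] / width for b in boxes]      # normalise to [0, 1]
    xmaxs = [b['xmax'] / width for b in boxes]
    ymins = [b['ymin'] / height for b in boxes]
    ymaxs = [b['ymax'] / height for b in boxes]
    classes_text = [b['class_text'].encode('utf8') for b in boxes]
    classes = [b['class_id'] for b in boxes]

    return tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename.encode('utf8')),
        'image/source_id': dataset_util.bytes_feature(filename.encode('utf8')),
        'image/encoded': dataset_util.bytes_feature(encoded_jpeg),
        'image/format': dataset_util.bytes_feature(b'jpeg'),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
```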

Next, we need to set up an object detection pipeline config. The TensorFlow team also provides sample config files in their repo. For my training, I used two models, ssd_inception_v2_coco and faster_rcnn_resnet101_coco. These models can be downloaded from here.

I needed to set num_classes to 4 and also set the paths (PATH_TO_BE_CONFIGURED) for the model checkpoint, the train and test data files, and the label map. I also reduced the number of region proposals from the authors’ original suggestion of 300 down to 10 for faster_rcnn, and from 100 to 50 for ssd_inception. For other configuration options like the learning rate and batch size, I used the default settings. (Note: second_stage_batch_size must be less than or equal to max_total_detections, so I reduced that to 10 as well for faster_rcnn; otherwise it throws an error.)
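To make those edits concrete, the touched fragments of a faster_rcnn pipeline config look roughly like this (only the modified fields are shown; paths are placeholders and everything else keeps its defaults):

```
model {
  faster_rcnn {
    num_classes: 4
    first_stage_max_proposals: 10        # reduced from the default of 300
    second_stage_post_processing {
      batch_non_max_suppression {
        max_total_detections: 10
      }
    }
    second_stage_batch_size: 10          # must be <= max_total_detections
  }
}
train_config {
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  data_augmentation_options {            # optional, see the note below
    random_horizontal_flip {
    }
  }
}
train_input_reader {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/train.record"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/label_map.pbtxt"
}
```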

Note: the data_augmentation_options setting is very useful if your dataset doesn’t have much variability, such as different scales, poses, etc. A full list of options can be found here (see PREPROCESSING_FUNCTION_MAP).

We also need to create a label map entry for each class. Examples of how to create label maps can be found here. In my case it looked something like this:

label_map
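In pbtxt form, a label map for four traffic-light classes follows the API’s usual format and might look like this (the class names here are illustrative):

```
item {
  id: 1
  name: 'Green'
}
item {
  id: 2
  name: 'Red'
}
item {
  id: 3
  name: 'Yellow'
}
item {
  id: 4
  name: 'off'
}
```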

So in the end, we need the following things before going to the next step:

  1. COCO pre-trained network models
  2. The TFRecord files we created earlier
  3. The label_map file with our classes
  4. The image data-set
  5. The TensorFlow Object Detection API

The next steps are pretty straightforward. You need access to a GPU for training; I used an AWS p2.xlarge instance with the udacity-carnd-advanced-deep-learning AMI, which has all the dependencies like TensorFlow and Anaconda installed. I trained a total of 4 different models: two with faster_rcnn (one each for the simulator images and the real images) and two with ssd_inception.
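For reference, the API’s training and export scripts at the time were invoked roughly like this (the paths, checkpoint step, and directory names below are placeholders, and the flags may differ in newer releases):

```bash
# Train a model from a pipeline config (legacy TF Object Detection API entry point)
python object_detection/train.py \
    --logtostderr \
    --pipeline_config_path=config/faster_rcnn_traffic_lights.config \
    --train_dir=checkpoints/faster_rcnn_sim

# Export a frozen inference graph from a trained checkpoint
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path=config/faster_rcnn_traffic_lights.config \
    --trained_checkpoint_prefix=checkpoints/faster_rcnn_sim/model.ckpt-10000 \
    --output_directory=frozen/faster_rcnn_sim
```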

The output of the model inference looks something like this:

Detection on the simulator images
Detection on the real images
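For completeness, here is a hedged sketch of how the exported frozen graph can be run on a single image (TensorFlow 1.x API; the tensor names follow the graphs exported by the Object Detection API, and the file paths are placeholders):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the frozen detection graph exported in the previous step.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen/faster_rcnn_sim/frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=detection_graph) as sess:
    # Batch of one uint8 image, shape (1, height, width, 3).
    image = np.expand_dims(np.asarray(Image.open('test_image.jpg')), axis=0)
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image})
    # Keep only confident detections; class ids map back to the label map.
    keep = scores[0] > 0.5
    print(classes[0][keep], scores[0][keep])
```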

The detections and classifications were really good with both models, though the ssd_inception model made a few minor errors, like the one in the image below, which the faster_rcnn model classified correctly.

Wrong Classification by SSD Inception model

However, the plus point of the ssd_inception model was that it ran almost 3 times faster than the faster_rcnn model on the simulator images and almost 5–6 times faster on the real images.

You can find my code and results in my GitHub repository, which also contains links to the data-sets and the annotations.

Good luck with your own models 🙂