Creating and Reading XML files with Python

A few days ago my mentor gave me the task of reading an XML file and then creating a new XML file in a different format from the data of the first one. Since I was given full freedom to use any programming language, I decided to go with C, as taking CS50x has given me enough confidence to approach any programming dragon with my shining C sword and armor. However, after a few hours I realized that I was getting nowhere, so I decided to look into Python. It turns out that working with XML files is an extremely simple task in Python.

For my task I made use of the xml.etree.ElementTree and xml.dom.minidom modules.

I will now give you a step-by-step guide on how to create a beautiful (pretty-printed) XML file like the one shown below using Python, along with some tips on how to read an XML file.
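
Here is the kind of file we will end up with (reconstructed from the steps below; the exact indentation depends on toprettyxml):

<?xml version="1.0" ?>
<membership>
  <users>
    <user name="john"/>
    <user name="charles"/>
    <user name="peter"/>
  </users>
  <groups>
    <group name="users">
      <user name="john"/>
      <user name="charles"/>
    </group>
    <group name="administrators">
      <user name="peter"/>
    </group>
  </groups>
</membership>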


Step 1: Importing the libraries.

import xml.dom.minidom
from xml.etree import ElementTree
from xml.etree.ElementTree import Element
from xml.etree.ElementTree import SubElement

Step 2: Creating the root element.

# <membership/>
membership = Element( 'membership' )

Step 3: Creating children for the root. I will create two children for the root in this example.

# <membership><users/>
users = SubElement( membership, 'users' )
# <membership><groups/>
groups = SubElement( membership, 'groups' )

Step 4: Creating nodes inside the children.

# <membership><users><user/>
SubElement( users, 'user', name='john' )
SubElement( users, 'user', name='charles' )
SubElement( users, 'user', name='peter' )
# <membership><groups><group/>
group = SubElement( groups, 'group', name='users' )
# <membership><groups><group><user/>
SubElement( group, 'user', name='john' )
SubElement( group, 'user', name='charles' )
# <membership><groups><group/>
group = SubElement( groups, 'group', name='administrators' )
# <membership><groups><group><user/>
SubElement( group, 'user', name='peter' )

Step 5: Converting to a string and then pretty-printing. Note that we first have to serialize our tree with ElementTree.tostring so that minidom can re-parse and prettify it.

string = ElementTree.tostring( membership )
xmls = xml.dom.minidom.parseString( string )
pretty_xml_as_string = xmls.toprettyxml()

Step 6: Writing to a file. The open(…) call alone only creates the file; we also have to write the string out and close the file.

output_file = open( 'membership.xml', 'w' )
output_file.write( pretty_xml_as_string )
output_file.close()

This will create a nice XML file for us. If you also want to add a text node, just use a command like:
group.text = "This is John"
after any of the elements or sub-elements created above, and it will add a text node to the file, like below.
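
For instance, setting it on the administrators group from Step 4 would give you something like this in the output file:

<group name="administrators">This is John<user name="peter"/></group>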


While reading an XML file we must proceed in a hierarchical manner.

from xml.etree import ElementTree
document = ElementTree.parse( 'membership.xml' )

document is not exactly a node in the XML structure, but it provides a handful of functions to consume the element hierarchy parsed from the file. There is often more than one way to reach a given node; which one you choose is largely a matter of taste and probably influenced by the task at hand. For example:

users = document.find( 'users' )

is equivalent to:

membership = document.getroot()
users = membership.find( 'users' )

Finding specific elements

XML is a hierarchical structure. Depending on what you are doing, you may want to enforce a certain hierarchy of elements when consuming the contents of the file. For example, we know that the membership.xml file defines users like membership -> users -> user. You can quickly get all the user nodes by doing this:

for user in document.findall( 'users/user' ):
    print user.attrib[ 'name' ]

Likewise, you can quickly get all the groups by doing this:

for group in document.findall( 'groups/group' ):
    print group.attrib[ 'name' ]

Iterating elements

Even after finding specific elements or entry points in the hierarchy, you will normally need to iterate the children of a given node. This can be done like this:

for group in document.findall( 'groups/group' ):
    print 'Group:', group.attrib[ 'name' ]
    print 'Users:'
    for node in group.getchildren():
        if node.tag == 'user':
            print '-', node.attrib[ 'name' ]

Other times, you may need to visit every single element in the hierarchy from any given starting point. There are two ways of doing it: one includes the starting element in the iteration, the other only its children. A subtle but important difference:

Iterate nodes including starting point:

users = document.find( 'users' )
for node in users.getiterator():
    print node.tag, node.attrib, node.text, node.tail

Produces this output:

users {} None None
user {'name': 'john'} None None
user {'name': 'charles'} None None
user {'name': 'peter'} None None

Iterate only the children:

users = document.find( 'users' )
for node in users.getchildren():
    print node.tag, node.attrib, node.text, node.tail

Produces this output:

user {'name': 'john'} None None
user {'name': 'charles'} None None
user {'name': 'peter'} None None


I am now a forensic expert!

Okay, so the title is a bit of an exaggeration, but still, the task I managed to do today was something I had never believed, even in my wildest dreams, that I would be able to accomplish. I just learnt how to recover deleted files using about a hundred lines of C code. Yes! You heard (or rather read) it right. All this was made possible by the amazing CS50x MOOC that I am taking on edX, an online learning platform.

CS50x is the edX version of the popular CS50 course of Harvard University, taught by Prof. David J. Malan, who is an exceptionally great teacher and one of the best instructors I have ever had the pleasure of learning from. So as part of the 5th Problem Set (more on the first five later), we (who, we? the people taking the course, duh!) were required to retrieve about 50 JPEG images, which had supposedly been deleted “by mistake”. To solve this problem, some basic knowledge of the JPEG file format, the FAT file system and how images are stored on a CompactFlash (CF) card is required.

So we all know (well, at least most of us) that an image is nothing but a large number of pixel values stored in some specific order with a specific format. It turns out that JPEGs, too, have patterns of bytes that distinguish them from other file formats. In fact, most JPEGs begin with one of two sequences of bytes. Specifically, the first four bytes of most JPEGs are either

0xff 0xd8 0xff 0xe0

or

0xff 0xd8 0xff 0xe1

from first byte to fourth byte, left to right. Odds are, if you find one of these patterns of bytes on a disk known to store photos, they mark the start of a JPEG.

Also, it turns out that when you delete a file on a FAT file system, it is not actually erased from the disk. What actually happens is that the system first modifies the filename's first character in the file's directory entry, to signal that the file has been deleted and that the directory entry can be recycled. Second, the system moves all of the file's FAT clusters to the drive's list of free clusters. Thus, as you might guess, we can recover deleted files by scavenging through the file system for specific byte patterns.

You can read more about it here:

Furthermore, digital cameras tend to store photographs contiguously on CF cards, whereby each photo is stored immediately after the previously taken photo. Accordingly, the start of a JPEG usually demarks the end of another. However, digital cameras generally initialize CF cards with a FAT file system whose “block size” is 512 bytes (B). The implication is that these cameras only write to those cards in units of 512 B. A photo that’s 1 MB (i.e., 1,048,576 B) thus takes up 1048576 ÷ 512 = 2048 “blocks” on a CF card. But so does a photo that’s, say, one byte smaller (i.e., 1,048,575 B)! The wasted space on disk is called “slack space.” Forensic investigators often look at slack space for remnants of suspicious data.

So now let's come to our problem. According to the problem statement, we must read a *.raw file, which is nothing but a “forensic image” of the card, and go through its contents byte after byte. We open the file using the fopen(…) function and then read through it using the fread(…) function, pulling 512 bytes into a buffer at a time (because a block in the FAT file system is 512 bytes) and checking whether the first 4 bytes of the block mark the start of a JPEG. If they do, we have found an image: we write the block's data into a file using fwrite(…), and continue writing the following blocks into the same file until we encounter another block that starts with the JPEG-specific 4-byte pattern. Since our CF card stores images contiguously, as mentioned earlier, we need not worry about random files and garbage values coming in between two images. When we find another pattern that signifies a new JPEG image, we close the present file, create a new file with a new name, and start writing the data into it until we encounter the next JPEG signature.

Continuing this way, we will reach the end of the file, at which point we must close our input *.raw file and our newly written *.jpeg file.
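
The pset itself has to be solved in C, but the block-scanning logic is compact enough to sketch in Python. Here is a minimal sketch of the idea described above (the output file naming and helper names are my own, not from CS50):

import sys

BLOCK_SIZE = 512  # a FAT block on the CF card

def is_jpeg_start(block):
    # most JPEGs begin with 0xff 0xd8 0xff followed by 0xe0 or 0xe1
    return block[0:3] == b'\xff\xd8\xff' and block[3:4] in (b'\xe0', b'\xe1')

def recover(path):
    count = 0    # how many JPEGs we have found so far
    out = None   # the JPEG file currently being written, if any
    with open(path, 'rb') as card:
        while True:
            block = card.read(BLOCK_SIZE)
            if not block:
                break  # reached the end of the forensic image
            if is_jpeg_start(block):
                if out is not None:
                    out.close()  # a new signature ends the previous photo
                out = open('%03d.jpg' % count, 'wb')
                count += 1
            if out is not None:
                out.write(block)  # keep appending blocks to the current photo
    if out is not None:
        out.close()

recover(sys.argv[1])  # e.g. python recover.py card.raw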

If you do everything right (as I did :P), you will hopefully end up recovering your deleted images. Doesn't it sound too easy? Well, it actually is. It took me less than two hours to go through the entire problem statement for the Forensics assignment and complete it. It was also the most interesting piece of code I have ever written (I am not a Computer Science student, and no, I don't code all day). And having done that, I am now going to insert my camera's memory card into my computer, delete all the images and then hopefully recover them successfully. So have fun trying this yourself, and as they say, “This is CS50”.



#3 Linear Regression with Multiple Variables

In the previous post I talked about the simple case in which we were able to fit our data-set with the help of only one variable. But life is not so simple. In most real-life scenarios the data-set will have more than one feature, i.e. the output may depend on more than one variable. Taking the previous example of housing prices, the price of a house may depend not only on its size but also on, say, the age of the house, the number of rooms, etc. In such cases the simple linear regression (h = t0 + t1*x) may not work.

Rather, we must formulate a new equation which takes care of all the features. It turns out this is also quite simple. Suppose you have a data-set with, say, 4 features. We can then model it using the equation:

h = t0 + t1*x1 + t2*x2 + t3*x3 + t4*x4                                          equation 1

where x1 is the value of feature 1 (say, the age of the house), x2 is the value of the second feature (say, the number of rooms in the house) and so on, and t1, t2, … are the corresponding parameters.

So now our parameter vector (t) is (n+1)-dimensional if there are n features in the training set. The cost function remains the same (J = 1/(2m) * sum( (h(i) – y(i))^2 ) over all points), with h calculated using equation 1. To optimize it we may use our loyal gradient descent algorithm with only a slight modification. The update rule from the lecture slides (by Prof. Andrew Ng of Stanford University) is:

tj := tj – alpha * (1/m) * sum( (h(i) – y(i)) * xj(i) ) over all points,   simultaneously for every j = 0, 1, …, n

What we are simply doing for this case is updating the values for all the parameters simultaneously.
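
In code, this simultaneous update is easy to express with numpy. Below is a minimal sketch (the function and variable names are my own, not from the course): X is the m-by-(n+1) matrix of training examples with a leading column of ones, y is the vector of observed values, and t is the parameter vector.

import numpy as np

def gradient_descent(X, y, alpha=0.01, iterations=1000):
    m = X.shape[0]
    t = np.zeros(X.shape[1])       # start with all parameters at zero
    for _ in range(iterations):
        h = X.dot(t)               # hypothesis for every example at once
        grad = X.T.dot(h - y) / m  # partial derivative of J w.r.t. each tj
        t = t - alpha * grad       # update all the parameters simultaneously
    return t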

Now, as you might guess, gradient descent can become slow if we have a large data-set with a lot of features. In such cases there are usually two ways to speed up the process: 1) feature scaling using mean normalization; 2) varying the learning rate.

1) Feature Scaling using Mean Normalization simply means that we replace each feature (x1, x2, …) with (x(i) – mean(i))/range, where x(i) is the ith feature, mean(i) is the mean of the ith feature column, and range is either the max – min value of that feature column or its standard deviation (a quick code sketch follows below the next point).

2) Varying the learning rate means tuning the value of alpha in the above equation. As stated in my last post, if alpha is too small the algorithm may take a lot of time to converge, while if alpha is too high the cost function may begin to diverge. So an optimum value of alpha must be chosen by trial and error.
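
Coming back to point 1, mean normalization takes only a couple of lines in numpy. A minimal sketch, again with made-up names (apply it before adding the column of ones, since a constant column has zero range):

import numpy as np

def mean_normalize(X):
    mean = X.mean(axis=0)                # mean of each feature column
    rng = X.max(axis=0) - X.min(axis=0)  # max - min; X.std(axis=0) works too
    return (X - mean) / rng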


Now gradient descent is not the only algorithm that can help us optimize our cost function (J). As Prof. Andrew Ng mentioned, the Normal Equation is another great way to do it. With this method we neither have to choose alpha nor iterate over multiple steps: it gives the optimum point in one go. However, it is not suitable if the number of features is very large, because it becomes very slow; in such cases gradient descent works well even with a large number of features. Since the derivation involves a lot of matrix notation, I will not discuss it here for fear of my post becoming too large and complex. If it really interests you, please go through the Week 2 lecture videos 6 and 7. For most of our cases, gradient descent works just fine.
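
That said, the final formula itself is tiny, so here is a one-line numpy sketch for the curious (same X and y as in the gradient descent sketch above; the derivation is in the lectures):

import numpy as np

def normal_equation(X, y):
    # t = (X^T X)^-1 X^T y; pinv also handles the non-invertible case
    return np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(y)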

In the next post I will talk about Logistic Regression and Classification.

Stay tuned….


#2 – Linear Regression with One Variable

In my last post I talked about two types of machine learning algorithms – supervised and unsupervised. The linear regression model comes under the first category. A regression model is basically used to predict a real-valued output. So suppose you have a data-set with the prices of houses of different sizes, and you want to predict the price of a house of some particular size not present in the data-set.

Let's assume that the data-set looks like the one below, and you wish to predict the price for a house of size 1250 sq. ft (I am using the lecture example here).


The most logical and simple way to do so would be to draw a straight line, preferably through the middle of all the points on the graph. We could certainly do this easily if there were only a few points on the graph, perhaps four or maybe five. But for a larger number of data points, this task becomes too difficult to solve by trial and error.

But thankfully it turns out that there is an algorithm that can do this efficiently for us: the “Gradient Descent” algorithm. Before I go into its details, let me talk about another concept, the “Cost Function”, which will be used in the gradient descent discussion.

The cost function lets us figure out how to fit the best possible straight line to our data. To draw a straight line you basically need two parameters – the intercept on the y-axis and the slope of the line. So let's denote our straight line by

h = t0 + t1*x                                                         equation 1

where t0 is the y-intercept and t1 is the slope of the line (t stands for theta and h stands for hypothesis). Now we take each point on the x-axis and calculate the corresponding value of the hypothesis ‘h’ using the above equation. These are the values we predict for some particular choice of t0 and t1. But they may not give us a line that correctly represents our data.

We solve this problem by using the concept of “Cost Function”. We denote our cost function by

J = 1/(2m) * sum( (h(i) – y(i))^2 ) over all points            equation 2

where h(i) is the value of h for the ith x point and y(i) is the original value on the y-axis corresponding to the ith x value. m is the total number of points on the graph, i.e. the number of training examples.

This cost function basically tells us how good our line is at representing the data-set. If the points predicted by our line are very far away from the actual data-set values, then the cost function will be very high and we will have to vary the values of t0 and t1 to get a new line. In the end we basically want to find those values of t0 and t1 that minimize our cost function. This is where gradient descent comes into the picture.
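
Since equation 2 is just a sum, it takes only a couple of lines to compute. A quick sketch in plain Python (the names are my own):

def cost(xs, ys, t0, t1):
    m = len(xs)
    return sum((t0 + t1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2.0 * m)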

Gradient Descent is our algorithm for minimizing the cost function J.

The way it does this is that it assumes initial values of t0 and t1 and then keeps changing them until we hopefully reach a minimum. The equation to calculate the subsequent values of t0 and t1 (using the derivative of the cost function) is:

tj := tj – alpha * (d/dtj) J(t0, t1),   for j = 0 and j = 1, updated simultaneously

The alpha in the above equation basically controls how big a step you take downhill. If alpha is too small, the algorithm will be very slow and take a long time to converge; if alpha is too large, the algorithm may overshoot the minimum and fail to converge, or may even diverge. Thus an optimal value of alpha must be selected by trial. Usually it becomes clear what the optimal value is after running the algorithm once.
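
To make this concrete, here is a minimal sketch of the update loop for the one-variable case in plain Python (again, the names are my own):

def gradient_descent(xs, ys, alpha=0.01, iterations=1000):
    m = len(xs)
    t0, t1 = 0.0, 0.0  # initial guesses for the parameters
    for _ in range(iterations):
        errors = [t0 + t1 * x - y for x, y in zip(xs, ys)]  # h(i) - y(i)
        grad0 = sum(errors) / m
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
        t0, t1 = t0 - alpha * grad0, t1 - alpha * grad1  # simultaneous update
    return t0, t1

Run on the housing data-set, this returns the intercept and slope of the fitted line.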

As an example (the one used in lecture), this is the model when random values of t0 and t1 are chosen,


And this is the state after running through four iterations of varying t0 and t1. The various circular lines denote points which have the same cost.


And this is the final result. The red crosses in the figure on the right show the path followed by gradient descent to reach the minimum.


In this way we have a final prediction model for our data-set, and we can now hopefully predict the correct price for any given house size.

This was Linear Regression with one variable. In my next post I will talk about Linear Regression with Multiple Variables.

Stay tuned….



Create a simple program using NI LabVIEW

The first program my mentor told me to develop here at NI was a simple calculator that could perform basic mathematical operations. This was basically done to help the new interns get an idea of the LabVIEW environment and its various features. Developing programs in LV is quite different from the traditional method of writing lines of code and compiling them. LV makes use of an interactive graphical method to develop complex software: you basically just drag and drop blocks that perform specific functions and connect them together with wires to get the desired output.

Now there may be a lot of tutorials and VIs (programs in LV are saved with a *.vi extension and are known as Virtual Instruments) on the internet that will let you do this. But I am sure the calculator I am going to show you is different from them; at least, I was not able to find any similar program on the internet.

So let's begin!

The first thing I did was create a while loop with a nested event structure. All the button clicks and the inputs that the user enters are handled by this event structure. I went for a simple, minimalist interface with the following components on the screen:

  1. A numeric control that is used for taking the input from the user.
  2. A numeric indicator that displays the result.
  3. A string indicator that displays the sequence of operations.
  4. Seven buttons for performing basic mathematical operations.
  5. An Enter button to perform calculations.
  6. A Refresh button to clear the previous operations.
  7. An Exit button that will end the program.

To perform the operations I used a case structure. All the operation buttons were connected to a ‘build array’ function, which was further connected to a ‘Boolean array to number’ function, which finally connected to the case selector. In this way I could control the case structure with the Boolean buttons. Below is an image which shows what I mean.


To implement the Refresh button I simply used a select operation which took the output of the Refresh button as the selection criterion (s). Its implementation is shown below.


This means that whenever I pressed the Refresh button, zeros were taken as input and the output window would show 0. I used a similar approach for displaying the sequence of operations. The only difference was that, since it was a string indicator, it had to select between the space character and the output of a ‘concatenate strings’ function, which basically kept appending each successive operation to the present string. I used shift registers to keep track of the previous values.

I also used a shift register to feed my result back into the case structure as the second operand. The bottom-most yellow wire in the last image actually comes from a shift register on the left. Here is an image to bring some clarity.


This is pretty much all there is to the operations part.

Next I decided to add a nice touch to my calculator with a custom wallpaper. To do this, right-click on the front panel scroll bar and select the Properties option. Then, in the Background tab, you can select a custom image as your wallpaper.


This got done quite fast, so to explore even more features I decided to create a standalone application that could run without first starting LV on your system. This part requires that you have the LV Run-Time Engine on your system.

To do this, select the ‘Build Application (EXE) from VI’ option under the Tools menu. Now, I could go on telling you step by step how to do this, but luckily I found a great video on YouTube that covers it. Here is the link to it:

By the end of this video you can easily create a working standalone application.

Here is the final image of how my calculator turned out to be,



And here is the block Diagram image,


I also made use of some custom buttons for my Exit button, which I downloaded from the internet. You can look around for some more options and features.

You can access and download my program from

Stay tuned for more…


How to teach a metal box?

Machine learning is one of the most exciting recent technologies. Every time you Google something or Facebook recommends a photo tag, machine learning comes into the picture. And when there is an application that you just cannot program by hand, say writing a computer program to make a helicopter fly, we make use of machine learning by letting the computer learn how to fly the helicopter by itself. Most of natural language processing and most of computer vision today is applied machine learning.

So how do we define Machine Learning?

Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, defines it as:

“the field of study that gives computers the ability to learn without being explicitly programmed”

Here is a slightly more recent definition of an ML algorithm by Tom Mitchell, who is currently the Chair of the Machine Learning Department at Carnegie Mellon University:

“a computer program is said to learn from experience E, with respect to some task T, and some performance measure P, if its performance on T as measured by P improves with experience E”

So suppose, for example, you have a spam filter. Its task T would be to categorize mail as spam or not spam; the experience E would be watching which mails you classify as spam or not spam; and its performance P would be the number of mails it is able to correctly classify. If you are using an efficient algorithm, your filter should make more and more correct choices over time.

There are several different types of learning algorithms. The two main types are supervised learning and unsupervised learning.

In supervised learning we give the algorithm a data set and also tell it the correct answers. All the algorithm has to do is try to find a pattern in the given values so that it can predict the outcome for the next set of inputs. When the output is a real value, as below, this is known as a regression problem. To explain this I will use the example given by Prof. Andrew Ng in class.

So suppose you have a data set with housing prices for houses of different sizes. You want your program to predict the price of a house given its size. To do this you first tell the algorithm the prices for the different sizes and then let it predict the price for any other input size. You then tell the algorithm whether its prediction was correct, or the difference between the correct price and the predicted value. This way the algorithm will try to correct itself and eventually learn to draw a line through the input values and give a correct prediction.

So for each example in supervised learning, we are told explicitly what the so-called right answer is.

In unsupervised learning, we’re given data that doesn’t have any labels. So we’re given a data set and we’re not told what to do with it or what each data point is. Instead we’re just told: here’s a bunch of data, I don’t know what’s in it, can you automatically find some structure in it? An example of this is Google News: the algorithm automatically decides and creates clusters of similar news headlines on various topics by itself.

In my next post I will start to talk about more specific learning algorithms, how these algorithms work and how we can go about implementing them.

Hello World – customary first post

So I recently started to delve into the interesting topic of Machine Learning. I have started the Machine Learning course on Coursera, which is taught by Prof. Andrew Ng from Stanford University. Pretty soon I realized that the topic is vast and, due to the weekly nature of the updates, recalling old lectures and concepts was becoming more and more difficult.

So here it is! My first blog ever.

In this blog I will try to keep a record not only of what is being taught in the Machine Learning class but also of all the other courses that I am taking through various MOOC platforms like edX, Udacity and, of course, Coursera. I will also try to put up some guides on certain other topics like LabVIEW, since I am also an intern at National Instruments – R&D, Bangalore. There might even be some posts and updates on topics like Statistics and R programming, and maybe some posts on my life (well, it’s my blog after all :P).

Stay tuned for more …