Heaton Research

Data Science Rosetta Stone: Classification in Python, R, MATLAB, SAS, & Julia

The discovery of the Rosetta Stone in 1799 allowed scholars to finally decipher ancient Egyptian hieroglyphics. The actual content of the Rosetta Stone is a very mundane government decree issued in Egypt around 196 BC. The importance of the Rosetta Stone is that this mundane decree was written in Ancient Egyptian using both hieroglyphic script and Demotic script, as well as in Ancient Greek. The Ancient Greek allowed the translation of the Egyptian hieroglyphics. Sometimes it helps to see what you do not understand represented as what you do understand. The Rosetta Stone has become a metaphor for that which allows the decoding of something else.


This article attempts a “Rosetta stone” of data science by showing a simple classification in the following languages:

  • Python
  • R
  • MATLAB
  • SAS
  • Julia

Titanic Classification Problem

We will begin with a simple data science classification problem. Classification is where you would like the model to learn to identify a non-numeric outcome for input data. In this case, we would like to input information about passengers on the RMS Titanic and get a prediction of whether each passenger would survive. This is a binomial/binary prediction because there are two outcomes: survive (1) or perish (0). To perform this classification, a GLM logistic regression is used. This technique achieves approximately an 80% accuracy rate.
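
To make the mechanics concrete, the sketch below (not part of the article's examples) shows how a logistic regression maps a weighted sum of passenger features to a survival probability. The coefficients and the passenger are made up purely for illustration; a real model learns these weights from the training data.

import numpy as np

def sigmoid(z):
    # The logistic function squashes any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical, hand-picked coefficients for illustration only (not fitted values).
intercept = 1.2
w_sex, w_age, w_pclass = -2.5, -0.02, -1.0

# A hypothetical passenger: male (Sex=1), age 30, 3rd class.
z = intercept + w_sex*1 + w_age*30 + w_pclass*3
p_survive = sigmoid(z)
print("Survival probability: {:.3f}".format(p_survive))  # well below 0.5, so class 0 (perished)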

Considering the pandemonium of the final hours of the RMS Titanic, an 80% accuracy rate is decent. Given the limited information on the passengers, you simply cannot predict everything; there will always be noise. Generally, females had a much higher probability of survival than males. For females, the class of their ticket was the second most important factor in survival; for males, age was the second most important factor. However, there are always outliers. Consider Miss Helen Loraine Allison, a female 1st class passenger, age 2. Simply by the numbers, she should have survived, but she did not: somehow she was separated from her parents in the pandemonium of the sinking. Trying to predict Helen’s death is not constructive. Similarly, consider Mr. Karl Edwart Dahl, a male 3rd class passenger, age 45. Simply by the numbers, he should have died, but he did not. Likewise, trying to predict cases like Mr. Dahl’s is simply not productive. An 80% accuracy rate for Titanic is not bad.

The model used in these examples is a logistic regression. There are more complex models, such as random forests, gradient boosted machines, or deep neural networks. However, because the purpose of this article is to highlight the languages, a relatively simple model was chosen.

Some common features to all languages will be mentioned here:

  • Age - There are a number of missing ages. The Kaggle Titanic tutorial participants have found some very clever means of estimating missing ages. However, for this simple example we will simply fill in all missing ages with the median of the age.

  • Embarked - There are two records where the port that the passenger embarked from is missing. These are simply filled in with the most common port (Southampton). Also, the three European departure ports are encoded into dummy variables (see the sketch after this list).

  • Dropped Fields - The fields ‘Name’, ‘PassengerId’, ‘Ticket’, ‘Cabin’ are all dropped because they are not particularly predictive. Several published solutions do increase accuracy by using these fields. However, to keep this example relatively simple we will not use them.

  • Crossvalidation - The data are split into a training set and a validation set (strictly speaking, a simple hold-out split rather than k-fold cross-validation). The model is trained using the training set and evaluated using the validation set. This gives a better indication of how accurate the model is, because it is evaluated on data that it did not see during training.
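
To illustrate the dummy-variable encoding mentioned above, here is a minimal pandas sketch using a made-up sample of the Embarked column rather than the actual Titanic file. Each port becomes its own 0/1 column:

import pandas as pd

# Hypothetical sample of the Embarked column, for illustration only.
df = pd.DataFrame({'Embarked': ['S', 'C', 'Q', 'S']})

# One 0/1 column per port: Embarked-C, Embarked-Q, Embarked-S.
dummies = pd.get_dummies(df['Embarked'], prefix='Embarked', prefix_sep='-').astype(int)
print(dummies)
#    Embarked-C  Embarked-Q  Embarked-S
# 0           0           0           1
# 1           1           0           0
# 2           0           1           0
# 3           0           0           1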

Titanic in Python

The first language provided is Python 3.x. Python is a general purpose programming language that is widely used by data scientists and software developers alike.

The first section simply imports the needed libraries and defines a convenience function that will encode dummy variables. Pandas is used for all data preprocessing.

import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore

path = "./data/"

def encode_text_dummy(df, name):
    # Replace a text column with one dummy (0/1) column per category value.
    dummies = pd.get_dummies(df[name])
    for x in dummies.columns:
        dummy_name = "{}-{}".format(name, x)
        df[dummy_name] = dummies[x]
    df.drop(name, axis=1, inplace=True)

Next, we load the CSV data set and perform the preprocessing. Male and female are converted to 1 and 0. Some of the languages have a categorical type and do not require this conversion. Embarked is encoded to dummies, and missing values are filled in.

filename_read = os.path.join(path,"titanic-dataset.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])

df.drop('Name', axis=1, inplace=True)
df.drop('PassengerId', axis=1, inplace=True)
df.drop('Ticket', axis=1, inplace=True)
df.drop('Cabin', axis=1, inplace=True)
df['Sex'] = df['Sex'].replace({'female': 0, 'male': 1})
med = df['Age'].median()
df['Age'] = df['Age'].fillna(med)
df['Embarked'] = df['Embarked'].fillna('S')
encode_text_dummy(df, 'Embarked')

Next, the data are set up for the logistic regression. Scikit-learn requires that the predictors (x) be separated from the target (y). The target is whether the person survived or not. The predictors are all of the other values used to determine survival. Some languages will allow us to keep the predictors and target together. We also split the training and validation sets.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

x = df[['Pclass','Sex','Age','SibSp','Parch','Fare',
        'Embarked-C','Embarked-Q','Embarked-S']].values
y = df['Survived'].values

x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)

Next, we set up the logistic regression classifier and fit the model to the training set.

classifier = LogisticRegression()

classifier.fit(x_train,y_train)

Finally, we evaluate the accuracy of the model by having it predict the validation set.

pred = classifier.predict(x_test)
score = metrics.accuracy_score(y_test, pred)
print("Accuracy score: {}".format(score))

Titanic in R

R and Python are the two heavyweights of open source data science. According to KDnuggets, R is the most popular programming language for data science, though Python is a close second. R does win the award for the shortest program in this article. The small size of the R source code is a result of several steps that R handles for you.

The first section loads the data file and preprocesses the data.

df = read.csv("./data/titanic-dataset.csv", header = TRUE)

drops <- c("PassengerId","Name", "Ticket", "Cabin")
df <- df[ , !(names(df) %in% drops)]
df$Age[is.na(df$Age)] <- median(df$Age, na.rm=TRUE)
df$Embarked[is.na(df$Embarked)] <- 'S'

You will notice that unlike Python, R allows the x and y values to remain in the same dataframe. Additionally, there is no need to encode sex or embarked, because R loaded these values as factors (categoricals) and they are automatically encoded.

Next the training and validation sets are split.

smp_size <- floor(0.75 * nrow(df))

set.seed(42)
train_ind <- sample(seq_len(nrow(df)), size = smp_size)

train <- df[train_ind, ]
test <- df[-train_ind, ]

The logistic regression model is created. R allows more of the GLM inner workings to be specified than Python. Below you can see that we specify both the family (binomial, which means there are two outcomes) and the link function (the logit, or logistic, function).

model <- glm(Survived ~.,family=binomial(link='logit'),data=train)

The predictions are made. These predictions are all probabilities of survival. The round function ensures that any prediction >0.5 is treated as survival. Finally, the accuracy is calculated and displayed.

pred <- predict(model,newdata=test,type='response')
pred_survived <- round(pred)
sprintf( "Accuracy: %f", sum(pred_survived == test$Survived) / nrow(test) )

Titanic in MATLAB

MATLAB is a commercial programming language that is favored by many areas of academia and industry. My primary exposure to MATLAB was in academia: I used MATLAB for several classes as an undergraduate, and my Ph.D. advisor used MATLAB for many of his projects. The free programming language Octave is somewhat compatible with MATLAB.

In MATLAB, everything is a matrix; a single integer is just a matrix with a height and width of 1. Semicolons tell MATLAB whether you want the return value printed: a semicolon suppresses console output of a statement's return value. Julia makes similar use of the semicolon.

Like the previous examples, the first step is to load and preprocess the data. MATLAB does have a categorical type, and we use it for both Embarked and Sex. However, we will still need to encode these values. Unlike R, dummy variables are not automatically created for us.

% Load the data
ds = readtable('titanic-dataset.csv');

% Fill in missing ages with the median
ds.Age(isnan(ds.Age)) = median(ds.Age,'omitnan');

% Handle categoricals
ds.Embarked = categorical(ds.Embarked);
t = dummyvar(categorical(ds.Sex));
ds.Sex = t(:,1);

MATLAB, just like Python, requires that the x and y be split into two variables.

% Split X & Y.
y = ds(:,'Survived');
x = ds(:,{'Pclass','Sex','Age','SibSp','Parch','Fare'});

The dummy variables for Embarked are concatenated into the matrix.

% Create training matrix (all numeric)
x = table2array(x);
x = horzcat(x,dummyvar(ds.Embarked));
y = table2array(y);

Next, the training and validation sets are split.

% Training & validation split
[trainInd,valInd] = divideblock(length(x),0.7,0.3);
x_train = x(trainInd,:);
y_train = y(trainInd,:);
x_val = x(valInd,:);
y_val = y(valInd,:);

Just like the previous examples, a binomial logistic regression is created and fit to the training set.

% Fit the model
model = glmfit(x_train,y_train,'binomial','link','logit');

Finally, the validation set is used to create predictions and accuracy is evaluated.

% Predict and calculate accuracy.
pred = glmval(model,x_val,'logit');
pred = round(pred);
acc = (pred == y_val);
sum(acc)/length(acc)

Titanic in SAS

SAS is a commercial language used to create statistical models. The first thing that you will notice about SAS is that the two primary statements are PROC and DATA, each ended by a RUN statement. A SAS program is essentially made up of PROC and DATA statements that pass data sets between each other. The data set is the only object type, and it is stored to disk at each step.

SAS programs can have other statements beyond PROC and DATA, such as a macro language. However, the macro language essentially generates and repeats PROC and DATA segments of the source file. A PROC statement calls a predefined SAS function; a DATA statement loops over a data set and modifies it.

The first step is to read in the CSV file. The OUT parameter specifies that the CSV file will be stored in a temporary binary data set named train. Because SAS performs all operations on data sets stored on disk, it does not require that everything fit into RAM. All of the other language examples in this article require everything to fit into RAM.

/* Read the CSV */

PROC IMPORT DBMS=csv OUT=train REPLACE
DATAFILE="/folders/myfolders/titanic-dataset.csv";
GETNAMES=YES;
RUN;

Next, the missing ages are filled in with their median values. This is done with a DATA statement that loops over the train data set that was just loaded in. The resulting data set is written to the same data set as the source. The DATA parameter specifies the input and the OUT parameter specifies the output.

/* Fill in missing ages with median */
PROC STDIZE DATA=train OUT=train
METHOD=median reponly;
VAR Age;
RUN;

Next, we will separate the training and validation sets. The first step is to add a column named selected to the train data set that specifies whether each row is part of the ultimate training set. This is done using PROC SURVEYSELECT; a total of 70% of the rows will receive a selected value of 1. The input and output data sets are both named train; because the OUTALL option is specified, the output still contains all rows, now labeled by the selected column.

Once the train data set has been properly labeled, it is split into two data sets that are named train and validate.

/* Train/Validate split */
PROC SURVEYSELECT DATA=train outall OUT=train METHOD=srs SAMPRATE=0.7;
RUN;

DATA validate;
SET train;
IF selected = 0;
RUN;

DATA train;
SET train;
IF selected = 1;
RUN;

Now that we have train and validate data sets, we are ready to fit the logistic regression. This fitting is performed with PROC LOGISTIC. The two categorical values, Sex and Embarked, must be specified with CLASS statements. The PARAM=ref setting means to encode them as dummy (reference) variables.

The formula that we are regressing is that Survived is equal to some regression of Sex, Age, Pclass, Parch, SibSp, and Embarked. The model parameters themselves are written to a binary file called model. The descending flag specifies that the model will predict values of 1 (survived), as opposed to 0 (perished).

/* Fit the logit */
PROC LOGISTIC data=train outmodel=model descending;
CLASS Sex / PARAM=ref ;
CLASS Embarked / PARAM=ref ;
MODEL Survived = Sex Age Pclass Parch SibSp Embarked;
RUN;

Now that the model has been fit, we can predict with another call to PROC LOGISTIC. This will create a data set named pred that contains the validate data set augmented with predictions. Because we are predicting a binary outcome, two additional columns are added to the data set: P_1 specifies the probability that the person survived and P_0 specifies the probability that the person perished. These two values sum to 1.0 for each row.

/* Predict */
PROC LOGISTIC INMODEL=model;
SCORE DATA=validate OUT=pred;
RUN;

Next, a prediction data set is created in which any row with a survival probability greater than or equal to 0.5 is treated as survived. Only the PassengerId, Survived, and P_1 columns are kept. Additionally, a new column named pred_survived is created that holds a value of 1 if the probability of survival was greater than or equal to 0.5.

/* Turn prediction probabilities into class values (threshold=.5) */
DATA pred;
SET PRED(KEEP = PassengerId Survived P_1);
pred_survived = ROUND(P_1);
RUN;

Finally, a confusion matrix is generated that measures model performance. This tells us what percentage of survived and perished predictions were correct; overall accuracy is the proportion of predictions that fall on the table's diagonal, as sketched after the listing.

/* Evaluate */
proc freq data=pred;
tables Survived * pred_survived;
run;
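
Unlike the other language examples, the SAS version ends with a confusion matrix rather than a single accuracy number. The accuracy reported by the other languages can be recovered from the matrix as the share of predictions on its diagonal. Here is a minimal Python sketch of that arithmetic, using made-up counts for illustration:

# Hypothetical 2x2 confusion matrix: rows = actual, columns = predicted.
#                 pred perished   pred survived
confusion = [[140, 25],     # actual perished
             [ 28, 75]]     # actual survived

correct = confusion[0][0] + confusion[1][1]   # the diagonal: correct predictions
total = sum(sum(row) for row in confusion)
print("Accuracy:", correct / total)           # 215 / 268, roughly 0.80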

Titanic in Julia

Julia is a relatively new programming language that is gaining increasing support from the data science community. Julia can use advanced linear algebra packages to perform matrix operations with great performance; however, loops and traditional programming constructs also perform at near C/C++ speed.

The first step is to read the CSV file into a DataFrame.

using DataFrames;
using GLM;
using StatsBase;  # provides sample(), used for the shuffle below

df = readtable("./data/titanic-dataset.csv");

The usual preprocessing steps are performed. In Julia, you will notice that function names and operators sometimes carry a prefix or suffix, such as ! or a dot (.). The ! suffix, as in delete!, notes that the function modifies what is passed to it. The dot, as in .~, means that the operation (here, not, ~) is applied to each member of the vector rather than to the whole vector.

delete!(df, :PassengerId);
delete!(df, :Name);
delete!(df, :Ticket);
delete!(df, :Cabin);
df[isna.(df[:Age]),:Age] = median(df[ .~isna.(df[:Age]),:Age])
df[isna.(df[:Embarked]),:Embarked] = "S"
pool!(df, [:Sex]);
pool!(df, [:Embarked]);

Next we split the training and validation sets into df_train and df_validate.

split_pt = trunc(Int,size(df,1)*0.7) # 70% training
shuffle_idx = sample(1:size(df,1), size(df,1), replace=false); # random permutation of the rows
df_train = df[shuffle_idx[1:split_pt],:];
df_validate = df[shuffle_idx[split_pt+1:end],:];

The model is created and the validation set is scored for prediction. The formula of the regression uses the same format as R. The prediction probabilities are rounded, just like in the other languages. This rounding means that survival probabilities of 0.5 and higher indicate survived (1), whereas lower probabilities indicate perished (0).

model = glm(@formula(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare + Embarked), df_train, Binomial(), LogitLink());
pred = predict(model,df_validate);
pred = convert(DataArray{Int}, round.(pred));

Finally we can report the accuracy.

print("Accuracy: ")
println( sum(pred .== df_validate[:Survived]) / length(pred))

Final Remarks

Now you have seen a simple logistic regression in Python, R, MATLAB, SAS, and Julia. You can see some of the parallels and differences between the languages. Python and MATLAB require that x and y be split, whereas the others do not. Julia and R use a similar regression formula format.

There are other languages that are used for data science; I tried to focus on the mainstream. In the future, I may augment this article with additional languages. Are there any that you would like to see? Java, JavaScript, and C# are candidates, as they are more mainstream IT languages.