Clean, Transform, Optimize: The Power of Data Preprocessing – Part II

    See Part I for an intro to data preprocessing.

    Data preprocessing with Python for different dataset types

    Now that you know the different dataset errors, we can go ahead and learn how to use Python to preprocess such a dataset.

    Let us learn about these dataset errors:

    • Missing values in a dataset
    • Outliers in a dataset
    • Overfitting in a dataset
    • Data with no numerical values in a dataset
    • Different date formats in a dataset

    Missing values in a dataset

    Missing values are a common problem when dealing with data! Values can be missing for various reasons, such as human error, mechanical error, etc.

    Data cleansing is an important step before you even begin the algorithmic trading process, which starts with historical data analysis to make the prediction model as accurate as possible.

    Based on this prediction model you create the trading strategy. Leaving missing values in the dataset can therefore wreak havoc, producing faulty predictions that lead to an erroneous strategy and, to state the obvious, poor results.

    There are three common techniques for handling missing values:

    • Dropping
    • Numerical imputation
    • Categorical imputation
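
    Before choosing among these techniques, it helps to quantify how much data is actually missing. A minimal pandas sketch (the DataFrame and column names here are illustrative, not from the original example):

```python
import pandas as pd
import numpy as np

# Illustrative DataFrame with some gaps (column names are our own)
data = pd.DataFrame({
    'stocks': [10, np.nan, 7, 4],
    'commodities': [3, 5, np.nan, np.nan],
})

# Number of missing values per column
print(data.isnull().sum())

# Fraction of missing values per column (useful for threshold-based dropping)
print(data.isnull().mean())
```

    The missing-value fraction per column is exactly the quantity the dropping threshold below is compared against.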

    Dropping

    Dropping is the most common method for handling missing values: rows, or entire columns, that contain missing values are dropped to avoid errors in the analysis.

    Some libraries automatically drop rows or columns that include missing values, which shrinks the training set. Dropping can therefore reduce model performance.

    A simple remedy for the reduced training size caused by dropping is imputation, which we discuss below. When dropping, you can also define a threshold.

    For instance, the threshold can be 50%, 60% or 70% of the data. Let us take 60% in our example: features (columns) in which more than 60% of the values are missing are dropped, while the rest are kept as the training dataset.

    For dropping the values, the following Python codes are used:

    #Set the threshold (60% missing values)
    threshold = 0.6

    #Dropping columns with a missing-value ratio above the threshold
    data = data[data.columns[data.isnull().mean() < threshold]]

    #Dropping rows with a missing-value ratio above the threshold
    data = data.loc[data.isnull().mean(axis=1) < threshold]

    With the above code, rows and columns exceeding the missing-value threshold are dropped, and the machine learning model learns from the remaining data.

    Numerical imputation

    The word imputation implies replacing the missing values with a value that makes sense. Numerical imputation is done on data with numbers.

    For instance, if there is a tabular dataset with the number of stocks, commodities and derivatives traded in a month as the columns, it is better to replace a missing value with a “0” than to leave it as it is.

    With numerical imputation, the data size is preserved and hence predictive models like linear regression can work better and predict more accurately.

    A linear regression model cannot work with missing values in the dataset, so they must be filled in before fitting. Missing values can also be replaced with the median of the column, since the median is not sensitive to outliers, unlike the average.

    Let us see the Python codes for numerical imputation, which are as follows:

    #For filling all the missed values as 0
    data = data.fillna(0)
    
    #For replacing missed values with the median of each numeric column
    data = data.fillna(data.median(numeric_only=True))

    Categorical imputation

    This imputation technique replaces the missing values with the value that occurs most frequently in the column. If no value clearly dominates the others, it is best to fill the gaps with a placeholder such as “NaN”.

    The following Python code can be used here:

    #Categorical imputation: fill with the most frequent value in the column
    data['column_name'] = data['column_name'].fillna(data['column_name'].value_counts().idxmax())

    Outliers in a dataset

    An outlier is a value that differs significantly from the others and lies far from the mean of the values. Such values usually arise from systematic errors or measurement flaws.

    Let us see the following Python code for identifying and removing outliers with the standard deviation method, where the limits are set three standard deviations from the mean:

    #Calculate the lower and upper limits (mean +/- 3 standard deviations)
    from numpy import mean, std
    data_mean, data_std = mean(data), std(data)
    cut_off = data_std * 3
    lower, upper = data_mean - cut_off, data_mean + cut_off

    #For identifying the outliers with the standard deviation method
    outliers = [x for x in data if x < lower or x > upper]
    print('Identified outliers: %d' % len(outliers))

    #Remove outliers
    outliers_removed = [x for x in data if x >= lower and x <= upper]
    print('Non-outlier observations: %d' % len(outliers_removed))

    In the code above, “lower” and “upper” denote the lower and upper limits of acceptable values in the dataset.

    Overfitting in a dataset

    In both machine learning and statistics, overfitting occurs when the model fits the data too well or simply put when the model is too complex.

    The overfitting model learns the detail and noise in the training data to such an extent that it negatively impacts the performance of the model on new data/test data.

    The overfitting problem can be solved by decreasing the number of features/inputs or by increasing the number of training examples to make the machine learning algorithms more generalised.
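
    To make the idea of regularisation concrete, here is a minimal NumPy sketch of ridge regression in closed form. The data is synthetic and the penalty value is chosen only for illustration:

```python
import numpy as np

# Synthetic, illustrative data: 50 samples, 5 features
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, 2.0, 3.0, 4.0, 5.0]) + rng.normal(size=50)

def ridge_fit(X, y, alpha):
    # Closed-form ridge solution: w = (X'X + alpha*I)^(-1) X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

w_plain = ridge_fit(X, y, alpha=0.0)   # ordinary least squares
w_ridge = ridge_fit(X, y, alpha=10.0)  # penalised fit

# The penalty shrinks the coefficient vector toward zero,
# trading a little bias for lower variance on new data
print(np.linalg.norm(w_plain), np.linalg.norm(w_ridge))
```

    The penalty term discourages large coefficients, which is one standard way to make a model more generalised.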

    The most common solution to overfitting is regularisation. Binning is one technique that regularises the data, although some information is lost each time the values are grouped into bins.

    For instance, in the case of numerical binning, the data can be as follows:

    Stock value    Bin
    100-250        Lowest
    251-400        Mid
    401-500        High

    Here is the Python code for binning:

    data['bin'] = pd.cut(data['value'], bins=[100,250,400,500], labels=["Lowest", "Mid", "High"])

    Your output should look something like this:

         Value    Bin
    0     102     Lowest
    1     300     Mid
    2     107     Lowest
    3     470     High

    Data with no numerical values in a dataset

    In the case of the dataset with no numerical values, it becomes impossible for the machine learning model to learn the information.

    A machine learning model can only handle numerical values, so it is best to spread the categorical values across new columns of binary indicators, “0” or “1”. This technique is known as one-hot encoding.

    The technique starts from a column of grouped (categorical) values. For instance, below is a grouped column:

    Infected    Covid variant
    2           Delta
    4           Lambda
    5           Omicron
    6           Lambda
    4           Delta
    3           Omicron
    5           Omicron
    4           Lambda
    2           Delta

    Now, the above grouped data can be encoded with the binary numbers “0” and “1” using the one-hot encoding technique, which converts the categorical data into a numerical format in the following manner:

    Infected    Delta    Lambda    Omicron
    2           1        0         0
    4           0        1         0
    5           0        0         1
    6           0        1         0
    4           1        0         0
    3           0        0         1
    5           0        0         1
    4           0        1         0
    2           1        0         0

    Hence, converting grouped data into encoded, numerical data helps the machine learning model grasp the information quickly.
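
    In pandas, the encoding shown above can be reproduced with get_dummies. A minimal sketch using the same illustrative data (the column name “Variant” is our own):

```python
import pandas as pd

# The same illustrative data as the table above
df = pd.DataFrame({
    'Infected': [2, 4, 5, 6, 4, 3, 5, 4, 2],
    'Variant': ['Delta', 'Lambda', 'Omicron', 'Lambda', 'Delta',
                'Omicron', 'Omicron', 'Lambda', 'Delta'],
})

# One-hot encode the categorical column; dtype=int yields 0/1 columns
encoded = pd.get_dummies(df, columns=['Variant'], dtype=int)
print(encoded.columns.tolist())
```

    Each category becomes its own indicator column (Variant_Delta, Variant_Lambda, Variant_Omicron), matching the table above.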

    Problem with the approach

    Going further, if the dataset contains a large number of categories, the one-hot encoding technique will create as many columns. Say there are 2,000 categories: the technique will create 2,000 columns, which is far too much information to feed to the model.

    Solution

    To solve this problem, we can apply the target encoding technique instead: calculate the “mean” of the target for each predictor category and use that mean for every row with the same category in the predictor column. This converts the categorical column into a numeric one, which is our main aim.

    Let us understand this with the same example as above but this time we will use the “mean” of the values under the same category in all the rows. Let us see how.

    In Python, we can use the following code:

    #Convert categorical values into numerical values with the category mean
    import pandas as pd

    Infected = [2, 4, 5, 6, 4, 3]
    Predictor = ['Delta', 'Lambda', 'Omicron', 'Lambda', 'Delta', 'Omicron']
    Infected_df = pd.DataFrame(data={'Infected': Infected, 'Predictor': Predictor})
    means = Infected_df.groupby('Predictor')['Infected'].mean()
    Infected_df['Predictor_encoded'] = Infected_df['Predictor'].map(means)
    Infected_df

    Output:

    Infected    Predictor    Predictor_encoded
    2           Delta        3.0
    4           Lambda       5.0
    5           Omicron      4.0
    6           Lambda       5.0
    4           Delta        3.0
    3           Omicron      4.0

    In the output above, the Predictor column depicts the Covid variants and the Predictor_encoded column depicts the “mean” of the same category of Covid variants: (2 + 4)/2 = 3 is the mean value for Delta, (4 + 6)/2 = 5 is the mean value for Lambda, and so on.

    Hence, the machine learning model can consume a single numerical feature for each predictor category.

    Different date formats in a dataset

    Dates appear in many formats, such as “25-12-2021”, “25th December 2021”, etc., and a machine learning model cannot be expected to understand all of them as raw strings.

    With such a dataset, you can preprocess or decompose the data by mentioning three different columns for the parts of the date, such as Year, Month and Day.

    In Python, the preprocessing of the data with different columns for the date will look like this:

    #Convert to datetime object
    df['Date'] = pd.to_datetime(df['Date'])
    
    #Decomposition
    df['Year'] = df['Date'].dt.year
    df['Month'] = df['Date'].dt.month
    df['Day'] = df['Date'].dt.day
    df[['Year','Month','Day']].head()

    Output:

    Year Month Day
    2019 1 5
    2019 3 8
    2019 3 3
    2019 1 27
    2019 2 8

    In the output above, the date has been decomposed into numerical Year, Month and Day columns, a format the machine learning model can learn from.

    The entire process mentioned above where data cleaning takes place can also be termed as data wrangling.

    In the field of machine learning, effective data preprocessing in Python is crucial for enhancing the quality and reliability of the input data, ultimately improving the performance of the model during training and inference.

    Author: Chainika Thakar

    Stay tuned for Part III to learn about Data cleaning vs data preprocessing.

    Originally posted on QuantInsti blog.

    Disclosure: Interactive Brokers

    Information posted on IBKR Campus that is provided by third-parties does NOT constitute a recommendation that you should contract for the services of that third party. Third-party participants who contribute to IBKR Campus are independent of Interactive Brokers and Interactive Brokers does not make any representations or warranties concerning the services offered, their past or future performance, or the accuracy of the information provided by the third party. Past performance is no guarantee of future results.

    This material is from QuantInsti and is being posted with its permission. The views expressed in this material are solely those of the author and/or QuantInsti and Interactive Brokers is not endorsing or recommending any investment or trading discussed in the material. This material is not and should not be construed as an offer to buy or sell any security. It should not be construed as research or investment advice or a recommendation to buy, sell or hold any security or commodity. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.
