{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "(ch1)=\n", "# Regression Related Evaluation Metrics\n", "\n", "Regression problems are the problems where we try to make a prediction on a continuous scale. This chapter aims to help develop (or rediscover) the intuition behind some of these popular metrics, namely $R^2$, for evaluating regression models. Using an open source housing dataset, we first build three different models and thereafter intutitively build various metrics that help evaluate the the performance of the three trained models against each other. We start with basic evaluation metrics such as error, absolute error, etc and eventually try to the build the underlying intutition behind $R^2$. \n", "\n", "## Predicting House Price\n", "\n", "Let's assume we are tasked to build a model that can predict house prices. Further, we are provided with an open source Boston city dataset. As shown in the code snippet below, we can load the dataset as a `pandas` dataframe. In the table below, \"price\" is the variable we aim to predict and treat the rest of the variables as input features. You can find more information about the individual fields in the dataset over {ref}`here `." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "ExecuteTime": { "end_time": "2020-05-19T20:47:09.181745Z", "start_time": "2020-05-19T20:46:46.355041Z" } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
CRIMZNINDUSCHASNOXRMAGEDISRADTAXPTRATIOBLSTATprice
4613.693110.018.100.00.7136.37688.42.567124.0666.020.2391.4314.6517.7
1800.065880.02.460.00.4887.76583.32.74103.0193.017.8395.567.5639.8
5000.224380.09.690.00.5856.02779.72.49826.0391.019.2396.9014.3316.8
2460.3398322.05.860.00.4316.10834.98.05557.0330.019.1390.189.1624.3
1070.131170.08.560.00.5206.12785.22.12245.0384.020.9387.6914.0920.4
\n", "
" ], "text/plain": [ " CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX \\\n", "461 3.69311 0.0 18.10 0.0 0.713 6.376 88.4 2.5671 24.0 666.0 \n", "180 0.06588 0.0 2.46 0.0 0.488 7.765 83.3 2.7410 3.0 193.0 \n", "500 0.22438 0.0 9.69 0.0 0.585 6.027 79.7 2.4982 6.0 391.0 \n", "246 0.33983 22.0 5.86 0.0 0.431 6.108 34.9 8.0555 7.0 330.0 \n", "107 0.13117 0.0 8.56 0.0 0.520 6.127 85.2 2.1224 5.0 384.0 \n", "\n", " PTRATIO B LSTAT price \n", "461 20.2 391.43 14.65 17.7 \n", "180 17.8 395.56 7.56 39.8 \n", "500 19.2 396.90 14.33 16.8 \n", "246 19.1 390.18 9.16 24.3 \n", "107 20.9 387.69 14.09 20.4 " ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%matplotlib inline\n", "from sklearn import datasets\n", "import pandas as pd\n", "import warnings\n", "warnings.filterwarnings(action=\"ignore\", module=\"scipy\", message=\"^internal gelsd\")\n", "warnings.simplefilter(action='ignore', category=FutureWarning)\n", "\n", "\n", "boston = datasets.load_boston()\n", "data = pd.DataFrame(boston.data, columns=boston.feature_names)\n", "data['price'] = boston.target\n", "data.sample(n=5)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We randomly split the data into 80/20 i.e 80% data is used for training and 20% of the data is used for testing. Further, let's assume that we train three different models to predict house prices. The three models we will train are:\n", "\n", "1. **Mean Model:** Assume that the only available information is prices of adjoining houses to your house. Now, if someone asks what's the price of a nearby house, one is likely to guess a value close to the \"mean\" house price of the adjoining houses. It can be shown that when no information is available other than the target variable, using the expected value (or \"mean\") of the target variable as the predicted value will minimize the total absolute error. Here, I use the term \"mean model\" to describe such a model. For every data point, \"Mean model\" returns a constant value equal to the observed mean value of the target variable in the training dataset. \n", " As you will see later, this model serves as a great baseline for evaluating other models. \n", "\n", "2. **Linear Regression Model:** The second model will use linear regression algorithm. Linear regression and its variants (such as Lasso, ElasticNet, etc) are one of the most popular algorithms to deal with regression problems. For the purpose of this chapter, we will use the ordinary least square algorithm. \n", "\n", "3. **Xgboost:** The third model is based on the \"Xgboost\" algorithm. Xgboost trains multiple decision trees where each tree increases the weight depending on the error observed in the previous tree. \n", "\n", "For the purpose of this chapter, it's not important to understand how these different algorithms work. The only motivation for training three different models is to help conceptualize and develop an evaluation metric that allows us to compare different models and determine the best. In the same spirit, we also skip feature engineering and hyper-parameter tunning. These steps are critical to building a good quality model. But the focus of this chapter is not to build the best model for house price prediction but on how to evaluate regression models. \n", "\n", "Using training data, below code snippt trains the three models and use the same to predict house price for the houses in our test dataset. 
" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "ExecuteTime": { "end_time": "2020-05-19T20:48:13.525252Z", "start_time": "2020-05-19T20:48:12.413572Z" } }, "outputs": [ { "ename": "XGBoostError", "evalue": "XGBoost Library (libxgboost.dylib) could not be loaded.\nLikely causes:\n * OpenMP runtime is not installed (vcomp140.dll or libgomp-1.dll for Windows, libomp.dylib for Mac OSX, libgomp.so for Linux and other UNIX-like OSes). Mac OSX users: Run `brew install libomp` to install OpenMP runtime.\n * You are running 32-bit Python on a 64-bit OS\nError message(s): ['dlopen(/usr/local/anaconda3/lib/python3.7/site-packages/xgboost/lib/libxgboost.dylib, 6): Library not loaded: /usr/local/opt/libomp/lib/libomp.dylib\\n Referenced from: /usr/local/anaconda3/lib/python3.7/site-packages/xgboost/lib/libxgboost.dylib\\n Reason: image not found']\n", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mXGBoostError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0msklearn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpreprocessing\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mStandardScaler\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0msklearn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmodel_selection\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mtrain_test_split\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 5\u001b[0;31m \u001b[0;32mfrom\u001b[0m \u001b[0mxgboost\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mXGBRegressor\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 6\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0;31m# Split Data Into Training and Test Dataset\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m/usr/local/anaconda3/lib/python3.7/site-packages/xgboost/__init__.py\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 9\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mwarnings\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 10\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 11\u001b[0;31m \u001b[0;32mfrom\u001b[0m \u001b[0;34m.\u001b[0m\u001b[0mcore\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mDMatrix\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mDeviceQuantileDMatrix\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mBooster\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 12\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0;34m.\u001b[0m\u001b[0mtraining\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mtrain\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcv\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 13\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0;34m.\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mrabit\u001b[0m \u001b[0;31m# noqa\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m/usr/local/anaconda3/lib/python3.7/site-packages/xgboost/core.py\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 173\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 174\u001b[0m \u001b[0;31m# load the XGBoost library globally\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 175\u001b[0;31m \u001b[0m_LIB\u001b[0m 
\u001b[0;34m=\u001b[0m \u001b[0m_load_lib\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 176\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 177\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m/usr/local/anaconda3/lib/python3.7/site-packages/xgboost/core.py\u001b[0m in \u001b[0;36m_load_lib\u001b[0;34m()\u001b[0m\n\u001b[1;32m 164\u001b[0m \u001b[0;34m'`brew install libomp` to install OpenMP runtime.\\n'\u001b[0m \u001b[0;34m+\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 165\u001b[0m \u001b[0;34m' * You are running 32-bit Python on a 64-bit OS\\n'\u001b[0m \u001b[0;34m+\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 166\u001b[0;31m 'Error message(s): {}\\n'.format(os_error_list))\n\u001b[0m\u001b[1;32m 167\u001b[0m \u001b[0mlib\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mXGBGetLastError\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrestype\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mctypes\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mc_char_p\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 168\u001b[0m \u001b[0mlib\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcallback\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_get_log_callback_func\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mXGBoostError\u001b[0m: XGBoost Library (libxgboost.dylib) could not be loaded.\nLikely causes:\n * OpenMP runtime is not installed (vcomp140.dll or libgomp-1.dll for Windows, libomp.dylib for Mac OSX, libgomp.so for Linux and other UNIX-like OSes). Mac OSX users: Run `brew install libomp` to install OpenMP runtime.\n * You are running 32-bit Python on a 64-bit OS\nError message(s): ['dlopen(/usr/local/anaconda3/lib/python3.7/site-packages/xgboost/lib/libxgboost.dylib, 6): Library not loaded: /usr/local/opt/libomp/lib/libomp.dylib\\n Referenced from: /usr/local/anaconda3/lib/python3.7/site-packages/xgboost/lib/libxgboost.dylib\\n Reason: image not found']\n" ] } ], "source": [ "from sklearn.dummy import DummyRegressor\n", "from sklearn.linear_model import LinearRegression\n", "from sklearn.preprocessing import StandardScaler\n", "from sklearn.model_selection import train_test_split\n", "from xgboost import XGBRegressor\n", "\n", "# Split Data Into Training and Test Dataset \n", "# so that we use the same training and test dataset \n", "# to train and test three models. Also setting random state \n", "# so that we can results are reproducible\n", "import numpy as np\n", "train, test = train_test_split(data, test_size=0.2, random_state=10)\n", "train.is_copy = None\n", "test.is_copy = None\n", "print(\"Training Dataset: \", train.shape)\n", "print(\"Test Dataset: \", test.shape)\n", "\n", "# =====================\n", "# TRAIN MODELS\n", "# =====================\n", "\n", "# Mean Model\n", "# It simply returns mean value based on the training dataset \n", "# as a prediction. 
Fixing \n", "mean_model = DummyRegressor(strategy='mean')\n", "mean_model.fit(train[boston.feature_names], train['price'])\n", "\n", "# Linear Regression\n", "median_model = LinearRegression(normalize=True)\n", "median_model.fit(train[boston.feature_names], train['price'])\n", "\n", "# xgboost model\n", "xgboost_model = XGBRegressor()\n", "xgboost_model.fit(train[boston.feature_names], train['price'])\n", "\n", "# =====================\n", "# GENERATE PREDICTIONS\n", "# =====================\n", "test['mean'] = mean_model.predict(test[boston.feature_names])\n", "test['regression'] = median_model.predict(test[boston.feature_names])\n", "test['xgboost'] = xgboost_model.predict(test[boston.feature_names])\n", "\n", "## display results\n", "test[['price', 'mean', 'regression', 'xgboost']].sample(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Error Distribution and Mean Absolute Error\n", "\n", "The next challenge is determining the best model of the three or to rank order them based on the quality. However, quality is undefined and we need to define it. The first thing that comes to mind is to examine error i.e. difference between actual house price and the predicted value. Note that the error can be positive if the model underpredicts the value of a given house or vice-versa. Since there are hundreds of houses in the test dataset, as shown in the below plot, we can further examine the distribution of error for each model. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-05-19T20:47:11.300285Z", "start_time": "2020-05-19T20:46:47.765Z" } }, "outputs": [], "source": [ "# =====================\n", "# CALCULATE ERROR\n", "# =====================\n", "test['mean_model_error'] = test['price'] - test['mean']\n", "test['regression_model_error'] = test['price'] - test['regression']\n", "test['xgboost_model_error'] = test['price'] - test['xgboost']\n", "\n", "# =====================\n", "# PLOT DISTRIBUTION OF ERRORS\n", "# FOR THREE MODELS\n", "# =====================\n", "\n", "# convert wide format to row format so that it\n", "# easy to visualize and compute statistical properties\n", "errorDF = test.melt(value_vars=['mean_model_error', 'regression_model_error', 'xgboost_model_error']) \n", "\n", "%matplotlib inline\n", "from plotnine import *\n", "display(\n", " ggplot(errorDF, aes('value', fill='variable')) \n", " + geom_density(alpha=0.5) \n", " + theme_xkcd()\n", " + theme(figure_size=(18,6))\n", " + xlab(\"Error in house price\")\n", " + ggtitle(\"Distribution of Error\")\n", ")\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The distribution of error tells a lot about each model. For instance, from the above plot, we can examine that xgboost based model has a much smaller standard deviation as compared to other models. But as an evaluation metric, we desire for a single real number that makes comparing and ranking competing models easier. Unless there is a significant bias in the model, the error is normally distributed (and as we can observe from the above plot). A normal distribution can be described using two parameters: mean and standard deviation. Thus one option to convert the distribution to a single number that serves as an evaluation criterion is to examine mean error for each model and we can name this evaluation criteria as \"mean error\". The code snippet below computes the mean and standard deviation for error distribution associated with each model. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-05-19T20:47:11.301829Z", "start_time": "2020-05-19T20:46:48.957Z" } }, "outputs": [], "source": [ "\n", "# =====================\n", "# CALCULATE MEAN AND SD\n", "# =====================\n", "print(\"Mean and Standard Deviation\")\n", "display(\n", " errorDF\n", " .groupby('variable').agg({'value': ['mean', 'std']})\n", " .reset_index()\n", " .sort_values(('value', 'mean'))\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using \"mean error\" as an evaluation criterion we can conclude that xgboost based model has the smallest error as compared to other models. However, there is an obvious flaw in our conclusion. Let's assume that we have a model, say Model X, that always over predict i.e. the error is always negative. In this case, the mean error will be negative and let's assume it's -3. Since -3 is less than the mean error of the xgboost based model, we might wrongly conclude that Model X is better than the xgboost based model.\n", "\n", "One might jump to fix the problem by taking absolute of mean error as it will again restore xgboost based model as the best quality model. There is, however, another issue. Assume that there are two houses in our test dataset and a model underestimate one by 10K, i.e. error = 10K, and overestimate the other by 10K, i.e error = -10K. Here, the mean error and absolute mean error will be zero. This model should be the best model we can as the absolute mean error can never be less than zero. However, this doesn't make sense. \n", "\n", "The problem with absolute mean error is that the individual errors are not additive. To make the errors additive we can take absolute error and thereafter take the mean of absolute error. Thus in the above example, the mean absolute error will be $ \\frac{|-10| + |10|}{2} = 10$. Note that there is a significant difference between \"absolute mean error\" and our new evaluation metric \"mean absolute error\". In the former, we first compute mean and then apply absolute function. In the later, we first apply the absolute function to individual errors and thereafter take the mean. For obvious reasons, this new evaluation metric is referred as \"Mean Absolute Error\" or MAE. Mathematically MAE can be defined as:\n", "\n", "$$MAE = \\frac{\\sum_{i=1}^{N}|\\hat{y_i} - y_i|}{N}$$\n", "\n", "Let's re-evaluate the quality of our three models using MAE." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-05-19T20:47:11.303214Z", "start_time": "2020-05-19T20:46:50.631Z" } }, "outputs": [], "source": [ "# Conver error into absolute error\n", "errorDF['absolute_error'] = errorDF['value'].abs()\n", "display(\n", " errorDF\n", " .groupby('variable')\n", " .agg({'absolute_error': 'mean'})\n", " .reset_index()\n", " .rename(columns={'absolute_error': 'MAE'})\n", " .sort_values('MAE')\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The unit of MAE is the same as that of the target variable. This makes MAE easy to understand and interpret. However, \"mean\" is one of the parameters several parameters related to a distribution and is sensitive to outliers. Hence, sometimes instead of taking the mean of absolute error, \"median absolute error\" is also reported. \n", "\n", "## $R^2$ \n", "\n", "While MAE and its variants are easy to interpret, they have one major shortcoming. These metrics focus on the centrality of the error distribution (i.e. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "The unit of MAE is the same as that of the target variable, which makes MAE easy to understand and interpret. However, the mean is only one of several statistics that describe a distribution, and it is sensitive to outliers. Hence, sometimes instead of the mean of the absolute errors, the \"median absolute error\" is reported. \n", "\n", "## $R^2$ \n", "\n", "While MAE and its variants are easy to interpret, they have one major shortcoming. These metrics focus on the centrality of the error distribution (i.e., the mean or median) and ignore the spread of the errors. For instance, consider two house prediction models, both having an MAE of \\\\$10K. However, for one of the models the error ranges from \\\\$5K to \\\\$15K, while for the other it can range from \\\\$0K to \\\\$50K. In this case, the first model might be more desirable due to its smaller spread. When trying to decide which model is best, dealing with two separate metrics independently is often challenging, and hence one needs a single metric that takes into account the whole distribution of errors rather than focusing only on its centrality. \n", "\n", "In our search for a better metric, let's revisit some of the core questions we are trying to answer as part of our evaluation strategy. One important question is what percentage of houses exhibit less than a certain absolute error. We can easily answer this by plotting the cumulative distribution of absolute error, as shown below. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-05-19T20:47:11.304479Z", "start_time": "2020-05-19T20:46:51.482Z" } }, "outputs": [], "source": [ "# For now ignore this; the description below will eventually make sense of it\n", "idealModel = pd.DataFrame(\n", "    [[0, 0, 0, 1.], [0, 1, np.max(errorDF.absolute_error), 1.]],\n", "    columns=['x1', 'y1', 'x2', 'y2']\n", ")\n", "\n", "(\n", "    ggplot() \n", "    \n", "    # we use the stat_ecdf function to compute the cumulative distribution and plot it\n", "    + stat_ecdf(aes('absolute_error', color='variable'), data=errorDF, geom='step')\n", "    \n", "    # plot the ideal model\n", "    + geom_segment(aes(x='x1', y='y1', xend='x2', yend='y2'), data=idealModel, linetype='dashed')\n", "    \n", "    # render labels\n", "    + theme_minimal()\n", "    + xlab(\"Absolute Error\")\n", "    + ylab(\"Cumulative Percentage of Houses\")\n", "    + ggtitle(\"Cumulative Distribution of Absolute Error For Various Models\")\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can observe that for the xgboost based model, 95% of the houses have less than 10 units of absolute error, whereas only about 75% of the houses have less than 10 units of absolute error for the \"mean model\". \n", "\n", "One thing that also stands out in the above plot is the clear hierarchy of lines: the better a model gets, the closer its curve moves towards the top left part of the plot. For instance, the xgboost based model, which is the best of the three based on the MAE metric, is closest to the top left part of the plot. Similarly, the \"mean model\", which is the worst, is farthest from it. There is a reason for this behavior. Let's think about an ideal model. An ideal model will have zero absolute error for 100% of the test cases. As shown by the black dotted line in the above plot, the cumulative distribution plot for this ideal model overlaps with the y-axis and extends up to 1. Thus, as our models get better, they should move closer towards the cumulative distribution of the ideal model. \n", "\n", "We can further quantify this notion of \"closeness\" to the ideal model as the area between the cumulative distribution plot of the ideal model and that of a given model. Let's represent this area by $AAC$, for \"area above the curve\". **AAC reflects how far a given model is from the ideal model, i.e., the possible improvement opportunity.** For instance, in the below plot the red region indicates the AAC for the xgboost based model. Similarly, the AAC of the \"mean model\" is equal to the green region plus the red region. Mathematically, integrating along the y-axis over the cumulative fraction of houses $q$, we can define AAC as:\n", "\n", "$$ AAC(m) = \\int_0^{1}{|y-\\hat{y}_m| ~ dq} ~ \\approx ~ \\frac{1}{N}\\sum_i{|y_i-\\hat{y}_{m, i}|} $$\n", "\n", "which is exactly the mean absolute error of model $m$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-05-19T20:47:11.305489Z", "start_time": "2020-05-19T20:46:52.636Z" } }, "outputs": [], "source": [ "df = pd.melt(test[['mean_model_error', 'xgboost_model_error']].abs(), value_vars=['mean_model_error', 'xgboost_model_error'])\n", "meanCount, binEdges1 = np.histogram(test['mean_model_error'].abs(), bins=100, range=[0, np.max(df.value)])\n", "meanCumSum = meanCount.cumsum()/float(test.shape[0])\n", "\n", "xgboostCount, binEdges2 = np.histogram(test['xgboost_model_error'].abs(), bins=100, range=[0, np.max(df.value)])\n", "xgboostCumSum = xgboostCount.cumsum()/float(test.shape[0])\n", "\n", "assert np.array_equal(binEdges1, binEdges2), \"Bins are different\"\n", "\n", "df = pd.DataFrame({\n", "    'absolute_error': binEdges1[:-1] + np.diff(binEdges1)/2.0,\n", "    'mean_cumsum': meanCumSum,\n", "    'xgboost_cumsum': xgboostCumSum\n", "})\n", "\n", "(\n", "    ggplot(df, aes(x='absolute_error'))\n", "    + geom_ribbon(aes(ymin='mean_cumsum', ymax='xgboost_cumsum'), fill='green', alpha=0.5)\n", "    + geom_ribbon(aes(ymin='xgboost_cumsum', ymax=1), fill='red', alpha=0.5)\n", "    + geom_ribbon(aes(ymin=0, ymax='mean_cumsum'), fill='grey', alpha=0.5)\n", "    + xlab(\"Absolute Error\")\n", "    + ylab(\"Cumulative Percentage of Houses\")\n", "    + ggtitle(\"Cumulative distribution of xgboost based model and mean model\")\n", ")\n" ] },
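{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the area concrete, here is a minimal numerical sketch (reusing the error columns computed earlier): it approximates each model's AAC with a trapezoidal sum over the empirical cumulative distribution and compares it with the MAE, which it should roughly match." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Approximate AAC as the area between each model's cumulative distribution\n", "# of absolute error and the horizontal line y = 1, then compare it to MAE\n", "for model in ['mean_model_error', 'regression_model_error', 'xgboost_model_error']:\n", "    abs_err = test[model].abs().sort_values().values\n", "    ecdf = np.arange(1, len(abs_err) + 1) / len(abs_err)\n", "    aac = np.trapz(1 - ecdf, abs_err)\n", "    print(model, ' AAC:', round(aac, 2), ' MAE:', round(abs_err.mean(), 2))" ] },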
{ "cell_type": "markdown", "metadata": {}, "source": [ "The advantage of AAC over MAE is that it takes into account the complete distribution of the absolute error and quantifies how far a given model is from the ideal model. However, as an evaluation metric, it is still missing many desirable characteristics. Some of the key concerns are:\n", "\n", "1. AAC is inversely related to model quality. As AAC increases, model quality deteriorates and vice-versa, which is counter-intuitive. \n", "\n", "2. AAC is bounded only on one side. It can range from 0 to infinity: as model quality deteriorates, AAC grows without bound. A good metric should be bounded, say between 0 and 1 or between -1 and 1, to make it easy to interpret. \n", "\n", "3. AAC is not differentiable. AAC uses the \"absolute\" function, which is non-differentiable. For optimization, it is often desirable to have a function that is differentiable. \n", " \n", "Let's focus on the first problem. One simple fix for the inverse relationship is to instead consider the area under the curve (AUC). In the above plot, the AUC for the XGboost based model is the sum of the green and the grey regions. Conceptually, AUC represents the improvement made by a given model over the worst possible model. Theoretically, the worst model has an infinite absolute error for 100% of the test cases; its cumulative distribution plot overlaps with the x-axis and extends to infinity. Using the worst model, however, complicates computing AUC, and hence we need some other baseline model. The \"mean model\", one of the three models that we trained, is a good baseline: it uses no information other than the target variable itself and, in practice, is about the worst model a machine can learn. Let's define $AUC_{mean}$ to represent the area between the cumulative distribution plot of a given model and that of the mean model. In the above plot, the green region represents $AUC_{mean}$ for the XGboost based model. Mathematically, we can represent $AUC_{mean}$ as the difference of the two areas above the curve:\n", "\n", "$$ AUC_{mean}(m) = AAC(mean) - AAC(m) ~ \\approx ~ \\frac{1}{N}\\sum_i{|y_i-\\bar{y}|} - \\frac{1}{N}\\sum_i{|y_i-\\hat{y}_{m, i}|} $$\n", "\n", "where\n", "* $m$ represents a given model, such as the XGboost based model\n", "* $\\hat{y}_{m, i}$ represents model $m$'s predicted target value for test case $i$\n", "* $\\bar{y}$ represents the mean of the observed target values, i.e., the mean model's prediction\n", "* $AAC(mean)$ represents the AAC for the mean model, and $AAC(m)$ the AAC for a given model $m$." ] },
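{ "cell_type": "markdown", "metadata": {}, "source": [ "Since AAC is (approximately) the mean absolute error, $AUC_{mean}$ can be read off numerically as the drop in MAE relative to the mean-model baseline. A small sketch, reusing the error columns from earlier:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# AUC_mean as a difference of the two areas above the curve, i.e. the\n", "# improvement in mean absolute error over the mean-model baseline\n", "aac_mean = test['mean_model_error'].abs().mean()\n", "for model in ['regression_model_error', 'xgboost_model_error']:\n", "    print(model, ' AUC_mean:', round(aac_mean - test[model].abs().mean(), 2))" ] },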
{ "cell_type": "markdown", "metadata": {}, "source": [ "$AUC_{mean}$ can range from $-\\infty$ to $\\infty$. It will be negative when our model is worse than the mean model, in which case the cumulative plot of the given model lies below that of the mean model. However, as discussed earlier, a bounded metric is much easier to interpret and hence desirable. One way to bound $AUC_{mean}$ from above is to normalize it by $AAC(mean)$. That is, we redefine $AUC_{mean}$ as\n", "\n", "$$ AUC_{mean}(m) = \\frac{AAC(mean) - AAC(m)}{AAC(mean)} $$\n", "\n", "or \n", "\n", "$$ AUC_{mean}(m) = 1 - \\frac{AAC(m)}{AAC(mean)} $$\n", "\n", "\n", "Theoretically, based on the new definition, $AUC_{mean}$ can range from $-\\infty$ to 1. It is 1 when the area above the curve for the given model, i.e. $AAC(m)$, is equal to 0; in that case we have an ideal model that is accurate for 100% of the test cases. $AUC_{mean}(m)$ will be zero when the model is the same as the mean model. If the model is worse than the mean model, it can range anywhere from 0 to $-\\infty$. In practice, however, $AUC_{mean}(m)$ can only range from 0 to 1: if it is less than 0, one can simply replace the learned model with the mean model. \n", "\n", "One last issue with AAC is that it is not differentiable, due to the use of the \"absolute\" function. We used the \"absolute\" function to make the errors additive. To fix this, we can replace the \"absolute\" function with the \"square\" function, which also makes the errors additive but is differentiable. Thus, we can rewrite the definition as\n", "\n", "$$ AUC_{mean}(m) = 1 - \\frac{AAC(m)}{AAC(mean)} \\approx 1 - \\frac{\\sum_i{(y_i-\\hat{y}_{m, i})^2}}{\\sum_i{(y_i-\\bar{y})^2}}$$\n", "\n", "The above equation should start looking familiar. Not only have we rediscovered the intuition behind $R^2$, we can also visualize the metric. The code snippet below computes $R^2$ based on the idea explained above, and also using the sklearn library." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-05-19T20:47:11.306568Z", "start_time": "2020-05-19T20:46:53.588Z" } }, "outputs": [], "source": [ "# Manually calculating R2\n", "print(\"Manually Computing R2\")\n", "print(\"Xgboost Model: \", 1 - np.sum((test['price'] - test['xgboost']) ** 2)/ np.square(test['price'] - test['mean']).sum())\n", "print(\"Regression Model: \", 1 - np.sum(test['regression_model_error'] ** 2)/np.sum(test['mean_model_error'] ** 2))\n", "print(\"Mean Model: \", 1 - np.sum(test['mean_model_error'] ** 2)/np.sum(test['mean_model_error'] ** 2))\n", "\n", "\n", "# Using sklearn\n", "print(\"\\nUsing Sklearn\")\n", "from sklearn.metrics import r2_score\n", "print(\"Xgboost Model: \", r2_score(test['price'], test['xgboost']))\n", "print(\"Regression Model: \", r2_score(test['price'], test['regression']))\n", "print(\"Mean Model: \", r2_score(test['price'], test['mean']))\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: for the mean model one might expect $R^2 = 0$. However, using sklearn we get a nonzero value. That's because sklearn computes the mean house price from the test dataset, while our mean model learned it from the training dataset. \n", "\n", "Apart from \"Mean Absolute Error\" and $R^2$, there are a few other metrics, such as mean squared error, mean squared log error, etc., that are popularly used for evaluating regression models. A good starting point for finding out more about these metrics is the list of metrics implemented in the `sklearn` library, available [here](http://scikit-learn.org/stable/modules/model_evaluation.html). \n", "\n", "One might also find it useful to read about the adjusted $R^2$ metric: $R^2$ doesn't penalize a model for the number of features it uses, and adjusted $R^2$ addresses this.\n",
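"\n", "For reference, the standard definition (quoted here, not derived) corrects $R^2$ using the number of samples $n$ and the number of features $p$:\n", "\n", "$$ R^2_{adj} = 1 - (1 - R^2)\\frac{n-1}{n-p-1} $$\n",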
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-05-19T20:47:11.306568Z", "start_time": "2020-05-19T20:46:53.588Z" } }, "outputs": [], "source": [ "# Manually calculating R2\n", "print(\"Manually Computing R2\")\n", "print(\"Xgboost Model: \", 1 - np.sum((test['price'] - test['xgboost']) ** 2)/ np.square(test['price'] - test['mean']).sum())\n", "print(\"Regression Model: \", 1 - np.sum(test['regression_model_error'] ** 2)/np.sum(test['mean_model_error'] ** 2))\n", "print(\"Mean Model: \", 1 - np.sum(test['mean_model_error'] ** 2)/np.sum(test['mean_model_error'] ** 2))\n", "\n", "\n", "# Using sklearn\n", "print(\"\\nUsing Sklearn\")\n", "from sklearn.metrics import r2_score\n", "print(\"Decision Tree: \", r2_score(test['price'], test['xgboost']))\n", "print(\"Regression Model: \", r2_score(test['price'], test['regression']))\n", "print(\"Mean Model: \", r2_score(test['price'], test['mean']))\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: For the mean model one might expect $R^2 = 0$. However using sklear we get non zero value. That's because sklearn is computing mean house price based on the test dataset. \n", "\n", "Apart from \"Mean Absolute Error\" and $R^2$, there are few other metrics, such as squared error, squared log error, etc., that are popularly used when evaluating a regression model. A good starting to find more about these metrics is the list of implemented metrics in `sklear` library over [here](http://scikit-learn.org/stable/modules/model_evaluation.html). \n", "\n", "One might also find it useful to read about adjusted $R^2$ metric. $R^2$ doesn't penalize for the number of features and is addressed by adjusted $R^2$ metric. Hopefully this chapter will now make it easy to understand some of these variants of $R^2$ and other regression metrics. " ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": true }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }