Regression Metrics
RegressionMetrics
- class SeqMetrics.RegressionMetrics(*args, **kwargs)[source]
Bases:
Metrics
Calculates more than 100 regression performance metrics related to sequence data.
Example
>>> import numpy as np
>>> from SeqMetrics import RegressionMetrics
>>> t = np.random.random(10)
>>> p = np.random.random(10)
>>> errors = RegressionMetrics(t, p)
>>> all_errors = errors.calculate_all()
- __init__(*args, **kwargs)[source]
Initializes Metrics. args and kwargs go to the parent class SeqMetrics.Metrics.
- acc() float [source]
Anomaly correlation coefficient. See Langland et al., 2012; Miyakoda et al., 1972 and Murphy et al., 1989.
- agreement_index() float [source]
Agreement Index (d) developed by Willmott, 1981.
It detects additive and proportional differences in the observed and simulated means and variances (Moriasi et al., 2015). It is overly sensitive to extreme values due to the squared differences. It can also be used as a substitute for R2 to identify the degree to which model predictions are error-free.
\[d = 1 - \frac{\sum_{i=1}^{N}(e_{i} - s_{i})^2}{\sum_{i=1}^{N}(\left | s_{i} - \bar{e} \right | + \left | e_{i} - \bar{e} \right |)^2}\]
- bias() float [source]
Bias as given by Gupta et al., 1998.
\[Bias=\frac{1}{N}\sum_{i=1}^{N}(e_{i}-s_{i})\]
- bic(p=1) float [source]
Bayesian Information Criterion
Minimising the BIC is intended to give the best model. The model chosen by the BIC is either the same as that chosen by the AIC, or one with fewer terms. This is because the BIC penalises the number of parameters more heavily than the AIC. Modified after RegscorePy.
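A minimal NumPy sketch of the least-squares form of the BIC, n ln(SSE/n) + p ln(n), as used in RegscorePy (the helper name and variables here are ours for illustration, not the library's implementation):
>>> import numpy as np
>>> def bic_sketch(true, predicted, p=1):
...     # sum of squared errors of the model
...     sse = np.sum((np.asarray(true) - np.asarray(predicted)) ** 2)
...     n = len(true)
...     # least-squares BIC: each extra parameter p is penalised by ln(n)
...     return n * np.log(sse / n) + p * np.log(n)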
- brier_score() float [source]
Adopted from SkillMetrics. Calculates the Brier score (BS), a measure of the mean-square error of probability forecasts for a dichotomous (two-category) event, such as the occurrence/non-occurrence of precipitation. The score is calculated using the formula (see also the sketch after the references below):
\[BS = \frac{1}{N}\sum_{n=1}^{N}(f_n - o_n)^2\]where f is the forecast probabilities, o is the observed probabilities (0 or 1), and N is the total number of values in f & o. Note that f & o must have the same number of values, and those values must be in the range [0, 1].
- Returns:
BS : Brier score
- Return type:
float
References
Glenn W. Brier, 1950: Verification of forecasts expressed in terms of probabilities. Mon. Wea. Rev., 78, 1-23. D. S. Wilks, 1995: Statistical Methods in the Atmospheric Sciences. Cambridge Press. 547 pp.
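A minimal NumPy sketch of the Brier score formula above (the arrays here are made-up illustrations):
>>> import numpy as np
>>> f = np.array([0.1, 0.8, 0.6])  # forecast probabilities, in [0, 1]
>>> o = np.array([0.0, 1.0, 1.0])  # observed outcomes, 0 or 1
>>> bs = np.mean((f - o) ** 2)     # mean-square error of the probabilities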
- calculate_hydro_metrics()[source]
Calculates all metrics for hydrological data.
- Returns:
Dictionary with all metrics
- Return type:
dict
- centered_rms_dev() float [source]
Modified after SkillMetrics. Calculates the centered root-mean-square (RMS) difference between true and predicted using the formula:
\[(E')^2 = \frac{1}{N}\sum_{n=1}^{N}[(p_n - \bar{p}) - (r_n - \bar{r})]^2\]
where p is the predicted values, r is the true values, and N is the total number of values in p & r.
Output: CRMSDIFF : centered root-mean-square (RMS) difference (E')^2
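A minimal NumPy sketch of the centered RMS difference E' (squaring it gives the (E')^2 above; the helper name is ours for illustration):
>>> import numpy as np
>>> def crmsd_sketch(predicted, true):
...     # remove each array's mean, then take the RMS of the difference
...     p_anom = predicted - np.mean(predicted)
...     r_anom = true - np.mean(true)
...     return np.sqrt(np.mean((p_anom - r_anom) ** 2))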
- corr_coeff() float [source]
Pearson correlation coefficient. It measures the linear correlation between true and predicted arrays. It is sensitive to outliers. Reference: Pearson, K. 1895.
\[r = \frac{\sum ^n _{i=1}(e_i - \bar{e})(s_i - \bar{s})}{\sqrt{\sum ^n _{i=1}(e_i - \bar{e})^2} \sqrt{\sum ^n _{i=1}(s_i - \bar{s})^2}}\]
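For reference, the same quantity can be sketched directly with NumPy (the arrays here are made up):
>>> import numpy as np
>>> e = np.random.random(10)     # true values
>>> s = np.random.random(10)     # predicted values
>>> r = np.corrcoef(e, s)[0, 1]  # Pearson's r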
- cosine_similarity() float [source]
It is a judgment of orientation and not magnitude: two vectors with the same orientation have a cosine similarity of 1, two vectors oriented at 90° relative to each other have a similarity of 0, and two vectors diametrically opposed have a similarity of -1, independent of their magnitude.
- covariance() float [source]
- Covariance
\[Covariance = \frac{1}{N}\sum_{i=1}^{N}((e_{i} - \bar{e})(s_{i} - \bar{s}))\]
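A minimal NumPy sketch of this 1/N (population) form; note that np.cov would use N-1 by default:
>>> import numpy as np
>>> e, s = np.random.random(10), np.random.random(10)  # true, predicted
>>> cov = np.mean((e - e.mean()) * (s - s.mean()))     # population covariance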
- cronbach_alpha() float [source]
It is a measure of internal consistency of data. See the UCLA and Stack Overflow pages for more info.
- decomposed_mse() float [source]
Decomposed MSE developed by Kobayashi and Salam (2000)
\[ \begin{align}\begin{aligned}dMSE = (\frac{1}{N}\sum_{i=1}^{N}(e_{i}-s_{i}))^2 + SDSD + LCS\\SDSD = (\sigma(e) - \sigma(s))^2\\LCS = 2 \sigma(e) \sigma(s) (1 - \frac{\sum ^n _{i=1}(e_i - \bar{e})(s_i - \bar{s})} {\sqrt{\sum ^n _{i=1}(e_i - \bar{e})^2} \sqrt{\sum ^n _{i=1}(s_i - \bar{s})^2}})\end{aligned}\end{align} \]
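A minimal NumPy sketch of this decomposition (squared bias + SDSD + LCS; the helper name is ours for illustration):
>>> import numpy as np
>>> def dmse_sketch(e, s):
...     bias_sq = np.mean(e - s) ** 2              # squared mean bias
...     sdsd = (np.std(e) - np.std(s)) ** 2        # squared diff. of std devs
...     r = np.corrcoef(e, s)[0, 1]                # Pearson correlation
...     lcs = 2 * np.std(e) * np.std(s) * (1 - r)  # lack-of-correlation term
...     return bias_sq + sdsd + lcs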
- exp_var_score(weights=None) Optional[float] [source]
Explained variance score. Best value is 1, lower values are less accurate.
- expanded_uncertainty(cov_fact=1.96) float [source]
By default it calculates uncertainty with a 95% confidence interval; 1.96 is the coverage factor corresponding to the 95% confidence level. This indicator is used to show more information about the model deviation. Uses the formula from Behar et al., 2015 and Gueymard et al., 2014.
- fdc_fhv(h: float = 0.02) float [source]
Modified after Kratzert2018 code. Peak flow bias of the flow duration curve (Yilmaz 2008). Used in Kratzert et al., 2018.
- Parameters:
h (float) – Must be between 0 and 1.
- Returns:
Bias of the peak flows
- Return type:
float
- fdc_flv(low_flow: float = 0.3) float [source]
Bias of the bottom 30% low flows. Modified after Kratzert code. Used in Kratzert et al., 2018.
- gmean_diff() float [source]
Geometric mean difference. First geometric mean is calculated for each of two samples and their difference is calculated.
- kge(return_all=False)[source]
Kling-Gupta Efficiency. Gupta, Kling, Yilmaz, Martinez, 2009. Decomposition of the mean squared error and NSE performance criteria: Implications for improving hydrological modelling.
- output:
kge: Kling-Gupta Efficiency
cc: correlation
alpha: ratio of the standard deviations
beta: ratio of the means
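A minimal NumPy sketch of the 2009 KGE built from these components (the helper name is ours for illustration):
>>> import numpy as np
>>> def kge_sketch(e, s):
...     cc = np.corrcoef(e, s)[0, 1]    # correlation
...     alpha = np.std(s) / np.std(e)   # ratio of the standard deviations
...     beta = np.mean(s) / np.mean(e)  # ratio of the means
...     return 1 - np.sqrt((cc - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)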
- kge_bound() float [source]
Bounded Version of the Original Kling-Gupta Efficiency.
- kge_mod(return_all=False)[source]
Modified Kling-Gupta Efficiency.
- kge_np(return_all=False)[source]
Non-parametric Kling-Gupta Efficiency
- output:
kge: Kling-Gupta Efficiency
cc: correlation
alpha: ratio of the standard deviations
beta: ratio of the means
References
Pool, Vis, and Seibert, 2018 Evaluating model performance: towards a non-parametric variant of the Kling-Gupta efficiency, Hydrological Sciences Journal. https://doi.org/10.1080/02626667.2018.1552002
- kgeprime_c2m() float [source]
Bounded Version of the Modified Kling-Gupta Efficiency.
- lm_index(obs_bar_p=None) float [source]
Legate-McCabe Efficiency Index. Less sensitive to outliers in the data. The larger, the better
- Parameters:
obs_bar_p (float) – Seasonal or other selected average. If None, the mean of the observed array will be used.
- log_nse(epsilon=0.0) float [source]
log Nash-Sutcliffe model efficiency
\[NSE_{log} = 1-\frac{\sum_{i=1}^{N}(\log(e_{i})-\log(s_{i}))^2}{\sum_{i=1}^{N}(\log(e_{i})-\log(\bar{e}))^2}\]
- maape() float [source]
Mean Arctangent Absolute Percentage Error. Note: result is NOT multiplied by 100.
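A minimal NumPy sketch of MAAPE, which replaces MAPE's unbounded ratio with its arctangent (the arrays here are made up):
>>> import numpy as np
>>> t, p = np.random.random(10), np.random.random(10)  # true, predicted
>>> maape = np.mean(np.arctan(np.abs((t - p) / t)))    # in radians, bounded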
- mae(true=None, predicted=None) float [source]
Mean Absolute Error. It is less sensitive to outliers as compared to mse/rmse.
- mape() float [source]
Mean Absolute Percentage Error. The MAPE is often used when the quantity to predict is known to remain well above zero. It is useful when the size or magnitude of a prediction variable is significant in evaluating the accuracy of a prediction. It has the advantages of scale-independency and interpretability. However, it has the significant disadvantage that it produces infinite or undefined values for zero or close-to-zero actual values.
- mare() float [source]
Mean Absolute Relative Error. When expressed as a percentage, it is also known as MAPE.
- mase(seasonality: int = 1)[source]
Mean Absolute Scaled Error. The baseline (benchmark) is computed with naive forecasting (shifted by seasonality), modified after Hyndman (2006). It is the ratio of the MAE of the used model to the MAE of the naive forecast; see the sketch after the references below.
References
Hyndman, R. J. (2006). Another look at forecast-accuracy metrics for intermittent demand. Foresight: The International Journal of Applied Forecasting, 4(4), 43-46.
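A minimal NumPy sketch of this ratio (the naive forecast simply repeats the value from seasonality steps earlier; the helper name is ours for illustration):
>>> import numpy as np
>>> def mase_sketch(true, predicted, seasonality=1):
...     mae_model = np.mean(np.abs(true - predicted))
...     # in-sample MAE of the naive (seasonally shifted) forecast
...     mae_naive = np.mean(np.abs(true[seasonality:] - true[:-seasonality]))
...     return mae_model / mae_naive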
- mb_r() float [source]
Mielke-Berry R value. Berry and Mielke, 1988.
References
Mielke, P. W., & Berry, K. J. (2007). Permutation methods: a distance function approach. Springer Science & Business Media.
- mbe() float [source]
Mean bias error. This indicator expresses a tendency of the model to underestimate (negative value) or overestimate (positive value) global radiation, while MBE values closest to zero are desirable. The drawback of this test is that it does not show the correct performance when the model presents overestimated and underestimated values at the same time, since overestimation and underestimation values cancel each other. [1]
- mean_bias_error() float [source]
Mean Bias Error. It represents the overall bias error or systematic error. It shows the average interpolation bias, i.e. average over- or underestimation [1][2]. This indicator expresses a tendency of the model to underestimate (negative value) or overestimate (positive value) global radiation, while MBE values closest to zero are desirable. The drawback of this test is that it does not show the correct performance when the model presents overestimated and underestimated values at the same time, since overestimation and underestimation values cancel each other.
References
- Willmott, C. J., & Matsuura, K. (2006). On the use of dimensioned measures of error to evaluate the performance of spatial interpolators. International Journal of Geographical Information Science, 20(1), 89-102. https://doi.org/10.1080/1365881050028697
- Valipour, M. (2015). Retracted: Comparative Evaluation of Radiation-Based Methods for Estimation of Potential Evapotranspiration. Journal of Hydrologic Engineering, 20(5), 04014068. https://dx.doi.org/10.1061/(ASCE)HE.1943-5584.0001066
- med_seq_error() float [source]
Median Squared Error. Same as mse, but it takes the median, which reduces the impact of outliers.
- mod_agreement_index(j=1) float [source]
Modified agreement index. j: int; when j==1, this is the same as agreement_index. Higher j means more impact of outliers.
- nrmse_ipercentile(q1=25, q2=75) float [source]
RMSE normalized by the inter-percentile range of true values. This is least sensitive to outliers. q1: any integer between 1 and 99. q2: any integer between 2 and 100; should be greater than q1. Reference: Pontius et al., 2008.
- nrmse_mean() float [source]
Mean Normalized RMSE. RMSE normalized by the mean of true values. This allows comparison between datasets with different scales (see the sketch below).
Reference: Pontius et al., 2008
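A minimal NumPy sketch of this normalization (the arrays here are made up):
>>> import numpy as np
>>> t, p = np.random.random(10), np.random.random(10)  # true, predicted
>>> rmse = np.sqrt(np.mean((t - p) ** 2))
>>> nrmse = rmse / np.mean(t)  # normalized by the mean of the true values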
- nrmse_range() float [source]
Range Normalized Root Mean Squared Error. RMSE normalized by the range of true values. This allows comparison between data sets with different scales. It is more sensitive to outliers.
Reference: Pontius et al., 2008
- nse() float [source]
Nash-Sutcliff Efficiency.
It determines how well the model simulates trends for the output response of concern. But it cannot help identify model bias and cannot be used to identify differences in timing and magnitude of peak flows and shape of recession curves; in other words, it cannot be used for single-event simulations. It is sensitive to extreme values due to the squared differences [1]. To make it less sensitive to outliers, [2] proposed log and relative NSE.
References
- Moriasi, D. N., Gitau, M. W., Pai, N., & Daggupati, P. (2015). Hydrologic and water quality models: Performance measures and evaluation criteria. Transactions of the ASABE, 58(6), 1763-1785.
- Krause, P., Boyle, D., & Bäse, F. (2005). Comparison of different efficiency criteria for hydrological model assessment. Adv. Geosci., 5, 89-97. https://dx.doi.org/10.5194/adgeo-5-89-2005.
- nse_alpha() float [source]
Alpha decomposition of the NSE. See Gupta et al., 2009. Used in Kratzert et al., 2018.
- Returns:
Alpha decomposition of the NSE
- Return type:
float
- nse_beta() float [source]
Beta decomposition of the NSE. See Gupta et al., 2009. Used in Kratzert et al., 2018.
- Returns:
Beta decomposition of the NSE
- Return type:
float
- nse_mod(j=1) float [source]
Gives less weight to outliers if j=1; if j>1, gives more weight to outliers. Reference: Krause et al., 2005.
- pbias() float [source]
Percent Bias. It determines how well the model simulates the average magnitudes for the output response of interest. It can also determine over- and under-prediction. It cannot be used (1) for single-event simulations to identify differences in timing and magnitude of peak flows and the shape of recession curves, nor (2) to determine how well the model simulates residual variations and/or trends for the output response of interest. It can give a deceiving rating of model performance if the model overpredicts as much as it underpredicts, in which case PBIAS will be close to zero even though the model simulation is poor. [1]
[1] Moriasi et al., 2015
- r2() float [source]
Quantifies the percent of variation in the response that the 'model' explains. The 'model' here is anything from which we obtained the predicted array. It is also called the coefficient of determination or the square of the Pearson correlation coefficient. More heavily affected by outliers than Pearson correlation r.
- r2_score(weights=None)[source]
This is not a symmetric function. Unlike most other scores, R^2 score may be negative (it need not actually be the square of a quantity R). This metric is not well-defined for single samples and will return a NaN value if n_samples is less than two.
- ref_agreement_index() float [source]
Refined Index of Agreement. From -1 to 1; larger is better. Reference: Willmott et al., 2012.
- relative_rmse() float [source]
Relative Root Mean Squared Error
\[RRMSE=\frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}(e_{i}-s_{i})^2}}{\bar{e}}\]
- rmsle() float [source]
Root mean square log error.
This error is less sensitive to outliers. Compared to RMSE, RMSLE only considers the relative error between predicted and actual values, and the scale of the error is nullified by the log-transformation. Furthermore, RMSLE penalizes underestimation more than overestimation. This is especially useful in those studies where the underestimation of the target variable is not acceptable but overestimation can be tolerated.
- rsr() float [source]
Moriasi et al., 2007. It incorporates the benefits of error index statistics and includes a scaling/normalization factor, so that the resulting statistic and reported values can apply to various constituents.
- sa() float [source]
Spectral angle. From -pi/2 to pi/2. Closer to 0 is better. It measures the angle between two vectors in hyperspace, indicating how well the shapes of two arrays match instead of their magnitude. Reference: Robila and Gershman, 2005.
- skill_score_murphy() float [source]
Adopted from SkillMetrics. Calculates the non-dimensional skill score (SS) between two variables using the definition of Murphy (1988), with the formula (see also the sketch after the references below):
\[ \begin{align}\begin{aligned}SS = 1 - \frac{RMSE^2}{SDEV^2}\\SDEV^2 = \frac{1}{N-1}\sum_{n=1}^{N}[r_n - \bar{r}]^2\end{aligned}\end{align} \]where SDEV is the standard deviation of the true values, p is the predicted values, r is the reference (true) values, and N is the total number of values in p & r. Note that p & r must have the same number of values. A positive skill score can be interpreted as the percentage of improvement of the new model forecast in comparison to the reference. On the other hand, a negative skill score denotes that the forecast of interest is worse than the reference forecast. Consequently, a value of zero denotes that both forecasts perform equally [MLAir, 2020].
- Returns:
float
References
Allan H. Murphy, 1988: Skill Scores Based on the Mean Square Error and Their Relationships to the Correlation Coefficient. Mon. Wea. Rev., 116, 2417-2424. doi: http://dx.doi.org/10.1175/1520-0493(1988)<2417:SSBOTM>2.0.CO;2
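A minimal NumPy sketch of this skill score (the helper name is ours for illustration; note the sample variance with N-1):
>>> import numpy as np
>>> def ss_murphy_sketch(true, predicted):
...     rmse_sq = np.mean((predicted - true) ** 2)  # RMSE^2
...     sdev_sq = np.var(true, ddof=1)              # SDEV^2, with N-1
...     return 1 - rmse_sq / sdev_sq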
- smdape() float [source]
Symmetric Median Absolute Percentage Error. Note: result is NOT multiplied by 100.
- spearmann_corr() float [source]
Spearman correlation coefficient.
This is a nonparametric metric and assesses how well the relationship between the true and predicted data can be described using a monotonic function.
- sse() float [source]
Sum of squared errors (model vs actual). It is a measure of how far off our model's predictions are from the observed values. A value of 0 indicates that all predictions are spot on. A non-zero value indicates errors.
This is also called residual sum of squares (RSS) or sum of squared residuals, as per tutorialspoint.
- std_ratio(**kwargs) float [source]
Ratio of the standard deviations of predictions and true values. Also known as standard ratio, it varies from 0.0 to infinity, with 1.0 being the perfect value.
- ve() float [source]
Volumetric efficiency. From 0 to 1; larger is better. Reference: Criss and Winston 2008.
- volume_error() float [source]
Returns the Volume Error (Ve). It is an indicator of the agreement between the averages of the simulated and observed runoff (i.e. long-term water balance). Used in the Reynolds paper:
\[Ve = \frac{\sum_{i=1}^{N}(s_{i} - e_{i})}{\sum_{i=1}^{N}s_{i}}\]References
Reynolds, J.E., S. Halldin, C.Y. Xu, J. Seibert, and A. Kauffeldt. 2017. "Sub-Daily Runoff Predictions Using Parameters Calibrated on the Basis of Data with a Daily Temporal Resolution." Journal of Hydrology 550 (July): 399-411.
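A minimal NumPy sketch of this formula (the arrays here are made up; positive values indicate overestimation of the total volume):
>>> import numpy as np
>>> t, p = np.random.random(10), np.random.random(10)  # true, predicted
>>> ve = np.sum(p - t) / np.sum(p)  # signed relative volume error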