Oracle Data Science Model Evaluation 

In this post on Oracle Data Science model evaluation, we discuss the process, the benefits, and the different types of evaluators. Model evaluation comes after the model training phase in the machine learning lifecycle.

  • The process of using different evaluation metrics to understand a machine learning model’s performance.
  • Helps you understand how a machine learning model performs across a series of benchmarks.
  • Evaluation is a set of functions that convert the output of your test data into an interpretable, standardized series of scores and charts.

Benefits of Model Evaluation

  • Benchmarking: Quickly compare models across several industry-standard metrics. The choice of metric depends on the ML use case.
  • Discover Pitfalls: Discover pitfalls and feed that insight back into future model development, for example, why does my model have high accuracy but low precision?
  • Understand Trade-Offs: Increase understanding of the trade-offs between various model types, for example, model A performs well when the weather is clear but is much more uncertain during inclement conditions.

Types of ADS Evaluators 

ADS offers the ADSEvaluator class, which is a collection of tools, metrics, and charts. There are three types of ADS evaluators; for clustering, you can use the open source methods available in scikit-learn instead.

  • Binary Classification is a type of modeling wherein the output is binary. For example, Yes or No, Up or Down, 1 or 0.
  • Multiclass Classification is a type of modeling wherein the output is discrete. For example, an integer 1-10, an animal at the zoo, or a primary color.
  • Regression is a type of modeling wherein the output is continuous. For example, price, height, sales, and length.

Binary Classification Metrics 

  • Accuracy
  • Hamming Loss
  • F-1 Score
  • Precision
  • Recall
  • ROC / AUC

You use the ADSEvaluator and ADSModel classes in the ADS package to generate these metrics.

The ADSModel.from_estimator function takes a fitted estimator as input and converts it into an ADSModel object.

All the related metrics for the logistic regression and random forest models can be shown using evaluator.metrics.

While evaluator.metrics is used to show all the metrics, the charts, on the other hand, can be shown with evaluator.show_in_notebook(), as in the sketch below.
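Here is a minimal sketch of that binary classification workflow. Only ADSModel.from_estimator, ADSEvaluator, evaluator.metrics, and evaluator.show_in_notebook() come from the text above; the dataset, the train/test split, ADSData.build, and the exact import paths are illustrative assumptions based on common ADS usage.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from ads.common.data import ADSData            # assumed import path
from ads.common.model import ADSModel          # assumed import path
from ads.evaluations.evaluator import ADSEvaluator

# Illustrative binary classification data.
X, y = make_classification(n_samples=1000, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit plain scikit-learn estimators, then wrap them as ADSModel objects.
lr_model = ADSModel.from_estimator(LogisticRegression(max_iter=1000).fit(X_train, y_train))
rf_model = ADSModel.from_estimator(RandomForestClassifier().fit(X_train, y_train))

evaluator = ADSEvaluator(
    ADSData.build(X=X_test, y=y_test),
    models=[lr_model, rf_model],
    training_data=ADSData.build(X=X_train, y=y_train),
)

evaluator.metrics              # accuracy, hamming loss, F-1, precision, recall, AUC
evaluator.show_in_notebook()   # ROC and other charts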

Multiclass Classification Metrics 

  • Accuracy
  • Hamming Loss
  • F-1 Score (Weighted, macro, micro)
  • Recall (Weighted, macro, micro)
  • Precision (Weighted, macro, micro)
  • ROC / AUC

The metrics are very similar to the binary classification metrics. To generate metrics and charts, use ADSEvaluator in the same way, but change the number of classes in the from_estimator function, for example classes=[0, 1, 2], as in the sketch below.
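A hedged sketch of the multiclass case follows. The data is again illustrative; the key difference from the binary example is passing classes=[0, 1, 2] to from_estimator, as noted above.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from ads.common.data import ADSData            # assumed import path
from ads.common.model import ADSModel          # assumed import path
from ads.evaluations.evaluator import ADSEvaluator

# Illustrative three-class data.
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = ADSModel.from_estimator(
    RandomForestClassifier().fit(X_train, y_train),
    classes=[0, 1, 2],   # tell ADS this is a three-class problem
)

evaluator = ADSEvaluator(ADSData.build(X=X_test, y=y_test), models=[model])
evaluator.metrics              # weighted / macro / micro F-1, precision, recall, etc.
evaluator.show_in_notebook()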

Regression Metrics

  • R-squared: Also known as the coefficient of determination. It is the proportion in the data of the variance that is explained by the model.
  • Explained variance score: The variance of the model’s predictions. The mean of the squared difference between the predicted values and the true mean of the data.
  • Mean squared error (MSE): The mean of the squared difference between the true values and predicted values.
  • Root mean squared error (RMSE): The square root of the mean squared error.
  • Mean absolute error (MAE): The mean of the absolute difference between the true values and predicted values.
  • Mean residuals: The mean of the difference between the true values and predicted values.
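To make these definitions concrete, here is a small NumPy sketch (plain NumPy, not ADS) computing the metrics from true values y and predictions y_hat; the explained variance line follows scikit-learn's explained_variance_score definition.

import numpy as np

# Illustrative true values and predictions.
y = np.array([3.0, 5.0, 2.5, 7.0])
y_hat = np.array([2.8, 5.4, 2.9, 6.5])

residuals = y - y_hat
mse = np.mean(residuals ** 2)                            # mean squared error
rmse = np.sqrt(mse)                                      # root mean squared error
mae = np.mean(np.abs(residuals))                         # mean absolute error
mean_residual = np.mean(residuals)                       # mean of the residuals
r2 = 1 - mse / np.var(y)                                 # coefficient of determination
explained_variance = 1 - np.var(residuals) / np.var(y)   # explained variance score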

To generate metrics and charts, you again use ADSEvaluator; see the sketch after the chart descriptions below.

ADS produces four types of regression charts:

  • Observed vs. predicted: A plot of the observed, or actual, values against the predicted values output by the models.
  • Residuals QQ: A quantile-quantile plot of the residuals against the quantiles of a standard normal distribution. It should be close to a straight line for a good model.
  • Residuals vs. predicted: A plot of residuals versus predicted values. This should not show a lot of structure in a good model.
  • Residuals vs. observed: A plot of residuals versus observed values. This should not show a lot of structure in a good model.
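A hedged sketch of a regression evaluator follows. The dataset and the LinearRegression / RandomForestRegressor estimators are illustrative assumptions; the ADS pattern itself is the same as for classification.

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

from ads.common.data import ADSData            # assumed import path
from ads.common.model import ADSModel          # assumed import path
from ads.evaluations.evaluator import ADSEvaluator

# Illustrative regression data.
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

lin_model = ADSModel.from_estimator(LinearRegression().fit(X_train, y_train))
rf_model = ADSModel.from_estimator(RandomForestRegressor().fit(X_train, y_train))

evaluator = ADSEvaluator(ADSData.build(X=X_test, y=y_test), models=[lin_model, rf_model])
evaluator.metrics              # R-squared, explained variance, MSE, RMSE, MAE, mean residuals
evaluator.show_in_notebook()   # the four regression charts described above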