Once you've reviewed a model from the perspective of data suitability, the natural next step is to go deeper and understand how well the underlying classifier is performing.
In line with industry best practice, Rev.Up propensity models use binary classification as the basis for propensity estimation. The raw model output is mapped to an expected conversion rate based on past data. In the product, this expected conversion rate is shown as Lift: the factor by which a segment's expected conversion rate exceeds the overall average.
Example: If segment A's expected conversion rate is 5% against a 2.5% average, we call that 2x lift.
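To make the arithmetic concrete, here is a minimal sketch in plain Python; the numbers mirror the example above and are purely illustrative:

```python
# Lift is the segment's expected conversion rate divided by the overall average.
segment_conversion_rate = 0.05    # expected conversion for segment A (5%)
average_conversion_rate = 0.025   # overall average conversion (2.5%)

lift = segment_conversion_rate / average_conversion_rate
print(f"Lift: {lift:.1f}x")       # -> Lift: 2.0x
```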
Rev.Up does this analysis automatically with every model build. The platform holds 20% of the training data out of model training to use for backtest and calibration analysis.
You can review this analysis on the model performance page.
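Rev.Up performs this 80/20 holdout internally and its real implementation is not exposed; the following is only a sketch of the same idea using scikit-learn on synthetic data, in case you want to reproduce a comparable backtest on your own:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in data with roughly a 2.5% positive (conversion) rate;
# Rev.Up's real training set and pipeline are not exposed.
X, y = make_classification(n_samples=10_000, weights=[0.975], random_state=42)

# Hold 20% of the rows out of model training for backtest and calibration,
# stratified so the rare positive class appears in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42
)
print(f"training rows: {len(X_train)}, holdout rows: {len(X_test)}")
```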
The most common metrics used to evaluate classifier performance are Precision and Recall. These metrics are easy to read for any model on the Performance page.
Step 1. Navigate to the Model Dashboard, then open the Performance page by clicking View for the iteration you want to review.
Step 2. In the Cumulative Conversions chart, click and drag the white dot to choose a threshold for the evaluation. Precision and Recall depend on the score threshold you choose for deciding what counts as "predicted positive" versus "predicted negative" (see the sketch after Step 4).
Step 3. Read the Precision. Precision equals the Lift multiplied by the average conversion rate. The platform reports Lift most prominently because Lift is often a more useful metric to share with business stakeholders.
Step 4. Read the Recall. The Recall for the model against its testing set is shown as % Total Conversions on this page.
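If you want to see how the threshold, Precision, Recall, and Lift relate outside the product, here is a minimal sketch using scikit-learn's metric functions. All labels, scores, and the 0.5/0.7 cutoffs are hypothetical, not values taken from Rev.Up:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Hypothetical holdout labels (1 = converted) and model scores.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
scores = np.array([0.10, 0.35, 0.80, 0.55, 0.65, 0.20, 0.90, 0.75, 0.40, 0.15])

average_conversion_rate = y_true.mean()   # baseline conversion on the holdout

for threshold in (0.5, 0.7):
    # Records scoring at or above the threshold count as "predicted positive".
    y_pred = (scores >= threshold).astype(int)

    precision = precision_score(y_true, y_pred)  # conversion rate among targeted records
    recall = recall_score(y_true, y_pred)        # share of all conversions captured
    lift = precision / average_conversion_rate   # rearranges Precision = Lift x avg. rate

    print(f"threshold={threshold}: precision={precision:.2f}, "
          f"recall={recall:.2f}, lift={lift:.1f}x")
```

Moving the threshold from 0.5 to 0.7 trades Recall for Precision, which is what dragging the dot on the Cumulative Conversions chart does. Dividing Precision by the average conversion rate recovers the Lift from Step 3, and Recall corresponds to the % Total Conversions reading in Step 4.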