Overview
When modeling in the Lattice platform, users often want detailed answers about what happens under the hood. Here we answer many of those questions, and we continue to add new ones. Please join us!
FAQ
Can you continue iterating on a model after you have activated it?
Yes! Your new iterations will not impact the Active scoring in any way. When you are ready to move to a new iteration for scoring, just Activate it. The next score job will pick it up.
What algorithm is behind propensity scoring?
Propensity scoring uses binary classification to score your records. The Event column in your training data is the target of the supervised learning. (For cross-sell/up-sell models on transaction data, this column is created for you based on your settings, using point-in-time evaluation of your transaction history.)
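For transaction data, the point-in-time evaluation amounts to asking, for each account, "did a purchase happen within some window after the cutoff date?" Here is a minimal sketch of that labeling step; the function name, window length, and data shapes are illustrative, not Lattice's actual implementation:

```python
from datetime import date, timedelta

def label_events(transactions, cutoff, window_days=90):
    """Build a binary Event label per account (illustrative sketch).

    transactions: list of (account_id, purchase_date) tuples
    cutoff: the point-in-time date the model trains "as of"
    Event = 1 if the account purchased within window_days after cutoff.
    """
    window_end = cutoff + timedelta(days=window_days)
    events = {}
    for account, purchase_date in transactions:
        if cutoff < purchase_date <= window_end:
            events[account] = 1  # success: purchase inside the window
        events.setdefault(account, 0)  # otherwise a non-success
    return events
```

Only activity after the cutoff counts toward the label; everything before the cutoff would be feature material, which is what keeps the training honest.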
What algorithm is behind revenue scoring?
Lattice revenue scoring has two factors that contribute to the score:
- Propensity (see above)
- Revenue modeling
Revenue modeling uses regression methods to estimate the revenue level of a deal, assuming that it closes. Since deals may or may not close, the two factors together lead to the best prioritization strategy to maximize revenue across all targets.
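Combining the two factors is essentially an expected-value calculation: weight the estimated deal size by the probability the deal closes. This toy sketch (names and numbers are illustrative, not Lattice's internal scoring) shows why the combined score can reorder a list that either factor alone would rank differently:

```python
def expected_revenue(p_close, revenue_if_closed):
    # Probability-weighted revenue: the propensity factor times
    # the regression estimate of deal size, assuming it closes.
    return p_close * revenue_if_closed

# Hypothetical deals: (name, propensity, estimated revenue if closed)
deals = [("A", 0.8, 10_000), ("B", 0.3, 50_000), ("C", 0.6, 5_000)]

# Rank by expected revenue to maximize revenue across all targets.
ranked = sorted(deals, key=lambda d: expected_revenue(d[1], d[2]), reverse=True)
```

Deal B has the lowest propensity but the highest expected revenue, which is exactly the kind of prioritization the combined score is meant to surface.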
What is a "negative universe", and what is it for?
"Negative universe" is a name sometimes given to the top-of-funnel view that fit models use to estimate relative lift. Take the name with a grain of salt, though: many of these entities aren't true negatives, they just haven't made it down the funnel yet.
When you score this universe with the model, the ones that are more likely to convert will get the higher scores.
How does Lattice identify personal email domains?
Lattice maintains an ever-growing list of personal email domains that we check incoming domains against.
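Conceptually, this is a set-membership check on the domain portion of the address. A minimal sketch, using a tiny illustrative subset of domains (the real list is much larger and maintained by Lattice):

```python
# Illustrative subset only; Lattice's actual list is far larger.
PERSONAL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def is_personal_email(address):
    # Take everything after the last "@" and normalize case
    # before checking it against the known personal domains.
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in PERSONAL_DOMAINS
```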
Are the sizes of slices in the donut chart important and what do they mean?
The donut chart shows a single-variable analysis for each attribute considered for your model. The different sizes of the attributes in the donut give a visual sense of the scale of predictive power for the attributes. The eventual model will have features selected from among everything seen in the donut.
Can you download the PMML for a model built in Lattice?
No. Although LPI uses PMML internally in our model expression, this format is not available for download (except for cases where PMML was uploaded first).
Admin users do have access to the Model Summary page for their models, where JSON expression of the model and many other artifacts can be found.
How is the Feature Importance in the RF Model CSV file calculated?
The feature importance is a direct artifact of the underlying LPI modeling process, which is typically random forest.
How is missing value imputation handled?
For numerical attributes, a value is imputed based on similar cohort conversion behavior. The conversion rate of the missing-value cohort is compared to the conversion rates of other cohorts, and the best-matching value is imputed.
For categorical values (strings), the Null or empty case is simply treated as a value.
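The numerical case can be sketched as: measure how often the missing-value cohort converts, then impute the observed value whose cohort converts most similarly. This toy version groups by exact value where a real implementation would bin into ranges; it is an assumption-laden illustration, not Lattice's actual code:

```python
from collections import defaultdict

def impute_by_cohort(rows):
    """rows: (value_or_None, converted) pairs. Returns the imputed value.

    Picks the observed value whose cohort conversion rate is closest
    to the conversion rate of the missing-value cohort.
    """
    missing = [c for v, c in rows if v is None]
    miss_rate = sum(missing) / len(missing)

    # Conversion rate per observed value (a real system would bin ranges).
    groups = defaultdict(list)
    for v, c in rows:
        if v is not None:
            groups[v].append(c)
    rates = {v: sum(cs) / len(cs) for v, cs in groups.items()}

    # Best match: smallest gap in conversion behavior.
    return min(rates, key=lambda v: abs(rates[v] - miss_rate))
```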
How do I remove spam-suppressing attributes generated by Lattice from my model?
Using spam indicators (or not) is a first-class choice in the model creation flow. It is on by default for lead models, where user-filled data is frequent. If you don’t want to use this, change the “Use Transformations” setting in Advanced Settings when you create the model (it’s near the bottom of the file upload page).
Should I remove unmatched records from my training file before I upload?
Removal of unmatched records is optional in modeling – the platform will automatically down-sample them if there are too many in the training file. This allows the model to learn and can yield better scores on new records. If you want to remove some or all of these from your file up front, see below for recommendations on removing rows based on Lattice field values.
How do I identify rows in my training data based on Lattice values?
To remove rows from your training based on Lattice field values, try first running your file through the score and enrich flow in the model (sometimes known as flat file scoring). You can enrich with the firmographic fields you are interested in, including “Is Matched”.
What is Remodel in the left Model navigation menu?
Remodel is a feature that allows you to refresh your view of the Lattice Data Cloud without having to re-pull and prepare your training file. This way, you have the option to quickly leverage our incremental data updates (which arrive every month or so). When you use custom attributes in your models, Remodel also adds some additional insights on how safe those attributes are for scoring.
What metric is used to optimize the model?
The model is optimized for segmentable conversion lift. This means we look for a high top-20% conversion rate, as well as a smooth, decreasing shape to the lift chart.
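The top-segment part of that target can be sketched as a simple lift calculation over binary conversion labels. This is an illustrative stand-in, not Lattice's internal metric:

```python
def top_segment_lift(scores, labels, fraction=0.2):
    """Lift of the top-scoring fraction over the overall conversion rate.

    scores: model scores per record; labels: 1 = converted, 0 = did not.
    A lift of 3.0 means the top segment converts 3x the base rate.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    k = max(1, int(len(order) * fraction))
    top_rate = sum(labels[i] for i in order[:k]) / k
    overall_rate = sum(labels) / len(labels)
    return top_rate / overall_rate
```

A smooth, decreasing lift chart means this ratio stays well-ordered as you walk down the score bands, not just in the top 20%.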
Does over/undersampling happen behind the scenes where necessary?
Lattice does not oversample your success events, as this could cause overfitting. In a few cases, non-successes may be down-sampled.
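Down-sampling the non-successes can be pictured as keeping every success and capping the negatives at some multiple of the positives. The ratio and field names here are illustrative assumptions, not the platform's actual thresholds:

```python
import random

def downsample_negatives(rows, max_ratio=10, seed=42):
    """Keep all successes; cap non-successes at max_ratio per success.

    rows: list of dicts with an "event" key (1 = success, 0 = non-success).
    The 10:1 cap and fixed seed are illustrative, not Lattice's settings.
    """
    positives = [r for r in rows if r["event"] == 1]
    negatives = [r for r in rows if r["event"] == 0]
    cap = max_ratio * len(positives)
    if len(negatives) > cap:
        # Random sample without replacement, seeded for reproducibility.
        negatives = random.Random(seed).sample(negatives, cap)
    return positives + negatives
```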
Does the tool compare model performance on train and test datasets to flag models which are at a high risk of overfitting?
Yes, the tool scores both test and training, and provides visualizations of both as a part of the suite of diagnostics created for every model.