Please note that the instructions for setting up product purchase data in Rev.Up apply to the legacy platform. Reach out to your Customer Success Manager for instructions on importing this data.
Before importing product bundle data, make sure you have first reviewed the Configuring Identity Resolution article. You should already have created a system catalog, defined the priority of your systems, and defined how the systems are related.
You must also have created the account template that your product purchases are associated with. If you have not done this, review the importing accounts article.
If you have completed this, you are now ready to set your templates and load data.
What is a Product Bundle?
Product Bundles let you group different SKUs together by solution or product area. You tag each SKU with the solution or product area it belongs to. A SKU can belong to many bundles, and a bundle can contain many SKUs.
As you can see, Product Bundles have a different construct than the data you have seen so far. SKUs and bundles have a many-to-many relationship, which can be represented in different ways. The Atlas platform requires this file to conform to one schema: we currently use a one-to-one representation in each row, as shown in the screenshot above. That is, each row contains one SKU and the name of one bundle associated with it. If a SKU belongs to more than one bundle, each additional relation is represented in a new row. You will need to analyze your products and decide your SKU-to-bundle relations before you lay out this construct.
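As a sketch of flattening the many-to-many relationship into the one-SKU-one-bundle-per-row layout described above, the snippet below builds the file from a set of hypothetical bundle definitions (the SKU and bundle names are illustrative only):

```python
import csv
import io

# Hypothetical bundle definitions: bundle name -> the SKUs it contains.
# "SKU-100" appears in two bundles, so it produces two rows.
bundles = {
    "Analytics Suite": ["SKU-100", "SKU-101"],
    "Starter Pack": ["SKU-100", "SKU-200"],
}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Product Id", "Product Bundle Name"])
for bundle_name, skus in bundles.items():
    for sku in skus:
        # One row per SKU/bundle pair.
        writer.writerow([sku, bundle_name])

print(buf.getvalue())
```

Each SKU that belongs to more than one bundle simply repeats on additional rows, which is the representation the platform expects.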
Creating Import Templates
Step 1: Mapping Product Bundle Fields
- Product Id
- Product Bundle Name
- Product Bundle Description - You have the option of including a description of your product bundle. This description will display in the D&B CDP UI and provide helpful information to users about what is included in the bundle.
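Putting the three fields above together, a minimal sample file for template creation might look like the following (all SKU names, bundle names, and descriptions are hypothetical):

```csv
Product Id,Product Bundle Name,Product Bundle Description
SKU-100,Analytics Suite,Dashboards and reporting add-ons
SKU-100,Starter Pack,Entry-level bundle for new customers
SKU-200,Starter Pack,Entry-level bundle for new customers
```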
Step 1a: Load a Sample File to Create a New Template
Start by clicking on the Import Data button.
Click on Create Template for the Product Bundle Object.
We recommend using a sample file to create your template. You can load your actual data once the template has been created.
Step 1b: Map Product Bundle Fields
This is a unique ID that identifies the SKU of the transaction. It is also used to join with the product table. Refer to the Product Bundle and Product Hierarchy sections for more information. The field can accept alphanumeric characters.
This is the actual bundle name. The field can accept alphanumeric characters.
This is any description you want to attach to the bundle. This field is optional. This field can accept alphanumeric characters.
Step 2: Validation and Save
Step 2a: Save Template
Once the upload and field mapping process is complete, D&B CDP provides an option to import the data along with the template creation. If you check this option, the file is queued for validation and import. We recommend setting up all the templates first before uploading data.
You will need to confirm by clicking “Submit”. Clicking “Submit” with the import data option selected takes you to the Jobs page, where you can track the progress of the job. For more information on job processing, refer to the Data Processing and Analysis tab under the Jobs page.
Step 2b: Pause Automated Sync for Template(s)
A customer master is the final view of data from all your systems. It is strongly recommended that the relationships across systems that contribute to the master are clearly defined upfront before loading the data.
Any changes to the relationship after data is loaded will require a deletion and rebuilding of the customer master. This is an expensive process that can delay access to the D&B CDP platform.
To avoid accidental data loads, D&B CDP strongly recommends pausing the automated data sync for your template(s). Pausing the automated sync ensures that data available in S3 will not be automatically imported.
Step 2c: Create Data Automation Pipeline
Each entity in a system has a dedicated AWS S3 drop folder. Data for each entity from the system can be sent to the drop folder in CSV format. The CSV columns should match the template defined in step 2.
Any changes to the format should follow these steps:
- Pause the automated sync for the template(s).
- Modify the template(s) to reflect the changes.
- Send data in the new format.
- Activate the automated sync for the template(s).
It’s recommended that you automate the data transfer from your systems to D&B CDP after templates have been set up. There are several options, including:
- Use built-in connectors to set up automation. This option is only available for Salesforce, Marketo, Eloqua and Pardot.
- Use commercial ETL (extract, transform and load) tools such as Informatica, Dell Boomi, Stitch Data, etc. to transfer data to specific AWS S3 folders.
- Create custom scripts to extract data from the systems and copy to specific AWS S3 folders.
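As a sketch of the custom-script option above, the following Python outline writes a product bundle CSV and shows where the copy to the S3 drop folder would happen. The bucket name and folder path are placeholders (your actual S3 location is provisioned by D&B CDP), and the boto3 upload call is left commented out since it requires AWS credentials:

```python
import csv
from pathlib import Path

# Placeholder values: your actual bucket and drop-folder path are
# provisioned by D&B CDP and will differ.
DROP_BUCKET = "example-cdp-bucket"
DROP_PREFIX = "dropfolder/product_bundle/"

rows = [
    ("SKU-100", "Analytics Suite"),
    ("SKU-100", "Starter Pack"),
]

out_file = Path("product_bundle.csv")
with out_file.open("w", newline="") as f:
    writer = csv.writer(f)
    # Column names must match the template defined earlier.
    writer.writerow(["Product Id", "Product Bundle Name"])
    writer.writerows(rows)

# Upload with boto3 (requires credentials with access to the drop folder):
# import boto3
# boto3.client("s3").upload_file(str(out_file), DROP_BUCKET, DROP_PREFIX + out_file.name)
```

In a real pipeline, the `rows` list would be produced by an extract step against your source system rather than hard-coded.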
Note: Data can also be loaded directly in the UI. This option is only recommended for smaller data sets and ad-hoc data loads.
Step 2d: Activate Automated Sync for Template(s)
Before activating automated sync for the template(s), we recommend verifying that:
- Templates are set up correctly and accurately reflect unique IDs and match IDs.
- Data being loaded matches the template and is available in the correct location.
Activate the automated sync for your template(s). Once activated, D&B CDP will monitor for data in the drop folder and automatically import data.
Considerations for Loading Product Bundle Data:
Editing Product Bundles
Product Bundles cannot be loaded incrementally across two different process and analyze jobs. If there are multiple Product Bundle files, they must be provided in the same process and analyze job. When a new file is provided in a later job, the new file replaces the entire set of Product Bundles. If you provide an empty file, the previous Product Bundles present in the system are used.
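Because a later job's file replaces the whole bundle set, one way to handle multiple source files is to concatenate them into a single file before each job. A minimal sketch, using hypothetical in-memory CSV contents in place of real files:

```python
import csv
import io

# Hypothetical source files, represented here as in-memory CSV text.
file_a = "Product Id,Product Bundle Name\nSKU-100,Analytics Suite\n"
file_b = "Product Id,Product Bundle Name\nSKU-200,Starter Pack\n"

combined = io.StringIO()
writer = csv.writer(combined)
writer.writerow(["Product Id", "Product Bundle Name"])
for text in (file_a, file_b):
    reader = csv.reader(io.StringIO(text))
    next(reader)  # skip each source file's header row
    writer.writerows(reader)

print(combined.getvalue())
```

The combined file carries a single header and every SKU/bundle row from all sources, so one job can load the complete bundle set.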
You can download the current product bundle configuration from the existing template.
Choosing File Column Names
The platform does not allow duplicate column names. The platform has a smart auto-mapping feature, so names such as “Account ID”, “accountid”, “ account id”, etc. will all be identified as the ID column. Using more than one of these causes a duplicate column issue and fails the upload. In such cases, you will need to change the column name to something else before uploading.
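To catch such collisions before uploading, you could approximate the auto-mapper by normalizing each header name. The exact normalization D&B CDP applies is not documented here, so lowercasing and dropping spaces and underscores is an assumption for illustration:

```python
def normalized(name: str) -> str:
    # Assumed normalization: lowercase and drop spaces/underscores, so
    # "Account ID", "accountid", and " account id" all collide.
    return "".join(ch for ch in name.lower() if ch not in " _")

def find_collisions(headers):
    """Return pairs of header names that normalize to the same column."""
    seen = {}
    collisions = []
    for h in headers:
        key = normalized(h)
        if key in seen:
            collisions.append((seen[key], h))
        else:
            seen[key] = h
    return collisions

# A non-empty result means the file would fail upload until a column is renamed.
print(find_collisions(["Account ID", "accountid", "Bundle Name"]))
```

Running a check like this over your file's header row before upload surfaces duplicate-column failures early.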
Column Name Limits
Each column name has a fixed maximum length of 63 characters.