Every Rev.Up tenant is provisioned with an AWS S3 bucket that can be used to automate importing data into Rev.Up.
Prerequisites:
- Complete Step 1: [Import Data] Create a Custom Data Import Connection.
- You must have an Admin user profile to generate or re-generate credentials.
Step 1: Gather the S3 credentials that were generated when you created the custom connection in the prerequisite step.
If you did not download and save your credentials, you can still access them by clicking the View Credentials button for the source you want to import data into. A pop-up will display your S3 credentials.
Step 2: Locate the AWS S3 file path for each data source that you created.
Each data source that you created as part of the custom connection will have its own S3 path location. A path location represents a folder that is used to deliver data to Rev.Up for automated import.
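As an illustration, an S3 path location is typically written as an `s3://` URI; many upload tools ask for the bucket and key prefix separately. The helper below is a hypothetical sketch (the bucket and folder names are placeholders, not real Rev.Up values; copy your actual path from the credentials pop-up):

```python
# Hypothetical helper: split an S3 path location of the form
# s3://<bucket>/<prefix>/ into the bucket and key prefix that
# most upload tools ask for. The values shown are placeholders.

def split_s3_path(s3_path: str) -> tuple[str, str]:
    """Return (bucket, key_prefix) for an s3:// style path."""
    if not s3_path.startswith("s3://"):
        raise ValueError(f"not an s3:// path: {s3_path}")
    bucket, _, prefix = s3_path[len("s3://"):].partition("/")
    return bucket, prefix

bucket, prefix = split_s3_path("s3://example-tenant-bucket/dropfolder/accounts/")
print(bucket)  # example-tenant-bucket
print(prefix)  # dropfolder/accounts/
```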
The S3 path location for your data source can be copied from the credentials details or from the AWS S3 Location column on the Source page.
Step 3: Create an automated workflow to transfer files from your source system to your AWS S3 location.
Use your S3 path location, Access Key, Access Secret, and other bucket information to set up an integration between an automation tool of your choice and the S3 account. The automation tool pushes your data files (CSV format) to the AWS S3 file locations of your data sources.
Below are several tools that customers have commonly used to automate sending data to AWS S3 buckets:
- S3 Browser
- Cyberduck
- AWS CLI
- Python/Boto3
- Alteryx
- SmartFTP
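If you script the transfer yourself, the core of the workflow is pairing each local CSV file with its destination S3 URI and then handing each pair to your upload tool. The sketch below is a minimal, hypothetical example in Python (the folder and bucket names are placeholders); the actual push would be done with one of the tools above, e.g. `aws s3 cp` or Boto3's `upload_file`:

```python
# Minimal sketch: pair each local CSV with its destination S3 URI.
# local_dir and s3_path are placeholders; substitute your own export
# folder and the AWS S3 path location copied from your source.
from pathlib import Path

def plan_transfers(local_dir: str, s3_path: str) -> list[tuple[str, str]]:
    """Return (local_file, s3_uri) pairs for every CSV in local_dir.

    Each pair would then be pushed with your chosen tool, for example
    `aws s3 cp <local_file> <s3_uri>` or Boto3's upload_file().
    """
    pairs = []
    for csv_file in sorted(Path(local_dir).glob("*.csv")):
        pairs.append((str(csv_file), s3_path.rstrip("/") + "/" + csv_file.name))
    return pairs
```

Running this on an export folder before uploading lets you log or verify exactly which files will land in which S3 locations, which is useful when one custom connection has several data sources with different paths.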