Warehouse Loads
Your CRM system is respected as the source of truth about your constituents. To honor that truth for as much of the day as possible, we run our services off of a series of derived warehouses that are refreshed daily.
Base Feeds
To get you up and running as quickly as possible, we most likely requested four "Wave 1" feeds from you:
Warehouse
WarehouseDegrees
WarehouseDesignations
WarehouseLookups
Additional data feeds would follow in "Wave 2" and beyond as needs were identified and data was sourced. The intent here was to get a functional pattern in place before expanding the footprint of data requests.
Prerequisites
An AlumnIQ API key - sourced from the AlumnIQ admin console, unique to QA and Production environments
An Amazon S3 IAM keypair - access key id + access key secret
Knowledge of how to script the build of a delimited text file
Knowledge of how to script an automated file transfer to S3
Knowledge of how to invoke a web service
If you're unsure how to perform any of these tasks, please enlist technical help to complete the mission.
Getting data to AlumnIQ
You build the file(s)
You move the file(s) into an Amazon S3 bucket
You then make a unique API call for each feed you're sending, instructing us to ingest the data
You receive an immediate response letting you know whether or not the job was enqueued
Step 1: Building the file
AlumnIQ will provide you with a data warehouse field map document that identifies which fields we need and what we're expecting to see in them. This living document will be added to as new feeds are identified and sent to AlumnIQ.
The warehouse field map also includes suggestions for formatting the files; a brief example follows this list:
UTF-8
delimiter selection
escaping delimiters within field values
compression
headers
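For instance, here is a minimal sketch of building one feed in Python, assuming a comma-delimited UTF-8 file with a header row; the column names are placeholders, not the actual field map (always work from the field map document AlumnIQ provides):

```python
# Sketch: write a UTF-8, comma-delimited feed file with a header row.
# Column names are placeholders; use the warehouse field map for the real ones.
import csv

rows = [
    {"constituent_id": "000123", "first_name": "Ada", "last_name": "Lovelace"},
    {"constituent_id": "000456", "first_name": "Alan", "last_name": "Turing"},
]

with open("warehouse.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["constituent_id", "first_name", "last_name"],
        quoting=csv.QUOTE_ALL,  # quoting protects delimiters that appear inside field values
    )
    writer.writeheader()
    writer.writerows(rows)
```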
Step 2: Moving the file
Don't have S3 Access Keys yet? Go get 'em! They also rotate quarterly. Send us an email if you'd like a reminder when they rotate.
You'll be given a bucket name into which your files should be placed:
QA:
BUCKET_NAME/qa/warehouse
Prod:
BUCKET_NAME/production/warehouse
What you name the file is up to you. Many customers name it the same as the warehouse name (warehouse.csv, warehouse-degrees.csv, etc.).
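If you script the transfer in Python, a rough sketch using boto3 might look like the following; the bucket name, key, and credentials are placeholders, and the prefix shown targets QA (swap in the production prefix for Prod):

```python
# Sketch: upload a feed file into the QA warehouse prefix of the provisioned bucket.
# Credentials come from the IAM keypair listed in the prerequisites.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",        # from the provisioned IAM keypair
    aws_secret_access_key="YOUR_ACCESS_KEY_SECRET",
)

# upload_file(local_path, bucket, key)
s3.upload_file("warehouse.csv", "BUCKET_NAME", "qa/warehouse/warehouse.csv")
```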
Step 3: Telling us to import the file
Once the file(s) are in place, you'll need to hit a webhook on AlumnIQ to tell us to import. Why do we do this? Because sometimes you need to push an emergency feed update to AlumnIQ, and by putting the ingest command in your hands you can do that any time you need. It also ensures you're fully aware of where your data is going and when.
Each file that you provide to us will have a designated job to consume it. These jobs are listed for your reference in the AlumnIQ admin under Tasks > Webhooks. For example, let's say that you uploaded a file named warehouse.csv and the job name to consume that file is warehouse. To start the job, you will make an HTTP POST request to the webhook URL listed for that job. Note that the final section of the URL is warehouse; this is the name of the job that should be run.
Your request should use the POST method, and the body should contain the following fields:
The API key you use here will be provisioned for you in AlumnIQ Admin and is unique to the environment (QA vs Production).
The filename is the file that you uploaded to Amazon S3 and that you want the job to consume (e.g. warehouse.csv). You may optionally specify an email address in notifyEmail where we will report success or failure upon completion. You may optionally specify a notifyWebhook URL that we can call to report job success/failure, as well. You may specify both notifyEmail and notifyWebhook if you like; they are not mutually exclusive. You may also choose to provide neither, but in that case you will not be notified of the success or failure of the file's consumption.
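A rough sketch of the ingestion call in Python follows. The webhook URL host/path and the apikey field name are assumptions (copy the real URL from Tasks > Webhooks and confirm how the key is passed); filename, notifyEmail, and notifyWebhook are the body fields described above, and a JSON body is assumed:

```python
import requests

# Placeholder URL: copy the real webhook URL for the "warehouse" job from
# Tasks > Webhooks. The final URL segment is the job name.
webhook_url = "https://YOUR-ALUMNIQ-HOST/.../warehouse"

payload = {
    "apikey": "YOUR_QA_OR_PROD_API_KEY",                    # assumed field name; environment-specific key
    "filename": "warehouse.csv",                            # the file you uploaded to S3
    "notifyEmail": "dataops@example.edu",                   # optional
    "notifyWebhook": "https://example.edu/alumniq/notify",  # optional
}

response = requests.post(webhook_url, json=payload, timeout=30)

if response.status_code == 200:
    # File validated and job queued; keep the jobId to correlate with the
    # completion notification later.
    print("Queued, jobId:", response.json().get("jobId"))
else:
    # A 400 indicates a client-side problem (unknown job, missing filename, etc.)
    print("Rejected:", response.status_code, response.text)
```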
Repeat these calls with the updated webhook name + filename for every feed you want us to ingest. These calls will result in an immediate response indicating whether we successfully received the job (not processed it yet!).
Sample response payloads for the ingestion call:
Failure due to job not existing
If you try to execute a job that doesn't exist, you'll get this response payload. Note that the HTTP status will be 400 to indicate that there was a client-side error:
Failure to pass all necessary params in the body payload
Similarly, here is a sample response if your request doesn't include the required filename attribute. Again, the HTTP status of the response will be 400.
Everything is fine
If the request had no errors, the response might look like the following, with an HTTP status of 200 to represent successful file validation and job queueing:
Requests are queued
Warehouse sync requests are queued rather than being run inline with your API request. Depending on the size of the job and of the other jobs in the queue, it could be some time before we begin processing your request. For the largest of your files (usually your primary constituents file), processing could take 30-60 minutes.
Occasionally processing jobs fail for unpredictable reasons. Such is the nature of the cloud. In these cases we will automatically retry the job several times. If the job fails 5 times, our staff will be notified and we may contact you if we believe that a change is needed on your side of the integration.
If requested (see notifications above), you should always receive a success or failure notification for every job. The notification will include the same jobId that is returned in the API response at the time of queueing. If you haven't received the notification yet, it is likely that the job hasn't completed yet. It could be that it is failing and retrying, or it could be that it's still waiting in the job queue.
While queueing does add some uncertainty to the timing of your imports, it allows us to process import requests more rapidly, efficiently, and reliably.
Step 4: Finding out if it worked
As noted above, you may optionally specify an email address in notificationSettings.emails where we will report success or failure upon completion. You may also specify a webhook URL in notificationSettings.webhook that we can call to report job success/failure. You may use both at the same time if you like.
To set up a webhook, you need to create a publicly accessible URL that we can call. We'll send a POST request to that URL when the job is complete, with a JSON payload in this format:
If the job succeeded:
If the job failed:
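Whatever the exact success and failure formats look like, a minimal receiver can key off the jobId described earlier. Here is a rough Flask sketch in which the succeeded and error fields are hypothetical stand-ins for the real payload fields:

```python
# Sketch of a completion-notification receiver. Only jobId is taken from this
# page; "succeeded" and "error" are assumed field names used for illustration.
from flask import Flask, request

app = Flask(__name__)

@app.route("/alumniq/notify", methods=["POST"])
def alumniq_notify():
    payload = request.get_json(force=True)
    job_id = payload.get("jobId")
    if payload.get("succeeded"):                                # assumed field name
        print(f"Job {job_id} completed successfully")
    else:
        print(f"Job {job_id} failed: {payload.get('error')}")   # assumed field name
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```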
To achieve total awareness of the state of your data sync, you may consider implementing a "Dead Man's Switch" mechanism to monitor the completion notifications for your jobs. AlumnIQ makes extensive use of a service named Dead Man's Snitch to monitor our internal jobs and services and we love it.
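One way to wire that up, sketched with a hypothetical check-in URL: have your completion handler ping the switch only on success, so a day without a successful warehouse sync surfaces as a missed check-in.

```python
import requests

# Hypothetical check-in URL from your monitoring service (for example, a
# Dead Man's Snitch snitch); the service alerts you if check-ins stop arriving.
CHECKIN_URL = "https://example.com/dead-mans-switch/check-in"

def report_sync_success(job_id: str) -> None:
    """Call this from your completion handler only when a job succeeds."""
    requests.get(CHECKIN_URL, timeout=10)
    print(f"Checked in after successful job {job_id}")
```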