Sending us your data

To provide the functionality you're looking for in AlumnIQ, we need a copy of relevant data from your system of record: constituents, their degree and giving data, affiliations, and so on. We accomplish this by providing an Amazon S3 bucket into which you can place files for us to consume. After the files are in place, you can make a simple HTTP request to our API to queue the jobs that consume those files, and we will notify you upon success or failure.

The S3 access keys are available to you in the AlumnIQ admin, and will be rotated every quarter.

During our onboarding process, we will create a bucket for you in Amazon S3 and provide you with credentials to write to it. For example, we might create a bucket named private-iqu. You would then write your file into private-iqu/production/warehouse/ to make it available to our production server, or into private-iqu/qa/warehouse/ to make it available to our QA server.
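If you are scripting your uploads, a minimal sketch in Python using boto3 (the AWS SDK) might look like the following. The bucket name and key prefix come from the example above; the credential placeholders stand in for the keys we provision, and boto3 can also read them from your environment or a shared credentials file:

import boto3

# Credentials provisioned during onboarding; replace the placeholders,
# or let boto3 pick them up from the environment instead.
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

# Writing under production/warehouse/ targets the production server;
# use qa/warehouse/ to target the QA server instead.
s3.upload_file(
    Filename="warehouse.csv",                  # local file to upload
    Bucket="private-iqu",                      # bucket created during onboarding
    Key="production/warehouse/warehouse.csv",  # destination within the bucket
)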

That onboarding process will also involve discussions about what files you should generate, and their contents and format.

Making a request

Each file that you provide to us will have a designated job to consume it. These jobs are listed for your reference in the AlumnIQ admin, in the "Tasks" section, under "Webhooks." For example, let's say that you uploaded a file named warehouse.csv and the name of the job that consumes it is warehouse. To start the job, you will make an HTTP POST request to:

https://example.alumniq.com/api/v1/index.cfm/webhook/warehouse

Note that the final section of the URL is warehouse. This is the name of the job that should be run.

Your request should use the POST method, and the body should contain the following:

{
  "apiKey": "API_KEY_GOES_HERE",
  "filename": "warehouse.csv",
  "notifyEmail": "you@example.com",
  "notifyWebhook": "https://..."
}
  • The API key you use here will be provisioned for you in AlumnIQ Admin.

  • The filename is the file that you uploaded to Amazon S3 and that you want the job to consume (e.g. warehouse.csv).

  • You may optionally specify an email address in notifyEmail where we will report success or failure upon completion.

  • You may optionally specify a notifyWebhook URL that we can call to report job success/failure, as well.

  • You may specify both notifyEmail and notifyWebhook if you like; they are not mutually exclusive. You may also provide neither, but in that case you will not be notified of the success or failure of the file's consumption.
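As a concrete illustration, here is a minimal sketch of that request in Python using the requests library. The URL and body fields are taken from the examples above; only the library choice is ours:

import requests

# The final URL segment is the name of the job to run (here: warehouse).
url = "https://example.alumniq.com/api/v1/index.cfm/webhook/warehouse"

payload = {
    "apiKey": "API_KEY_GOES_HERE",     # provisioned in AlumnIQ Admin
    "filename": "warehouse.csv",       # the file you uploaded to S3
    "notifyEmail": "you@example.com",  # optional
}

response = requests.post(url, json=payload)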

Response to your API request

Your API request will receive a response like one of the following.

First, here is a sample response if you try to execute a job that doesn't exist. Note that the HTTP status will be 400 to indicate that there was a client-side error:

{
  "errors": ["Invalid Job Name: sudo_make_me_a_sandwich"]
}

Similarly, here is a sample response if your request doesn't include the required filename attribute. Again, the HTTP status of the response will be 400.

{
  "errors": ["Missing required option: Filename"]
}

If the request had no errors, the response might look like the following, with an HTTP status of 200 to represent successful file validation and job queueing:

{
  "jobId": "F6494FE4-8642-4F2E-A87DF336916CC246",
  "job": "warehouse-membership-summary",
  "filename": "warehouse-membership-summary.csv",
  "notificationSettings": {
    "emails": "you@example.edu",
    "webhook": ""
  }
}
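Continuing the request sketch above, a client might branch on these responses like so (the field names come from the samples; the logging is illustrative):

if response.status_code == 200:
    # Persist jobId so you can match it to the completion notification later.
    job_id = response.json()["jobId"]
    print(f"Queued job {job_id}")
else:
    # 400 responses carry an "errors" array describing what went wrong.
    for message in response.json()["errors"]:
        print(f"Request rejected: {message}")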

Requests are queued

As of July 2022, warehouse sync requests are queued rather than being run inline with your API request. Depending on the size of the job, and of the other jobs in the queue, it could be some time before we begin processing your request. For the largest of your files (usually your primary constituents file), processing could take 30-60 minutes.

Occasionally processing jobs fail for unpredictable reasons. Such is the nature of the cloud. In these cases we will automatically retry the job several times. If the job fails 5 times, our staff will be notified and we may contact you if we believe that a change is needed on your side of the integration.

If requested (see the notification options above), you will receive a success or failure notification for every job. The notification will include the same jobId that is returned in the API response at the time of queueing. If you haven't received a notification, the job most likely hasn't completed: it may be failing and retrying, or it may still be waiting in the job queue.

While queueing does add some uncertainty to the timing of your imports, it allows us to process import requests more rapidly, efficiently, and reliably.

Notifications

As noted above, you may optionally specify an email address in notifyEmail where we will report success or failure upon completion. You may also specify a webhook URL in notifyWebhook that we will call to report the result. You may use both at the same time if you like.

To set up a webhook, you need to create a publicly accessible URL that we can call. We'll send a POST request to that URL when the job is complete, with a JSON payload in one of the following formats:

If the job succeeded:

{
  "jobId": "F6494FE4-8642-4F2E-A87DF336916CC246",
  "status": "SUCCESS",
  "errors": null,
  "executionCompleted": true
}

If the job failed:

{
  "jobId": "F6494FE4-8642-4F2E-A87DF336916CC246",
  "status": "FAILED",
  "errors": [
    "Error message 1",
    "Error message 2"
  ],
  "executionCompleted": false
}
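One way to receive these notifications is a small HTTP endpoint. Here is a sketch using Flask; the route path and the print-based handling are assumptions, and only the payload fields come from the samples above:

from flask import Flask, request

app = Flask(__name__)

# AlumnIQ will POST the completion payload to this publicly accessible URL.
@app.route("/alumniq-notify", methods=["POST"])
def alumniq_notify():
    payload = request.get_json()
    job_id = payload["jobId"]
    if payload["status"] == "SUCCESS":
        print(f"Job {job_id} completed successfully")
    else:
        # "errors" is a list of messages when the status is FAILED.
        for message in payload["errors"] or []:
            print(f"Job {job_id} failed: {message}")
    return ("", 204)  # acknowledge receipt

if __name__ == "__main__":
    app.run(port=8080)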

To maintain full awareness of the state of your data sync, consider implementing a "Dead Man's Switch" mechanism to monitor the completion notifications for your jobs. AlumnIQ makes extensive use of a service named Dead Man's Snitch to monitor our own internal jobs and services.
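As a rough illustration of that idea (the 26-hour threshold and in-memory storage are assumptions, not an AlumnIQ feature), your webhook handler could record when each job last succeeded, and a scheduled check could raise an alarm when too much time has passed:

from datetime import datetime, timedelta

# Updated by your webhook handler whenever a job reports SUCCESS,
# e.g. last_success["warehouse"] = datetime.utcnow()
last_success = {}

def check_dead_mans_switch(job_name, max_silence=timedelta(hours=26)):
    """Alert if no success notification has arrived within the window."""
    seen = last_success.get(job_name)
    if seen is None or datetime.utcnow() - seen > max_silence:
        print(f"ALERT: no successful {job_name} sync in the last {max_silence}")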
