Warehouse Loads

Your CRM system is respected as the source of truth about your constituents. To honor that truth for as many minutes of the day as possible, our services run off of a series of derived warehouses, refreshed daily.

Base Feeds

To get you up and running as quickly as possible, we most likely requested four "Wave 1" feeds from you:

  • Warehouse

  • WarehouseDegrees

  • WarehouseDesignations

  • WarehouseLookups

Additional data feeds would follow in "Wave 2" and beyond as needs were identified and data was sourced. The intent was to get a functional pattern in place before expanding the footprint of data requests.

Prerequisites

  1. An AlumnIQ API key - sourced from the AlumnIQ admin console, unique to QA and Production environments

  2. An Amazon S3 IAM keypair - access key id + access key secret

  3. Knowledge of how to script the build of a delimited text file

  4. Knowledge of how to script an automated file transfer to S3

  5. Knowledge of how to invoke a web service

If you're unsure how to perform any of these tasks, please enlist technical help so you can complete the integration successfully.

Getting data to AlumnIQ

  1. You build the file(s)

  2. You move the file(s) into an Amazon S3 bucket

  3. You then make a unique API call for each feed you're sending, instructing us to ingest the data

  4. You await a response confirming whether the job was enqueued, and (optionally) a completion notification once it has been processed

Step 1: Building the file

AlumnIQ will provide you with a data warehouse field map document that identifies which fields we need and what we're expecting to see in them. This living document will be added to as new feeds are identified and sent to AlumnIQ.

The warehouse field map also includes suggestions for formatting the files, illustrated in the sketch after this list:

  • UTF-8 encoding

  • delimiter selection

  • escaping delimiters within field values

  • compression

  • headers
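
As a rough illustration, a build script applying those suggestions might look like the following Python sketch. The field names and values here are hypothetical, and gzip is shown only as an example of compression; your field map document is authoritative on all of these points.

import csv
import gzip

# Hypothetical field list -- consult your AlumnIQ field map for the real one.
FIELDS = ["constituentId", "firstName", "lastName", "email"]

rows = [
    {"constituentId": "1001", "firstName": "Ada", "lastName": "Lovelace",
     "email": "ada@example.edu"},
]

# Write a UTF-8, pipe-delimited, gzip-compressed file with a header row.
# QUOTE_MINIMAL quotes any value containing the delimiter, which handles
# escaping delimiters within field values.
with gzip.open("warehouse.csv.gz", "wt", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS, delimiter="|",
                            quoting=csv.QUOTE_MINIMAL)
    writer.writeheader()
    writer.writerows(rows)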

Step 2: Moving the file

You'll be given a bucket name into which your files should be placed:

  • QA: BUCKET_NAME/qa/warehouse

  • Prod: BUCKET_NAME/production/warehouse

What you name the file is up to you. Many customers name it after the feed (warehouse.csv, warehouse-degrees.csv, etc.).
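
As an illustration, a minimal upload using Python and boto3 might look like this. The bucket and key are the placeholders described above, and the credentials are the IAM keypair from the prerequisites.

import boto3

# Placeholder bucket/prefix -- AlumnIQ supplies your actual bucket name.
BUCKET = "BUCKET_NAME"
KEY = "qa/warehouse/warehouse.csv"  # use "production/warehouse/..." for Prod

# boto3 reads the access key id/secret from the environment or an AWS
# profile; avoid hard-coding the keypair in your script.
s3 = boto3.client("s3")
s3.upload_file("warehouse.csv", BUCKET, KEY)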

Step 3: Telling us to import the file

Once the file(s) are in place, you'll need to hit a webhook on AlumnIQ to tell us to import them. Why do we do this? Because sometimes you need to push an emergency feed update to AlumnIQ, and putting the command to ingest in your hands lets you do that any time you need. It also ensures you're fully aware of where your data is going and when.

Each file that you provide to us will have a designated job to consume it. These jobs are listed for your reference in the AlumnIQ admin under Tasks > Webhooks. For example, let's say that you uploaded a file named warehouse.csv and the job name to consume that file is warehouse. To start the job, you will make an HTTP POST request to:

https://YOUR_ALUMNIQ_DOMAIN/api/v1/index.cfm/webhook/warehouse

Note that the final section of the URL is warehouse. This is the name of the job that should be run.

Your request should use the POST method, and the body should contain the following:

{
  "apiKey": "API_KEY_GOES_HERE",
  "filename": "warehouse.csv",
  "notifyEmail": "you@example.com",
  "notifyWebhook": "https://..."
}
  • The filename is the file that you uploaded to Amazon S3 and that you want the job to consume (e.g. warehouse.csv).

  • You may optionally specify an email address in notifyEmail where we will report success or failure upon completion.

  • You may optionally specify a notifyWebhook URL that we can call to report job success/failure, as well.

  • You may specify both notifyEmail and notifyWebhook if you like; they are not mutually exclusive. You may also choose to provide neither, but in that case you will not be notified whether the file was consumed successfully.
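
Putting this together, a rough sketch of the ingestion call in Python (the domain, API key, and notification address are placeholders):

import requests

# Placeholders -- substitute your own domain, API key, and email address.
url = "https://YOUR_ALUMNIQ_DOMAIN/api/v1/index.cfm/webhook/warehouse"
payload = {
    "apiKey": "API_KEY_GOES_HERE",
    "filename": "warehouse.csv",
    "notifyEmail": "you@example.com",
}

resp = requests.post(url, json=payload, timeout=30)
if resp.ok:
    print("Job queued:", resp.json()["jobId"])
else:
    print("Request rejected:", resp.json()["errors"])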

Repeat these calls with the updated webhook name + filename for every feed you want us to ingest. Each call results in an immediate response indicating whether we successfully received the job (not processed it yet!).

Sample response payloads for the ingestion call:

Failure due to job not existing

If you try to execute a job that doesn't exist, you'll get this response payload. Note that the HTTP status will be 400 to indicate that there was a client-side error:

{
  "errors": ["Invalid Job Name: sudo_make_me_a_sandwich"]
}

Failure to pass all necessary params in the body payload

Similarly, here is a sample response if your request doesn't include the required filename attribute. Again, the HTTP status of the response will be 400.

{
  "errors": ["Missing required option: Filename"]
}

Everything is fine

If the request had no errors, the response might look like the following, with an HTTP status of 200 to represent successful file validation and job queueing:

{
  "jobId": "F6494FE4-8642-4F2E-A87DF336916CC246",
  "job": "warehouse-membership-summary",
  "filename": "warehouse-membership-summary.csv",
  "notificationSettings": {
    "emails": "you@example.edu",
    "webhook": ""
  }
}

Requests are queued

Warehouse sync requests are queued rather than being run inline with your API request. Depending on the size of the job, and of the other jobs in the queue, it could be some time before we begin processing your request. For the largest of your files (usually your primary constituents file), processing could take 30-60 minutes.

Occasionally processing jobs fail for unpredictable reasons. Such is the nature of the cloud. In these cases we will automatically retry the job several times. If the job fails 5 times, our staff will be notified and we may contact you if we believe that a change is needed on your side of the integration.

If requested (see notifications above), you should always receive a success or failure notification for every job. The notification will include the same jobId that is returned in the API response at the time of queueing. If you haven't received a notification, the job likely hasn't completed; it may be failing and retrying, or still waiting in the job queue.

While queueing does add some uncertainty to the timing of your imports, it allows us to process import requests more rapidly, efficiently, and reliably.

Step 4: Finding out if it worked

As noted above, you may optionally specify an email address (notifyEmail) where we will report success or failure upon completion, and/or a webhook URL (notifyWebhook) that we can call to report the same. You may use both at the same time if you like.

To set up a webhook, you need to create a publicly accessible URL that we can call. We'll send a POST request to that URL when the job is complete, with a JSON payload in this format:

If the job succeeded:

{
  "jobId": "F6494FE4-8642-4F2E-A87DF336916CC246",
  "status": "SUCCESS",
  "errors": null,
  "executionCompleted": true
}

If the job failed:

{
  "jobId": "F6494FE4-8642-4F2E-A87DF336916CC246",
  "status": "FAILED",
  "errors": [
    "Error message 1",
    "Error message 2"
  ],
  "executionCompleted": false
}
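
If you go the webhook route, a minimal receiver sketch in Python using Flask might look like the following. The route path /alumniq-notify is an arbitrary choice for illustration, not something AlumnIQ prescribes.

from flask import Flask, request

app = Flask(__name__)

# Minimal receiver for the completion callback payloads shown above.
@app.route("/alumniq-notify", methods=["POST"])
def alumniq_notify():
    payload = request.get_json()
    if payload["status"] == "SUCCESS":
        print("Job", payload["jobId"], "completed")
    else:
        print("Job", payload["jobId"], "failed:", payload["errors"])
    return "", 204  # acknowledge receipt; no response body needed

if __name__ == "__main__":
    app.run(port=8000)  # expose publicly, e.g. behind a reverse proxy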


A few additional notes:

Don't have S3 Access Keys yet? Go get 'em! They also rotate quarterly. Send us an email if you'd like a reminder when they rotate.

The API key you use here will be provisioned for you in AlumnIQ Admin and is unique to the environment (QA vs Production).

To achieve total awareness of the state of your data sync, you may consider implementing a "Dead Man's Switch" mechanism to monitor the completion notifications for your jobs. AlumnIQ makes extensive use of a service named Dead Man's Snitch to monitor our internal jobs and services and we love it.