Direct Data API

Direct Data API is a new class of API that provides high-speed, read-only access to Vault data. It is a reliable, easy-to-use, timely, and consistent API for extracting Vault data.

It is designed for organizations that wish to replicate large amounts of Vault data to an external database, data warehouse, or data lake. Common use cases include:

Direct Data API is not designed for real-time application integration.

In v24.1+, this API supports the extraction of Vault objects, document versions, picklists, workflows, and audit logs.

Included Data

Direct Data API extracts the following data from your Vault:

Direct Data API is not configurable, and all of the above data is always made available. You can use or ignore the data in the files.

What Does Direct Data API Provide

Direct Data API provides the following:

Benefits of Using Direct Data API

There are several benefits of using Direct Data API to extract data from your Vault:

Simpler Integrations

A Direct Data file is produced in a fixed, well-defined, and easy-to-understand format. This simplifies integrations, as the user doesn’t need to know the data model of the components or make multiple calls to different endpoints to build the dataset. However, Direct Data API does include the data model for all the objects and documents in a single metadata.csv file so that tables can be created in the external system based on the data provided.

Faster Integrations

Direct Data API continuously collects and stages the data in the background and publishes it as a single file for the interval specified. This is significantly faster than extracting the data via traditional APIs, which may require multiple calls based on the number of records being extracted.

More Timely Integrations

Files are always published at fixed times and at a regular cadence. Direct Data API provides Incremental files every 15 minutes, tracking changes in a 15-minute interval, which makes it possible to update the data warehouse on a more timely basis.

Consistent

Direct Data API provides a transactionally consistent view of data across Vault at a given time, called stop_time.

Retrieving Direct Data Files

To retrieve Direct Data files, you can use the following endpoints:

  1. Retrieve Available Direct Data Files: This endpoint lists all files available for download.
  2. Download Direct Data File: This endpoint downloads a specific Direct Data file.

Learn more about these endpoints in our video walkthrough.

Getting Started: Download a Direct Data File with a Shell Script

As an alternative to using the Vault Postman Collection, you can use the example Shell script below to download Direct Data files from your Vault. The script uses your provided credentials and the filters specified to download all file parts of the latest available Direct Data file.

The script requires the following variables:

This script runs natively on macOS and other Unix-like systems. On Windows, the script requires Bash; if you have Git installed, you can use Git Bash.

Run this script from the directory where you would like to download the Direct Data file. If there are multiple file parts, the script combines them into a single .tar.gz file.

Download Direct Data File

Click to expand the code example.


# Add the vault_dns of your Vault
vault_dns="your-vault.veevavault.com"
# Add in your session ID
session_id="YOUR_SESSION_ID"
# Add "full_directdata", "incremental_directdata", or "log_directdata"
extract_type="full_directdata"
# For "full_directdata" always use 2000-01-01T00:00Z as the start_time
start_time="2000-01-01T00:00Z"
# Add the stop_time
stop_time="2024-06-01T15:15Z"
# Will place the files in the current folder where the script runs
target_directory="$(pwd)"

# Perform the API call to retrieve the list of Direct Data files
direct_data_file_list_response=$(curl -s -X GET -H "Authorization: $session_id" \
                                -H "Accept: application/json" \
                                "https://$vault_dns/api/v24.1/services/directdata/files?extract_type=$extract_type&start_time=$start_time&stop_time=$stop_time")

# Extract the response status from the API response
response_status=$(echo "$direct_data_file_list_response" | grep -o '"responseStatus":"[^"]*' | sed 's/"responseStatus":"//')

# Check if the API call was successful
if [ "$response_status" != "SUCCESS" ]; then
    error_message=$(echo "$direct_data_file_list_response" | grep -o '"message":"[^"]*' | sed 's/"message":"//' | tr -d '"')
    if [ -z "$error_message" ]; then
        printf "Retrieve Available Direct Data Files call failed. Exiting script.\n"
    else
        printf "Retrieve Available Direct Data Files call failed with error: %s\n" "$error_message"
    fi
    exit 1
else
    printf "Retrieve Available Direct Data Files call succeeded.\n"
    
    # Extract data array
    data=$(echo "$direct_data_file_list_response" | grep -o '"data":\[[^]]*\]' | sed 's/"data":\[//' | tr -d ']')

    # Count file parts
    fileparts=$(echo "$data" | grep -o '"fileparts":[0-9]*' | sed 's/"fileparts"://')

    # Check if fileparts is null or empty
    if [ -z "$fileparts" ]; then
        printf "No Direct Data Extract Files found for '%s' with start_time = '%s' and stop_time = '%s'.\n" "$extract_type" "$start_time" "$stop_time"
        exit 0
    fi

    if [ "$fileparts" -gt 1 ]; then
        printf "Multiple file parts.\n"

        # Handling multiple file parts
        filepart_details=$(echo "$data" | grep -o '"filepart_details":\[{"[^]]*' | sed 's/"filepart_details":\[//' | tr -d ']')
        filepart_details=$(echo "$filepart_details" | sed 's/},{/}\n{/g')
        filename=$(echo "$data" | grep -o '"filename":"[^"]*' | sed 's/"filename":"//' | tr -d '"' | head -n 1)

        while IFS= read -r filepart_detail; do
            filepart_url=$(echo "$filepart_detail" | grep -o '"url":"[^"]*' | sed 's/"url":"//' | tr -d '"')
            output_filepart_name=$(echo "$filepart_detail" | grep -o '"filename":"[^"]*' | sed 's/"filename":"//' | tr -d '"')
            curl -o "$output_filepart_name" -X GET -H "Authorization: $session_id" \
                                  -H "Accept: application/json" \
                                  "$filepart_url"
        done <<< "$filepart_details"

        # Combine file parts into the complete archive (parts are named {filename}.{filepart})
        name=$(echo "$data" | grep -o '"name":"[^"]*' | sed 's/"name":"//' | tr -d '"' | head -n 1)
        cat "$filename".* > "$filename"

        full_path="$target_directory/$name"
        if [ ! -d "$full_path" ]; then
            # Directory does not exist, create it
            mkdir -p "$full_path"
            printf "Directory '%s' created.\n" "$full_path"
        else
            printf "Directory '%s' already exists.\n" "$full_path"
        fi

        tar -xzvf "$filename" -C "$full_path"
    else
        printf "Only one file part.\n"

        # Handling single file part
        filepart_detail=$(echo "$data" | grep -o '"filepart_details":\[{"[^]]*' | sed 's/"filepart_details":\[//' | tr -d '{}')
        filepart_url=$(echo "$filepart_detail" | grep -o '"url":"[^"]*' | sed 's/"url":"//' | tr -d '"')
        filename=$(echo "$data" | grep -o '"filename":"[^"]*' | sed 's/"filename":"//' | tr -d '"' | head -n 1)

        curl -o "$filename" -X GET -H "Authorization: $session_id" \
            -H "Accept: application/json" "$filepart_url"

        name=$(echo "$data" | grep -o '"name":"[^"]*' | sed 's/"name":"//' | tr -d '"' | head -n 1)
        full_path="$target_directory/$name"

        if [ ! -d "$full_path" ]; then
            # Directory does not exist, create it
            mkdir -p "$full_path"
            printf "Directory '%s' created.\n" "$full_path"
        else
            printf "Directory '%s' already exists.\n" "$full_path"
        fi

        tar -xzvf "$filename" -C "$full_path"
    fi
fi

Understanding Direct Data Files

A Direct Data file is a .gzip file that includes a set of data entities as CSV files called extracts. You cannot directly create or modify Direct Data extracts. The available extracts may vary depending on the Vault application.

Direct Data files are categorized under the following types:

The following image shows the folder structure for a Full Direct Data file:

Every file is named according to the following format: {vaultid}-{date}-{stop_time}-{type}.tar.gz.{filepart}. The file name comprises the following variables:

For example, 143462-20240123-0000-F.tar.gz.001 indicates the first file part of a Full Direct Data file from a Vault with ID 143462 that contains data from the time the Vault was created to January 23, 2024, 00:00 UTC.
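For illustration, the naming format can be parsed programmatically. The sketch below is a minimal example assuming the regex shape implied by the format string above (a single-letter type code and a three-digit file part):

```python
import re

# Parse a Direct Data file name of the form
# {vaultid}-{date}-{stop_time}-{type}.tar.gz.{filepart}
# (the regex shape is an assumption based on the documented format).
FILE_NAME_PATTERN = re.compile(
    r"(?P<vault_id>\d+)-(?P<date>\d{8})-(?P<stop_time>\d{4})-"
    r"(?P<type>[A-Z])\.tar\.gz\.(?P<filepart>\d{3})"
)

def parse_direct_data_filename(filename: str) -> dict:
    """Split a Direct Data file name into its documented components."""
    match = FILE_NAME_PATTERN.fullmatch(filename)
    if match is None:
        raise ValueError(f"Unrecognized Direct Data file name: {filename}")
    return match.groupdict()

parts = parse_direct_data_filename("143462-20240123-0000-F.tar.gz.001")
# parts["vault_id"] is "143462", parts["type"] is "F", parts["filepart"] is "001"
```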

Learn more about Direct Data file types in our video walkthroughs.

Manifest CSV File

The manifest.csv file provides definitive information about what is included in the file, as well as the record count for each extract. This file is always present under the root folder.

The manifest CSV file includes the following columns to describe each extract:

| Column Name | Description |
| --- | --- |
| extract | The extract name, in the format {component}.{extract_name}. For example, Object.user__sys. |
| extract_label | The extract label. For example, if the extract name is Object.user__sys, the extract label is User. |
| type | The type of extract: updates or deletes. This column only appears if the extract_type is incremental_directdata. |
| records | The number of records for a given extract. This may show as zero records if there is no data for the given time period. |
| file | Relative path to the CSV file within the Direct Data .gzip file. This column may not show a file if there are zero records for a given extract. |
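For example, a loader can use the manifest to skip extracts that contain no rows. The sketch below parses an illustrative manifest.csv (the contents are invented for the example) with Python's csv module:

```python
import csv
import io

# A minimal incremental manifest.csv as described above (contents illustrative).
manifest_csv = """extract,extract_label,type,records,file
Object.user__sys,User,updates,2,Object/user__sys.csv
Object.user__sys,User,deletes,0,
"""

# Keep only extracts that actually contain rows, so the loader can skip empty files.
with io.StringIO(manifest_csv) as f:
    non_empty = [row for row in csv.DictReader(f) if int(row["records"]) > 0]

for row in non_empty:
    print(row["extract"], row["type"], row["file"])
```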

Metadata CSV File

The metadata.csv file defines the structure of each extract so that consumers can understand the structure of the extract CSV.

Incremental files include the metadata that has changed in the interval. The metadata.csv is available in the Metadata folder in the .gzip Direct Data file. There is also a metadata_full.csv under the root folder which includes the metadata of all Vault data. This file is identical to the metadata.csv file in a Full file and helps consumers look at all metadata of the Vault regardless of the changes that are captured in an Incremental file. The manifest.csv does not include this file.

The metadata CSV file includes the following standard columns in the following order:

| Column Name | Description |
| --- | --- |
| modified_date | The date the field’s configuration was last updated. This value is null within the metadata.csv of a Full file and the metadata_full.csv of an Incremental file. |
| extract | The extract name, in the format {component}.{extract_name}. For example, Object.user__sys or Document.document_version__sys. |
| extract_label | The extract label. For example, if the extract name is Object.user__sys, the extract label is User. |
| column_name | Name of the column in the extract. For example, description__c. |
| column_label | The column label in the extract. For example, if the column name is description__c, the column label is Description. |
| type | The indicated data type of the column: String, LongText, Number, Date, DateTime, Relationship, MultiRelationship, Picklist, MultiPicklist, or Boolean. |
| length | For columns where the type value is String or LongText, this provides the length of the field. |
| related_extract | For columns where the type value is Relationship, Picklist, or MultiPicklist, this indicates the name of the related extract. |
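The metadata can drive table creation in the target system. Below is a minimal Python sketch that maps the Direct Data types above to generic SQL column types and generates a CREATE TABLE statement; the type mapping and the sample metadata rows are assumptions for illustration, so adjust them for your database:

```python
import csv
import io

# Illustrative metadata.csv rows (column layout as described above).
metadata_csv = """modified_date,extract,extract_label,column_name,column_label,type,length,related_extract
,Object.user__sys,User,id,ID,String,20,
,Object.user__sys,User,status__v,Status,Picklist,,Picklist.picklist__sys
,Object.user__sys,User,created_date__v,Created Date,DateTime,,
"""

# Assumed mapping from Direct Data types to generic SQL types.
SQL_TYPES = {
    "String": lambda length: f"VARCHAR({length or 255})",
    "LongText": lambda length: f"VARCHAR({length or 32000})",
    "Number": lambda length: "NUMERIC",
    "Date": lambda length: "DATE",
    "DateTime": lambda length: "TIMESTAMP",
    "Boolean": lambda length: "BOOLEAN",
}

def build_ddl(extract: str, rows: list) -> str:
    """Generate a CREATE TABLE statement from metadata rows for one extract."""
    table = extract.split(".", 1)[1]
    cols = []
    for r in rows:
        # Relationship/Picklist columns store names or IDs, so default to VARCHAR.
        mapper = SQL_TYPES.get(r["type"], lambda length: "VARCHAR(255)")
        cols.append(f'  "{r["column_name"]}" {mapper(r["length"])}')
    return f'CREATE TABLE "{table}" (\n' + ",\n".join(cols) + "\n);"

rows = list(csv.DictReader(io.StringIO(metadata_csv)))
ddl = build_ddl("Object.user__sys", rows)
```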

Extract Naming & Attributes

Extracts contain the data for Vault components: Documents, Objects, Picklists, Workflows, and Logs. Direct Data API names extract CSV files according to their extract_name. For example, product__v.csv. If a user deletes object records or document versions, the API stores them in a separate file named by appending _deletes.csv to the extract name. The CSV files include a column referencing the record ID of related objects (which can be identified using the metadata.csv). The columns available in each extract vary depending on the component.

Document Extract

Document version data is available in the document_version__sys.csv file. Deleted document versions are tracked in a separate file.

All document extracts have a set of standard fields in addition to all the defined document fields in Vault.

The following standard columns are available in the document version extract:

| Column Name | Description |
| --- | --- |
| id | The document version ID, in the format {doc_id}_{major_version_number}_{minor_version_number}. For example, 101_0_1 represents version 0.1 of document ID 101. This value is the same as version_id. |
| modified_date__v | The date the document version was last modified. |
| doc_id | The document id field value. |
| version_id | The document version ID, in the format {doc_id}_{major_version_number}_{minor_version_number}. For example, 101_0_1 represents version 0.1 of document ID 101. This value is the same as id. |
| major_version_number | The major version of the document. |
| minor_version_number | The minor version of the document. |
| type | The document type. |
| subtype | The document subtype. |
| classification | The document classification. |
| source_file | The Vault API request to download the source file using the Download Document Version File endpoint. |
| rendition_file | The Vault API request to download the rendition file using the Download Document Version Rendition File endpoint. |

Accessing Source Content

Direct Data API includes document metadata in the document_version__sys extract. This file includes additional attributes source_file and rendition_file which have generated URLs to download the content for that particular version of a document.

If your organization needs to make the source content for all documents available for further processing or data mining, use the Export Document Versions endpoint to export documents to your Vault’s file staging server in bulk. This endpoint allows up to 10,000 document versions per request.
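Because each request is capped at 10,000 document versions, bulk exports must be batched. The sketch below shows one way to chunk a list of document version IDs into requests of at most 10,000; the IDs are invented for the example, and the actual export call is left out:

```python
def batch(items, size=10000):
    """Yield successive chunks no larger than the endpoint's per-request limit."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Hypothetical list of document version IDs collected from document_version__sys.csv.
version_ids = [f"{doc}_0_1" for doc in range(25000)]

# Each chunk would be submitted as one Export Document Versions request.
batches = list(batch(version_ids))
# 25,000 IDs -> 3 requests: 10,000 + 10,000 + 5,000
```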

Vault Objects Extract

Each object has its own extract file. Extracts are named according to their object name. For example, the extract CSV file for the Activity object is named activity__v.csv. If there are deleted records for an object, they are tracked in a separate {objectname}_deletes.csv.

Both custom and standard objects from your Vault are included. All objects visible on the Admin > Configuration page of your Vault are available for extraction.

All object extracts have a set of standard fields in addition to all of the defined fields included on the object, excluding inactive fields. The following standard columns are available in Vault object extracts:

| Column Name | Description |
| --- | --- |
| id | The object record ID. |
| modified_date__v | The date the object record was last modified. |
| name__v | The name of the object record. |
| status__v | The status of the object record. |
| created_by__v | The ID of the user who created the object record. |
| created_date__v | The date the object record was created. |
| modified_by__v | The ID of the user who last modified the object record. |
| global_id__sys | The global ID of the object record. |
| link__sys | The object record ID across all Vaults where the record exists. |

Picklist Extract

All picklist data is available in the picklist__sys.csv. This does not include picklists that are not referenced by any objects or documents. Learn more about picklist references.

The following standard columns are available in picklist extracts:

| Column Name | Description |
| --- | --- |
| modified_date__v | The date the picklist was last modified. |
| object | The name of the object on which the picklist is defined. |
| object_field | The name of the object picklist field. |
| picklist_value_name | The picklist value name. |
| picklist_value_label | The picklist value label. |
| status__v | The status of the picklist value. |

Workflow Extracts

Workflow data including workflow instances, items, user tasks, task items, and legacy workflow information is available in the following extracts:

A Direct Data file may include additional extracts for legacy workflows. All workflow data in extracts includes active and inactive workflows for both objects and documents. Incremental files exclude inactive legacy workflow data, however, this information is accessible from a Full file. The extract for inactive legacy workflows only includes data from the previous day.

Workflow Extract

The workflow__sys.csv extract provides workflow-level information about each workflow instance, including the workflow ID, workflow label, owner, type, and relevant dates.

Workflow Item Extract

The workflow_item__sys.csv provides item-level information about each document or object record associated with a workflow, including the workflow instance ID, item type, and IDs of the related Vault object record or document. If a workflow includes a document, the Document Version ID (doc_version_id) column in the CSV file references the document version it’s related to. If the workflow instance is not related to a specific document version, this column displays the latest version ID of that document. If the extract_type is incremental_directdata, the Incremental file captures new document versions associated with the workflow.

The metadata CSV file assigns the workflow item extract a type of String.

Workflow Task Extract

The workflow_task__sys.csv provides task-level information about each user task associated with a workflow. This extract includes information such as the workflow ID, task label, task owner, task instructions, and relevant dates. This extract excludes participant group details.

Workflow Task Item Extract

The workflow_task_item__sys.csv provides item-level information about each user task associated with a workflow, such as the workflow task item ID, any captured verdicts, and the type of task item.

Direct Data Jobs

The following recurring jobs are visible in the UI when Direct Data API is enabled:

Building Your Direct Data API Integration

Terms Used

The integration component used to build the Target System has the following functions:

  1. Building the Target System Tables
  2. Initial Load of the Data
  3. Incremental Data Load
  4. Handling Schema Changes
  5. Verifying Data in the Target System

Let’s walk through each step to achieve the above functions of the integration.

Sequence Diagram for Fetching Data

The sequence diagram below visualizes the interaction flow between an external data lake and Direct Data API. It illustrates the steps involved in requesting a Direct Data file, including authentication, initiating the data export process, retrieving the generated file, and subsequent data processing. The diagram highlights the communication between the client application and the Vault server, clarifying the sequence of API calls and responses. This visual guide simplifies understanding of the end-to-end process and helps developers implement their integration logic effectively.

Accessing the Source System Data

Access the source system data with the following steps:

  1. Configure a Vault integration user with the appropriate permissions.
  2. With this user, retrieve the data from Vault API’s Download Direct Data File endpoint.

Staging the Source Data

Stage the source system data with the following steps:

  1. Create a separate location to download Direct Data files.
  2. A Direct Data file could have multiple parts if it is greater than 1GB in size. If your Direct Data file has multiple parts, you should concatenate the files into a valid .gzip file before use. Learn more in Handling File Parts.
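The concatenation in step 2 can be sketched in Python as follows; the part file names in the usage comment are assumptions following the documented naming format:

```python
import shutil

def combine_file_parts(part_paths, output_path):
    """Concatenate downloaded file parts, in order, into one complete archive.

    Parts must be joined in file-part order; each part on its own is not a
    valid archive. Sorting works because part suffixes are zero-padded.
    """
    with open(output_path, "wb") as out:
        for part in sorted(part_paths):
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# Hypothetical usage with part names following the documented format:
# from pathlib import Path
# combine_file_parts(Path(".").glob("143462-20240123-0000-F.tar.gz.*"),
#                    "143462-20240123-0000-F.tar.gz")
```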

Once the files are extracted, you are ready to start loading your Target System.

Building the Target System Tables

  1. The metadata.csv file provides the schema for all the components that are present in the file. Any referenced data is defined with type=Relationship and the related extract is specified in the metadata.csv. This makes it easy to define tables in your database.
  2. The manifest.csv file provides an inventory of all extracts and gives the records captured in the CSV.
  3. Every object has its own extract CSV file in Direct Data. As such, every object should have a table in the data warehouse.
  4. Document data should have its own table.

Initial Load of the Data

  1. For the initial load, you will always work with a Full file. This has all Vault data captured from the time the Vault was created until the stop_time of the file.
  2. Loading an extract into the database is simply loading a CSV as a table with the schema already defined.
  3. For example, if you are loading Vault data into Amazon Redshift, once the extracts are stored in an S3 bucket, you can use the COPY command to load the table from the extract to create a table in Amazon Redshift. Below is a code example that provides a way to do this with data stored in AWS S3:

Click to expand the Python script example.


    copy_sql = (
        f"COPY {dbname}.{schema_name}.{table_name} ({csv_headers}) FROM '{s3_uri}' "
        f"IAM_ROLE '{settings.config.get('redshift', 'iam_redshift_s3_read')}' "
        f"FORMAT AS CSV "
        f"QUOTE '\"' "
        f"IGNOREHEADER 1 "
        f"TIMEFORMAT 'auto' "
        f"ACCEPTINVCHARS "
        f"FILLRECORD"
    )
  4. Once all the tables are loaded with the data in the extracts, you would have replicated the Vault in your Target System with a consistent data set up to the stop_time of the Full file.
  5. You may now verify the data in the Target System to check for any gaps.

Incremental Data Load

  1. To keep the Target System up to date with the latest data in your Vault, you would need to consume Incremental Direct Data files and load the changes.
  2. Incremental files capture creates, updates and deletes made to the data in your Vault.
  3. If the same piece of data has undergone multiple changes within the same time window in an incremental file, it will only appear once in the extract.
  4. A general best practice is to load the extract with creates and updates before loading the extract with deletes, which are captured in a separate file.
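The updates-then-deletes ordering can be sketched as follows, using SQLite as a stand-in for the target warehouse; the table and extract contents are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product__v (id TEXT PRIMARY KEY, name__v TEXT)")
conn.execute("INSERT INTO product__v VALUES ('P1', 'Old Name'), ('P2', 'Retired')")

# Rows from the incremental "updates" extract (creates and updates together).
updates = [("P1", "New Name"), ("P3", "Brand New")]
# Row IDs from the separate "_deletes" extract.
deletes = [("P2",)]

# 1. Upsert creates/updates first...
conn.executemany(
    "INSERT INTO product__v (id, name__v) VALUES (?, ?) "
    "ON CONFLICT(id) DO UPDATE SET name__v = excluded.name__v",
    updates,
)
# 2. ...then apply deletes.
conn.executemany("DELETE FROM product__v WHERE id = ?", deletes)

rows = dict(conn.execute("SELECT id, name__v FROM product__v"))
# rows == {'P1': 'New Name', 'P3': 'Brand New'}
```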

Handling Schema Changes

  1. Veeva Vault makes it easy for a Vault user to make configuration changes to a Vault using the UI, Vault API, Vault Loader or a Vault Package.
  2. Schema changes are also introduced by Vault Releases, which happen three times a year.
  3. Schema changes are captured in the Incremental metadata file. A good integration practice is to check for schema changes before making any data updates to the tables.
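A schema check can be as simple as diffing the columns your tables already have against the columns listed in the incremental metadata file. A minimal sketch, with the column data invented for the example:

```python
def diff_schema(current_columns, metadata_rows):
    """Compare a table's known columns with the columns listed in an
    incremental metadata file and report additions and type changes."""
    incoming = {r["column_name"]: r["type"] for r in metadata_rows}
    added = sorted(set(incoming) - set(current_columns))
    retyped = sorted(
        c for c in set(incoming) & set(current_columns)
        if incoming[c] != current_columns[c]
    )
    return added, retyped

# Hypothetical current table schema and incoming metadata rows.
current = {"id": "String", "status__v": "Picklist"}
incoming_rows = [
    {"column_name": "id", "type": "String"},
    {"column_name": "status__v", "type": "String"},   # type changed
    {"column_name": "region__c", "type": "Picklist"},  # new field
]
added, retyped = diff_schema(current, incoming_rows)
# added == ['region__c'], retyped == ['status__v']
```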

Verifying Data in the Target System

  1. Once the data is loaded in the data warehouse, you will need a reliable way to test that data in Vault is now replicated in the data warehouse.
  2. A simple way to achieve this is to run a filtered query against Vault and compare the results with an equivalent query against the data warehouse.
  3. It is important to keep in mind that data in Vault may have changed since it was extracted via Direct Data, so results may vary slightly.

Direct Data API Connector for Amazon Redshift

Our Vault Developer Support team has built a sample connector which you can use as-is with your Vault or as a starting point to build your custom integration. If you are building your own connector, refer to best practices for building scalable connectors. The sample connector is hosted on Amazon and uses various Amazon services to do the following:

  1. Download zipped Direct Data files to an Amazon S3 bucket
  2. Unzip Direct Data files
  3. Load the data within the Direct Data file into an Amazon Redshift database

You can access the sample code here.

Best Practices for Building a Scalable Connector

Building a scalable connector leveraging Direct Data API requires careful consideration of several key factors. Key strategies for maximizing throughput and minimizing processing time include:

Integration Best Practices

The following best practices should be kept in mind when incorporating Direct Data into new or existing integrations.

Calling the API

Direct Data API publishes Direct Data files at fixed times in a day. For example, for a data extraction window of 13:00 to 13:15 UTC, the corresponding Incremental file will be published at 13:30 UTC.

This means an integration can check and request the file at 13:30 UTC or after. This reduces the number of requests to the API to check if a file is available and makes your code simpler to manage.
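Based on this schedule, an integration can compute the stop_time of the most recent file that should already be published rather than polling repeatedly. A sketch assuming the 15-minute interval and the 15-minute publication delay described above:

```python
from datetime import datetime, timedelta, timezone

def latest_available_stop_time(now: datetime) -> datetime:
    """Return the stop_time of the most recent Incremental file that should
    already be published, assuming files appear 15 minutes after the
    interval closes (e.g. the 13:00-13:15 file is published at 13:30)."""
    # Most recent quarter-hour boundary...
    floored = now.replace(minute=now.minute - now.minute % 15,
                          second=0, microsecond=0)
    # ...minus the publication delay gives the newest published interval end.
    return floored - timedelta(minutes=15)

now = datetime(2024, 6, 1, 13, 32, tzinfo=timezone.utc)
stop = latest_available_stop_time(now)
# stop == 2024-06-01 13:15 UTC, i.e. the 13:00-13:15 file published at 13:30
```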

Handling File Parts

A Direct Data file name always includes a file part number. If the compressed file is over 1 GB, then the file is broken into 1 GB parts to simplify the downloading of very large files. After downloading all the file parts, you should concatenate the files into a valid .gzip file before use. Each part in itself is not readable.

Below is a code example to handle multiple file parts:

Click to expand the Python script example.


# Assumes `request` is an authenticated Vault API client, `directDataItem` is one
# entry from Retrieve Available Direct Data Files, and `s3` is a boto3 client with
# an in-progress multipart upload (bucket_name, object_key, upload_id, parts).
try:
    for file_part in directDataItem.filepart_details:
        file_part_number = file_part.filepart
        response: VaultResponse = request.download_item(file_part.name, file_part_number)

        response = s3.upload_part(
            Bucket=bucket_name,
            Key=object_key,
            UploadId=upload_id,
            PartNumber=file_part_number,
            Body=response.binary_content
        )

        part_info = {'PartNumber': file_part_number, 'ETag': response['ETag']}
        parts.append(part_info)

    s3.complete_multipart_upload(
        Bucket=bucket_name,
        Key=object_key,
        UploadId=upload_id,
        MultipartUpload={'Parts': parts}
    )
except Exception as e:
    # Abort the multipart upload in case of an error
    s3.abort_multipart_upload(Bucket=bucket_name, Key=object_key, UploadId=upload_id)
    log_message(log_level='Error',
                message='Multi-file upload aborted',
                exception=e,
                context=None)
    raise e

First Full File

When Direct Data API is turned on for the first time, Vault extracts a Full file and publishes it at 1:00 AM UTC. The time it takes to prepare the first Full file depends on when the feature was enabled and how much data is present in your Vault. Depending on what day Direct Data API was enabled in your Vault, the first Full file may not be available until the following day. Veeva can help provide an estimate for when the first Full file will be available after the feature has been enabled for your Vault. Direct Data API returns a standard error message until the first Full file is generated.

Filtering Files

Direct Data API provides the following options to help filter the Direct Data files available for download according to the data you want to retrieve:

Vault Upgrades and Direct Data API

During a Vault release, a production Vault is typically unavailable for up to 10 minutes in a 2-hour timeframe. Each Vault release may introduce configuration updates and new components. You should expect to see a large number of updates within a short period of time in your Vault’s Direct Data files.

Picklist References in Object & Document Extracts

Object or document fields that reference picklist values are classified with a type of Picklist or MultiPicklist in the metadata.csv file.

The picklist extract allows you to retrieve the picklist value labels corresponding to the picklist value names referenced in other extracts. Picklist extracts should be handled in the following ways:

Below is an example using the masking__v picklist field on the study__v object from a Safety Vault:

| id | modified_date__v | masking__v | name__v | organization__v | study_name__v |
| --- | --- | --- | --- | --- | --- |
| V17000000001001 | 2023-12-06T17:57:05.000Z | open_label__v | Study 1 | V0Z000000000201 | Study for Evaluation of Study Product |

The corresponding entry in the metadata.csv for the picklist field would be:

| extract | extract_label | column_name | column_label | type | length | related_extract |
| --- | --- | --- | --- | --- | --- | --- |
| Object.study__v | Study | masking__v | Masking | Picklist | 46 | Picklist.picklist__sys |

Below is an example of the picklist__sys.csv row for the open_label__v value of the masking__v picklist field on the study__v object:

| modified_date__v | object | object_field | picklist_value_name | picklist_value_label | status__v |
| --- | --- | --- | --- | --- | --- |
| 2023-12-14T00:06:28.867Z | study__v | masking__v | open_label__v | Open Label | active__v |
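In practice, resolving a label means looking up the (object, object_field, picklist_value_name) triple in the picklist extract. A minimal Python sketch using rows shaped like the example above:

```python
# Rows as they might appear in picklist__sys.csv (from the example above).
picklist_rows = [
    {"object": "study__v", "object_field": "masking__v",
     "picklist_value_name": "open_label__v", "picklist_value_label": "Open Label"},
]

# Lookup keyed the same way object and document extracts reference picklists.
labels = {
    (r["object"], r["object_field"], r["picklist_value_name"]): r["picklist_value_label"]
    for r in picklist_rows
}

# A hypothetical row from the study__v object extract.
study_row = {"id": "V17000000001001", "masking__v": "open_label__v"}
label = labels[("study__v", "masking__v", study_row["masking__v"])]
# label == "Open Label"
```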

Workflow Items Extract and Object Relationships

The workflow item extract (workflow_item__sys.csv) provides information about the document version or Vault object record that relates to the item. There may be instances where the referenced document version or Vault object record does not have a corresponding extract in the Direct Data file. For example, when you retrieve an Incremental file, its extracts only contain data updated within the specified 15-minute interval. Therefore, if the document version or object record was not modified within this interval, it will not have its own extract.

As best practice, your external data warehouse should allow for a polymorphic relationship between the workflow item extract and each of the tables representing object extracts.

Transaction Times & Data for Incremental Files

With the availability of Incremental files with Direct Data API, Vault has a consistent way to capture and report on data changes that are committed to the database in 15-minute increments. This ensures that Direct Data API provides a fully consistent data state based on the committed data.

However, Incremental files only include Vault events that are committed to the database within a given time frame. The database commit time may differ from the last modified time. The last modified time may be updated by long-running transactions such as jobs, cascading state changes, or even triggers that create additional entities as part of an SDK job.

Handling Field Data

The following best practices should be kept in mind when handling field data extracted from your Vault with Direct Data API.

Formula Fields

Direct Data API only evaluates formula fields during initial staging (for Full files) or when a record change occurs (for an Incremental file). Other endpoints of Vault API evaluate formula fields every time a record is requested.

If a formula field contains Today(), the only time the formula field value will be the same between Direct Data API and Vault API is on a day when the record has changed.

Icon Formula Fields

When configured, a formula field can display as an icon based on the formula’s evaluation. Direct Data API handles icon formula fields in a unique way. While these fields are represented as a single character in length within the Direct Data file’s metadata, they can contain additional metadata representing the alternative text for the icon. This alternative text can be up to 3,968 characters long and offers context about the icon.

MultiRelationship Fields

When working with MultiRelationship type document fields that reference multiple objects, set the maximum length of such fields to 65,535 bytes in your external system to avoid truncated data.

Rich Text Fields

Rich Text fields present a unique challenge when extracting data. While Vault’s user interface limits the plain text portion of Rich Text content to 32,000 characters, the underlying storage and the Direct Data export handle these fields differently. To accommodate potential markup and ensure data integrity, Direct Data API metadata always reports a length of 64,000 characters for Rich Text fields. While the metadata indicates a larger potential size, the plain text still adheres to Vault’s 32,000-character limit (excluding markup).

User Logins in Incremental Files

Direct Data API captures user logins as events within the user__sys.csv extract file even if no additional changes were made to the User object record, including if the record’s modified_date__v value remains unchanged.

Getting Labels for Components

While Direct Data API provides comprehensive data extracts, associating user-facing labels with certain components, such as document types or lifecycles, requires additional integration logic. Component labels are included within the Vault Component Object extract (vault_component__v.csv), which is available in the Object folder of the .gzip Direct Data file. The examples below demonstrate how to join the relevant IDs from the main data files with the corresponding entries in the vault_component__v.csv file to obtain the correct labels for lifecycles, document types, and other related components. These queries should be executed after loading data into the Target System.

Example: Retrieve Document Type Label

To retrieve the document type label, join the vault_component__v table on the document_version__sys table by matching component_name__v from vault_component__v with the type__v column in document_version__sys. After the join, select the label__v column from vault_component__v to obtain the document type label.

The following SQL query retrieves the document type label:

SELECT vc."label__v", vc."component_name__v", dv."type__v" 
FROM document_version__sys AS dv
JOIN vault_component__v AS vc ON vc."component_name__v" = dv."type__v";

Example: Retrieve Object Record’s Lifecycle Label

To retrieve an object record’s lifecycle label, join the vault_component__v table on the custom_object__c table by matching component_name__v from vault_component__v with the lifecycle__v column in custom_object__c. After the join, select the label__v column from vault_component__v to obtain the object record’s lifecycle label.

The following SQL query retrieves an object record’s lifecycle label:

SELECT vc."label__v", vc."component_name__v", co."lifecycle__v" 
FROM custom_object__c AS co
JOIN vault_component__v AS vc ON vc."component_name__v" = co."lifecycle__v";

Getting Labels for Subcomponents

To associate user-facing labels with subcomponents, use the Retrieve Component Record endpoint to request the parent component’s metadata. This retrieval is not necessary for picklist values and object fields because their user-facing labels are provided by Direct Data API.

Planning for Full Data Refreshes of Your External System

Maintaining data consistency between your data warehouse or data lake and Vault requires a strategic refresh approach. It is typical for a Vault to undergo data model changes with each release. Therefore, we recommend a full refresh of your replicated data once per General Vault release, or three (3) times per year. This ensures that your data warehouse or data lake accurately reflects the latest Vault schema and avoids potential inconsistencies or data corruption.

Notifications for Failures

Ensuring the reliable delivery of Direct Data files is paramount. To alert administrators to any issues, Vault sends failure notifications to Vault Owners whenever a file fails to publish. These notifications provide immediate awareness of any data extraction or delivery problems.

Video Walkthroughs

Direct Data API vs Traditional API

This video provides a walkthrough of the differences between Direct Data API and a traditional API.

Direct Data API Endpoints

This video provides a walkthrough of retrieving your Direct Data files with Direct Data API endpoints, including a demo in Postman.

Direct Data File Types

The following walkthroughs give an in-depth explanation of each Direct Data file type.

Full Direct Data File

Incremental Direct Data File

Log Direct Data File