

Vault Java SDK Overview

The Vault Java SDK is a powerful tool in the Vault Platform, allowing developers to extend Vault and deliver custom capabilities and experiences to Veeva customers. It provides a completely new experience in developing industry cloud applications, leveraging industry-standard tools for development and debugging and integrating seamlessly with Vault in the cloud.

Veeva application developers, Veeva technical consultants, customers’ IT engineers, and 3rd party partners can all use Vault Java SDK to create applications and solutions. Learn more about developing and debugging code.

Extending Vault

Developers can use the Vault Java SDK to extend Vault by implementing custom code, such as triggers and actions.


Getting Started

Prerequisites

To develop with the Vault Java SDK, you need all of the tools for Java development, such as a Java Development Kit (JDK) and an Integrated Development Environment (IDE). You will also need a Vault to test your new Vault extensions before deploying them to production.

If you need help getting started, feel free to post your questions in the Developer forum.

  1. If you are unfamiliar with Vault, we suggest watching Vault Navigation Basics before you begin.
  2. You must be a Vault Owner to complete the Getting Started steps. Learn more about related permissions.
  3. To deploy code, you must enable configuration packages in your vault. Check if this feature is enabled in Admin > Deployment. If your vault has the Inbound and Outbound Packages pages here, this is enabled. If not, learn how to enable configuration packages in Vault Help.
  4. Download and install JDK 1.8.
  5. Download and install a Java IDE. We recommend IntelliJ IDEA Community Edition®, which is what we use for our tutorials.
  6. Request access to a vault from your Vault Admin. They can provision a sandbox for use.
  7. Clone or download the sample Maven project vSDK Hello World from GitHub™. To download the project, click the Download button and select Download zip.

Step 1: Vault Setup

First, you need to configure your vault so the sample SDK trigger runs smoothly. The sample trigger runs on the vSDK Hello World object, which you must add to your vault. You can do this by deploying a prepackaged set of components (.vpk) included in the sample project.

  1. Log in to your vault and navigate to Admin > Deployment > Inbound Packages and click Import.
  2. Locate and select the \deploy-vpk\vsdk-helloworld-components\vsdk-helloworld-components.vpk file in your downloaded or cloned project folder. After selection, Vault opens and displays the details for the package.
  3. From the Actions menu (gear icon), select Review & Deploy. Vault displays a list of all components in the package.
  4. Click Next.
  5. On the confirmation page, review and click Finish. You will receive an email when Vault completes the deployment.
  6. Check that the VPK imported successfully in Admin > Business Admin. If the vSDK Hello World object exists here, your vault setup is complete.

Step 2: Development Setup

Now that you’ve set up your vault, you can move on to setting up your development environment.

Import the Maven Project to IntelliJ®

  1. From IntelliJ®, select File > Open.
  2. Navigate to your downloaded or cloned project directory, locate the vsdk-helloworld-master folder, and select the pom.xml file.
  3. Click OK.
  4. In the Open Project dialog, click Open as Project. IntelliJ® imports the project and automatically downloads the Vault Java SDK dependencies.
  5. Verify that the Maven: com.veeva.vault.sdk.api:vault-sdk-api library is present in the External Libraries section. If this file is not present, make sure you have access to the Internet and your browser can load repo.veevavault.com. You should also make sure your POM file is set up correctly.

Debugger Setup

After downloading the Vault Java SDK artifacts in your Maven project, you can use the Vault Java SDK Debugger. You must have the standard Vault Owner security profile. Learn more in related permissions.

To set up the debugger:

  1. Navigate to Run > Edit Configurations….
  2. Click the Add New Configuration button (+) and select Application from the drop-down.
  3. Give your configuration a Name.
  4. Add the following data to your new configuration:

Main Class: com.veeva.vault.sdk.debugger.SdkDebugger. The SDK Debugger Main Class should auto-complete as you begin to type. If it does not autocomplete, or if your IDE cannot recognize this Main Class, you may have completed a previous step incorrectly.

Program Arguments:

  5. Click Apply and then OK.
  6. Click Run to run the project and attach the Debugger to your vault.
    1. If your connection to the debugger is successful, you will see a console message stating “Welcome to the Vault Java SDK Debugger.” and additional information such as your user, host, and debugger version. Continue to Step 3: Run Code.
    2. If you see an error stating “Your Vault Java SDK library version does not match the vault version”, you may need to update the vault version in your pom.xml file. See the POM section below to update your Java SDK library version to match the vault.

POM Setup

If the debugger is not running successfully, you need to update your Java SDK library version to match the vault. You can do this by editing the <vault.sdk.version> attribute in your POM file.

  1. Verify the version of your vault. You can find your Vault version in Admin > Settings > General Settings. You don’t need to worry about your vault’s build number.
  2. In IntelliJ®, navigate to your pom.xml file.
  3. Update the <vault.sdk.version> to your vault version, using only periods (.) and not the letter R. For example, a vault on version 18R3.0 should look like this:

    <properties>
        <vault.sdk.version>[18.3.0-release0, 18.3.0-release1000000]</vault.sdk.version>
    </properties>
    
  4. If prompted, select Import Changes. You can also Enable Auto-Import to instruct Maven to automatically import any future changes.

  5. In the External Libraries section of IntelliJ®, verify the Maven: com.veeva.vault.sdk.api:vault-sdk-api library shows your vault version.

  6. Click Run to run the project and attach the Debugger to your vault. If your connection to the debugger is successful, you will see a console message with the text “Welcome to the Vault Java SDK Debugger.” If you continue to get a version error, verify you’ve completed the previous POM steps correctly. If the error persists after verifying you’ve completed the steps correctly, you may need to reference our more extensive POM Setup section.

Step 3: Run Code

The sample project downloaded for this guide contains a basic example of a trigger. When you run your project, all of the Vault extension code in your project becomes live in your vault.

Your sample code is a BEFORE_INSERT trigger on the vSDK Hello World object. This means the trigger executes right before the object record saves. The sample trigger then displays an error message defined in the sample code.

  1. Log in to your vault.
  2. Navigate to Admin > Business Admin > vSDK Hello World.
  3. Click Create.
  4. Enter your name in the Name field.
  5. Click Save. You should see the following text:

That’s your trigger in action! The “Hello, World” code only runs while the debugger is running. Once the debugger is stopped, the trigger stops running and this popup will no longer appear.

Step 4: Debug Code

Instead of running the code, you can place breakpoints and debug the Vault extension class line by line. Let’s modify this Hello World class to say hello to a name you enter in a field.

  1. Click Stop in IntelliJ® to turn off the debugger.
  2. Open the HelloWorld Java file in IntelliJ®, which is in the javasdk folder.
  3. Comment out line 21 by adding two slashes to the beginning of the line.
  4. Uncomment lines 23 and 24 by removing the two slashes at the beginning of each line.
  5. Add breakpoints on the lines you uncommented in the previous step by clicking just to the right of the line number.
  6. Instead of Run, Debug your program.
  7. In Vault, create a new vSDK Hello World record with your Name and click Save. You should see code execution transfer from the server to your local code in IntelliJ®.
  8. To watch your code execute on each breakpoint, click the Resume Program button in the console sidebar.
  9. Back in your vault, you should see a slightly different error message:
  10. When you are finished, click Stop to turn off the debugger.

Step 5: Deploy Code

Vault extensions stop running when you stop the debugger. To make your SDK code run automatically for all users, you must deploy it to your vault. We do this with a VPK, the same way we deployed one in the Vault Setup section of this guide.

  1. In IntelliJ®, open the vaultpackage.xml file.
  2. Replace firstname.lastname@example.com with your vault user name.
  3. From your computer, select both the javasdk folder and the vaultpackage.xml file with CMD+click on MacOS®, or CTRL+click on Windows®. If you’re having trouble finding these files on your computer, you can right-click the filename in IntelliJ® and select Reveal in Finder on MacOS®, or Open File Location on Windows®.
  4. Right-click your files and select Compress 2 Items on MacOS®, or Send to > Compressed (zipped) folder on Windows®. If you are on MacOS® and cannot right-click, you can CTRL+click these items.
  5. Rename your new .zip to .vpk.
  6. Back in Vault, navigate to Admin > Deployment > Inbound Packages and click Import.
  7. Find and select your newly created VPK file from your computer.
  8. From the actions menu (gear icon), select Review & Deploy.
  9. Click Next.
  10. On the confirmation page, review and click Finish.
  11. Navigate to Admin > Configuration > Record Triggers. If you see your Hello World trigger here, you’ve successfully deployed the extension to Vault!
  12. Back in the Business Admin, navigate to the vSDK Hello World object and click Create.
  13. Name your record “Deployed Trigger” and click Save. You should see the following error message:

Congratulations, you’ve completed the Vault Java SDK Getting Started!

Developing Code

To develop code, you need to have a Maven project. You need to make sure your POM file is set up correctly, and your src folder is under the javasdk folder.

POM Setup

The artifacts (.jars) for the Vault Java SDK are distributed by a Maven Repository Manager. This allows you to easily download the Vault Java SDK and all its dependent libraries by simply setting up a Maven project pointing to the Maven Repo Manager in the pom.xml file.

This file has three sections you may need to edit:

Properties

The <vault.sdk.version> in your POM file must match the version of the vault you are developing on.

When Vault is upgraded to a new release or if you’re switching between vaults during development, the <vault.sdk.version> element in the properties section must be updated accordingly to reimport the correct version of the Vault Java SDK from the repository.

You can find your Vault version in Admin > Settings > General Settings. You don’t need to worry about your vault’s build number.

The <vault.sdk.version> must be in the following format:

[{vault_version}-release0, {vault_version}-release1000000]

For example, a vault on version 18R3.0 should look like this:

<properties>
    <vault.sdk.version>[18.3.0-release0, 18.3.0-release1000000]</vault.sdk.version>
</properties>

Repositories

Your <repositories> section should look like this:

<repositories>
    <repository>
        <id>veevavault</id>
        <url>https://repo.veevavault.com/maven</url>
        <releases>
            <enabled>true</enabled>
            <updatePolicy>always</updatePolicy>
        </releases>
    </repository>
</repositories>

Dependencies

This dependency will pull the Vault Java SDK and all the libraries it depends on from the repository.

Your <dependencies> section should look like this:

<dependencies>
    <dependency>
        <groupId>com.veeva.vault.sdk</groupId>
        <artifactId>vault-sdk</artifactId>
        <version>${vault.sdk.version}</version>
    </dependency>
</dependencies>

Development Basics

Developing Vault extensions means writing your own implementation of specific Vault extension interfaces, such as RecordTrigger or RecordAction. For example, a record trigger must implement the RecordTrigger interface and annotate the class with the @RecordTriggerInfo annotation to provide deployment information.

The following is a skeleton code example of a trigger class implementation:

package com.veeva.vault.custom.triggers;

import com.veeva.vault.sdk.api.data.RecordTriggerInfo;
import com.veeva.vault.sdk.api.data.RecordTrigger;
import com.veeva.vault.sdk.api.data.RecordEvent;
import com.veeva.vault.sdk.api.data.RecordTriggerContext;
import com.veeva.vault.sdk.api.data.RecordChange;


@RecordTriggerInfo(object = "object_name__c", events = {RecordEvent.BEFORE_INSERT})
public class ObjectTrigger implements RecordTrigger {

    public void execute(RecordTriggerContext recordTriggerContext) {

       // process each input record.

    }
}

Generally, a Vault extension’s implementation entails using services provided by the Vault Java SDK. With these services, you can apply custom business logic such as retrieving and performing data operations according to business requirements.

For example:
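The fragment below is a minimal sketch (the copy logic itself is illustrative only) of locating services through ServiceLocator inside a trigger’s or action’s execute() method and using them to query records and create new ones in bulk:

// Locate the services used by this extension.
QueryService queryService = ServiceLocator.locate(QueryService.class);
RecordService recordService = ServiceLocator.locate(RecordService.class);

// Query existing records and build a list of new records to save with a single bulk call.
List<Record> recordsToSave = VaultCollections.newList();
QueryResponse queryResponse = queryService.query("select id, name__v from country_brand__c");
queryResponse.streamResults().forEach(queryResult -> {
    Record record = recordService.newRecord("country_brand__c");
    record.setValue("name__v", queryResult.getValue("name__v", ValueType.STRING) + " (copy)");
    recordsToSave.add(record);
});
recordService.batchSaveRecords(recordsToSave).rollbackOnErrors().execute();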

Refer to the Javadoc about using these and other services.

Programming Guidelines

While developing Vault extensions is essentially programming in Java, there are some language and JDK restrictions to ensure your code runs securely in Vault.

You should observe the following general guidelines when developing Vault extensions:

Code Validation

Restrictions are checked during validation, which happens when you deploy code to Vault from a VPK. For example, if your code uses a third-party library or non-whitelisted class, it will not pass validation and deployment will fail. We recommend validating your code often during the development process to catch issues early.

You can do this with the Validate Package endpoint.

POST /api/{version}/services/package/actions/validate

To use this endpoint, you must create a Vault Package File (VPK) as input.


Deploying Code

When testing Vault code locally through the debugger, the code is only active locally while the debugger is running. To make the code run for all users in your vault, you must deploy it.

To deploy code, a Vault Admin must enable configuration packages in your vault. Learn how to enable configuration packages in Vault Help.

Deploy code in three steps:

  1. Create a VPK with your source files
  2. Import the VPK to Vault
  3. Deploy the VPK

With this deploy method, you can accomplish any of the following:

If you need other deploy options, such as deploying or deleting a single file in the target vault, see Managing Deployed Code. However, deploying a single file rather than a VPK is considered bad practice and should be used sparingly.

Note that your Vault user must have the correct permissions to deploy code. See the related permissions table for more information.

Create a VPK

Create a VPK by zipping your javasdk folder and a vaultpackage.xml file and naming it with the .vpk extension.

Before you can create your VPK, you must verify your source code is in the proper folder structure and prepare a valid vaultpackage.xml file. This manifest file tells Vault if you’re adding, replacing, or removing code.

Verify file structure

Your file structure must adhere to the following guidelines:

Create Manifest File

Your manifest file must be named vaultpackage.xml and must be located in the root of your file structure.

Example vaultpackage.xml:

<vaultpackage xmlns="https://veevavault.com/">
  <name>PKG-DEPLOY</name>
  <source>
    <vault></vault>
    <author>mmurray@veepharm.com</author>
  </source>
  <summary>PromoMats RecordTrigger</summary>
  <description>Record trigger on the Product object for PromoMats.</description>
  <javasdk>
    <deployment_option>incremental</deployment_option>
  </javasdk>
</vaultpackage>

All of the following attributes must appear in the manifest file. Attributes marked as Optional must still be included, but can be left with a blank value.

Attribute Description
<vaultpackage> Top-level attribute to hold all other attributes. Must include xmlns="https://veevavault.com/".
<name> A name which identifies this package.
<source> A top-level attribute to hold the following sub-attributes:
  • <vault>: Optional: We recommend leaving this blank. This is the Vault ID of the source vault, but because you are importing this VPK, this attribute is ignored. When you export a VPK from Vault, this field is automatically populated with the source Vault ID.
  • <author>: The vault user name of the user who created this package.
<summary> Provide more information about this package. Appears in the Summary section of Admin > Deployment > Inbound Packages.
<javasdk> A top-level attribute to hold the <deployment_option> sub-attribute. This tells Vault how to deploy your package in Vault. Valid values are:
  • incremental
  • replace_all
  • delete_all
Learn more in the Deployment Options section.
<description> Optional: A description of your package. If omitted, the description will appear blank in Admin > Deployment > Inbound Packages.

Deployment Options

Import the VPK to Vault

After creating the VPK, you need to import it to your vault. This does not deploy the code; it just adds the VPK containing the code to your vault. The following instructions import the VPK using the Vault REST API, but you can also import through the Vault UI.

With the Import Package endpoint, import your code.

PUT /api/{version}/services/package

The body of your request must include the VPK created in the previous step.

On SUCCESS, the response contains an id for the vaultPackage. You will need this ID to deploy the package through the API.

Deploy the VPK

After importing your VPK, you need to deploy it. This is the final step which makes your vault extension run for all users. The following instructions deploy the VPK using the Vault REST API, but you can also deploy through the Vault UI. We recommend using the UI, which has a multi-step wizard that ensures validation.

Deploy your package with the Deploy Package endpoint.

POST /api/{version}/vobject/vault_package__v/{package_id}/actions/deploy

You can find the package_id URI Path Parameter in the API response from your import request. If you lost this ID, you can also find it in Admin > Deployment > Inbound Packages.
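As a rough sketch, you could call this endpoint from plain Java using HttpURLConnection; the vault DNS, API version, session ID, and package ID below are placeholders to substitute with your own values:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class DeployPackage {
    public static void main(String[] args) throws Exception {
        // Placeholder values: substitute your own vault DNS, API version, session ID, and package id.
        String vaultDns = "myvault.veevavault.com";
        String apiVersion = "v18.3";
        String sessionId = "<session id from the authentication endpoint>";
        String packageId = "<id returned by the import request>";

        URL url = new URL("https://" + vaultDns + "/api/" + apiVersion
                + "/vobject/vault_package__v/" + packageId + "/actions/deploy");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Authorization", sessionId);
        connection.setRequestProperty("Accept", "application/json");

        System.out.println("HTTP status: " + connection.getResponseCode());
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
            reader.lines().forEach(System.out::println);   // print the JSON response body
        }
    }
}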

When you run the deploy endpoint, Vault first validates the VPK. If you have any validation errors, such as using non-whitelisted classes, deployment will fail. To avoid this, we recommend validating your package frequently throughout the development process.

After successful deployment, you can view deployed extensions in the Admin UI, located in Admin > Configuration > Vault Java SDK. Learn more about the Admin UI in Vault Help.

Deployment Errors

If the deployment encounters any errors, Vault stops the deployment but does not roll back any changes it already made. We recommend downloading and checking the log file for details. Learn more about deployment errors in Vault Help.

Managing Deployed Code

Deploying VPKs is not the only way to manage your custom code. You can view, download, delete, enable, or disable deployed extensions in the Admin UI, located in Admin > Configuration > Vault Java SDK. Learn more about the Admin UI in Vault Help.

You also may need more granular deploy options. For example, you may need to delete a single file rather than all files. However, we do not recommend using the following single-file deploy methods as you may introduce or delete code which breaks existing deployed code. As a best practice, you should always use VPKs to manage code deployment.

Enable or Disable Extensions

When deployed, extensions are automatically enabled. You may wish to disable an extension if you are troubleshooting a bug, or loading data into a vault and do not want a trigger to execute. You can easily enable and disable extensions through the Admin UI, or you can use the Vault REST API. Users must have the Admin: Configuration: Vault Java SDK: Create and Edit permissions to enable or disable code.

PUT /api/{version}/code/{FQCN}/{enable || disable}

You can only enable and disable entry-point classes, such as triggers and actions. You cannot disable UDCs, or vault extensions which reference other code components.

Download Source Code

You can retrieve the source code for a single file through the Admin UI, or through the Vault REST API. Users must have the Admin: Configuration: Vault Java SDK: Read permission to download source code.

GET /api/{version}/code/{FQCN}

Add or Replace Single Source Code File

You may need to add or replace a single file rather than a whole VPK. However, we do not recommend using the following single-file deploy method as you may introduce or delete code which breaks existing deployed code. As a best practice, you should always use VPKs to manage code deployment.

The following endpoint adds or replaces a single .java file in the currently authenticated Vault. If the given file does not already exist in the vault, it is added. If the file does already exist, the file is updated.

PUT /api/{version}/code

Users must have the Admin: Configuration: Vault Java SDK: Create and Edit permissions to use this endpoint.

Delete Single Source Code File

In some cases, you may need to delete a single file rather than replace all or delete all files. However, we do not recommend using the following single-file deploy method as you may introduce or delete code which breaks existing deployed code. As a best practice, you should always use VPKs to manage code deployment.

Code deletion is permanent. There is no way to retrieve a deleted code file. Vault does not allow deletion of a file which is currently in-use.

You can delete a single source file through the Admin UI, or through the Vault REST API.

DELETE /api/{version}/code

Users must have the Admin: Configuration: Vault Java SDK: Delete permission to delete code with this endpoint.

Triggers

Understanding Record Triggers

A record trigger executes custom business logic whenever a data operation on an object record occurs. Users manipulate data in Veeva applications by using the UI or API to Insert, Update, and Delete records. When these operations occur, the Vault Java SDK provides interfaces to interact with the record data before and after the data operations. Using the Java SDK, users can apply custom business logic in Event handlers for BEFORE and AFTER Events.


This Event-driven programming model allows developers to write small programs that target a specific object and Event to address common business requirements which standard application configurations cannot address.

The following are some typical uses for triggers by Event type:

BEFORE

Field Value Defaults: Default field values before creating a record.

Field Value Validations: Validate field values before saving or deleting a record.

Conditionally Required Fields: Make a field required by canceling the save operation if some condition is not met.

AFTER

Create, Update, or Delete Related Records: Create, update, or delete other records after saving or deleting a record.

Start Workflow: Start a workflow after creating or updating a record.

Change State: Change the lifecycle state of a record.

Illustration: Saving a New Record

Let’s examine a typical Save new record operation initiated by a user and walk through what a trigger does. When the user clicks the Save button, the system captures the object, such as product__v, and the Event, such as BEFORE_INSERT. The system then looks up the registry of triggers for that object and Event and executes them in order, passing the data entered by the user to each trigger during execution.

BEFORE_INSERT trigger logic can interact with the current record to:

After saving the record, the system executes the AFTER_INSERT triggers for the same object.

AFTER_INSERT trigger logic can interact with the current record to:

Since the current record cannot change in the AFTER_INSERT Event, most of the business logic in this Event interacts with other records through RecordService or performs jobs on the current record that are executed asynchronously in a separate process.

Anatomy of a Record Trigger

You can implement record triggers as normal Java classes. You can express complex business logic within a trigger class.

The code sample below explains the anatomy of a typical, basic trigger class. This example simply defaults a field value based on another field when creating a new record.
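The original sample is not reproduced here; as a stand-in, the following is a minimal sketch of such a field-defaulting trigger, with comments marking the elements that the line-numbered explanations below describe. The display_name__c field name is hypothetical; internal_name__c mirrors the field used in later examples.

// Package: custom triggers must live under com.veeva.vault.custom.
package com.veeva.vault.custom.triggers;

// Imports: only Vault Java SDK classes and whitelisted JDK classes are allowed.
import com.veeva.vault.sdk.api.core.ValueType;
import com.veeva.vault.sdk.api.data.RecordTriggerInfo;
import com.veeva.vault.sdk.api.data.RecordTrigger;
import com.veeva.vault.sdk.api.data.RecordEvent;
import com.veeva.vault.sdk.api.data.RecordTriggerContext;
import com.veeva.vault.sdk.api.data.RecordChange;

// Annotation: identifies this class as a record trigger and specifies the object and Event(s).
@RecordTriggerInfo(object = "product__v", events = {RecordEvent.BEFORE_INSERT})
// Class name: public, implements RecordTrigger, and describes what the trigger does.
public class ProductFieldDefaults implements RecordTrigger {

    // execute(): entry point that receives the RecordTriggerContext for the current operation.
    public void execute(RecordTriggerContext recordTriggerContext) {

        // Context record(s): loop through every record in the operation (there may be many in a bulk create).
        for (RecordChange inputRecord : recordTriggerContext.getRecordChanges()) {

            // getValue: read a field value from the new record.
            String internalName = inputRecord.getNew().getValue("internal_name__c", ValueType.STRING);

            // setValue: default another editable field based on it.
            if (inputRecord.getNew().getValue("display_name__c", ValueType.STRING) == null) {
                inputRecord.getNew().setValue("display_name__c", internalName);
            }
        }
    }
}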

The explanations and line numbers below refer to the lines of the original code sample; the same elements are marked with comments in the sketch above.

Line #1: Package

A custom record trigger must be under the com.veeva.vault.custom package. You can have further sub-package names as you see fit to organize your triggers. For example, you might use com.veeva.vault.custom.rim.submissions.triggers to indicate custom triggers for a RIM Submissions project.

Lines #3-11: Import

Only references to Vault Java SDK (com.veeva.vault.sdk.api.*) and a limited number of whitelisted classes, interfaces, and methods in the JDK are allowed. For example, String, LocalDate, List, etc.

Line #13: Annotation

The class annotation (@RecordTriggerInfo) indicates that this class is a record trigger. The annotation specifies the Object, Event(s), and Order of execution.
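For illustration, an annotation that specifies all three might look like the following line; the order attribute and TriggerOrder value shown here are an assumption rather than a confirmed signature, so treat this as a sketch.

// Assumed sketch: a BEFORE_INSERT trigger on product__v intended to run first among triggers on that Event.
@RecordTriggerInfo(object = "product__v", events = {RecordEvent.BEFORE_INSERT}, order = TriggerOrder.NUMBER_1)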

Line #14: Class Name

The class name declaration must include the public modifier and implements RecordTrigger. As a best practice, the class name should indicate the object affected by the trigger and give some functional description; for example, ProductFieldDefaults implements RecordTrigger indicates a trigger on Product that defaults some field values.

Line #16: execute() Method

You must implement this method for the RecordTrigger interface. This method has an instance of RecordTriggerContext passed in, so you can interact with the record(s) on which a user has initiated some operation.

Line #18: Context Record(s)

When a user performs a record operation whether by UI or API, such as creating a new record, the record being created is the context record. The operation may have multiple context records such as in a Bulk Create.

A list of records affected by the operation can be retrieved from RecordTriggerContext#getRecordChanges, and you can loop through each record to get field values and/or set field values. Your business logic is enclosed in this loop.

Line #21: getValue(String fieldName, ValueType.<T> fieldType)

You can retrieve field values from the context record. Trigger code operates as a System user, so no record-level or field-level security applies; all records and fields are accessible.

For new records, only new values are available. For updating records, both old and new values are available. The fieldName argument must be a valid field name in the object. For example, name__v. The fieldType argument must match the Vault field type in order to return the appropriate Java data type. Use the Data Type Map to find out how data types are mapped to objects in Vault.

Line #22: setValue(String fieldName, Object fieldValue)

You can set field values on fields that are editable. System fields, such as created_by__v and state__v, and Lookup fields are not editable.

The fieldName argument must be a valid field name in the object. The fieldValue argument must be an object of the appropriate data type for the field in the fieldName argument. Use the Data Type Map to find out how data types are mapped to objects in Vault.

Trigger Execution & Performance

Execute as System

Custom code in Vault executes with System-level access. Vault extension code, such as triggers and actions, can access object records with full read/write permission. This means any Vault user-level, record-level, or field-level access restrictions do not apply. Custom code can copy or move data from object to object and delete data without regard to who the user is. It’s the developer’s responsibility to take the current user context into consideration and apply control where appropriate.

Data security should be considered when designing solutions using the Vault Java SDK.

Note: You can make a field managed only by custom code by using Atomic Security to hide it from all business users; custom code can still access the field because of its System-level access.

Trigger Execution Flow

When a user initiates a request (INSERT, UPDATE, or DELETE) such as clicking Save in UI or sending a POST via Object API, the system processes the request by firing the BEFORE Event triggers first, then committing data to the database, and then firing the AFTER Event triggers.

BEFORE triggers are often used for defaulting field values and validating data entry, whereas AFTER Event triggers are mostly used to automate creating other records or starting workflow processes.


Trigger Order and Nested Depth

A limit of 10 triggers is allowed for each Event, and the order of execution can be specified. This means the BEFORE and AFTER Events each have their own limit of 10 triggers. In addition, when any given trigger executes, it can cause other triggers (nested triggers) to fire when it performs a data operation (INSERT, UPDATE, DELETE) programmatically. The nested trigger depth cannot exceed 10 levels.

To summarize, when a user initiates a request (for example, INSERT), the BEFORE Event triggers (up to 10) execute in order. If any of these triggers cause other triggers to fire, the nested triggers execute (up to 10 nested levels). After the system finishes the BEFORE triggers, the data, including any changes made by the executed triggers, is persisted, and the AFTER Event triggers fire in the same manner, subject to the same trigger order and nested depth limits. The image below illustrates this execution flow.

If you need to share data between different triggers or actions in the same transaction, you can do so with RequestContext.

System-Initiated Requests

Generally, triggers fire when a user initiates a request. When the System updates records, such as Lookup Field updates, triggers do not fire. Similarly, when the System performs a Hierarchical Copy (deep copy), the insert operation will not fire any triggers.

Terminating Execution

The trigger execution flow described above represents a transaction. In some cases, it is necessary to cancel the entire INSERT request and rollback any changes. Developers can throw a RollbackException in any trigger in the transaction, and execution will terminate immediately and roll back all changes.

Note that calling RecordChange#setError will not terminate a transaction. Instead, the trigger which caused the error will fail and the rest of the transaction will continue. In order to terminate an entire transaction, you should always throw a RollbackException.
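The fragment below contrasts the two approaches; it is a minimal sketch that assumes the two-argument (error type, message) forms of setError and RollbackException, and the strings themselves are illustrative.

// Flag only the current record as failed; the rest of the transaction continues.
recordChange.setError("INVALID_DATA", "Name is not in the expected format.");

// Terminate the entire transaction immediately and roll back all changes.
throw new RollbackException("OPERATION_NOT_ALLOWED", "Related records could not be created.");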

The system will also terminate execution and roll back a request when errors occur, such as a missing required field value on INSERT or exceeding the allowed elapsed time limit (100 seconds).

Asynchronous Services

Calls to asynchronous services such as JobService or NotificationService will execute only when the request transaction completes. This way, you can use a RollbackException to stop the transaction if necessary, preventing asynchronous services from executing unintentionally when rolling back a transaction. For example, if a DELETE Event trigger calls NotificationService to send a notification, but a nested trigger later rolls back the transaction, the system should not delete the record nor send the notification. This prevents the asynchronous notification process from executing erroneously. Once the entire transaction completes successfully, all queued asynchronous services execute immediately.

Data Availability

When processing a request, the System performs the following sequence of steps:

  1. Execute BEFORE triggers.
  2. Write record changes to database.
  3. Update changes in VQL index.
  4. Execute AFTER triggers.

The data available in BEFORE and AFTER Event triggers depends on the operation (INSERT, UPDATE, or DELETE). For example, in an INSERT operation, you cannot get old or existing values because a new record is being inserted. Similarly, setting a field value only makes sense in the BEFORE Event of INSERT and UPDATE operations; it doesn’t make sense to set a field value after it has been persisted or in a DELETE operation. The following chart illustrates when you can get or set field values.

Event           | getNew() getValue | getNew() setValue | getOld() getValue | getOld() setValue
BEFORE_INSERT   | X                 | X                 |                   |
AFTER_INSERT    | X                 |                   |                   |
BEFORE_UPDATE   | X                 | X                 | X                 |
AFTER_UPDATE    | X                 |                   | X                 |
BEFORE_DELETE   |                   |                   | X                 |
AFTER_DELETE    |                   |                   | X                 |

Query vs RecordService#readRecord

As illustrated above, BEFORE triggers can change field values, but these values are not persisted to the database and not updated in the VQL index yet. In this case, using the QueryService to retrieve a record being modified by a trigger will only return the old (existing) values. In order to get the values set by a trigger inside a transaction, you must use the RecordService#readRecord method. However, this method generally uses more memory. It is only recommended when you need to get field values modified by multiple triggers in a single transaction. Otherwise, we recommend QueryService to retrieve record data.

Because AFTER triggers happen after database updates and VQL indexing, you can use QueryService to retrieve both old and new values.

System Populated Fields

Lookup Field and Document Reference Field (latest version) are special field types. These field types have values set by the System.

In general, the System populates field values after the BEFORE Event. Because these field values are set by the System, the changes are not reflected in the BEFORE Event. For example, getNew() and getOld() will return the same existing value or null accordingly. However, the AFTER Event will return the new value set by the System in getNew() and the existing value in getOld().

In addition, because System-initiated requests do not fire triggers, triggers will not fire when the System updates a System-populated field.

Performance Considerations

Triggers should be designed to process records in bulk, especially when making service calls, such as QueryService and RecordService calls. These services are designed to take a list of records as input for CRUD operations. It is much more efficient to build a list of records as input and make a single call to these services than to make service calls one record at a time inside a loop.

Triggers that do not process records in bulk will perform poorly; especially when there are multiple triggers (including nested triggers), execution will likely exceed the maximum elapsed time (100s) or CPU time (10s) allowed. In addition, queries that return a large number of records with a large number of fields (including fields not used in your code) will likely exceed the maximum memory allowed (40MB).

Generally, you should never run a query or perform CRUD operations on records in a loop. Each iteration will make unnecessary service calls which can be easily batched to get the same result with a single service call.

Performance Example

The following poorly performing code executes a query inside a “for” loop, once for each Product record in a request. That means when a request has multiple records, such as from an API call or the bulk update wizard, the QueryService#query call is made for each of the records. The only difference between the queries is that the WHERE clause contains a different Region reference field value. Performing multiple queries in this case is inefficient and time consuming. A better approach is to make a single query with a CONTAINS clause covering every Region referenced by the Product records in the request.

To make performance even worse, as each query is executed to retrieve related records, a forEach loop is used to call RecordService.batchSaveRecords to save each new Country Brand record one at a time. Creating, updating, and deleting records are the most expensive and time-consuming operations. You should always batch records up in a list as input when calling batchSaveRecords.

While the better performing code requires more lines of code as illustrated below, it performs much better because it reduces data operations significantly by leveraging the Vault Java SDK’s interfaces to process records in bulk. 

Poorly Performing Code:
@RecordTriggerInfo(object = "product__v", events = RecordEvent.AFTER_INSERT)
public class ProductCreateRelatedCountryBrand implements RecordTrigger {
    public void execute(RecordTriggerContext recordTriggerContext) {

        for (RecordChange inputRecord : recordTriggerContext.getRecordChanges()) {

            QueryService queryService = ServiceLocator.locate(QueryService.class);
            String queryCountry = "select id, name__v from country__v where region__c=" + "'" + region + "'";
            QueryResponse queryResponse = queryService.query(queryCountry);

                queryResponse.streamResults().forEach(queryResult -> {
                Record r = recordService.newRecord("country_brand__c");
                r.setValue("name__v", internalName + " (" + queryResult.getValue("name__v", ValueType.STRING) + ")");
                r.setValue("country__c",queryResult.getValue("id",ValueType.STRING));
                r.setValue("product__c",productId);

                RecordService recordService = ServiceLocator.locate(RecordService.class);
                recordService.batchSaveRecords(VaultCollections.asList(r)).rollbackOnErrors().execute();

            });
        }

}
Better Performing Code:
@RecordTriggerInfo(object = "product__v", name= "product_create_related_country_brand__c", events = RecordEvent.AFTER_INSERT)
public class ProductCreateRelatedCountryBrand implements RecordTrigger  {

    public void execute(RecordTriggerContext recordTriggerContext) {

        // Get an instance of the Record service
        RecordService recordService = ServiceLocator.locate(RecordService.class);
        List<Record> recordList = VaultCollections.newList();

        // Retrieve Regions from all Product input records
        Set<String> regions = VaultCollections.newSet();
        recordTriggerContext.getRecordChanges().stream().forEach(recordChange -> {
            String regionId = recordChange.getNew().getValue("region__c", ValueType.STRING);
            regions.add("'" + regionId + "'");
        });
        String regionsToQuery = String.join (",",regions);

        // Query Country object to select countries for regions referenced by all Product input records
        QueryService queryService = ServiceLocator.locate(QueryService.class);
        String queryCountry = "select id, name__v, region__c " +
                "from country__v where region__c contains (" + regionsToQuery + ")";
        QueryResponse queryResponse = queryService.query(queryCountry);

        // Build a Map of Regions (key) and Countries (value) from the query result
        Map<String, List<QueryResult>> countriesInRegionMap = VaultCollections.newMap();
        queryResponse.streamResults().forEach(queryResult -> {
            String region = queryResult.getValue("region__c",ValueType.STRING);
            if (countriesInRegionMap.containsKey(region)) {
                List<QueryResult> countries = countriesInRegionMap.get(region);
                countries.add(queryResult);
                countriesInRegionMap.put(region,countries);
            } else
                countriesInRegionMap.putIfAbsent(region,VaultCollections.asList(queryResult));
        });

        // Go through each Product record, look up countries for the region assigned to the Product,
        // and create new Country Brand records for each country.
        for (RecordChange inputRecord : recordTriggerContext.getRecordChanges()) {

            String regionId = inputRecord.getNew().getValue("region__c", ValueType.STRING);
            String internalName = inputRecord.getNew().getValue("internal_name__c", ValueType.STRING);
            String productId = inputRecord.getNew().getValue("id", ValueType.STRING);

            Iterator<QueryResult> countries = countriesInRegionMap.get(regionId).iterator();

            while (countries.hasNext()){
                QueryResult country =countries.next();
                Record r = recordService.newRecord("country_brand__c");
                r.setValue("name__v", internalName + " (" + country.getValue("name__v", ValueType.STRING) + ")");
                r.setValue("country__c", country.getValue("id", ValueType.STRING));
                r.setValue("product__c", productId);
                recordList.add(r);
            }

        }

        // Save the new Country Brand records in bulk. Rollback the entire transaction when encountering errors.
        recordService.batchSaveRecords(recordList).rollbackOnErrors().execute();
    }
}

Actions

Through the Vault Java SDK, you can create custom actions. These actions execute through the UI or API when invoked by a user.

Unlike triggers, uploaded action code does not execute on its own. Developers or Vault Admins must take an additional step and configure an action in Vault to use the uploaded code.

Record Actions

Custom actions for records, called record actions, are invoked by a user on a specific record from the UI or API. Learn more about Object User Actions in Vault Help.

At this time, user action is the only supported record action usage. Note that unlike document user actions, record user actions are configured at the object-level, rather than the lifecycle level.

Implementing Record Action

In order to implement a custom record action, a class must implement the two methods required by the RecordAction interface: isExecutable() and execute().

The @RecordActionInfo class annotation is also required to indicate this class is an action.

The following is a basic skeleton of a record action:

package com.veeva.vault.custom.actions;

@RecordActionInfo(label="Say Hello", object="hello_world__c")
public class Hello implements RecordAction {

    // This action is available for configuration in Vault Admin.
    public boolean isExecutable(RecordActionContext context) {
        return true;
    }

    public void execute(RecordActionContext context) {
        // action logic goes here
    }
}

Document Actions

Along with the standard document actions you can configure in the Vault UI, you can create custom document actions using the Vault Java SDK to automate more specific business processes. Unlike document actions created through the Vault UI, custom document actions can run multiple sequential actions within one action, and can execute more complex conditional logic.

You can configure the following types of custom document actions:

You can find examples of document actions in our Sample Code.

Implementing Document Actions

A document action is a Java class that implements the DocumentAction interface and has the @DocumentActionInfo annotation.

The DocumentAction interface requires implementing the following two methods: isExecutable() and execute().

The @DocumentActionInfo class annotation requires the following:

The following is a basic skeleton of a document action:

package com.veeva.vault.custom.actions;

@DocumentActionInfo(label="Set Expiration", usage="LIFECYCLE_ENTRY_ACTION")
public class SetDocumentExpiration implements DocumentAction {

    // This action is available for configuration in Vault Admin.
    public boolean isExecutable(DocumentActionContext context) {
        return true;
    }

    public void execute(DocumentActionContext context) {
        // action logic goes here
    }
}

Debugging Actions

To debug action code, developers must deploy the code to Vault and configure a usage for the action. When the configured action is invoked through Vault, execution passes to the debugger to allow developers to step through the code. The code in your debugger will override any deployed code, allowing developers to test changes to a deployed action. Note that the class you wish to develop and debug must have the same package, class name, and annotation as the deployed code.

Request Context

The RequestContext interface provides access to the context of a transaction. This allows you to pass data from the initial firing to subsequent triggers within the same transaction.

For example, a request initiated from an action may cause other triggers to execute, including nested triggers. As triggers execute through a thread of execution, some triggers along the execution sequence may need context from previously executed logic. This is especially needed when executing the same trigger multiple times within the same request transaction: for example, once for the current record, then again in nested triggers on the same object. The subsequent firing may need to run different logic than the initial firing.

You can share some data using RequestContext to set a named context and get the named context value anywhere along a request transaction in downstream triggers. The maximum amount of data you can share is 5 MB per transaction request. The value you can set must be one of the value types specified in RequestContextValueType, or an implementation of the RequestContextValue interface.

Note that a value stored in RequestContext requires an explicit getValue and setValue whenever you want to change the value. If you change the state of your RequestContextValue object, you must call setValue to put the mutated object back into the RequestContext.

To properly debug uses of RequestContext, all code that uses the context should be in the debugger. If not, the value of the context may be inaccurate, especially if you have a context value set by code already deployed to Vault and that code is absent from your debugger. Any code that uses getValue and setValue should be in your debugger.

Using RequestContext

The following is an example of using RequestContext. First, a trigger named ProductBuildRegionMap sets up a RequestContext:

@RecordTriggerInfo(object = "product__v", name = "product_region__c", events = {RecordEvent.BEFORE_INSERT})
public class ProductBuildRegionMap implements RecordTrigger {
   public void execute(RecordTriggerContext recordTriggerContext) {
       List<String> productRegions = VaultCollections.newList();

       recordTriggerContext.getRecordChanges().stream().forEach(recordChange ->
          productRegions.add(recordChange.getNew().getValue("region__c",ValueType.STRING)));

       // Create a new region country map for all regions in this request and set the map into the "regionCountryMap"
       // request context, so that this map can be used by other triggers that execute after this one.
       RegionCountryMap regionCountryMap = new RegionCountryMap(productRegions);

       RequestContext.get().setValue("regionCountryMap", regionCountryMap);
   }
}

Next, we have a User-Defined Class which implements RequestContextValue; an instance of this class is what ProductBuildRegionMap stores in the request context.

@UserDefinedClassInfo(name = "regioncountrymap__c")
public class RegionCountryMap implements RequestContextValue {

   private Map<String, List<String>> regionCountryMap = VaultCollections.newMap();

   RegionCountryMap (List<String> productRegionId){
       // Constructor to create a regionCountryMap <region, countries> by querying the Region object
       // to retrieve countries in the provided regions.
   }
   Map<String, List<String>> getMap () {
       return regionCountryMap;
   };
}

Lastly, our ProductCreateRelatedBrands trigger executes after the ProductBuildRegionMap trigger. It retrieves the RequestContextValue and modifies the Map inside the context.

@RecordTriggerInfo(object = "product__v", name = "product_region__c", events = {RecordEvent.AFTER_INSERT})
public class ProductCreateRelatedBrands implements RecordTrigger {

   public void execute(RecordTriggerContext recordTriggerContext) {
       // Get the RegionCountryMap from the request context
       RegionCountryMap regionCountryMap = RequestContext.get()
               .getValue("regionCountryMap", RegionCountryMap.class);
       // Get the map of regions and countries
       Map<String, List<String>> map = regionCountryMap.getMap();

       // ... create specific brands for each country in a region

       // Remove a region from the map if brands for that region already exist, and set the map back into the
       // request context in order to update the request context with the changed map.
       map.remove("Asia");

       RequestContext.get().setValue("regionCountryMap", regionCountryMap);
   }
}

User-Defined Classes

User-defined classes (UDCs) allow you to implement reusable logic in a single class, rather than repeating the same logic across multiple triggers on different objects. User-defined classes are then used by Vault extensions, such as triggers and actions. Developers can also use UDCs as objects to store complex data.

You can use UDCs to apply object-oriented solution designs by having interfaces, abstract classes, and class implementations in separate UDCs.

Unlike Vault extensions, which execute when a user or the System initiates an operation, UDCs execute only when called from other classes.

UDCs can use any of the following libraries and services:

Creating User-Defined Classes

A user-defined class is a Java class which uses the @UserDefinedClassInfo class annotation. For example, the following illustrates a user-defined class ValidationUtils:

@UserDefinedClassInfo
public class ValidationUtils {
    boolean isNameFormatted(Record record) {
        // A correctly formatted name is under 100 characters and does not start with the "BAC" prefix.
        String name = record.getValue("name__v", ValueType.STRING);
        return name != null && name.length() < 100 && !name.startsWith("BAC");
    }
}

Using User-Defined Classes

You can use a UDC in any Vault Java SDK extension, such as a trigger or an action class, as well as other UDCs. The following example illustrates a trigger using the ValidationUtils user-defined class:

@RecordTriggerInfo(object="product__v", events={RecordEvent.BEFORE_INSERT})
public class Example implements RecordTrigger {
    public void execute(RecordTriggerContext recordTriggerContext) {
        ValidationUtils validationUtils = new ValidationUtils();
        for (RecordChange inputRecord :
            recordTriggerContext.getRecordChanges()) {
                if (!validationUtils.isNameFormatted(inputRecord)){
                    // set Name field to format required for this object
                }
            }
    }
}

While debugging a trigger or action, you can step into UDCs to debug them. Because UDCs are not directly executable, you must step into them when calling them from an extension class.

Limits and Restrictions

While developing Vault extensions is essentially programming in Java, there are some limits and restrictions to ensure your code runs securely in Vault.

Limits

Limits are enforced in Vault at runtime to protect against excessive use that may impact overall Vault performance. The System tracks custom code execution and terminates any code execution that has reached a limit. When this occurs, the transaction is rolled back and a runtime error is presented to the user, informing them to contact an Admin for assistance.


The following are some limits in executing Vault Java SDK code:

These limits are not enforced during debugging. However, service calls that execute on the server, such as QueryService and RecordService calls, are still tracked and enforced. If these service calls cause a limit violation error, the custom code will also fail when deployed to Vault.

Restrictions

Restrictions are checked when code is uploaded to Vault to prevent unsafe use of Java. Validation occurs when sending the source code to Vault, and an error message indicating the violation is returned to the user. Code that does not pass validation will not be deployed to Vault.

The following are some examples of restrictions:

It’s important to keep restrictions in mind when developing Vault extensions, especially since these restrictions are not enforced while debugging your code. Enforcement only occurs when uploading code to Vault.

Debugger Restrictions

Only users with the Vault Owner security profile can attach a debug session to a vault. Learn more about managing security profiles in Vault Help.

Only vaults in the Sandbox domain can have debug sessions attached.

Limits:

JDK Whitelist

You may only use whitelisted JDK classes and interfaces in your Vault extensions. All other libraries in the JDK are not allowed.

Related Permissions

The following Vault permissions control actions for deploying and managing code in Vault. Learn more about permission sets in Vault Help.

Permission Label Controls
Vault Owner security profile You must have the standard Vault Owner security profile to connect to the Vault Java SDK Debugger.
Admin: Vault Java SDK: Read Ability to read Vault Java SDK code; you’ll need this permission to deploy code, validate code, export a VPK which contains code, or download source code.
Admin: Vault Java SDK: Create Ability to create Vault Java SDK code; you’ll need this permission to deploy code, update existing code, or enable and disable extensions.
Admin: Vault Java SDK: Edit Ability to edit Vault Java SDK code; you’ll need this permission to deploy code, update existing code, or enable and disable extensions.
Admin: Vault Java SDK: Delete Ability to delete Vault Java SDK code; you’ll need this permission to deploy code or delete existing source code.
Admin: Migration Packages: Deploy Ability to deploy packages; you’ll need this permission on the target vault to deploy code.
Objects: Inbound Package: Read Ability to view the Inbound Package object. You must also have Read permission on all fields for this object.
Objects: Inbound Package Step: Read Ability to view the Inbound Package Step object. You must also have Read permission on all fields for this object.
Objects: Inbound Package Data: Read Ability to view the Inbound Package Data object. You must also have Read permission on all fields for this object.
Objects: Inbound Package Component: Read Ability to view the Inbound Package Component object. You must also have Read permission on all fields for this object.

Data Type Map

When working with Vault field values in Java SDK, the data type configured in a Vault field must be mapped to a Java data type in order to manipulate the field value in Java code.

You can learn more about Vault object and document fields in Vault Help.

The com.veeva.vault.sdk.api.core.ValueType interface also provides this mapping.

Vault Field Type ValueType Returned Data Type
Text ValueType.STRING String
Yes/No ValueType.BOOLEAN Boolean
Number ValueType.NUMBER BigDecimal
Date ValueType.DATE LocalDate
DateTime ValueType.DATETIME ZonedDateTime
Picklist ValueType.PICKLIST_VALUES List
Object ValueType.STRING String
Parent ValueType.STRING String
Lookup Same as Source Depends on ValueType
ID ValueType.STRING String
Multi-Value References (Documents only) ValueType.REFERENCES List
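For illustration, reading several field types from a trigger’s context record could look like the following fragment; apart from name__v, the field names are hypothetical, and the fragment is assumed to run inside a trigger’s execute() loop where recordChange is the current RecordChange.

// Read field values using the ValueType that matches each Vault field type.
Record newRecord = recordChange.getNew();

String name = newRecord.getValue("name__v", ValueType.STRING);                        // Text
BigDecimal count = newRecord.getValue("count__c", ValueType.NUMBER);                  // Number
LocalDate startDate = newRecord.getValue("start_date__c", ValueType.DATE);            // Date
ZonedDateTime reviewedAt = newRecord.getValue("reviewed_at__c", ValueType.DATETIME);  // DateTime
List<String> statuses = newRecord.getValue("status__c", ValueType.PICKLIST_VALUES);   // Picklist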

Troubleshooting Runtime Errors

Typically, most errors are discovered and fixed by debugging and testing code during development. However, in some cases, errors can occur at runtime, causing custom code execution to terminate. When these errors occur, a developer needs to investigate the cause and fix the code accordingly.

Generally, there are three types of runtime errors:

All three types of errors result in immediate termination of code execution and the transaction is rolled back. The end user is informed in the UI or API response.


As shown in the message above, end-users are directed to an Admin for assistance. The caused by message detail is intended for developers to identify the cause of the error. In some cases, the error message is all a developer needs to fix the error. If a developer needs more information, they can check the Debug Log to further troubleshoot the issue.

Debug Log

You can view the Debug Log through the Vault UI in Admin > Logs. The debug log captures Vault Java SDK code execution details. Every request initiated by a user generates a log file.

The log file captures the following information:

For example, a trigger error in the debug log may look like this:

2017-11-29 05:57:39,992 Recordtrigger.trigger_name__c INFO *****Start Execution:[com.veeva.vault.custom.triggers.HelloWorld]*****
2017-11-29 05:57:39,994 Recordtrigger.trigger_name__c INFO *****End Execution:[com.veeva.vault.custom.triggers.HelloWorld]*****
2017-11-29 05:57:39,997 Recordtrigger.trigger_name__c ERROR ErrorId[ef6d7eff-894c-49c7-9422-f86a534f1ccb] 
java.lang.Throwable: Vault Java SDK Error:The field [product_nome__v] does not exist for object [hello_world__c]
    at ...(Unknown Source)
    at com.veeva.vault.custom.triggers.HelloWorld.execute(source:24)
    at ...(Unknown Source)

By default, the Vault Owner and System Admin Security Profiles have permission to view the Debug Log and set up debug log sessions for a particular user. Note that no more than 20 users per vault can create debug logs.

Adding Custom Debug Log Messages

In some cases, developers may want to send a message directly to the Debug Log to help troubleshoot issues. The LogService allows developers to do just that. This is especially helpful when troubleshooting an issue that only occurs at runtime, meaning it’s not reproducible in debugging. For example, a variable value can be written to the debug log at runtime using the LogService.
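A minimal sketch of such a message, assuming it is written from inside a trigger’s execute() loop where inputRecord is the current RecordChange:

// Locate the LogService and write a variable value to the request's debug log at runtime.
LogService logService = ServiceLocator.locate(LogService.class);
String regionId = inputRecord.getNew().getValue("region__c", ValueType.STRING);
logService.info("Region for the current Product record: " + regionId);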

Refer to the Javadocs for details about using this service.

Audit Logs

From the Admin > Logs area in Vault, you can view a history of actions within your vault, including actions performed with the Vault Java SDK. You can learn more about the Vault Admin Logs in Vault Help.

System Audit History

The System Audit History page displays vault-level configuration and settings changes, which includes managing vault extensions. For example, uploading a new trigger to your vault.


Object Record Audit History

Records affected indirectly by triggers through the use of RecordService have an audit entry identifying the change as System on behalf of the user initiating the request. This allows an Admin to audit changes which may not have been made directly by users, but rather by code in triggers. When a user delegates access to another user, the audit will show System on behalf of the delegating user.

In addition, when a workflow is started indirectly by code, the audit log also indicates System on behalf of the user initiating the request. This is important because a user may create a new record that fires a trigger to start a workflow. In this case, the end user did not start the workflow; they created a new record. The audit log would then indicate that the System started the workflow on their behalf.


Document Audit History

The Document Audit History page displays document-related events, including events triggered through the Vault Java SDK. Documents affected by Vault extensions through the use of DocumentService have an audit entry identifying the change as System on behalf of the user initiating the request. When a user delegates access to another user, the audit will show System on behalf of the delegating user.

Sample Code

We’ve created sample code for various use cases. Feel free to use these as starting points for your own custom Vault extensions.

These projects are available at the Veeva GitHub™.

If you need help, feel free to post your questions in the Developer forum.

Project Description
Service Basics Services demonstrated in this project include:
  • RecordService
  • QueryService
Object Records Use cases in this project include:
  • Field defaulting
  • Field validation
  • Required fields
  • Create related records
  • Initialize workflows
  • Update roles
Documents Use cases in this project include:
  • Update fields on related documents
  • Create related object records
  • Send notifications

Tools

We’ve created the following tools to assist developers in utilizing the Vault Java SDK.

Tool Description
Maven Plugin Provides commands to package, validate, import, and deploy Vault Java SDK source code through the use of Maven build goals.