In this tutorial, we’ll see how to use Scheduler to create and automate a daily post-trade compliance run.
You can create a schedule to trigger a post-trade compliance run at a chosen time daily. You can add parameters to the schedule that allow you to change which compliance rules or portfolios to include in the run.
A post-trade compliance run must be created as a job based on a Docker image. This tutorial assumes you are familiar with Docker.
Step 1: Creating a Docker image containing a suitable script
The first step is to create a Python script using the LUSID Python SDK to perform a compliance run.
A main.py Python script to perform a compliance run might look like the one below.
We’ll need to authenticate to the SDK but we don’t want to store credentials in the Docker image. So instead we’ll pass them securely into the job from the Configuration Store as environment variables, along with a recipe, compliance rule scope and run scope (step 2).
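Note the recipe is passed in as a single scope/code string that the script splits into its two parts. A minimal sketch of this convention (the value below is hypothetical; in the real job, Scheduler injects RECIPE_ADDRESS at runtime):

```python
import os

# Hypothetical value; in the real job, Scheduler injects this at runtime
os.environ["RECIPE_ADDRESS"] = "MyRecipeScope/MyComplianceRecipe"

# Split the single scope/code string into its two parts
recipe_scope, recipe_code = os.environ["RECIPE_ADDRESS"].split("/")
print(recipe_scope)  # MyRecipeScope
print(recipe_code)   # MyComplianceRecipe
```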
# Import packages
import os

import lusid
from lusid import (
    SyncApiClientFactory,
    ArgsConfigurationLoader,
    EnvironmentVariablesConfigurationLoader,
)


def run_compliance(
    api_factory: SyncApiClientFactory,
    is_pre_trade: bool,
    recipe_address: str,
    rule_scope: str,
    run_scope: str,
) -> lusid.ComplianceRunInfoV2:
    # Initialise the compliance run
    pre_or_post = "Pre" if is_pre_trade else "Post"
    print(f"Running {pre_or_post}-Trade Compliance")
    compliance_api = api_factory.build(lusid.ComplianceApi)
    recipe_scope, recipe_code = recipe_address.split("/")
    try:
        # Run compliance
        compliance_response = compliance_api.run_compliance(
            run_scope=run_scope,
            rule_scope=rule_scope,
            is_pre_trade=is_pre_trade,
            recipe_id_scope=recipe_scope,
            recipe_id_code=recipe_code,
        )
    except Exception as e:
        print(f"Error running {pre_or_post}-Trade Compliance: {e}")
        raise
    print(f"Completed {pre_or_post}-Trade Compliance at: "
          f"{compliance_response.completed_at.strftime('%Y-%m-%d %H:%M:%S')}")
    return compliance_response


def main():
    config_loaders = (
        ArgsConfigurationLoader(app_name="PostTradeComplianceRun"),
        # Expects the FBN_ACCESS_TOKEN and FBN_LUSID_URL (e.g. 'https://domain.lusid.com/api')
        # environment variables
        EnvironmentVariablesConfigurationLoader(),
    )
    api_factory = SyncApiClientFactory(config_loaders=config_loaders)

    recipe_address = os.getenv("RECIPE_ADDRESS")
    rule_scope = os.getenv("RULE_SCOPE")
    run_scope = os.getenv("RUN_SCOPE")
    assert recipe_address is not None, "RECIPE_ADDRESS must be set"
    assert rule_scope is not None, "RULE_SCOPE must be set"
    assert run_scope is not None, "RUN_SCOPE must be set"

    # Run post-trade compliance
    run_compliance(
        api_factory=api_factory,
        is_pre_trade=False,
        recipe_address=recipe_address,
        rule_scope=rule_scope,
        run_scope=run_scope,
    )


if __name__ == "__main__":
    main()
The Dockerfile to create a suitable image for this Python script might look like the one below. Note the base image (for example debian:12-slim) must contain no critical or high vulnerabilities to pass AWS gate checks. Read more on troubleshooting vulnerabilities.
FROM debian:12-slim AS base
WORKDIR /app
RUN apt-get update && apt-get install -y python3 python3-pip python3-venv
RUN python3 -m venv /opt/venv
RUN /opt/venv/bin/pip install --no-cache-dir lusid-sdk==2.1.608
COPY main.py .
FROM gcr.io/distroless/cc-debian12 AS temp
COPY --from=base /usr/bin/c_rehash /usr/bin/openssl /usr/bin/pdb3 /usr/bin/pdb3.11 /usr/bin/py3clean /usr/bin/py3compile /usr/bin/py3versions /usr/bin/pydoc3 /usr/bin/pydoc3.11 /usr/bin/pygettext3 /usr/bin/pygettext3.11 /usr/bin/python3 /usr/bin/python3.11 /usr/bin/
COPY --from=base /usr/lib/binfmt.d /usr/lib/ssl /usr/lib/valgrind /usr/lib/ /usr/lib/
COPY --from=base /usr/share/applications/ /usr/share/zoneinfo/ /usr/share/lintian/ /usr/share/binfmts/ /usr/share/python3 /usr/share/readline/ /usr/share/
COPY --from=base /opt/venv/ /opt/venv/
FROM temp AS final
COPY --from=base /app /app
WORKDIR /app
ENTRYPOINT ["/opt/venv/bin/python3", "main.py"]
To build this Docker image, you might run a command like this:
docker build -t lusid-compliance-image:latest .
You could test the image works locally before uploading it by setting RECIPE_ADDRESS, RULE_SCOPE and RUN_SCOPE (plus the FBN_ACCESS_TOKEN and FBN_LUSID_URL credentials) as environment variables and performing a compliance run on a test portfolio:
docker run -e FBN_ACCESS_TOKEN -e FBN_LUSID_URL -e RECIPE_ADDRESS -e RULE_SCOPE -e RUN_SCOPE lusid-compliance-image
Step 2: Uploading LUSID credentials to the Configuration Store
Every call made to a LUSID API must be authorised by an API access token.
For this tutorial, we’ll use a Personal Access Token (PAT) to set up our schedule. To enable an SDK to access our token, we'll need to assemble the following details and pass them into the job as environment variables at runtime:
The LUSID API URL for our domain
A Personal Access Token (PAT)
Refreshing tokens
Instead of a PAT, you might prefer to set up your schedule to use a refreshing token. The LUSID SDKs have helper classes that automate the process of obtaining an API access token and refreshing it upon expiry. To enable an SDK to do this, you'll need to assemble the following details and pass them into the job as environment variables at runtime:
Your LUSID account username and password
A client ID and client secret pair
The dedicated Okta token URL for your LUSID domain
To see how to assemble this information, read this article.
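The FBN_* variable names below are the ones the v2 LUSID Python SDK's EnvironmentVariablesConfigurationLoader reads; a sketch of supplying them for a refreshing token (all values are placeholders, and in a real job the secrets would be referenced from the Configuration Store rather than hard-coded):

```python
import os

# Placeholder values -- in a real job these arrive as Scheduler arguments,
# with the sensitive ones referenced from the Configuration Store
os.environ["FBN_TOKEN_URL"] = "https://myfirm.okta.com/oauth2/abc123/v1/token"
os.environ["FBN_USERNAME"] = "my-username"
os.environ["FBN_PASSWORD"] = "my-password"
os.environ["FBN_CLIENT_ID"] = "my-client-id"
os.environ["FBN_CLIENT_SECRET"] = "my-client-secret"
os.environ["FBN_LUSID_URL"] = "https://myfirm.lusid.com/api"
```

With these set, EnvironmentVariablesConfigurationLoader picks them up automatically and the SDK handles obtaining and refreshing the access token.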
We can pass the less-sensitive LUSID API URL directly into the job. However, the only way to securely pass in the PAT is to upload this credential in advance to the Configuration Store, in order to extract it at runtime. See how to do this.
Tip
You can set Block Reveal to true to allow only LUSID itself to access the value; it cannot be exposed by any user once saved. Read more.
For the purposes of this tutorial, we'll assume we have created a configuration set in our Personal keys area of the Configuration Store containing our PAT:
Step 3: Creating a job
To create a job that can run the content of the Docker image:
Sign into the LUSID web app and select Jobs & Scheduling > Jobs from the left-hand menu:
On the Jobs dashboard, click the Create job button (top right).
Specify a Scope and Code that together uniquely identify the job in your LUSID domain:
Uploading your Docker image
First, follow these instructions to upload your Docker image to the FINBOURNE AWS store.
Upon returning to this screen, you should be able to select your image from the Image name and Version dropdowns:
Note it is not possible to select an image with any critical or high vulnerabilities. Read more on troubleshooting Docker image vulnerabilities.
Authenticating to the LUSID Python SDK using environment variables
On the Arguments screen, you can (optionally) define environment variables to pass in to your job at runtime.
In our example, since we're exercising the LUSID API via the LUSID Python SDK, we'll use this facility to pass in the credentials assembled in step 2, so the SDK can obtain an API access token.
We’ll also add environment variables to pass in a recipe, rule scope, and run scope. We'll nominate a default value for each.
We need to define the following environment variables:
Variable name | Value source | Data type | Example value
---|---|---|---
FBN_ACCESS_TOKEN | Configuration Store | |
FBN_LUSID_URL | LUSID domain URL | |
RECIPE_ADDRESS | Your recipe scope/code | |
RULE_SCOPE | Your compliance rule scope | |
RUN_SCOPE | Your compliance run scope | |
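Scheduler supplies the nominated default when you don't override a variable. If you would rather the script itself also fall back to a default when a variable is missing, os.getenv accepts a fallback as its second argument (the scope names below are hypothetical):

```python
import os

# Ensure the variables are unset so the fallbacks apply (for illustration only)
os.environ.pop("RULE_SCOPE", None)
os.environ.pop("RUN_SCOPE", None)

# os.getenv's second argument is returned when the variable is not set
rule_scope = os.getenv("RULE_SCOPE", "MyComplianceRules")
run_scope = os.getenv("RUN_SCOPE", "MyComplianceRuns")
print(rule_scope)  # MyComplianceRules
print(run_scope)   # MyComplianceRuns
```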
Step 4: Running the job and examining the results
You can run the job manually at any time by selecting the Run icon on the Jobs dashboard. The Run a job screen prompts you to confirm or override values for mandatory arguments, if these are defined.
To examine the status of the job, open the History dashboard:
If the Job Status column records Success, click the link in the Run ID column to examine the results on the Details dashboard.
On the Details dashboard, if the job writes information to the console you can examine it on the Console Output tab:
If the Job Status column records Failure, see Troubleshooting.
Step 5: Creating a schedule to run the job daily
To create a cron-like schedule for your job so that it runs automatically:
Navigate to the Schedules dashboard and click the Create schedule button (top right).
On the Specify job screen, choose the Job to automate from the dropdown list, and a Scope and Code that together uniquely identify the schedule (note, this is not the same as the scope and code for the job itself):
On the Arguments screen, confirm the default values for the environment variables passed in to the job, or override the defaults.
On the Triggers screen, select Time based as the Trigger type and configure the trigger as Daily to trigger the schedule at a particular hour on particular days. For example, we want to set up our post-trade schedule to run at 11pm each day:
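For reference, a daily 11pm trigger corresponds to the cron expression 0 23 * * *. If you want to sanity-check when the next run would fire, a small stdlib sketch (this ignores time zones and is not part of Scheduler itself):

```python
from datetime import datetime, time, timedelta

def next_run(now: datetime, run_at: time = time(23, 0)) -> datetime:
    """Return the next daily 11pm run strictly after `now`."""
    candidate = datetime.combine(now.date(), run_at)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate

print(next_run(datetime(2024, 5, 1, 9, 30)))   # 2024-05-01 23:00:00
print(next_run(datetime(2024, 5, 1, 23, 30)))  # 2024-05-02 23:00:00
```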
Once the schedule is created, we simply need to wait until the next run is scheduled to take place. You can also run the schedule on demand via the Schedules dashboard by selecting Menu > Run now:
Step 6: Troubleshooting a failed job
Read more on jobs monitoring and troubleshooting a failed job.