In this tutorial we'll see how to use Scheduler to create, test and automate a job to securely upsert transactions into LUSID from a CSV file.
Note: To complete this tutorial, you must have suitable access control permissions. This can most easily be achieved by assigning your LUSID user the built-in `lusid-administrator` role. This should already be the case if you are the domain owner.
A job must be based on a Docker image; this tutorial assumes you are familiar with Docker. The code in the Docker image can call any endpoint in the core LUSID API, or in the Identity or Access APIs. We'll need to authenticate in the standard way by obtaining an API access token.
In this tutorial, we'll call the LUSID `UpsertTransactions` API endpoint using the LUSID Python SDK. We'll pull the credentials the SDK needs into the job from the Configuration Store, and also add a pair of command line arguments that let the user nominate a particular transaction portfolio to upsert to at runtime.
Step 1: Creating a Docker image containing a suitable script
Imagine we have a feed of transactions from a data provider in CSV format:
```
fund,instrument,asset,figi,txn_id,txn_type,trade_date,settle_date,units,price,currency
Growth,GBP,Cash,,G001,FundsIn,2022-03-01,2022-03-01,100000,1.00,GBP
Growth,BP,Equity,BBG000C05BD1,G002,BuyEQ,2022-03-01,2022-03-03,10000,2.05,GBP
Growth,Tesco,Equity,BBG000BF46Y8,G003,BuyEQ,2022-03-01,2022-03-03,8000,3.05,GBP
Growth,Glencore,Equity,BBG001MM1KV4,G004,BuyEQ,2022-03-01,2022-03-03,7000,4.05,GBP
```
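Before upserting, it can be worth validating the feed; for example, every non-cash row needs a FIGI or LUSID cannot resolve the instrument. A minimal sketch using pandas (the column names match the CSV above; the validation rules themselves are an assumption added for illustration, not part of the tutorial):

```python
import io

import pandas

# A small extract of the feed above, inlined for illustration:
csv_data = io.StringIO(
    "fund,instrument,asset,figi,txn_id,txn_type,trade_date,settle_date,units,price,currency\n"
    "Growth,GBP,Cash,,G001,FundsIn,2022-03-01,2022-03-01,100000,1.00,GBP\n"
    "Growth,BP,Equity,BBG000C05BD1,G002,BuyEQ,2022-03-01,2022-03-03,10000,2.05,GBP\n"
)
df = pandas.read_csv(csv_data)

# Every non-cash row must carry a FIGI:
missing_figi = df[(df["asset"] != "Cash") & (df["figi"].isna())]
assert missing_figi.empty, f"Rows missing a FIGI: {missing_figi['txn_id'].tolist()}"

# Total consideration is units * price, as the upsert script computes:
df["consideration"] = df["units"] * df["price"]
print(df[["txn_id", "consideration"]])
```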
A `transactions.py` Python script to upsert transactions into LUSID from this CSV file using the LUSID Python SDK might look like the one below.
We'll need to authenticate to the SDK but we don't want to store credentials in the Docker image. So instead we'll pass them securely into the job from the Configuration Store as environment variables at runtime; see Step 2.
```python
# Upsert transactions into LUSID from a 'transactions.csv' file in the same directory as this script.
import argparse

import lusid
import pandas
import pytz
from dateutil.parser import parse

# Authenticate using env vars passed into the job at runtime:
config = lusid.utilities.ApiConfigurationLoader.load()
api_factory = lusid.utilities.ApiClientFactory(token=lusid.utilities.RefreshingToken(config))

# Build the transaction portfolios API:
transaction_portfolios_api = api_factory.build(lusid.api.TransactionPortfoliosApi)

# Handle command line args passed into the job at runtime:
parser = argparse.ArgumentParser()
parser.add_argument("--scope", help="Specify a transaction portfolio scope")
parser.add_argument("--code", help="Specify a transaction portfolio code")
args = parser.parse_args()

# Get transaction data out of the CSV file:
transactions_file = "transactions.csv"
transactions = pandas.read_csv(transactions_file)

# Create one TransactionRequest object per transaction:
transactions_request = []
for row, txn in transactions.iterrows():
    if txn["asset"] == "Cash":
        instr_identifier = {"Instrument/default/Currency": txn["currency"]}
    else:
        instr_identifier = {"Instrument/default/Figi": txn["figi"]}
    transactions_request.append(
        lusid.models.TransactionRequest(
            transaction_id=txn["txn_id"],
            type=txn["txn_type"],
            instrument_identifiers=instr_identifier,
            transaction_date=pytz.UTC.localize(parse(txn["trade_date"])).isoformat(),
            settlement_date=pytz.UTC.localize(parse(txn["settle_date"])).isoformat(),
            units=txn["units"],
            transaction_price=lusid.models.TransactionPrice(price=txn["price"], type="Price"),
            total_consideration=lusid.models.CurrencyAndAmount(
                amount=txn["units"] * txn["price"], currency=txn["currency"]
            ),
        )
    )

# Upsert to the transaction portfolio, passing in command line args:
transaction_portfolios_api.upsert_transactions(
    scope=args.scope, code=args.code, transaction_request=transactions_request
)

# Write a message to the Scheduler console:
print(f"{len(transactions_request)} transactions added to the {args.scope}/{args.code} portfolio")
```
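The argument-handling portion of the script can be exercised in isolation to see how runtime values arrive; a standalone sketch (the scope and code values here are purely illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--scope", help="Specify a transaction portfolio scope")
parser.add_argument("--code", help="Specify a transaction portfolio code")

# Scheduler passes arguments using the 'equals' separator, e.g. --scope=Finbourne-Examples:
args = parser.parse_args(["--scope=Finbourne-Examples", "--code=Global-Equity"])
print(f"Upserting to {args.scope}/{args.code}")
```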
The Dockerfile to create a suitable image for this Python script might look like the one below. Note the base image (for example `python:3-slim` or `python:3-alpine`) must contain no critical or high vulnerabilities in order to pass AWS gate checks. Read more on troubleshooting vulnerabilities.
```dockerfile
FROM python:3-slim
RUN apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*
RUN pip install lusid-sdk pandas pytz python-dateutil
COPY transactions.py transactions.csv ./
ENTRYPOINT ["python3", "transactions.py"]
```
To build this Docker image, you might run a command like this:
```
docker build -t lusid-upsert-transactions-image:latest .
```
You could test the image works locally before uploading it by upserting transactions to one of the example portfolios provided with LUSID. Note the script authenticates via the environment variables described in step 2, so when running locally you must supply these yourself, for example with repeated `-e` flags or an `--env-file` (here `credentials.env` is a hypothetical local file containing those variables):

```
docker run --env-file credentials.env lusid-upsert-transactions-image --scope=Finbourne-Examples --code=Global-Equity
```
Step 2: Uploading LUSID credentials to the Configuration Store
Every call made to a LUSID API must be authorised by an API access token.
Note: If your job does not exercise the core LUSID or the Identity or Access APIs (in other words, it performs a non-LUSID operation), you can skip this step.
The LUSID SDKs have helper classes that automate the process of obtaining an API access token and refreshing it upon expiry. To enable an SDK to do this, we'll need to assemble the following details and pass them into the job as environment variables at runtime:
Our LUSID account username and password
A client ID and client secret pair
The dedicated Okta token URL for our LUSID domain
To see how to assemble this information, read this article.
We can pass the less-sensitive account username, client ID and dedicated token URL directly into the job. However, the password and client secret are sensitive: the only secure way to pass them in is to upload them in advance to the Configuration Store and extract them at runtime. See how to do this.
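At runtime, Scheduler resolves each Configuration Store reference and injects the secret into the container as an ordinary environment variable, which the SDK then reads. A minimal sketch of a defensive check a script could run before authenticating (the `FBN_*` variable names follow the LUSID Python SDK convention; treat them as an assumption if your setup differs):

```python
import os

# Environment variables the LUSID Python SDK expects (FBN_* names assumed):
REQUIRED = [
    "FBN_TOKEN_URL", "FBN_USERNAME", "FBN_PASSWORD",
    "FBN_CLIENT_ID", "FBN_CLIENT_SECRET", "FBN_LUSID_API_URL",
]

def missing_credentials(environ=os.environ):
    """Return the names of any required credential variables that are absent or empty."""
    return [name for name in REQUIRED if not environ.get(name)]

# Failing fast gives a clearer Scheduler console message than a raw SDK auth error:
if missing_credentials():
    print(f"Missing environment variables: {missing_credentials()}")
```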
For the purposes of this tutorial, we'll assume we have created a configuration set in our Personal area of the Configuration Store containing two items, one for the account password and one for the client secret:
Step 3: Creating a job
To start the process of creating a new job:
Sign into the LUSID web app and select Jobs & Scheduling > Jobs from the left-hand menu:
On the Jobs dashboard, click the Create job button (top right).
Specify a Scope and Code that together uniquely identify the job in your LUSID domain:
Uploading your Docker image
First, follow these instructions to upload your Docker image to the FINBOURNE AWS store.
Upon returning to this screen, you should be able to select your image from the Image name and Version dropdowns:
Note it is not possible to select an image with any critical or high vulnerabilities. Read more on troubleshooting Docker image vulnerabilities.
Nominating a transaction portfolio using command line arguments
On the Arguments screen, you can (optionally) define command line arguments to pass in to your job at runtime.
In our example, we'll add two arguments, one to pass in a scope and one to pass in a code (choose the equals command line separator when doing this). Both should be mandatory, since transactions must upsert to a transaction portfolio. We'll nominate a default portfolio (`Finbourne-Examples/UK-Equities`) and allow a user to override this at runtime:
Authenticating to the LUSID Python SDK using environment variables
On the Arguments screen, you can (optionally) define environment variables to pass in to your job at runtime.
In our example, since we're exercising the LUSID API via the LUSID Python SDK, we'll use this facility to pass in the credentials assembled in step 2, so the SDK can obtain an API access token.
We need to define the following environment variables:
| Variable name | Source of the value | Data type | Example value |
| --- | --- | --- | --- |
| `FBN_PASSWORD` | Configuration Store | Configuration | `config://...` |
| `FBN_USERNAME` | LUSID account username | Text | |
| `FBN_CLIENT_SECRET` | Configuration Store | Configuration | `config://...` |
| `FBN_CLIENT_ID` | Client ID of the application | Text | |
| `FBN_LUSID_API_URL` | LUSID domain URL | Text | |
| `FBN_TOKEN_URL` | Dedicated Okta token URL for LUSID domain | Text | |
To pull in the account password and client secret from the Configuration Store, select Configuration from the Data type dropdown and click in the Default value key area to be guided to select the appropriate configuration set and item (the `config://...` syntax is automatically generated):
Once all the relevant arguments have been added, the job is ready to be created. For now, the Resources tab can be safely skipped. Click Save to finish creating the job.
Step 4: Running the job and examining results
You can run the job manually at any time by selecting the Run icon on the Jobs dashboard. The Run a job screen prompts you to confirm or override values for mandatory arguments, if any are defined.
To examine the status of the job, open the History dashboard:
If the Job Status column records Success, click the link in the Run ID column to examine the results on the Details dashboard (you may need to re-arrange columns to see them as they are here).
On the Details dashboard, if the job writes information to the console you can examine it on the Console Output tab:
If the Job Status column records Failure, see Troubleshooting.
Step 5: Creating a schedule for automation
To create a cron-like schedule for your job so that it runs automatically:
Navigate to the Schedules dashboard and click the Create schedule button (top right).
On the Specify job screen, choose the Job to automate from the dropdown list, and a Scope and Code that together uniquely identify the schedule (note, this is not the same as the scope and code for the job itself):
On the Arguments screen, confirm the default values for any command line arguments and environment variables passed in to the job, or override the defaults.
On the Triggers screen, select Time based as the Trigger type and configure the trigger as one of:
Daily, to trigger the schedule at a particular hour every day, or on particular days of the week.
Hourly, to trigger the schedule at a particular minute each hour, either every day or only on particular days of the week.
A Cron expression adhering to this standard.
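For reference, assuming a conventional five-field cron dialect (minute, hour, day of month, month, day of week), expressions like the following cover the common cases; these are illustrative only, so consult the linked standard for the exact syntax Scheduler accepts:

```
30 18 * * 1-5    # 18:30 Monday to Friday, in the schedule's timezone
0 * * * *        # on the hour, every hour
0 6 1 * *        # 06:00 on the first day of each month
```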
Step 6: Troubleshooting a failed job
Read more on jobs monitoring and troubleshooting a failed job.