
Luminesce SQL queries support the common functions inherited from SQLite. We have also implemented the custom functions documented below.

See the full list of valid keywords (both inherited from SQLite and proprietary to FINBOURNE).

General custom functions

Each function below is listed with an explanation and examples.
CHECK_ACCESS

Performs an access control check on any application in the FINBOURNE platform, for example:

@x = select
  check_access('Feature/Honeycomb/Execute', 'Honeycomb:stable-provider-sys-file') x,
  check_access('Insights', 'Feature/LUSID/Execute', 'Insights:accesslogs-view-all') y,
  check_access('Insights', 'Feature/LUSID/Execute', 'Insights:accesslogs-view-xxx') z;
select * from @x
THROW

Can be used conditionally to throw an error message, for example:
select iif(abc is not null, abc, throw('abc cannot be null'))

Append the following argument to throw without raising an error:

throw('abc cannot be null', 'Ignorable')
PRINT

Can be used to debug a query, give feedback to an end user running the query, or enter text into the logs. Takes the following arguments:


  • The text to print (mandatory). This may contain 'structured result formatters', standard .NET format strings with named placeholders (see examples below).
  • A severity, one of Progress (the default), Debug, Information, Warning, Error.
  • A sender (defaults to PRINT).
  • Up to 125 arguments that substitute into .NET format strings.
select print('*** Starting your process NOW!')
=>
[INF] LuminesceCli: >> Progress >> PRINT >> *** Starting your process NOW!
select print('*** Starting your process NOW! {X:00000}', 'error', 'START!!!', 42)
=>
[ERR] LuminesceCli: >> Progress >> START!!! >> *** Starting your process NOW! 00042
select print('*** Starting your process NOW! {X}', 'error', 'START!!!', 42)
=>
[ERR] LuminesceCli: >> Progress >> START!!! >> *** Starting your process NOW! 42
DAY_COUNT_DISPLAY

Converts a fractional day number to a more intuitive count of days, hours, minutes, seconds and so on. For example:
select
day_count_display(3.1234567) a,
day_count_display(0.00123) b,
day_count_display(0.00000123) c,
day_count_display(julianday('now', '+2 minutes') - julianday('now', '-1 hours', '+3 minutes', '-12 seconds')) d
=>
┌───────────┬────────┬───────┬─────────┐
│ a         │ b      │ c     │ d       │
├───────────┼────────┼───────┼─────────┤
│ 3d 2h 57m │ 1m 46s │ 106ms │ 59m 12s │
└───────────┴────────┴───────┴─────────┘
TO_DATE

Performs an explicit date conversion from a string in a non-standard format, for example:
select to_date('2022/29/03', 'yyyy/dd/MM') as aDate
IS_DATE

Returns 1 if a value is a date or datetime, otherwise 0. For example:

@x = select is_date('2024-01-19');
select * from @x
TO_UTC

Transforms a date and a TZ database name timezone to a UTC date, for example:
select to_utc('2023/01/30 10:30:11', 'US/Eastern') as aDate

See also these examples.

TO_ISO

Transforms a datetime to an ISO-formatted string, for example:
select to_iso(#2023-03-25#) as aDate

See also these examples.

FROM_UTC

Transforms a UTC date to a given TZ database name timezone, for example:
select from_utc('2023/01/30 10:30:11', 'US/Eastern') as aDate
CONVERT_TIMEZONE

Transforms a date from one TZ database name timezone to another, for example:
select convert_timezone('2023/01/30 10:30:11', 'US/Eastern', 'US/Pacific') as aDate
IS_NUMERIC

Returns 1 if a value is a long, int, double or decimal, otherwise 0. For example:

@x = select is_numeric('200.99');
select * from @x
IS_INTEGER

Returns 1 if a value is a long or an int, otherwise 0. For example:

@x = select is_integer('200');
select * from @x
EDIT_DISTANCE

Returns the Levenshtein distance between two strings, for example:

@x = select edit_distance('Microsoft', 'Google');
select * from @x
GENERATE_HASH

Creates a hash string from one or more defined columns. The parameter syntax is as follows: (hashing-algorithm, column[, ...]). You can specify one of the following hashing algorithms when using this function:


  • SHA512 (default if no hashing algorithm is specified)
  • SHA1
  • SHA256
  • SHA384
  • MD5
For example:
select generate_hash('SHA256', TradeAmount, TradePrice) from Lusid.Portfolio.Txn limit 20
CONTAINS_TOKEN

Returns 1 if a string contains the specified token (that is, whole word), otherwise 0. For example:

@data = select 'The quick brown fox jumped over the lazy dog' as myValue;
@x = select contains_token(myValue, 'quick') from @data;
select * from @x

This function is optimised for filtering data retrieved from providers supplied by FINBOURNE. For more information on filtering, see this article.

CONTAINS_TOKEN_STARTING_WITH

Returns 1 if a string contains a word starting with the specified token, otherwise 0. For example:

@data = select 'The quick brown fox jumped over the lazy dog' as myValue;
@x = select contains_token_starting_with(myValue, 'quic') from @data;
select * from @x

This function is optimised for filtering data retrieved from providers supplied by FINBOURNE. For more information on filtering, see this article.

REGEXP

Enables filtering by regular expression matches. Note this function is used as an operator and so behaves differently to the other regular expression functions below. For example:
select * from x where abc REGEXP '([A-Z]*-[0-9]*)'
REGEXP_MATCH

Returns the first portion of the search string that matches the regular expression, or null if there are no matches. The parameter syntax is as follows: (search-string, regex [, name-of-capture-group-to-return]). For example:
-- returns: ABC-123
select regexp_match('XXX: ABC-123 Something', '([A-Z]+-[0-9]*)')
select regexp_match('XXX: ABC-123 Something', '([a-zA-Z]+-[0-9]*)')
-- returns: null (due to not matching on case)
select regexp_match('XXX: ABC-123 Something', '([a-z]+-[0-9]*)')
-- returns: ABC
select regexp_match('XXX: ABC-123 Something', '(?<project>[A-Z]+)-(?<id>[0-9]*)', 'project')
REGEXP_MATCHES

Returns the portions of the search string that match the regular expression as a comma-separated list, or null if there are no matches. The parameter syntax is as follows: (search-string, regex). For example:
-- returns: ABC-123,QQQ-456
select regexp_matches('XXX: ABC-123 something QQQ-456 Something', '([A-Z]+-[0-9]*)')
REGEXP_MATCH_LOCATION

Returns the 1-based index of the first portion of the search string that matches the regular expression, or null if there are no matches. The parameter syntax is as follows: (search-string, regex [, name-of-capture-group-to-return]). For example:
-- returns: 6
select regexp_match_location('XXX: ABC-123 Something', '([A-Z]+-[0-9]*)')
select regexp_match_location('XXX: ABC-123 Something', '([a-zA-Z]+-[0-9]*)')
select regexp_match_location('XXX: ABC-123 Something', '(?<project>[A-Z]+)-(?<id>[0-9]*)')
-- returns: null (due to not matching on case)
select regexp_match_location('XXX: ABC-123 Something', '([a-z]+-[0-9]*)')
-- returns: 10
select
regexp_match_location('XXX: ABC-123 Something', '(?<project>[A-Z]+)-(?<id>[0-9]*)', 'id')
REGEXP_REPLACE

Replaces the portions of the search string that match the regular expression with the replacement value. The parameter syntax is as follows: (search-string, regex, replacement-value). For example:
-- returns: XXX: ? something ? Something
select regexp_replace('XXX: ABC-123 something QQQ-456 Something', '([A-Z]+-[0-9]*)', '?')

Statistical custom functions

Each function below is listed with an explanation, its inputs, its Luminesce SQL syntax, and the equivalent Lumipy method.

Description

The coefficient of variation is the standard deviation scaled by the mean. It is a standardised measure of the dispersion of a random variable so distributions of different scale can be compared [1]

Inputs

x (numeric): column of values to compute the coefficient of variation over.

Luminesce SQL syntax: coefficient_of_variation([x])
Equivalent Lumipy method: x.stats.coef_of_variation()
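
For example, a minimal sketch of applying the function to a column; the Lusid.Portfolio.Txn provider and its TradePrice column are reused from the GENERATE_HASH example above purely for illustration:

@txns = select TradePrice from Lusid.Portfolio.Txn limit 100;
select coefficient_of_variation([TradePrice]) as CoV from @txns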

Description

Covariance is a statistical measure of the joint variability of two random variables [1]

Inputs

x (numeric): the first column of values to compute the covariance over.
y (numeric): the second column of values to compute the covariance over.
ddof (int): delta degrees of freedom (defaults to 1). Use ddof = 0 for the population covariance and ddof = 1 for the sample covariance.

Luminesce SQL syntax: covariance([x], [y], ddof)
Equivalent Lumipy method: x.stats.covariance(y, ddof)
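
As an illustrative sketch (provider and columns borrowed from the GENERATE_HASH example above), comparing sample and population covariance:

@txns = select TradeAmount, TradePrice from Lusid.Portfolio.Txn limit 100;
select
  covariance([TradeAmount], [TradePrice], 1) as SampleCov,
  covariance([TradeAmount], [TradePrice], 0) as PopulationCov
from @txns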

Description

The empirical CDF is the cumulative distribution function of a sample. It is a step function that jumps by 1/n at each of the n data points.

This function returns the value of the empirical CDF at the given value [1]

Inputs

x (numeric): column of values to compute the empirical CDF over.
value (numeric): location in the domain of x to evaluate the empirical CDF at.

Luminesce SQL syntax: empirical_cume_dist_function([x], value)
Equivalent Lumipy method: x.stats.empirical_cdf(value)
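
For example, a minimal sketch over inline illustrative data; 2 of the 4 values are at or below 2.5, so by the definition above the result here should be 0.5:

@vals = select 1.0 as V union all select 2.0 union all select 3.0 union all select 4.0;
select empirical_cume_dist_function([V], 2.5) as EcdfAt2p5 from @vals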

Description

The Shannon entropy measures the average amount of "surprise" in a sequence of values. It can be considered a measure of variability [1]

It is calculated as

S = -sum(p_i * log(p_i))

where p_i is the probability of the ith value occurring computed from the sample (n occurrences / sample size). This function is equivalent to scipy.stats.entropy called with a single series and with the natural base [2]

Inputs

x (numeric): column of values to compute the entropy over.

Luminesce SQL syntax: entropy([x])
Equivalent Lumipy method: x.stats.entropy()

Description

The geometric mean is the multiplicative equivalent of the normal arithmetic mean. It multiplies a set of n-many numbers together and then takes the n-th root of the result [1]

Inputs

x (numeric): column of values to compute the geometric mean over.

Luminesce SQL syntax: geometric_mean([x])
Equivalent Lumipy method: x.stats.geometric_mean()

Description

The geometric standard deviation measures the variability of a set of numbers where the appropriate mean to use is the geometric one (they are more appropriately combined by multiplication rather than addition) [1]

This is computed as the exponential of the standard deviation of the natural log of each element in the set

GSD = exp(stdev(log(x)))

Inputs

x (numeric): column of values to compute the geometric standard deviation over.

Luminesce SQL syntax: exp(window_stdev(log([x])))
Equivalent Lumipy method: x.stats.geometric_stdev()

Description

The harmonic mean is the reciprocal of the mean of the individual reciprocals of the values in a set [1]

Inputs

x (numeric): column of values to compute the harmonic mean over.

Luminesce SQL syntax: harmonic_mean([x])
Equivalent Lumipy method: x.stats.harmonic_mean()

Description

The interquantile range is the difference between two different quantiles. This is a generalisation of the interquartile range where q1=0.25 and q2=0.75.

The upper quantile (q2) value must be greater than the lower quantile (q1) value.

Inputs

x (numeric): column of values to compute the interquantile range over.
q1 (float): the lower quantile value.
q2 (float): the upper quantile value.

Luminesce SQL syntax: interquantile_range([x], [q1], [q2])
Equivalent Lumipy method: x.stats.interquantile_range(q1, q2)

Description

The interquartile range is the difference between the upper and lower quartiles. It can be used as a robust measure of the variability of a random variable [1]

Inputs

x (numeric): column of values to compute the interquartile range over.

Luminesce SQL syntax: interquartile_range([x])
Equivalent Lumipy method: x.stats.interquartile_range()

Description

Kurtosis measures how much probability density is in the tails (extremes) of a sample's distribution [1]

This function corresponds to the Pearson Kurtosis measure and currently only supports sample kurtosis.

Inputs

x (numeric): column of values to compute the kurtosis over.

Luminesce SQL syntax: kurtosis([x])
Equivalent Lumipy method: x.stats.kurtosis()

Description

The lower quartile is the value that bounds the lower quarter of a dataset [1]

It is equivalent to quantile 0.25 or the 25th percentile.

Inputs

x (numeric): column of values to compute the lower quartile over.

Luminesce SQL syntax: quantile([x], 0.25)
Equivalent Lumipy method: x.stats.lower_quartile()

Description

This is a convenience function for computing the mean divided by the standard deviation. This is used in multiple financial statistics such as the Sharpe ratio and information ratio.

Inputs

x (numeric): column of values to compute the mean standard deviation ratio over.

Luminesce SQL syntax: mean_stdev_ratio([x])
Equivalent Lumipy method: x.stats.mean_stdev_ratio()

Description

The median is the value that separates the top and bottom half of a dataset [1]

It is equivalent to quantile 0.5, or the 50th percentile.

Inputs

x (numeric): column of values to compute the median over.

Luminesce SQL syntax: quantile([x], 0.5)
Equivalent Lumipy method: x.stats.median()

Description

The median absolute deviation is a measure of the variability of a random variable. Unlike the standard deviation it is robust to the presence of outliers [1]

Inputs

x (numeric): column of values to compute the median absolute deviation over.

Luminesce SQL syntax: median_absolute_deviation([x])
Equivalent Lumipy method: x.stats.median_abs_deviation()

Description

Pearson's r is a measure of the linear correlation between two random variables [1]

Inputs

x (numeric): the first series in the Pearson's r calculation.
y (numeric): the second series in the Pearson's r calculation.

Luminesce SQL syntax: pearson_correlation([x], [y])
Equivalent Lumipy method: x.stats.pearson_r(y)

Description

The quantile function of a given random variable and q value finds the value x where the probability of observing a value less than or equal to x is equal to q [1]

Inputs

x (numeric): column of values to compute the quantile over.
q (float): the quantile value. Must be between 0 and 1.

Luminesce SQL syntax: quantile([x], q)
Equivalent Lumipy method: x.stats.quantile(q)
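
For example, a minimal sketch computing the 95th percentile of a column (provider and column as in the GENERATE_HASH example above, purely for illustration):

@txns = select TradePrice from Lusid.Portfolio.Txn limit 100;
select quantile([TradePrice], 0.95) as P95 from @txns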

Description

The Root Mean Square (RMS) is the square root of the mean of the squared values of a set of values. It is a statistical measure of the spread of a random variable [1]

Inputs

x (numeric): column of values to compute the root mean square over.

Luminesce SQL syntax: root_mean_square([x])
Equivalent Lumipy method: x.stats.root_mean_square()

Description

Skewness measures the degree of asymmetry of a random variable around its mean [1]

This calculation currently only supports sample skewness.

Inputs

x (numeric): column of values to compute the skewness over.

Luminesce SQL syntax: skewness([x])
Equivalent Lumipy method: x.stats.skewness()

Description

Spearman's rho measures how monotonic the relationship between two random variables is [1]

Inputs

x (numeric): the first series in the Spearman rank correlation calculation.
y (numeric): the second series in the Spearman rank correlation calculation.

Luminesce SQL syntax: spearman_rank_correlation([x], [y])
Equivalent Lumipy method: x.stats.spearman_r(y)

Description

An implementation of standard deviation that can be used in a window.

The standard deviation measures the dispersion of a set of values around the mean [1]

This only calculates the sample standard deviation (delta degrees of freedom = 1)

Inputs

x (numeric): column of values to compute the standard deviation over.

Luminesce SQL syntax: window_stdev([x])
Equivalent Lumipy method: x.stats.stdev()

Description

The upper quartile is the value that bounds the upper quarter of a dataset [1]

It is equivalent to quantile 0.75 or the 75th percentile.

Inputs

x (numeric): column of values to compute the upper quartile over.

Luminesce SQL syntax: quantile([x], 0.75)
Equivalent Lumipy method: x.stats.upper_quartile()

Description

Computes the y intercept (alpha) of a regression line fitted to the given data [1]

Inputs

x (numeric): an expression corresponding to the independent variable.
y (numeric): an expression corresponding to the dependent variable.

Luminesce SQL syntax: linear_regression_alpha([x], [y])
Equivalent Lumipy method: x.linreg.alpha(y)

Description

Computes the standard error of the y intercept (alpha) of a regression line fitted to the given data [1]

This assumes the residuals are normally distributed and is calculated according to [2]

Inputs

x (numeric): an expression corresponding to the independent variable.
y (numeric): an expression corresponding to the dependent variable.

Luminesce SQL syntax: linear_regression_alpha_error([x], [y])
Equivalent Lumipy method: x.linreg.alpha_std_err(y)

Description

Computes the gradient of a regression line (beta) fitted to the given data [1]

Inputs

x (numeric): an expression corresponding to the independent variable.
y (numeric): an expression corresponding to the dependent variable.

Luminesce SQL syntax: linear_regression_beta([x], [y])
Equivalent Lumipy method: x.linreg.beta(y)
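
As a sketch, alpha and beta are often computed together to describe the fitted line y = alpha + beta * x; the provider and columns below are illustrative only:

@txns = select TradeAmount, TradePrice from Lusid.Portfolio.Txn limit 100;
select
  linear_regression_alpha([TradeAmount], [TradePrice]) as Alpha,
  linear_regression_beta([TradeAmount], [TradePrice]) as Beta
from @txns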

Description

Computes the standard error of the gradient (beta) of a regression line fitted to the given data [1]

This assumes the residuals are normally distributed and is calculated according to [2]

Inputs

x (numeric): the column corresponding to the independent variable.
y (numeric): the column corresponding to the dependent variable.

Luminesce SQL syntax: linear_regression_beta_error([x], [y])
Equivalent Lumipy method: x.linreg.beta_std_err(y)

Description

The Bray-Curtis distance is the elementwise sum of absolute differences between elements divided by the absolute value of their sum [1]

Inputs

x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.

Luminesce SQL syntax: braycurtis_distance([x], [y])
Equivalent Lumipy method: x.metric.braycurtis_distance(y)

Description

The Canberra distance is the elementwise sum of absolute differences between elements divided by the sum of their absolute values. It can be considered a weighted version of the Manhattan distance [1]

Inputs

x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.

Luminesce SQL syntax: canberra_distance([x], [y])
Equivalent Lumipy method: x.metric.canberra_distance(y)

Description

The Chebyshev distance is the greatest difference between dimension values of two vectors. It is equivalent to the Minkowski distance as p → ∞ [1]

Inputs

x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.

Luminesce SQL syntax: chebyshev_distance([x], [y])
Equivalent Lumipy method: x.metric.chebyshev_distance(y)

Description

The cosine distance is the cosine of the angle between two vectors subtracted from 1 [1]

Inputs

x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.

Luminesce SQL syntax: cosine_distance([x], [y])
Equivalent Lumipy method: x.metric.cosine_distance(y)

Description

The Euclidean distance is the familiar 'as the crow flies' distance. It is the square root of the sum of squared differences between the elements of two vectors [1]

Inputs

x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.

Luminesce SQL syntax: euclidean_distance([x], [y])
Equivalent Lumipy method: x.metric.euclidean_distance(y)
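
For example, a minimal sketch treating two columns as vectors (provider and columns are illustrative only):

@txns = select TradeAmount, TradePrice from Lusid.Portfolio.Txn limit 100;
select euclidean_distance([TradeAmount], [TradePrice]) as Dist from @txns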

Description

The F-score is a classifier performance metric which measures accuracy.

It is defined as the weighted harmonic mean of precision and recall scores. The beta parameter controls the relative weighting of these two metrics.

The most common value of beta is 1: this is the F_1 score (aka balanced F-score). It weights precision and recall evenly. Values of beta greater than 1 weight recall higher than precision; values less than 1 weight precision higher than recall [1]

Inputs

x (boolean): the column corresponding to the first series.
y (boolean): the column corresponding to the second series.
beta (float): the value of beta to use in the calculation.

Luminesce SQL syntax: fbeta_score([x], [y], beta)
Equivalent Lumipy method: x.metric.f_score(y, beta)

Description

The Manhattan distance (aka the taxicab distance) is the absolute sum of differences between the elements of two vectors.

It is the distance traced out by a taxicab moving along a city grid like Manhattan where the distance travelled is the sum of the sides of the squares [1]

Inputs

x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.

Luminesce SQL syntax: manhattan_distance([x], [y])
Equivalent Lumipy method: x.metric.manhattan_distance(y)

Description

The mean absolute error is the mean absolute elementwise difference between two series of values [1]

It is a common performance metric for regression models where one series is the predicted values and the other series is the observed values.

Inputs

x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.

Luminesce SQL syntax: mean_absolute_error([x], [y])
Equivalent Lumipy method: x.metric.mean_absolute_error(y)

Description

The mean fractional absolute error is the mean absolute elementwise fractional difference between two series of values. It is a scale-invariant version of the mean absolute error [1]

It is a common performance metric for regression models where one series is the predicted values and the other series is the observed values.

Inputs

x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.

Luminesce SQL syntax: mean_fractional_absolute_error([x], [y])
Equivalent Lumipy method: x.metric.mean_fractional_absolute_error(y)

Description

The mean squared error between two series of values is the mean of the squared elementwise differences [1]

It is a common performance metric for regression models.

Inputs

x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.

Luminesce SQL syntax: mean_squared_error([x], [y])
Equivalent Lumipy method: x.metric.mean_squared_error(y)

Description

The Minkowski distance is a generalisation of the Euclidean (p=2) or Manhattan (p=1) distance to other powers p [1]

Inputs

x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
p (int): the order to use in the Minkowski distance calculation.

Luminesce SQL syntax: minkowski_distance([x], [y], p)
Equivalent Lumipy method: x.metric.minkowski_distance(y, p)

Description

Precision is a classification performance metric which measures the fraction of true positive events in a set of events that a classifier has predicted to be positive. It is calculated as follows

precision = tp / (tp + fp)

where tp is the number of true positives and fp is the number of false positives.

Precision is a measure of the purity of the classifier's positive predictions. [1] It is also known as the positive predictive value and purity.

Inputs

x (boolean): the column corresponding to the first series.
y (boolean): the column corresponding to the second series.

Luminesce SQL syntax: precision_score([x], [y])
Equivalent Lumipy method: x.metric.precision_score(y)

Description

Recall is a classification performance metric which measures the fraction of positive events that are successfully predicted by a classifier. It is calculated as follows

recall = tp / (tp + fn)

where tp is the number of true positives and fn is the number of false negatives.

Recall is a measure of the efficiency of the classifier at retrieving positive events. [1] It is also known as sensitivity, hit rate, true positive rate (TPR) and efficiency.

Inputs

x (boolean): the column corresponding to the first series.
y (boolean): the column corresponding to the second series.

Luminesce SQL syntax: recall_score([x], [y])
Equivalent Lumipy method: x.metric.recall_score(y)
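
As a sketch, precision and recall can be computed side by side over two boolean columns (represented here as 1/0 values); the inline data is purely illustrative:

@outcomes = select 1 as Actual, 1 as Predicted
union all select 1, 0
union all select 0, 1
union all select 0, 0;
select
  precision_score([Actual], [Predicted]) as Precision,
  recall_score([Actual], [Predicted]) as Recall
from @outcomes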

Description

Applies an R squared calculation between two value series in this window. This is a measure of how well a regressor predicts true values.

Returns a window column instance representing this calculation.

Inputs

x (column): the first series in the calculation.
y (column): the second series in the calculation.

Luminesce SQL syntax: r_squared(x, y)
Equivalent Lumipy method: x.metric.r_squared(y) (column) or w.metric.r_squared(x, y) (window)

Description

Applies an adjusted R squared calculation between two value series in this window. This is a measure of how well a regressor predicts true values, but with a penalisation term for more predictors (inputs).

Returns a window column instance representing this calculation.

Inputs

x (column): the first series in the calculation.
y (column): the second series in the calculation.
n (integer): the number of predictors that the model has.

Luminesce SQL syntax: adjusted_r_squared(x, y, n)
Equivalent Lumipy method: x.metric.adjusted_r_squared(y, n) (column) or w.metric.adjusted_r_squared(x, y, n) (window)

Description

Gain-loss ratio is the mean positive return of the series divided by the mean negative return of the series.

Inputs

x (numeric): column of return values to compute the gain-loss ratio over.

Luminesce SQL syntax: gain_loss_ratio([x])
Equivalent Lumipy method: x.finance.gain_loss_ratio()

Description

The information ratio is the mean excess return between a return series and a benchmark series divided by the standard deviation of the excess return.

Inputs

x (numeric): column of return values.
y (numeric): column of benchmark return values.

Luminesce SQL syntax: mean_stdev_ratio([x] - [y])
Equivalent Lumipy method: x.finance.information_ratio(y)
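
For example, a minimal sketch over inline illustrative return data:

@returns = select 0.010 as FundReturn, 0.008 as BenchmarkReturn
union all select -0.002, 0.001
union all select 0.005, 0.004;
select mean_stdev_ratio([FundReturn] - [BenchmarkReturn]) as InformationRatio from @returns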

Description

Drawdown is calculated as

dd_i = (x_h - x_i)/(x_h)

where x_h is the high watermark (max) up to and including the point x_i [1]

Drawdown must be in a window. You can change the limits in the OVER statement to define how far back the calculation goes. For example, the following looks back infinitely far:

drawdown([x]) OVER(
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW EXCLUDE NO OTHERS
)

...whereas the following only looks back 90 rows:

drawdown([x]) OVER(
ROWS BETWEEN 90 PRECEDING AND CURRENT ROW EXCLUDE NO OTHERS
)

In Lumipy this would be:

import lumipy as lm  # assuming the standard lumipy import alias

w = lm.window(lower=90)
w.finance.drawdown(table.x)

Inputs

x (numeric): column of price values to compute the drawdown over.

Luminesce SQL syntax: drawdown([x])
Equivalent Lumipy method: x.finance.drawdown()

Description

Drawdown length is calculated as the number of rows between the high watermark value and the current row. [1]

Inputs

x (numeric): column of price values to compute the drawdown length over.

Luminesce SQL syntax: drawdown_length([x])
Equivalent Lumipy method: x.finance.drawdown_length()

Description

Drawdown is calculated as

dd_i = (x_h - x_i)/(x_h)

where x_h is the high watermark (max) up to and including the point x_i [1]

Max drawdown is then the maximum value of the drawdowns dd_i over the sequence of values.

This aggregation assumes that the column is in time order. This calculation will be applied before ORDER BY in SQL syntax, so you should consider turning the table containing the data into a table variable, ordering that by the time-equivalent column and then applying the method to the corresponding column in a select statement on the table variable.

Inputs

x (numeric): column of price values to compute the max drawdown over.

Luminesce SQL syntax: max_drawdown([x])
Equivalent Lumipy method: x.finance.max_drawdown()
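
Following the ordering note above, a minimal sketch that orders an illustrative price series in a table variable before aggregating; under the definition above, this sample's max drawdown is (120 - 90) / 120 = 0.25:

@prices = select 1 as T, 100.0 as Price
union all select 2, 120.0
union all select 3, 90.0
union all select 4, 110.0;
@ordered = select T, Price from @prices order by T asc;
select max_drawdown([Price]) as MaxDrawdown from @ordered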

Description

Drawdown length is calculated as the number of rows between the high watermark value and the current row. [1]

The max drawdown length is then the maximum value of the drawdown length in the time period.

This aggregation assumes that the column is in time order. This calculation will be applied before ORDER BY in SQL syntax, so you should consider turning the table containing the data into a table variable, ordering that by the time-equivalent column and then applying the method to the corresponding column in a select statement on the table variable.

Inputs

x (numeric): column of price values to compute the max drawdown length over.

Luminesce SQL syntax: max_drawdown_length([x])
Equivalent Lumipy method: x.finance.max_drawdown_length()

Description

Drawdown is calculated as

dd_i = (x_h - x_i)/(x_h)

where x_h is the high watermark (max) up to and including the point x_i [1]

Mean drawdown is then the mean value of the drawdown over the sequence of values.

This aggregation assumes that the column is in time order. This calculation will be applied before ORDER BY in SQL syntax, so you should consider turning the table containing the data into a table variable, ordering that by the time-equivalent column and then applying the method to the corresponding column in a select statement on the table variable.

Inputs

x (numeric): column of price values to compute the mean drawdown over.

Luminesce SQL syntax: mean_drawdown([x])
Equivalent Lumipy method: x.finance.mean_drawdown()

Description

Drawdown length is calculated as the number of rows between the high watermark value and the current row. [1]

The mean drawdown length is then the mean value of the drawdown length in the time period.

This aggregation assumes that the column is in time order. This calculation will be applied before ORDER BY in SQL syntax, so you should consider turning the table containing the data into a table variable, ordering that by the time-equivalent column and then applying the method to the corresponding column in a select statement on the table variable.

Inputs

x (numeric): column of price values to compute the mean drawdown length over.

Luminesce SQL syntax: mean_drawdown_length([x])
Equivalent Lumipy method: x.finance.mean_drawdown_length()

Description

Computes a returns series from a price series. Supports scaling by a simple scale factor or compounding.

Inputs

x (numeric): column of price values to compute the returns from.
interval (int): the number of time steps to use when scaling.
time_factor (float): a time scale factor for annualisation.
compound (bool): whether the time scale factor should be applied as a simple scale factor r*T or as a compounded calculation (r+1)**T

Luminesce SQL syntax: prices_to_returns([Price], 1, 1.0, 0) OVER( ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW )
Equivalent Lumipy method: x.finance.prices_to_returns(1, 1, False)
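
As a sketch, converting a short illustrative price series into simple (non-compounded) returns using the window shown in the syntax line above:

@prices = select 1 as T, 100.0 as Price
union all select 2, 102.0
union all select 3, 101.0;
select prices_to_returns([Price], 1, 1.0, 0) OVER( ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) as Ret from @prices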

Description

Computes a price series from a returns series and an initial value. Supports scaling by a simple scale factor or compounding.

Inputs

x (numeric): column of returns values to compute the prices from.
initial (float): the initial price to apply return factors to.
time_factor (float): a time scale factor for annualisation.
compound (bool): whether the time scale factor should be applied as a simple scale factor r*T or as a compounded calculation (r+1)**T

Luminesce SQL syntax: returns_to_prices([rets], 400, 1.0, 0) OVER( ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW )
Equivalent Lumipy method: x.finance.returns_to_prices(400, 1, False)

Description

Semi-deviation is the standard deviation of values in a returns series below the mean return value.

Inputs

x (numeric): column of return values to compute the semi-deviation over.

Luminesce SQL syntax: semi_deviation([x])
Equivalent Lumipy method: x.finance.semi_deviation()

Description

The Sharpe ratio is calculated as the mean excess return over the risk-free rate divided by the standard deviation of the excess return.

Inputs

x (numeric): column of return values to compute the Sharpe ratio for.
r (numeric): the risk-free rate of return. This can be a constant value (float) or a column.

Luminesce SQL syntax: mean_stdev_ratio([x] - [r])
Equivalent Lumipy method: x.finance.sharpe_ratio(r)
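
For example, a minimal sketch with a constant risk-free rate and inline illustrative returns:

@returns = select 0.010 as FundReturn
union all select -0.002
union all select 0.005;
select mean_stdev_ratio([FundReturn] - 0.0001) as SharpeRatio from @returns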

Description

The tracking error is the standard deviation of the difference between an index's return series and a benchmark.

Inputs

x (numeric): the index return series.
y (numeric): the benchmark return series.

Luminesce SQL syntax: window_stdev([x] - [y])
Equivalent Lumipy method: x.finance.tracking_error(y)

Description

The cumulative product is similar to cumulative sum except it multiplies rather than adds.

Inputs

x (numeric): A column of numbers to multiply together.

Luminesce SQL syntax: cumeprod([x])
Equivalent Lumipy method: w.prod(table.x)
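
As a sketch, the cumulative product can turn per-period growth factors into a running index. This assumes cumeprod accepts the same OVER windowing as the other window functions above; the data is purely illustrative:

@factors = select 1.01 as GrowthFactor
union all select 0.99
union all select 1.02;
select cumeprod([GrowthFactor]) OVER( ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) as GrowthIndex from @factors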