def time_millis():
return int(round(time.time() * 1000))
Returns current milliseconds since epoch
f2801:m0
def create_datapoint(value, timestamp=None, **tags):
if timestamp is None:
    timestamp = time_millis()
if type(timestamp) is datetime:
    timestamp = datetime_to_time_millis(timestamp)
item = {'<STR_LIT>': timestamp,
        'value': value}
if tags is not None:
    item['<STR_LIT>'] = tags
return ...
Creates a single datapoint dict with a value, timestamp and tags. :param value: Value of the datapoint. Type depends on the id's MetricType :param timestamp: Optional timestamp of the datapoint. Uses client current time if not set. Millisecond accuracy. Can be datetime instance also. :param tags: Optional datapoint ta...
f2801:m3
def create_metric(metric_type, metric_id, data):
if not isinstance(data, list):
    data = [data]
return {'type': metric_type, 'id': metric_id, 'data': data}
Create Hawkular-Metrics' submittable structure. :param metric_type: MetricType to be matched (required) :param metric_id: Exact string matching metric id :param data: A datapoint or a list of datapoints created with create_datapoint(value, timestamp, tags)
f2801:m4
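The datapoint and metric structures built by the two helpers above can be sketched as follows. This is a minimal stand-alone sketch: the field names 'timestamp' and 'tags' are assumptions, since the actual string literals are elided in the record above.

```python
import time

def time_millis():
    # Current milliseconds since epoch, mirroring the time_millis record above.
    return int(round(time.time() * 1000))

def create_datapoint(value, timestamp=None, **tags):
    # 'timestamp' and 'tags' key names are assumed; the source elides them.
    if timestamp is None:
        timestamp = time_millis()
    item = {'timestamp': timestamp, 'value': value}
    if tags:
        item['tags'] = tags
    return item

def create_metric(metric_type, metric_id, data):
    # Wrap a single datapoint (or a list of them) into a submittable structure.
    if not isinstance(data, list):
        data = [data]
    return {'type': metric_type, 'id': metric_id, 'data': data}

metric = create_metric('gauge', 'cpu.load',
                       create_datapoint(0.57, timestamp=1500000000000, env='qa'))
```

The resulting dict has the shape `{'type': ..., 'id': ..., 'data': [{...}]}` that `put` later groups by metric type.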
def create_percentiles_filter(*percentiles):
return ','.join("%s" % p for p in percentiles)
Create percentiles filter from a list of float64 percentile values
f2801:m5
def create_tags_filter(**tags):
return HawkularMetricsClient._transform_tags(**tags)
Transform a set of parameters to a tag query language filter
f2801:m6
def put(self, data):
if not isinstance(data, list):
    data = [data]
r = collections.defaultdict(list)
for d in data:
    metric_type = d.pop('type', None)
    if metric_type is None:
        raise HawkularError('<STR_LIT>')
    r[metric_type].append(d)
for l in r:
    ...
Send multiple different metric_ids to the server in a single batch. Metrics can be a mixture of types. :param data: A dict or a list of dicts created with create_metric(metric_type, metric_id, datapoints)
f2801:c2:m11
def push(self, metric_type, metric_id, value, timestamp=None):
if type(timestamp) is datetime:
    timestamp = datetime_to_time_millis(timestamp)
item = create_metric(metric_type, metric_id, create_datapoint(value, timestamp))
self.put(item)
Pushes a single metric_id, datapoint combination to the server. This method is an assistant method for the put method by removing the need to create data structures first. :param metric_type: MetricType to be matched (required) :param metric_id: Exact string matching metric id :param value: Datapoint value (depending...
f2801:c2:m12
def query_metric(self, metric_type, metric_id, start=None, end=None, **query_options):
if start is not None:
    if type(start) is datetime:
        query_options['start'] = datetime_to_time_millis(start)
    else:
        query_options['start'] = start
if end is not None:
    if type(end) is datetime:
        query_options['<STR...
Query for metrics datapoints from the server. :param metric_type: MetricType to be matched (required) :param metric_id: Exact string matching metric id :param start: Milliseconds since epoch or datetime instance :param end: Milliseconds since epoch or datetime instance :param query_options: For possible query_options,...
f2801:c2:m13
def query_metric_stats(self, metric_type, metric_id=None, start=None, end=None, bucketDuration=None, **query_options):
if start is not None:
    if type(start) is datetime:
        query_options['start'] = datetime_to_time_millis(start)
    else:
        query_options['start'] = start
if end is not None:
    if type(end) is datetime:
        query_options['<STR...
Query for metric aggregates from the server. This is called buckets in the Hawkular-Metrics documentation. :param metric_type: MetricType to be matched (required) :param metric_id: Exact string matching metric id or None for tags matching only :param start: Milliseconds since epoch or datetime instance :param end: Mil...
f2801:c2:m14
def query_metric_definition(self, metric_type, metric_id):
return self._get(self._get_metrics_single_url(metric_type, metric_id))
Query definition of a single metric id. :param metric_type: MetricType to be matched (required) :param metric_id: Exact string matching metric id
f2801:c2:m15
def query_metric_definitions(self, metric_type=None, id_filter=None, **tags):
params = {}
if id_filter is not None:
    params['id'] = id_filter
if metric_type is not None:
    params['type'] = MetricType.short(metric_type)
if len(tags) > 0:
    params['<STR_LIT>'] = self._transform_tags(**tags)
return ...
Query available metric definitions. :param metric_type: A MetricType to be queried. If left to None, matches all the MetricTypes :param id_filter: Filter the id with regexp if tag filtering is used, otherwise a list of exact metric ids :param tags: A dict of tag key/value pairs. Uses Hawkular-Metrics tag query languag...
f2801:c2:m16
def query_tag_values(self, metric_type=None, **tags):
tagql = self._transform_tags(**tags)
return self._get(self._get_metrics_tags_url(self._get_url(metric_type)) + '<STR_LIT>'.format(tagql))
Query for possible tag values. :param metric_type: A MetricType to be queried. If left to None, matches all the MetricTypes :param tags: A dict of tag key/value pairs. Uses Hawkular-Metrics tag query language for syntax
f2801:c2:m17
def create_metric_definition(self, metric_type, metric_id, **tags):
item = {'id': metric_id}
if len(tags) > 0:
    data_retention = tags.pop('<STR_LIT>', None)
    if data_retention is not None:
        item['<STR_LIT>'] = data_retention
    if len(tags) > 0:
        item['<STR_LIT>'] = tags
json_data = jso...
Create metric definition with custom definition. **tags should be a set of tags, such as units, env .. :param metric_type: MetricType of the new definition :param metric_id: metric_id is the string index of the created metric :param tags: Key/Value tag values of the new metric
f2801:c2:m18
def query_metric_tags(self, metric_type, metric_id):
definition = self._get(self._get_metrics_tags_url(self._get_metrics_single_url(metric_type, metric_id)))
return definition
Returns a list of tags in the metric definition. :param metric_type: MetricType to be matched (required) :param metric_id: Exact string matching metric id
f2801:c2:m19
def update_metric_tags(self, metric_type, metric_id, **tags):
self._put(self._get_metrics_tags_url(self._get_metrics_single_url(metric_type, metric_id)), tags, parse_json=False)
Replace the metric_id's tags with given **tags :param metric_type: MetricType to be matched (required) :param metric_id: Exact string matching metric id :param tags: Updated key/value tag values of the metric
f2801:c2:m20
def delete_metric_tags(self, metric_type, metric_id, **deleted_tags):
tags = self._transform_tags(**deleted_tags)
tags_url = self._get_metrics_tags_url(self._get_metrics_single_url(metric_type, metric_id)) + '<STR_LIT>'.format(tags)
self._delete(tags_url)
Delete one or more tags from the metric definition. :param metric_type: MetricType to be matched (required) :param metric_id: Exact string matching metric id :param deleted_tags: List of deleted tag names. Values can be set to anything
f2801:c2:m21
def query_tenants(self):
return self._get(self._get_tenants_url())
Query available tenants and their information.
f2801:c2:m22
def create_tenant(self, tenant_id, retentions=None):
item = {'id': tenant_id}
if retentions is not None:
    item['<STR_LIT>'] = retentions
self._post(self._get_tenants_url(), json.dumps(item, indent=2))
Create a tenant. Currently nothing else can be configured (to be fixed once the master version of Hawkular-Metrics has a finished implementation). :param tenant_id: Id of the tenant to create :param retentions: A set of retention settings, see Hawkular-Metrics documentation for more info
f2801:c2:m23
def delete_tenant(self, tenant_id):
self._delete(self._get_single_id_url(self._get_tenants_url(), tenant_id))
Asynchronously deletes a tenant and all the data associated with the tenant. :param tenant_id: Tenant id to be sent for deletion process
f2801:c2:m24
def __init__(self,
             tenant_id,
             host='localhost',
             port=<NUM_LIT>,
             path=None,
             scheme='http',
             cafile=None,
             context=None,
             token=None,
             username=None,
             password=None,
             auto_set_legacy_api=True,
             authtoken=None):
self.tenant_id = tenant_id
self.host = host
self.port = port
self.path = path
self.cafile = cafile
self.scheme = scheme
self.context = context
self.token = token
self.username = username
self.password = password
self.legacy_api = False
self.authtoken = authtoken
self._set...
A new instance of HawkularClient is created with the following defaults: host = localhost port = 8080 path = hawkular-metrics scheme = http cafile = None The url that is called by the client is: {scheme}://{host}:{port}/{path}/
f2802:c6:m0
def get(self, tags=[], trigger_ids=[]):
params = {}
if len(tags) > 0:
    params['<STR_LIT>'] = ','.join(tags)
if len(trigger_ids) > 0:
    params['<STR_LIT>'] = ','.join(trigger_ids)
url = self._service_url('<STR_LIT>', params=params)
triggers_dict = self._get(...
Get triggers with optional filtering. Querying without parameters returns all the trigger definitions. :param tags: Fetch triggers with matching tags only. Use * to match all values. :param trigger_ids: List of triggerIds to fetch
f2803:c13:m2
def create(self, trigger):
data = self._serialize_object(trigger)
if isinstance(trigger, FullTrigger):
    returned_dict = self._post(self._service_url(['<STR_LIT>', '<STR_LIT>']), data)
    return FullTrigger(returned_dict)
else:
    returned_dict = self._post(self._service_url('<STR_LIT>'), data)
    return Trigger(returned_dict)
Create a new trigger. :param trigger: FullTrigger or Trigger class to be created :return: The created trigger
f2803:c13:m3
def update(self, trigger_id, full_trigger):
data = self._serialize_object(full_trigger)
rdict = self._put(self._service_url(['<STR_LIT>', '<STR_LIT>', trigger_id]), data)
return FullTrigger(rdict)
Update an existing full trigger. :param full_trigger: FullTrigger with conditions, dampenings and triggers :type full_trigger: FullTrigger :return: Updated FullTrigger definition
f2803:c13:m4
def delete(self, trigger_id):
self._delete(self._service_url(['<STR_LIT>', trigger_id]))
Delete an existing standard or group member trigger. This can not be used to delete a group trigger definition. :param trigger_id: Trigger definition id to be deleted.
f2803:c13:m5
def single(self, trigger_id, full=False):
if full:
    returned_dict = self._get(self._service_url(['<STR_LIT>', '<STR_LIT>', trigger_id]))
    return FullTrigger(returned_dict)
else:
    returned_dict = self._get(self._service_url(['<STR_LIT>', trigger_id]))
    return Trigger(returned_dict)
Get an existing (full) trigger definition. :param trigger_id: Trigger definition id to be retrieved. :param full: Fetch the full definition, default is False. :return: Trigger or FullTrigger depending on the full parameter value.
f2803:c13:m6
def create_group(self, trigger):
data = self._serialize_object(trigger)
return Trigger(self._post(self._service_url(['<STR_LIT>', '<STR_LIT>']), data))
Create a new group trigger. :param trigger: Group member trigger to be created :return: The created group Trigger
f2803:c13:m7
def group_members(self, group_id, include_orphans=False):
params = {'<STR_LIT>': str(include_orphans).lower()}
url = self._service_url(['<STR_LIT>', '<STR_LIT>', group_id, '<STR_LIT>'], params=params)
return Trigger.list_to_object_list(self._get(url))
Find all group member trigger definitions. :param group_id: group trigger id :param include_orphans: If True, include orphan members :return: list of associated group members as trigger objects
f2803:c13:m8
def update_group(self, group_id, trigger):
data = self._serialize_object(trigger)
self._put(self._service_url(['<STR_LIT>', '<STR_LIT>', group_id]), data, parse_json=False)
Update an existing group trigger definition and its member definitions. :param group_id: Group trigger id to be updated :param trigger: Trigger object, the group trigger to be updated
f2803:c13:m9
def delete_group(self, group_id, keep_non_orphans=False, keep_orphans=False):
params = {'<STR_LIT>': str(keep_non_orphans).lower(), '<STR_LIT>': str(keep_orphans).lower()}
self._delete(self._service_url(['<STR_LIT>', '<STR_LIT>', group_id], params=params))
Delete a group trigger :param group_id: ID of the group trigger to delete :param keep_non_orphans: if True converts the non-orphan member triggers to standard triggers :param keep_orphans: if True converts the orphan member triggers to standard triggers
f2803:c13:m10
def create_group_member(self, member):
data = self._serialize_object(member)
return Trigger(self._post(self._service_url(['<STR_LIT>', '<STR_LIT>', '<STR_LIT>']), data))
Create a new member trigger for a parent trigger. :param member: Group member trigger to be created :type member: GroupMemberInfo :return: A member Trigger object
f2803:c13:m11
def set_group_conditions(self, group_id, conditions, trigger_mode=None):
data = self._serialize_object(conditions)
if trigger_mode is not None:
    url = self._service_url(['<STR_LIT>', '<STR_LIT>', group_id, '<STR_LIT>', trigger_mode])
else:
    url = self._service_url(['<STR_LIT>', '<STR_LIT>', group_id, '<STR_LIT>'])
response = self._put(url, data)
...
Set the group conditions. This replaces any existing conditions on the group and member conditions for all trigger modes. :param group_id: Group to be updated :param conditions: New conditions to replace old ones :param trigger_mode: Optional TriggerMode used :type conditions: GroupConditionsInfo :type trigger_mode: ...
f2803:c13:m12
def set_conditions(self, trigger_id, conditions, trigger_mode=None):
data = self._serialize_object(conditions)
if trigger_mode is not None:
    url = self._service_url(['<STR_LIT>', trigger_id, '<STR_LIT>', trigger_mode])
else:
    url = self._service_url(['<STR_LIT>', trigger_id, '<STR_LIT>'])
response = self._put(url, data)
return Condition.list_to_object_list(response)
Set the conditions for the trigger. This sets the conditions for all trigger modes, replacing existing conditions for all trigger modes. :param trigger_id: The relevant Trigger definition id :param trigger_mode: Optional Trigger mode :param conditions: Collection of Conditions to set. :type trigger_mode: TriggerMode ...
f2803:c13:m13
def conditions(self, trigger_id):
response = self._get(self._service_url(['<STR_LIT>', trigger_id, '<STR_LIT>']))
return Condition.list_to_object_list(response)
Get all conditions for a specific trigger. :param trigger_id: Trigger definition id to be retrieved :return: list of condition objects
f2803:c13:m14
def dampenings(self, trigger_id, trigger_mode=None):
if trigger_mode is not None:
    url = self._service_url(['<STR_LIT>', trigger_id, '<STR_LIT>', '<STR_LIT>', trigger_mode])
else:
    url = self._service_url(['<STR_LIT>', trigger_id, '<STR_LIT>'])
data = self._get(url)
return Dampening.list_to_object_list(data)
Get all Dampenings for a Trigger (1 Dampening per mode). :param trigger_id: Trigger definition id to be retrieved. :param trigger_mode: Optional TriggerMode which is only fetched :type trigger_mode: TriggerMode :return: List of Dampening objects
f2803:c13:m15
def create_dampening(self, trigger_id, dampening):
data = self._serialize_object(dampening)
url = self._service_url(['<STR_LIT>', trigger_id, '<STR_LIT>'])
return Dampening(self._post(url, data))
Create a new dampening. :param trigger_id: TriggerId definition attached to the dampening :param dampening: Dampening definition to be created. :type dampening: Dampening :return: Created dampening
f2803:c13:m16
def delete_dampening(self, trigger_id, dampening_id):
self._delete(self._service_url(['<STR_LIT>', trigger_id, '<STR_LIT>', dampening_id]))
Delete an existing dampening definition. :param trigger_id: Trigger definition id for deletion. :param dampening_id: Dampening definition id to be deleted.
f2803:c13:m17
def update_dampening(self, trigger_id, dampening_id, dampening):
data = self._serialize_object(dampening)
url = self._service_url(['<STR_LIT>', trigger_id, '<STR_LIT>', dampening_id])
return Dampening(self._put(url, data))
Update an existing dampening definition. Note that the trigger mode can not be changed using this method. :param trigger_id: Trigger definition id targeted for update. :param dampening_id: Dampening definition id to be updated. :param dampening: Updated Dampening definition. :return: Updated Dampening
f2803:c13:m18
def create_group_dampening(self, group_id, dampening):
data = self._serialize_object(dampening)
url = self._service_url(['<STR_LIT>', '<STR_LIT>', group_id, '<STR_LIT>'])
return Dampening(self._post(url, data))
Create a new group dampening :param group_id: Group Trigger id attached to dampening :param dampening: Dampening definition to be created. :type dampening: Dampening :return: Group Dampening created
f2803:c13:m19
def update_group_dampening(self, group_id, dampening_id, dampening):
data = self._serialize_object(dampening)
url = self._service_url(['<STR_LIT>', '<STR_LIT>', group_id, '<STR_LIT>', dampening_id])
return Dampening(self._put(url, data))
Update an existing group dampening. :param group_id: Group Trigger id attached to dampening :param dampening_id: id of the dampening to be updated :return: Updated group Dampening
f2803:c13:m20
def delete_group_dampening(self, group_id, dampening_id):
self._delete(self._service_url(['<STR_LIT>', '<STR_LIT>', group_id, '<STR_LIT>', dampening_id]))
Delete an existing group dampening :param group_id: Group Trigger id to be retrieved :param dampening_id: id of the Dampening to be deleted
f2803:c13:m21
def set_group_member_orphan(self, member_id):
self._put(self._service_url(['<STR_LIT>', '<STR_LIT>', '<STR_LIT>', member_id, '<STR_LIT>']), data=None, parse_json=False)
Make a non-orphan member trigger into an orphan. :param member_id: Member Trigger id to be made an orphan.
f2803:c13:m22
def set_group_member_unorphan(self, member_id, unorphan_info):
data = self._serialize_object(unorphan_info)
url = self._service_url(['<STR_LIT>', '<STR_LIT>', '<STR_LIT>', member_id, '<STR_LIT>'])
return Trigger(self._put(url, data))
Make an orphan member trigger into a group trigger. :param member_id: Orphan Member Trigger id to be assigned into a group trigger :param unorphan_info: Only context and dataIdMap are used when changing back to a non-orphan. :type unorphan_info: UnorphanMemberInfo :return: Trigger for the group
f2803:c13:m23
def enable(self, trigger_ids=[]):
trigger_ids = ','.join(trigger_ids)
url = self._service_url(['<STR_LIT>', '<STR_LIT>'], params={'<STR_LIT>': trigger_ids, '<STR_LIT>': 'true'})
self._put(url, data=None, parse_json=False)
Enable triggers. :param trigger_ids: List of trigger definition ids to enable
f2803:c13:m24
def disable(self, trigger_ids=[]):
trigger_ids = ','.join(trigger_ids)
url = self._service_url(['<STR_LIT>', '<STR_LIT>'], params={'<STR_LIT>': trigger_ids, '<STR_LIT>': 'false'})
self._put(url, data=None, parse_json=False)
Disable triggers. :param trigger_ids: List of trigger definition ids to disable
f2803:c13:m25
def enable_group(self, trigger_ids=[]):
trigger_ids = ','.join(trigger_ids)
url = self._service_url(['<STR_LIT>', '<STR_LIT>', '<STR_LIT>'], params={'<STR_LIT>': trigger_ids, '<STR_LIT>': 'true'})
self._put(url, data=None, parse_json=False)
Enable group triggers. :param trigger_ids: List of group trigger definition ids to enable
f2803:c13:m26
def disable_group(self, trigger_ids=[]):
trigger_ids = ','.join(trigger_ids)
url = self._service_url(['<STR_LIT>', '<STR_LIT>', '<STR_LIT>'], params={'<STR_LIT>': trigger_ids, '<STR_LIT>': 'false'})
self._put(url, data=None, parse_json=False)
Disable group triggers. :param trigger_ids: List of group trigger definition ids to disable
f2803:c13:m27
def isup(self):
return self.status == '<STR_LIT>'
Returns whether the alerting service is ready to accept requests. :return: bool True if available
f2804:c0:m0
def isdistributed(self):
return self.distributed == 'true'
Is the Alerting Service running in distributed mode or standalone. :return: bool True if distributed
f2804:c0:m1
def __init__(self, **opts):
prop_defaults = {
    "<STR_LIT>": '<STR_LIT>',
    "host": 'localhost',
    "port": <NUM_LIT>,
    "<STR_LIT>": 'http',
    "path": None,
    "<STR_LIT>": None,
    "<STR_LIT>": None,
    "<STR_LIT>": None,
    "username": None,
    "password": None...
Available parameters: tenant_id, host='localhost', port=8080, path=None, scheme='http', cafile=None, context=None, token=None, username=None, password=None, auto_set_legacy_api=True, authtoken=None
f2804:c1:m0
def status(self):
orig_dict = self._get(self._service_url('status'))
orig_dict['<STR_LIT>'] = orig_dict.pop('<STR_LIT>')
orig_dict['<STR_LIT>'] = orig_dict.pop('<STR_LIT>')
return Status(orig_dict)
Get the status of Alerting Service :return: Status object
f2804:c1:m1
@utils.memoize
def get(tickers, provider=None, common_dates=True, forward_fill=False,
        clean_tickers=True, column_names=None, ticker_field_sep=':',
        mrefresh=False, existing=None, **kwargs):
if provider is None:
    provider = DEFAULT_PROVIDER
tickers = utils.parse_arg(tickers)
data = {}
for ticker in tickers:
    t = ticker
    f = None
    bits = ticker.split(ticker_field_sep, 1)
    if len(bits) == 2:
        t = bits[0]
        f = bits[1]
Helper function for retrieving data as a DataFrame. Args: * tickers (list, string, csv string): Tickers to download. * provider (function): Provider to use for downloading data. By default it will be ffn.DEFAULT_PROVIDER if not provided. * common_dates (bool): Keep common dates only? Drop na's. ...
f2808:m0
@utils.memoize
def web(ticker, field=None, start=None, end=None,
        mrefresh=False, source='<STR_LIT>'):
if source == '<STR_LIT>' and field is None:
    field = '<STR_LIT>'
tmp = _download_web(ticker, data_source=source,
                    start=start, end=end)
if tmp is None:
    raise ValueError('<STR_LIT>' % (ticker, field))
if field:
    return tmp[field]
else:
    ...
Data provider wrapper around pandas.io.data provider. Provides memoization.
f2808:m1
@utils.memoize
def _download_web(name, **kwargs):
return pdata.DataReader(name, **kwargs)
Thin wrapper to enable memoization
f2808:m2
@utils.memoize
def csv(ticker, path='<STR_LIT>', field='<STR_LIT>', mrefresh=False, **kwargs):
if '<STR_LIT>' not in kwargs:
    kwargs['<STR_LIT>'] = 0
if '<STR_LIT>' not in kwargs:
    kwargs['<STR_LIT>'] = True
df = pd.read_csv(path, **kwargs)
tf = ticker
if field != '<STR_LIT>' and field is not None:
    tf = '<STR_LIT>' % (tf, field)...
Data provider wrapper around pandas' read_csv. Provides memoization.
f2808:m4
def to_returns(prices):
return prices / prices.shift(1) - 1
Calculates the simple arithmetic returns of a price series. Formula is: (t1 / t0) - 1 Args: * prices: Expects a price series
f2810:m0
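The arithmetic in to_returns can be sketched without pandas. This is a pure-Python stand-in for illustration only, not the library's vectorized implementation; it drops the leading NaN that the pandas version would produce.

```python
def to_returns(prices):
    # Simple arithmetic returns: (p_t / p_{t-1}) - 1 for each consecutive pair.
    return [p1 / p0 - 1 for p0, p1 in zip(prices[:-1], prices[1:])]

# Two consecutive 10% moves:
rets = [round(r, 10) for r in to_returns([100.0, 110.0, 121.0])]  # → [0.1, 0.1]
```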
def to_log_returns(prices):
return np.log(prices / prices.shift(1))
Calculates the log returns of a price series. Formula is: ln(p1/p0) Args: * prices: Expects a price series
f2810:m1
def to_price_index(returns, start=<NUM_LIT:100>):
return (returns.replace(to_replace=np.nan, value=0) + 1).cumprod() * start
Returns a price index given a series of returns. Args: * returns: Expects a return series * start (number): Starting level Assumes arithmetic returns. Formula is: cumprod (1+r)
f2810:m2
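The cumprod(1 + r) formula above can be traced by hand. A pure-Python sketch (an assumption-level stand-in for the pandas version; None plays the role of NaN, which is treated as a zero return):

```python
def to_price_index(returns, start=100):
    # Cumulative product of (1 + r), scaled to the starting level.
    level = start
    index = []
    for r in returns:
        level = level * (1 + (r if r is not None else 0))
        index.append(level)
    return index

# First period has no return (None → 0), then +10%, then -5%:
idx = to_price_index([None, 0.1, -0.05])
```

This is the inverse of taking returns: feeding to_returns output back through to_price_index recovers the rebased price path.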
def rebase(prices, value=<NUM_LIT:100>):
return prices / prices.iloc[0] * value
Rebase all series to a given initial value. This makes comparing/plotting different series together easier. Args: * prices: Expects a price series * value (number): starting value for all series.
f2810:m3
def calc_perf_stats(prices):
return PerformanceStats(prices)
Calculates the performance statistics given an object. The object should be a Series of prices. A PerformanceStats object will be returned containing all the stats. Args: * prices (Series): Series of prices
f2810:m4
def calc_stats(prices):
if isinstance(prices, pd.Series):
    return PerformanceStats(prices)
elif isinstance(prices, pd.DataFrame):
    return GroupStats(*[prices[x] for x in prices.columns])
else:
    raise NotImplementedError('<STR_LIT>')
Calculates performance stats of a given object. If object is Series, a PerformanceStats object is returned. If object is DataFrame, a GroupStats object is returned. Args: * prices (Series, DataFrame): Set of prices
f2810:m5
def to_drawdown_series(prices):
drawdown = prices.copy()
drawdown = drawdown.fillna(method='<STR_LIT>')
drawdown[np.isnan(drawdown)] = -np.Inf
roll_max = np.maximum.accumulate(drawdown)
drawdown = drawdown / roll_max - 1.
return drawdown
Calculates the `drawdown <https://www.investopedia.com/terms/d/drawdown.asp>`_ series. This returns a series representing a drawdown. When the price is at all time highs, the drawdown is 0. However, when prices are below high water marks, the drawdown series = current / hwm - 1 The max drawdown can be obtained by sim...
f2810:m6
def calc_max_drawdown(prices):
return (prices / prices.expanding(min_periods=1).max()).min() - 1
Calculates the max drawdown of a price series. If you want the actual drawdown series, please use to_drawdown_series.
f2810:m7
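The expanding-max formula above is just "price over running peak, minus one, take the minimum". A loop-based sketch (illustrative stand-in, not the library's pandas implementation):

```python
def max_drawdown(prices):
    # Track the running peak; drawdown at each point is p / peak - 1.
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)
        mdd = min(mdd, p / peak - 1)
    return mdd

# Peak of 120, trough of 60 → a 50% drawdown:
dd = max_drawdown([100, 120, 60, 90])  # → -0.5
```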
def drawdown_details(drawdown, index_type=pd.DatetimeIndex):
is_zero = drawdown == 0
start = ~is_zero & is_zero.shift(1)
start = list(start[start == True].index)
end = is_zero & (~is_zero).shift(1)
end = list(end[end == True].index)
if len(start) == 0:
    return None
if len(end) == 0:
    ...
Returns a data frame with start, end, days (duration) and drawdown for each drawdown in a drawdown series. .. note:: days are actual calendar days, not trading days Args: * drawdown (pandas.Series): A drawdown Series (can be obtained w/ drawdown(prices). Returns: * pandas.DataFrame -- A data fram...
f2810:m8
def calc_cagr(prices):
start = prices.index[0]
end = prices.index[-1]
return (prices.iloc[-1] / prices.iloc[0]) ** (1 / year_frac(start, end)) - 1
Calculates the `CAGR (compound annual growth rate) <https://www.investopedia.com/terms/c/cagr.asp>`_ for a given price series. Args: * prices (pandas.Series): A Series of prices. Returns: * float -- cagr.
f2810:m9
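The CAGR formula reduces to (last / first) ** (1 / years) - 1. A minimal sketch with the year count passed directly (the library derives it from the index via year_frac):

```python
def cagr(first, last, years):
    # Compound annual growth rate over a known number of years.
    return (last / first) ** (1 / years) - 1

# Doubling over two years compounds to ~41.4% per year (2**0.5 - 1):
growth = cagr(100.0, 200.0, 2.0)
```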
def calc_risk_return_ratio(returns):
return calc_sharpe(returns)
Calculates the return / risk ratio. Basically the `Sharpe ratio <https://www.investopedia.com/terms/s/sharperatio.asp>`_ without factoring in the `risk-free rate <https://www.investopedia.com/terms/r/risk-freerate.asp>`_.
f2810:m10
def calc_sharpe(returns, rf=<NUM_LIT:0.>, nperiods=None, annualize=True):
if type(rf) is float and rf != 0 and nperiods is None:
    raise Exception('<STR_LIT>')
er = returns.to_excess_returns(rf, nperiods=nperiods)
std = np.std(returns, ddof=1)
res = np.divide(er.mean(), std)
if annualize:
    if nperiods is None:
        nperi...
Calculates the `Sharpe ratio <https://www.investopedia.com/terms/s/sharperatio.asp>`_ (see `Sharpe vs. Sortino <https://www.investopedia.com/ask/answers/010815/what-difference-between-sharpe-ratio-and-sortino-ratio.asp>`_). If rf is non-zero and a float, you must specify nperiods. In this case, rf is assumed to be exp...
f2810:m11
def calc_information_ratio(returns, benchmark_returns):
diff_rets = returns - benchmark_returns
diff_std = np.std(diff_rets, ddof=1)
if np.isnan(diff_std) or diff_std == 0:
    return 0.0
return np.divide(diff_rets.mean(), diff_std)
Calculates the `Information ratio <https://www.investopedia.com/terms/i/informationratio.asp>`_ (or `from Wikipedia <http://en.wikipedia.org/wiki/Information_ratio>`_).
f2810:m12
def calc_prob_mom(returns, other_returns):
return t.cdf(returns.calc_information_ratio(other_returns),
             len(returns) - 1)
`Probabilistic momentum <http://cssanalytics.wordpress.com/2014/01/28/are-simple-momentum-strategies-too-dumb-introducing-probabilistic-momentum/>`_ (see `momentum investing <https://www.investopedia.com/terms/m/momentum_investing.asp>`_) Basically the "probability or confidence that one asset is going to outperform t...
f2810:m13
def calc_total_return(prices):
return (prices.iloc[-1] / prices.iloc[0]) - 1
Calculates the total return of a series. last / first - 1
f2810:m14
def year_frac(start, end):
if start > end:
    raise ValueError('<STR_LIT>')
return (end - start).total_seconds() / (<NUM_LIT>)
Similar to excel's yearfrac function. Returns a year fraction between two dates (i.e. 1.53 years). Approximation using the average number of seconds in a year. Args: * start (datetime): start date * end (datetime): end date
f2810:m15
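The elided constant in year_frac is the average number of seconds in a year; a common choice is 365.25 days, which is an assumption here since the literal is not in the record. A self-contained sketch:

```python
from datetime import datetime

# Assumed value of the elided constant: average seconds per year (365.25 days).
AVG_SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60

def year_frac(start, end):
    # Fractional years between two dates, like Excel's yearfrac approximation.
    if start > end:
        raise ValueError('start cannot be after end')
    return (end - start).total_seconds() / AVG_SECONDS_PER_YEAR

# 547 calendar days (leap 2020 included) is roughly 1.5 years:
yf = year_frac(datetime(2020, 1, 1), datetime(2021, 7, 1))
```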
def merge(*series):
dfs = []
for s in series:
    if isinstance(s, pd.DataFrame):
        dfs.append(s)
    elif isinstance(s, pd.Series):
        tmpdf = pd.DataFrame({s.name: s})
        dfs.append(tmpdf)
    else:
        raise NotImplementedError('<STR_LIT>')
return pd.concat(dfs...
Merge Series and/or DataFrames together. Returns a DataFrame.
f2810:m16
def drop_duplicate_cols(df):
names = set(df.columns)
for n in names:
    if len(df[n].shape) > 1:
        sub = df[n]
        sub.columns = ['<STR_LIT>' % (n, x) for x in range(sub.shape[1])]
        keep = sub.count().idxmax()
        del df[n]
        df[n] = sub[keep]
return df
Removes duplicate columns from a dataframe and keeps column w/ longest history
f2810:m17
def to_monthly(series, method='<STR_LIT>', how='<STR_LIT:end>'):
return series.asfreq_actual('M', method=method, how=how)
Convenience method that wraps asfreq_actual with 'M' param (method='ffill', how='end').
f2810:m18
def asfreq_actual(series, freq, method='<STR_LIT>', how='<STR_LIT:end>', normalize=False):
orig = series
is_series = False
if isinstance(series, pd.Series):
    is_series = True
    name = series.name if series.name else 'data'
    orig = pd.DataFrame({name: series})
t = pd.concat([orig, pd.DataFrame({'<STR_LIT>': orig.index.values},
                             index=orig.index.values)], axis=<...
Similar to pandas' asfreq but keeps the actual dates. For example, if last data point in Jan is on the 29th, that date will be used instead of the 31st.
f2810:m19
def calc_inv_vol_weights(returns):
<EOL>vol = np.divide(<NUM_LIT:1.>, np.std(returns, ddof=<NUM_LIT:1>))<EOL>vol[np.isinf(vol)] = np.NaN<EOL>volsum = vol.sum()<EOL>return np.divide(vol, volsum)<EOL>
Calculates weights proportional to inverse volatility of each column. Returns weights that are inversely proportional to the column's volatility resulting in a set of portfolio weights where each position has the same level of volatility. Note that assets with returns all equal to NaN or 0 are excluded from the port...
f2810:m20
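The inverse-volatility weighting above is short enough to sketch in full: take 1/std per column, drop infinities (zero-vol columns), and normalize. The sample data below is illustrative.

```python
import numpy as np
import pandas as pd

def calc_inv_vol_weights(returns):
    """Weights proportional to 1 / volatility of each return column."""
    vol = 1.0 / returns.std(ddof=1)
    vol[np.isinf(vol)] = np.nan   # zero-volatility columns are excluded
    return vol / vol.sum()

rng = np.random.default_rng(0)
rets = pd.DataFrame({
    'low_vol': rng.normal(0, 0.01, 100),
    'high_vol': rng.normal(0, 0.04, 100),
})
w = calc_inv_vol_weights(rets)   # low_vol gets the larger weight
```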
def calc_mean_var_weights(returns, weight_bounds=(<NUM_LIT:0.>, <NUM_LIT:1.>),<EOL>rf=<NUM_LIT:0.>,<EOL>covar_method='<STR_LIT>',<EOL>options=None):
def fitness(weights, exp_rets, covar, rf):<EOL><INDENT>mean = sum(exp_rets * weights)<EOL>var = np.dot(np.dot(weights, covar), weights)<EOL>util = (mean - rf) / np.sqrt(var)<EOL>return -util<EOL><DEDENT>n = len(returns.columns)<EOL>exp_rets = returns.mean()<EOL>if covar_method == '<STR_LIT>':<EOL><INDENT>covar = sklear...
Calculates the mean-variance weights given a DataFrame of returns. Args: * returns (DataFrame): Returns for multiple securities. * weight_bounds ((low, high)): Weight limits for optimization. * rf (float): `Risk-free rate <https://www.investopedia.com/terms/r/risk-freerate.asp>`_ used in utility calculation...
f2810:m21
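The mean-variance optimization above maximizes the utility (mean - rf) / vol subject to a budget constraint and per-asset bounds. A sketch with SciPy's SLSQP solver, using only the sample covariance (the Ledoit-Wolf shrinkage option in the original is omitted here); the random test data is illustrative.

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize

def calc_mean_var_weights(returns, weight_bounds=(0.0, 1.0), rf=0.0):
    """Maximize Sharpe-style utility (mean - rf) / vol via SLSQP."""
    exp_rets = returns.mean().values
    covar = np.cov(returns.values, rowvar=False, ddof=1)  # sample covariance only
    n = len(exp_rets)

    def neg_utility(w):
        mean = w @ exp_rets
        vol = np.sqrt(w @ covar @ w)
        return -(mean - rf) / vol

    res = minimize(neg_utility, np.full(n, 1.0 / n),
                   method='SLSQP',
                   bounds=[weight_bounds] * n,
                   constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1.0})
    if not res.success:
        raise RuntimeError(res.message)
    return pd.Series(res.x, index=returns.columns)

rng = np.random.default_rng(0)
rets = pd.DataFrame(rng.normal(0.001, 0.01, size=(250, 3)), columns=['a', 'b', 'c'])
w = calc_mean_var_weights(rets)
```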
def _erc_weights_slsqp(<EOL>x0,<EOL>cov,<EOL>b,<EOL>maximum_iterations,<EOL>tolerance<EOL>):
def fitness(weights, covar):<EOL><INDENT>trc = weights * np.matmul(covar, weights)<EOL>n = len(trc)<EOL>sse = <NUM_LIT:0.><EOL>for i in range(n):<EOL><INDENT>for j in range(n):<EOL><INDENT>sse += np.abs(trc[i] - trc[j])<EOL><DEDENT><DEDENT>return sse<EOL><DEDENT>bounds = [(<NUM_LIT:0>,None) for i in range(len(x0))]<EOL...
Calculates the equal risk contribution / risk parity weights given a DataFrame of returns. Args: * x0 (np.array): Starting asset weights. * cov (np.array): covariance matrix. * b (np.array): Risk target weights. By definition target total risk contributions are all equal which makes this redundant. * maximum_itera...
f2810:m22
def _erc_weights_ccd(x0,<EOL>cov,<EOL>b,<EOL>maximum_iterations,<EOL>tolerance):
n = len(x0)<EOL>x = x0.copy()<EOL>var = np.diagonal(cov)<EOL>ctr = cov.dot(x)<EOL>sigma_x = np.sqrt(x.T.dot(ctr))<EOL>for iteration in range(maximum_iterations):<EOL><INDENT>for i in range(n):<EOL><INDENT>alpha = var[i]<EOL>beta = ctr[i] - x[i] * alpha<EOL>gamma = -b[i] * sigma_x<EOL>x_tilde = (-beta + np.sqrt(<EOL>bet...
Calculates the equal risk contribution / risk parity weights given a DataFrame of returns. Args: * x0 (np.array): Starting asset weights. * cov (np.array): covariance matrix. * b (np.array): Risk target weights. * maximum_iterations (int): Maximum iterations in iterative solutions. * tolerance (flo...
f2810:m23
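The two ERC variants above both try to equalize each asset's total risk contribution, trc_i = w_i * (cov @ w)_i. A sketch of the SLSQP route, minimizing the sum of *squared* differences between contributions for smoothness (the original body uses absolute differences), with the risk-target vector b implicitly equal-weight:

```python
import numpy as np
from scipy.optimize import minimize

def erc_weights(cov):
    """Equal risk contribution: equalize w_i * (cov @ w)_i across assets."""
    n = cov.shape[0]

    def sse(w):
        trc = w * (cov @ w)                        # total risk contribution per asset
        return np.sum((trc[:, None] - trc[None, :]) ** 2)

    res = minimize(sse, np.full(n, 1.0 / n),
                   method='SLSQP',
                   bounds=[(0.0, 1.0)] * n,
                   constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1.0})
    return res.x

cov = np.diag([0.04, 0.01])   # asset 1 has twice the volatility of asset 2
w = erc_weights(cov)          # roughly [1/3, 2/3]: the riskier asset gets less weight
```

With a diagonal covariance matrix ERC reduces to inverse-volatility weighting, which is why the riskier asset ends up with half the weight of the safer one.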
def calc_erc_weights(returns,<EOL>initial_weights=None,<EOL>risk_weights=None,<EOL>covar_method='<STR_LIT>',<EOL>risk_parity_method='<STR_LIT>',<EOL>maximum_iterations=<NUM_LIT:100>,<EOL>tolerance=<NUM_LIT>):
n = len(returns.columns)<EOL>if covar_method == '<STR_LIT>':<EOL><INDENT>covar = sklearn.covariance.ledoit_wolf(returns)[<NUM_LIT:0>]<EOL><DEDENT>elif covar_method == '<STR_LIT>':<EOL><INDENT>covar = returns.cov().values<EOL><DEDENT>else:<EOL><INDENT>raise NotImplementedError('<STR_LIT>')<EOL><DEDENT>if initial_weights...
Calculates the equal risk contribution / risk parity weights given a DataFrame of returns. Args: * returns (DataFrame): Returns for multiple securities. * initial_weights (list): Starting asset weights [default inverse vol]. * risk_weights (list): Risk target weights [default equal weight]. * covar_met...
f2810:m24
def get_num_days_required(offset, period='<STR_LIT:d>', perc_required=<NUM_LIT>):
x = pd.to_datetime('<STR_LIT>')<EOL>delta = x - (x - offset)<EOL>days = delta.days * <NUM_LIT><EOL>if period == '<STR_LIT:d>':<EOL><INDENT>req = days * perc_required<EOL><DEDENT>elif period == '<STR_LIT:m>':<EOL><INDENT>req = (days / <NUM_LIT:20>) * perc_required<EOL><DEDENT>elif period == '<STR_LIT:y>':<EOL><INDENT>re...
Estimates the number of days required to assume that data is OK. Helper function used to determine if there are enough "good" data days over a given period. Args: * offset (DateOffset): Offset (lookback) period. * period (str): Period string. * perc_required (float): percentage of number of days e...
f2810:m25
def calc_clusters(returns, n=None, plot=False):
<EOL>corr = returns.corr()<EOL>diss = <NUM_LIT:1> - corr<EOL>mds = sklearn.manifold.MDS(dissimilarity='<STR_LIT>')<EOL>xy = mds.fit_transform(diss)<EOL>def routine(k):<EOL><INDENT>km = sklearn.cluster.KMeans(n_clusters=k)<EOL>km_fit = km.fit(xy)<EOL>labels = km_fit.labels_<EOL>centers = km_fit.cluster_centers_<EOL>mapp...
Calculates the clusters based on k-means clustering. Args: * returns (pd.DataFrame): DataFrame of returns * n (int): Specify # of clusters. If None, this will be automatically determined * plot (bool): Show plot? Returns: * dict with structure: {cluster# : [col names]}
f2810:m26
def calc_ftca(returns, threshold=<NUM_LIT:0.5>):
<EOL>i = <NUM_LIT:0><EOL>corr = returns.corr()<EOL>remain = list(corr.index.copy())<EOL>n = len(remain)<EOL>res = {}<EOL>while n > <NUM_LIT:0>:<EOL><INDENT>if n == <NUM_LIT:1>:<EOL><INDENT>i += <NUM_LIT:1><EOL>res[i] = remain<EOL>n = <NUM_LIT:0><EOL><DEDENT>else:<EOL><INDENT>cur_corr = corr[remain].loc[remain]<EOL>mc =...
Implementation of David Varadi's `Fast Threshold Clustering Algorithm (FTCA) <http://cssanalytics.wordpress.com/2013/11/26/fast-threshold-clustering-algorithm-ftca/>`_. http://cssanalytics.wordpress.com/2013/11/26/fast-threshold-clustering-algorithm-ftca/ # NOQA More stable than k-means for clustering purposes. If y...
f2810:m27
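A simplified single-pass sketch of FTCA: find the assets with the highest (HC) and lowest (LC) average correlation; if they correlate above the threshold, everything forms one cluster, otherwise each remaining asset joins whichever seed it correlates with more. The original algorithm only pulls in members above the threshold and iterates on the rest; this sketch assigns everything in one split.

```python
import pandas as pd

def ftca(corr, threshold=0.5):
    """Simplified Fast Threshold Clustering on a correlation matrix."""
    remain = list(corr.index)
    clusters = []
    while remain:
        if len(remain) == 1:
            clusters.append(list(remain))
            remain = []
            continue
        sub = corr.loc[remain, remain]
        mc = sub.mean().sort_values()         # average correlation to the group
        lc, hc = mc.index[0], mc.index[-1]    # least / most correlated assets
        if corr.loc[hc, lc] > threshold:
            clusters.append(list(remain))     # everything related enough: one cluster
            remain = []
        else:
            c_hc, c_lc = [hc], [lc]
            for a in remain:
                if a in (hc, lc):
                    continue
                # join whichever seed the asset correlates with more
                (c_hc if corr.loc[a, hc] >= corr.loc[a, lc] else c_lc).append(a)
            clusters.extend([c_hc, c_lc])
            remain = []
    return clusters

names = ['a', 'b', 'c', 'd']
m = [[1.0, 0.9, 0.1, 0.1],
     [0.9, 1.0, 0.1, 0.1],
     [0.1, 0.1, 1.0, 0.8],
     [0.1, 0.1, 0.8, 1.0]]
clusters = ftca(pd.DataFrame(m, index=names, columns=names))
```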
def limit_weights(weights, limit=<NUM_LIT:0.1>):
if <NUM_LIT:1.0> / limit > len(weights):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if isinstance(weights, dict):<EOL><INDENT>weights = pd.Series(weights)<EOL><DEDENT>if np.round(weights.sum(), <NUM_LIT:1>) != <NUM_LIT:1.0>:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>% weights.sum())<EOL><DEDENT>res = np.r...
Limits weights and redistributes the excess amount proportionally. ex: - weights are {a: 0.7, b: 0.2, c: 0.1} - call with limit=0.5 - excess 0.2 in a is distributed to b and c proportionally. - result is {a: 0.5, b: 0.33, c: 0.167} Args: * weights (Series): A series describing the weights...
f2810:m28
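The cap-and-redistribute scheme in the docstring's example can be sketched as an iterative loop: clip anything over the limit, hand the excess to the remaining weights in proportion to their size, and repeat in case the redistribution pushed someone else over.

```python
import pandas as pd

def limit_weights(weights, limit=0.5):
    """Cap weights at `limit`, redistributing the excess proportionally."""
    w = pd.Series(weights, dtype=float)
    if 1.0 / limit > len(w):
        raise ValueError('1 / limit must be <= number of weights')
    while (w > limit + 1e-12).any():
        over = w > limit
        excess = (w[over] - limit).sum()
        w[over] = limit
        under = w[~over]
        # redistribute in proportion to each remaining weight
        w[~over] = under + excess * under / under.sum()
    return w

res = limit_weights({'a': 0.7, 'b': 0.2, 'c': 0.1}, limit=0.5)
# a capped at 0.5; the 0.2 excess is split 2:1 between b and c
```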
def random_weights(n, bounds=(<NUM_LIT:0.>, <NUM_LIT:1.>), total=<NUM_LIT:1.0>):
low = bounds[<NUM_LIT:0>]<EOL>high = bounds[<NUM_LIT:1>]<EOL>if high < low:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>')<EOL><DEDENT>if n * high < total or n * low > total:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>w = [<NUM_LIT:0>] * n<EOL>tgt = -float(total)<EOL>for i in range(n):<EOL><INDENT...
Generate pseudo-random weights. Returns a list of random weights that is of length n, where each weight is in the range bounds, and where the weights sum up to total. Useful for creating random portfolios when benchmarking. Args: * n (int): number of random weights * bounds ((low, high)): bounds for each wei...
f2810:m29
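One way to implement the constrained random draw described above: pick each weight from the sub-range that keeps the remaining total feasible, then shuffle to remove the ordering bias that sequential drawing introduces. This is a sketch, not necessarily the library's exact sampling scheme.

```python
import random

def random_weights(n, bounds=(0.0, 1.0), total=1.0):
    """n random weights, each within bounds, summing to total."""
    low, high = bounds
    if high < low:
        raise ValueError('high must be >= low')
    if n * high < total or n * low > total:
        raise ValueError('bounds cannot produce the requested total')
    w, remaining = [], total
    for i in range(n):
        left = n - i - 1                        # weights still to be drawn
        lo = max(low, remaining - left * high)  # keep the rest feasible
        hi = min(high, remaining - left * low)
        x = random.uniform(lo, hi)
        w.append(x)
        remaining -= x
    random.shuffle(w)                           # remove the ordering bias
    return w

random.seed(42)
ws = random_weights(5, bounds=(0.0, 0.4), total=1.0)
```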
def plot_heatmap(data, title='<STR_LIT>', show_legend=True,<EOL>show_labels=True, label_fmt='<STR_LIT>',<EOL>vmin=None, vmax=None,<EOL>figsize=None, label_color='<STR_LIT:w>',<EOL>cmap='<STR_LIT>', **kwargs):
fig, ax = plt.subplots(figsize=figsize)<EOL>heatmap = ax.pcolor(data, vmin=vmin, vmax=vmax, cmap=cmap)<EOL>ax.invert_yaxis()<EOL>if title is not None:<EOL><INDENT>plt.title(title)<EOL><DEDENT>if show_legend:<EOL><INDENT>fig.colorbar(heatmap)<EOL><DEDENT>if show_labels:<EOL><INDENT>vals = data.values<EOL>for x in range(...
Plot a heatmap using matplotlib's pcolor. Args: * data (DataFrame): DataFrame to plot. Usually small matrix (ex. correlation matrix). * title (string): Plot title * show_legend (bool): Show color legend * show_labels (bool): Show value labels * label_fmt (str): Label format string * vmi...
f2810:m30
def plot_corr_heatmap(data, **kwargs):
return plot_heatmap(data.corr(), vmin=-<NUM_LIT:1>, vmax=<NUM_LIT:1>, **kwargs)<EOL>
Plots the correlation heatmap for a given DataFrame.
f2810:m31
def rollapply(data, window, fn):
res = data.copy()<EOL>res[:] = np.nan<EOL>n = len(data)<EOL>if window > n:<EOL><INDENT>return res<EOL><DEDENT>for i in range(window - <NUM_LIT:1>, n):<EOL><INDENT>res.iloc[i] = fn(data.iloc[i - window + <NUM_LIT:1>:i + <NUM_LIT:1>])<EOL><DEDENT>return res<EOL>
Apply a function fn over a rolling window of size window. Args: * data (Series or DataFrame): Series or DataFrame * window (int): Window size * fn (function): Function to apply over the rolling window. For a series, the return value is expected to be a single number. For a DataFrame, it shu...
f2810:m32
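The rolling-apply loop above is simple enough to reproduce directly: positions before the first full window stay NaN, and each subsequent position gets fn applied to the trailing window.

```python
import numpy as np
import pandas as pd

def rollapply(data, window, fn):
    """Apply fn over a rolling window; leading positions stay NaN."""
    res = data.astype(float).copy()
    res[:] = np.nan
    for i in range(window - 1, len(data)):
        res.iloc[i] = fn(data.iloc[i - window + 1:i + 1])
    return res

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
out = rollapply(s, 3, lambda x: x.mean())   # rolling 3-period mean
```

For simple aggregations pandas' built-in `s.rolling(3).mean()` is equivalent and faster; the explicit loop is useful when fn needs the whole window DataFrame.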
def _winsorize_wrapper(x, limits):
if isinstance(x, pd.Series):<EOL><INDENT>if x.count() == <NUM_LIT:0>:<EOL><INDENT>return x<EOL><DEDENT>notnanx = ~np.isnan(x)<EOL>x[notnanx] = scipy.stats.mstats.winsorize(x[notnanx],<EOL>limits=limits)<EOL>return x<EOL><DEDENT>else:<EOL><INDENT>return scipy.stats.mstats.winsorize(x, limits=limits)<EOL><DEDENT>
Wraps scipy winsorize function to drop na's
f2810:m33
def winsorize(x, axis=<NUM_LIT:0>, limits=<NUM_LIT>):
<EOL>x = x.copy()<EOL>if isinstance(x, pd.DataFrame):<EOL><INDENT>return x.apply(_winsorize_wrapper, axis=axis, args=(limits, ))<EOL><DEDENT>else:<EOL><INDENT>return pd.Series(_winsorize_wrapper(x, limits).values,<EOL>index=x.index)<EOL><DEDENT>
`Winsorize <https://en.wikipedia.org/wiki/Winsorizing>`_ values based on limits
f2810:m34
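The library delegates to `scipy.stats.mstats.winsorize`; a pure-NumPy equivalent for illustration clips the lowest and highest `limits` fraction of values to the nearest surviving value.

```python
import numpy as np

def winsorize(x, limits=0.1):
    """Clip the lowest/highest `limits` fraction of values."""
    a = np.asarray(x, dtype=float)
    s = np.sort(a)
    k = int(limits * len(a))   # number of values clipped at each tail
    return np.clip(a, s[k], s[len(a) - 1 - k])

out = winsorize(np.arange(10), limits=0.1)   # 0 becomes 1 and 9 becomes 8
```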
def rescale(x, min=<NUM_LIT:0.>, max=<NUM_LIT:1.>, axis=<NUM_LIT:0>):
def innerfn(x, min, max):<EOL><INDENT>return np.interp(x, [np.min(x), np.max(x)], [min, max])<EOL><DEDENT>if isinstance(x, pd.DataFrame):<EOL><INDENT>return x.apply(innerfn, axis=axis, args=(min, max,))<EOL><DEDENT>else:<EOL><INDENT>return pd.Series(innerfn(x, min, max), index=x.index)<EOL><DEDENT>
Rescale values to fit a certain range [min, max]
f2810:m35
def annualize(returns, durations, one_year=<NUM_LIT>):
return np.power(<NUM_LIT:1.> + returns, <NUM_LIT:1.> / (durations / one_year)) - <NUM_LIT:1.><EOL>
Annualize returns using their respective durations. Formula used is: (1 + returns) ** (1 / (durations / one_year)) - 1
f2810:m36
def deannualize(returns, nperiods):
return np.power(<NUM_LIT:1> + returns, <NUM_LIT:1.> / nperiods) - <NUM_LIT:1.><EOL>
Convert return expressed in annual terms on a different basis. Args: * returns (float, Series, DataFrame): Return(s) * nperiods (int): Target basis, typically 252 for daily, 12 for monthly, etc.
f2810:m37
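The two conversions above are inverses of each other: annualize compounds a per-period return up to a yearly rate, deannualize breaks a yearly rate back down. A sketch (the one_year default of 365 days is an assumption, since the constant is elided in the source):

```python
import numpy as np

def annualize(returns, durations, one_year=365.0):
    """(1 + r) ** (1 / (duration / one_year)) - 1."""
    return np.power(1.0 + returns, 1.0 / (durations / one_year)) - 1.0

def deannualize(returns, nperiods):
    """Per-period return implied by an annual return over nperiods."""
    return np.power(1.0 + returns, 1.0 / nperiods) - 1.0

monthly = deannualize(0.12, 12)   # monthly rate that compounds to 12% per year
```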
def calc_sortino_ratio(returns, rf=<NUM_LIT:0.>, nperiods=None, annualize=True):
if type(rf) is float and rf != <NUM_LIT:0> and nperiods is None:<EOL><INDENT>raise Exception('<STR_LIT>')<EOL><DEDENT>er = returns.to_excess_returns(rf, nperiods=nperiods)<EOL>negative_returns = np.minimum(returns[<NUM_LIT:1>:], <NUM_LIT:0.>)<EOL>std = np.std(negative_returns, ddof=<NUM_LIT:1>)<EOL>res = np.divide(er.m...
Calculates the `Sortino ratio <https://www.investopedia.com/terms/s/sortinoratio.asp>`_ given a series of returns (see `Sharpe vs. Sortino <https://www.investopedia.com/ask/answers/010815/what-difference-between-sharpe-ratio-and-sortino-ratio.asp>`_). Args: * returns (Series or DataFrame): Returns * rf (float,...
f2810:m38
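The Sortino ratio divides mean excess return by the downside deviation (the standard deviation of returns floored at zero). A sketch with rf assumed per-period; note the library's body skips the first return (typically NaN when derived from prices), whereas this sketch uses the whole array.

```python
import numpy as np

def calc_sortino_ratio(returns, rf=0.0):
    """Mean excess return divided by downside deviation."""
    returns = np.asarray(returns, dtype=float)
    excess = returns - rf
    downside = np.minimum(returns, 0.0)   # keep only the losses
    return excess.mean() / np.std(downside, ddof=1)

r = [0.02, -0.01, 0.03, -0.02]
ratio = calc_sortino_ratio(r)
```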
def to_excess_returns(returns, rf, nperiods=None):
if type(rf) is float and nperiods is not None:<EOL><INDENT>_rf = deannualize(rf, nperiods)<EOL><DEDENT>else:<EOL><INDENT>_rf = rf<EOL><DEDENT>return returns - _rf<EOL>
Given a series of returns, it will return the excess returns over rf. Args: * returns (Series, DataFrame): Returns * rf (float, Series): `Risk-Free rate(s) <https://www.investopedia.com/terms/r/risk-freerate.asp>`_ expressed in annualized term or return series * nperiods (int): Optional. If provided, will ...
f2810:m39
def calc_calmar_ratio(prices):
return np.divide(prices.calc_cagr(), abs(prices.calc_max_drawdown()))<EOL>
Calculates the `Calmar ratio <https://www.investopedia.com/terms/c/calmarratio.asp>`_ given a series of prices Args: * prices (Series, DataFrame): Price series
f2810:m40
def to_ulcer_index(prices):
dd = prices.to_drawdown_series()<EOL>return np.divide(np.sqrt(np.sum(np.power(dd, <NUM_LIT:2>))), dd.count())<EOL>
Converts from prices -> `Ulcer index <https://www.investopedia.com/terms/u/ulcerindex.asp>`_ See https://en.wikipedia.org/wiki/Ulcer_index Args: * prices (Series, DataFrame): Prices
f2810:m41
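The Ulcer index is the root-mean-square of the drawdown series (price relative to its running maximum, minus one). The sketch below uses the standard sqrt(sum/n) form; the tokenized body above reads as sqrt(sum)/n, which may differ from the textbook definition.

```python
import numpy as np
import pandas as pd

def to_drawdown_series(prices):
    """Drawdown at each point: price / running maximum - 1."""
    return prices / prices.cummax() - 1.0

def to_ulcer_index(prices):
    """Root-mean-square of the drawdown series (standard definition)."""
    dd = to_drawdown_series(prices)
    return np.sqrt((dd ** 2).sum() / dd.count())

prices = pd.Series([100.0, 50.0, 100.0])
ui = to_ulcer_index(prices)   # dd = [0, -0.5, 0] -> sqrt(0.25 / 3)
```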
def to_ulcer_performance_index(prices, rf=<NUM_LIT:0.>, nperiods=None):
if type(rf) is float and rf != <NUM_LIT:0> and nperiods is None:<EOL><INDENT>raise Exception('<STR_LIT>')<EOL><DEDENT>er = prices.to_returns().to_excess_returns(rf, nperiods=nperiods)<EOL>return np.divide(er.mean(), prices.to_ulcer_index())<EOL>
Converts from prices -> `ulcer performance index <https://www.investopedia.com/terms/u/ulcerindex.asp>`_. See https://en.wikipedia.org/wiki/Ulcer_index Args: * prices (Series, DataFrame): Prices * rf (float, Series): `Risk-free rate of return <https://www.investopedia.com/terms/r/risk-freerate.asp>`_. Assumed...
f2810:m42
def resample_returns(<EOL>returns,<EOL>func,<EOL>seed=<NUM_LIT:0>,<EOL>num_trials=<NUM_LIT:100><EOL>):
<EOL>if type(returns) is pd.Series:<EOL><INDENT>stats = pd.Series(index=range(num_trials))<EOL><DEDENT>elif type(returns) is pd.DataFrame:<EOL><INDENT>stats = pd.DataFrame(<EOL>index=range(num_trials),<EOL>columns=returns.columns<EOL>)<EOL><DEDENT>else:<EOL><INDENT>raise(TypeError("<STR_LIT>"))<EOL><DEDENT>n = returns....
Resample the returns and calculate any statistic on every new sample. https://en.wikipedia.org/wiki/Resampling_(statistics) :param returns (Series, DataFrame): Returns :param func: Given the resampled returns calculate a statistic :param seed: Seed for random number generator :param num_trials: Number of times to res...
f2810:m43
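The resampling loop above is a plain bootstrap: draw len(returns) observations with replacement, apply the statistic, and repeat num_trials times to build a sampling distribution. A sketch for the Series case:

```python
import numpy as np
import pandas as pd

def resample_returns(returns, func, seed=0, num_trials=100):
    """Bootstrap: sample returns with replacement, apply func to each sample."""
    rng = np.random.default_rng(seed)
    n = len(returns)
    stats = []
    for _ in range(num_trials):
        sample = returns.iloc[rng.integers(0, n, n)]   # draw n with replacement
        stats.append(func(sample))
    return pd.Series(stats)

rets = pd.Series([0.01, -0.02, 0.03, 0.00, 0.015])
means = resample_returns(rets, lambda r: r.mean(), num_trials=50)
```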
def extend_pandas():
PandasObject.to_returns = to_returns<EOL>PandasObject.to_log_returns = to_log_returns<EOL>PandasObject.to_price_index = to_price_index<EOL>PandasObject.rebase = rebase<EOL>PandasObject.calc_perf_stats = calc_perf_stats<EOL>PandasObject.to_drawdown_series = to_drawdown_series<EOL>PandasObject.calc_max_drawdown = calc_ma...
Extends pandas' PandasObject (Series, Series, DataFrame) with some functions defined in this file. This facilitates common functional composition used in quant finance. Ex: prices.to_returns().dropna().calc_clusters() (where prices would be a DataFrame)
f2810:m44
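The extension mechanism is a monkey-patch: assigning functions onto `pandas.core.base.PandasObject` (the shared base class of Series and DataFrame) makes them chainable methods on every pandas object. A minimal sketch with two of the patched functions reimplemented for illustration:

```python
import pandas as pd
from pandas.core.base import PandasObject  # shared base of Series and DataFrame

def to_returns(prices):
    """Simple returns: p_t / p_{t-1} - 1."""
    return prices / prices.shift(1) - 1

def rebase(prices, value=100):
    """Rescale a price series to start at `value`."""
    return prices / prices.iloc[0] * value

# Monkey-patch: the functions become methods on all Series/DataFrames.
PandasObject.to_returns = to_returns
PandasObject.rebase = rebase

s = pd.Series([50.0, 55.0, 60.5])
r = s.rebase().to_returns()   # chainable, as in prices.to_returns().dropna()
```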
def set_riskfree_rate(self, rf):
self.rf = rf<EOL>self._update(self.prices)<EOL>
Set annual risk-free rate property and calculate properly annualized monthly and daily rates. Then performance stats are recalculated. Affects only this instance of the PerformanceStats. Args: * rf (float): Annual `risk-free rate <https://www.investopedia.com/terms/r/risk-freerate.asp>`_
f2810:c0:m1