signal_analog

A troposphere-inspired library for programmatic, declarative definition and management of SignalFx Charts, Dashboards, and Detectors.

Home Page: https://signal-analog.readthedocs.io/en/latest/

License: BSD 3-Clause "New" or "Revised" License

This library assumes a basic familiarity with resources in SignalFx. For a good overview of the SignalFx API consult the upstream documentation.


Features

  • Provides bindings for the SignalFlow DSL
  • Provides abstractions for:
    • Charts
    • Dashboards, DashboardGroups
    • Detectors
  • A CLI builder to wrap resource definitions (useful for automation)

Installation

Add signal_analog to the requirements file in your project:

# requirements.txt
# ... your other dependencies
signal_analog

Then run the following command to update your environment:

pip install -r requirements.txt
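
Alternatively, assuming the package is published to PyPI (the project's issues below reference PyPI releases), you should be able to install it directly:

pip install signal_analog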

Usage

signal_analog provides two kinds of abstractions: one for building resources in the SignalFx API, and another for describing metric timeseries through the SignalFlow DSL.

The following sections describe how to use the Resource abstractions in conjunction with the SignalFlow DSL.

Building Charts

signal_analog provides constructs for building charts in the signal_analog.charts module.

Consult the upstream documentation for more information on Charts.

Let's consider an example where we would like to build a chart to monitor memory utilization for a single application in a single environment.

This assumes the service reports its application name in the app dimension and its environment in the env dimension, with memory utilization reported via the memory.utilization metric name.

In a timeseries chart, all data displayed on the screen comes from at least one data definition in the SignalFlow language. Let's begin by defining our timeseries:

from signal_analog.flow import Data

ts = Data('memory.utilization')

In SignalFlow parlance a timeseries is only displayed on a chart if it has been "published". All stream functions in SignalFlow have a publish method that may be called at the end of all timeseries transformations.

ts = Data('memory.utilization').publish()

As a convenience, all transformations on stream functions return the callee, so in the above example ts remains bound to an instance of Data.
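
A quick sanity check of this behavior (assuming, per the statement above, that publish() returns its callee):

from signal_analog.flow import Data

ts = Data('memory.utilization')
assert ts.publish() is ts  # publish() returns the same Data instance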

Now, this timeseries isn't very useful by itself; if we attached this program to a chart we would see the timeseries for every application reporting this metric to SignalFx!

We can restrict our view of the data by adding a filter on application name:

from signal_analog.flow import Data, Filter

app_filter = Filter('app', 'foo')

ts = Data('memory.utilization', filter=app_filter).publish()

Now if we created a chart with this program we would only be looking at metrics that relate to the foo application. Much better, but we're still looking at instances of foo regardless of the environment they live in.

What we'll want to do is combine our app_filter with another filter for the environment. The signal_analog.combinators module provides some helpful constructs for achieving this goal:

from signal_analog.combinators import And

env_filter = Filter('env', 'prod')

all_filters = And(app_filter, env_filter)

ts = Data('memory.utilization', filter=all_filters).publish()

Excellent! We're now ready to create our chart.

First, let's give our chart a name:

from signal_analog.charts import TimeSeriesChart

memory_chart = TimeSeriesChart().with_name('Memory Used %')

Like its flow counterparts, charts adhere to the builder pattern for constructing objects that interact with the SignalFx API.

With our name in place, let's go ahead and add our program:

memory_chart = TimeSeriesChart().with_name('Memory Used %').with_program(ts)

Each Chart understands how to serialize our SignalFlow programs appropriately, so it is sufficient to simply pass in our reference here.

Finally, let's change the plot type on our chart so that we see solid areas instead of flimsy lines:

from signal_analog.charts import PlotType

memory_chart = TimeSeriesChart()\
                 .with_name('Memory Used %')\
                 .with_program(ts)\
                 .with_default_plot_type(PlotType.area_chart)

Terrific; there are only a few more details to cover before we have a complete chart.

In the following sections we'll see how we can create dashboards from collections of charts.

Building Dashboards

signal_analog provides constructs for building dashboards in the signal_analog.dashboards module.

Consult the upstream documentation for more information on the Dashboard API.

Building on the examples described in the previous section, we'd now like to build a dashboard containing our memory chart.

We start with the humble Dashboard object:

from signal_analog.dashboards import Dashboard

dash = Dashboard()

Many of the same methods for charts are available on dashboards as well, so let's give our dashboard a memorable name and configure its API token:

dash.with_name('My Little Dashboard: Metrics are Magic')\
    .with_api_token('my-api-token')

Our final task will be to add charts to our dashboard and create it in the API!

response = dash\
  .with_charts(memory_chart)\
  .with_api_token('my-api-token')\
  .create()

At this point one of two things will happen:

  • We receive some sort of error from the SignalFx API and an exception is thrown
  • We successfully created the dashboard, in which case the JSON response is returned as a dictionary.
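
For example, a minimal sketch of handling both outcomes; the exact exception type raised by signal_analog isn't documented here, so catching the requests library's HTTPError (and reading an 'id' field from the response) are assumptions:

import requests

# `dash` and `memory_chart` are the objects built earlier in this section.
try:
    response = dash\
        .with_charts(memory_chart)\
        .with_api_token('my-api-token')\
        .create()
    print('Created dashboard {0}'.format(response.get('id')))
except requests.exceptions.HTTPError as e:
    print('SignalFx API rejected the request: {0}'.format(e))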

Also, if you have an existing Dashboard Group and you want this new dashboard to be part of it, you can pass the group id of that dashboard group when creating the dashboard:

response = dash\
  .with_charts(memory_chart)\
  .with_api_token('my-api-token')\
  .create(group_id="asdf;lkj")

Now, storing API keys in source isn't ideal, so if you'd like to see how you can pass in your API keys at runtime check the documentation below to see how you can dynamically build a CLI for your resources.

Updating Dashboards

Once you have created a dashboard you can update properties like name and description:

dash.update(
    name='updated_dashboard_name',
    description='updated_dashboard_description'
)

Dashboard updates will also update any Chart configurations it owns.

Note: if the given dashboard does not already exist, `update` will create a new dashboard for you.

Providing Dashboard Filters

Dashboards can be configured to provide various filters that affect the behavior of all configured charts (overriding any conflicting filters at the chart level). You may wish to do this in order to quickly change the environment that you're observing for a given set of charts.

from signal_analog.filters import DashboardFilters, FilterVariable, FilterSource, FilterTime

app_var = FilterVariable()\
    .with_alias('app')\
    .with_property('app')\
    .with_is_required(True)\
    .with_value('foo')

env_var = FilterVariable()\
    .with_alias('env')\
    .with_property('env')\
    .with_is_required(True)\
    .with_value('prod')

aws_src = FilterSource().with_property("aws_region").with_value('us-west-2')

time = FilterTime().with_start("-1h").with_end("Now")

app_filter = DashboardFilters()\
    .with_variables(app_var, env_var)\
    .with_sources(aws_src)\
    .with_time(time)

Here we create two variable filters, "app=foo" and "env=prod", a source filter "aws_region=us-west-2", and a time filter spanning "-1h" to "Now". Now we can pass this config to a dashboard object:

response = dash\
    .with_charts(memory_chart)\
    .with_api_token('my-api-token')\
    .with_filters(app_filter)\
    .create()

If you are updating an existing dashboard:

response = dash\
    .with_filters(app_filter)\
    .update()

Dashboard Event Overlays and Selected Event Overlays

Viewing events overlaid on the charts in a dashboard requires an event to be viewed, a chart with showEventLines enabled, and a dashboard with the correct eventOverlays settings (plus selectedEventOverlays to show matching events by default).

Assuming that the events you would like to see exist, you would make a chart with showEventLines like so:

from signal_analog.flow import Data
from signal_analog.charts import TimeSeriesChart
program = Data('cpu.utilization').publish()
chart = TimeSeriesChart().with_name('Chart With Event Overlays')\
    .with_program(program).show_event_lines(True)

With our chart defined, we are ready to prepare our event overlays and selected event overlays for the dashboard. First we define the event signals we would like to match; in this case, we look for an event named "test" (include leading and/or trailing asterisks as wildcards if you need partial matching). Next we use those event signals to create our eventOverlays, making sure to include a color index for our event's symbol and setting the event line to True. We also pass the same event signals to selectedEventOverlays, which tells the dashboard to display matching events by default.

from signal_analog.eventoverlays import EventSignals, EventOverlays, SelectedEventOverlays
events = EventSignals().with_event_search_text("*test*")\
    .with_event_type("eventTimeSeries")

eventoverlay = EventOverlays().with_event_signals(events)\
    .with_event_color_index(1)\
    .with_event_line(True)

selectedeventoverlay = SelectedEventOverlays()\
    .with_event_signals(events)

Next we combine our chart, our event overlay, and our selected event overlay into a dashboard object:

from signal_analog.dashboards import Dashboard
dashboard_with_event_overlays = Dashboard().with_name('Dashboard With Overlays')\
    .with_charts(chart)\
    .with_event_overlay(eventoverlay)\
    .with_selected_event_overlay(selectedeventoverlay)

Finally we build our resources in SignalFx with the CLI builder:

if __name__ == '__main__':
    from signal_analog.cli import CliBuilder
    cli = CliBuilder().with_resources(dashboard_with_event_overlays)\
        .build()
    cli()

Creating Detectors

signal_analog provides a means of managing the lifecycle of Detectors in the signal_analog.detectors module. As of v0.21.0 only a subset of the full Detector API is supported.

Consult the upstream documentation for more information about Detectors.

Detectors are comprised of a few key elements:

  • A name
  • A SignalFlow Program
  • A set of rules for alerting

We start by building a Detector object and giving it a name:

from signal_analog.detectors import Detector

detector = Detector().with_name('My Super Serious Detector')

We'll now need to give it a program to alert on:

from signal_analog.flow import Program, Detect, Filter, Data
from signal_analog.combinators import GT

# This program fires an alert if memory utilization is above 90% for the
# 'bar' application.
data = Data('memory.utilization', filter=Filter('app', 'bar')).publish(label='A')
alert_label = 'Memory Utilization Above 90'
detect = Detect(GT(data, 90)).publish(label=alert_label)

detector.with_program(Program(detect))

With our name and program in hand, it's time to build up an alert rule that we can use to notify our teammates:

# We provide a number of notification strategies in the detectors module.
from signal_analog.detectors import EmailNotification, Rule, Severity

# `alert_label` comes from the detector program defined above.
info_rule = Rule()\
  .for_label(alert_label)\
  .with_severity(Severity.Info)\
  .with_notifications(EmailNotification('user@example.com'))

detector.with_rules(info_rule)

# We can now create this resource in SignalFx:
detector.with_api_token('foo').create()
# For a more robust solution consult the "Creating a CLI for your Resources"
# section below.

To add multiple alerting rules we would need to use different detect statements with distinct labels to differentiate them from one another.
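
A hedged sketch of what that might look like, reusing the builders shown above (the Severity levels beyond Info and passing multiple rules to with_rules are assumptions, not confirmed by this README):

from signal_analog.combinators import GT
from signal_analog.detectors import Detector, EmailNotification, Rule, Severity
from signal_analog.flow import Data, Detect, Program

data = Data('memory.utilization').publish(label='A')
warning = Detect(GT(data, 80)).publish(label='Memory Above 80')
critical = Detect(GT(data, 90)).publish(label='Memory Above 90')

# Each rule references its detect statement by the label it published.
detector = Detector()\
    .with_name('Tiered Memory Detector')\
    .with_program(Program(warning, critical))\
    .with_rules(
        Rule().for_label('Memory Above 80')
              .with_severity(Severity.Warning)
              .with_notifications(EmailNotification('user@example.com')),
        Rule().for_label('Memory Above 90')
              .with_severity(Severity.Critical)
              .with_notifications(EmailNotification('user@example.com')))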

Detectors that Combine Data Streams

More complex detectors, like those created as a function of two other data streams, require a more complex setup including data stream assignments. If we wanted to create a detector that watched for an average above a certain threshold, we may want to use the quotient of the sum() of the data and the count() of the datapoints over a given period of time.

from signal_analog.flow import \
    Assign, \
    Data, \
    Detect, \
    Program, \
    Ref, \
    When

from signal_analog.combinators import \
    Div, \
    GT

program = Program(
    Assign('my_var', Data('cpu.utilization')),
    Assign('my_other_var', Data('cpu.utilization').count()),
    Assign('mean', Div(Ref('my_var'), Ref('my_other_var'))),
    Detect(When(GT(Ref('mean'), 2000)))
)

print(program)

The above code generates the following program:

my_var = data('cpu.utilization')
my_other_var = data('cpu.utilization').count()
mean = (my_var / my_other_var)

detect(when(mean > 2000))

Building Detectors from Existing Charts

We can also build up Detectors from an existing chart, which allows us to reuse our SignalFlow program and ensure consistency between what we're monitoring and what we're alerting on.

Let's assume that we already have a chart defined for our use:

from signal_analog.flow import Program, Data
from signal_analog.charts import TimeSeriesChart

program = Program(Data('cpu.utilization').publish(label='A'))
cpu_chart = TimeSeriesChart().with_name('CPU Utilization').with_program(program)

In order to alert on this chart we'll use the from_chart builder for detectors:

from signal_analog.combinators import GT
from signal_analog.detectors import Detector
from signal_analog.flow import Detect

# Alert when CPU utilization rises above 95%
detector = Detector()\
    .with_name('CPU Detector')\
    .from_chart(
        cpu_chart,
        # `p` is the Program object from the cpu_chart we passed in.
        lambda p: Detect(GT(p.find_label('A'), 95)).publish(label='Info Alert')
    )

The above example won't actually alert on anything until we add a Rule, which you can find examples for in the previous section.

Linking Charts to Existing Detectors

To see a visualization of a Detector's status from within a chart, the signal_analog.flow module provides an Alerts data stream that can create a SignalFlow statement. That statement can be appended to the chart's Program object. In this example we assume a Detector was previously created. To create the link we will need the detector id; one place to obtain it is the detector's page in the web user interface, whose url has the form: https://app.signalfx.com/#/detector/v2/{detector_id}

To refresh our memory, our data in the previous chart example was:

ts = Data('memory.utilization', filter=all_filters).publish()

We can append an additional alert data stream. Import Program and Alerts from the signal_analog.flow module. First we need to wrap the Data object in a Program object:

from signal_analog.flow import Alerts, Program

ts_program = Program(ts)

Then we can create a new statement using an Alert object with the detector id, publish the stream, and append the new statement to our program:

# detector_id comes from the detector URL described above
notifications = Alerts(detector_id).publish()
ts_program.statements.append(notifications)

The program can be included in a chart as usual:

memory_chart = TimeSeriesChart()\
                .with_program(ts_program)\
                .with_default_plot_type(PlotType.area_chart)

By default the alert will show as a green box around the chart when the Detector is not in Alarm. The Detector can also be accessed from the bell icon in the upper right corner of the chart.

Using Flow and Combinator Functions In Formulas

signal_analog also provides functions for combining SignalFlow statements into more complex SignalFlow Formulas. These sorts of Formulas can be useful when creating more complex detectors and charts. For instance, if you would like to multiply one data stream by another and receive the sum of that Formula, it can be accomplished using Op and Mul like so:

from signal_analog.flow import Op, Data
from signal_analog.combinators import Mul

# Multiply stream A by stream B and sum the result
A = Data('request.mean')
B = Data('request.count')
C = Op(Mul(A, B)).sum()

Calling print(C) in the above example would produce the following output:

(data("request.mean") * data("request.count")).sum()

Building Dashboard Groups

signal_analog provides abstractions for building dashboard groups in the signal_analog.dashboards module.

Consult the upstream documentation for more information on the Dashboard Groups API.

Building on the examples described in the previous section, we'd now like to build a dashboard group containing our dashboards.

First, let's build a couple of Dashboard objects similar to how we did in the Building Dashboards example:

from signal_analog.dashboards import Dashboard, DashboardGroup

dg = DashboardGroup()
dash1 = Dashboard().with_name('My Little Dashboard1: Metrics are Magic')\
    .with_charts(memory_chart)
dash2 = Dashboard().with_name('My Little Dashboard2: Metrics are Magic')\
    .with_charts(memory_chart)

Note: we do not call create() on the Dashboard objects ourselves; the DashboardGroup object is responsible for creating all of its child resources.

Many of the same methods for dashboards are available on dashboard groups as well, so let's give our dashboard group a memorable name and configure its API token:

dg.with_name('My Dashboard Group')\
    .with_api_token('my-api-token')

Our final task will be to add our dashboard to the dashboard group and create it in the API!

response = dg\
    .with_dashboards(dash1)\
    .with_api_token('my-api-token')\
    .create()

Now, storing API keys in source isn't ideal, so if you'd like to see how you can pass in your API keys at runtime check the documentation below to see how you can dynamically build a CLI for your resources.

Updating Dashboard Groups

Once you have created a dashboard group, you can update properties like name and description of a dashboard group or add/remove dashboards in a group.

Example 1:

dg.with_api_token('my-api-token')\
    .update(name='updated_dashboard_group_name',
            description='updated_dashboard_group_description')

Example 2:

dg.with_api_token('my-api-token').with_dashboards(dash1, dash2).update()

Talking to the SignalFlow API Directly

If you need to process SignalFx data outside the confines of the SignalFx UI, it may be useful to call the SignalFlow API directly. Note that you may incur time penalties when pulling data out, depending on the source of the data (e.g. AWS/CloudWatch).

SignalFlow constructs are contained in the flow module. The following is an example SignalFlow program that monitors the requests-per-second (RPS) metrics of an API service (like Riposte) for the foo application in the prod environment.

from signal_analog.flow import Data, Filter
from signal_analog.combinators import And

all_filters = And(Filter('env', 'prod'), Filter('app', 'foo'))

program = Data('requests.count', filter=all_filters).publish()

You now have an object representation of the SignalFlow program. To take it for a test ride you can use the official SignalFx client like so:

# Original example found here:
# https://github.com/signalfx/signalfx-python#executing-signalflow-computations

import signalfx
from signal_analog.flow import Data, Filter
from signal_analog.combinators import And

app_filter = Filter('app', 'foo')
env_filter = Filter('env', 'prod')
program = Data('requests.count', filter=And(app_filter, env_filter)).publish()

with signalfx.SignalFx().signalflow('MY_TOKEN') as flow:
    print('Executing {0} ...'.format(program))
    computation = flow.execute(str(program))

    for msg in computation.stream():
        if isinstance(msg, signalfx.signalflow.messages.DataMessage):
            print('{0}: {1}'.format(msg.logical_timestamp_ms, msg.data))
        if isinstance(msg, signalfx.signalflow.messages.EventMessage):
            print('{0}: {1}'.format(msg.timestamp_ms, msg.properties))

General Resource Guidelines

Charts Always Belong to Dashboards

It is always assumed that a Chart belongs to an existing Dashboard. This makes it easier for the library to manage the state of the world.

Resource Names are Unique per Account

In a signal_analog world it is assumed that all resource names are unique. That is, if we have two dashboards named 'Foo Dashboard', we expect to see errors when we attempt to update either dashboard via signal_analog.

Resource names are assumed to be unique in order to simplify state management by the library itself. In practice we have not found this to be a major inconvenience.

Configuration is the Source of Truth

When conflicts arise between the state of a resource in your configuration and what SignalFx thinks that state should be, this library always prefers the local configuration.

Only "CCRUD" Methods Interact with the SignalFx API

Resource objects contain a number of builder methods to enable a "fluent" API when describing your project's dashboards in SignalFx. It is assumed that these methods do not perform state-affecting actions in the SignalFx API.

Only "CCRUD" (Create, Clone, Read, Update, and Delete) methods will affect the state of your resources in SignalFx.

Creating a CLI for your Resources

signal_analog provides builders for fully featured command line clients that can manage the lifecycle of sets of resources.

Simple CLI integration

Integrating with the CLI is as simple as importing the builder and passing it your resources. Let's consider an example where we want to update two existing dashboards:

#!/usr/bin/env python

# ^ It's always good to include a "hashbang" so that your terminal knows
# how to run your script.

from signal_analog.dashboards import Dashboard
from signal_analog.cli import CliBuilder

ingest_dashboard = Dashboard().with_name('my-ingest-service')
service_dashboard = Dashboard().with_name('my-service')

if __name__ == '__main__':
  cli = CliBuilder()\
      .with_resources(ingest_dashboard, service_dashboard)\
      .build()
  cli()

Assuming we called this dashboards.py we could run it in one of two ways:

  • Give the script execution rights and run it directly (typically chmod +x dashboards.py)
    • ./dashboards.py --api-key mykey update
  • Pass the script in to the Python executor
    • python dashboards.py --api-key mykey update

If you want to know about the available actions you can take with your new CLI you can always pass the --help flag:

./dashboards.py --help

This gives you the following features:

  • Consistent resource management
    • All resources passed to the CLI builder can be updated with one update invocation, rather than calling the update() method on each resource individually
  • API key handling for all resources
    • Rather than duplicating your API key for each resource, you can instead invoke the CLI with an API key
    • This also provides a way to supply keys for users who don't want to store them in source control (that's you! don't store your keys in source control)

Documentation

Example Code

  • See examples included in this project.

Contributing

Please read our docs here for more info about contributing.

signal_analog's People

Contributors

agilefall, bebyx, benscholler, dogonthehorizon, dpolyakov-algn, ggeorgiev-sfly, greatestusername, jrduncans, regunathan, sjon-hub, tlisonbee, tylersouthwick, venky526, vidspersonalusername


signal_analog's Issues

Add lint errors to Travis build

To encourage good contributor behavior we should abide by our own documentation and make sure that linting passes for the project.

Let's fix all the current lint errors and add lint tasks to our Travis builds.

Update CONTRIBUTING.md to remove bumpversion instructions

Lines 99-103 in CONTRIBUTING.md instruct the general audience on how to increment the version using bumpversion, whereas this functionality is only accessible to the maintainers for now. We should remove those instructions from the document to avoid confusion.

Add basic Team support for Dashboard Groups

We'd like to be able to associate dashboard groups to existing Teams in SignalFx for that sweet grouping functionality.

This is not a robust team implementation, look for a future ticket to implement that feature.

Allow specifying the base_url in constructor of Resource subclasses and in CliBuilder

When using a non-default realm of signalfx you need to change the API base URL to be able to use it.

Right now the only way to change the base URL in any object is to directly set the base_url property on it, since the constructors do not propagate the Resource constructor's option to set the base_url.

Further to this, it's not possible to override the base_url using the CLI.
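
Until that lands, a sketch of the workaround the issue describes, setting the property directly (the realm URL below is a hypothetical example; check the Resource constructor for the expected format):

from signal_analog.dashboards import Dashboard

dash = Dashboard().with_name('My Dashboard')
# Point the resource at a non-default realm by assigning base_url directly.
dash.base_url = 'https://api.us1.signalfx.com'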

Add support for flow assignments & references

We do not currently support building this program in signal_analog:

my_var = data('cpu.utilization')
my_other_var = data('cpu.utilization').count()
mean = (my_var / my_other_var)

detect(when(mean > 2000))

This is semantically different from the following program (which we do currently support):

detect(when(
    (data('cpu.utilization') / data('cpu.utilization').count()) > 2000
))

The solution is to support assignment and references in signal_analog so that these kinds of programs can be expressed.

Something like the following seems to make sense to me:

from signal_analog.flow import Data, When, Detect, Ref
from signal_analog.combinators import Assign, GT, Div

program = Program(
    Assign('my_var', Data('cpu.utilization')),
    Assign('my_other_var', Data('cpu.utilization').count()),
    Assign('mean', Div(Ref('my_var'), Ref('my_other_var'))),
    Detect(When(GT(Ref('mean'), 2000)))
)

print(program)

Disabling a Detector appears to have changed.

Issue:
Disabling a detector no longer works.

Suspected Cause:
API change on SFX side. We used to pass in disabled:disabled as an option for a detector to disable it.
https://github.com/Nike-Inc/signal_analog/blob/master/signal_analog/detectors.py#L270-L280

Now it appears disabling a detector is with a PUT to /detector/{id}/disable
https://developers.signalfx.com/detectors_reference.html#tag/Disable-Single-Detector

We will need to refactor to grab the id for the matching detector to be disabled and hit that endpoint.
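
A hedged sketch of calling the new endpoint directly with the requests library, based on the linked API docs (the header name and base URL are the standard SignalFx ones, assumed here rather than taken from this issue):

import requests

def disable_detector(detector_id, api_token,
                     base_url='https://api.signalfx.com'):
    # PUT /v2/detector/{id}/disable per the SignalFx detector docs.
    resp = requests.put(
        '{0}/v2/detector/{1}/disable'.format(base_url, detector_id),
        headers={'X-SF-Token': api_token})
    resp.raise_for_status()
    return resp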

Describe `Program` validation strategy

This should probably go in the wiki as a design doc of some sort. Some discussion with @hibes surfaced the fact that we haven't done a good job communicating some of the design decisions made in the project.

This document should describe the goals we have in mind for SignalFlow program validation so that we can provide great error messages to users before they ever hit the SignalFx API.

This document may also include a strategy for implementing future validations.

POC move to CircleCI

What would it take to move to CircleCI for CI/releases?

Some things we want:

  • Public record of builds
  • Automated release on tag, from master to PyPI
  • PR integration
  • Some kind of secrets management (e.g. for releases to PyPI)
  • ???

Update method signatures

The following methods should have their signatures updated to match upstream:

  • bottom(count=None, percentage=None, by=None)
  • Count -> split into count by and count over
  • Delta -> no args
  • Dimensions not supported
  • ewma support over
  • double_ewma
  • Remove map support
  • Max -> split into max by and max over
  • Mean -> split into mean by and mean over
  • mean_plus_stddev(self, by=None, over=None) -> by and over as separate functions; accept stdev keyword arg
  • Median -> split into median by and median over
  • Min -> split in two
  • Percentile -> split in two and add percentile keyword arg
  • Pow -> add zero kwarg variant
  • Promote sig needs to change drastically
    • promote(property), promote([property, ...]), promote(by=None), promote(property, ...)
  • Random -> add count kwarg
  • Size -> split into by and over functions
  • Stdev -> split into by and over functions
  • Sum -> split into by and over functions
  • Timeshift -> remove kwarg
  • Top -> count, percentage, by
  • Variance -> split into by and over functions

Needs a ticket:

  • Fill not supported
  • Integrate not supported
  • Kpss not supported
  • Rateofchange is new

Docs:
https://developers.signalfx.com/reference#signalflow-stream-methods-1

Publish API docs

We should have these hosted somewhere, readthedocs is a likely candidate.

API docs should be published along with each release, so this'll require changes to the Travis deployment. Unclear on the best approach at this time!

Usability: FilterVariable alias should default to property name

This suggestion would be a very minor improvement but often the alias for a FilterVariable just ends up being the same as the property name so it would be nice if that was used as the default.

E.g.

Dashboard() \
    .with_name(name) \
    .with_charts(*charts) \
    .with_filters(
        DashboardFilters()
        .with_variables(
            FilterVariable()
            .with_property("aws_account_id")
            .with_alias("aws_account_id")
            .with_value(aws_account_id)))

Fix Travis deploy to PyPI

Our credentials don't seem to be working, needs more investigation. Likely need to just blow away Travis config and start again.

Support setting layout of charts

There doesn't seem to be any way to define layout for charts, e.g. width, height, which row has which chart. Did I miss it? Or is it missing?

Using `Program` and attempting to `--dry-run` throws an exception

When using a signal_analog.flow.Program one would expect that --dry-run would spit out the configuration for whatever resource we're creating. Instead we're treated with a nasty error:

TypeError: Object of type 'Program' is not JSON serializable

This shouldn't happen!

A good approach is going to be to make sure that whenever we call dry-run all resources have been properly serialized (e.g. Program should be turned into a string before printing)
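
A minimal sketch of that approach (a hypothetical helper, not the library's actual code): json.dumps accepts a default callable applied to anything it cannot serialize, so falling back to str() turns a Program into its SignalFlow text before printing:

import json

def dry_run_payload(options):
    # Fall back to str() for anything JSON can't serialize (e.g. Program).
    return json.dumps(options, default=str, indent=2)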

Document major version bump strategy

We should have a clearly defined strategy for upgrading major versions of the library. A few things to keep in mind:

  • Migrating between major versions
  • Release behavior
  • Determine what is supported (e.g. how long will the previous major version be supported?)

Refactor `is_valid` to better describe intent

signal_analog.util.is_valid implies the return of a boolean, but we're actually using it to raise an error if pre-conditions for the callee haven't been met. That's kind of confusing, so let's consider changing it:

check_valid and assert_valid have been suggested as names that would better convey intent. If there isn't any loud opposition in this ticket I'm in favor of assert_valid.
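
For illustration, the behavior the issue describes (the signature and the exact error type raised are assumed, since neither is spelled out here):

from signal_analog import util

util.is_valid('foo')  # passes silently; does not return True
util.is_valid(None)   # raises an error rather than returning False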

Usability: with_axes() would be better as two methods: with_left_axis(), with_right_axis()

Axis configuration on charts currently models the SignalFx REST API but it would be more developer friendly if it was changed.

Current API is a with_axes() method that takes an array with 1 or 2 AxisOptions, there is also a with_axis_precision() method that could move:

AxisOption(min=None, max=None, label='', high_watermark=None, low_watermark=None)

.with_axes([AxisOption(label="Count", min=0), AxisOption(label="Latency", min=0)])

.with_axis_precision(num)

Suggested API on charts:

.with_left_axis(label="", min=None, max=None, low_watermark=None, high_watermark=None, axis_precision=None)
.with_right_axis(label="", min=None, max=None, low_watermark=None, high_watermark=None, axis_precision=None)

(also, I think axis_precision is only currently supported for the left axis, above is suggested for both)

Dashboard FilterVariable option with_preferred_suggestions() doesn't seem to work

This is low priority because it isn't a super important feature but when I tried using it, this feature didn't seem to work.

E.g.

Dashboard() \
    .with_name(name) \
    .with_charts(*charts) \
    .with_filters(
        DashboardFilters()
        .with_variables(
            FilterVariable()
            .with_property("aws_account_id")
            .with_alias("aws_account_id")
            .with_value(aws_account_id)
            .with_preferred_suggestions([aws_account_id])
            .with_apply_if_exists(True)))

Support new `publishLabelOptions` config options

Values that aren't currently supported:

  • valuePrefix
  • valueSuffix
  • valueUnit

Documentation is here:
https://developers.signalfx.com/v2/reference#create-chart

PublishLabelOptions is defined here:

class PublishLabelOptions(ChartOption):
    """Options for displaying published timeseries data."""

    def __init__(self, label, y_axis, palette_index, plot_type, display_name):
        """Initializes and validates publish label options.

        Arguments:
            label: label used in the publish statement that displays the plot
            y_axis: the y-axis associated with values for this plot.
                Must be 0 (left) or 1 (right).
            palette_index: the indexed palette color to use for all plot lines
            plot_type: the visualization style to use
            display_name: an alternate name to show in the legend
        """
        for arg in [label, display_name]:
            util.assert_valid(arg)
        util.in_given_enum(palette_index, PaletteColor)
        util.in_given_enum(plot_type, PlotType)
        if y_axis not in [0, 1]:
            msg = "YAxis for chart must be 0 (Left) or 1 (Right); " +\
                  "'{0}' provided."
            raise ValueError(msg.format(y_axis))
        self.opts = {
            'label': label,
            'yAxis': y_axis,
            'paletteIndex': palette_index.value,
            'plotType': plot_type.value,
            'displayName': display_name
        }

The work involved is likely just a matter of adding the new keywords to the options map for this class and updating documentation accordingly.
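
A hedged sketch of that change (the new keyword names come from the SignalFx API docs linked above; everything else mirrors the class as quoted):

class PublishLabelOptions(ChartOption):
    """Options for displaying published timeseries data."""

    def __init__(self, label, y_axis, palette_index, plot_type, display_name,
                 value_prefix=None, value_suffix=None, value_unit=None):
        for arg in [label, display_name]:
            util.assert_valid(arg)
        util.in_given_enum(palette_index, PaletteColor)
        util.in_given_enum(plot_type, PlotType)
        if y_axis not in [0, 1]:
            msg = "YAxis for chart must be 0 (Left) or 1 (Right); " +\
                  "'{0}' provided."
            raise ValueError(msg.format(y_axis))
        self.opts = {
            'label': label,
            'yAxis': y_axis,
            'paletteIndex': palette_index.value,
            'plotType': plot_type.value,
            'displayName': display_name,
            # New pass-through options from the upstream API:
            'valuePrefix': value_prefix,
            'valueSuffix': value_suffix,
            'valueUnit': value_unit
        }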

Add automated deploy to CircleCI build

We currently have tests running in the build. To support this it looks like we'd want to:

  • Only release on tags
  • Re-use the cache from the build step
  • Publish to PyPI using securely stored credentials in CircleCI
  • Publish a notification to Slack, if possible with our org configuration

Usability: It would be nice if the PublishLabelOptions could be folded into the Plot class

PublishLabelOptions would make more sense as part of the Plot class. It would make the API more closely model the SignalFx UI (vs. modeling the less user friendly Sfx REST API).

Currently, PublishLabelOptions looks like this but all of those options are more related to the Plot they are modifying.

TimeSeriesChart() \
    .with_name("Lambda " + function_name + " Invocations") \
    .with_description(description) \
    .with_default_plot_type(PlotType.column_chart) \
    .with_chart_legend_options("sf_metric", show_legend=True) \
    .with_publish_label_options(
        PublishLabelOptions(
            label='Invocations',
            palette_index=PaletteColor.green
        ),
        PublishLabelOptions(
            label='Duration',
            palette_index=PaletteColor.gray,
            y_axis=1,
            plot_type=PlotType.line_chart,
            value_unit='Millisecond'
        )
    ) \
    .with_axes([AxisOption(label="Count", min=0),
                AxisOption(label="Latency", min=0)]) \
    .with_program(
        Program(
            Plot(
                assigned_name="A",
                signal_name="Invocations",
                filter=And(
                    Filter("FunctionName", function_name),
                    Filter("namespace", "AWS/Lambda"),
                    Filter("stat", "sum")
                ),
                rollup=RollupType.sum,
                fx=[Sum(by=["aws_account_id", "FunctionName"])],
                label="Invocations"),
            Plot(
                assigned_name="B",
                signal_name="Duration",
                filter=And(
                    Filter("FunctionName", function_name),
                    Filter("namespace", "AWS/Lambda"),
                    Filter("stat", "mean")
                ),
                rollup=RollupType.max,
                fx=[Sum(by=["aws_account_id", "FunctionName"])],
                label="Duration")
        )
    )

Axis high/low watermark labels missing in API

This seems like a very low priority feature but I did notice that the label option for both the high and low watermarks for Axis options is not currently supported in signal_analog.

Deleting an existing detector and adding a new detector in local config should delete the existing detector in SFx

Scenario:

  1. Created a new detector along with a Dashboard.
  2. Changed that detector to a different one and ran a dashboard update.

User expectation: New detector will be created and the old one will be deleted as it happens with our dashboards and charts.
Actual result: The new detector is created but the old one still stays.

We need to change this behavior to mirror what we do with dashboards and charts. This can be done by following a process similar to the one we already use for those resources.

New release request

Hi there,

We are currently using signal_analog and have received a security alert regarding a vulnerability (SNYK-PYTHON-DNSPYTHON-6241713) found in one of its components, specifically an older version of the email_validator package. This vulnerability has been addressed in the newer version 2.1.1 of email_validator.

We are keen to address this issue promptly. Do you have any plans to make a new release of signal_analog that includes the upgraded version of email_validator? If so, we kindly request that you upgrade the email_validator version in the next release to ensure the security of our application.

Thank you for your attention to this matter.

Thank you :)
