

Beats dashboards

This repository contains sample Kibana 4 dashboards for visualizing the data gathered by the Elastic Beats.

Installation

To load the dashboards, execute the script pointing to the Elasticsearch HTTP URL:

    # Unix
    ./load.sh -url "http://localhost:9200"

    # Windows
    .\load.ps1 -url "http://localhost:9200"

If you want to use HTTP authentication for Elasticsearch, you can specify the credentials as a second parameter:

    # Unix
    ./load.sh -url "http://localhost:9200" -user "admin:secret"

    # Windows
    .\load.ps1 -url "http://localhost:9200" -user "admin:secret"

Technical details

The dashboards folder contains the JSON files as exported from Kibana using the simple Python tool in the save directory. The loader is a plain shell script, so you don't need Python installed to load the dashboards.
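Conceptually, the loader walks the JSON files and derives the saved-object type and ID from each file's path, then PUTs the file to Elasticsearch. A minimal Python sketch of that URL construction, assuming a dashboards/&lt;type&gt;/&lt;Name&gt;.json layout (the function name is hypothetical, and the real load.sh may differ in details):

```python
import os

def target_url(es_url, kibana_index, json_path):
    # Derive the saved-object type from the parent directory and the
    # document ID from the file name, then build the PUT target URL.
    doc_type = os.path.basename(os.path.dirname(json_path))
    doc_id = os.path.basename(json_path)
    if doc_id.endswith(".json"):
        doc_id = doc_id[:-len(".json")]
    return "%s/%s/%s/%s" % (es_url, kibana_index, doc_type, doc_id)

print(target_url("http://localhost:9200", ".kibana",
                 "dashboards/search/Cache-transactions.json"))
# -> http://localhost:9200/.kibana/search/Cache-transactions
```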

Create a new dashboard

If you added support for a new protocol in Packetbeat or a new module in Metricbeat, it would be nice to create a dedicated Kibana dashboard to visualize your data. Kibana dashboards are saved in a special index in Elasticsearch. By default this is .kibana, but it can be configured to any other name.

The first step in creating your own Kibana dashboard is to load a fresh installation of the sample dashboards, visualizations, searches, and index patterns, which you can use as a starting point. Use the load.sh script on Unix or load.ps1 on Windows to load them into Kibana; the usage of these scripts is described above.

Note: Make sure you are using the latest Kibana version to create and download the dashboards.

Then, you can create the dashboard together with the necessary visualizations and searches in Kibana. After the dashboard is ready, you can download all the dashboards using the save/kibana_dump.py script.

Before executing the save/kibana_dump.py script, make sure you have Python and virtualenv installed:

    # Prepare the environment
    virtualenv env
    . env/bin/activate
    pip install -r requirements.txt

    # go to save directory
    cd save

    # Download all Kibana dashboards to your host
    python kibana_dump.py --url 'http://localhost:9200' --dir output

where url points to the Elasticsearch URL, and dir is the directory where you want to save the Kibana dashboards.

Finally, copy the related dashboards, visualizations, searches, and, if applicable, index patterns to the dashboards directory, and send us a pull request.

Screenshots

Packetbeat Statistics, MySQL performance, Thrift performance, Windows Event Log Statistics, NFS traffic Statistics

beats-dashboards's People

Contributors

andrewkroh, borian, chayim, dovicn, joshboon, jpcarey, karolisl, kofemann, lmangani, monicasarbu, radoondas, ruflin, tsg


beats-dashboards's Issues

Ability to choose which dashboards to import for Kibana

Use case here is that if I run only Topbeat, I don't want to import all the other Beats' dashboards into my Kibana.
I think this can be taken into consideration in the next generation of dashboards when they are (for example) merged into beats itself.

I have an idea of how to do this with the current setup; it would require only small changes to the structure of the dashboards and to the load script itself. If there is interest in the enhancement, I can prepare a PR.

Is it worth doing? Or am I the only one with this 'issue'?

load.sh and load.ps1 nits

load.sh

Usage example comments reflect the old positional command-line parameters, not the new named parameters:

# Usage examples:
# env KIBANA_INDEX='.kibana_env1' ./load.sh
# ./load.sh http://test.com:9200
# ./load.sh http://test.com:9200 test

load.sh and load.ps1

Usage text mentions "dashboards, visualizations, and index patterns", but not searches.

Typo in the usage text for the -url option: "Elasticseacrh" (sic).

I know, I'm a pedant 😛.

Update Packetbeat related screenshots

The Navigation widget changed when the Topbeat dashboard was added to the picture. As the navigation widget is available in all Packetbeat dashboards, it would be nice to redo the screenshots.

Variable 'name' in script is in lowercase in load.sh

Hi,
this would be a really small update to the script. Since all the other variables are uppercase, I think it would be logical to rename this one as well.

name=`basename $file .json`

I have changed it already in my fork, I can create PR if needed.

load.ps1 not working with -user flag

PS C:\Users\vagrant\beats-dashboards> .\load.ps1 -url "https://localhost:9243" -user "test:test"
Loading dashboards to https://localhost:9243 in .kibana using Invoke-RestMethod -Headers '@{Authorization=("Basic dGVzd
Dp0ZXN0")}':
Loading search Cache-transactions:
& : The term 'Invoke-RestMethod -Headers '@{Authorization=("Basic dGVzdDp0ZXN0")}'' is not recognized as the name of a
cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify
that the path is correct and try again.
At C:\Users\vagrant\beats-dashboards\load.ps1:82 char:4
+   &$CURL -Uri "$ELASTICSEARCH/$KIBANA_INDEX/search/$name" -Method PUT -Body $(Ge ...
+    ~~~~~
    + CategoryInfo          : ObjectNotFound: (Invoke-RestMeth...GVzdDp0ZXN0")}':String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

@dovicn Would you mind taking a look at the PS script? Thanks!
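For reference, the Authorization header value in the error output is the standard HTTP Basic auth encoding of the credentials: "Basic " plus the Base64 of "user:password". A minimal Python sketch (the helper name is hypothetical):

```python
import base64

def basic_auth_header(user_and_pass):
    # HTTP Basic auth: "Basic " + base64("user:password")
    token = base64.b64encode(user_and_pass.encode("utf-8")).decode("ascii")
    return "Basic " + token

print(basic_auth_header("test:test"))  # -> Basic dGVzdDp0ZXN0
```

Note that this matches the dGVzdDp0ZXN0 value in the error above; the PowerShell failure itself is because the whole `Invoke-RestMethod -Headers ...` string is being invoked as one command name rather than as a cmdlet plus arguments.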

{"error":"NullPointerException[null]","status":500}

[root@localhost:packetbeat-dashboards]#./load.sh -url http://localhost:9200
Loading dashboards to http://localhost:9200 in .kibana using curl:
Loading search Cache-transactions:
{"error":"NullPointerException[null]","status":500}
Loading search DB-transactions:
{"error":"NullPointerException[null]","status":500}
Loading search Default-Search:
{"error":"NullPointerException[null]","status":500}
Loading search Errors:
{"error":"NullPointerException[null]","status":500}
Loading search Filesystem-stats:
{"error":"NullPointerException[null]","status":500}
Loading search HTTP-errors:
{"error":"NullPointerException[null]","status":500}
Loading search MongoDB-errors:
{"error":"NullPointerException[null]","status":500}
Loading search MongoDB-transactions:
{"error":"NullPointerException[null]","status":500}
Loading search MongoDB-transactions-with-write-concern-0:
{"error":"NullPointerException[null]","status":500}
Loading search MySQL-errors:
{"error":"NullPointerException[null]","status":500}
Loading search MySQL-Transactions:
{"error":"NullPointerException[null]","status":500}

Why does this happen, and how can it be solved?

Add filebeat index pattern

The libbeat getting started guide says that the [filebeat-]YYYY.MM.DD index will be created by load.sh but it doesn't exist in this project.

Configurable Index Pattern

Hi,

Would it be possible to make the index-pattern configurable?

I store Topbeat data with an index pattern TOKEN_YYYY-DD-MM, where TOKEN is a prefix for each client/customer/system. In my case, Topbeat logs are shipped with another log shipper, which can insert arbitrary JSON logs into ES.

What needs to be changed to import the dashboards?
Just change title in dashboards/topbeat.json?

Thanks
Stefan
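Changing the title in the exported JSON is roughly what such a change involves. A minimal, hypothetical Python sketch, assuming the pattern is stored in a top-level "title" key as in exported index-pattern documents (real dashboard files may also embed the pattern inside nested searchSourceJSON strings):

```python
import json

def retitle_index_pattern(doc_json, new_pattern):
    # Replace the index pattern stored in an exported saved-object JSON.
    doc = json.loads(doc_json)
    doc["title"] = new_pattern
    return json.dumps(doc)

src = '{"title": "topbeat-*", "timeFieldName": "timestamp"}'
print(retitle_index_pattern(src, "TOKEN_*"))
```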

Pass username:password to the load script

The load script doesn't accept the password together with the username for connecting to Elasticsearch.
Passing username:password in the user argument doesn't work:

./load.sh -user 'admin:xxxxxx' -url 'https://xyz.aws.found.io:9243'

But the following command works:

./load.sh https://admin:xxxxxx@xyz.aws.found.io:9243

Zero or negative time interval not supported

Using the dashboards 1.0.0-beta with Kibana 4.0.2 and Elasticsearch 1.5.2, I get these errors several times whenever any dashboard is accessed.

[2015-06-08 10:39:44,766][DEBUG][action.search.type ] [Phil Urich] All shards failed for phase: [query]
org.elasticsearch.search.SearchParseException: [packetbeat-2015.06.08][0]: from[-1],size[0]: Parse Failure [Failed to parse source [{"size":0,"aggs":{"2":{"date_histogram":{"field":"timestamp","interval":"0ms","pre_zone":"-04:00","pre_zone_adjust_large_interval":true,"min_doc_count":1,"extended_bounds":{"min":1433774084563,"max":1433774384563}},"aggs":{"3":{"histogram":{"field":"responsetime","interval":10},"aggs":{"1":{"sum":{"field":"count"}}}}}}},"highlight":{"pre_tags":["@kibana-highlighted-field@"],"post_tags":["@/kibana-highlighted-field@"],"fields":{"":{}}},"query":{"filtered":{"query":{"match_all":{}},"filter":{"bool":{"must":[{"query":{"query_string":{"analyze_wildcard":true,"query":""}}},{"range":{"@timestamp":{"gte":1433774084566,"lte":1433774384566}}}],"must_not":[]}}}}}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:721)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:557)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:529)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:291)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: Zero or negative time interval not supported
at org.elasticsearch.common.rounding.TimeZoneRounding$Builder.&lt;init&gt;(TimeZoneRounding.java:68)
at org.elasticsearch.common.rounding.TimeZoneRounding.builder(TimeZoneRounding.java:42)
at org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramParser.parse(DateHistogramParser.java:205)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:130)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:77)
at org.elasticsearch.search.aggregations.AggregationParseElement.parse(AggregationParseElement.java:60)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:705)
... 9 more

[2015-06-08 10:39:44,762][DEBUG][action.search.type ] [Phil Urich] All shards failed for phase: [query]
org.elasticsearch.search.SearchParseException: [packetbeat-2015.06.08][0]: query[ConstantScore(BooleanFilter(+QueryWrapperFilter(ConstantScore(:)) +cache(@timestamp:[1433774084566 TO 1433774384566])))],from[-1],size[0]: Parse Failure [Failed to parse source [{"highlight":{"pre_tags":["@kibana-highlighted-field@"],"post_tags":["@/kibana-highlighted-field@"],"fields":{"":{}}},"query":{"filtered":{"query":{"query_string":{"analyze_wildcard":true,"query":""}},"filter":{"bool":{"must":[{"query":{"query_string":{"analyze_wildcard":true,"query":"*"}}},{"range":{"@timestamp":{"gte":1433774084566,"lte":1433774384566}}}],"must_not":[]}}}},"size":0,"aggs":{"2":{"date_histogram":{"field":"timestamp","interval":"0ms","pre_zone":"-04:00","pre_zone_adjust_large_interval":true,"min_doc_count":1,"extended_bounds":{"min":1433774084560,"max":1433774384561}},"aggs":{"1":{"percentiles":{"field":"responsetime","percents":[75,95,99]}}}}}}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:721)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:557)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:529)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:291)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: Zero or negative time interval not supported
at org.elasticsearch.common.rounding.TimeZoneRounding$Builder.&lt;init&gt;(TimeZoneRounding.java:68)
at org.elasticsearch.common.rounding.TimeZoneRounding.builder(TimeZoneRounding.java:42)
at org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramParser.parse(DateHistogramParser.java:205)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:130)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:77)
at org.elasticsearch.search.aggregations.AggregationParseElement.parse(AggregationParseElement.java:60)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:705)
... 9 more

[2015-06-08 10:39:44,764][DEBUG][action.search.type ] [Phil Urich] All shards failed for phase: [query]
org.elasticsearch.search.SearchParseException: [packetbeat-2015.06.08][0]: from[-1],size[0]: Parse Failure [Failed to parse source [{"size":0,"aggs":{"2":{"date_histogram":{"field":"timestamp","interval":"0ms","pre_zone":"-04:00","pre_zone_adjust_large_interval":true,"min_doc_count":1,"extended_bounds":{"min":1433774084563,"max":1433774384563}},"aggs":{"3":{"terms":{"field":"type","size":5,"order":{"1":"desc"}},"aggs":{"1":{"sum":{"field":"count"}}}}}}},"highlight":{"pre_tags":["@kibana-highlighted-field@"],"post_tags":["@/kibana-highlighted-field@"],"fields":{"":{}}},"query":{"filtered":{"query":{"query_string":{"query":"type: mysql or type: pgsql","analyze_wildcard":true}},"filter":{"bool":{"must":[{"query":{"query_string":{"analyze_wildcard":true,"query":""}}},{"range":{"@timestamp":{"gte":1433774084566,"lte":1433774384566}}}],"must_not":[]}}}}}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:721)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:557)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:529)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:291)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: Zero or negative time interval not supported
at org.elasticsearch.common.rounding.TimeZoneRounding$Builder.&lt;init&gt;(TimeZoneRounding.java:68)
at org.elasticsearch.common.rounding.TimeZoneRounding.builder(TimeZoneRounding.java:42)
at org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramParser.parse(DateHistogramParser.java:205)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:130)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:77)
at org.elasticsearch.search.aggregations.AggregationParseElement.parse(AggregationParseElement.java:60)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:705)
... 9 more

Unanalyzed Index for beats.hostname?

Hello,

I heavily utilize AWS, which leaves me with hostnames like ip-172-31-10-10; these are then analyzed as "ip", "172", "31", ...

Would it be acceptable for me to add in an additional unanalyzed field e.g.: beats.hostname-unanalyzed?

load.sh password prompting with basic auth

Enhance documentation and usage info with how to use the script against an ES server with Shield + basic auth.

If you just run load.sh -url https://xyz.found.io -u admin, you will be prompted for your password on every curl request, which isn't practical.

There are a few ways to pass in the password just once.

  • load.sh -url https://xyz.found.io -u admin:password
  • load.sh -url https://xyz.found.io -u admin:$(cat ~/pass-file) (use file to not pollute bash history with password)
  • load.sh -url https://user:pass@xyz.found.io (just put the user and pass in the URL)
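For the third option, curl itself parses the credentials out of the URL. A minimal Python sketch of that parsing, using hypothetical placeholder credentials, just to illustrate what curl does with the embedded user:pass:

```python
from urllib.parse import urlsplit

def split_credentials(url):
    # Pull the embedded user, password, and host out of a URL,
    # the same parsing curl performs on https://user:pass@host URLs.
    parts = urlsplit(url)
    return parts.username, parts.password, parts.hostname

print(split_credentials("https://admin:secret@xyz.found.io:9243"))
# -> ('admin', 'secret', 'xyz.found.io')
```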

Update Kibana3 dashboard to the new template

The schema we use has changed s/@timestamp/timestamp, s/request_raw/request, etc.

We have updated the K4 dashboards, but the K3 dashboards, which can still be used for getting the topology map, are not yet updated.

beats-dashboards license?

In brief:

What type of software license applies to this elastic/beats-dashboard project? (Apache 2, like beats?) Could you please consider adding a license file to this project?

In detail:

I want to distribute Kibana dashboards - with their related visualization and search definitions - in a GitHub project.

In my GitHub project, I would like to include copies of the load.sh and load.ps1 files that are provided in beats-dashboards. In the absence of built-in support in Kibana for exporting and loading sets of related definitions - I'm familiar with the "per object type" export buttons on the Kibana 4.1 Settings > Objects page - these scripts look like a good option. (Nice one, thank you!)

However, I do not see a license specified for beats-dashboards. There's no license file in the root directory, and no mention of a license in the readme.

It occurs to me that you might have deliberately omitted specifying a license for this project because you consider it to be a "sub-project" of elastic/beats, which specifies the Apache 2 license. But I didn't feel comfortable making that assumption. I thought it more polite, and more prudent, to ask you directly: what is the beats-dashboards license? Would you consider adding a license file to this project? If it's Apache 2 or similar, then I can proceed with a clear conscience to redistribute those load scripts in my own project, according to the license conditions, such as including a copy of the original license citing Elasticsearch as the copyright owner.

FYI: my recent answer to a related question on Stack Overflow.

Fix proc.cpu.user_p value in the top processes widget

The proc.cpu.user_p is empty in the Top processes widget of the Topbeat dashboard when the latest stable versions of Elasticsearch and Kibana are used.
Topbeat exports the right value of proc.cpu.user_p in the JSON object.

Update demo

Update demo with the latest version of all Kibana dashboards.

Nav widget issue with Kibana alpha5

While testing 1.3 with Kibana alpha5, I noticed that the Markdown titles in the Nav widget are not getting rendered:

[screenshot: screen shot 2016-08-22 at 14 16 23]

The reason seems to be a missing space between the heading markers and the titles, i.e. we have:

###Packetbeat:

[Dashboard](/#/dashboard/Packetbeat-Dashboard)

[Web transactions](/#/dashboard/HTTP)

and should have:

### Packetbeat:

[Dashboard](/#/dashboard/Packetbeat-Dashboard)

[Web transactions](/#/dashboard/HTTP)
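A fix could mechanically insert the missing space into the exported dashboard JSON. A minimal, hypothetical Python sketch:

```python
import re

def fix_heading_spaces(markdown):
    # Insert the missing space between ATX heading markers (# .. ######)
    # and the title text, which stricter Markdown renderers require.
    return re.sub(r"^(#{1,6})([^#\s])", r"\1 \2", markdown, flags=re.MULTILINE)

print(fix_heading_spaces("###Packetbeat:"))  # -> ### Packetbeat:
```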

No Proc or mem stats available in topbeat dashboard

Hi
These dashboards are great but some aren't displaying correctly.

My Setup:
Topbeat-1.0.0
Elastic 2.0
Kibana 4.2.0

The CPU graphs are blank. If I look at the visualization and the associated saved search, they all return blank, even though there are results available in the Discover search with no filters enabled.

I've modified the search query for "Proc Stats" and "Processes" from "type: proc" to "type: proc*"

I'm new to ELK stack, so this might not be the most optimal solution.

Regards
Shaun

load.sh: 48: printf: Illegal option -v

On Ubuntu 14.04 running

./load.sh http://localhost:9200

Gives the following error(s):

Loading dashboard Topbeat-Dashboard:
{"_index":".kibana","_type":"dashboard","_id":"Topbeat-Dashboard","_version":1,"created":true}
./load.sh: 48: printf: Illegal option -v
Loading index pattern :
No handler found for uri [/.kibana/index-pattern/] and method [PUT]
./load.sh: 48: printf: Illegal option -v

First line should probably be: #!/bin/bash.

@see http://stackoverflow.com/questions/28302474/shellscript-throws-errors-but-somehow-is-still-working

Workaround:

bash ./load.sh http://localhost:9200

Update all packetbeat dashboards

Create a new search for the packetbeat data and update all packetbeat dashboards to use the search.
This fix is needed as the Default Search is not set to the packetbeat data anymore: #51

load.sh fails in master

load.sh fails to load in master. Here are the logs:

Loading dashboard Winlogbeat-Dashboard:
{"_index":".kibana","_type":"dashboard","_id":"Winlogbeat-Dashboard","_version":4,"created":false}
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
Loading index pattern :
No handler found for uri [/.kibana/index-pattern/] and method [PUT]
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
Loading index pattern :
No handler found for uri [/.kibana/index-pattern/] and method [PUT]
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
Loading index pattern :
No handler found for uri [/.kibana/index-pattern/] and method [PUT]
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
Loading index pattern :
No handler found for uri [/.kibana/index-pattern/] and method [PUT]

In Kibana proc.cpu.user_p is null

proc.cpu.user_p is of type percentage. Even though the value stored in ES seems to be OK, Kibana shows an empty value.
[screenshot: screen shot 2015-09-13 at 11 02 31 pm]

Using ES version 1.7.0 and Kibana 4.1.1

MongoDB performance dashboards errors

Hi,

I use Beats 1.1.1 (with ES 2.2 and Kibana 4.4.1). When I import beats-dashboards 1.1.1, go to the Kibana web UI -> Settings tab -> Indices tab -> Index Patterns, and click on "packetbeat-*", it warns:

Mapping conflict! A field is defined as several types (string, integer, etc) across the indices that match this pattern. You may still be able to use these conflict fields in parts of Kibana, but they will be unavailable for functions that require Kibana to know their type. Correcting this issue will require reindexing your data.

So when I use Packetbeat to monitor MongoDB and open the "MongoDB performance" dashboard, 3 dashboards are broken (the remaining dashboards still work well) with the error:

Could not locate that index-pattern-field (id: resource)

I suspect the reason is that the Kibana index pattern of the "MongoDB Performance" dashboard is an old version, so it has not yet been updated to be compatible with the Packetbeat index template in Elasticsearch?

Best Regards,
VietNC

beats-dashboards should not use dynamic typing

There's a problem when you load beats-dashboards BEFORE saving other items from Kibana. I first noticed the problem when I tried to use the Kibana Timelion plugin and got an error:

{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"mapper [hits] cannot be changed
from type [long] to [int]"}],"type":"illegal_argument_exception","reason":"mapper [hits] cannot be changed
from type [long] to [int]"},"status":400}

This is because when beats-dashboards were loaded, hits was saved as a long, but Kibana and [all?] plugins use an 'int' for hits.
@rashidkpc please correct me if I got this wrong.

  • Kibana could set the types when it first starts up to avoid this, but Rashid says that's front end code that runs in the browser so would be hard to do in a very automated way.
  • Kibana could provide an API for loading all of this (future).
  • beats-dashboards could set "hits" to be "int"s (and statically type all other fields?).
  • The easiest thing right now might be to document NOT to install the beats-dashboards until AFTER the user has done a few things in Kibana to set hits as int. Like saving a Saved Search.
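For the third bullet, one hedged sketch of what statically typing the hits field could look like, as an explicit mapping applied to the Kibana index before loading (the type and field names here are assumptions based on the error above, not the actual Kibana mapping, and the same property would need to be set on each saved-object type that carries hits):

```json
{
  "mappings": {
    "dashboard": {
      "properties": {
        "hits": { "type": "integer" }
      }
    }
  }
}
```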

No dashboards actually show up after running script

I am using AWS managed service and I get:

$ ./load.sh -url "http://search-abc-123.us-themoon-99.es.amazonaws.com"

{"Message":"User: anonymous is not authorized to perform: es:ESHttpPut on resource: arn:aws:es:us-west-2:1235573471613174:domain/windows-apache-cluster/.kibana/visualization/PgSQL-Errors"}

So it is not using AWS signed requests? Any way you guys could maybe add that feature?

Even if I set an open domain policy, I get "created": false, so nothing gets created...

Loading index pattern filebeat-*:
{"_index":".kibana","_type":"index-pattern","_id":"filebeat-*","_version":3,"created":false}
Loading index pattern packetbeat-*:
{"_index":".kibana","_type":"index-pattern","_id":"packetbeat-*","_version":3,"created":false}
Loading index pattern topbeat-*:
{"_index":".kibana","_type":"index-pattern","_id":"topbeat-*","_version":3,"created":false}
Loading index pattern winlogbeat-*:
{"_index":".kibana","_type":"index-pattern","_id":"winlogbeat-*","_version":3,"created":false}

Allow for custom kibana index

Hi

By default the load.sh will upload dashboards to http://localhost:9200/.kibana.

In the case of kibana installations which use custom indices, it might be worthwhile making the index a variable within the script so that the dashboard can be uploaded to multiple kibana installs.

E.G:
./load.sh http://localhost:9200 .kibana_env1
./load.sh http://localhost:9200 .kibana_env2

It's not a major issue, more of a nice-to-have. For now, I'm editing load.sh, replacing .kibana with other names, and then loading.

Thanks
shaun
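The usage comments in load.sh already reference a KIBANA_INDEX environment variable; resolving the index that way is one approach. A minimal Python sketch of the lookup-with-default (illustrative only, since the script itself is shell):

```python
import os

def kibana_index():
    # Resolve the target Kibana index from the environment,
    # defaulting to .kibana.
    return os.environ.get("KIBANA_INDEX", ".kibana")

os.environ["KIBANA_INDEX"] = ".kibana_env1"
print(kibana_index())  # -> .kibana_env1
```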

problem in creating index patterns

Hi, thanks for your great work!
I have a problem after importing the dashboards with load.sh:
after the import, on the Kibana Settings -> Indices page there is no [filebeat-]YYYY.MM.DD format pattern on the left side; the available pattern is filebeat-* ...
The output when importing the files with load.sh is as below (everything seems to be normal):

./load.sh -url "http://localhost:9200"
Loading dashboards to http://localhost:9200 in .kibana
Loading search Cache-transactions:
{"_index":".kibana","_type":"search","_id":"Cache-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search DB-transactions:
{"_index":".kibana","_type":"search","_id":"DB-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search Default-Search:
{"_index":".kibana","_type":"search","_id":"Default-Search","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search Errors:
{"_index":".kibana","_type":"search","_id":"Errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search Filesystem-stats:
{"_index":".kibana","_type":"search","_id":"Filesystem-stats","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search HTTP-errors:
{"_index":".kibana","_type":"search","_id":"HTTP-errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search MongoDB-errors:
{"_index":".kibana","_type":"search","_id":"MongoDB-errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search MongoDB-transactions:
{"_index":".kibana","_type":"search","_id":"MongoDB-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search MongoDB-transactions-with-write-concern-0:
{"_index":".kibana","_type":"search","_id":"MongoDB-transactions-with-write-concern-0","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search MySQL-errors:
{"_index":".kibana","_type":"search","_id":"MySQL-errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search MySQL-Transactions:
{"_index":".kibana","_type":"search","_id":"MySQL-Transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search Packetbeat-Search:
{"_index":".kibana","_type":"search","_id":"Packetbeat-Search","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search PgSQL-errors:
{"_index":".kibana","_type":"search","_id":"PgSQL-errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search PgSQL-transactions:
{"_index":".kibana","_type":"search","_id":"PgSQL-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search Processes:
{"_index":".kibana","_type":"search","_id":"Processes","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search Proc-stats:
{"_index":".kibana","_type":"search","_id":"Proc-stats","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search RPC-transactions:
{"_index":".kibana","_type":"search","_id":"RPC-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search System-stats:
{"_index":".kibana","_type":"search","_id":"System-stats","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search System-wide:
{"_index":".kibana","_type":"search","_id":"System-wide","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search Thrift-errors:
{"_index":".kibana","_type":"search","_id":"Thrift-errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search Thrift-transactions:
{"_index":".kibana","_type":"search","_id":"Thrift-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search Web-transactions:
{"_index":".kibana","_type":"search","_id":"Web-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading search Winlogbeat-Search:
{"_index":".kibana","_type":"search","_id":"Winlogbeat-Search","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Average-system-load-across-all-systems:
{"_index":".kibana","_type":"visualization","_id":"Average-system-load-across-all-systems","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Cache-transactions:
{"_index":".kibana","_type":"visualization","_id":"Cache-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Client-locations:
{"_index":".kibana","_type":"visualization","_id":"Client-locations","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization CPU-usage:
{"_index":".kibana","_type":"visualization","_id":"CPU-usage","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization CPU-usage-per-process:
{"_index":".kibana","_type":"visualization","_id":"CPU-usage-per-process","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization DB-transactions:
{"_index":".kibana","_type":"visualization","_id":"DB-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Disk-usage:
{"_index":".kibana","_type":"visualization","_id":"Disk-usage","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Disk-usage-overview:
{"_index":".kibana","_type":"visualization","_id":"Disk-usage-overview","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Disk-utilization-over-time:
{"_index":".kibana","_type":"visualization","_id":"Disk-utilization-over-time","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Errors-count-over-time:
{"_index":".kibana","_type":"visualization","_id":"Errors-count-over-time","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Errors-vs-successful-transactions:
{"_index":".kibana","_type":"visualization","_id":"Errors-vs-successful-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Event-Levels:
{"_index":".kibana","_type":"visualization","_id":"Event-Levels","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Evolution-of-the-CPU-times-per-process:
{"_index":".kibana","_type":"visualization","_id":"Evolution-of-the-CPU-times-per-process","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization HTTP-codes-for-the-top-queries:
{"_index":".kibana","_type":"visualization","_id":"HTTP-codes-for-the-top-queries","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization HTTP-error-codes-evolution:
{"_index":".kibana","_type":"visualization","_id":"HTTP-error-codes-evolution","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization HTTP-error-codes:
{"_index":".kibana","_type":"visualization","_id":"HTTP-error-codes","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Latency-histogram:
{"_index":".kibana","_type":"visualization","_id":"Latency-histogram","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Levels:
{"_index":".kibana","_type":"visualization","_id":"Levels","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Memory-usage:
{"_index":".kibana","_type":"visualization","_id":"Memory-usage","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Memory-usage-per-process:
{"_index":".kibana","_type":"visualization","_id":"Memory-usage-per-process","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization MongoDB-commands:
{"_index":".kibana","_type":"visualization","_id":"MongoDB-commands","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization MongoDB-errors:
{"_index":".kibana","_type":"visualization","_id":"MongoDB-errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization MongoDB-errors-per-collection:
{"_index":".kibana","_type":"visualization","_id":"MongoDB-errors-per-collection","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization MongoDB-in-slash-out-throughput:
{"_index":".kibana","_type":"visualization","_id":"MongoDB-in-slash-out-throughput","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization MongoDB-response-times-and-count:
{"_index":".kibana","_type":"visualization","_id":"MongoDB-response-times-and-count","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization MongoDB-response-times-by-collection:
{"_index":".kibana","_type":"visualization","_id":"MongoDB-response-times-by-collection","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Most-frequent-MySQL-queries:
{"_index":".kibana","_type":"visualization","_id":"Most-frequent-MySQL-queries","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Most-frequent-PgSQL-queries:
{"_index":".kibana","_type":"visualization","_id":"Most-frequent-PgSQL-queries","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization MySQL-Errors:
{"_index":".kibana","_type":"visualization","_id":"MySQL-Errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization MySQL-Methods:
{"_index":".kibana","_type":"visualization","_id":"MySQL-Methods","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization MySQL-Reads-vs-Writes:
{"_index":".kibana","_type":"visualization","_id":"MySQL-Reads-vs-Writes","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Mysql-response-times-percentiles:
{"_index":".kibana","_type":"visualization","_id":"Mysql-response-times-percentiles","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization MySQL-throughput:
{"_index":".kibana","_type":"visualization","_id":"MySQL-throughput","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Navigation:
{"_index":".kibana","_type":"visualization","_id":"Navigation","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Number-of-Events:
{"_index":".kibana","_type":"visualization","_id":"Number-of-Events","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Number-of-Events-Over-Time-By-Event-Log:
{"_index":".kibana","_type":"visualization","_id":"Number-of-Events-Over-Time-By-Event-Log","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Number-of-MongoDB-transactions-with-writeConcern-w-equal-0:
{"_index":".kibana","_type":"visualization","_id":"Number-of-MongoDB-transactions-with-writeConcern-w-equal-0","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization PgSQL-Errors:
{"_index":".kibana","_type":"visualization","_id":"PgSQL-Errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization PgSQL-Methods:
{"_index":".kibana","_type":"visualization","_id":"PgSQL-Methods","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization PgSQL-Reads-vs-Writes:
{"_index":".kibana","_type":"visualization","_id":"PgSQL-Reads-vs-Writes","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization PgSQL-response-times-percentiles:
{"_index":".kibana","_type":"visualization","_id":"PgSQL-response-times-percentiles","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization PgSQL-throughput:
{"_index":".kibana","_type":"visualization","_id":"PgSQL-throughput","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Process-status:
{"_index":".kibana","_type":"visualization","_id":"Process-status","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Reads-versus-Writes:
{"_index":".kibana","_type":"visualization","_id":"Reads-versus-Writes","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Response-times-percentiles:
{"_index":".kibana","_type":"visualization","_id":"Response-times-percentiles","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Response-times-repartition:
{"_index":".kibana","_type":"visualization","_id":"Response-times-repartition","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization RPC-transactions:
{"_index":".kibana","_type":"visualization","_id":"RPC-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Servers:
{"_index":".kibana","_type":"visualization","_id":"Servers","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Slowest-MySQL-queries:
{"_index":".kibana","_type":"visualization","_id":"Slowest-MySQL-queries","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Slowest-PgSQL-queries:
{"_index":".kibana","_type":"visualization","_id":"Slowest-PgSQL-queries","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Slowest-Thrift-RPC-methods:
{"_index":".kibana","_type":"visualization","_id":"Slowest-Thrift-RPC-methods","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Sources:
{"_index":".kibana","_type":"visualization","_id":"Sources","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization System-load:
{"_index":".kibana","_type":"visualization","_id":"System-load","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Thrift-requests-per-minute:
{"_index":".kibana","_type":"visualization","_id":"Thrift-requests-per-minute","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Thrift-response-times-percentiles:
{"_index":".kibana","_type":"visualization","_id":"Thrift-response-times-percentiles","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Thrift-RPC-Errors:
{"_index":".kibana","_type":"visualization","_id":"Thrift-RPC-Errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Top-10-HTTP-requests:
{"_index":".kibana","_type":"visualization","_id":"Top-10-HTTP-requests","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Top-10-memory-consumers:
{"_index":".kibana","_type":"visualization","_id":"Top-10-memory-consumers","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Top-10-processes-by-total-CPU-usage:
{"_index":".kibana","_type":"visualization","_id":"Top-10-processes-by-total-CPU-usage","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Top-Event-IDs:
{"_index":".kibana","_type":"visualization","_id":"Top-Event-IDs","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Top-processes:
{"_index":".kibana","_type":"visualization","_id":"Top-processes","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Top-slowest-MongoDB-queries:
{"_index":".kibana","_type":"visualization","_id":"Top-slowest-MongoDB-queries","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Top-Thrift-RPC-calls-with-errors:
{"_index":".kibana","_type":"visualization","_id":"Top-Thrift-RPC-calls-with-errors","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Top-Thrift-RPC-methods:
{"_index":".kibana","_type":"visualization","_id":"Top-Thrift-RPC-methods","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Total-number-of-HTTP-transactions:
{"_index":".kibana","_type":"visualization","_id":"Total-number-of-HTTP-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Total-time-spent-in-each-MongoDB-collection:
{"_index":".kibana","_type":"visualization","_id":"Total-time-spent-in-each-MongoDB-collection","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading visualization Web-transactions:
{"_index":".kibana","_type":"visualization","_id":"Web-transactions","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading dashboard HTTP:
{"_index":".kibana","_type":"dashboard","_id":"HTTP","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading dashboard MongoDB-performance:
{"_index":".kibana","_type":"dashboard","_id":"MongoDB-performance","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading dashboard MySQL-performance:
{"_index":".kibana","_type":"dashboard","_id":"MySQL-performance","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading dashboard Packetbeat-Dashboard:
{"_index":".kibana","_type":"dashboard","_id":"Packetbeat-Dashboard","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading dashboard PgSQL-performance:
{"_index":".kibana","_type":"dashboard","_id":"PgSQL-performance","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading dashboard Thrift-performance:
{"_index":".kibana","_type":"dashboard","_id":"Thrift-performance","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading dashboard Topbeat-Dashboard:
{"_index":".kibana","_type":"dashboard","_id":"Topbeat-Dashboard","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading dashboard Winlogbeat-Dashboard:
{"_index":".kibana","_type":"dashboard","_id":"Winlogbeat-Dashboard","_version":1,"shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading index pattern filebeat-*:
{"_index":".kibana","_type":"index-pattern","_id":"filebeat-*","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading index pattern packetbeat-*:
{"_index":".kibana","_type":"index-pattern","_id":"packetbeat-*","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading index pattern topbeat-*:
{"_index":".kibana","_type":"index-pattern","_id":"topbeat-*","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
Loading index pattern winlogbeat-*:
{"_index":".kibana","_type":"index-pattern","_id":"winlogbeat-*","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}

Please tell me how I can fix this.
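For what it's worth, every response in the output above ends with `"created":true` and `"failed":0`, which indicates the documents were indexed without errors. A minimal sketch for checking that programmatically (the two embedded sample lines and the `/tmp/load.log` path are stand-ins for the real loader output):

```shell
#!/bin/sh
# Write two sample responses to a log file; in practice you would
# capture the real output with: ./load.sh -url "http://localhost:9200" > /tmp/load.log
cat <<'EOF' > /tmp/load.log
{"_index":".kibana","_type":"search","_id":"Processes","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
{"_index":".kibana","_type":"dashboard","_id":"HTTP","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
EOF

# Count responses whose _shards report a non-zero "failed" value.
# grep -c exits non-zero when there are no matches, so keep the 0.
failures=$(grep -c '"failed":[1-9]' /tmp/load.log || true)
echo "responses with failed shards: $failures"
```

If the count is 0, the dashboards, searches, visualizations, and index patterns were all stored in the `.kibana` index successfully.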
