
aliyun-log-cli's Introduction

User Guide


中文版README (README in Chinese)

Content

Introduction

The Alicloud Log Service provides a web console and SDKs to operate the log service and analyze logs. To make automation more convenient, we release this command line interface (CLI).

Brief

Alicloud Log Service command line console. It supports almost all operations available on the web console, detects incomplete log query results and automatically queries across multiple pages, and can even copy project settings across regions.

Major Features

  • Supports almost all 50+ REST APIs of the log service.
  • Multiple-account support for cross-region operation.
  • Incomplete log query detection and automatic querying across pagination.
  • Multiple credential storage options: file, command line, and environment variables.
  • Supports command-line or file-based inputs with complete format validation.
  • Supports JMES filters for further processing of results, e.g. selecting specific fields from JSON.
  • Cross-platform support (Windows, Linux and Mac); Python based and friendly to Py2, Py3 and even PyPy; supports pip installation.

Installation

Operating System

The CLI supports the operating systems below:

  • Windows
  • Mac
  • Linux

Supported Version

Python 2.6, 2.7, 3.3, 3.4, 3.5, 3.6, PyPy, PyPy3

Installation Method

Run the command below to install the CLI:

> pip install -U aliyun-log-cli

Note

On Mac, it's recommended to use pip3 to install the CLI.

> brew install python3
> pip3 install -U aliyun-log-cli

If you encounter errors like OSError: [Errno 1] Operation not permitted, try the option --user to install:

> pip3 install -U aliyun-log-cli --user

Alicloud ECS, which may have limited internet access

You can try the mirror of a local network provider. For Alicloud ECS, try the one below:

pip/pip3 install -U aliyun-log-cli --index http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com

Offline Installation

Since 0.1.12, we provide offline packages for the Mac x64 and Linux x64 platforms.

Follow the steps below to install it.

  1. download the package from the release page
  2. unzip it to a local folder, like cli_packages; you will see some whl files inside it
  3. if you don't have pip, install pip first:
python pip-10.0.1-py2.py3-none-any.whl/pip install --no-index cli_packages/pip-10.0.1-py2.py3-none-any.whl
  4. install the CLI:
pip install aliyun-log-cli --no-index --find-links=cli_packages
  5. verify it:
> aliyunlog --version

FAQ of Installation

  1. Encountering error TLSV1_ALERT_PROTOCOL_VERSION when installing the CLI:
> pip install aliyun-log-cli

Collecting aliyun-log-cli
  Could not fetch URL https://pypi.python.org/simple/aliyun-log-cli/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590) - skipping
  Could not find a version that satisfies the requirement aliyun-log-cli (from versions: )
No matching distribution found for aliyun-log-cli

Solution: Please upgrade pip and retry:

pip install pip -U
  2. Cannot find command aliyunlog?

It's caused by a missing shell wrapper for aliyunlog; you can create one yourself.

2.1. find python path:

for linux or mac:

which python

on Windows, it's probably located in c:\PythonXX (XX is the version, like 27 or 37)

2.2. create a shell script named aliyunlog (on Linux/Mac) or aliyunlog.py (on Windows) with the content below, make it executable, and put it into a PATH folder:

#!<python path here with ! ahead>
import re
import sys

from aliyunlogcli.cli import main

if __name__ == '__main__':
    # strip setuptools script suffixes so argv[0] is the bare command name
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(main())

On Linux or Mac, it could be put under /usr/bin/. On Windows, it could be put under c:/windows.

2.3. verify it

# linux or mac
> aliyunlog --version
# windows
> aliyunlog.py --version
  3. Fail to install module regex?

Refer to the link below to install python-devel via yum, apt-get, or manually: https://rpmfind.net/linux/rpm2html/search.php?query=python-devel

Full Usage list

Run the command below to get the full usage list:

> aliyunlog --help

Note: the aliyun command is deprecated to prevent conflicts with the universal Alicloud CLI; aliyunlog is recommended in case aliyun conflicts with other tools.

It will show the full usage.

Configure CLI

Refer to Configure CLI.

Input and Output

Inputs

  1. Normal case:
> aliyunlog log get_logs --request="{\"topic\": \"\", \"logstore\": \"logstore1\", \"project\": \"dlq-test-cli-123\", \"toTime\": \"2018-01-01 10:10:10\", \"offset\": \"0\", \"query\": \"*\", \"line\": \"10\", \"fromTime\": \"2018-01-01 08:08:08\", \"reverse\":\"false\"}"
  2. Input via file: you could store the content of one parameter in a file and pass it via the command line with the prefix file://:
> aliyunlog log get_logs --request="file://./get_logs.json"

The content of the file get_logs.json is as below. Note: the \ escaping of " is unnecessary inside the file.

{
  "topic": "",
  "logstore": "logstore1",
  "project": "project1",
  "toTime": "2018-01-01 11:11:11",
  "offset": "0",
  "query": "*",
  "line": "10",
  "fromTime": "2018-01-01 10:10:10",
  "reverse": "true"
}

Parameter Validation

  • Mandatory check: if a mandatory parameter is missing, it reports an error with usage info.

  • The format of each parameter's value is validated, e.g. int, bool, string list, and special data structures.

  • For booleans, it supports:

    • true (case insensitive), T, 1
    • false (case insensitive), F, 0
  • String lists are supported as ["s1", "s2"]
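
For example, given the rules above, "reverse": "true", "reverse": "T" and "reverse": "1" in a get_logs request should all be treated as the same boolean value (a hedged reading of the validation rules above, not an exhaustive spec).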

Output

  1. For operations like Create, Update and Delete, there's no output; an exit code of 0 means success.

  2. For operations like Get and List, the output is in JSON format.

  3. Errors are reported in JSON format as below:

{
  "errorCode":"...",
  "errorMessage":"..."
}

Filter output

Output can be filtered via a JMES filter:

Examples:

> aliyunlog log get_logs ...

which outputs:

[ {"__source__": "ip1", "key": "log1"}, {"__source__": "ip2", "key": "log2"} ]

You could use the --jmes-filter below to break the logs into separate lines:

> aliyunlog log get_logs ... --jmes-filter="join('\n', map(&to_string(@), @))"

output:

{"__source__": "ip1", "key": "log1"}
{"__source__": "ip2", "key": "log2"}

Further Process

You could use >> to store the output to a file, or process the output with your own command. For example, here is another way to break the logs into separate lines: append the command with a | on Linux/Unix:

| python2 -c "from __future__ import print_function;import json;map(lambda x: print(json.dumps(x).encode('utf8')), json.loads(raw_input()));"
or 
| python3 -c "import json;list(map(lambda x: print(json.dumps(x)), json.loads(input())));"

e.g.

aliyunlog log get_log .... | python2 -c "from __future__ import print_function;import json;map(lambda x: print(json.dumps(x).encode('utf8')), json.loads(raw_input()));" >> data.txt

Command Reference

Command Specification

1. aliyunlog log <subcommand> [parameters | global options]
2. aliyunlog configure <access_id> <access-key> <endpoint>
3. aliyunlog [--help | --version]

Alias

There's also the alias aliyunlog for the CLI (used throughout this document) in case the command aliyun conflicts with other tools.

Subcommand and parameters

Actually, the CLI leverages aliyun-log-python-sdk, which maps each command to a method of aliyun.log.LogClient. Command-line parameters are mapped to the methods' parameters. For the detailed parameter spec, please refer to the Mapped Python SDK API Spec.

Examples:

def create_logstore(self, project_name, logstore_name, ttl=2, shard_count=30):

Mapped to CLI:

> aliyunlog log create_logstore
  --project_name=<value>
  --logstore_name=<value>
  [--ttl=<value>]
  [--shard_count=<value>]

Global options

All commands support the optional global options below:

    [--access-id=<value>]
    [--access-key=<value>]
    [--region-endpoint=<value>]
    [--client-name=<value>]
    [--jmes-filter=<value>]
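
For example, to run a command against a specific configured account (a sketch assuming an account named user-bj was configured via aliyunlog configure):

> aliyunlog log list_logstore --project_name="p1" --client-name=user-bj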

Command categories

  1. Project management
  2. Logstore management
  3. Shard management
  4. Machine group management
  5. Logtail config management
  6. Machine group and Logtail Config Mapping
  7. Index management
  8. Cursor management
  9. Logs write and consume
  10. Shipper management
  11. Consumer group management
  12. Elasticsearch data migration

1. Project management

  • list_project

  • create_project

  • get_project

  • delete_project

  • copy_project

    • copy all configurations, including logstore, Logtail, and index configs, from one project to another project, which can be in a different region.
> aliyunlog log copy_project --from_project="p1" --to_project="p1" --to_client="account2"
  • Note: to_client is another account configured via aliyunlog configure; it's OK to pass main or nothing when copying within the same region.
  • Refer to Copy project settings cross regions to learn more.

2. Logstore management

  • create_logstore
  • delete_logstore
  • get_logstore
  • update_logstore
  • list_logstore

3. Shard management

  • list_shards
  • split_shard
  • merge_shard

4. Machine group management

  • create_machine_group
    • Format of partial parameter (see the invocation sketch after this list):
{
 "machine_list": [
   "machine1",
   "machine2"
 ],
 "machine_type": "userdefined",
 "group_name": "group_name2",
 "group_type": "",
 "group_attribute": {
   "groupTopic": "topic x"
 }
}
  • delete_machine_group
  • update_machine_group
  • get_machine_group
  • list_machine_group
  • list_machines
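
A hedged invocation sketch for create_machine_group, assuming the JSON above is saved as group.json and that the CLI parameter follows the SDK's group_detail naming:

> aliyunlog log create_machine_group --project_name="p1" --group_detail="file://./group.json"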

5. Logtail config management

  • create_logtail_config
  • update_logtail_config
  • delete_logtail_config
  • get_logtail_config
  • list_logtail_config

6. Machine group and Logtail Config Mapping

  • apply_config_to_machine_group
  • remove_config_to_machine_group
  • get_machine_group_applied_configs
  • get_config_applied_machine_groups

7. Index management

  • create_index
    • Format of partial parameter (see the invocation sketch after this list):
{
 "keys": {
   "f1": {
     "caseSensitive": false,
     "token": [
       ",",
       " ",
       "\"",
       "\"",
       ";",
       "=",
       "(",
       ")",
       "[",
       "]",
       "{",
       "}",
       "?",
       "@",
       "&",
       "<",
       ">",
       "/",
       ":",
       "\n",
       "\t"
     ],
     "type": "text",
     "doc_value": true
   },
   "f2": {
     "doc_value": true,
     "type": "long"
   }
 },
 "storage": "pg",
 "ttl": 2,
 "index_mode": "v2",
 "line": {
   "caseSensitive": false,
   "token": [
     ",",
     " ",
     "\"",
     "\"",
     ";",
     "=",
     "(",
     ")",
     "[",
     "]",
     "{",
     "}",
     "?",
     "@",
     "&",
     "<",
     ">",
     "/",
     ":",
     "\n",
     "\t"
   ]
 }
}
  • update_index
  • delete_index
  • get_index_config
  • list_topics
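
A hedged invocation sketch for create_index, assuming the JSON above is saved as index.json and that the CLI parameter follows the SDK's index_detail naming:

> aliyunlog log create_index --project_name="p1" --logstore_name="l1" --index_detail="file://./index.json"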

8. Cursor management

  • get_cursor
  • get_cursor_time
  • get_previous_cursor_time
  • get_begin_cursor
  • get_end_cursor

9. Logs write and consume

  • put_logs
    • Format of parameter:
{
"project": "dlq-test-cli-35144",
"logstore": "logstore1",
"topic": "topic1",
"source": "source1",
"logtags": [
  [
    "tag1",
    "v1"
  ],
  [
    "tag2",
    "v2"
  ]
],
"hashKey": "1231231234",
"logitems": [
  {
    "timestamp": 1510579341,
    "contents": [
      [
        "key1",
        "v1"
      ],
      [
        "key2",
        "v2"
      ]
    ]
  },
  {
    "timestamp": 1510579341,
    "contents": [
      [
        "key3",
        "v3"
      ],
      [
        "key4",
        "v4"
      ]
    ]
  }
]
}
  • get_logs
    • Format of parameter:
{
"topic": "",
"logstore": "logstore1",
"project": "dlq-test-cli-35144",
"toTime": "2018-01-01 11:11:11",
"offset": "0",
"query": "*",
"line": "10",
"fromTime": "2018-01-01 10:10:10",
"reverse": "true"
}
  • It fetches all data when line is passed as -1. But if the volume of data is large (e.g. exceeding 1GB), it's better to use get_log_all (see the sketch after this list).

  • get_log_all

    • this API is similar to get_logs, but it fetches data iteratively and outputs it in chunks. It's intended for fetching large volumes of data.
  • get_histograms

  • pull_logs

  • pull_log

    • this API is similar to pull_logs, but it accepts human-readable parameters and fetches data iteratively, outputting it in chunks. It's intended for fetching large volumes of data.
  • pull_log_dump

    • this API dumps data from all shards to local files concurrently.
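
A hedged sketch of fetching a large time range with get_log_all and appending it to a file, assuming the parameter names follow the SDK's get_log_all(project, logstore, from_time, to_time, ...) mapping:

> aliyunlog log get_log_all --project="p1" --logstore="logstore1" --from_time="2018-01-01 10:10:10" --to_time="2018-01-01 11:11:11" --query="*" >> data.txt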

10. Shipper management

  • create_shipper
    • Format of partial parameter:
{
"oss_bucket": "dlq-oss-test1",
"oss_prefix": "sls",
"oss_role_arn": "acs:ram::1234:role/aliyunlogdefaultrole",
"buffer_interval": 300,
"buffer_mb": 128,
"compress_type": "snappy"
}
  • update_shipper
  • delete_shipper
  • get_shipper_config
  • list_shipper
  • get_shipper_tasks
  • retry_shipper_tasks

11. Consumer group management

  • create_consumer_group
  • update_consumer_group
  • delete_consumer_group
  • list_consumer_group
  • update_check_point
  • get_check_point

12. Elasticsearch data migration

Best Practice

Troubleshooting

By default, the CLI stores errors and warnings at ~/aliyunlogcli.log. This is also configurable via the file ~/.aliyunlogcli, section __logging__, to adjust the logging level and location:

[__logging__]
filename=  # default: ~/aliyunlogcli.log, rotated when hitting filebytes
filebytes=   # default: 104857600 (100MB), size of each log file before rotation, unit: bytes
backupcount= # default: 5, number of backup files kept
#filemode=  # deprecated
format=    # default: %(asctime)s %(levelname)s %(filename)s:%(lineno)d %(funcName)s %(message)s
datefmt=   # default: "%Y-%m-%d %H:%M:%S", could be a strftime()-compatible date/time format string
level=     # default: warn, could be: info, error, fatal, critical, debug
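
For example, a minimal override that raises verbosity and relocates the log file (values are illustrative):

[__logging__]
level=info
filename=/tmp/aliyunlogcli.log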

Other resources

  1. Alicloud Log Service homepage: https://www.alibabacloud.com/product/log-service
  2. Alicloud Log Service doc: https://www.alibabacloud.com/help/product/28958.htm
  3. Alicloud Log Python SDK doc: http://aliyun-log-python-sdk.readthedocs.io/
  4. For any issues, please submit support tickets.


aliyun-log-cli's Issues

show relevant hints when there's an error in a command

e.g. typing aliyun log copy_project ... with an invalid parameter should show just the usage of aliyun log copy_project rather than the full usage list, which is not helpful.

Note: due to docopt limitations, this needs some further checking.

allow to set time-out

due to network or server-side issues, requests may time out at 20 seconds; it would be better to make this configurable.

/home/travis/virtualenv/python3.3.6/bin/python "/home/travis/build/aliyun/aliyun-log-cli/tests/../aliyunlogcli/cli.py" log create_logstore --project_name="dlq-test-cli-56913" --logstore_name="logstore1" --ttl=2 --shard_count="2"
Status : FAIL 2
{"errorCode": "LogRequestError", "errorMessage": "HTTPConnectionPool(host='dlq-test-cli-56913.[secure]', port=80): Max retries exceeded with url: /logstores (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f149a6e5910>, 'Connection to dlq-test-cli-56913.[secure] timed out. (connect timeout=20)'))", "requestId": ""}
Traceback (most recent call last):
  File "./test_cli.py", line 14, in <module>
    test_main()
  File "./test_cli.py", line 10, in test_main
    run_test(file_path)
  File "/home/travis/build/aliyun/aliyun-log-cli/tests/util.py", line 75, in run_test
    assert return_code == 0, ValueError('cmd return code "{0}" is as expected "0"'.format(return_code))
AssertionError: cmd return code "2" is as expected "0"

support copying data across logstores

duplicate

aliyunlog log copy_logstore_data --from_project="p1" --from_logstore="l1" --to_project="p2" --to_logstore="l2" --to_client="...." --from_time="..." --to_time="..."

with index:

aliyunlog log copy_logstore_data --from_project="p1" --from_logstore="l1" --to_project="p2" --to_logstore="l2" --to_client="...." --from_time="..." --to_time="..." --query="...."

support accepting the output of get APIs in the corresponding set APIs

now, get_logtail_config's output is not in the format expected by update_logtail_config.

e.g.
the output is similar to:

{
  "configName": "order_fulfill_tail_1",
  "createTime": 1513579434,
  "inputDetail": {
    "delayAlarmBytes": 0,
    "fileEncoding": "utf8",
    "filePattern": "fulfill_order.log",
    "filterKey": [
      
    ],
    "filterRegex": [
      
    ],
    "key": [
      "execTime",
      "taskId",
      "agentCode",
      "msgType",
      "mainOrderId",
      "planStoreCode",
      "realStoreList",
      "isSplit",
      "end",
      "processTime"
    ],
    "logPath": "/home/admin/xenon/logs/agent/*1",
    "logTimezone": "",
    "logType": "delimiter_log",
    "maxDepth": 100,
    "maxSendRate": -1,
    "mergeType": "topic",

    "preserveDepth": 0,
    "quote": "\\x01",
    "sendRateExpire": 0,
    "sensitive_keys": [
      
    ],
    "separator": ";",
    "shardHashKey": [
      
    ],
    "timeFormat": "",
    "timeKey": "",
    "topicFormat": "none"
  },
  "inputType": "file",
  "lastModifyTime": 1513579434,
  "logSample": "2017-11-22 14:41:51.734;499;orderFulfillAgent;order_agent_msg;89646280215588163;CGO101;[STORE_1228938];false;false;2017-11-18 10:00:11",
  "outputDetail": {
    "endpoint": "sls.aliyun-inc.com",
    "logstoreName": "order_fulfill_store",
    "region": "cn-hangzhou-corp"
  },
  "outputType": "LogService"
}

but the expected input is:

{
  "config_name": "config_name2",
  "logstore_name": "logstore2",
  "file_pattern": "file_pattern",
  "time_format": "time_format",
  "log_path": "/log_path",
  "endpoint": "endpoint",
  "log_parse_regex": "xxx ([\\w\\-]+\\s[\\d\\:]+)\\s+(.*)",
  "log_begin_regex": "xxx.*",
  "reg_keys": [
    "time",
    "value"
  ],
  "topic_format": "none",
  "filter_keys": [
    "time",
    "value"
  ],
  "filter_keys_reg": [
    "time",
    "value"
  ],
  "logSample": "xxx 2017-11-11 11:11:11 hello alicloud."
}

improve error message when default account is not configured

expected:

Error!

The default account is not configured or the command doesn't have a well-configured account passed. 

Fix it by either configuring a default account as: 
> aliyunlog configure <access_id> <access-key> <endpoint>

or use option --client-name to specify a well-configured account as:
> aliyunlog configure <access_id> <access-key> <endpoint> <user-bj>
> aliyunlog log .....  --client-name=user-bj

Refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_configure_cli_en.html for more info.

rather than the existing one:

Traceback (most recent call last):
  File "/usr/bin/aliyunlog", line 9, in <module>
    load_entry_point('aliyun-log-cli', 'console_scripts', 'aliyunlog')()
  File "/home/admin/beaver-python27/lib/python2.7/site-packages/aliyunlogcli/cli_core.py", line 199, in main
    access_id, access_key, endpoint, jmes_filter, format_output, decode_output = load_config(system_options)
  File "/home/admin/beaver-python27/lib/python2.7/site-packages/aliyunlogcli/config.py", line 139, in load_config
    assert access_id and access_key and endpoint, ValueError("Access id/key or endpoint is empty!")
AssertionError: Access id/key or endpoint is empty!

support index historical files

currently, Logtail supports indexing historical data, but the process is very complex and error-prone. Customers request using the CLI to index historical files directly.

it would be better to fetch the Logtail config directly for the configuration.

support copy logstore via CLI

Usage:
aliyun log copy_logstore --from_project=<value> --from_logstore=<value> --to_logstore=<value> [--to_project=<value>] [--to_client=<value>] [--access-id=<value>] [--access-key=<value>] [--region-endpoint=<value>] [--client-name=<value>] [--jmes-filter=<value>] [--format-output=<value>]

Options:
--from_project=<value> 		: project name
--from_logstore=<value> 		: logstore name
--to_logstore=<value> 		: logstore name
[--to_project=<value>] 		: project name; copy to the same project if not specified, and try to create it if it doesn't exist
[--to_client=<value>] 		: LogClient instance; used to operate on the "to_project" if specified

support ETL using CLI

support transforming historical data from one logstore to another based on configs like regex, CSV, look-up, evaluation, etc.

support search context

e.g.

aliyun log get_log --project="p1" --logstore="l1" --query="..."
| aliyun log get_log --project="p1" --logstore="l1" --query="..." ... --show_context="5"

the --show_context="5"
means show the context of the 6th log: by default, the previous 20 logs and the following 20 logs.

other forms of show_context:

  • "5, 20, 30" means show the previous 20 logs and the following 30 logs for the 6th log.

response format:

{
  "before": [...list of logs...],
  "current": {...log...},
  "after": [...list of logs...]
}

support copy data from logstore into multiple logstore splitting by topic/certain fields

support copy data from logstore into multiple logstore splitting by topic/certain fields.

background:
when data volume increases rapidly, the logstore needs to be split; this feature will allow users to copy historical data into different logstores.

copy_data ...(same as the current one) ....
--to_logstore="topic:value:logstore1|field1:value2:logstore2|*"
--to_project="p1|p2" --to_client="client"
let's keep support within the same region for now.

if to_project contains '|', the count must be the same as in --to_logstore; otherwise it's ambiguous how to choose the destination

support simpler shard split command

the current way via the console is quite inconvenient when splitting into larger numbers.
the CLI needs a more convenient command to split one logstore from the current 2 shards to 32 shards.

support scaling copy_data by shard

currently, it only supports splitting by time range. Some logstores may have a large shard count (e.g. 20+); it would be better to also support splitting by shard, so we could copy data with different agents.

aliyunlog log copy_data .... -shard_list=1-8
aliyunlog log copy_data .... -shard_list=9-16
aliyunlog log copy_data .... -shard_list=17-24

other shard_list syntax:
shard_list=1,2,3
shard_list=1,6-8,20
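
A minimal sketch of how such an expression could be expanded (parse_shard_list is a hypothetical helper for illustration, not part of the CLI):

# hypothetical helper: expand a shard_list expression into explicit shard ids
def parse_shard_list(expr):
    shards = []
    for part in expr.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            shards.extend(range(int(lo), int(hi) + 1))
        else:
            shards.append(int(part))
    return shards

print(parse_shard_list("1,6-8,20"))  # [1, 6, 7, 8, 20]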

offline package support for customized CentOS

Linux 3.10.0-327.ali2014.alios7.x86_64 #1 SMP Fri Jan 12 12:33:55 CST 2018 x86_64 x86_64 x86_64 GNU/Linux

need a wheel file for protobuf:
protobuf-3.6.1-py2.py3-none-any.whl

regex has a similar issue

support logstore sync feature

  1. copy logs from one logstore to another
  2. support cross-region copy
  3. support copying with a condition (search filter)
  4. support continuous copy (via consumer group)
  5. support real-time copy (keep copying newly incoming data)

some APIs' output is not in JSON format

currently, some output is not real JSON, as shown below; thus it doesn't support JMES filters or being stored to a file.

aliyun log get_logtail_config  --project_name=abc --config_name=abc

will output:

{'configName': 'order_fulfill_tail_1', 'createTime': 1513579434, 'inputDetail': {'adjustTimezone': False, 'autoExtend': True, 'delayAlarmBytes': 0, 'discardNonUtf8': False, 'discardUnmatch': True, 'enableRawLog': False, 'enableTag': False, 'fileEncoding': 'utf8', 'filePattern': 'fulfill_order.log', 'filterKey': [], 'filterRegex': [], 'key': ['execTime', 'taskId', 'agentCode', 'msgType', 'mainOrderId', 'planStoreCode', 'realStoreList', 'isSplit', 'end', 'processTime'], 'localStorage': True, 'logPath': '/home/admin/xenon/logs/agent/*1', 'logTimezone': '', 'logType': 'delimiter_log', 'maxDepth': 100, 'maxSendRate': -1, 'mergeType': 'topic', 'preserve': True, 'preserveDepth': 0, 'quote': '\x01', 'sendRateExpire': 0, 'sensitive_keys': [], 'separator': ';', 'shardHashKey': [], 'tailExisted': False, 'timeFormat': '', 'timeKey': '', 'topicFormat': 'none'}, 'inputType': 'file', 'lastModifyTime': 1513579434, 'logSample': '2017-11-22 14:41:51.734;499;orderFulfillAgent;order_agent_msg;89646280215588163;CGO101;[STORE_1228938];false;false;2017-11-18 10:00:11', 'outputDetail': {'endpoint': 'sls.aliyun-inc.com', 'logstoreName': 'order_fulfill_store', 'region': 'cn-hangzhou-corp'}, 'outputType': 'LogService'}

the ' should be ", and True/False should be true/false.
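
Until that is fixed, a minimal workaround sketch: pipe the output through a small script that parses the Python literal and re-emits real JSON (assumes the output is a valid Python literal, as in the sample above; fix_json.py is a hypothetical name):

# fix_json.py -- read a Python-repr object from stdin, print real JSON
import ast
import json
import sys

obj = ast.literal_eval(sys.stdin.read())  # handles single quotes, True/False
print(json.dumps(obj))

e.g. aliyun log get_logtail_config --project_name=abc --config_name=abc | python fix_json.py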

allow to switch default account

right now, a user may have multiple accounts and want to switch to a specific account within the CLI session.

Note: this could actually be achieved by setting the env.

aliyun configure default_account hangzhou
aliyun configure default_account hangzhou --persist=true

by default, it only changes the current session (e.g. via ENV); when persist is set to true, it copies hangzhou's AK as the default in the config file.

config reading improvements: comments and variable interpolation

  1. when the item below is commented out, it raises an error:
    [option]
    #default-client =

NoOptionError

  2. want to support variable interpolation:
[DEFAULT]
admin_ak_id=id
admin_ak_key=key
user_ak_id=id
user_ak_key=key

[hz_admin]
access-id = %(admin_ak_id)
access-key =  %(admin_ak_key)
region-endpoint = cn-hangzhou.sls.aliyuncs.com

[hz_user]
access-id = %(user_ak_id)
access-key =  %(user_ak_key)
region-endpoint = cn-hangzhou.sls.aliyuncs.com

[bj_admin]
access-id = %(admin_ak_id)
access-key =  %(admin_ak_key)
region-endpoint = cn-beijing.sls.aliyuncs.com

[bj_user]
access-id = %(user_ak_id)
access-key =  %(user_ak_key)
region-endpoint = cn-beijing.sls.aliyuncs.com
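
Note that Python's configparser already supports this style of interpolation, though it requires a trailing s in the placeholder (e.g. %(admin_ak_id)s). A minimal sketch:

# keys in [DEFAULT] are visible in every section and can be interpolated
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[DEFAULT]
admin_ak_id = id

[hz_admin]
access-id = %(admin_ak_id)s
region-endpoint = cn-hangzhou.sls.aliyuncs.com
""")
print(cfg["hz_admin"]["access-id"])  # -> id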

default timezone "CST" in web page might raise warning on some machine

d:\python\python37-x64\lib\site-packages\dateutil\parser_parser.py:1204: Unknow
nTimezoneWarning: tzname CST identified but not understood. Pass tzinfos argu
ment in order to correctly return a timezone-aware datetime. In a future versio
n, this will raise an exception.
category=UnknownTimezoneWarning)
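
A minimal workaround sketch, assuming "CST" here means China Standard Time: pass tzinfos to dateutil so it can return a timezone-aware datetime:

# map the ambiguous name "CST" to a concrete zone (assumed: Asia/Shanghai)
from dateutil import parser, tz

dt = parser.parse("2018-01-01 10:10:10 CST", tzinfos={"CST": tz.gettz("Asia/Shanghai")})
print(dt.isoformat())  # 2018-01-01T10:10:10+08:00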

error message is not in JSON format when the configured AK is invalid

configuring an invalid AK reports errors as below:

{"ErrorCode": "LogRequestError", "ErrorMessage": "HTTPConnectionPool(host='dlq-test-sls-project1.endpoint123', port=80): Max retries exceeded with url: /configs/config_name3 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x102b57050>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))"}, "RequestID": ""

expected:
at least, it should be valid JSON.

feature request: support tail-f

since server-side tail-f support is coming soon, we could also support it via the CLI, which might be more powerful when combined with Linux commands like redirection/awk/grep etc.

support exporting logs to external services via syslog

used to export logs to a SOC, e.g. Splunk/ArcSight/QRadar etc.

spec:

  1. continuously export logs to the target via syslog (UDP preferred?)
  2. support recovery from a breakpoint.
  3. high performance (e.g. up to 10MB/s+ per logstore?)
