wgzhao / addax

Addax is a versatile open-source ETL tool that can seamlessly transfer data between various RDBMS and NoSQL databases, making it an ideal solution for data migration.

Home Page: https://wgzhao.github.io/Addax/

License: Apache License 2.0

Java 97.89% Python 1.28% Shell 0.76% Dockerfile 0.04% HTML 0.03%
hadoop hive database data-integrity clickhouse influxdb kudu mysql sqlserver trino

addax's Introduction

Addax Logo

Addax is a versatile open-source ETL tool

The documentation describes in detail how to install and use the plugins. It provides detailed instructions and sample configuration documentation for each plugin.


English | 简体中文

The project's initial code originated from Ali's DataX, and has been greatly improved on this basis. It also provides more read and write plugins. For more details, please refer to the difference document.

Supported Data Sources

Addax supports more than 20 SQL and NoSQL data sources. It can also be extended to support more.

Cassandra ClickHouse IBM DB2 dBase
Doris Elasticsearch Excel Greenplum
Apache HBase Hive InfluxDB Kafka
Kudu MinIO MongoDB MySQL
Oracle Phoenix PostgreSQL Presto
Redis Amazon S3 SQLite SQLServer
StarRocks Sybase TDengine Trino
Access SAP HANA

Getting Started

Use docker image

docker pull wgzhao/addax:latest
docker run -ti --rm --name addax wgzhao/addax:latest /opt/addax/bin/addax.sh /opt/addax/job/job.json

If you only need the common reader and writer plugins, you can pull the image whose tag ends with -lite; it is much smaller.

docker pull wgzhao/addax:4.0.12-lite
docker run -ti --rm --name addax wgzhao/addax:4.0.12-lite /opt/addax/bin/addax.sh /opt/addax/job/job.json

Use install script

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/wgzhao/Addax/master/install.sh)"

This script installs Addax to its preferred prefix: /usr/local on Intel macOS, and /opt/addax on Apple Silicon and Linux.

Compile and Package

git clone https://github.com/wgzhao/addax.git addax
cd addax
mvn clean package
mvn package assembly:single

After successful compilation and packaging, an addax-<version> folder is created in the target/datax directory of the project, where <version> indicates the version.

Begin your first task

The job subdirectory contains many sample jobs, of which job.json can be used as a smoke test. Run it as follows:

bin/addax.sh job/job.json

The output of the above command is roughly as follows.

$ bin/addax.sh job/job.json
  ___      _     _
 / _ \    | |   | |
/ /_\ \ __| | __| | __ ___  __
|  _  |/ _` |/ _` |/ _` \ \/ /
| | | | (_| | (_| | (_| |>  <
\_| |_/\__,_|\__,_|\__,_/_/\_\

:: Addax version ::    (v4.0.13-SNAPSHOT)

2023-05-14 11:43:38.040 [        main] INFO  VMInfo               - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2023-05-14 11:43:38.062 [        main] INFO  Engine               -
{
	"setting":{
		"speed":{
			"byte":-1,
			"channel":1,
			"record":-1
		}
	},
	"content":{
		"reader":{
			"name":"streamreader",
			"parameter":{
				"sliceRecordCount":10,
				"column":[
					{
						"value":"addax",
						"type":"string"
					},
					{
						"value":19890604,
						"type":"long"
					},
					{
						"value":"1989-06-04 11:22:33 123456",
						"type":"date",
						"dateFormat":"yyyy-MM-dd HH:mm:ss SSSSSS"
					},
					{
						"value":true,
						"type":"bool"
					},
					{
						"value":"test",
						"type":"bytes"
					}
				]
			}
		},
		"writer":{
			"name":"streamwriter",
			"parameter":{
				"print":true,
				"encoding":"UTF-8"
			}
		}
	}
}

2023-05-14 11:43:38.092 [        main] INFO  JobContainer         - The jobContainer begins to process the job.
2023-05-14 11:43:38.107 [       job-0] INFO  JobContainer         - The Reader.Job [streamreader] perform prepare work .
2023-05-14 11:43:38.107 [       job-0] INFO  JobContainer         - The Writer.Job [streamwriter] perform prepare work .
2023-05-14 11:43:38.108 [       job-0] INFO  JobContainer         - Job set Channel-Number to 1 channel(s).
2023-05-14 11:43:38.108 [       job-0] INFO  JobContainer         - The Reader.Job [streamreader] is divided into [1] task(s).
2023-05-14 11:43:38.108 [       job-0] INFO  JobContainer         - The Writer.Job [streamwriter] is divided into [1] task(s).
2023-05-14 11:43:38.130 [       job-0] INFO  JobContainer         - The Scheduler launches [1] taskGroup(s).
2023-05-14 11:43:38.138 [ taskGroup-0] INFO  TaskGroupContainer   - The taskGroupId=[0] started [1] channels for [1] tasks.
2023-05-14 11:43:38.141 [ taskGroup-0] INFO  Channel              - The Channel set byte_speed_limit to -1, No bps activated.
2023-05-14 11:43:38.141 [ taskGroup-0] INFO  Channel              - The Channel set record_speed_limit to -1, No tps activated.
addax  19890604	1989-06-04 11:24:36	true	test
addax  19890604	1989-06-04 11:24:36	true	test
addax  19890604	1989-06-04 11:24:36	true	test
addax  19890604	1989-06-04 11:24:36	true	test
addax  19890604	1989-06-04 11:24:36	true	test
addax  19890604	1989-06-04 11:24:36	true	test
addax  19890604	1989-06-04 11:24:36	true	test
addax  19890604	1989-06-04 11:24:36	true	test
addax  19890604	1989-06-04 11:24:36	true	test
addax  19890604	1989-06-04 11:24:36	true	test
2023-05-14 11:43:41.157 [       job-0] INFO  AbstractScheduler    - The scheduler has completed all tasks.
2023-05-14 11:43:41.158 [       job-0] INFO  JobContainer         - The Writer.Job [streamwriter] perform post work.
2023-05-14 11:43:41.159 [       job-0] INFO  JobContainer         - The Reader.Job [streamreader] perform post work.
2023-05-14 11:43:41.162 [       job-0] INFO  StandAloneJobContainerCommunicator - Total 10 records, 260 bytes | Speed 86B/s, 3 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 100.00%
2023-05-14 11:43:41.596 [       job-0] INFO  JobContainer         -
Job start  at             : 2023-05-14 11:43:38
Job end    at             : 2023-05-14 11:43:41
Job took secs             :                   3s
Average   bps             :               86B/s
Average   rps             :              3rec/s
Number of rec             :                  10
Failed record             :                   0

The job subdirectory and the online documentation provide all kinds of job configuration examples.

Runtime Requirements

  • JDK 1.8+
  • Python 2.7+ / Python 3.7+ (Windows)

Documentation

The full documentation is available at https://wgzhao.github.io/Addax/.

Code Style

We recommend you use IntelliJ as your IDE. The code style template for the project can be found in the codestyle repository along with our general programming and Java guidelines. In addition to those you should also adhere to the following:

  • Alphabetize sections in the documentation source files (both in table of contents files and other regular documentation files). In general, alphabetize methods/variables/sections if such ordering already exists in the surrounding code.
  • When appropriate, use the Java 8 stream API. However, note that the stream implementation does not perform well so avoid using it in inner loops or otherwise performance sensitive sections.
  • Categorize errors when throwing exceptions. For example, AddaxException takes an error code and an error message as arguments, AddaxException(REQUIRE_VALUE, "lack of required item"). This categorization lets you generate reports so you can monitor the frequency of various failures (see the sketch after this list).
  • Ensure that all files have the appropriate license header; you can generate the license by running mvn license:format.
  • Consider using String formatting (printf style formatting using the Java Formatter class): format("Session property %s is invalid: %s", name, value) (note that format() should always be statically imported). Sometimes, if you only need to append something, consider using the + operator.
  • Avoid using the ternary operator except for trivial expressions.
  • Use an assertion from Airlift's Assertions class if there is one that covers your case rather than writing the assertion by hand. Over time, we may move over to more fluent assertions like AssertJ.
  • When writing a Git commit message, follow these guidelines.
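
A minimal sketch of the error-categorization and string-formatting conventions above. REQUIRE_VALUE, the message text, and the surrounding variables are illustrative, and the exception call simply follows the AddaxException(errorCode, message) form shown in this section:

import static java.lang.String.format;

void requireValue(String name, Object value)
{
    if (value == null) {
        // categorized error: an error code plus a human-readable message
        throw new AddaxException(REQUIRE_VALUE, format("required configuration item %s is missing", name));
    }
}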

Star History

Star History Chart

License

This software is free to use under the Apache License 2.0.

Special Thanks

Special thanks to JetBrains for its support of this project.

addax's People

Contributors

asdf2014, binaryworld, cch1996, dependabot[bot], fish-palm, hbspy, heljoyliu, ironsdu, lw309637554, mutoulbj, sufism, tceason, trafalgarluo, wanda1416, weihebu, wgzhao, wuchase, yyi, zhongtian-hu


addax's Issues

Error: Could not find or load main class com.wgzhao.datax.core.Engine

Describe the bug
I downloaded the latest release, compiled and packaged it, then ran python bin/datax.py job/job.json as a test; it reports that the main class cannot be found.
I did find Engine.class under lib/data-core, so the file should be there.

(screenshot)

Also: I downloaded the pre-built jar package that does not need compiling, and it reports the same problem.

These are the versions in my runtime environment:
(screenshot)

Support Doris reader and writer plugins

Doris is MySQL-compatible, so the MySQL plugins can be used for both reading and writing. When writing with INSERT INTO, however, Doris parses the SQL on the FE node, which makes loading slow. Is there any way to develop a dedicated, faster Doris reader/writer plugin? The official documentation says INSERT INTO is not suitable for production and recommends INSERT INTO SELECT instead, which does not fit DataX, and unlike PostgreSQL, Doris does not provide an efficient bulk-insert mechanism such as COPY FROM; the recommended way to import data is Stream Load. I hope the author can convert writes to Stream Load internally to improve import speed!

Some questions about HdfsReader

I would like to ask: the number of threads HdfsReader uses cannot be controlled through the channel setting; it is determined by the number of files instead. Does your DataX solve this problem?

The speed limit does not take effect either; does your DataX have the same problem?

Sorry for tagging this as a bug; this is my first time opening an issue on GitHub and I do not know where to change the label.

Error when the target table contains reserved words

Take writing to MySQL as an example; the job file is as follows:

{
    "job": {
        "setting": {
            "speed": {
                "channel": 3
            }
        },
        "content": [
            {
                "writer": {
                    "name": "mysqlwriter",
                    "parameter": {
                        "username": "root",
                        "password": "123456",
                        "column": [
                            "id", "date","table"
                        ],
                        "connection": [
                            {
                                "table": [
                                    "tbl_test"
                                ],
                                "jdbcUrl": "jdbc:mysql://127.0.0.1:3306/test"
                            }
                        ]
                    }
                },
                "reader": {
                    "name": "streamreader",
                    "parameter": {
                        "column": [
                            {
                                "value": "123",
                                "type": "long"
                            },
			  {"value":"2020-11-12", "type":"string"},
			{"value":"hello", "type":"string"}
                        ],
                        "sliceRecordCount": 10
                    }
                }
            }
        ]
    }
}

The following error is reported during execution:

....
020-11-16 20:25:48.028 [job-0] ERROR JobContainer - Exception when job run
com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-01], Description:[获取表字段相关信息失败.].  - 获取表:tbl_test 的字段的元信息时失败. 请联系 DBA 核查该库、表信息. - java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'table from tbl_test where 1=2' at line 1
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
	at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
	at com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1200)
	at com.alibaba.datax.plugin.rdbms.util.DBUtil.getColumnMetaData(DBUtil.java:542)
	at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.dealColumnConf(OriginalConfPretreatmentUtil.java:133)
	at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.dealColumnConf(OriginalConfPretreatmentUtil.java:148)
	at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.doPretreatment(OriginalConfPretreatmentUtil.java:43)
	at com.alibaba.datax.plugin.rdbms.writer.CommonRdbmsWriter$Job.init(CommonRdbmsWriter.java:41)
	at com.alibaba.datax.plugin.writer.mysqlwriter.MysqlWriter$Job.init(MysqlWriter.java:30)
	at com.alibaba.datax.core.job.JobContainer.initJobWriter(JobContainer.java:687)
	at com.alibaba.datax.core.job.JobContainer.init(JobContainer.java:313)
	at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:124)
	at com.alibaba.datax.core.Engine.start(Engine.java:88)
	at com.alibaba.datax.core.Engine.entry(Engine.java:149)
	at com.alibaba.datax.core.Engine.main(Engine.java:164)

	at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:33)
	at com.alibaba.datax.plugin.rdbms.util.DBUtil.getColumnMetaData(DBUtil.java:555)
	at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.dealColumnConf(OriginalConfPretreatmentUtil.java:133)
	at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.dealColumnConf(OriginalConfPretreatmentUtil.java:148)
	at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.doPretreatment(OriginalConfPretreatmentUtil.java:43)
	at com.alibaba.datax.plugin.rdbms.writer.CommonRdbmsWriter$Job.init(CommonRdbmsWriter.java:41)
	at com.alibaba.datax.plugin.writer.mysqlwriter.MysqlWriter$Job.init(MysqlWriter.java:30)
	at com.alibaba.datax.core.job.JobContainer.initJobWriter(JobContainer.java:687)
	at com.alibaba.datax.core.job.JobContainer.init(JobContainer.java:313)
	at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:124)
	at com.alibaba.datax.core.Engine.start(Engine.java:88)
	at com.alibaba.datax.core.Engine.entry(Engine.java:149)
	at com.alibaba.datax.core.Engine.main(Engine.java:164)
Caused by: java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'table from tbl_test where 1=2' at line 1
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
	at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
	at com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1200)
	at com.alibaba.datax.plugin.rdbms.util.DBUtil.getColumnMetaData(DBUtil.java:542)
	... 11 common frames omitted

Oracle update mode reports a type conversion error

I have tested the update mode for both Oracle and PostgreSQL. In Oracle, update mode reports a type conversion error; after investigation I found that i=0 in the CommonRdbmsWriter class causes the column order of the resultSetMetaData variable to be wrong. I have submitted the code, please take a look. Another question: every time a job finishes successfully, the program prints an address and then immediately reports a failure. Have you run into this?
Any advice is appreciated.
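
For reference, a rough sketch of the likely off-by-one (illustrative names, not the actual CommonRdbmsWriter code): JDBC ResultSetMetaData column indexes are 1-based, so a loop that starts at 0 shifts every column name and type by one position.

import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

static void collectColumnMeta(ResultSet resultSet) throws SQLException
{
    ResultSetMetaData md = resultSet.getMetaData();
    for (int i = 1; i <= md.getColumnCount(); i++) {   // JDBC columns start at 1, not 0
        String name = md.getColumnName(i);
        int jdbcType = md.getColumnType(i);
        // record name/jdbcType in the same order as the configured columns
    }
}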

InfluxDBWriter: the multi-channel setting has no effect

Configuration:

"job": {
  "setting": {
    "speed": {
      "channel": **5,**
      //"record": 200,
      "bytes": -1

Error message:

2021-02-22 08:21:00.688 [main] INFO PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2021-02-22 08:21:00.688 [main] INFO JobContainer - DataX jobContainer starts job.
2021-02-22 08:21:00.691 [main] INFO JobContainer - Set jobId = 0
2021-02-22 08:21:00.711 [job-0] INFO JobContainer - DataX Reader.Job [streamreader] do prepare work .
2021-02-22 08:21:00.711 [job-0] INFO JobContainer - DataX Writer.Job [influxdbwriter] do prepare work .
2021-02-22 08:21:10.613 [job-0] INFO JobContainer - Job set Channel-Number to 3 channels.
2021-02-22 08:21:10.614 [job-0] INFO JobContainer - DataX Reader.Job [streamreader] splits to [3] tasks.
2021-02-22 08:21:10.615 [job-0] WARN InfluxDBWriter$Job - -------------------------split() begin...
2021-02-22 08:21:10.615 [job-0] INFO JobContainer - DataX Writer.Job [influxdbwriter] splits to [1] tasks.
2021-02-22 08:21:10.620 [job-0] INFO StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00%
2021-02-22 08:21:10.621 [job-0] ERROR Engine - Code:[Framework-15], Description:[DataX插件切分出错, 该问题通常是由于DataX各个插件编程错误引起,请联系DataX开发团队解决]. - Code:[Framework-15], Description:[DataX插件切分出错, 该问题通常是由于DataX各个插件编程错误引起,请联系DataX开发团队解决]. - reader切分的task数目[3]不等于writer切分的task数目[1].

Code change:
// @Override
// public List<Configuration> split(int adviceNumber)
// {
//     Configuration readerSliceConfig = super.getPluginJobConf();
//     List<Configuration> splittedConfigs = new ArrayList<>();
//     splittedConfigs.add(readerSliceConfig);
//     return splittedConfigs;
// }

    @Override
    public List<Configuration> split(int mandatoryNumber)
    {
        // fix: make the multi-channel setting take effect
        LOG.warn("-------------------------split [{}] begin...", mandatoryNumber);
        List<Configuration> configurations = new ArrayList<>(mandatoryNumber);
        for (int i = 0; i < mandatoryNumber; i++) {
            configurations.add(this.originalConfig.clone());
        }
        return configurations;
    }

Preliminary testing with multiple channels configured shows no errors.

Could you provide a pre-built package?

Many dependencies simply cannot be downloaded without a proxy. It would be great if a pre-built package could be provided directly, for example on Baidu Netdisk.

Transfer speed suddenly drops to 0, mainly seen with oracle -> hdfs

There are currently two situations. In one, a subtask suddenly hits a connection reset in the middle of the run, and the job then hangs on the last subtask.
In the other, there is no error at all; the job simply freezes, and waitReaderTime also stops increasing.
(screenshot)

I saw a closed issue in the official DataX repository suggesting the cause may be:
this often happens when hdfswriter's task.init() hangs while initializing the file system in hdfsHelper.getFileSystem(defaultFS, writerSliceConfig)
A way to reproduce it: in Writer.Task.init() or prepare(), add
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
e.printStackTrace();
}
which causes the reader to finish early and leads to a serious bug.
But this is most likely not the problem I am running into.

java.sql.SQLException: Operation not allowed after ResultSet closed

Describe the bug
When I try to ingest MySQL content and print it to the console, I get java.sql.SQLException: Operation not allowed after ResultSet closed.

To Reproduce

job json

{
    "job": {
        "setting": {
            "speed": {
                "channel": 3
            }
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "root",
                        "password": "****",
                        "column": [
                            "id","`date`","`table`"
                        ],
                        "connection": [
                            {
                                "table": [
                                    "tbl_test"
                                ],
                                "jdbcUrl": ["jdbc:mysql://127.0.0.1:3306/test?useSSL=false"]
                            }
                        ]
                    }
                },
                "writer": {
                    "name": "streamwriter",
                    "parameter": { "print":true
                    }
                }
            }
        ]
    }
}

output

....
2020-11-21 14:42:54.947 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2020-11-21 14:42:54.949 [0-0-0-reader] INFO  CommonRdbmsReader$Task - Begin to read record by Sql: [select id,`date`,`table` from tbl_test
] jdbcUrl:[jdbc:mysql://127.0.0.1:3306/test?useSSL=false&yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2020-11-21 14:42:54.965 [0-0-0-reader] ERROR ReaderRunner - Reader runner Received Exceptions:
com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-07], Description:[读取数据库数据失败. 请检查您的配置的 column/table/where/querySql或者向 DBA 寻求帮助.].  - 执行的SQL为: select id,`date`,`table` from tbl_test  具体错误信息为:java.sql.SQLException: Operation not allowed after ResultSet closed
	at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:31)
	at com.alibaba.datax.plugin.rdbms.util.RdbmsException.asQueryException(RdbmsException.java:88)
	at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.startRead(CommonRdbmsReader.java:218)
	at com.alibaba.datax.plugin.reader.mysqlreader.MysqlReader$Task.startRead(MysqlReader.java:91)
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:63)
	at java.lang.Thread.run(Thread.java:748)
Exception in thread "taskGroup-0" com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-07], Description:[读取数据库数据失败. 请检查您的配置的 column/table/where/querySql或者向 DBA 寻求帮助.].  - 执行的SQL为: select id,`date`,`table` from tbl_test  具体错误信息为:java.sql.SQLException: Operation not allowed after ResultSet closed
	at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:31)
	at com.alibaba.datax.plugin.rdbms.util.RdbmsException.asQueryException(RdbmsException.java:88)
	at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.startRead(CommonRdbmsReader.java:218)
	at com.alibaba.datax.plugin.reader.mysqlreader.MysqlReader$Task.startRead(MysqlReader.java:91)
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:63)
	at java.lang.Thread.run(Thread.java:748)
2020-11-21 14:42:57.947 [job-0] ERROR JobContainer - 运行scheduler出错.
2020-11-21 14:42:57.949 [job-0] INFO  StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 0.00%
2020-11-21 14:42:57.949 [job-0] ERROR Engine - Code:[DBUtilErrorCode-07], Description:[读取数据库数据失败. 请检查您的配置的 column/table/where/querySql或者向 DBA 寻求帮助.].  - Code:[DBUtilErrorCode-07], Description:[读取数据库数据失败. 请检查您的配置的 column/table/where/querySql或者向 DBA 寻求帮助.].  - 执行的SQL为: select id,`date`,`table` from tbl_test  具体错误信息为:java.sql.SQLException: Operation not allowed after ResultSet closed

Running Information

  • OS: MacOS 11.1 (Big Sur)
  • JDK Version: 1.8.0_231
  • DataX Version 3.1.3

Compile and package error

Compilation fails with the following error. What could be the cause?
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:3.3.0:single (release) on project datax-all: Failed to create assembly: Error creating assembly archive release: archive cannot be empty -> [Help 1]

Build problem

[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.726 s
[INFO] Finished at: 2020-11-17T15:54:12+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:3.0.0:exec (run-sphinx) on project datax-docs: Command execution failed.: Process exited with an error: 127 (Exit value: 127) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :datax-docs

What is the cause of this?

txtfilereader reports java.lang.NoClassDefFoundError: org/apache/hadoop/io/compress/CompressionCodec

     "reader": {
          "name": "txtfilereader",
          "parameter": {
              "path": ["/home/go/da/*"],
              "encoding": "UTF-8",
              "skipHeader": true,
              "column": [
                  {

                  {
                      "index": 27,
                      "type": "long"
                  }
              ],
              "fieldDelimiter": ",",
              "csvReaderConfig":{
                "safetySwitch": false,
                "skipEmptyRecords": false,
                "useTextQualifier": false
              }
          }
      },
2021-03-16 04:49:17.438 [main] INFO  PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2021-03-16 04:49:17.439 [main] INFO  JobContainer - DataX jobContainer starts job.
2021-03-16 04:49:17.447 [main] INFO  JobContainer - Set jobId = 0
2021-03-16 04:49:17.802 [job-0] INFO  JobContainer - PerfTrace not enable!
Exception in thread "job-0" java.lang.NoClassDefFoundError: org/apache/hadoop/io/compress/CompressionCodec
        at com.wgzhao.datax.plugin.reader.txtfilereader.TxtFileReader$Job.init(TxtFileReader.java:68)
        at com.wgzhao.datax.core.job.JobContainer.initJobReader(JobContainer.java:661)
        at com.wgzhao.datax.core.job.JobContainer.init(JobContainer.java:306)
        at com.wgzhao.datax.core.job.JobContainer.start(JobContainer.java:113)
        at com.wgzhao.datax.core.Engine.start(Engine.java:86)
        at com.wgzhao.datax.core.Engine.entry(Engine.java:146)
        at com.wgzhao.datax.core.Engine.main(Engine.java:164)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.io.compress.CompressionCodec
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 7 more

Is Python 3.7+ supported?

I downloaded the official master source and ran python3 datax.py ../job/job.json.
It reports: Missing parentheses in call to 'print'. Did you mean print(readerRef)?
Does this version support Python 3.7?

Additional plugins

I hope the platform can integrate more data sources. Here are some suggestions for reference; I hope this project can become a comprehensive, all-in-one tool.

DataX Redis writer:
https://blog.csdn.net/u013289115/article/details/106277937

DataX Kudu writer:
https://github.com/amygbj/kuduwriter

DataX Kudu reader:
alibaba/DataX#538

GpdbWriter, which writes with COPY FROM:
https://github.com/HashDataInc/DataX/tree/master/gpdbwriter

InfluxDBReader:
https://github.com/wowiscrazy/InfluxDBReader-DataX

InfluxDB writer: no implementation found yet (InfluxDB is a promising database for IoT; I hope it can be supported).

Two pieces of feedback

1. Could SqlServerReader add WITH(NOLOCK) to avoid triggering query locks during long reads?

2. Could ClickHouseWriter fill in default values when writing? ClickHouse tables are usually created without nullable columns, so numeric types default to 0, strings to the empty string, dates to '1970-01-01', and datetimes to '1970-01-01 08:00:00'.

Is there a way to tackle an Elasticsearch reader?

For example, allow DSL to be written in the SQL field, or require ES 6.3 or above, or integrate an existing plugin.
The current need is to periodically sync incremental data from ES to ClickHouse. The current workaround is to schedule logstash to export a CSV and then import the CSV into ClickHouse, which is rather cumbersome. I hope this task can be completed within the DataX framework.
Thanks!

Please add ODPS reader and writer plugins

Our business needs to read Parquet files and write them to ODPS, so I would like to ask how to add ODPS reader and writer plugins. Thanks!

influxdb: column order of reader

Describe the bug
We cannot assume the column order of the response returned by InfluxDB, so we need to specify the order in the config,
like this:

"reader": {
                    "name": "influxdbreader",
                    "parameter": {
                        "column": [
                            "user_name",
                            "time",
                            "user_id",
                        ],
                        "connection": [
                            {
                                "endpoint": "http://localhost:8086",
                                "database": "datax",
                                "table": "datax_tb2",
                            }
                        ],
                        "username": "jdoe",
                        "password": "abcABC123!@#",
                        "querySql": "select user_name, user_id, xxxxxxxx from  datax_tb2",
                    }
                },

With this config, the output records should follow this column order: [user_name, time, user_id].

Passing complex data types between heterogeneous sources and targets

As the title says: extract-transform-load for complex data types such as arrays, nested structures, Map, JSON, GeoPoint, and so on.
How well do the data sources that have already been adapted support these types? Please consider reviewing and improving this area.
Thanks.

ParquetReader runs out of memory on large files

Does your ParquetReader (AvroParquetReader) run out of memory when reading large files (around 20 GB)? I have used an approach found online similar to an OrcReader (using a Serde and FileInputFormat); reading a 20 GB Parquet file that way runs out of memory, forcing me to keep increasing memory and lowering the number of concurrent threads.
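
For reference, a minimal sketch (assuming parquet-avro and the Hadoop client are on the classpath; the path is illustrative) of streaming a Parquet file record by record with AvroParquetReader, so memory use is bounded by row-group size rather than total file size:

import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.hadoop.ParquetReader;

try (ParquetReader<GenericRecord> reader =
        AvroParquetReader.<GenericRecord>builder(new Path("hdfs://nameservice1/tmp/big.parquet")).build()) {
    GenericRecord record;
    while ((record = reader.read()) != null) {
        // convert one record at a time and hand it to the writer channel
    }
}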

influxdb: write batch size, and chunked queries when reading

Describe the bug

Hi, I ran into a bug and also have a suggestion.

  1. In the InfluxDB writer configuration, if the batch size is larger than the number of records the reader actually provides, the data never gets written to InfluxDB. Fix: outside (after) the for loop at https://github.com/wgzhao/DataX/blob/e99f3f634a191a4623889cc793217adec8964209/plugin/writer/influxdbwriter/src/main/java/com/wgzhao/datax/plugin/writer/influxdbwriter/InfluxDBWriterTask.java#L131 add the following code to write out the remaining points:
if (influxDB.isBatchEnabled()) {
                influxDB.flush();
}
  2. When reading, could chunked transfer be used? For example, call the chunked version of query at https://github.com/wgzhao/DataX/blob/e99f3f634a191a4623889cc793217adec8964209/plugin/reader/influxdbreader/src/main/java/com/wgzhao/datax/plugin/reader/influxdbreader/InfluxDBReaderTask.java#L106.

Add custom columns with fixed values when DataX reads data

I hope a feature can be added so that when DataX reads data, custom columns and values can be appended. For example, the source table does not have a send_date column but the target table does; I would like to add this column manually and pass a fixed value through.

Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.Snappy

Hi author, when I use this DataX to collect HDFS parquet+snappy files, I hit an issue. Can you help me find the reason? Thank you.

The Hive snappy setting is:
set hive.intermediate.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;

My job.json is:

{
"job": {
"setting": {
"speed": {
"channel": 3
}
},
"content": [
{
"reader": {
"name": "hdfsreader",
"parameter": {
"path": "******",
"defaultFS": "hdfs://nameservice1",
"column": [
{
"index": 0,
"type": "string"
},
{
"index": 1,
"type": "string"
},
{
"index": 2,
"type": "string"
},
{
"index": 3,
"type": "string"
},
{
"index": 4,
"type": "string"
},
{
"index": 5,
"type": "string"
},
{
"index": 6,
"type": "string"
},
{
"index": 7,
"type": "string"
},
{
"index": 8,
"type": "string"
},
{
"index": 9,
"type": "string"
},
{
"index": 10,
"type": "string"
},
{
"index": 11,
"type": "string"
},
{
"index": 12,
"type": "string"
},
{
"index": 13,
"type": "string"
},
{
"index": 14,
"type": "string"
},
{
"index": 15,
"type": "string"
},
{
"index": 16,
"type": "string"
},
{
"index": 17,
"type": "string"
},
{
"index": 18,
"type": "string"
},
{
"index": 19,
"type": "string"
},
{
"index": 20,
"type": "string"
},
{
"index": 21,
"type": "string"
},
{
"index": 22,
"type": "string"
},
{
"index": 23,
"type": "string"
},
{
"index": 24,
"type": "string"
},
{
"index": 25,
"type": "string"
},
{
"index": 26,
"type": "string"
},
{
"index": 27,
"type": "string"
}
],
"fileType": "parquet",
"encoding": "UTF-8",
"fieldDelimiter": ",",
"compress":"hadoop-snappy"
}

            },
             "writer": {
      "name": "elasticsearchwriter",
      "parameter": {
        "endpoint": "http://********:9200",
        "index": "supp_yunc_recpt_fct_test",
        "type": "type1",
        "cleanup": false,
        "settings": {"index" :{"number_of_shards": 2, "number_of_replicas": 1}},
        "discovery": false,
        "batchSize": 1000,
        "splitter": ",",
        "column": [
          {"name": "id", "type": "id"},
          { "name": "pur_doc_id","type": "keyword" },
          { "name": "goodsid","type": "keyword" },
          { "name": "appt_id","type": "keyword" },
          { "name": "compt_id","type": "keyword" },
          { "name": "appt_sts","type": "keyword" },
          { "name": "del_no","type": "keyword" },
          { "name": "is_gift","type": "keyword" },
          { "name": "qty","type": "keyword" },
          { "name": "qty_appt","type": "keyword" },
          { "name": "qty_proce","type": "keyword" },
          { "name": "qty_proce_base","type": "keyword" },
          { "name": "qty_compt","type": "keyword" },
          { "name": "appt_time","type": "keyword" },
          { "name": "price","type": "keyword" },
          { "name": "price_no_tax","type": "keyword" },
          { "name": "sap_del_no","type": "keyword" },
          { "name": "sap_rownum","type": "keyword" },
          { "name": "act_time","type": "keyword" },
          { "name": "remark","type": "keyword" },
          { "name": "create_time","type": "keyword" },
          { "name": "creator","type": "keyword" },
          { "name": "updated_time","type": "keyword" },
          { "name": "updated_by","type": "keyword" },
          { "name": "last_updated_time","type": "keyword" },
          { "name": "insert_time","type": "keyword" },
          { "name": "sdt","type": "keyword" }
        ]
      }
    }
  }
]

}
}

The error is:
Exception in thread "job-0" java.lang.NoClassDefFoundError: org/xerial/snappy/Snappy
at org.apache.parquet.hadoop.codec.SnappyDecompressor.decompress(SnappyDecompressor.java:62)
at org.apache.parquet.hadoop.codec.NonBlockedDecompressorStream.read(NonBlockedDecompressorStream.java:51)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at org.apache.parquet.bytes.BytesInput$StreamBytesInput.toByteArray(BytesInput.java:279)
at org.apache.parquet.bytes.BytesInput.toByteBuffer(BytesInput.java:230)
at org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary.<init>(PlainValuesDictionary.java:91)
at org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary.<init>(PlainValuesDictionary.java:74)
at org.apache.parquet.column.Encoding$1.initDictionary(Encoding.java:88)
at org.apache.parquet.column.Encoding$4.initDictionary(Encoding.java:147)
at org.apache.parquet.column.impl.ColumnReaderBase.<init>(ColumnReaderBase.java:383)
at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:46)
at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:84)
at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:271)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:147)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:109)
at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:165)
at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:137)
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:132)
at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:136)
at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.isParquetFile(DFSUtil.java:893)
at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.checkHdfsFileType(DFSUtil.java:741)
at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.addSourceFileByType(DFSUtil.java:222)
at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.addSourceFileIfNotEmpty(DFSUtil.java:152)
at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.getHDFSAllFilesNORegex(DFSUtil.java:209)
at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.getHDFSAllFiles(DFSUtil.java:179)
at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.getAllFiles(DFSUtil.java:141)
at com.alibaba.datax.plugin.reader.hdfsreader.HdfsReader$Job.prepare(HdfsReader.java:172)
at com.alibaba.datax.core.job.JobContainer.prepareJobReader(JobContainer.java:702)
at com.alibaba.datax.core.job.JobContainer.prepare(JobContainer.java:312)
at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:115)
at com.alibaba.datax.core.Engine.start(Engine.java:90)
at com.alibaba.datax.core.Engine.entry(Engine.java:151)
at com.alibaba.datax.core.Engine.main(Engine.java:169)
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.Snappy
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 36 more

Could insert/update (upsert) support be added for Oracle and PostgreSQL?

Oracle can implement it with the MERGE INTO syntax.
PostgreSQL can implement it with the INSERT INTO ... ON CONFLICT syntax.
Blog posts for reference:
https://blog.csdn.net/u013308496/article/details/106881907
https://blog.csdn.net/u013308496/article/details/106876691
However, those changes are not perfect: they break the original insert behavior, so afterwards only update can be configured.
I would also like to ask how to debug plugin code. Does each plugin have to be packaged and then imported before debugging? I am not clear on this; please advise.
Currently I follow these two blog posts:
https://www.jianshu.com/p/01672e5ea1b6
https://blog.csdn.net/pharos/article/details/104147381
to integrate and debug DataX. If there is a better way, please let me know. Thanks.

Error resolving the org.glassfish:javax.el:pom:3.0.1-b06-SNAPSHOT dependency

Thanks for the update; I will test it this afternoon.
1. The project fails to resolve the org.glassfish:javax.el:pom:3.0.1-b06-SNAPSHOT dependency. The error message is:
Could not find artifact org.glassfish:javax.el:pom:3.0.1-b06-SNAPSHOT in apache release
(https://repository.apache.org/content/repositories/releases/)
I checked and the project should now depend on version 3.0.1-b12; this error does not affect compiling and packaging.
2. Please also upgrade the MySQL driver and update the driver class accordingly: MySql("mysql", "com.mysql.cj.jdbc.Driver").

influxdb writer: the value is not added to the point's tags for TAG-type columns

Describe the bug

When a column is declared with the TAG type, its value is not added to the point's tags:
https://github.com/wgzhao/DataX/blob/e99f3f634a191a4623889cc793217adec8964209/plugin/writer/influxdbwriter/src/main/java/com/wgzhao/datax/plugin/writer/influxdbwriter/InfluxDBWriterTask.java#L152-L154
I think we should call builder.tag(name, column.asString()); instead.
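
A small sketch of the reporter's suggestion (illustrative names, not the actual InfluxDBWriterTask code): when the configured column type is TAG, attach the value to the point as a tag instead of dropping it.

import org.influxdb.dto.Point;

static void addColumn(Point.Builder builder, String columnType, String name, String value)
{
    if ("TAG".equalsIgnoreCase(columnType)) {
        builder.tag(name, value);       // tags are always strings in InfluxDB
    }
    else {
        builder.addField(name, value);  // everything else goes in as a field
    }
}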

Could support for PostgreSQL array types be added?

The rough idea is to convert the array to a string when reading; when writing, if the string does not match the array literal format (no curly braces), report a proper error.
My own coding ability is limited, so I can only offer the suggestion. The SQL involved looks like this:
CREATE table test_array(a int,b int[],c varchar[]);
insert into test_array VALUES(1,'{1,2}','{a,b,c}');

Also, the DataX fork maintained by HashData implements read/write support for Greenplum very well; it may be worth borrowing from to extend the range of databases this project supports:
https://github.com/HashDataInc/DataX
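
A rough sketch of the read-side idea above (the column name and helper are illustrative): fetch the PostgreSQL array through JDBC and render it back into the '{...}' literal form as a plain string, which can later be written into an array column again.

import java.sql.Array;
import java.sql.ResultSet;
import java.sql.SQLException;

// Render a PostgreSQL array column (e.g. int[] or varchar[]) as the '{a,b,c}' literal string.
static String arrayColumnAsLiteral(ResultSet resultSet, String columnName) throws SQLException
{
    Array pgArray = resultSet.getArray(columnName);
    Object[] elements = (Object[]) pgArray.getArray();
    StringBuilder literal = new StringBuilder("{");
    for (int i = 0; i < elements.length; i++) {
        if (i > 0) {
            literal.append(',');
        }
        literal.append(elements[i]);
    }
    return literal.append('}').toString();   // e.g. "{1,2}" or "{a,b,c}"
}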

hdfsreader plugin errors when reading a text file

Describe the bug

The hdfsreader plugin reports an error when reading a text file.

The job json file is as follows:

{
    "job": {
        "setting": {
            "speed": {
                "byte": -1,
                "channel": 1
            }
        },
        "content": [
            {
                "writer": {
                    "name": "streamwriter",
                    "parameter": {
                        "print": "true"
                    }
                },
                "reader": {
                    "name": "hdfsreader",
                    "parameter": {
                        "column": [
                            {
                                "index": 0,
                                "type": "string"
                            },
                            {
                                "index": 1,
                                "type": "long"
                            },
                            {
                                "index": 2,
                                "type": "date"
                            },
                            {
                                "index": 3,
                                "type": "boolean"
                            },
                            {
                                "index": 4,
                                "type": "string"
                            }
                        ],
                        "defaultFS": "hdfs://sandbox-hdp.hortonworks.com:8020",
                        "path": "/tmp/out_orc",
                        "fileType": "text",
                        "fieldDelimiter": "\u0001",
                        "fileName": "test_none",
                        "encoding": "UTF-8",
                    }
                }
            }
        ]
    }
}

The output is as follows:

....
2020-12-11 21:27:24.903 [job-0] INFO  DFSUtil - get HDFS all files in path = [/tmp/out_orc]
2020-12-11 21:27:26.459 [job-0] ERROR DFSUtil - 检查文件[hdfs://sandbox-hdp.hortonworks.com:8020/tmp/out_orc/test_none__48bb0c2c_c520_4406_ab12_8039dc277296]类型失败,目前支持ORC,SEQUENCE,RCFile,TEXT,CSV五种格式的文件,请检查您文件类型和文件是否正确。
2020-12-11 21:27:26.472 [job-0] INFO  StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 0.00%
2020-12-11 21:27:26.474 [job-0] ERROR Engine - Code:[HdfsReader-10], Description:[读取文件出错].  - Code:[HdfsReader-10], Description:[读取文件出错].  - 检查文件[hdfs://sandbox-hdp.hortonworks.com:8020/tmp/out_orc/test_none__48bb0c2c_c520_4406_ab12_8039dc277296]类型失败,目前支持ORC,SEQUENCE,RCFile,TEXT,CSV五种格式的文件,请检查您文件类型和文件是否正确。 - java.lang.RuntimeException: hdfs://sandbox-hdp.hortonworks.com:8020/tmp/out_orc/test_none__48bb0c2c_c520_4406_ab12_8039dc277296 is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [101, 115, 116, 10]
	at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:531)
	at org.apache.parquet.hadoop.ParquetFileReader.<init>(ParquetFileReader.java:712)
	at org.apache.parquet.hadoop.ParquetFileReader.open(ParquetFileReader.java:609)
	at org.apache.parquet.hadoop.ParquetReader.initReader(ParquetReader.java:152)
	at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:135)
	at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.isParquetFile(DFSUtil.java:893)
	at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.checkHdfsFileType(DFSUtil.java:724)
	at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.addSourceFileByType(DFSUtil.java:222)
	at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.addSourceFileIfNotEmpty(DFSUtil.java:152)
	at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.getHDFSAllFilesNORegex(DFSUtil.java:209)
	at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.getHDFSAllFiles(DFSUtil.java:179)
	at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.getAllFiles(DFSUtil.java:141)
	at com.alibaba.datax.plugin.reader.hdfsreader.HdfsReader$Job.prepare(HdfsReader.java:172)
	at com.alibaba.datax.core.job.JobContainer.prepareJobReader(JobContainer.java:702)
	at com.alibaba.datax.core.job.JobContainer.prepare(JobContainer.java:312)
	at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:115)
	at com.alibaba.datax.core.Engine.start(Engine.java:90)
	at com.alibaba.datax.core.Engine.entry(Engine.java:151)
	at com.alibaba.datax.core.Engine.main(Engine.java:169)

Environment

  • OS: CentOS 7.7.1908
  • JDK Version: openjdk 14
  • DataX Version: 3.1.4

influxdbreader call times out

Hello, when calling the influxdbreader a timeout error is returned, and the logs do not reveal the cause; the debug log is below.
When querying InfluxDB with the curl tool the expected data is returned, so network and endpoint connectivity problems can be ruled out.

$ curl -G  'http://192.168.1.1:8086/query?db=njmon&u=monitor&p=xxxx@influx2021' --data-urlencode 'q=show measurements'
{"results":[{"statement_id":0,"series":[{"name":"measurements","columns":["name"],"values":[["NFS_totals"],["NFSv3client"],["config"],["cpu_details"],["cpu_logical"],["cpu_logical_total"],["cpu_physical"],["cpu_physical_total"],["cpu_physical_total_spurr"],["cpu_syscalls"],["cpu_util"],["disk_adapters"],["disk_total"],["disks"],["fc_adapters"],["filesystems"],["identity"],["kernel"],["logicalvolumes"],["long_sql"],["lpar_format1"],["lpar_format2"],["memory"],["memory_page"],["netbuffers"],["network_adapters"],["network_interfaces"],["network_total"],["paging_spaces"],["partition_type"],["rperf"],["server"],["timestamp"],["uptime"],["uptime_output"],["vminfo"],["volumegroups"]]}]}]}
2021-03-11 09:27:44.456 [main] INFO  VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2021-03-11 09:27:44.462 [main] DEBUG Engine - the machine info  => 

	osInfo: 	Linux amd64 3.10.0-1062.18.1.el7.x86_64
	jvmInfo:	Oracle Corporation 1.8 25.242-b08
	cpu num:	64

	totalPhysicalMemory:	-0.00G
	freePhysicalMemory:	-0.00G
	maxFileDescriptorCount:	-1
	currentOpenFileDescriptorCount:	-1

	GC Names	[PS MarkSweep, PS Scavenge]

	MEMORY_NAME                    | allocation_size                | init_size                      
	PS Eden Space                  | 677.50MB                       | 16.00MB                        
	Code Cache                     | 240.00MB                       | 2.44MB                         
	Compressed Class Space         | 1,024.00MB                     | 0.00MB                         
	PS Survivor Space              | 2.50MB                         | 2.50MB                         
	PS Old Gen                     | 1,365.50MB                     | 43.00MB                        
	Metaspace                      | -0.00MB                        | 0.00MB                         


2021-03-11 09:27:44.475 [main] INFO  Engine - 
{
	"content":[
		{
			"reader":{
				"parameter":{
					"password":"*****",
					"column":[
						"*"
					],
					"connection":[
						{
							"endpoint":"http://192.168.1.1:8086",
							"database":"njmon",
							"table":"config"
						}
					],
					"username":"monitor"
				},
				"name":"influxdbreader"
			},
			"writer":{
				"parameter":{
					"password":"*****",
					"column":[
						"\"OSBUILD\"",
						"\"OSNAME\"",
						"\"OSVERSION\"",
						"\"ACTIVECPUSINPOOL\"",
						"\"AME_TARGETMEMEXPFACTOR\"",
						"\"AME_TARGETMEMEXPSIZE\"",
						"\"AMS_HYPERPGSIZE\"",
						"\"AMS_MEMPOOLID\"",
						"\"AMS_TOTIOMEMENT\"",
						"\"CPUCAP_DESIRED\"",
						"\"CPUCAP_MAX\"",
						"\"CPUCAP_MIN\"",
						"\"CPUCAP_ONLINE\"",
						"\"CPUCAP_WEIGHTAGE\"",
						"\"CPUPOOL_WEIGHTAGE\"",
						"\"DRIVES\"",
						"\"ENTITLED_PROC_CAPACITY\"",
						"\"ENTPOOLCAP\"",
						"\"EXPANDED_MEM_DESIRED\"",
						"\"EXPANDED_MEM_MAX\"",
						"\"EXPANDED_MEM_MIN\"",
						"\"EXPANDED_MEM_ONLINE\"",
						"\"LCPUS\"",
						"\"MACHINEID\"",
						"\"MAXPOOLCAP\"",
						"\"MEM_DESIRED\"",
						"\"MEM_MAX\"",
						"\"MEM_MIN\"",
						"\"MEM_ONLINE\"",
						"\"MEM_WEIGHTAGE\"",
						"\"NODENAME\"",
						"\"NW_ADAPTER\"",
						"\"PARTITIONNAME\"",
						"\"PCPU_MAX\"",
						"\"PCPU_ONLINE\"",
						"\"PROCESSORFAMILY\"",
						"\"PROCESSORMHZ\"",
						"\"PROCESSORMODEL\"",
						"\"PROCESSOR_POOLID\"",
						"\"SHAREDPCPU\"",
						"\"SMTTHREADS\"",
						"\"SUBPROCESSOR_MODE\"",
						"\"VCPUS_DESIRED\"",
						"\"VCPUS_MAX\"",
						"\"VCPUS_MIN\"",
						"\"VCPUS_ONLINE\""
					],
					"connection":[
						{
							"jdbcUrl":"jdbc:oracle:thin:@//192.168.2.2:1521/dwdb",
							"table":[
								"CONFIG"
							]
						}
					],
					"username":"njmon"
				},
				"name":"oraclewriter"
			}
		}
	],
	"setting":{
		"errorLimit":{
			"record":0,
			"percentage":0.02
		},
		"speed":{
			"bytes":-1,
			"channel":1
		}
	}
}

2021-03-11 09:27:44.476 [main] DEBUG Engine - {"core":{"container":{"trace":{"enable":"false"},"job":{"sleepInterval":3000,"mode":"standalone","reportInterval":10000,"id":-1},"taskGroup":{"channel":5}},"dataXServer":{"address":""},"transport":{"exchanger":{"class":"com.alibaba.datax.core.plugin.BufferedRecordExchanger","bufferSize":32},"channel":{"byteCapacity":67108864,"flowControlInterval":20,"class":"com.alibaba.datax.core.transport.channel.memory.MemoryChannel","speed":{"byte":-1,"record":-1},"capacity":512}},"statistics":{"collector":{"plugin":{"taskClass":"com.alibaba.datax.core.statistics.plugin.task.StdoutPluginCollector","maxDirtyNumber":10}}}},"entry":{"jvm":"-Xms32M -Xmx1G"},"common":{"column":{"dateFormat":"yyyy-MM-dd","datetimeFormat":"yyyy-MM-dd HH:mm:ss","timeFormat":"HH:mm:ss","extraFormats":["yyyyMMdd"],"timeZone":"PRC","encoding":"utf-8"}},"plugin":{"reader":{"influxdbreader":{"path":"/u01/datax/datax/plugin/reader/influxdbreader","name":"influxdbreader","description":"read from InfluxDB table","developer":"wgzhao","class":"com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReader"}},"writer":{"oraclewriter":{"path":"/u01/datax/datax/plugin/writer/oraclewriter","name":"oraclewriter","description":"useScene: prod. mechanism: Jdbc connection using the database, execute insert sql. warn: The more you know about the database, the less problems you encounter.","developer":"alibaba","class":"com.alibaba.datax.plugin.writer.oraclewriter.OracleWriter"}}},"job":{"content":[{"reader":{"parameter":{"password":"xxxx@influx2021","column":["*"],"connection":[{"endpoint":"http://192.168.1.1:8086","database":"njmon","table":"config"}],"username":"monitor"},"name":"influxdbreader"},"writer":{"parameter":{"password":"njmon","column":["\"OSBUILD\"","\"OSNAME\"","\"OSVERSION\"","\"ACTIVECPUSINPOOL\"","\"AME_TARGETMEMEXPFACTOR\"","\"AME_TARGETMEMEXPSIZE\"","\"AMS_HYPERPGSIZE\"","\"AMS_MEMPOOLID\"","\"AMS_TOTIOMEMENT\"","\"CPUCAP_DESIRED\"","\"CPUCAP_MAX\"","\"CPUCAP_MIN\"","\"CPUCAP_ONLINE\"","\"CPUCAP_WEIGHTAGE\"","\"CPUPOOL_WEIGHTAGE\"","\"DRIVES\"","\"ENTITLED_PROC_CAPACITY\"","\"ENTPOOLCAP\"","\"EXPANDED_MEM_DESIRED\"","\"EXPANDED_MEM_MAX\"","\"EXPANDED_MEM_MIN\"","\"EXPANDED_MEM_ONLINE\"","\"LCPUS\"","\"MACHINEID\"","\"MAXPOOLCAP\"","\"MEM_DESIRED\"","\"MEM_MAX\"","\"MEM_MIN\"","\"MEM_ONLINE\"","\"MEM_WEIGHTAGE\"","\"NODENAME\"","\"NW_ADAPTER\"","\"PARTITIONNAME\"","\"PCPU_MAX\"","\"PCPU_ONLINE\"","\"PROCESSORFAMILY\"","\"PROCESSORMHZ\"","\"PROCESSORMODEL\"","\"PROCESSOR_POOLID\"","\"SHAREDPCPU\"","\"SMTTHREADS\"","\"SUBPROCESSOR_MODE\"","\"VCPUS_DESIRED\"","\"VCPUS_MAX\"","\"VCPUS_MIN\"","\"VCPUS_ONLINE\""],"connection":[{"jdbcUrl":"jdbc:oracle:thin:@//192.168.2.2:1521/dwdb","table":["CONFIG"]}],"username":"njmon"},"name":"oraclewriter"}}],"setting":{"errorLimit":{"record":0,"percentage":0.02},"speed":{"bytes":-1,"channel":1}}}}
2021-03-11 09:27:44.488 [main] INFO  PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2021-03-11 09:27:44.488 [main] INFO  JobContainer - DataX jobContainer starts job.
2021-03-11 09:27:44.489 [main] DEBUG JobContainer - jobContainer starts to do preHandle ...
2021-03-11 09:27:44.489 [main] DEBUG JobContainer - jobContainer starts to do init ...
2021-03-11 09:27:44.490 [main] INFO  JobContainer - Set jobId = 0
2021-03-11 09:27:44.990 [job-0] INFO  OriginalConfPretreatmentUtil - table:[CONFIG] all columns:[OSBUILD,OSNAME,OSVERSION,ACTIVECPUSINPOOL,AME_TARGETMEMEXPFACTOR,AME_TARGETMEMEXPSIZE,AMS_HYPERPGSIZE,AMS_MEMPOOLID,AMS_TOTIOMEMENT,CPUCAP_DESIRED,CPUCAP_MAX,CPUCAP_MIN,CPUCAP_ONLINE,CPUCAP_WEIGHTAGE,CPUPOOL_WEIGHTAGE,DRIVES,ENTITLED_PROC_CAPACITY,ENTPOOLCAP,EXPANDED_MEM_DESIRED,EXPANDED_MEM_MAX,EXPANDED_MEM_MIN,EXPANDED_MEM_ONLINE,LCPUS,MACHINEID,MAXPOOLCAP,MEM_DESIRED,MEM_MAX,MEM_MIN,MEM_ONLINE,MEM_WEIGHTAGE,NODENAME,NW_ADAPTER,PARTITIONNAME,PCPU_MAX,PCPU_ONLINE,PROCESSORFAMILY,PROCESSORMHZ,PROCESSORMODEL,PROCESSOR_POOLID,SHAREDPCPU,SMTTHREADS,SUBPROCESSOR_MODE,VCPUS_DESIRED,VCPUS_MAX,VCPUS_MIN,VCPUS_ONLINE].
2021-03-11 09:27:45.053 [job-0] INFO  OriginalConfPretreatmentUtil - Write data [
INSERT INTO %s ("OSBUILD","OSNAME","OSVERSION","ACTIVECPUSINPOOL","AME_TARGETMEMEXPFACTOR","AME_TARGETMEMEXPSIZE","AMS_HYPERPGSIZE","AMS_MEMPOOLID","AMS_TOTIOMEMENT","CPUCAP_DESIRED","CPUCAP_MAX","CPUCAP_MIN","CPUCAP_ONLINE","CPUCAP_WEIGHTAGE","CPUPOOL_WEIGHTAGE","DRIVES","ENTITLED_PROC_CAPACITY","ENTPOOLCAP","EXPANDED_MEM_DESIRED","EXPANDED_MEM_MAX","EXPANDED_MEM_MIN","EXPANDED_MEM_ONLINE","LCPUS","MACHINEID","MAXPOOLCAP","MEM_DESIRED","MEM_MAX","MEM_MIN","MEM_ONLINE","MEM_WEIGHTAGE","NODENAME","NW_ADAPTER","PARTITIONNAME","PCPU_MAX","PCPU_ONLINE","PROCESSORFAMILY","PROCESSORMHZ","PROCESSORMODEL","PROCESSOR_POOLID","SHAREDPCPU","SMTTHREADS","SUBPROCESSOR_MODE","VCPUS_DESIRED","VCPUS_MAX","VCPUS_MIN","VCPUS_ONLINE") VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
], which jdbcUrl like:[jdbc:oracle:thin:@//192.168.2.2:1521/dwdb]
2021-03-11 09:27:45.053 [job-0] DEBUG CommonRdbmsWriter$Job - After job init(), originalConfig now is:[
{"jobid":0,"password":"njmon","column":["\"OSBUILD\"","\"OSNAME\"","\"OSVERSION\"","\"ACTIVECPUSINPOOL\"","\"AME_TARGETMEMEXPFACTOR\"","\"AME_TARGETMEMEXPSIZE\"","\"AMS_HYPERPGSIZE\"","\"AMS_MEMPOOLID\"","\"AMS_TOTIOMEMENT\"","\"CPUCAP_DESIRED\"","\"CPUCAP_MAX\"","\"CPUCAP_MIN\"","\"CPUCAP_ONLINE\"","\"CPUCAP_WEIGHTAGE\"","\"CPUPOOL_WEIGHTAGE\"","\"DRIVES\"","\"ENTITLED_PROC_CAPACITY\"","\"ENTPOOLCAP\"","\"EXPANDED_MEM_DESIRED\"","\"EXPANDED_MEM_MAX\"","\"EXPANDED_MEM_MIN\"","\"EXPANDED_MEM_ONLINE\"","\"LCPUS\"","\"MACHINEID\"","\"MAXPOOLCAP\"","\"MEM_DESIRED\"","\"MEM_MAX\"","\"MEM_MIN\"","\"MEM_ONLINE\"","\"MEM_WEIGHTAGE\"","\"NODENAME\"","\"NW_ADAPTER\"","\"PARTITIONNAME\"","\"PCPU_MAX\"","\"PCPU_ONLINE\"","\"PROCESSORFAMILY\"","\"PROCESSORMHZ\"","\"PROCESSORMODEL\"","\"PROCESSOR_POOLID\"","\"SHAREDPCPU\"","\"SMTTHREADS\"","\"SUBPROCESSOR_MODE\"","\"VCPUS_DESIRED\"","\"VCPUS_MAX\"","\"VCPUS_MIN\"","\"VCPUS_ONLINE\""],"connection":[{"jdbcUrl":"jdbc:oracle:thin:@//192.168.2.2:1521/dwdb","table":["CONFIG"]}],"insertOrReplaceTemplate":"INSERT INTO %s (\"OSBUILD\",\"OSNAME\",\"OSVERSION\",\"ACTIVECPUSINPOOL\",\"AME_TARGETMEMEXPFACTOR\",\"AME_TARGETMEMEXPSIZE\",\"AMS_HYPERPGSIZE\",\"AMS_MEMPOOLID\",\"AMS_TOTIOMEMENT\",\"CPUCAP_DESIRED\",\"CPUCAP_MAX\",\"CPUCAP_MIN\",\"CPUCAP_ONLINE\",\"CPUCAP_WEIGHTAGE\",\"CPUPOOL_WEIGHTAGE\",\"DRIVES\",\"ENTITLED_PROC_CAPACITY\",\"ENTPOOLCAP\",\"EXPANDED_MEM_DESIRED\",\"EXPANDED_MEM_MAX\",\"EXPANDED_MEM_MIN\",\"EXPANDED_MEM_ONLINE\",\"LCPUS\",\"MACHINEID\",\"MAXPOOLCAP\",\"MEM_DESIRED\",\"MEM_MAX\",\"MEM_MIN\",\"MEM_ONLINE\",\"MEM_WEIGHTAGE\",\"NODENAME\",\"NW_ADAPTER\",\"PARTITIONNAME\",\"PCPU_MAX\",\"PCPU_ONLINE\",\"PROCESSORFAMILY\",\"PROCESSORMHZ\",\"PROCESSORMODEL\",\"PROCESSOR_POOLID\",\"SHAREDPCPU\",\"SMTTHREADS\",\"SUBPROCESSOR_MODE\",\"VCPUS_DESIRED\",\"VCPUS_MAX\",\"VCPUS_MIN\",\"VCPUS_ONLINE\") VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)","batchSize":2048,"tableNumber":1,"username":"njmon"}
]
2021-03-11 09:27:45.053 [job-0] DEBUG JobContainer - jobContainer starts to do prepare ...
2021-03-11 09:27:45.054 [job-0] INFO  JobContainer - DataX Reader.Job [influxdbreader] do prepare work .
2021-03-11 09:27:45.054 [job-0] INFO  JobContainer - DataX Writer.Job [oraclewriter] do prepare work .
2021-03-11 09:27:45.054 [job-0] DEBUG CommonRdbmsWriter$Job - After job prepare(), originalConfig now is:[
{"jobid":0,"password":"njmon","column":["\"OSBUILD\"","\"OSNAME\"","\"OSVERSION\"","\"ACTIVECPUSINPOOL\"","\"AME_TARGETMEMEXPFACTOR\"","\"AME_TARGETMEMEXPSIZE\"","\"AMS_HYPERPGSIZE\"","\"AMS_MEMPOOLID\"","\"AMS_TOTIOMEMENT\"","\"CPUCAP_DESIRED\"","\"CPUCAP_MAX\"","\"CPUCAP_MIN\"","\"CPUCAP_ONLINE\"","\"CPUCAP_WEIGHTAGE\"","\"CPUPOOL_WEIGHTAGE\"","\"DRIVES\"","\"ENTITLED_PROC_CAPACITY\"","\"ENTPOOLCAP\"","\"EXPANDED_MEM_DESIRED\"","\"EXPANDED_MEM_MAX\"","\"EXPANDED_MEM_MIN\"","\"EXPANDED_MEM_ONLINE\"","\"LCPUS\"","\"MACHINEID\"","\"MAXPOOLCAP\"","\"MEM_DESIRED\"","\"MEM_MAX\"","\"MEM_MIN\"","\"MEM_ONLINE\"","\"MEM_WEIGHTAGE\"","\"NODENAME\"","\"NW_ADAPTER\"","\"PARTITIONNAME\"","\"PCPU_MAX\"","\"PCPU_ONLINE\"","\"PROCESSORFAMILY\"","\"PROCESSORMHZ\"","\"PROCESSORMODEL\"","\"PROCESSOR_POOLID\"","\"SHAREDPCPU\"","\"SMTTHREADS\"","\"SUBPROCESSOR_MODE\"","\"VCPUS_DESIRED\"","\"VCPUS_MAX\"","\"VCPUS_MIN\"","\"VCPUS_ONLINE\""],"jdbcUrl":"jdbc:oracle:thin:@//192.168.2.2:1521/dwdb","insertOrReplaceTemplate":"INSERT INTO %s (\"OSBUILD\",\"OSNAME\",\"OSVERSION\",\"ACTIVECPUSINPOOL\",\"AME_TARGETMEMEXPFACTOR\",\"AME_TARGETMEMEXPSIZE\",\"AMS_HYPERPGSIZE\",\"AMS_MEMPOOLID\",\"AMS_TOTIOMEMENT\",\"CPUCAP_DESIRED\",\"CPUCAP_MAX\",\"CPUCAP_MIN\",\"CPUCAP_ONLINE\",\"CPUCAP_WEIGHTAGE\",\"CPUPOOL_WEIGHTAGE\",\"DRIVES\",\"ENTITLED_PROC_CAPACITY\",\"ENTPOOLCAP\",\"EXPANDED_MEM_DESIRED\",\"EXPANDED_MEM_MAX\",\"EXPANDED_MEM_MIN\",\"EXPANDED_MEM_ONLINE\",\"LCPUS\",\"MACHINEID\",\"MAXPOOLCAP\",\"MEM_DESIRED\",\"MEM_MAX\",\"MEM_MIN\",\"MEM_ONLINE\",\"MEM_WEIGHTAGE\",\"NODENAME\",\"NW_ADAPTER\",\"PARTITIONNAME\",\"PCPU_MAX\",\"PCPU_ONLINE\",\"PROCESSORFAMILY\",\"PROCESSORMHZ\",\"PROCESSORMODEL\",\"PROCESSOR_POOLID\",\"SHAREDPCPU\",\"SMTTHREADS\",\"SUBPROCESSOR_MODE\",\"VCPUS_DESIRED\",\"VCPUS_MAX\",\"VCPUS_MIN\",\"VCPUS_ONLINE\") VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)","batchSize":2048,"tableNumber":1,"table":"CONFIG","username":"njmon"}
]
2021-03-11 09:27:45.054 [job-0] DEBUG JobContainer - jobContainer starts to do split ...
2021-03-11 09:27:45.054 [job-0] INFO  JobContainer - Job set Channel-Number to 1 channels.
2021-03-11 09:27:45.055 [job-0] INFO  JobContainer - DataX Reader.Job [influxdbreader] splits to [1] tasks.
2021-03-11 09:27:45.055 [job-0] INFO  JobContainer - DataX Writer.Job [oraclewriter] splits to [1] tasks.
2021-03-11 09:27:45.055 [job-0] DEBUG JobContainer - transformer configuration:[] 
2021-03-11 09:27:45.067 [job-0] DEBUG JobContainer - contentConfig configuration:[{"internal":{"reader":{"parameter":{"jobid":0,"password":"xxxx@influx2021","column":["*"],"connection":[{"endpoint":"http://192.168.1.1:8086","database":"njmon","table":"config"}],"username":"monitor"},"name":"influxdbreader"},"writer":{"parameter":{"jobid":0,"password":"njmon","column":["\"OSBUILD\"","\"OSNAME\"","\"OSVERSION\"","\"ACTIVECPUSINPOOL\"","\"AME_TARGETMEMEXPFACTOR\"","\"AME_TARGETMEMEXPSIZE\"","\"AMS_HYPERPGSIZE\"","\"AMS_MEMPOOLID\"","\"AMS_TOTIOMEMENT\"","\"CPUCAP_DESIRED\"","\"CPUCAP_MAX\"","\"CPUCAP_MIN\"","\"CPUCAP_ONLINE\"","\"CPUCAP_WEIGHTAGE\"","\"CPUPOOL_WEIGHTAGE\"","\"DRIVES\"","\"ENTITLED_PROC_CAPACITY\"","\"ENTPOOLCAP\"","\"EXPANDED_MEM_DESIRED\"","\"EXPANDED_MEM_MAX\"","\"EXPANDED_MEM_MIN\"","\"EXPANDED_MEM_ONLINE\"","\"LCPUS\"","\"MACHINEID\"","\"MAXPOOLCAP\"","\"MEM_DESIRED\"","\"MEM_MAX\"","\"MEM_MIN\"","\"MEM_ONLINE\"","\"MEM_WEIGHTAGE\"","\"NODENAME\"","\"NW_ADAPTER\"","\"PARTITIONNAME\"","\"PCPU_MAX\"","\"PCPU_ONLINE\"","\"PROCESSORFAMILY\"","\"PROCESSORMHZ\"","\"PROCESSORMODEL\"","\"PROCESSOR_POOLID\"","\"SHAREDPCPU\"","\"SMTTHREADS\"","\"SUBPROCESSOR_MODE\"","\"VCPUS_DESIRED\"","\"VCPUS_MAX\"","\"VCPUS_MIN\"","\"VCPUS_ONLINE\""],"jdbcUrl":"jdbc:oracle:thin:@//192.168.2.2:1521/dwdb","insertOrReplaceTemplate":"INSERT INTO %s (\"OSBUILD\",\"OSNAME\",\"OSVERSION\",\"ACTIVECPUSINPOOL\",\"AME_TARGETMEMEXPFACTOR\",\"AME_TARGETMEMEXPSIZE\",\"AMS_HYPERPGSIZE\",\"AMS_MEMPOOLID\",\"AMS_TOTIOMEMENT\",\"CPUCAP_DESIRED\",\"CPUCAP_MAX\",\"CPUCAP_MIN\",\"CPUCAP_ONLINE\",\"CPUCAP_WEIGHTAGE\",\"CPUPOOL_WEIGHTAGE\",\"DRIVES\",\"ENTITLED_PROC_CAPACITY\",\"ENTPOOLCAP\",\"EXPANDED_MEM_DESIRED\",\"EXPANDED_MEM_MAX\",\"EXPANDED_MEM_MIN\",\"EXPANDED_MEM_ONLINE\",\"LCPUS\",\"MACHINEID\",\"MAXPOOLCAP\",\"MEM_DESIRED\",\"MEM_MAX\",\"MEM_MIN\",\"MEM_ONLINE\",\"MEM_WEIGHTAGE\",\"NODENAME\",\"NW_ADAPTER\",\"PARTITIONNAME\",\"PCPU_MAX\",\"PCPU_ONLINE\",\"PROCESSORFAMILY\",\"PROCESSORMHZ\",\"PROCESSORMODEL\",\"PROCESSOR_POOLID\",\"SHAREDPCPU\",\"SMTTHREADS\",\"SUBPROCESSOR_MODE\",\"VCPUS_DESIRED\",\"VCPUS_MAX\",\"VCPUS_MIN\",\"VCPUS_ONLINE\") 
VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)","batchSize":2048,"tableNumber":1,"table":"CONFIG","username":"njmon"},"name":"oraclewriter"},"taskId":0},"keys":["writer.parameter.column[43]","writer.parameter.column[11]","writer.parameter.column[31]","writer.parameter.column[1]","writer.parameter.tableNumber","writer.parameter.column[19]","writer.parameter.column[27]","writer.parameter.column[39]","writer.parameter.column[15]","writer.parameter.column[23]","writer.parameter.column[35]","reader.parameter.password","writer.name","reader.name","writer.parameter.column[5]","reader.parameter.connection[0].table","writer.parameter.column[9]","writer.parameter.column[32]","writer.parameter.column[44]","writer.parameter.column[20]","writer.parameter.column[2]","writer.parameter.column[40]","writer.parameter.column[16]","writer.parameter.column[28]","writer.parameter.column[36]","writer.parameter.column[12]","writer.parameter.jobid","writer.parameter.column[24]","writer.parameter.batchSize","writer.parameter.column[6]","reader.parameter.connection[0].endpoint","writer.parameter.username","writer.parameter.column[21]","writer.parameter.column[33]","writer.parameter.column[41]","writer.parameter.jdbcUrl","writer.parameter.column[29]","reader.parameter.jobid","writer.parameter.column[17]","writer.parameter.column[25]","writer.parameter.column[37]","writer.parameter.column[45]","writer.parameter.column[13]","writer.parameter.password","writer.parameter.column[3]","writer.parameter.column[7]","writer.parameter.column[10]","writer.parameter.column[22]","writer.parameter.column[30]","writer.parameter.column[42]","writer.parameter.column[0]","writer.parameter.column[18]","writer.parameter.column[38]","writer.parameter.column[14]","writer.parameter.column[26]","writer.parameter.column[34]","writer.parameter.insertOrReplaceTemplate","writer.parameter.table","reader.parameter.connection[0].database","writer.parameter.column[4]","reader.parameter.username","reader.parameter.column[0]","taskId","writer.parameter.column[8]"],"secretKeyPathSet":[]}] 
2021-03-11 09:27:45.068 [job-0] DEBUG JobContainer - jobContainer starts to do schedule ...
2021-03-11 09:27:45.071 [job-0] INFO  JobContainer - Scheduler starts [1] taskGroups.
2021-03-11 09:27:45.081 [taskGroup-0] DEBUG TaskGroupContainer - taskGroup[0]'s task configs[[{"internal":{"reader":{"parameter":{"jobid":0,"password":"xxxx@influx2021","column":["*"],"connection":[{"endpoint":"http://192.168.1.1:8086","database":"njmon","table":"config"}],"username":"monitor"},"name":"influxdbreader"},"writer":{"parameter":{"jobid":0,"password":"njmon","column":["\"OSBUILD\"","\"OSNAME\"","\"OSVERSION\"","\"ACTIVECPUSINPOOL\"","\"AME_TARGETMEMEXPFACTOR\"","\"AME_TARGETMEMEXPSIZE\"","\"AMS_HYPERPGSIZE\"","\"AMS_MEMPOOLID\"","\"AMS_TOTIOMEMENT\"","\"CPUCAP_DESIRED\"","\"CPUCAP_MAX\"","\"CPUCAP_MIN\"","\"CPUCAP_ONLINE\"","\"CPUCAP_WEIGHTAGE\"","\"CPUPOOL_WEIGHTAGE\"","\"DRIVES\"","\"ENTITLED_PROC_CAPACITY\"","\"ENTPOOLCAP\"","\"EXPANDED_MEM_DESIRED\"","\"EXPANDED_MEM_MAX\"","\"EXPANDED_MEM_MIN\"","\"EXPANDED_MEM_ONLINE\"","\"LCPUS\"","\"MACHINEID\"","\"MAXPOOLCAP\"","\"MEM_DESIRED\"","\"MEM_MAX\"","\"MEM_MIN\"","\"MEM_ONLINE\"","\"MEM_WEIGHTAGE\"","\"NODENAME\"","\"NW_ADAPTER\"","\"PARTITIONNAME\"","\"PCPU_MAX\"","\"PCPU_ONLINE\"","\"PROCESSORFAMILY\"","\"PROCESSORMHZ\"","\"PROCESSORMODEL\"","\"PROCESSOR_POOLID\"","\"SHAREDPCPU\"","\"SMTTHREADS\"","\"SUBPROCESSOR_MODE\"","\"VCPUS_DESIRED\"","\"VCPUS_MAX\"","\"VCPUS_MIN\"","\"VCPUS_ONLINE\""],"jdbcUrl":"jdbc:oracle:thin:@//192.168.2.2:1521/dwdb","insertOrReplaceTemplate":"INSERT INTO %s (\"OSBUILD\",\"OSNAME\",\"OSVERSION\",\"ACTIVECPUSINPOOL\",\"AME_TARGETMEMEXPFACTOR\",\"AME_TARGETMEMEXPSIZE\",\"AMS_HYPERPGSIZE\",\"AMS_MEMPOOLID\",\"AMS_TOTIOMEMENT\",\"CPUCAP_DESIRED\",\"CPUCAP_MAX\",\"CPUCAP_MIN\",\"CPUCAP_ONLINE\",\"CPUCAP_WEIGHTAGE\",\"CPUPOOL_WEIGHTAGE\",\"DRIVES\",\"ENTITLED_PROC_CAPACITY\",\"ENTPOOLCAP\",\"EXPANDED_MEM_DESIRED\",\"EXPANDED_MEM_MAX\",\"EXPANDED_MEM_MIN\",\"EXPANDED_MEM_ONLINE\",\"LCPUS\",\"MACHINEID\",\"MAXPOOLCAP\",\"MEM_DESIRED\",\"MEM_MAX\",\"MEM_MIN\",\"MEM_ONLINE\",\"MEM_WEIGHTAGE\",\"NODENAME\",\"NW_ADAPTER\",\"PARTITIONNAME\",\"PCPU_MAX\",\"PCPU_ONLINE\",\"PROCESSORFAMILY\",\"PROCESSORMHZ\",\"PROCESSORMODEL\",\"PROCESSOR_POOLID\",\"SHAREDPCPU\",\"SMTTHREADS\",\"SUBPROCESSOR_MODE\",\"VCPUS_DESIRED\",\"VCPUS_MAX\",\"VCPUS_MIN\",\"VCPUS_ONLINE\") 
VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)","batchSize":2048,"tableNumber":1,"table":"CONFIG","username":"njmon"},"name":"oraclewriter"},"taskId":0},"keys":["writer.parameter.column[43]","writer.parameter.column[11]","writer.parameter.column[31]","writer.parameter.column[1]","writer.parameter.tableNumber","writer.parameter.column[19]","writer.parameter.column[27]","writer.parameter.column[39]","writer.parameter.column[15]","writer.parameter.column[23]","writer.parameter.column[35]","reader.parameter.password","writer.name","reader.name","writer.parameter.column[5]","reader.parameter.connection[0].table","writer.parameter.column[9]","writer.parameter.column[32]","writer.parameter.column[44]","writer.parameter.column[20]","writer.parameter.column[2]","writer.parameter.column[40]","writer.parameter.column[16]","writer.parameter.column[28]","writer.parameter.column[36]","writer.parameter.column[12]","writer.parameter.jobid","writer.parameter.column[24]","writer.parameter.batchSize","writer.parameter.column[6]","reader.parameter.connection[0].endpoint","writer.parameter.username","writer.parameter.column[21]","writer.parameter.column[33]","writer.parameter.column[41]","writer.parameter.jdbcUrl","writer.parameter.column[29]","reader.parameter.jobid","writer.parameter.column[17]","writer.parameter.column[25]","writer.parameter.column[37]","writer.parameter.column[45]","writer.parameter.column[13]","writer.parameter.password","writer.parameter.column[3]","writer.parameter.column[7]","writer.parameter.column[10]","writer.parameter.column[22]","writer.parameter.column[30]","writer.parameter.column[42]","writer.parameter.column[0]","writer.parameter.column[18]","writer.parameter.column[38]","writer.parameter.column[14]","writer.parameter.column[26]","writer.parameter.column[34]","writer.parameter.insertOrReplaceTemplate","writer.parameter.table","reader.parameter.connection[0].database","writer.parameter.column[4]","reader.parameter.username","reader.parameter.column[0]","taskId","writer.parameter.column[8]"],"secretKeyPathSet":[]}]]
2021-03-11 09:27:45.081 [taskGroup-0] INFO  TaskGroupContainer - taskGroupId=[0] start [1] channels for [1] tasks.
2021-03-11 09:27:45.084 [job-0] DEBUG AbstractScheduler - com.alibaba.datax.core.statistics.communication.Communication@23941fb4[
  counter={}
  jobId=<null>
  message={}
  state=RUNNING
  throwable=<null>
  timestamp=1615426065077
]
2021-03-11 09:27:45.088 [taskGroup-0] INFO  Channel - Channel set byte_speed_limit to -1, No bps activated.
2021-03-11 09:27:45.089 [taskGroup-0] INFO  Channel - Channel set record_speed_limit to -1, No tps activated.
2021-03-11 09:27:45.095 [taskGroup-0] DEBUG TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2021-03-11 09:27:45.095 [0-0-0-writer] DEBUG WriterRunner - task writer starts to do init ...
2021-03-11 09:27:45.095 [0-0-0-reader] DEBUG ReaderRunner - task reader starts to do init ...
2021-03-11 09:27:45.097 [0-0-0-writer] DEBUG WriterRunner - task writer starts to do prepare ...
2021-03-11 09:27:45.098 [0-0-0-reader] DEBUG ReaderRunner - task reader starts to do prepare ...
2021-03-11 09:27:45.099 [0-0-0-reader] DEBUG ReaderRunner - task reader starts to read ...
2021-03-11 09:27:45.100 [0-0-0-reader] INFO  InfluxDBReaderTask - connect influxdb: http://192.168.1.1:8086 with username: monitor
2021-03-11 09:27:45.134 [0-0-0-writer] DEBUG WriterRunner - task writer starts to write ...
2021-03-11 09:27:48.086 [job-0] DEBUG AbstractScheduler - com.alibaba.datax.core.statistics.communication.Communication@3b35a229[
  counter={writeSucceedRecords=0, totalErrorBytes=0, percentage=0.0, totalReadRecords=0, writeSucceedBytes=0, byteSpeed=0, totalErrorRecords=0, recordSpeed=0, totalReadBytes=0}
  jobId=<null>
  message={}
  state=RUNNING
  throwable=<null>
  timestamp=1615426068085
]
2021-03-11 09:27:51.088 [job-0] DEBUG AbstractScheduler - com.alibaba.datax.core.statistics.communication.Communication@9816741[
  counter={writeSucceedRecords=0, totalErrorBytes=0, percentage=0.0, totalReadRecords=0, writeSucceedBytes=0, byteSpeed=0, totalErrorRecords=0, recordSpeed=0, totalReadBytes=0}
  jobId=<null>
  message={}
  state=RUNNING
  throwable=<null>
  timestamp=1615426071087
]
2021-03-11 09:27:54.090 [job-0] DEBUG AbstractScheduler - com.alibaba.datax.core.statistics.communication.Communication@1e16c0aa[
  counter={writeSucceedRecords=0, totalErrorBytes=0, percentage=0.0, totalReadRecords=0, writeSucceedBytes=0, byteSpeed=0, totalErrorRecords=0, recordSpeed=0, totalReadBytes=0}
  jobId=<null>
  message={}
  state=RUNNING
  throwable=<null>
  timestamp=1615426074089
]
2021-03-11 09:27:55.361 [0-0-0-reader] ERROR StdoutPluginCollector - 
org.influxdb.InfluxDBIOException: java.net.SocketTimeoutException: timeout
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:841)
	at org.influxdb.impl.InfluxDBImpl.executeQuery(InfluxDBImpl.java:824)
	at org.influxdb.impl.InfluxDBImpl.query(InfluxDBImpl.java:559)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReaderTask.startRead(InfluxDBReaderTask.java:61)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReader$Task.startRead(InfluxDBReader.java:83)
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:63)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: timeout
	at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
	at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.kt:381)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.kt:429)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:408)
	at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.GzipSource.consumeHeader(GzipSource.kt:104)
	at okio.GzipSource.read(GzipSource.kt:62)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.ForwardingSource.read(ForwardingSource.kt:29)
	at retrofit2.OkHttpCall$ExceptionCatchingResponseBody$1.read(OkHttpCall.java:314)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:470)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:128)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:42)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:27)
	at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:204)
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:829)
	... 6 common frames omitted
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.read(SocketInputStream.java:204)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at okio.InputStreamSource.read(JvmOkio.kt:90)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:129)
	... 28 common frames omitted
2021-03-11 09:27:55.362 [0-0-0-reader] ERROR StdoutPluginCollector - 脏数据: {"exception":"java.net.SocketTimeoutException: timeout","type":"reader"}%n
2021-03-11 09:27:55.362 [0-0-0-reader] WARN  AbstractTaskPluginCollector - 脏数据record=null.
2021-03-11 09:27:55.364 [0-0-0-reader] ERROR ReaderRunner - Reader runner Received Exceptions:
com.alibaba.datax.common.exception.DataXException: Code:[InfluxDBReader-01], Description:[值非法].  - org.influxdb.InfluxDBIOException: java.net.SocketTimeoutException: timeout
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:841)
	at org.influxdb.impl.InfluxDBImpl.executeQuery(InfluxDBImpl.java:824)
	at org.influxdb.impl.InfluxDBImpl.query(InfluxDBImpl.java:559)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReaderTask.startRead(InfluxDBReaderTask.java:61)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReader$Task.startRead(InfluxDBReader.java:83)
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:63)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: timeout
	at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
	at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.kt:381)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.kt:429)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:408)
	at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.GzipSource.consumeHeader(GzipSource.kt:104)
	at okio.GzipSource.read(GzipSource.kt:62)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.ForwardingSource.read(ForwardingSource.kt:29)
	at retrofit2.OkHttpCall$ExceptionCatchingResponseBody$1.read(OkHttpCall.java:314)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:470)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:128)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:42)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:27)
	at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:204)
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:829)
	... 6 more
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.read(SocketInputStream.java:204)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at okio.InputStreamSource.read(JvmOkio.kt:90)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:129)
	... 28 more
 - org.influxdb.InfluxDBIOException: java.net.SocketTimeoutException: timeout
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:841)
	at org.influxdb.impl.InfluxDBImpl.executeQuery(InfluxDBImpl.java:824)
	at org.influxdb.impl.InfluxDBImpl.query(InfluxDBImpl.java:559)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReaderTask.startRead(InfluxDBReaderTask.java:61)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReader$Task.startRead(InfluxDBReader.java:83)
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:63)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: timeout
	at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
	at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.kt:381)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.kt:429)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:408)
	at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.GzipSource.consumeHeader(GzipSource.kt:104)
	at okio.GzipSource.read(GzipSource.kt:62)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.ForwardingSource.read(ForwardingSource.kt:29)
	at retrofit2.OkHttpCall$ExceptionCatchingResponseBody$1.read(OkHttpCall.java:314)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:470)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:128)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:42)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:27)
	at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:204)
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:829)
	... 6 more
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.read(SocketInputStream.java:204)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at okio.InputStreamSource.read(JvmOkio.kt:90)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:129)
	... 28 more

	at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:47)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReaderTask.startRead(InfluxDBReaderTask.java:84)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReader$Task.startRead(InfluxDBReader.java:83)
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:63)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.influxdb.InfluxDBIOException: java.net.SocketTimeoutException: timeout
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:841)
	at org.influxdb.impl.InfluxDBImpl.executeQuery(InfluxDBImpl.java:824)
	at org.influxdb.impl.InfluxDBImpl.query(InfluxDBImpl.java:559)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReaderTask.startRead(InfluxDBReaderTask.java:61)
	... 3 common frames omitted
Caused by: java.net.SocketTimeoutException: timeout
	at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
	at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.kt:381)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.kt:429)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:408)
	at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.GzipSource.consumeHeader(GzipSource.kt:104)
	at okio.GzipSource.read(GzipSource.kt:62)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.ForwardingSource.read(ForwardingSource.kt:29)
	at retrofit2.OkHttpCall$ExceptionCatchingResponseBody$1.read(OkHttpCall.java:314)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:470)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:128)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:42)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:27)
	at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:204)
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:829)
	... 6 common frames omitted
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.read(SocketInputStream.java:204)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at okio.InputStreamSource.read(JvmOkio.kt:90)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:129)
	... 28 common frames omitted
2021-03-11 09:27:55.365 [0-0-0-reader] DEBUG ReaderRunner - task reader starts to do destroy ...
2021-03-11 09:27:57.092 [job-0] DEBUG AbstractScheduler - com.alibaba.datax.core.statistics.communication.Communication@13d73f29[
  counter={}
  jobId=<null>
  message={}
  state=FAILED
  throwable=com.alibaba.datax.common.exception.DataXException: Code:[InfluxDBReader-01], Description:[值非法].  - org.influxdb.InfluxDBIOException: java.net.SocketTimeoutException: timeout
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:841)
	at org.influxdb.impl.InfluxDBImpl.executeQuery(InfluxDBImpl.java:824)
	at org.influxdb.impl.InfluxDBImpl.query(InfluxDBImpl.java:559)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReaderTask.startRead(InfluxDBReaderTask.java:61)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReader$Task.startRead(InfluxDBReader.java:83)
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:63)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: timeout
	at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
	at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.kt:381)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.kt:429)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:408)
	at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.GzipSource.consumeHeader(GzipSource.kt:104)
	at okio.GzipSource.read(GzipSource.kt:62)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.ForwardingSource.read(ForwardingSource.kt:29)
	at retrofit2.OkHttpCall$ExceptionCatchingResponseBody$1.read(OkHttpCall.java:314)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:470)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:128)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:42)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:27)
	at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:204)
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:829)
	... 6 more
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.read(SocketInputStream.java:204)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at okio.InputStreamSource.read(JvmOkio.kt:90)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:129)
	... 28 more
 - org.influxdb.InfluxDBIOException: java.net.SocketTimeoutException: timeout
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:841)
	at org.influxdb.impl.InfluxDBImpl.executeQuery(InfluxDBImpl.java:824)
	at org.influxdb.impl.InfluxDBImpl.query(InfluxDBImpl.java:559)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReaderTask.startRead(InfluxDBReaderTask.java:61)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReader$Task.startRead(InfluxDBReader.java:83)
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:63)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: timeout
	at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
	at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.kt:381)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.kt:429)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:408)
	at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.GzipSource.consumeHeader(GzipSource.kt:104)
	at okio.GzipSource.read(GzipSource.kt:62)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.ForwardingSource.read(ForwardingSource.kt:29)
	at retrofit2.OkHttpCall$ExceptionCatchingResponseBody$1.read(OkHttpCall.java:314)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:470)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:128)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:42)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:27)
	at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:204)
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:829)
	... 6 more
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.read(SocketInputStream.java:204)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at okio.InputStreamSource.read(JvmOkio.kt:90)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:129)
	... 28 more

  timestamp=1615426077091
]
2021-03-11 09:27:57.094 [job-0] INFO  StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 0.00%
2021-03-11 09:27:57.094 [job-0] ERROR JobContainer - 运行scheduler出错.
2021-03-11 09:27:57.095 [job-0] INFO  StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 0.00%
2021-03-11 09:27:57.095 [job-0] ERROR Engine - Code:[InfluxDBReader-01], Description:[值非法].  - Code:[InfluxDBReader-01], Description:[值非法].  - org.influxdb.InfluxDBIOException: java.net.SocketTimeoutException: timeout
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:841)
	at org.influxdb.impl.InfluxDBImpl.executeQuery(InfluxDBImpl.java:824)
	at org.influxdb.impl.InfluxDBImpl.query(InfluxDBImpl.java:559)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReaderTask.startRead(InfluxDBReaderTask.java:61)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReader$Task.startRead(InfluxDBReader.java:83)
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:63)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: timeout
	at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
	at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.kt:381)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.kt:429)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:408)
	at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.GzipSource.consumeHeader(GzipSource.kt:104)
	at okio.GzipSource.read(GzipSource.kt:62)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.ForwardingSource.read(ForwardingSource.kt:29)
	at retrofit2.OkHttpCall$ExceptionCatchingResponseBody$1.read(OkHttpCall.java:314)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:470)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:128)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:42)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:27)
	at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:204)
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:829)
	... 6 more
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.read(SocketInputStream.java:204)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at okio.InputStreamSource.read(JvmOkio.kt:90)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:129)
	... 28 more
 - org.influxdb.InfluxDBIOException: java.net.SocketTimeoutException: timeout
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:841)
	at org.influxdb.impl.InfluxDBImpl.executeQuery(InfluxDBImpl.java:824)
	at org.influxdb.impl.InfluxDBImpl.query(InfluxDBImpl.java:559)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReaderTask.startRead(InfluxDBReaderTask.java:61)
	at com.alibaba.datax.plugin.reader.influxdbreader.InfluxDBReader$Task.startRead(InfluxDBReader.java:83)
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:63)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: timeout
	at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
	at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.kt:381)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.kt:429)
	at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:408)
	at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
	at okio.GzipSource.consumeHeader(GzipSource.kt:104)
	at okio.GzipSource.read(GzipSource.kt:62)
	at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
	at okio.ForwardingSource.read(ForwardingSource.kt:29)
	at retrofit2.OkHttpCall$ExceptionCatchingResponseBody$1.read(OkHttpCall.java:314)
	at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:470)
	at okio.RealBufferedSource.rangeEquals(RealBufferedSource.kt:128)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:42)
	at retrofit2.converter.moshi.MoshiResponseBodyConverter.convert(MoshiResponseBodyConverter.java:27)
	at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:204)
	at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:829)
	... 6 more
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.read(SocketInputStream.java:204)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at okio.InputStreamSource.read(JvmOkio.kt:90)
	at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:129)
	... 28 more
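
The failure above is a client-side read timeout: the reader's HTTP query to InfluxDB does not return before the socket times out, and the generic error code [InfluxDBReader-01] ("值非法", illegal value) merely wraps the underlying java.net.SocketTimeoutException. A quick way to separate network or credential problems from a slow query is to issue the same query directly against the InfluxDB 1.x HTTP API from the Addax host. This is only a diagnostic sketch; the password is a placeholder and the query is a minimal probe of the config measurement named in the job file:

# Probe the endpoint used by influxdbreader; replace <password> with the real credential.
curl -G 'http://192.168.1.1:8086/query' \
  -u monitor:<password> \
  --data-urlencode 'db=njmon' \
  --data-urlencode 'q=SELECT * FROM "config" LIMIT 1' \
  --max-time 10

If the probe returns quickly, the job's full scan of the measurement is most likely exceeding the reader's HTTP read timeout; restricting the query to a time range, or raising the timeout options where the influxdbreader documentation exposes them, is the usual remedy.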

Doris connection problem

Unable to connect to Doris; the writer fails with the following error:
2021-02-24 14:57:17.929 [0-0-2-writer] ERROR WriterRunner - Writer Runner Received Exceptions:
com.alibaba.datax.common.exception.DataXException: Code:[DorisWriter-03], Description:[连接错误]. - Failed to connect Doris server with: http://192.168.2.100:8030/api/ssb/detail/_stream_load, org.apache.http.client.ClientProtocolException
at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:26) ~[datax-common-0.0.1-SNAPSHOT.jar:na]
at com.alibaba.datax.plugin.writer.doriswriter.DorisWriterTask.sendData(DorisWriterTask.java:185) ~[doriswriter-0.0.1-SNAPSHOT.jar:na]
at com.alibaba.datax.plugin.writer.doriswriter.DorisWriterTask.startWrite(DorisWriterTask.java:127) ~[doriswriter-0.0.1-SNAPSHOT.jar:na]
at com.alibaba.datax.plugin.writer.doriswriter.DorisWriter$Task.startWrite(DorisWriter.java:100) ~[doriswriter-0.0.1-SNAPSHOT.jar:na]
at com.alibaba.datax.core.taskgroup.runner.WriterRunner.run(WriterRunner.java:56) ~[datax-core-0.0.1-SNAPSHOT.jar:na]
at java.lang.Thread.run(Unknown Source) [na:1.8.0_172]
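
Here the writer aborts with error code [DorisWriter-03] ("连接错误", connection error): the ClientProtocolException means no valid HTTP exchange with the FE stream-load endpoint ever completed, so no data was sent. Before changing the job file, confirm from the Addax host that the endpoint in the message is reachable and accepts a stream load at all. The commands below are a diagnostic sketch; the account, password, and sample row are placeholders:

# Check that the Doris FE HTTP port answers at all.
curl -v http://192.168.2.100:8030/

# Push one throwaway tab-separated row through the same stream-load endpoint the writer uses.
printf '1\t2\n' > /tmp/probe.tsv
curl --location-trusted -u root:<password> \
  -H "label:addax_probe_$(date +%s)" \
  -T /tmp/probe.tsv \
  http://192.168.2.100:8030/api/ssb/detail/_stream_load

Even a load that Doris rejects with a JSON error proves the HTTP path works; in that case compare the endpoint configured for the writer in the job file against the URL that succeeded. If the port is unreachable or authentication fails, the problem is the FE address, HTTP port, firewall, or the account's LOAD privilege rather than Addax itself.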
