
gohangout's Issues

My Kafka version is 1.0.0; the Kafka server reports the error: Magic v0 does not support record headers

The Kafka server logs the following error:
Error when handling request {replica_id=-1,max_wait_time=100,min_bytes=1,topics=[{topic=wj.sleuth.test,partitions=[{partition=17,fetch_offset=0,max_bytes=10485760}]}]} (kafka.server.KafkaApis)
java.lang.IllegalArgumentException: Magic v0 does not support record headers
at org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
at org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)

gohangout logs the following error:

E0410 14:14:52.967678 53245 fetch_response.go:255] The server experienced an unexpected error when processing the request
I0410 14:14:52.967687 53245 simple_consumer.go:341] consumer wj.sleuth.test[29] error:The server experienced an unexpected error when processing the request
I0410 14:14:52.967722 53245 simple_consumer.go:309] simple consumer stop consuming wj.sleuth.test[29]
E0410 14:14:52.968932 53245 fetch_response.go:255] The server experienced an unexpected error when processing the request

gohangout OOMs when consuming a large Kafka backlog

Kafka 0.10.2, retaining roughly 2 TB of data over 6 hours. Two 16c32g machines run nothing but gohangout v1.2.4, single process and single thread, to consume the log data from Kafka. Several times during peak hours, as consumer lag grew, gohangout went OOM. (A mitigation sketch follows the stack trace.)

E0525 23:24:54.782056 2735 group_consumer.go:449] failed to send heartbeat: The group is rebalancing, so a rejoin is needed.
E0525 23:24:54.799387 2735 group_consumer.go:449] failed to send heartbeat: The group is rebalancing, so a rejoin is needed.
E0525 23:24:54.818359 2735 group_consumer.go:449] failed to send heartbeat: The group is rebalancing, so a rejoin is needed.
E0525 23:24:54.822726 2735 group_consumer.go:449] failed to send heartbeat: The group is rebalancing, so a rejoin is needed.
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x97528d, 0x16)
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/panic.go:608 +0x72
runtime.sysMap(0xc770000000, 0x4000000, 0xdb3ad8)
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/mem_linux.go:156 +0xc7
runtime.(*mheap).sysAlloc(0xd9a7c0, 0x4000000, 0x80, 0x0)
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/malloc.go:619 +0x1c7
runtime.(*mheap).grow(0xd9a7c0, 0x3b5, 0x0)
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/mheap.go:920 +0x42
runtime.(*mheap).allocSpanLocked(0xd9a7c0, 0x3b5, 0xdb3ae8, 0xc0001e9ed8)
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/mheap.go:848 +0x337
runtime.(*mheap).alloc_m(0xd9a7c0, 0x3b5, 0xc000340101, 0x1000000001)
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/mheap.go:692 +0x119
runtime.(*mheap).alloc.func1()
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/mheap.go:759 +0x4c
runtime.(*mheap).alloc(0xd9a7c0, 0x3b5, 0xc000010101, 0xc0fcfe7680)
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/mheap.go:758 +0x8a
runtime.largeAlloc(0x768c93, 0xc423a00101, 0xc76f2fe001)
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/malloc.go:1019 +0x97
runtime.mallocgc.func1()
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/malloc.go:914 +0x46
runtime.systemstack(0x0)
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/asm_amd64.s:351 +0x66
runtime.mstart()
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/proc.go:1229

goroutine 8502 [running]:
runtime.systemstack_switch()
/usr/local/Cellar/go/1.11.2/libexec/src/runtime/asm_amd64.s:311 fp=0xc11d703490 sp=0xc11d703488 pc=0x459900
runtime.mallocgc(0x768c93, 0x8a9c00, 0x5a01, 0xffff)
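
One way to bound memory while a backlog drains is to cap how much each fetch pulls in. A minimal sketch, assuming the consumer honors the max.partition.fetch.bytes setting seen in other configs on this page; the topic and broker names are placeholders, and 1048576 is only an illustrative cap, not a tuned value:

    inputs:
        - Kafka:
            topic:
                some.topic: 1
            codec: json
            consumer_settings:
                bootstrap.servers: "10.0.0.100:9092"
                group.id: gohangout.backlog-test
                max.partition.fetch.bytes: 1048576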

Two problems when writing to ClickHouse

Problem 1: every row written to ClickHouse is empty. The row count keeps increasing, but each inserted row is an empty record.
Exactly as described in the title; there are no error logs at all, and even with -v 5 there is no error output.

Problem 2: given JSON data that contains nested objects, how do I turn the sub-fields into fields for insertion into ClickHouse?
For example, with this JSON:
{
    "a": "aa",
    "bb": {
        "x": "xx",
        "y": "yy"
    },
    "c": "cc"
}
Given data like the above, how do I insert x and y into the x and y columns of the ClickHouse table? (A sketch follows.)
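
A minimal sketch of one approach, assuming the Add filter resolves values written as [parent][child] field references from the event (as the gohangout README suggests); field names here match the sample JSON:

    filters:
        - Add:
            fields:
                x: '[bb][x]'
                y: '[bb][y]'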

What is the correct way to use this?

I'd like to ask how this tool is meant to be used. When consuming Kafka and writing to ES, do you scale with multiple processes or multiple goroutines? I started two gohangout instances in Docker from the same config, consuming the same Kafka topic with the same group and writing to ES, and they keep logging:
E0620 10:12:10.111627 83 group_consumer.go:449] failed to send heartbeat: The group is rebalancing, so a rejoin is needed.
Watching the Kafka topic, it stays in:

Warning: Consumer group 'hangout.test' is rebalancing.

Is it not possible for multiple gohangout instances to consume the same topic?

govendor sync still fails

cd /Users/Charles/re_sxf/go/.cache/govendor/github.com/childe/healer; git reset --hard a298fa4b52427036330ec73aeef3f71e574b17cf

fatal: Could not parse object 'a298fa4b52427036330ec73aeef3f71e574b17cf'.

Group authorization failed

Hello. After finishing the configuration and starting the service, the log shows a large number of errors:
I0103 17:52:37.389993 141941 group_consumer.go:190] join nginx.log error: Not authorized to access group: Group authorization failed.
I0103 17:52:37.490052 141941 group_consumer.go:167] try to join group nginx.log
I0103 17:52:37.490217 141941 group_consumer.go:190] join nginx.log error: Not authorized to access group: Group authorization failed.
I0103 17:52:37.590276 141941 group_consumer.go:167] try to join group nginx.log
I0103 17:52:37.590438 141941 group_consumer.go:190] join nginx.log error: Not authorized to access group: Group authorization failed.

Version: v0.0.1; Kafka: 1.0. The config is as follows:
inputs:
    - Kafka:
        topic:
            nginxlog: 1
        codec: json
        consumer_settings:
            bootstrap.servers: "10.10.1.10:9092"
            group.id: nginx.log
            client.id: nginxlog-aOwrp

Nested JSON in the input

The input is nested JSON, for example:

{
    "database": "test",
    "data": {
        "id": 12,
        "name": "tt"
    }
}
From data like this I only need to extract id and name, with no type conversion or other processing. How do I pull them out? (See the sketch below.)
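
A minimal sketch, under the same assumption as above that Add values written as [parent][child] are resolved from the event:

    filters:
        - Add:
            fields:
                id: '[data][id]'
                name: '[data][name]'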

gohangout fails to run in a 64-bit Docker container

Symptom

After building an image from the Dockerfile provided by the project, running gohangout fails with: /bin/sh: gohangout: not found.

Cause

The problem comes from running a 64-bit Go binary in an Alpine Linux container: Alpine does not ship the 64-bit /lib64/ shared-library loader that the binary expects (the corresponding file lives under the 32-bit /lib), so execution fails.
See the linked reference for details; the Dockerfile needs a dedicated change to make it work for 64-bit builds (a sketch follows the logs below).

64-bit Alpine Linux

On 64-bit Alpine Linux the failure looks like this:

# gohangout 
/bin/sh: /opt/gohangout/gohangout: not found
 # ldd /opt/gohangout/gohangout 
	/lib64/ld-linux-x86-64.so.2 (0x7f664c369000)
	libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f664c369000)
	libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f664c369000)
# ls /lib64/ld-linux-x86-64.so.2
ls: /lib64/ld-linux-x86-64.so.2: No such file or directory

32-bit Alpine Linux

The problem cannot be reproduced on 32-bit Alpine Linux. There, ldd reports the error below instead, because the gohangout binary is 64-bit while the OS is 32-bit (see the linked reference):

# ldd /opt/gohangout/gohangout
ldd: /opt/gohangout/gohangout: Not a valid dynamic program
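
A common fix, offered here only as a sketch rather than the project's official Dockerfile change, is to build the binary without cgo so it is statically linked and no longer needs the glibc loader that musl-based Alpine lacks:

# CGO_ENABLED=0 GOOS=linux go build -o gohangout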

clickhouse query format bug

In NewClickhouseOutput:
fields[i] = fmt.Sprintf("%s", p.fields[i])
needs to quote the field name in the generated query, presumably something like:
fields[i] = fmt.Sprintf("`%s`", p.fields[i])

Clickhouse output fails with: <field> (Int64): unexpected type string

Error log:
error: server (Int64): unexpected type string
Startup log:
row desc: output.rowDesc{Name:"server", Type:"Int64", DefaultType:"", DefaultExpression:""}
My config.yml contains:
fields: [server, 'ip', id]
Is it the case that when a column in the ClickHouse table is Int64, the corresponding entry in fields should be written without single quotes, and that single-quoting an entry means the corresponding ClickHouse column is a String?
When I put single quotes around every entry in fields, the startup log shows:
row desc: output.rowDesc{Name:"<field>", Type:"String"... (truncated)

When every entry in fields is single-quoted, every corresponding column in the ClickHouse table has to be changed to String before writes succeed, and this is the only combination that writes successfully.
But as soon as the table contains an Int64 column, the error reappears: error: <field> (Int64): unexpected type string.
I don't know Go well; does the source convert everything received from Kafka to string before writing it to ClickHouse?
My source data:
{"server":666,"ip":"1.2.3.4","id":100}
server and id should be int64 and ip should be string, but writes only succeed when both fields and the ClickHouse table are set to string everywhere; otherwise it errors, and I don't understand why. (A note and a sketch follow.)
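
In YAML, server and 'server' parse to the same string, so quoting entries in fields cannot carry type information. If the values really do arrive at the output as strings, one option may be a Convert filter before the output; a minimal sketch, assuming the to: int conversion described in the gohangout README:

    filters:
        - Convert:
            fields:
                server:
                    to: int
                id:
                    to: int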

Kafka to ClickHouse: data is not split into fields

Using the Kafka console producer I produce {date:2012-12-12,level:level,message:test}; the ClickHouse table is created with: create table metrics (date Date, level String, message String) Engine=MergeTree(date, (date, level), 8192)

gohangout is configured with only an input and an output. The data is consumed from Kafka fine, but when written to ClickHouse the whole record lands in message instead of being mapped to the individual columns. How should this be configured? (A sketch follows.)
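
Presumably the default plain codec keeps each raw line as one string, which would explain everything landing in message. A sketch of an input with codec: json; the broker address is a placeholder, and note that the produced record above is not valid JSON, since keys and string values need double quotes:

    inputs:
        - Kafka:
            topic:
                metrics: 1
            codec: json
            consumer_settings:
                bootstrap.servers: "10.0.0.100:9092"
                group.id: gohangout.metrics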

IPIP field type mismatch

Error log:
error: location (String): unexpected type map[string]interface {}

My filter is configured as:
- IPIP:
    src: ip
    target: location
    database: /opt/gohangout/ipdata.datx

Does the target field produced by gohangout have to be an interface (map)? I declared location as String in ClickHouse and want to write it directly; the location column already exists in ClickHouse. How should this be set up? (A sketch follows.)
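
Since the IPIP filter writes a map into target, one workaround may be to copy the needed sub-fields into flat fields before the output, under the same [parent][child] assumption as earlier; city_name here is a hypothetical key in the IPIP result:

    filters:
        - IPIP:
            src: ip
            target: location
            database: /opt/gohangout/ipdata.datx
        - Add:
            fields:
                city: '[location][city_name]'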

Priority among bulk_actions, bulk_size and flush_interval in the output

Config as follows (single thread):
bulk_actions: 100000
bulk_size: 200
flush_interval: 60
In testing I observed two behaviors.
1. When Kafka has a backlog: if 100,000 docs fit within 200 MB, a bulk is sent to ES at every 100,000 docs; if 200 MB is reached first, a bulk is sent at whatever doc count corresponds to 200 MB. In this state the flush_interval setting has no effect:
I0528 15:18:44.059359 20568 bulk_http.go:156] bulk 100000 docs with execution_id 25
I0528 15:18:49.322766 20568 bulk_http.go:156] bulk 100000 docs with execution_id 26
I0528 15:18:54.482141 20568 bulk_http.go:156] bulk 100000 docs with execution_id 27
I0528 15:18:59.659397 20568 bulk_http.go:156] bulk 100000 docs with execution_id 28
I0528 15:19:04.777812 20568 bulk_http.go:156] bulk 100000 docs with execution_id 29
I0528 15:19:09.591249 20568 bulk_http.go:156] bulk 20574 docs with execution_id 30
I0528 15:19:10.649654 20568 bulk_http.go:156] bulk 100000 docs with execution_id 31
I0528 15:19:15.721720 20568 bulk_http.go:156] bulk 100000 docs with execution_id 32
I0528 15:19:21.540412 20568 bulk_http.go:156] bulk 100000 docs with execution_id 33
I0528 15:19:26.316815 20568 bulk_http.go:156] bulk 100000 docs with execution_id 34
I0528 15:19:31.515927 20568 bulk_http.go:156] bulk 100000 docs with execution_id 35
I0528 15:19:37.102144 20568 bulk_http.go:156] bulk 100000 docs with execution_id 36
I0528 15:19:42.326828 20568 bulk_http.go:156] bulk 100000 docs with execution_id 37
I0528 15:19:46.972345 20568 bulk_http.go:156] bulk 100000 docs with execution_id 38
I0528 15:19:51.771179 20568 bulk_http.go:156] bulk 100000 docs with execution_id 39

2. With no backlog, bulks are flushed on the configured flush_interval period, with whatever volume has accumulated in that interval:
I0528 16:10:31.727049 28031 bulk_http.go:156] bulk 1516 docs with execution_id 106
I0528 16:11:31.711819 28031 bulk_http.go:156] bulk 1473 docs with execution_id 107
I0528 16:12:31.727624 28031 bulk_http.go:156] bulk 1508 docs with execution_id 108
I0528 16:13:31.726607 28031 bulk_http.go:156] bulk 1474 docs with execution_id 109
I0528 16:14:31.712247 28031 bulk_http.go:156] bulk 1497 docs with execution_id 110
I0528 16:15:31.714574 28031 bulk_http.go:156] bulk 1469 docs with execution_id 111
I0528 16:16:31.712348 28031 bulk_http.go:156] bulk 1506 docs with execution_id 112
I0528 16:17:31.712169 28031 bulk_http.go:156] bulk 1472 docs with execution_id 113
I0528 16:18:31.714599 28031 bulk_http.go:156] bulk 1499 docs with execution_id 114
I0528 16:19:31.728199 28031 bulk_http.go:156] bulk 1481 docs with execution_id 115
I0528 16:19:32.193729 28031 bulk_http.go:156] bulk 180 docs with execution_id 116
I0528 16:20:31.711735 28031 bulk_http.go:156] bulk 1511 docs with execution_id 117
I0528 16:21:31.730355 28031 bulk_http.go:156] bulk 1468 docs with execution_id 118
I0528 16:22:31.725153 28031 bulk_http.go:156] bulk 1520 docs with execution_id 119
I0528 16:23:31.712136 28031 bulk_http.go:156] bulk 1465 docs with execution_id 120
I0528 16:24:31.715234 28031 bulk_http.go:156] bulk 1513 docs with execution_id 121
I0528 16:25:31.711754 28031 bulk_http.go:156] bulk 1459 docs with execution_id 122
I0528 16:26:31.712235 28031 bulk_http.go:156] bulk 1510 docs with execution_id 123
I0528 16:27:31.712951 28031 bulk_http.go:156] bulk 1476 docs with execution_id 124
I0528 16:28:31.727973 28031 bulk_http.go:156] bulk 1481 docs with execution_id 125
I0528 16:29:31.711916 28031 bulk_http.go:156] bulk 1521 docs with execution_id 126
I0528 16:30:31.713283 28031 bulk_http.go:156] bulk 1480 docs with execution_id 127

Is this how the mechanism works?

ClickHouse: no data is inserted

ClickHouse server version 19.1.6. No data gets inserted and there are no error logs. The input is Stdin; after ten-odd input records the program hangs and stops accepting input, and ClickHouse still has no data.
outputs:
    - Stdout: {}
    - Clickhouse:
        table: 'default.app_log'
        hosts:
            - 'tcp://172.16.154.246:8123'
        fields: ['class']
        bulk_actions: 2
        flush_interval: 1
        concurrent: 1

govendor sync fails

git reset --hard cdf5f9a3308af3e7c53631ff7f2f42998cfc84fa
fatal: Could not parse object 'cdf5f9a3308af3e7c53631ff7f2f42998cfc84fa'.
Error: Remotes failed for:
Failed for "github.com/childe/healer" (failed to sync repo to cdf5f9a3308af3e7c53631ff7f2f42998cfc84fa): exit status 128

Stuck in "try to join group"

failed to send heartbeat: The group is rebalancing, so a rejoin is needed.
I used hangout before. After modifying the config file I ran gohangout, and it keeps logging the error above. With -v 5, the verbose output shows it can reach both the backend ES and the upstream Kafka and can commit offsets; it just never manages to join the group and keeps retrying.

gohangout panics after a broker in the Kafka cluster goes down

A broker in our production Kafka cluster (IP 10.10.52.153) went down unexpectedly; gohangout panicked and the consumer stopped consuming. Exception log below:

E0701 13:04:11.135321       1 broker.go:272] write only partial data. api: 1 CorrelationID: 108513404
E0701 13:04:11.135331       1 simple_consumer.go:326] fetch error:read tcp4 10.63.142.198:40734->10.10.52.153:9092: use of closed network connection
E0701 13:04:11.135345       1 fetch_response.go:226] could read enough bytes(4) to get fetchresponse length. read 0 bytes
I0701 13:04:11.135233       1 broker.go:174] broker 10.10.52.153:9092 dead, reopen it
E0701 13:04:11.135348       1 broker.go:177] could not conn to 10.10.52.153:9092: dial tcp4 10.10.52.153:9092: connect: connection refused
E0701 13:04:11.135365       1 broker.go:269] write tcp 10.63.142.198:32776->10.10.52.153:9092: use of closed network connection
I0701 13:04:11.135371       1 broker.go:174] broker 10.10.52.153:9092 dead, reopen it
E0701 13:04:11.135384       1 broker.go:272] write only partial data. api: 1 CorrelationID: 107643126
E0701 13:04:11.135399       1 simple_consumer.go:326] fetch error:read tcp 10.63.142.198:32776->10.10.52.153:9092: use of closed network connection
E0701 13:04:11.135412       1 fetch_response.go:226] could read enough bytes(4) to get fetchresponse length. read 0 bytes
I0701 13:04:11.135431       1 broker.go:174] broker 10.10.52.153:9092 dead, reopen it
E0701 13:04:11.135458       1 broker.go:177] could not conn to 10.10.52.153:9092: dial tcp4 10.10.52.153:9092: connect: connection refused
E0701 13:04:11.135476       1 broker.go:269] write tcp 10.63.142.198:32772->10.10.52.153:9092: use of closed network connection
E0701 13:04:11.135495       1 broker.go:272] write only partial data. api: 1 CorrelationID: 108919885
E0701 13:04:11.135507       1 simple_consumer.go:326] fetch error:read tcp 10.63.142.198:32772->10.10.52.153:9092: use of closed network connection
E0701 13:04:11.135520       1 fetch_response.go:226] could read enough bytes(4) to get fetchresponse length. read 0 bytes
E0701 13:04:11.135493       1 broker.go:177] could not conn to 10.10.52.153:9092: dial tcp4 10.10.52.153:9092: connect: connection refused
E0701 13:04:11.135541       1 broker.go:269] write tcp 10.63.142.198:32782->10.10.52.153:9092: use of closed network connection
E0701 13:04:11.135560       1 broker.go:272] write only partial data. api: 1 CorrelationID: 107964938
E0701 13:04:11.135569       1 broker.go:177] could not conn to 10.10.52.153:9092: dial tcp4 10.10.52.153:9092: connect: connection refused
E0701 13:04:11.135583       1 fetch_response.go:226] could read enough bytes(4) to get fetchresponse length. read 0 bytes
E0701 13:04:11.135591       1 broker.go:269] write tcp4 10.63.142.198:36866->10.10.52.153:9092: use of closed network connection
I0701 13:04:11.135603       1 broker.go:174] broker 10.10.52.153:9092 dead, reopen it
E0701 13:04:11.135610       1 broker.go:272] write only partial data. api: 1 CorrelationID: 108123359
E0701 13:04:11.135626       1 simple_consumer.go:326] fetch error:read tcp4 10.63.142.198:36866->10.10.52.153:9092: use of closed network connection
E0701 13:04:11.135573       1 simple_consumer.go:326] fetch error:read tcp 10.63.142.198:32782->10.10.52.153:9092: use of closed network connection
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x7e6e66]
 goroutine 38 [running]:
github.com/childe/gohangout/vendor/github.com/childe/healer.(*GroupConsumer).Consume.func2(0xc0000dc6c0)
	/home/soft/go/src/github.com/childe/gohangout/vendor/github.com/childe/healer/group_consumer.go:429 +0x1d6
created by github.com/childe/gohangout/vendor/github.com/childe/healer.(*GroupConsumer).Consume
	/home/soft/go/src/github.com/childe/gohangout/vendor/github.com/childe/healer/group_consumer.go:410 +0xcb

The gohangout config is below; kafka3.ops.yeshj.com resolves via DNS to 10.10.52.153:

    inputs:
        #- Stdin:
        #    codec: json
        - Kafka:
            topic:
                log4j_v1: 1
            #assign:
            #    healer.test: [0]
            codec: json
            consumer_settings:
                bootstrap.servers: "kafka1.ops.yeshj.com:9092,kafka2.ops.yeshj.com:9092,kafka3.ops.yeshj.com:9092"
                group.id: gohangout-log4j_v1-k8s-prod-topic-online
                max.partition.fetch.bytes: 10485760
                auto.commit.interval.ms: 5000
                from.beginning: false

Consumption recovered after the failed broker was restarted. Normally losing a single broker should not take down the whole process. I'll run some tests later to see whether this can be reproduced; opening this issue to track it for now.

Is there a way to roll the output log by day or by size?

How does everyone handle logging? I've just started using this tool and have never used Go. How do you normally run gohangout, with a daemon script that starts it in the background? Does anyone else need its log output?
I can make gohangout write its log to a file, but the file never rolls over, and I worry it will grow too large after running in production for a long time.

if syntax has no effect

Using the if syntax, the condition never takes effect:

filters:
    - Filters:
       if:
           - '{{if eq [var_field][event_name] "enter_item"}}y{{end}}'
       filters:
           - Add:
              fields:
                name: childe

Second form:

filters:
    - Filters:
       if:
           - 'EQ(var_field,event_name "enter_item")'
       filters:
           - Add:
              fields:
                name: childe

It still has no effect with the most recent release. (A sketch of the comma-separated form follows.)
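
For comparison, a sketch of the EQ condition with all arguments comma-separated, assuming the grammar from the gohangout README where the literal value is the final argument:

    filters:
        - Filters:
            if:
                - 'EQ(var_field,event_name,"enter_item")'
            filters:
                - Add:
                    fields:
                        name: childe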

scan rows error: sql: expected 6 destination arguments in Scan, not 4

Hello. I want to use gohangout to move data from Kafka into ClickHouse and hit this error:
E0327 17:32:50.996244 21738 clickhouse_output.go:75] scan rows error: sql: expected 6 destination arguments in Scan, not 4
What causes this error?
My config is below:
inputs:
    - Stdin:
        codec: json
    - Kafka:
        codec: json
        topic:
            k_ads_event_stat: 1
        consumer_settings:
            group.id: gohangout.kafka
            bootstrap.servers: "dev-kafka-01.dev:9092"
            from.beginning: true

filters:
    - Add:
        fields:
            p_date: "${(beginTime)?substring(0, 10)}"

outputs:
    - Clickhouse:
        table: 'default.kafka_test'
        hosts:
            - 'tcp://xx.xx.xx.xx:9000'
        fields: ['p_date', 'beginTime', 'endTime', 'advertiserId', 'campaignId', 'adGroupId', 'creativeId', 'spotId', 'showCount', 'clickCount', 'playCount']
        username: 'default'
        password: ''
        bulk_actions: 1000
        flush_interval: 30
        concurrent: 1

kafka consumer settings parse error!

environment:
kafka: 1.0.1+kafka3.1.1 (Cloudera CDK)
gohangout: v1.2.6
OS: CentOS 6.4 x64

test.yml

inputs:
    - Kafka:
        topic:
            weblog: 1
        codec: json
        consumer_settings:
            bootstrap.servers: "10.0.0.100:9092"
            group.id: gohangout.weblog
            from.beginning: true
filters:
    - Grok:
        src: message
        match:
            - '^(?P<logtime>\S+) (?P<name>\w+) (?P<status>\d+)$'
            - '^(?P<logtime>\S+) (?P<status>\d+) (?P<loglevel>\w+)$'
        remove_fields: ['message']
    - Date:
        location: 'Asia/Shanghai'
        src: logtime
        formats:
            - 'RFC3339'
        remove_fields: ["logtime"]
outputs:
    - Stdout: {}
./gohangout-linux-x64-2a655d9 -logtostderr -v 5 --config test.yml 
I0424 19:40:51.983224   19839 gohangout.go:101] map[inputs:[map[Kafka:map[topic:map[weblog:1] codec:json consumer_settings:map[bootstrap.servers:10.0.0.100:9092 group.id:gohangout.weblog from.beginning:true]]]] filters:[map[Grok:map[src:message match:[^(?P<logtime>\S+) (?P<name>\w+) (?P<status>\d+)$ ^(?P<logtime>\S+) (?P<status>\d+) (?P<loglevel>\w+)$] remove_fields:[message]]] map[Date:map[location:Asia/Shanghai src:logtime formats:[RFC3339] remove_fields:[logtime]]]] outputs:[map[Stdout:map[]]]]
I0424 19:40:51.983353   19839 gohangout.go:50] input[1] map[Kafka:map[codec:json consumer_settings:map[bootstrap.servers:10.0.0.100:9092 group.id:gohangout.weblog from.beginning:true] topic:map[weblog:1]]]
F0424 19:40:51.983563   19839 kafka_input.go:84] error in consumer settings: json: invalid use of ,string struct tag, trying to unmarshal unquoted value into bool

The error is: kafka_input.go:84] error in consumer settings: json: invalid use of ,string struct tag, trying to unmarshal unquoted value into bool

If the from.beginning: true setting is removed, the config parses fine. (A sketch of a possible workaround follows.)
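
The ,string struct tag in the error suggests the settings decoder expects the boolean encoded as a quoted string; a hedged workaround sketch, assuming the quoted form is accepted by the parser:

        consumer_settings:
            bootstrap.servers: "10.0.0.100:9092"
            group.id: gohangout.weblog
            from.beginning: "true"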
