pingcap / dm
Data Migration Platform
License: Apache License 2.0
gopl can be used to detect duplicate code.
We can run gopl -html > dm.html
to create a report with contents similar to the following.
With the report, we can refactor to eliminate the duplicate code.
Is your feature request related to a problem? Please describe:
Currently we need to set server-id for DM-worker manually, but it may conflict with other DM-workers or with MySQL/MariaDB slaves.
Describe the feature you'd like:
Generate the server-id automatically.
Teachability, Documentation, Adoption, Migration Strategy:
Check the Server_id column in SHOW SLAVE HOSTS to decide whether a candidate server-id already exists in the replication group.
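The check could be sketched as follows; this is a minimal sketch that assumes the Server_id values from SHOW SLAVE HOSTS have already been collected into a set (the query and result parsing are elided, and the base value is an assumption):

```go
package main

import "fmt"

// pickServerID returns a server-id not present in used (e.g. the Server_id
// values reported by SHOW SLAVE HOSTS). base is an assumed special base
// value to start searching from.
func pickServerID(used map[uint32]bool, base uint32) uint32 {
	id := base
	for used[id] {
		id++
	}
	return id
}

func main() {
	used := map[uint32]bool{1: true, 2: true, 101: true}
	fmt.Println(pickServerID(used, 100)) // 100 is free, so it is chosen
}
```

A real implementation would also need to handle the race where another worker claims the same id between the check and use.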
(the generated server-id may start from a special base value)

[root@dba dm]# make build
gofmt (simplify)
GO111MODULE=off go get golang.org/x/lint/golint
golint
go: downloading github.com/coreos/etcd v3.3.10+incompatible
go: verifying github.com/grpc-ecosystem/[email protected]: checksum mismatch
downloaded: h1:Iju5GlWwrvL6UBg4zJJt3btmonfrMlCDdsejg4CZE7c=
go.sum: h1:BWIsLfhgKhV5g/oF34aRjniBHLTZe5DNekSjbAjIS6c=
vet
dm/master/config.go:112: declaration of "err" shadows declaration at dm/master/config.go:98
dm/worker/config.go:156: declaration of "err" shadows declaration at dm/worker/config.go:142
bash -x ./tests/wait_for_mysql.sh
I ran git clone https://github.com/pingcap/dm
and then "Start Debugging" in the VS Code IDE;
it doesn't work and raises "undefined: unix.Statfs_t".
But I checked "F:\source-tree\pkg\mod\golang.org/x/sys/unix", and it exists.
What can I do to debug with modules?
F:\gomodworkspace\dm>go env
set GOARCH=amd64
set GOBIN=F:\source-tree\bin
set GOCACHE=C:\Users\admin pc\AppData\Local\go-build
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=F:\source-tree
set GOPROXY=
set GORACE=
set GOROOT=C:\Go
set GOTMPDIR=
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=F:\gomodworkspace\dm\go.mod
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\ADMINP~1\AppData\Local\Temp\go-build820054998=/tmp/go-build -gno-record-gcc-switches
F:\gomodworkspace\dm>
Describe the feature you'd like:
Go 1.12 is released; it would be better to be compatible with both Go 1.11.x and Go 1.12.
Describe the feature you'd like:
Add a pkg or some functions to generate MySQL/MariaDB binlog events. These events can be used for unit testing or integration testing.
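A sketch of what such a package might start with: encoding the common 19-byte binlog event header (timestamp, type, server-id, event-size, log-pos, flags, all little-endian). Real events would append a type-specific body and optionally a checksum; this is only an illustrative fragment:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// EventHeader models the common 19-byte MySQL binlog event header.
type EventHeader struct {
	Timestamp uint32
	Type      byte
	ServerID  uint32
	EventSize uint32 // total size including this header
	LogPos    uint32 // position of the next event
	Flags     uint16
}

// Encode serializes the header in the little-endian layout used by MySQL.
func (h EventHeader) Encode() []byte {
	buf := make([]byte, 19)
	binary.LittleEndian.PutUint32(buf[0:4], h.Timestamp)
	buf[4] = h.Type
	binary.LittleEndian.PutUint32(buf[5:9], h.ServerID)
	binary.LittleEndian.PutUint32(buf[9:13], h.EventSize)
	binary.LittleEndian.PutUint32(buf[13:17], h.LogPos)
	binary.LittleEndian.PutUint16(buf[17:19], h.Flags)
	return buf
}

func main() {
	h := EventHeader{Type: 0x0f, ServerID: 1, EventSize: 19, LogPos: 23}
	fmt.Printf("%d bytes, type=0x%02x\n", len(h.Encode()), h.Encode()[4]) // 19 bytes, type=0x0f
}
```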
Teachability, Documentation, Adoption, Migration Strategy:
DM-master and DM-worker use toml as the config file format, while task uses yaml. Can we unify them to use one format?
Is your feature request related to a problem? Please describe:
I want to replicate binlog from the oldest binlog position in incremental mode, but I get an error message:
"msg":"mysql-instance(0) must set meta for task-mode incremental"
Describe the feature you'd like:
I think DM should allow an empty meta in incremental mode.
Before asking a question, make sure you have:
How can I remove stopped tasks (some test tasks) from dm-worker metrics?
Please answer these questions before submitting your issue. Thanks!
Loading data that contains " in the table.
Expected: the data is loaded.
Instead: the parser pkg fails to parse the SQL statement because it does not use the correct sql-mode.
Versions of the cluster
DM version (run dmctl -V or dm-worker -V or dm-master -V):
before this commit (https://github.com/pingcap/dm/tree/cd743e564306a9ef103460a0a446359ae711e21d)
Describe the feature you'd like:
Now we deploy and operate DM using Ansible, and we have to do a lot of things manually.
For example, to replicate binlog from a new MySQL instance, we should first edit the Ansible inventory and run an ansible-playbook to add a new dm-worker.
A vision: the user only operates the task configuration, maybe by web or a command-line tool, and everything else is automated.
DM operator on K8S is a good choice
Project example: TiDB-operator
K8s users can use DM more naturally.
Please answer these questions before submitting your issue. Thanks!
What did you do? If possible, provide a recipe for reproducing the error.
What did you expect to see?
What did you see instead?
"result": {
"isCanceled": false,
"errors": [
{
"Type": "UnknownError",
"msg": "invalid connection\ngithub.com/pingcap/errors.AddStack\n\t/home/jenkins/workspace/build_dm/go/pkg/mod/github.com/pingcap/[email protected]/errors.go:174\ngithub.com/pingcap/errors.Trace\n\t/home/jenkins/workspace/build_dm/go/pkg/mod/github.com/pingcap/[email protected]/juju_adaptor.go:12\ngithub.com/pingcap/dm/syncer.(*Conn).querySQL\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/db.go:93\ngithub.com/pingcap/dm/syncer.getTableColumns\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/db.go:351\ngithub.com/pingcap/dm/syncer.(*Syncer).getTableFromDB\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:511\ngithub.com/pingcap/dm/syncer.(*Syncer).getTable\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:537\ngithub.com/pingcap/dm/syncer.(*Syncer).Run\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:1137\ngithub.com/pingcap/dm/syncer.(*Syncer).Process\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:446\ngithub.com/pingcap/dm/syncer.(*Syncer).Resume\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:1916\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1333"
}
],
"detail": null
}
+ second resume-task
"result": {
"isCanceled": false,
"errors": [
{
"Type": "ExecSQL",
"msg": "invalid connection\ngithub.com/pingcap/errors.AddStack\n\t/home/jenkins/workspace/build_dm/go/pkg/mod/github.com/pingcap/[email protected]/errors.go:174\ngithub.com/pingcap/errors.Trace\n\t/home/jenkins/workspace/build_dm/go/pkg/mod/github.com/pingcap/[email protected]/juju_adaptor.go:12\ngithub.com/pingcap/dm/syncer.(*Conn).executeSQLJobImp\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/db.go:222\ngithub.com/pingcap/dm/syncer.(*Conn).executeSQLJob\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/db.go:195\ngithub.com/pingcap/dm/syncer.(*Syncer).sync.func3\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:733\ngithub.com/pingcap/dm/syncer.(*Syncer).sync\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:806\ngithub.com/pingcap/dm/syncer.(*Syncer).Run.func2\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:867\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1333"
},
{
"Type": "ExecSQL",
"msg": "invalid connection\ngithub.com/pingcap/errors.AddStack\n\t/home/jenkins/workspace/build_dm/go/pkg/mod/github.com/pingcap/[email protected]/errors.go:174\ngithub.com/pingcap/errors.Trace\n\t/home/jenkins/workspace/build_dm/go/pkg/mod/github.com/pingcap/[email protected]/juju_adaptor.go:12\ngithub.com/pingcap/dm/syncer.(*Conn).executeSQLJobImp\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/db.go:222\ngithub.com/pingcap/dm/syncer.(*Conn).executeSQLJob\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/db.go:195\ngithub.com/pingcap/dm/syncer.(*Syncer).sync.func3\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:733\ngithub.com/pingcap/dm/syncer.(*Syncer).sync\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:806\ngithub.com/pingcap/dm/syncer.(*Syncer).Run.func2\n\t/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:867\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1333"
}, ... (13 errors with "Type": "ExecSQL" in total)
Versions of the cluster
DM version (run dmctl -V or dm-worker -V or dm-master -V):
```
./dmctl/dmctl -V
Release Version: v1.0.0-alpha-10-g4d01d79
Git Commit Hash: 4d01d79
Git Branch: master
UTC Build Time: 2019-02-11 14:50:57
Go Version: go version go1.11.2 linux/amd64
```
- Upstream MySQL server version:
```
Server version: 5.7.24-27-log Source distribution
```
- Downstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
```
2.1.3
```
- How did you deploy DM: DM-Ansible or manually?
```
DM-Ansible
```
- Other interesting information (system version, hardware config, etc):
Is your feature request related to a problem? Please describe:
When the source data is generated with mydumper -c, loader cannot load it.
Describe the feature you'd like:
Support for compressed dumps.
Describe alternatives you've considered:
Uncompress manually or dump without compression.
Teachability, Documentation, Adoption, Migration Strategy:
Compression is broadly useful for large datasets being migrated.
2019/05/19 03:50:25 loader.go:524: [info] [loader] prepare takes 0.000139 seconds
2019/05/19 03:50:25 loader.go:332: [error] [loader] scan dir[backup/] failed, err[invalid mydumper files for there are no -schema-create.sql files found]
2019/05/19 03:50:25 main.go:84: [fatal] /home/jenkins/workspace/build_tidb_enterprise_tools_master/go/src/github.com/pingcap/tidb-enterprise-tools/loader/loader.go:429: invalid mydumper files for there are no -schema-create.sql files found
/home/jenkins/workspace/build_tidb_enterprise_tools_master/go/src/github.com/pingcap/tidb-enterprise-tools/loader/loader.go:333:
Is your feature request related to a problem? Please describe:
Support dropping all DDL assertions in DM, like pingcap/tidb#9367, to reduce replication breakage, as @morgo suggested.
Describe alternatives you've considered:
Remove all related sub-AST nodes from the AST of the DDL statement.
Is your feature request related to a problem? Please describe:
The id field in the loader/syncer unit checkpoint table is char(32)/varchar(32). As source-id is defined by the user without any length restriction, this may lead to: error initialize checkpoint: Error 1406: Data too long for column 'id' at row 1
Describe the feature you'd like:
Add a source-id length check in check-task, and find out whether more restrictions should be added.
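The proposed check-task validation could be sketched as follows; checkSourceID is a hypothetical helper, and the limit is taken from the char(32)/varchar(32) columns mentioned above:

```go
package main

import "fmt"

// maxSourceIDLen mirrors the checkpoint-table column width (char(32)/varchar(32)).
const maxSourceIDLen = 32

// checkSourceID validates a user-supplied source-id before the task starts,
// so the failure surfaces in check-task instead of at checkpoint init.
func checkSourceID(sourceID string) error {
	if len(sourceID) == 0 {
		return fmt.Errorf("source-id must not be empty")
	}
	if len(sourceID) > maxSourceIDLen {
		return fmt.Errorf("source-id %q is %d chars, longer than the allowed %d",
			sourceID, len(sourceID), maxSourceIDLen)
	}
	return nil
}

func main() {
	fmt.Println(checkSourceID("mysql-replica-01")) // <nil>
	fmt.Println(checkSourceID("an-extremely-long-source-identifier-over-32-chars"))
}
```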
Is your feature request related to a problem? Please describe:
I'm always frustrated that we must start a task or subtask manually after we restart dm-worker.
The essential cause is that neither dm-master nor dm-worker stores the task/subtask configuration in remote storage or on local disk.
Describe the feature you'd like:
After we restart dm-worker, tasks and subtasks should be restarted automatically.
DM should store task/subtask information in remote storage or on local disk.
Describe alternatives you've considered:
Note in advance that these solutions are unrelated to HA of the dm-master/dm-worker processes.
S1: store the task configuration in dm-master
S2: store the subtask configuration on dm-worker's disk
To keep dm-master simple, we would choose S2; the benefits are:
Is your feature request related to a problem? Please describe:
related to #44
Describe the feature you'd like:
Is your feature request related to a problem? Please describe:
Currently, we need the following assumptions to make the relay unit work correctly:
But these assumptions may not always be satisfied.
Describe the feature you'd like:
We can change the assumption to the following:
In other words, the correctness of the relay unit relies on the sending behavior of the master, not on the binlog file on the master.
In practice, we can use dummy events to fill the relay log file when no corresponding events are received from the master.
Please answer these questions before submitting your issue. Thanks!
Ran the task.
Expected: the task keeps running.
Instead: the task paused frequently because of "invalid connection".
Versions of the cluster
DM version (run dmctl -V or dm-worker -V or dm-master -V):
Up to and including the version below, the problem still exists.
Release Version: v1.0.0-alpha-10-g4d01d79
Git Commit Hash: 4d01d798415e835417bb6db7250c56fa2d964d47
Git Branch: master
UTC Build Time: 2019-02-11 14:50:57
Go Version: go version go1.11.2 linux/amd64
Operation logs
DM-worker's log
2019/01/29 05:27:54.883 subtask.go:213: [error] [subtask] vccs_sharding_merge dm-unit Sync process error with type ExecSQL:
invalid connection
github.com/pingcap/errors.AddStack
/home/jenkins/workspace/build_dm/go/pkg/mod/github.com/pingcap/[email protected]/errors.go:174
github.com/pingcap/errors.Trace
/home/jenkins/workspace/build_dm/go/pkg/mod/github.com/pingcap/[email protected]/juju_adaptor.go:12
github.com/pingcap/dm/syncer.(*Conn).executeSQLJobImp
/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/db.go:242
github.com/pingcap/dm/syncer.(*Conn).executeSQLJob
/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/db.go:195
github.com/pingcap/dm/syncer.(*Syncer).sync.func3
/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:733
github.com/pingcap/dm/syncer.(*Syncer).sync
github.com/pingcap/errors.Trace
/home/jenkins/workspace/build_dm/go/pkg/mod/github.com/pingcap/[email protected]/juju_adaptor.go:12
github.com/pingcap/dm/syncer.(*Conn).executeSQLJobImp
/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/db.go:242
github.com/pingcap/dm/syncer.(*Conn).executeSQLJob
/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/db.go:195
github.com/pingcap/dm/syncer.(*Syncer).sync.func3
/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:733
github.com/pingcap/dm/syncer.(*Syncer).sync
/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:806
github.com/pingcap/dm/syncer.(*Syncer).Run.func2
/home/jenkins/workspace/build_dm/go/src/github.com/pingcap/dm/syncer/syncer.go:867
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1333
What I have done:
$ tar -xzvf dm-ansible-latest.tar.gz
$ mv dm-ansible-latest dm-ansible
$ cd /home/tidb/dm-ansible
$ sudo pip install -r ./requirements.txt
$ ansible --version
What I expect to see:
ansible 2.5.0
What I actually see:
Traceback (most recent call last):
File "/bin/ansible", line 67, in <module>
import ansible.constants as C
File "/usr/lib/python2.7/site-packages/ansible/constants.py", line 17, in <module>
from ansible.config.manager import ConfigManager, ensure_type, get_ini_config_value
File "/usr/lib/python2.7/site-packages/ansible/config/manager.py", line 16, in <module>
from yaml import load as yaml_load
ImportError: No module named yaml
Fixed by:
$ sudo pip install pyyaml --ignore-installed
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
Looking in indexes: http://mirrors.aliyun.com/pypi/simple/
Collecting pyyaml
Downloading http://mirrors.aliyun.com/pypi/packages/9e/a3/1d13970c3f36777c583f136c136f804d70f500168edc1edea6daa7200769/PyYAML-3.13.tar.gz (270kB)
100% |████████████████████████████████| 276kB 3.4MB/s
Installing collected packages: pyyaml
Running setup.py install for pyyaml ... done
Successfully installed pyyaml-3.13
Maybe we should make sure pyyaml
is up to date.
Please answer these questions before submitting your issue. Thanks!
What did you do? If possible, provide a recipe for reproducing the error.
What did you expect to see?
What did you see instead?
Versions of the cluster
DM version (run dmctl -V or dm-worker -V or dm-master -V):
(paste DM version here; ensure the versions of dmctl, DM-worker and DM-master are the same)
Upstream MySQL/MariaDB server version:
(paste upstream MySQL/MariaDB server version here)
Downstream TiDB cluster version (execute SELECT tidb_version(); in a MySQL client):
(paste TiDB cluster version here)
How did you deploy DM: DM-Ansible or manually?
(leave DM-Ansible or manually here)
Other interesting information (system version, hardware config, etc):
current status of DM cluster (execute query-status in dmctl)
Operation logs
- dm-worker.log for every DM-worker instance if possible
- dm-master.log if possible
Configuration of the cluster and the task
- dm-worker.toml for every DM-worker instance if possible
- dm-master.toml for DM-master if possible
- task.yaml if possible
- inventory.ini if deployed by DM-Ansible
Screenshot/exported-PDF of Grafana dashboard or metrics' graph in Prometheus for DM if possible
We already run more than 20 workers in production. We hope dmctl can add a batch task overview, similar to supervisor's task management, showing only the task name and task status. The current query-status output is too verbose; if I want to see which tasks have problems, I can only rely on alerting.
Describe the feature you'd like:
What information does DM currently show to users?
dmctl:
- query-status: queries basic information of the task, including some complex and unclear error messages
- show-ddl-lock: ...
- query-error: ...
Grafana: many monitoring graphs that are hard to make sense of
What are the disadvantages of the above methods? The lack of contextual information leads to problems that are incomprehensible or must be inferred.
We need a way to show the system or task running status in a natural way, like a straightforward view of the speed of data flow, the key events, and where they happen.
There may be two kinds of work
Please answer these questions before submitting your issue. Thanks!
DM fails to load a dump of WordPress, because WordPress uses DEFAULT '0000-00-00 00:00:00'
for many datetime columns, and the default SQL mode includes NO_ZERO_DATE.
DM should be able to successfully load any schema files that were successfully loaded into and exported from an upstream instance.
This is handled in mysqldump by writing SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO'
to the beginning of the dump file and then writing SET SQL_MODE=@OLD_SQL_MODE
at the end of the dump file (https://github.com/mysql/mysql-server/blob/5.7/client/mysqldump.c#L745).
This is handled in mydumper by having the loader execute SET SQL_MODE='NO_AUTO_VALUE_ON_ZERO'
before loading data (https://bugs.launchpad.net/mydumper/+bug/1124106).
The most straightforward fix for this issue would be to add functionality to loader/db.go to execute SET SQL_MODE='NO_AUTO_VALUE_ON_ZERO'
before executing any DDL.
2019/06/18 22:49:36.798 db.go:169: [warning] [exec][sql]CREATE TABLE `wp_comments` (`comment_ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,`comment_post_I[error]Error 1067: Invalid default value for 'comment_date'
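The fix proposed above could look roughly like the following; withSQLMode is a hypothetical helper, not the actual loader/db.go API, and it mirrors the mysqldump approach of setting and restoring the session mode:

```go
package main

import "fmt"

// withSQLMode wraps a batch of DDL statements so they execute under a
// permissive sql_mode and the original session mode is restored afterwards,
// mirroring what mysqldump writes into its dump files.
func withSQLMode(mode string, ddls []string) []string {
	out := make([]string, 0, len(ddls)+2)
	out = append(out, "SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='"+mode+"'")
	out = append(out, ddls...)
	out = append(out, "SET SQL_MODE=@OLD_SQL_MODE")
	return out
}

func main() {
	stmts := withSQLMode("NO_AUTO_VALUE_ON_ZERO",
		[]string{"CREATE TABLE t (d datetime DEFAULT '0000-00-00 00:00:00')"})
	for _, s := range stmts {
		fmt.Println(s)
	}
}
```

The loader would execute the returned statements in one session so the mode change actually covers the DDL.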
Versions of the cluster
DM version (run dmctl -V or dm-worker -V or dm-master -V):
Release Version: v1.0.0-alpha-94-g1173d26
Git Commit Hash: 1173d269309c79f45b8d0897247e22f0a714c39d
Git Branch: master
UTC Build Time: 2019-06-14 08:37:46
Go Version: go version go1.12 linux/amd64
Upstream MySQL/MariaDB server version:
mysqld Ver 5.7.26 for Linux on x86_64 (MySQL Community Server (GPL))
Downstream TiDB cluster version (execute SELECT tidb_version(); in a MySQL client):
Release Version: v3.0.0-rc.1-201-gb0d6c5b35
Git Commit Hash: b0d6c5b35bf8faa33ae80ec290ba27c5d243a74e
Git Branch: master
UTC Build Time: 2019-06-18 08:20:54
GoVersion: go version go1.12 linux/amd64
Race Enabled: false
TiKV Min Version: 2.1.0-alpha.1-ff3dd160846b7d1aed9079c389fc188f7f5ea13e
Check Table Before Drop: false
Is your feature request related to a problem? Please describe:
In dm-worker.log, when mydumper exports data, the password appears in plaintext; it should be masked with XXXXX.
Describe the feature you'd like:
Describe alternatives you've considered:
Teachability, Documentation, Adoption, Migration Strategy:
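A minimal sketch of masking the credential before logging the mydumper command line; the flag name and regexp are assumptions, not DM's actual invocation:

```go
package main

import (
	"fmt"
	"regexp"
)

// passwordRe matches a --password flag and its value in a logged command
// line. The flag spelling is an assumption; adjust to the real arguments
// DM passes to mydumper.
var passwordRe = regexp.MustCompile(`(--password[= ])\S+`)

// maskPassword hides the credential before the command line is logged.
func maskPassword(cmd string) string {
	return passwordRe.ReplaceAllString(cmd, "${1}XXXXX")
}

func main() {
	fmt.Println(maskPassword("mydumper --host 127.0.0.1 --password s3cret -B db"))
	// mydumper --host 127.0.0.1 --password XXXXX -B db
}
```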
Hi,
when I use syncer, my source DB executed a "REPLACE INTO" statement, and then syncer broke. The error log is:
2019/02/18 17:10:43 db.go:130: [warning] [exec][sql]REPLACE INTO `sytms_prod`.`system_user_role_4s` (`id`,`user_id`,`role_id`) VALUES (?,?,?);[args][33576 15258 32][error]Error 1452: Cannot add or update a child row: a foreign key constraint fails (`sytms_prod`.`system_user_role_4s`, CONSTRAINT `system_user_role_4s_ibfk_2` FOREIGN KEY (`user_id`) REFERENCES `system_user_4s` (`id`) ON DELETE CASCADE ON UPDATE CASCADE)
2019/02/18 17:10:43 db.go:102: [error] [exec][sql][REPLACE INTO `sytms_prod`.`system_user_role_4s` (`id`,`user_id`,`role_id`) VALUES (?,?,?);][args][[33576 15258 32]][error]Error 1452: Cannot add or update a child row: a foreign key constraint fails (`sytms_prod`.`system_user_role_4s`, CONSTRAINT `system_user_role_4s_ibfk_2` FOREIGN KEY (`user_id`) REFERENCES `system_user_4s` (`id`) ON DELETE CASCADE ON UPDATE CASCADE)
2019/02/18 17:10:43 syncer.go:502: [fatal] Error 1452: Cannot add or update a child row: a foreign key constraint fails (`sytms_prod`.`system_user_role_4s`, CONSTRAINT `system_user_role_4s_ibfk_2` FOREIGN KEY (`user_id`) REFERENCES `system_user_4s` (`id`) ON DELETE CASCADE ON UPDATE CASCADE)
/home/jenkins/workspace/build_tidb_enterprise_tools_master/go/src/github.com/pingcap/tidb-enterprise-tools/syncer/db.go:136:
/home/jenkins/workspace/build_tidb_enterprise_tools_master/go/src/github.com/pingcap/tidb-enterprise-tools/syncer/db.go:103:
I executed the SQL manually and it was OK. Both the source and target DBs are MySQL, not TiDB.
Please answer these questions before submitting your issue. Thanks!
What did you do? If possible, provide a recipe for reproducing the error.
2019/01/22 18:57:43.871 main.go:38: [error] parse cmd flags err flag provided but not defined: -log-rotate
What did you expect to see?
What did you see instead?
Versions of the cluster
DM version (run dmctl -V or dm-worker -V or dm-master -V):
./dmctl -V
Release Version: v1.0.0-alpha-1-g6b61fa7
Git Commit Hash: 6b61fa7
Git Branch: master
UTC Build Time: 2019-01-21 10:04:51
Go Version: go version go1.11.2 linux/amd64
./dm-master -V
Release Version: v1.0.0-alpha-1-g6b61fa7
Git Commit Hash: 6b61fa7
Git Branch: master
UTC Build Time: 2019-01-21 10:04:57
Go Version: go version go1.11.2 linux/amd64
./dm-worker -V
Release Version: v1.0.0-alpha-1-g6b61fa7
Git Commit Hash: 6b61fa7
Git Branch: master
UTC Build Time: 2019-01-21 10:04:54
Go Version: go version go1.11.2 linux/amd64
Upstream MySQL/MariaDB server version:
Server version: 5.6.27-log MySQL Community Server (GPL)
- Downstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
```
(paste TiDB cluster version here)
```
- How did you deploy DM: DM-Ansible or manually?
```
DM-Ansible
```
- Other interesting information (system version, hardware config, etc):
It seems that pkg/parser/comment_test.go contains a typo?
pkg/parser/common.go
pkg/parser/comment_test.go
Before asking a question, make sure you have:
Please answer these questions before submitting your issue. Thanks!
Replaced a DM-worker instance with another DM-worker instance with the same source-id, but DM reports "ghost table not found" when replicating an online DDL operation.
Expected: the task keeps running.
Instead: the task is paused.
Versions of the cluster
DM version (run dmctl -V or dm-worker -V or dm-master -V):
any version before commit `1173d269309c79f45b8d0897247e22f0a714c39d`
Describe the feature you'd like:
We want to cover the most important logic paths in DM and increase code coverage as much as possible.
P0, P1, P2 represent the priority; P0 is the highest and P2 the lowest (P0 > P1 > P2).
Teachability Strategy:
We want to finish this task in several steps:
- add tests to cover the most important logic paths; tests not yet included in DM are listed as follows
- add more failpoint injections (.name suffix compatibility, is-sharding)
- increase code coverage, mainly based on the coverage of each source file
Is your feature request related to a problem? Please describe:
When running a dump unit, it outputs its log to mydumper-{task-name}.log.
Describe the feature you'd like:
Describe alternatives you've considered:
Teachability, Documentation, Adoption, Migration Strategy:
It looks like simply building DM requires a running mysqld for some reason. Maybe for a test suite, maybe for something else? This should not be a requirement just to build, or at a minimum it ought to be better documented.
Is the solution as easy as removing "test" from the "build" target in the Makefile?
Is your feature request related to a problem? Please describe:
Currently the coverage rate from the same code can differ, even by a 2% increase or decrease between runs. For example, we have different coverage in:
https://coveralls.io/builds/21662660/source?filename=syncer/db.go#L280
https://coveralls.io/builds/21662523/source?filename=syncer/db.go#L280
https://coveralls.io/builds/21700081/source?filename=loader/db.go
https://coveralls.io/builds/21662523/source?filename=loader/db.go
Describe the feature you'd like:
We should get the same coverage rate after we run test case on the same commit/code.
Is your feature request related to a problem? Please describe:
Now, when a SQL statement is executed through DM, we can't easily identify in the downstream whether it came from DM; relying only on the connection ID is a bit hard.
Describe the feature you'd like:
There are two steps to achieve this:
In DM, we can do step 1 to convert a SQL statement into something like:
/* ApplicationName=DM v1.0.0-alpha-g3f4c0d8 */ INSERT INTO db.tbl VALUES (1)
Describe alternatives you've considered:
Another method is to make comparing connection IDs easier, so that the
TiDB server can distinguish whether a SQL statement comes from a specified DM task.
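Step 1 above could be sketched as follows; annotate is a hypothetical helper, and the version string is a placeholder taken from the example:

```go
package main

import "fmt"

// annotate prepends an application comment to a SQL statement so the
// downstream can attribute it to DM. The comment format follows the
// example above.
func annotate(sql, version string) string {
	return fmt.Sprintf("/* ApplicationName=DM %s */ %s", version, sql)
}

func main() {
	fmt.Println(annotate("INSERT INTO db.tbl VALUES (1)", "v1.0.0-alpha-g3f4c0d8"))
	// /* ApplicationName=DM v1.0.0-alpha-g3f4c0d8 */ INSERT INTO db.tbl VALUES (1)
}
```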
We started a task using the YAML below. When we restart this worker, a panic occurs.
---
name: p17_incr4 # global unique
task-mode: incremental # full/incremental/all
is-sharding: false # whether multiple dm-workers do one sharding job
meta-schema: "dm_meta" # meta schema in the downstream database to store meta information of dm
remove-meta: false # remove meta from the downstream database; currently we delete checkpoint and online ddl information
enable-heartbeat: false # whether to enable heartbeat for calculating lag between master and syncer
# timezone: "Asia/Shanghai" # target database timezone, all timestamp event in binlog will translate to format time based on this timezone, default use local timezone
target-database:
host: "10.19.XX.XX"
port: 4000
user: "root"
password: "XXXXX"
mysql-instances: # one or more source database, config more source database for sharding merge
-
source-id: "10.19.65.17" # unique in all instances, used as id when save checkpoints, configs, etc.
# binlog pos used to as start pos for syncer, for different task-mode, this maybe used or not
# `full` / `all`:
# never be used
# `incremental`:
# if `remove-meta` is true, this will be used
# else if checkpoints already exists in `meta-schema`, this will not be used
# otherwise, this will be used
meta:
binlog-name: mysql-bin.004429
binlog-pos: 577611518
route-rules: ["user-route-rules-schema1"]
filter-rules: ["user-filter-1","user-filter-2"]
#column-mapping-rules: ["instance-1"]
black-white-list: "instance"
# `mydumper-config-name` and `mydumper` should only set one
mydumper-config-name: "global" # ref `mydumpers` config
# `loader-config-name` and `loader` should only set one
loader-config-name: "global" # ref `loaders` config
syncer-config-name: "global" # ref `syncers` config
# other common configs shared by all instances
routes: # schema/table route mapping
user-route-rules-schema1:
schema-pattern: "db_lfds"
target-schema: "db_lfds"
filters: # filter rules, mysql instance can ref rules in it
user-filter-1:
schema-pattern: "db_lfds"
events: ["truncate table", "drop table"] # ignore truncate/drop table ddl
action: Ignore
user-filter-2:
schema-pattern: "db_lfds"
table-pattern: "~.*"
events: ["all dml"] # only do all DML events
action: Do
black-white-list:
instance:
do-dbs: ["db_lfds"]
ignore-dbs: ["mysql", "information_schema","performance_schema"]
do-tables:
- db-name: "db_lfds"
tbl-name: "~.*"
column-mappings: # column mapping rules, mysql instance can ref rules in it
instance-1:
schema-pattern: "test_*"
table-pattern: "t_*"
expression: "partition id" # handle sharding partition id
source-column: "id"
target-column: "id"
arguments: ["1", "test_", "t_"]
instance-2:
schema-pattern: "test_*"
table-pattern: "t_*"
expression: "partition id" # handle sharding partition id
source-column: "id"
target-column: "id"
arguments: ["2", "test_", "t_"]
mydumpers: # mydumper process unit specific configs, mysql instance can ref one config in it
global:
mydumper-path: "./bin/mydumper"
threads: 4
chunk-filesize: 64
skip-tz-utc: true
#extra-args: "-B test -T t1,t2 --no-locks"
extra-args: "-B db_lfds --no-locks"
#extra-args: "-x db_lfds.*|db_lums.*|lkl_job.* --no-locks"
loaders: # loader process unit specific configs, mysql instance can ref one config in it
global:
pool-size: 16
dir: "./dumped_data"
syncers: # syncer process unit specific configs, mysql instance can ref one config in it
global:
worker-count: 16
batch: 100
max-retry: 100
safe-mode: false
Versions of the cluster
[tidb@devops-deploy-8014 dmctl]$ ./dmctl -V
Release Version: v1.0.0-alpha-46-g6855ea4
Git Commit Hash: 6855ea4e40bb5e3775709054a59a55c628a0922f
Git Branch: master
UTC Build Time: 2019-04-02 12:41:38
Go Version: go version go1.12 linux/amd64
[tidb@tidb2 bin]$ ./dm-master -V
Release Version: v1.0.0-alpha-46-g6855ea4
Git Commit Hash: 6855ea4e40bb5e3775709054a59a55c628a0922f
Git Branch: master
UTC Build Time: 2019-04-02 12:41:48
Go Version: go version go1.12 linux/amd64
[tidb@tidb3 bin]$ ./dm-worker -V
Release Version: v1.0.0-alpha-46-g6855ea4
Git Commit Hash: 6855ea4e40bb5e3775709054a59a55c628a0922f
Git Branch: master
UTC Build Time: 2019-04-02 12:41:42
Go Version: go version go1.12 linux/amd64
dm-worker error log
goroutine 16 [running]:
github.com/pingcap/dm/dm/worker.(*Worker).Start(0xc0002d59d0)
/home/jenkins/workspace/build_dm_master/go/src/github.com/pingcap/dm/dm/worker/worker.go:119 +0x4ea
github.com/pingcap/dm/dm/worker.(*Server).Start.func1(0xc000188c00)
/home/jenkins/workspace/build_dm_master/go/src/github.com/pingcap/dm/dm/worker/server.go:83 +0x5b
created by github.com/pingcap/dm/dm/worker.(*Server).Start
/home/jenkins/workspace/build_dm_master/go/src/github.com/pingcap/dm/dm/worker/server.go:80 +0x13c
panic: restore task p17_incr4 ({"is-sharding":false,"online-ddl-scheme":"","case-sensitive":false,"name":"p17_incr4","mode":"incremental","ignore-checking-items":null,"sou
rce-id":"10.19.65.17","server-id":19040801,"flavor":"mysql","meta-schema":"dm_meta","remove-meta":false,"disable-heartbeat":true,"heartbeat-update-interval":1,"heartbeat-r
eport-interval":10,"enable-heartbeat":false,"meta":{"BinLogName":"mysql-bin.004429","BinLogPos":577611518},"Timezone":"","binlog-type":"local","relay-dir":"/data1/dm-worke
r1/deploy/relay_log","from":{"host":"10.19.65.17","port":54321,"user":"dmworker","max-allowed-packet":67108864},"to":{"host":"10.19.65.35","port":4000,"user":"root","max-a
llowed-packet":67108864},"route-rules":[{"schema-pattern":"db_lfds","table-pattern":"","target-schema":"db_lfds","target-table":""}],"filter-rules":[{"schema-pattern":"db_
lfds","table-pattern":"","events":["truncate table","drop table"],"sql-pattern":null,"action":"Ignore"},{"schema-pattern":"db_lfds","table-pattern":"","events":["all dml"]
,"sql-pattern":null,"action":"Do"}],"mapping-rule":[],"black-white-list":{"do-tables":[{"db-name":"db_lfds","tbl-name":"~.*"}],"do-dbs":["db_lfds"],"ignore-tables":null,"i
gnore-dbs":["mysql","information_schema","performance_schema"]},"mydumper-path":"./bin/mydumper","threads":4,"chunk-filesize":64,"skip-tz-utc":true,"extra-args":"-B db_lfd
s --no-locks","pool-size":16,"dir":"./dumped_data","meta-file":"","worker-count":16,"batch":100,"max-retry":100,"auto-fix-gtid":false,"enable-gtid":true,"disable-detect":f
alse,"safe-mode":false,"enable-ansi-quotes":false,"log-level":"info","log-file":"/data1/dm-worker1/deploy/log/dm-worker.log","log-rotate":"","pprof-addr":"","status-addr":
"","config-file":""}) in worker starting: sub task p17_incr4 init dm-unit error pattern db_lfds already exists
github.com/pingcap/errors.AlreadyExistsf
/go/pkg/mod/github.com/pingcap/[email protected]/juju_adaptor.go:97
github.com/pingcap/tidb-tools/pkg/table-rule-selector.(*trieSelector).insert
/go/pkg/mod/github.com/pingcap/[email protected]+incompatible/pkg/table-rule-selector/trie_selector.go:186
github.com/pingcap/tidb-tools/pkg/table-rule-selector.(*trieSelector).insertSchema
/go/pkg/mod/github.com/pingcap/[email protected]+incompatible/pkg/table-rule-selector/trie_selector.go:119
github.com/pingcap/tidb-tools/pkg/table-rule-selector.(*trieSelector).Insert
/go/pkg/mod/github.com/pingcap/[email protected]+incompatible/pkg/table-rule-selector/trie_selector.go:109
github.com/pingcap/tidb-tools/pkg/binlog-filter.(*BinlogEvent).AddRule
/go/pkg/mod/github.com/pingcap/[email protected]+incompatible/pkg/binlog-filter/filter.go:155
github.com/pingcap/tidb-tools/pkg/binlog-filter.NewBinlogEvent
/go/pkg/mod/github.com/pingcap/[email protected]+incompatible/pkg/binlog-filter/filter.go:134
github.com/pingcap/dm/syncer.(*Syncer).Init
/home/jenkins/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/syncer.go:258
github.com/pingcap/dm/dm/worker.(*SubTask).Init
/home/jenkins/workspace/build_dm_master/go/src/github.com/pingcap/dm/dm/worker/subtask.go:111
github.com/pingcap/dm/dm/worker.(*Worker).StartSubTask
/home/jenkins/workspace/build_dm_master/go/src/github.com/pingcap/dm/dm/worker/worker.go:195
github.com/pingcap/dm/dm/worker.(*Worker).Start
/home/jenkins/workspace/build_dm_master/go/src/github.com/pingcap/dm/dm/worker/worker.go:118
github.com/pingcap/dm/dm/worker.(*Server).Start.func1
/home/jenkins/workspace/build_dm_master/go/src/github.com/pingcap/dm/dm/worker/server.go:83
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1337
insert into schema selector
add rule &{SchemaPattern:db_lfds TablePattern: Events:[all dml] SQLPattern:[] sqlRegularExp:<nil> Action:Do} into binlog event filter
initial rule &{SchemaPattern:db_lfds TablePattern: Events:[all dml] SQLPattern:[] sqlRegularExp:<nil> Action:Do} in binlog event filter
task read from the meta file and formatted
{
"is-sharding":false,
"online-ddl-scheme":"",
"case-sensitive":false,
"name":"p17_incr4",
"mode":"incremental",
"ignore-checking-items":null,
"source-id":"10.19.65.17",
"server-id":19040801,
"flavor":"mysql",
"meta-schema":"dm_meta",
"remove-meta":false,
"disable-heartbeat":true,
"heartbeat-update-interval":1,
"heartbeat-report-interval":10,
"enable-heartbeat":false,
"meta":{
"BinLogName":"mysql-bin.004429",
"BinLogPos":577611518
},
"Timezone":"",
"binlog-type":"local",
"relay-dir":"/data1/dm-worker1/deploy/relay_log",
"from":{
"host":"10.19.65.17",
"port":54321,
"user":"dmworker",
"max-allowed-packet":67108864
},
"to":{
"host":"10.19.65.35",
"port":4000,
"user":"root",
"max-allowed-packet":67108864
},
"route-rules":[
{
"schema-pattern":"db_lfds",
"table-pattern":"",
"target-schema":"db_lfds",
"target-table":""
}
],
"filter-rules":[
{
"schema-pattern":"db_lfds",
"table-pattern":"",
"events":[
"truncate table",
"drop table"
],
"sql-pattern":null,
"action":"Ignore"
},
{
"schema-pattern":"db_lfds",
"table-pattern":"",
"events":[
"all dml"
],
"sql-pattern":null,
"action":"Do"
}
],
"mapping-rule":[
],
"black-white-list":{
"do-tables":[
{
"db-name":"db_lfds",
"tbl-name":"~.*"
}
],
"do-dbs":[
"db_lfds"
],
"ignore-tables":null,
"ignore-dbs":[
"mysql",
"information_schema",
"performance_schema"
]
},
"mydumper-path":"./bin/mydumper",
"threads":4,
"chunk-filesize":64,
"skip-tz-utc":true,
"extra-args":"-B db_lfds --no-locks",
"pool-size":16,
"dir":"./dumped_data",
"meta-file":"",
"worker-count":16,
"batch":100,
"max-retry":100,
"auto-fix-gtid":false,
"enable-gtid":true,
"disable-detect":false,
"safe-mode":false,
"enable-ansi-quotes":false,
"log-level":"info",
"log-file":"/data1/dm-worker1/deploy/log/dm-worker.log",
"log-rotate":"",
"pprof-addr":"",
"status-addr":"",
"config-file":""
}
{
"schema-pattern":"db_lfds",
"table-pattern":"", # "~.*" is lost, so the two rules' schema-pattern and table-pattern are duplicates
"events":[
"all dml"
],
"sql-pattern":null,
"action":"Do"
}
Adding build instruction section to README.md
Is your feature request related to a problem? Please describe:
support through the public interface:
Describe the feature you'd like:
Describe alternatives you've considered:
Teachability, Documentation, Adoption, Migration Strategy:
Please answer these questions before submitting your issue. Thanks!
What did you do? If possible, provide a recipe for reproducing the error.
- set `--binlog-annotate-row-event` in upstream MariaDB
- (set `enable-gtid` for DM-worker)
What did you expect to see?
What did you see instead?
With `enable-gtid` set and debug log level:
2019/02/19 21:24:07.708 relay.go:751: [info] [relay] start sync for master(127.0.0.1:3307, 0-1.000001) from GTID set
2019/02/19 21:24:07.718 relay.go:741: [info] [relay] last slave connection id 14
2019/02/19 21:24:07.731 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:0 EventType:RotateEvent ServerID:1 EventSize:43 LogPos:0 Flags:32}
2019/02/19 21:24:07.731 relay.go:366: [info] [relay] rotate to (mysql-bin.000001, 887)
2019/02/19 21:24:07.731 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573719 EventType:FormatDescriptionEvent ServerID:1 EventSize:245 LogPos:249 Flags:0}
2019/02/19 21:24:07.731 relay.go:701: [debug] [relay] the first 4 bytes are [254 98 105 110]
2019/02/19 21:24:07.731 relay.go:729: [info] [relay] binlog file mysql-bin.000001 already has Format_desc event, so ignore it
2019/02/19 21:24:07.731 relay.go:586: [info] [relay] mysql-bin.000001 seek to end (887)
2019/02/19 21:24:07.731 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573719 EventType:MariadbGTIDListEvent ServerID:1 EventSize:25 LogPos:274 Flags:0}
2019/02/19 21:24:07.731 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573719 EventType:MariadbGTIDListEvent ServerID:1 EventSize:25 LogPos:274 Flags:0} (with relay file size 887)
2019/02/19 21:24:07.731 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573719 EventType:MariadbBinLogCheckPointEvent ServerID:1 EventSize:39 LogPos:313 Flags:0}
2019/02/19 21:24:07.731 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573719 EventType:MariadbBinLogCheckPointEvent ServerID:1 EventSize:39 LogPos:313 Flags:0} (with relay file size 887)
2019/02/19 21:24:07.731 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:MariadbGTIDEvent ServerID:1 EventSize:38 LogPos:351 Flags:8}
2019/02/19 21:24:07.731 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573720 EventType:MariadbGTIDEvent ServerID:1 EventSize:38 LogPos:351 Flags:8} (with relay file size 887)
2019/02/19 21:24:07.731 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:QueryEvent ServerID:1 EventSize:88 LogPos:439 Flags:0}
2019/02/19 21:24:07.732 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573720 EventType:QueryEvent ServerID:1 EventSize:88 LogPos:439 Flags:0} (with relay file size 887)
2019/02/19 21:24:07.732 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:MariadbGTIDEvent ServerID:1 EventSize:38 LogPos:477 Flags:8}
2019/02/19 21:24:07.732 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573720 EventType:MariadbGTIDEvent ServerID:1 EventSize:38 LogPos:477 Flags:8} (with relay file size 887)
2019/02/19 21:24:07.732 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:QueryEvent ServerID:1 EventSize:93 LogPos:570 Flags:0}
2019/02/19 21:24:07.732 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573720 EventType:QueryEvent ServerID:1 EventSize:93 LogPos:570 Flags:0} (with relay file size 887)
2019/02/19 21:24:07.732 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:MariadbGTIDEvent ServerID:1 EventSize:38 LogPos:608 Flags:8}
2019/02/19 21:24:07.732 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573720 EventType:MariadbGTIDEvent ServerID:1 EventSize:38 LogPos:608 Flags:8} (with relay file size 887)
2019/02/19 21:24:07.732 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:QueryEvent ServerID:1 EventSize:99 LogPos:707 Flags:0}
2019/02/19 21:24:07.732 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573720 EventType:QueryEvent ServerID:1 EventSize:99 LogPos:707 Flags:0} (with relay file size 887)
2019/02/19 21:24:07.732 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:MariadbGTIDEvent ServerID:1 EventSize:38 LogPos:745 Flags:8}
2019/02/19 21:24:07.732 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573720 EventType:MariadbGTIDEvent ServerID:1 EventSize:38 LogPos:745 Flags:8} (with relay file size 887)
2019/02/19 21:24:07.732 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:QueryEvent ServerID:1 EventSize:104 LogPos:849 Flags:0}
2019/02/19 21:24:07.733 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573720 EventType:QueryEvent ServerID:1 EventSize:104 LogPos:849 Flags:0} (with relay file size 887)
2019/02/19 21:24:07.733 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:MariadbGTIDEvent ServerID:1 EventSize:38 LogPos:887 Flags:8}
2019/02/19 21:24:07.733 relay.go:439: [warning] [relay] skip obsolete event &{Timestamp:1550573720 EventType:MariadbGTIDEvent ServerID:1 EventSize:38 LogPos:887 Flags:8} (with relay file size 887)
2019/02/19 21:24:07.733 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}
2019/02/19 21:24:07.733 relay.go:417: [info] [relay] gap detected from 887 to 959 in mysql-bin.000001, current event &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}
2019/02/19 21:24:07.747 relay.go:263: [info] [relay] start to fill gap in relay log file from (mysql-bin.000001, 887)
2019/02/19 21:24:07.747 relay.go:271: [info] [relay] binlog events in the gap for file relay_binlog/0-1.000001/mysql-bin.000001 not exists in previous sub directory, no need to add a LOG_EVENT_RELAY_LOG_F flag. current GTID set , previous sub directory end GTID set %!s(<nil>)
2019/02/19 21:24:07.756 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:0 EventType:RotateEvent ServerID:1 EventSize:43 LogPos:0 Flags:32}
2019/02/19 21:24:07.756 relay.go:366: [info] [relay] rotate to (mysql-bin.000001, 849)
2019/02/19 21:24:07.756 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573719 EventType:FormatDescriptionEvent ServerID:1 EventSize:245 LogPos:0 Flags:0}
2019/02/19 21:24:07.756 relay.go:701: [debug] [relay] the first 4 bytes are [254 98 105 110]
2019/02/19 21:24:07.757 relay.go:729: [info] [relay] binlog file mysql-bin.000001 already has Format_desc event, so ignore it
2019/02/19 21:24:07.757 relay.go:586: [info] [relay] mysql-bin.000001 seek to end (887)
2019/02/19 21:24:07.757 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}
2019/02/19 21:24:07.757 relay.go:424: [info] [relay] fill gap reaching the end pos (mysql-bin.000001, 1011), current event &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}
2019/02/19 21:24:07.757 relay.go:155: [error] [relay] process exit with error some events missing, current event &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}, lastPos (mysql-bin.000001, 849), current GTID 0-1-4, relay file size 887
github.com/pingcap/dm/relay.(*Relay).process
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/relay/relay.go:443
github.com/pingcap/dm/relay.(*Relay).Process
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/relay/relay.go:152
github.com/pingcap/dm/dm/worker.(*RelayHolder).run
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/dm/worker/relay.go:106
github.com/pingcap/dm/dm/worker.(*RelayHolder).Start.func1
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/dm/worker/relay.go:82
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1333
2019/02/19 21:24:07.757 relay.go:112: [error] process error with type UnknownError:
some events missing, current event &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}, lastPos (mysql-bin.000001, 849), current GTID 0-1-4, relay file size 887
github.com/pingcap/dm/relay.(*Relay).process
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/relay/relay.go:443
github.com/pingcap/dm/relay.(*Relay).Process
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/relay/relay.go:152
github.com/pingcap/dm/dm/worker.(*RelayHolder).run
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/dm/worker/relay.go:106
github.com/pingcap/dm/dm/worker.(*RelayHolder).Start.func1
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/dm/worker/relay.go:82
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1333
Without `enable-gtid` set, still with debug log level:
2019/02/19 21:25:23.489 relay.go:770: [info] [relay] start sync for master (127.0.0.1:3307, 0-1.000001) from (mysql-bin.000001, 887)
2019/02/19 21:25:23.497 relay.go:741: [info] [relay] last slave connection id 17
2019/02/19 21:25:23.506 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:0 EventType:RotateEvent ServerID:1 EventSize:43 LogPos:0 Flags:32}
2019/02/19 21:25:23.506 relay.go:366: [info] [relay] rotate to (mysql-bin.000001, 887)
2019/02/19 21:25:23.506 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573719 EventType:FormatDescriptionEvent ServerID:1 EventSize:245 LogPos:0 Flags:0}
2019/02/19 21:25:23.506 relay.go:701: [debug] [relay] the first 4 bytes are [254 98 105 110]
2019/02/19 21:25:23.506 relay.go:729: [info] [relay] binlog file mysql-bin.000001 already has Format_desc event, so ignore it
2019/02/19 21:25:23.506 relay.go:586: [info] [relay] mysql-bin.000001 seek to end (887)
2019/02/19 21:25:23.506 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}
2019/02/19 21:25:23.507 relay.go:417: [info] [relay] gap detected from 887 to 959 in mysql-bin.000001, current event &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}
2019/02/19 21:25:23.519 relay.go:263: [info] [relay] start to fill gap in relay log file from (mysql-bin.000001, 887)
2019/02/19 21:25:23.530 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:0 EventType:RotateEvent ServerID:1 EventSize:43 LogPos:0 Flags:32}
2019/02/19 21:25:23.530 relay.go:366: [info] [relay] rotate to (mysql-bin.000001, 887)
2019/02/19 21:25:23.530 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573719 EventType:FormatDescriptionEvent ServerID:1 EventSize:245 LogPos:0 Flags:0}
2019/02/19 21:25:23.530 relay.go:701: [debug] [relay] the first 4 bytes are [254 98 105 110]
2019/02/19 21:25:23.530 relay.go:729: [info] [relay] binlog file mysql-bin.000001 already has Format_desc event, so ignore it
2019/02/19 21:25:23.530 relay.go:586: [info] [relay] mysql-bin.000001 seek to end (887)
2019/02/19 21:25:23.531 relay.go:344: [debug] [relay] receive binlog event with header &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}
2019/02/19 21:25:23.531 relay.go:424: [info] [relay] fill gap reaching the end pos (mysql-bin.000001, 1011), current event &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}
2019/02/19 21:25:23.531 relay.go:155: [error] [relay] process exit with error some events missing, current event &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}, lastPos (mysql-bin.000001, 887), current GTID , relay file size 887
github.com/pingcap/dm/relay.(*Relay).process
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/relay/relay.go:443
github.com/pingcap/dm/relay.(*Relay).Process
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/relay/relay.go:152
github.com/pingcap/dm/dm/worker.(*RelayHolder).run
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/dm/worker/relay.go:106
github.com/pingcap/dm/dm/worker.(*RelayHolder).Start.func1
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/dm/worker/relay.go:82
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1333
2019/02/19 21:25:23.531 relay.go:112: [error] process error with type UnknownError:
some events missing, current event &{Timestamp:1550573720 EventType:TableMapEvent ServerID:1 EventSize:52 LogPos:1011 Flags:0}, lastPos (mysql-bin.000001, 887), current GTID , relay file size 887
github.com/pingcap/dm/relay.(*Relay).process
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/relay/relay.go:443
github.com/pingcap/dm/relay.(*Relay).Process
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/relay/relay.go:152
github.com/pingcap/dm/dm/worker.(*RelayHolder).run
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/dm/worker/relay.go:106
github.com/pingcap/dm/dm/worker.(*RelayHolder).Start.func1
/Users/zhangxc/gopath/src/github.com/csuzhangxc/dm/dm/worker/relay.go:82
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1333
Versions of the cluster
DM version (run `dmctl -V`, `dm-worker -V`, or `dm-master -V`):
Release Version: v1.0.0-alpha-10-g4d01d79
Git Commit Hash: 4d01d798415e835417bb6db7250c56fa2d964d47
Git Branch: master
UTC Build Time: 2019-02-11 14:50:57
Go Version: go version go1.11.2 linux/amd64
Upstream MySQL/MariaDB server version:
Server version: 5.5.5-10.1.21-MariaDB-1~jessie mariadb.org binary distribution
some binlog events in upstream MariaDB
mysql> SHOW BINLOG EVENTS IN 'mysql-bin.000001' FROM 313 LIMIT 15;
+------------------+------+---------------+-----------+-------------+------------------------------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+------------------+------+---------------+-----------+-------------+------------------------------------------------------------------------------------------+
| mysql-bin.000001 | 313 | Gtid | 1 | 351 | GTID 0-1-1 |
| mysql-bin.000001 | 351 | Query | 1 | 439 | use `mysql`; TRUNCATE TABLE time_zone |
| mysql-bin.000001 | 439 | Gtid | 1 | 477 | GTID 0-1-2 |
| mysql-bin.000001 | 477 | Query | 1 | 570 | use `mysql`; TRUNCATE TABLE time_zone_name |
| mysql-bin.000001 | 570 | Gtid | 1 | 608 | GTID 0-1-3 |
| mysql-bin.000001 | 608 | Query | 1 | 707 | use `mysql`; TRUNCATE TABLE time_zone_transition |
| mysql-bin.000001 | 707 | Gtid | 1 | 745 | GTID 0-1-4 |
| mysql-bin.000001 | 745 | Query | 1 | 849 | use `mysql`; TRUNCATE TABLE time_zone_transition_type |
| mysql-bin.000001 | 849 | Gtid | 1 | 887 | BEGIN GTID 0-1-5 |
| mysql-bin.000001 | 887 | Annotate_rows | 1 | 959 | INSERT INTO time_zone (Use_leap_seconds) VALUES ('N') |
| mysql-bin.000001 | 959 | Table_map | 1 | 1011 | table_id: 18 (mysql.time_zone) |
| mysql-bin.000001 | 1011 | Write_rows_v1 | 1 | 1046 | table_id: 18 flags: STMT_END_F |
| mysql-bin.000001 | 1046 | Query | 1 | 1116 | COMMIT |
| mysql-bin.000001 | 1116 | Gtid | 1 | 1154 | BEGIN GTID 0-1-6 |
| mysql-bin.000001 | 1154 | Annotate_rows | 1 | 1261 | INSERT INTO time_zone_name (Name, Time_zone_id) VALUES ('Africa/Abidjan', @time_zone_id) |
+------------------+------+---------------+-----------+-------------+------------------------------------------------------------------------------------------+
According to the log and events in the upstream MariaDB, we can see the relay unit did not receive the following event:
| mysql-bin.000001 | 887 | Annotate_rows | 1 | 959 | INSERT INTO time_zone (Use_leap_seconds) VALUES ('N')
It seems events of the `Annotate_rows`
type are not sent to the slave (here, the relay unit) by default; ref http://worklog.askmonty.org/worklog/Server-Sprint/?tid=47.
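Per the worklog referenced above, a MariaDB master only sends `Annotate_rows` events when the replication client requests them via a flag in its `COM_BINLOG_DUMP` command. The sketch below illustrates that flag handshake in Go; the constant value `0x02` follows the MariaDB protocol documentation, and `dumpFlags` is a hypothetical helper, not DM's actual implementation.

```go
package main

import "fmt"

// BinlogSendAnnotateRowsEvent is the COM_BINLOG_DUMP flag a MariaDB
// replication client sets to request Annotate_rows events (value per
// the MariaDB protocol docs; treat as an assumption here).
const BinlogSendAnnotateRowsEvent uint16 = 0x02

// dumpFlags builds the flags field of a COM_BINLOG_DUMP command,
// optionally requesting Annotate_rows events from the master.
func dumpFlags(wantAnnotate bool) uint16 {
	var flags uint16
	if wantAnnotate {
		flags |= BinlogSendAnnotateRowsEvent
	}
	return flags
}

func main() {
	fmt.Printf("flags without annotate: 0x%02x\n", dumpFlags(false))
	fmt.Printf("flags with annotate:    0x%02x\n", dumpFlags(true))
}
```

If the relay unit never sets this flag, the gap between `887` and `959` in the log above is exactly the missing `Annotate_rows` event.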
DM cluster information below:
DM cluster version: latest
PLAY [grafana_servers] ************************************************************************************************************************************************
TASK [start grafana by systemd] ***************************************************************************************************************************************
changed: [grafana]
TASK [wait for grafana up] ********************************************************************************************************************************************
ok: [grafana]
TASK [set_fact] *******************************************************************************************************************************************************
ok: [grafana]
TASK [import grafana data source] *************************************************************************************************************************************
changed: [grafana]
TASK [import grafana dashboards - prepare config] *********************************************************************************************************************
changed: [grafana -> localhost]
TASK [import grafana dashboards - run import script] ******************************************************************************************************************
fatal: [grafana -> localhost]: FAILED! => {"changed": true, "cmd": "python import_grafana_dashboards.py", "delta": "0:00:00.095492", "end": "2019-03-07 10:34:27.356195", "msg": "non-zero return code", "rc": 1, "start": "2019-03-07 10:34:27.260703", "stderr": "Traceback (most recent call last):\n File "import_grafana_dashboards.py", line 95, in \n raise RuntimeError\nRuntimeError", "stderr_lines": ["Traceback (most recent call last):", " File "import_grafana_dashboards.py", line 95, in ", " raise RuntimeError", "RuntimeError"], "stdout": "[load] from <dm.json>:dm_worker\n[import] to [Dm-Cluster]\t............. ERROR: [Errno 113] No route to host", "stdout_lines": ["[load] from <dm.json>:dm_worker", "[import] to [Dm-Cluster]\t............. ERROR: [Errno 113] No route to host"]}
to retry, use: --limit @/home/tidb/dm-ansible/retry_files/start.retry
PLAY RECAP ************************************************************************************************************************************************************
alertmanager : ok=2 changed=1 unreachable=0 failed=0
dm-worker1-1 : ok=2 changed=1 unreachable=0 failed=0
dm-worker1-2 : ok=2 changed=1 unreachable=0 failed=0
dm_master : ok=2 changed=1 unreachable=0 failed=0
grafana : ok=5 changed=3 unreachable=0 failed=1
localhost : ok=0 changed=0 unreachable=0 failed=0
prometheus : ok=2 changed=1 unreachable=0 failed=0
From the error output, running `python import_grafana_dashboards.py` fails with "No route to host", which puzzles me. Could it be that the dm.json file version does not match the import_grafana_dashboards.py version, or is something else wrong? Thanks a lot for your help!
Is your feature request related to a problem? Please describe:
we can add more static check tools to help developers write better code
Describe the feature you'd like:
probably tools list:
Please answer these questions before submitting your issue. Thanks!
What did you do? If possible, provide a recipe for reproducing the error.
I use the DM binlog event filter to filter some binlog events, but I find that it does not work when the event name is configured in uppercase: "ALL DML" does not work, while "all dml" does.
What did you expect to see?
Both "ALL DML" and "all dml" should work.
What did you see instead?
I had to use "all dml" instead of "ALL DML".
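One way to make both spellings work is to normalize the configured event name before matching, as sketched below. This is an illustration of the idea, not DM's actual code; `validEvents` is a hypothetical subset of the accepted event names.

```go
package main

import (
	"fmt"
	"strings"
)

// validEvents is a hypothetical subset of the binlog event names
// the filter accepts, stored in canonical lowercase form.
var validEvents = map[string]bool{
	"all dml":        true,
	"all ddl":        true,
	"truncate table": true,
	"drop table":     true,
}

// normalizeEvent lowercases and trims a configured event name so
// that "ALL DML", "All Dml", and "all dml" all compare equal.
func normalizeEvent(name string) (string, error) {
	n := strings.ToLower(strings.TrimSpace(name))
	if !validEvents[n] {
		return "", fmt.Errorf("unsupported event %q", name)
	}
	return n, nil
}

func main() {
	for _, e := range []string{"ALL DML", "all dml", "Drop Table"} {
		n, err := normalizeEvent(e)
		fmt.Println(n, err)
	}
}
```

With normalization applied at config-load time, the rest of the filter can keep comparing exact strings.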
Versions of the cluster
DM version (run `dmctl -V`, `dm-worker -V`, or `dm-master -V`):
Release Version: v1.0.0-alpha-70-g8bfa3e0
Git Commit Hash: 8bfa3e0
Git Branch: master
UTC Build Time: 2019-05-06 03:05:07
Go Version: go version go1.12 linux/amd64
```
- Upstream MySQL/MariaDB server version:
```
5.7.20-log
```
- Downstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
```
Release Version: v2.1.5-1-g0bd6b1b
Git Commit Hash: 0bd6b1b60816bf8a2022e657d7a472311548d82e
Git Branch: release-2.1
UTC Build Time: 2019-02-28 08:17:03
GoVersion: go version go1.11.2 linux/amd64
Race Enabled: false
TiKV Min Version: 2.1.0-alpha.1-ff3dd160846b7d1aed9079c389fc188f7f5ea13e
Check Table Before Drop: false
```
- How did you deploy DM: DM-Ansible or manually?
```
DM-Ansible
```
A question: in our business scenario, the default "partition id" strategy provided by DM does not meet our needs, so we want to customize the column merging strategy. Our plan was to configure the tidb-tools package as a local package and then modify its column-mapping sub-package. The go.mod of dm is configured as follows:
module github.com/pingcap/dm
require (
github.com/BurntSushi/toml v0.3.1
github.com/DATA-DOG/go-sqlmock v1.3.3
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e
github.com/go-sql-driver/mysql v1.4.1
github.com/gogo/protobuf v1.2.0
github.com/golang/protobuf v1.3.1
github.com/pingcap/check v0.0.0-20190102082844-67f458068fc8
github.com/pingcap/errors v0.11.4
github.com/pingcap/failpoint v0.0.0-20190422094118-d8535965f59b
github.com/pingcap/parser v0.0.0-20190427000002-f3ecae036b23
github.com/pingcap/tidb v0.0.0-20190429084711-cd10bca66609
github.com/pingcap/tidb-tools v3.0.0-beta.1.0.20190522080351-b06622ae57fd+incompatible
github.com/prometheus/client_golang v0.9.3
github.com/prometheus/common v0.4.1
github.com/satori/go.uuid v1.2.0
github.com/siddontang/go v0.0.0-20180604090527-bdc77568d726
github.com/siddontang/go-mysql v0.0.0-20190312052122-c6ab05a85eb8
github.com/sirupsen/logrus v1.4.2
github.com/soheilhy/cmux v0.1.4
github.com/spf13/cobra v0.0.4
github.com/syndtr/goleveldb v1.0.0
golang.org/x/sys v0.0.0-20190523142557-0e01d883c5c5
google.golang.org/grpc v1.17.0
gopkg.in/natefinch/lumberjack.v2 v2.0.0
gopkg.in/yaml.v2 v2.2.2
)
replace github.com/pingcap/tidb-tools v3.0.0-beta.1.0.20190522080351-b06622ae57fd+incompatible => ../extends/tidb-tools
The problem we are hitting is that after pointing tidb-tools at the local package, compiling dm reports the following error:
(base) yzwa0122a:dm roro.p.tu$ make build
gofmt (simplify)
GO111MODULE=off go get golang.org/x/lint/golint
golint
GO111MODULE=on go build -o bin/shadow golang.org/x/tools/go/analysis/passes/shadow/cmd/shadow
vet
# github.com/pingcap/tidb/store/tikv
../../go/pkg/mod/github.com/pingcap/[email protected]/store/tikv/split_region.go:87:20: s.pdClient.ScatterRegion undefined (type pd.Client has no field or method ScatterRegion)
../../go/pkg/mod/github.com/pingcap/[email protected]/store/tikv/split_region.go:108:26: s.pdClient.GetOperator undefined (type pd.Client has no field or method GetOperator)
make: *** [vet] Error 2
The current build environment is as follows:
(base) yzwa0122a:dm roro.p.tu$ go env
GOARCH="amd64"
GOBIN="/usr/local/bin"
GOCACHE="/Users/roro.p.tu/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/roro.p.tu/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.5/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.5/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/roro.p.tu/Downloads/dm/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/bv/clpjyngs47jdqp5f57lp7c7sgdkft7/T/go-build642745298=/tmp/go-build -gno-record-gcc-switches -fno-common"
Is your feature request related to a problem? Please describe:
we have some problems in our RPC framework:
Describe the feature you'd like:
Is your feature request related to a problem? Please describe:
when the original task config file is lost, or when debugging the migration task, we may want to retrieve the configuration of a running task from the DM cluster.
Describe the feature you'd like:
retrieve the configuration of the running task from the DM cluster
600
Contact the mentors: #tidb-challenge-program channel in TiDB Community Slack Workspace
Is your feature request related to a problem? Please describe:
now, we need to set `flavor`
for DM-worker manually, but it may be set incorrectly.
Describe the feature you'd like:
set `flavor`
automatically if it can be detected
Teachability, Documentation, Adoption, Migration Strategy:
use `select @@version`
or other variables to detect the server type
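The detection itself can be a simple string check on the result of `SELECT @@version`: MariaDB embeds "MariaDB" in its version string (e.g. the `5.5.5-10.1.21-MariaDB-1~jessie` seen in the issue above), while MySQL reports something like `5.7.20-log`. A minimal sketch, with `detectFlavor` as a hypothetical helper name:

```go
package main

import (
	"fmt"
	"strings"
)

// detectFlavor guesses the server flavor from the value returned by
// SELECT @@version. MariaDB version strings contain "MariaDB";
// anything else is treated as MySQL.
func detectFlavor(version string) string {
	if strings.Contains(strings.ToLower(version), "mariadb") {
		return "mariadb"
	}
	return "mysql"
}

func main() {
	fmt.Println(detectFlavor("5.5.5-10.1.21-MariaDB-1~jessie")) // mariadb
	fmt.Println(detectFlavor("5.7.20-log"))                     // mysql
}
```

In DM-worker, the detected value could fill in `flavor` only when the user leaves it empty, so an explicit configuration still wins.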