bersler / OpenLogReplicator
Open Source Oracle database CDC
Home Page: https://www.bersler.com
License: GNU General Public License v3.0
Is your feature request related to a problem? Please describe.
Replicate directly to RabbitMQ
Describe the solution you'd like
Connect directly to the target without any intermediate technology.
Describe alternatives you've considered
Replicate to Kafka and use RabbitMQ connector to further push transactions.
Additional context
This would decrease delay and remove an additional dependency.
Is your feature request related to a problem? Please describe.
For multi-tenant databases, OpenLogReplicator allows replication from just one PDB.
Describe the solution you'd like
Allow replication from multiple PDBs with one OpenLogReplicator instance.
Describe alternatives you've considered
Replication from multiple PDBs currently requires running many instances.
Additional context
Running multiple instances of OpenLogReplicator means that the same redo log is parsed multiple times.
When in type=offline mode, the tool only polls the archive log path once on startup and then doesn't see any new archived redo logs, because it always skips the current day.
With the following change it skips only previously seen days, not the current day. (I haven't tested this in type=online + arch=path mode.)
--- Replicator.cpp.orig 2023-07-09 14:40:19.291668291 +1000
+++ Replicator.cpp 2023-07-09 18:19:44.866635031 +1000
@@ -485,7 +485,7 @@
continue;
// Skip earlier days
- if (replicator->lastCheckedDay == ent->d_name)
+ if (!replicator->lastCheckedDay.empty() && replicator->lastCheckedDay > ent->d_name)
continue;
if (replicator->ctx->trace & TRACE_ARCHIVE_LIST)
Is your feature request related to a problem? Please describe.
When the source database contains transactions which include INSERT /*+ APPEND */, they are ignored by OpenLogReplicator.
Describe the solution you'd like
Such transactions could be fully supported.
Describe alternatives you've considered
Don't use direct-path insert; use a regular insert instead.
Additional context
Sometimes such transactions are generated by the application. It might not always be possible to change the application source code to avoid transactions using direct-path insert.
While building on Ubuntu (having installed the 19.3 instant client from the Oracle RPM with alien -i), OpenLogReplicator won't build without some manual intervention. Is this the standard Oracle client environment layout you expect:
<path_to_oracle_client>/sdk/include for the includes?
<path_to_oracle_client> for the libs?
Following the rpm installation the includes are in
/usr/include/oracle/19.3/client64/
and the link libraries are in
/usr/lib/oracle/19.3/client64/lib/
This can't be handled with a single with_instantclient value:
if test "${with_instantclient+set}" = set; then :
  withval=$with_instantclient;
  CPPFLAGS="-I$withval/sdk/include -DLINK_LIBRARY_OCI $CPPFLAGS";
  LDFLAGS="-L$withval -lclntshcore -lnnz19 -lclntsh $LDFLAGS"
fi
In the end I modified this line to the following (to get past the need to make symbolic links):
+ withval=$with_instantclient; CPPFLAGS="-I$withval -DLINK_LIBRARY_OCI $CPPFLAGS"; LDFLAGS="-L/usr/lib/oracle/19.3/client64/lib -lclntshcore -lnnz19 -lclntsh $LDFLAGS"
It would be nice if the configure script could handle a "semi" standard installation.
Adding a column to a table after starting the process, and then inserting into the table, causes the process to be terminated.
Error: signal 11
./OpenLogReplicator[0x41b5e1]
/lib64/libc.so.6(+0x363f0)[0x7f4f1ed0a3f0]
/lib64/libstdc++.so.6(_ZNSsC1ERKSs+0x1b)[0x7f4f1f678f7b]
./OpenLogReplicator[0x4093f0]
./OpenLogReplicator[0x43bf21]
./OpenLogReplicator[0x435a5c]
./OpenLogReplicator[0x43697a]
./OpenLogReplicator[0x427c0a]
./OpenLogReplicator[0x43984a]
/lib64/libpthread.so.0(+0x7ea5)[0x7f4f22a21ea5]
/lib64/libc.so.6(clone+0x6d)[0x7f4f1edd28cd]
Do you know any way to extract Oracle redo logs remotely?
Sometimes, when the UNDO record is earlier than the REDO record which is to be rolled back, a warning can occur:
WARNING: can't rollback transaction part, UBA: 0x00c0205c.03c2.17 DBA: 0xc0205c SLT: 30 RCI: 23 OPFLAGS: 0
In this situation some part of the transaction is not rolled back.
Is your feature request related to a problem? Please describe.
Replicate directly to Google Pub/Sub target.
Describe the solution you'd like
Connect directly to the target without any intermediate technology.
Describe alternatives you've considered
Replicate to Kafka and use Google Pub/Sub connector to further push transactions.
Additional context
This would decrease delay and remove an additional dependency.
When a large amount of SQL is written, after roughly tens of thousands of entries OpenLogReplicator stops working.
2021-07-08 14:34:16 [INFO] streaming to client
2021-07-08 14:34:16 [INFO] processing redo log: group: 1 scn: 212707636 to 0 seq: 1635 path: /data/oracle/oradata/orcl/redo04.log offset: 1024
2021-07-08 14:37:48 [INFO] processing redo log: group: 2 scn: 214834738 to 0 seq: 1636 path: /data/oracle/oradata/orcl/redo05.log offset: 1024
When I run in batch mode, I get the error in the title.
Here is my JSON config:
{
"version": "0.8.1",
"sources": [
{
"alias": "S1",
"name": "O112A",
"reader": {
"type": "batch",
"redo-logs": [
"/home/build/oracle-ha/OpenLogReplicator/archivedlog",
"/home/build/oracle-ha/OpenLogReplicator/redolog"
],
"log-archive-format": ""
},
"format": {
"type": "json"
},
"memory-min-mb": 64,
"memory-max-mb": 1024,
"tables": [
{
"table": "sys.zbbtest"
}
]
}
],
"targets": [
{
"alias": "K1",
"source": "S1",
"writer": {
"type": "file",
"name": "transactions.json"
}
}
]
}
Here is the raw error:
[build@build197 OpenLogReplicator]$ ./src/OpenLogReplicator
OpenLogReplicator v.0.8.1 (C) 2018-2021 by Adam Leszczynski ([email protected]), see LICENSE file for licensing information
Adding source: S1
Adding target: K1
INFO: Writer is starting: File:transactions.json
INFO: Oracle Analyzer for O112A in batch mode is starting (flags: 1) from NOW
INFO: last confirmed SCN: <none>
INFO: missing schema for O112A
ERROR: schema file missing - required for offline mode
INFO: Oracle analyzer for: O112A is shutting down
INFO: Oracle analyzer for: O112A is shut down, allocated at most 64MB memory, max disk read buffer: 0MB
INFO: Writer is stopping: File:transactions.json, max queue size: 0
Also, how do I create the xxx.schema.json file, and what should its content be?
Is your feature request related to a problem? Please describe.
Be able to support Oracle RAC - at least single node RAC.
Describe the solution you'd like
Be able to simultaneously read from multiple nodes of the cluster, merge the data, and send it to the output.
Describe alternatives you've considered
None exist.
Additional context
Currently RAC is not supported.
Is your feature request related to a problem? Please describe.
If you have a very big transaction, bigger than the amount of available memory, the program fails with an out-of-memory error.
Describe the solution you'd like
OpenLogReplicator could swap to disk when it runs out of memory instead of failing.
Describe alternatives you've considered
Introduce swap file and add virtual memory.
Additional context
The ability to swap to disk would allow setting a low memory boundary for OpenLogReplicator, reducing its memory impact on the OS.
When I compile, I get this error message:
Building file: ../src/CommandBuffer.cpp
Invoking: Cross G++ Compiler
g++ -std=c++1y -I/opt/instantclient_11_2/sdk/include -I/opt/rapidjson/include -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"src/CommandBuffer.d" -MT"src/CommandBuffer.o" -o "src/CommandBuffer.o" "../src/CommandBuffer.cpp"
cc1plus: error: unrecognized command line option "-std=c++1y"
make: *** [src/CommandBuffer.o] Error 1
My server environment is Red Hat Linux 6.8.
Can you provide the compilation requirements? Thank you!
Hi, @bersler .
Using the latest version of the program, it shows "ERROR: transaction does not exists in hash map1" and then exits.
By the way, using a fixed redo buffer size causes high memory usage.
Originally posted by @linanh in #4 (comment)
Issue Description
I built the debug version successfully, but it dumps core when I run it.
System
Linux t420 4.15.0-99-generic #100-Ubuntu SMP Wed Apr 22 20:32:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
GCC
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Oracle Client
instantclient_19_6
Oracle Server
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
Error log
Open Log Replicator v. 0.5.0 (C) 2018-2020 by Adam Leszczynski, [email protected], see LICENSE file for licensing information
Adding source: XE
- connecting to Oracle database XE
ERROR: 12162: =================================================================
==17745==ERROR: AddressSanitizer: attempting free on address which was not malloc()-ed: 0x608000000138 in thread T0
#0 0x7f89f4f692c0 in operator delete(void*) (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xe12c0)
#1 0x561cd9f1256f in OpenLogReplicator::OracleReader::checkConnection(bool) ../src/OracleReader.cpp:150
#2 0x561cd9f1922b in OpenLogReplicator::OracleReader::initialize() ../src/OracleReader.cpp:591
#3 0x561cd9eff3a6 in main ../src/OpenLogReplicator.cpp:164
#4 0x7f89efd76b96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
#5 0x561cd9ebff89 in _start (/home/philip/media/instantclient_19_6/OpenLogReplicator/Debug/OpenLogReplicator+0x1ff89)
0x608000000138 is located 24 bytes inside of 82-byte region [0x608000000120,0x608000000172)
allocated by thread T0 here:
#0 0x7f89f4f68448 in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xe0448)
#1 0x7f89f0431c28 in std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (/usr/lib/x86_64-linux-gnu/libstdc++.so.6+0xd3c28)
SUMMARY: AddressSanitizer: bad-free (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xe12c0) in operator delete(void*)
==17745==ABORTING
Can any fix be provided?
In OpenLogReplicator/src/KafkaWriter.cpp, line 240 (commit 8385580).
In OpenLogReplicator/src/writer/WriterFile.cpp, line 267 (commit cc363de):
It looks like the check for an existing output file was broken in commit 730bd18
Patch that gets it working on my machine:
--- WriterFile.cpp.orig 2023-07-09 14:40:19.293668300 +1000
+++ WriterFile.cpp 2023-07-09 16:41:14.144576432 +1000
@@ -264,7 +264,7 @@
// File is closed, open it
if (outputDes == -1) {
struct stat fileStat;
- if (stat(fullFileName.c_str(), &fileStat) != 0) {
+ if (stat(fullFileName.c_str(), &fileStat) == 0) {
// File already exists, append?
if (append == 0)
throw RuntimeException(10003, "file: " + fullFileName + " - stat returned: " + strerror(errno));
Thanks for all your hard work!
In DatabaseConnection.cpp:33, DatabaseConnection():
OCIHandleAlloc((dvoid*) env->envhp, (dvoid**) &errhp, OCI_HTYPE_ERROR, 0, nullptr);
OCIHandleAlloc((dvoid*) env->envhp, (dvoid**) &srvhp, OCI_HTYPE_SERVER, 0, nullptr);
OCIHandleAlloc((dvoid*) env->envhp, (dvoid**) &svchp, OCI_HTYPE_SVCCTX, 0, nullptr);
OCIHandleAlloc((dvoid*) env->envhp, (dvoid**) &authp, OCI_HTYPE_SESSION, 0, nullptr);
// if the server is offline, checkErr calls RUNTIME_FAIL and throws RuntimeException, but ~DatabaseConnection() is never called
// Ideally, every OCIHandleAlloc should be matched by one OCIHandleFree
env->checkErr(errhp, OCIServerAttach(srvhp, errhp, (const OraText*) server.c_str(), server.length(), OCI_DEFAULT));
env->checkErr(errhp, OCIAttrSet((dvoid*) svchp, OCI_HTYPE_SVCCTX, srvhp, 0, OCI_ATTR_SERVER, (OCIError*) errhp));
env->checkErr(errhp, OCIAttrSet((dvoid*) authp, OCI_HTYPE_SESSION, (dvoid*) user.c_str(), user.length(), OCI_ATTR_USERNAME, (OCIError*) errhp));
env->checkErr(errhp, OCIAttrSet((dvoid*) authp, OCI_HTYPE_SESSION, (dvoid*) password.c_str(), password.length(), OCI_ATTR_PASSWORD, (OCIError*) errhp));
In C++, when a constructor throws an exception, the destructor is not called, so the allocated handles leak.
In function KafkaWriter::parseDML:
From KafkaWriter.cpp#L495 to KafkaWriter.cpp#L504, the array is allocated according to the variable "redoLogRecord1->object->totalCols".
But in KafkaWriter.cpp#L828, the table of "redoLogRecord->next" may be different from the table of "redoLogRecord1". Then the program is terminated.
CREATE TABLE "TEST"."EMP1" ( "id" NUMBER, "name" VARCHAR2 ( 255 ), "salary" NUMBER, "will_renamed" NUMBER, "will_droped" NUMBER, "will_unused" NUMBER, PRIMARY KEY ( "id" ) );
ALTER TABLE "TEST"."EMP1" ADD ( "added_col" NUMBER ( 7, 2 ) );
ALTER TABLE "TEST"."EMP1" RENAME COLUMN "will_renamed" TO "renamed";
ALTER TABLE "TEST"."EMP1" DROP ( "will_droped" );
ALTER TABLE "TEST"."EMP1"
SET UNUSED ( "will_unused" );
INSERT INTO "TEST"."EMP1" ( "id", "name", "salary", "added_col", "renamed" ) SELECT ROWNUM
,
'Employee ' || to_char( ROWNUM ),
dbms_random.value ( 2, 9 ) * 1000,
147,
258
FROM
dual CONNECT BY LEVEL <= 100;
DELETE
FROM
"TEST"."EMP1"
WHERE
50 <= "id"
AND "id" <= 100;
UPDATE "TEST"."EMP1"
SET "salary" = 1234;
When analysing the redo log of table SYS.SMON_SCN_TIME, I find that:
1. By DBMS_LOGMNR, the table SYS.SMON_SCN_TIME has 9 columns.
2. By OpenLogReplicator, the table SYS.SMON_SCN_TIME has 8 columns.
Do you know the reason?
Is your feature request related to a problem? Please describe.
Very often a big table is replicated but contains just a few rows which are needed for replication. There is no filtering available, so currently when a table is chosen for replication, all transactions related to this table are sent to the output.
Describe the solution you'd like
A simple filter for one or many columns would allow to select such rows.
Describe alternatives you've considered
Filter on Kafka instead, but this requires unnecessarily processing a large volume of data.
Additional context
Filtering could be based on one, or many columns. Simple conditions for filters like AND and OR would be useful.
Is your feature request related to a problem? Please describe.
When the source database contains a long-running transaction (e.g. 24h+), recovery after a restart takes very long and requires starting to parse redo logs from a very old redo log file.
Describe the solution you'd like
Cache long-running transactions to disk, so that during recovery (after a forced reboot, for example) the program can restore information about the long-running transaction from disk and does not need to start reading from a very old redo log file.
Describe alternatives you've considered
None
Additional context
This feature might add requirement for disk space.
Is your feature request related to a problem? Please describe.
When memory is limited and the output sink is not available (network is down), all memory could be exhausted, resulting in suspension of the redo log parser. When this situation lasts longer, the source redo logs might be deleted, and after resuming they are no longer available.
Describe the solution you'd like
In case of low memory and slow progress of the output sink, the transaction stream could be buffered to disk.
Describe alternatives you've considered
None
Additional context
Disk space should be used for such buffering of redo log data.
Is your feature request related to a problem? Please describe.
Support old Oracle database versions like 8, 9, 10, 11.1
Describe the solution you'd like
Add ability for OpenLogReplicator to parse redo log produced by ancient Oracle databases.
Describe alternatives you've considered
Upgrade the database to at least 11.2 and run replication.
Additional context
Ancient Oracle databases have a very complex redo log format. Code which works with version 11.2 will not work with older versions; supporting them would require much work.
On the other hand, old versions are very seldom used, and for a few database instances it makes little sense to put a lot of effort into software development.
Hi, @bersler. The source code is missing OracleAnalyser.h and OracleAnalyser.cpp.
OpenLogReplicator, StreamClient
2021-07-06 09:56:59 [INFO] last confirmed scn: 205414729, starting sequence: 1631, offset: 0
2021-07-06 09:56:59 [INFO] found redo log version: 0x0b200000
2021-07-06 09:56:59 [INFO] streaming to client
2021-07-06 09:56:59 [INFO] processing redo log: group: 3 scn: 205290092 to 0 seq: 1631 path: /data/oracle/oradata/orcl/redo06.log offset: 1024
2021-07-06 09:56:59 [ERROR] signal 11
./src/OpenLogReplicator[0x42fa40]
/lib64/libc.so.6(+0x36450)[0x7f923f201450]
It shows signal 11
Is your feature request related to a problem? Please describe.
Currently all transactions are sent just to one Kafka topic.
Describe the solution you'd like
It would be useful to divide the load into multiple topics and send transactions to a certain topic based on table name or other conditions.
Describe alternatives you've considered
To divide the load, such a spread of transactions to multiple Kafka topics can be made as another level of transaction processing.
Additional context
Dividing transactions might mean that we lose transaction consistency.
Modifying part of a LOB with dbms_lob.write does not work. The information about the DML commands does not appear in the output.
If the amount of data exceeds 500,000, the program gets an error:
[ERROR] starting sequence if unknown, failing
The config JSON sets:
"type": "online",
and in the writer, "start-scn" is set.
The tag 0.9.32-beta works well.
The Start link on the OpenLogReplicator page is broken.
Is your feature request related to a problem? Please describe.
To initiate replication, the database must be available (either primary or standby) to extract a consistent copy of the system tables containing a copy of the schema at a certain starting time. This is very problematic.
Describe the solution you'd like
Add possibility to start replication with any given SCN (or point in time).
Describe alternatives you've considered
To do that, one would need to restore a copy of the database at a certain time (up-to-the-minute database restore), copy the schema, and use it as an initial schema for replication.
Additional context
This feature is not a must have to use replication but would certainly make replication configuration easier.
Can we read redo log files by passing them to OpenLogReplicator? Put another way: without adding the Oracle connection parameters to the JSON file, having only the redo logs, can we read them with OpenLogReplicator? This is because the redo logs are dumped to a shared drive and we do not have access to the Oracle servers.
I updated GCC to 4.9, but I still get an error:
[root@kafka1 OpenLogReplicator]# ./src/OpenLogReplicator
2022-05-23 05:49:48 [INFO] OpenLogReplicator v.0.9.41-beta (C) 2018-2022 by Adam Leszczynski ([email protected]), see LICENSE file for licensing information, linked modules: Kafka OCI
2022-05-23 05:49:48 [ERROR] binaries are build with no regex implementation, check if you have gcc version >= 4.9
[root@kafka1 OpenLogReplicator]# gcc --version
gcc (GCC) 4.9.0
Copyright © 2014 Free Software Foundation, Inc.
Is your feature request related to a problem? Please describe.
Many production databases use ASM.
Describe the solution you'd like
OpenLogReplicator could read directly from ASM instance.
Describe alternatives you've considered
The alternative is to set up a physical Data Guard using a filesystem and read from the copy.
Additional context
Hi @bersler:
The objectMap is sometimes erased twice, causing a crash.
See the code comments.
In Schema.cpp, function void Schema::rebuildMaps(void):
for (auto it = objectMap.cbegin(); it != objectMap.cend(); ) {
    OracleObject* object = it->second;
    if (object->user == user) {
        removeFromDict(object); // here, objectMap erases the object
        INFO("dropped schema: " << object->owner << "." << object->name << " (dataobj: " << std::dec << object->dataObj
                << ", obj: " << object->obj << ")");
        objectMap.erase(it++); // if it points to the object already erased by removeFromDict(), crash
        delete object;
    } else {
        ++it;
    }
}
In function void Schema::removeFromDict(OracleObject* object):
void Schema::removeFromDict(OracleObject* object) {
    if (objectMap.find(object->obj) == objectMap.end()) {
        CONFIG_FAIL("can't remove object (obj: " << std::dec << object->obj << ", dataObj: " << object->dataObj << ")");
    }
    objectMap.erase(object->obj); // erases the object
In function void Schema::addToDict(OracleObject* object):
    if (objectMap.find(object->obj) != objectMap.end()) {
        CONFIG_FAIL("can't add object (obj: " << dec << object->obj << ", dataObj: " << object->dataObj << ")");
    }
    objectMap[object->obj] = object; // adds the object
Is your feature request related to a problem? Please describe.
Start replication, then add tables from another schema (not present during startup) to the replicated set of tables.
Describe the solution you'd like
If the set of replicated tables changes, OpenLogReplicator would update the schema for the new user not present before and resume replication.
Describe alternatives you've considered
The current workaround is to stop replication, delete checkpoint files, and restart (collecting the whole schema again) with the new configuration.
Additional context
Might be very useful when the set of replicated schemas is not known during program startup.
The current implementation does not use memory buffers to allocate LOB data; instead, the heap pool is used. This leads to a situation where the total memory usage cannot be limited, and the reported memory usage does not include memory allocated for LOBs.
It would be best to use memory buffers for LOB data as well.
Is your feature request related to a problem? Please describe.
When the list of tables is updated, OpenLogReplicator needs to be restarted. Technically this should not be necessary: it could just reload the config file and continue replication with the changed set of tables.
Describe the solution you'd like
Track changes to the OpenLogReplicator.json config file. If the file is changed, it would be reloaded, and the list of tables (or maybe other configuration parameters too) would be updated with the new values.
Describe alternatives you've considered
Stop OpenLogReplicator. Update config. Start OpenLogReplicator.
Additional context
Stopping and starting again might be time-consuming when there are large transactions. This causes delay in the replication stream.
In Oracle 12c, this error is displayed: [ORA-00942: table or view does not exist]
2021-11-04 11:32:01 [INFO] version: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production, context: SJZQUERYPDB, resetlogs: 1056128228, activation: 2064052263, con_id: 5, con_name: SJZQUERYPDB
2021-11-04 11:32:01 [ERROR] OCI ERROR: [ORA-00942: table or view does not exist]
2021-11-04 11:32:01 [ERROR] HINT: run: ALTER SESSION SET CONTAINER = SJZQUERYPDB;
2021-11-04 11:32:01 [ERROR] HINT: run: GRANT SELECT, FLASHBACK ON SYS.USER$ TO empquery;
Do I need to specify other containers first?
Is your feature request related to a problem? Please describe.
Support columns like BLOB, CLOB, etc.
Describe the solution you'd like
Allow replicating values stored in BLOB and CLOB columns. Currently such columns are ignored.
Describe alternatives you've considered
Convert to VARCHAR2/BINARY when the column contains small values. But this is not possible when you need to store large values.
Additional context
A LOB column might be stored out of row or in row. The implementation varies for different versions of the database. All database versions and LOB types should be supported.
When I run OpenLogReplicator again, the error in the title appears.
Raw error:
[root@localhost OpenLogReplicator]# ./src/OpenLogReplicator
OpenLogReplicator v.0.8.1 (C) 2018-2021 by Adam Leszczynski ([email protected]), see LICENSE file for licensing information
Adding source: S1
Adding target: K1
INFO: connecting to Oracle instance of orcl to //172.16.146.14:1521/orcl
INFO: Writer is starting: File:transactions.json
INFO: checkpoint - reading scn: 2708181
INFO: version: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production, context: orcl, resetlogs: 1046100063, activation: 1573917661, con_id: 0, con_name: orcl
INFO: loading character mapping for AL32UTF8
INFO: loading character mapping for AL16UTF16
INFO: Oracle Analyzer for orcl in online mode is starting from SCN:2708181
INFO: last confirmed SCN: 2708181
INFO: reading schema for orcl
ERROR: parsing orcl-schema.json, field data-obj not found
INFO: Oracle analyzer for: orcl is shutting down
INFO: Oracle analyzer for: orcl is shut down, allocated at most 64MB memory, max disk read buffer: 0MB
INFO: Writer is stopping: File:transactions.json, max queue size: 0
Looking at the schema, does data-obj mean dataObj?
Schema file:
{
"obj": 511,
"dataObj": 511,
"clu-cols": 0,
"total-pk": 0,
"options": 0,
"max-seg-col": 1,
"owner": "SYS",
"name": "_default_auditing_options_",
"columns": [
{
"col-no": 1,
"guard-seg-no": -1,
"seg-col-no": 1,
"name": "A",
"type-no": 1,
"length": 1,
"precision": -1,
"scale": -1,
"num-pk": 0,
"charset-id": 873,
"nullable": 0,
"invisible": 0,
"stored-as-lob": 0,
"constraint": 0,
"added": 0,
"guard": 0
}
]
},
Is your feature request related to a problem? Please describe.
Replicate directly to Apache Flink.
Describe the solution you'd like
Connect directly to the target without any intermediate technology.
Describe alternatives you've considered
Replicate to Kafka and use Apache Flink connector to further push transactions.
Additional context
This would decrease delay and remove an additional dependency.
The process prints the error log:
ERROR: part transaction delete: not yet implemented
Below is the stack information:
(gdb) bt
#0 OpenLogReplicator::DatabaseEnvironment::checkErr (this=0x602000005070, errhp=0x62c00000d4c8, status=-1) at DatabaseEnvironment.cpp:84
#1 0x00000000006d2354 in OpenLogReplicator::DatabaseStatement::executeQuery (this=0x6070000d6fc0) at DatabaseStatement.cpp:67
#2 0x000000000071a67a in OpenLogReplicator::ReaderASM::redoRead (this=0x6120000133c0, buf=0x6250000a5400 '\276' <repeats 200 times>..., offset=1, size=512) at ReaderASM.cpp:137
#3 0x000000000071af41 in OpenLogReplicator::ReaderASM::reloadHeaderRead (this=0x6120000133c0) at ReaderASM.cpp:156
#4 0x000000000056d6bc in OpenLogReplicator::Reader::reloadHeader (this=0x6120000133c0) at Reader.cpp:261
#5 0x00000000005731b9 in OpenLogReplicator::Reader::run (this=0x6120000133c0) at Reader.cpp:416
#6 0x000000000068c9a8 in OpenLogReplicator::Thread::runStatic (context=0x6120000133c0) at Thread.cpp:39
#7 0x00007ffff15daea5 in start_thread () from /lib64/libpthread.so.0
#8 0x00007fffef9a58dd in clone () from /lib64/libc.so.6
(gdb) c
Continuing.
2021-11-04 19:23:32 [ERROR] OCI ERROR: [ORA-01000: maximum open cursors exceeded]
Is your feature request related to a problem? Please describe.
Add the ability to probe the OpenLogReplicator process for statistics about processed work.
Describe the solution you'd like
Introduce a service which would provide monitoring statistics like the number of processed transactions, used memory, biggest transaction, etc.
Describe alternatives you've considered
Track messages from the error log. But the provided information is not sufficient.
Additional context
Monitoring would provide a faster way of detecting potential errors and troubleshooting.
Hi,
Our Oracle has two archive (ARCH) threads. The archive file sequence is below.
thread_1_seq_50219.3285.1051090681
thread_2_seq_42952.3286.1051091023
thread_1_seq_50220.3287.1051091323
thread_2_seq_42953.3288.1051091637
thread_1_seq_50221.3289.1051091977
thread_2_seq_42954.3290.1051092267
But the SCN sequence is out of order.
So the checkpoint is of no use. I think we should have one checkpoint per thread.
The SCN sequence is in the attached xlsx file.