scalar-labs / scalardb
Universal transaction manager
Home Page: https://scalardb.scalar-labs.com/docs
License: Apache License 2.0
Is your feature request related to a problem? Please describe.
Not a major problem, but currently the test is done manually: #760 (comment)
Describe the solution you'd like
Add a test called from GitHub Actions.
Describe alternatives you've considered
Continue manual testing.
Additional context
Is your feature request related to a problem? Please describe.
I want to load data into ScalarDB version 3.0.0 using
https://github.com/scalar-labs/scalardb/tree/master/tools/data_loader
Currently, editing the following build.gradle locally to bump the ScalarDB version to 3.0.0 results in the compile errors below.
https://github.com/scalar-labs/scalardb/blob/master/tools/data_loader/build.gradle
Configure project :
The JavaApplication.setMainClassName(String) method has been deprecated. This is scheduled to be removed in Gradle 8.0. Use #getMainClass().set(...) instead. See https://docs.gradle.org/6.7.1/dsl/org.gradle.api.plugins.JavaApplication.html#org.gradle.api.plugins.JavaApplication:mainClass for more details.
at build_81mtyax5oyegbg0tco40liq3j$_run_closure3.doCall(/Users/yamaguchitmk019/Downloads/scalardb-3.0.0/tools/data_loader/build.gradle:31)
(Run with --stacktrace to get the full stack trace of this deprecation warning.)
> Task :compileJava FAILED
/Users/yamaguchitmk019/Downloads/scalardb-3.0.0/tools/data_loader/src/main/java/com/scalar/dataloader/ScalarDbRepository.java:103: error: unreported exception CrudException; must be caught or declared to be thrown
getTx().delete(delete);
^
/Users/yamaguchitmk019/Downloads/scalardb-3.0.0/tools/data_loader/src/main/java/com/scalar/dataloader/ScalarDbRepository.java:135: error: unreported exception CrudException; must be caught or declared to be thrown
getTx().put(put);
^
/Users/yamaguchitmk019/Downloads/scalardb-3.0.0/tools/data_loader/src/main/java/com/scalar/dataloader/ScalarDbRepository.java:208: error: unreported exception TransactionException; must be caught or declared to be thrown
.start();
^
3 errors
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':compileJava'.
> Compilation failed; see the compiler error output for details.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 1s
1 actionable task: 1 executed
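The compile errors are javac's standard "unreported exception … must be caught or declared to be thrown" diagnostics: these exceptions became checked on the called methods, so data_loader's calls now have to either declare them or catch and wrap them. A minimal self-contained sketch of both fixes, using a stand-in CrudException class rather than ScalarDB's real one:

```java
// Stand-in for ScalarDB's CrudException; only its checked nature matters here.
class CrudException extends Exception {
  CrudException(String message) { super(message); }
}

class Repository {
  // Simulates a CRUD call whose exception became checked in ScalarDB 3.0.0.
  void delete(String key) throws CrudException {
    if (key.isEmpty()) throw new CrudException("empty key");
  }

  // Fix 1: declare the checked exception and let callers handle it.
  void deleteDeclared(String key) throws CrudException {
    delete(key);
  }

  // Fix 2: catch the checked exception and wrap it in an unchecked one.
  void deleteWrapped(String key) {
    try {
      delete(key);
    } catch (CrudException e) {
      throw new RuntimeException("delete failed", e);
    }
  }
}

public class CheckedExceptionDemo {
  public static void main(String[] args) throws CrudException {
    Repository r = new Repository();
    r.deleteDeclared("id1"); // compiles: exception declared
    r.deleteWrapped("id2");  // compiles: exception caught and wrapped
  }
}
```

Either form would resolve the three errors at ScalarDbRepository.java:103, :135, and :208.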
Describe the solution you'd like
I want to load data into Amazon DynamoDB and MySQL using data_loader.
Describe alternatives you've considered
Additional context
Backport of #943 to branch(3.6) failed
Backport of #954 to branch(3.7) failed
Backport of #1037 to branch(3.10) failed
Backport of #934 to branch(3.9) failed
Backport of #1144 to branch(3.9) failed
CREATE TRANSACTION TABLE foo.foo (
id TEXT PARTITIONKEY,
value TEXT
);
id | value | tx_version (tx metadata) |
---|---|---|
foo | 2 | 1 |
Steps
Actual states:
id | value | tx_version |
---|---|---|
foo | 3 | 2 |
The above state can NOT be produced by T1, T2, and T3 in any serial order, which means the execution is not serializable:
T1->T3->T2 and T3->T1->T2 produce:
no record
T2->T1->T3 and T2->T3->T1 produce:
id | value | tx_version |
---|---|---|
foo | 2 | 2 |
id | value | tx_version |
---|---|---|
foo | 1 | 1 |
Is your feature request related to a problem? Please describe.
I want to define a schema in DynamoDB using the ScalarDB tool below, but a table with a certain pattern cannot be defined.
https://github.com/scalar-labs/scalardb/tree/master/tools/scalar-schema
When I tried to create a table with a partition key, no clustering key, and a secondary index in DynamoDB using scalar-schema, I got the following error.
./workflow_templates.json
2021-04-09 13:19:44,696 Exception in thread "main" software.amazon.awssdk.services.dynamodb.model.DynamoDbException: One or more parameter values were invalid: Some index key attributes are not defined in AttributeDefinitions. Keys: [deleted], AttributeDefinitions: [concatenatedPartitionKey] (Service: DynamoDb, Status Code: 400, Request ID: 810F9NAKCICUGV9JIB02EIGRFJVV4KQNSO5AEMVJF66Q9ASUAAJG, Extended Request ID: null)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:123)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:79)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:59)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:30)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:73)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:77)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:39)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:50)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:64)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:34)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:48)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:31)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:128)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:154)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:107)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:162)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:91)
at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:55)
at software.amazon.awssdk.services.dynamodb.DefaultDynamoDbClient.createTable(DefaultDynamoDbClient.java:1062)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:167)
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:102)
at scalar_schema.dynamo$create_table.invokeStatic(dynamo.clj:196)
at scalar_schema.dynamo$make_dynamo_operator$reify__3394.create_table(dynamo.clj:210)
at scalar_schema.operations$create_tables$fn__4030.invoke(operations.clj:23)
at clojure.core$map$fn__5866.invoke(core.clj:2755)
at clojure.lang.LazySeq.sval(LazySeq.java:42)
at clojure.lang.LazySeq.seq(LazySeq.java:51)
at clojure.lang.Cons.next(Cons.java:39)
at clojure.lang.RT.next(RT.java:713)
at clojure.core$next__5386.invokeStatic(core.clj:64)
at clojure.core$dorun.invokeStatic(core.clj:3142)
at clojure.core$doall.invokeStatic(core.clj:3148)
at scalar_schema.operations$create_tables.invokeStatic(operations.clj:19)
at scalar_schema.core$_main.invokeStatic(core.clj:35)
at scalar_schema.core$_main.doInvoke(core.clj:32)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at scalar_schema.core.main(Unknown Source)
The above error occurred when I ran scalar-schema with the following JSON data.
{
"keyspacename.workflow_templates": {
"transaction": true,
"partition-key": [
"template_id"
],
"columns": {
"template_id": "TEXT",
"template_name": "TEXT",
"template_desc": "TEXT",
"owner": "TEXT",
"members": "TEXT",
"status": "TEXT",
"created_at": "BIGINT",
"created_by": "TEXT",
"updated_at": "BIGINT",
"updated_by": "TEXT",
"template_detail_json": "TEXT",
"deleted": "BOOLEAN"
},
"secondary-index": [
"deleted"
]
}
}
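The DynamoDbException above names the root cause precisely: every key attribute of every index must also appear in AttributeDefinitions, and only concatenatedPartitionKey was declared, not the secondary-index key deleted. A self-contained sketch of the required check (method name hypothetical, not scalar-schema's actual code):

```java
import java.util.*;

public class AttributeDefinitionCheck {
  /**
   * Returns the index key attributes that DynamoDB's CreateTable would reject
   * because they are missing from AttributeDefinitions.
   */
  static Set<String> missingAttributeDefinitions(
      Set<String> attributeDefinitions, Set<String> indexKeyAttributes) {
    Set<String> missing = new TreeSet<>(indexKeyAttributes);
    missing.removeAll(attributeDefinitions);
    return missing;
  }

  public static void main(String[] args) {
    // What scalar-schema declared for the workflow_templates table ...
    Set<String> declared = Set.of("concatenatedPartitionKey");
    // ... and the key of the secondary index it also tried to create.
    Set<String> indexKeys = Set.of("deleted");
    // Matches the error: Keys: [deleted] are not in AttributeDefinitions.
    System.out.println(missingAttributeDefinitions(declared, indexKeys)); // prints: [deleted]
  }
}
```

The fix on scalar-schema's side would be to add each secondary-index key to AttributeDefinitions before calling CreateTable.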
Describe the solution you'd like
I want to create a table with a partition key, no clustering key, and a secondary index in DynamoDB using scalar-schema.
Describe alternatives you've considered
Additional context
Backport of #1144 to branch(3.8) failed
Is your feature request related to a problem? Please describe.
I believe ScalarDB has supported DynamoDB since version 3.0.0. I would like to use ScalarDB with DynamoDB Local as the backing store. When we develop as a team, we prepare test data to validate the code implemented in the local environment, and we want to work without mixing in test data used by other engineers on the same team. We are also concerned that storing development test data in AWS DynamoDB would incur AWS DynamoDB costs.
Is there any way to use ScalarDB with DynamoDB Local?
Describe the solution you'd like
I would like to use ScalarDB with DynamoDB Local as the backing store.
Describe alternatives you've considered
If there is no way to use ScalarDB with DynamoDB Local, I would consider the following:
- Unify the AWS DynamoDB tables used for development in the local environment across the team.
- Assign an AWS DynamoDB table to each engineer.
I would like to avoid both of these methods, however, because they incur AWS DynamoDB costs.
Additional context.
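For reference, recent ScalarDB versions expose an endpoint override in the DynamoDB adapter configuration, which is the usual way to point a DynamoDB client at DynamoDB Local. The property names below are taken from ScalarDB's DynamoDB configuration and should be verified against the docs for the version in use; the credentials and endpoint values are placeholders:

```properties
scalar.db.storage=dynamo
scalar.db.contact_points=us-west-2
scalar.db.username=fakeAccessKeyId
scalar.db.password=fakeSecretAccessKey
# Point the DynamoDB client at a local emulator instead of AWS.
scalar.db.dynamo.endpoint_override=http://localhost:8000
```

DynamoDB Local accepts any credentials, so no AWS account costs are incurred.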
The schema generator tool does not seem to parse the .sdbql files correctly. When I run the generator on the provided samples, I get the following errors:
$ ./generator sample_input1.sdbql out
generator: error: sample_input1.sdbql:4:8: unexpected "TRANSACTION" (expected "NAMESPACE")
$ ./generator sample_input2.sdbql out
generator: error: sample_input2.sdbql:5:8: unexpected "TABLE" (expected "NAMESPACE")
I assume that, similar to #19, this issue is due to the participle library being updated.
Backport of #1005 to branch(3.10) failed
Backport of #910 to branch(3.8) failed
Backport of #932 to branch(3.9) failed
Backport of #920 to branch(3.9) failed
Backport of #1144 to branch(3.10) failed
Backport of #909 to branch(3.9) failed
Backport of #919 to branch(3.7) failed
Describe the bug
I'm playing with https://github.com/scalar-labs/scalardb/blob/master/docs/getting-started-with-scalardb.md#store--retrieve-data. I might be missing something, but I noticed that a Put operation without a preceding Get operation for the same target record fails after the first run finishes successfully. It works when I fetch the target record using TransactionCrudOperable.get() before the Put operation.
To Reproduce
Steps to reproduce the behavior:
# The JDBC URL
scalar.db.contact_points=jdbc:postgresql://localhost:5432/
scalar.db.username=scalardb
scalar.db.password=xxxxxx
scalar.db.storage=jdbc
scalar.db.consensus_commit.isolation_level=SERIALIZABLE
public class ElectronicMoney implements Closeable {
:
public void set(String id, int amount) throws TransactionException {
DistributedTransaction tx = manager.start();
try {
Put put =
Put.newBuilder()
.namespace(NAMESPACE)
.table(TABLENAME)
.partitionKey(Key.ofText(ID, id))
.intValue(BALANCE, amount)
.build();
tx.put(put);
tx.commit();
} catch (Exception e) {
tx.abort();
throw e;
}
}
public static void main(String[] args) throws IOException, TransactionException {
try (ElectronicMoney emoney = new ElectronicMoney()) {
emoney.set("komamitsu", 42);
}
}
}
1. Run ElectronicMoney.main(). The first execution would finish successfully.
2. Run ElectronicMoney.main() again.
Expected behavior
The test code should finish successfully without any error even on the 2nd and later executions.
Error message
Exception in thread "main" com.scalar.db.exception.transaction.CommitConflictException: preparing record exists
at com.scalar.db.transaction.consensuscommit.CommitHandler.prepare(CommitHandler.java:63)
at com.scalar.db.transaction.consensuscommit.CommitHandler.commit(CommitHandler.java:42)
at com.scalar.db.transaction.consensuscommit.ConsensusCommit.commit(ConsensusCommit.java:121)
at ElectronicMoney.set(ElectronicMoney.java:47)
at ElectronicMoney.main(ElectronicMoney.java:56)
Caused by: com.scalar.db.exception.storage.NoMutationException: no mutation was applied
at com.scalar.db.storage.jdbc.JdbcDatabase.put(JdbcDatabase.java:111)
at com.scalar.db.storage.jdbc.JdbcDatabase.mutate(JdbcDatabase.java:151)
at com.scalar.db.transaction.consensuscommit.CommitHandler.lambda$prepareRecords$0(CommitHandler.java:81)
at com.scalar.db.transaction.consensuscommit.ParallelExecutor.executeTasks(ParallelExecutor.java:97)
at com.scalar.db.transaction.consensuscommit.ParallelExecutor.prepare(ParallelExecutor.java:52)
at com.scalar.db.transaction.consensuscommit.CommitHandler.prepareRecords(CommitHandler.java:83)
at com.scalar.db.transaction.consensuscommit.CommitHandler.prepare(CommitHandler.java:52)
... 4 more
PostgreSQL records
scalardb=> select * from coordinator.state ;
tx_id | tx_state | tx_created_at
--------------------------------------+----------+---------------
c9679069-e489-4b60-8d39-f799fbdba8d0 | 3 | 1662294830783
0ef1552e-2928-4e58-a33c-915340f26701 | 4 | 1662294975106
(2 rows)
scalardb=> select * from emoney.account ;
id | balance | tx_id | tx_state | tx_version | tx_prepared_at | tx_committed_at | before_tx_id | before_tx_state | before_tx_version | before_tx_prepared_at | before_tx_committed_at | before_balance
-----------+---------+--------------------------------------+----------+------------+----------------+-----------------+--------------+-----------------+-------------------+-----------------------+------------------------+----------------
komamitsu | 42 | c9679069-e489-4b60-8d39-f799fbdba8d0 | 3 | 1 | 1662294830519 | 1662294830787 | | | | | |
(1 row)
Desktop (please complete the following information):
BTW, this issue template doesn't seem to really fit this project (e.g. browser) ?
Backport of #1028 to branch(3.9) failed
Backport of #1005 to branch(3.9) failed
Describe the bug
With the EXTRA_READ strategy, scanning values after putting values doesn't return the written value, nor does it result in an error.
To Reproduce
Initial state: empty
1. Read key05 (returns empty).
2. Put 1 at key05.
3. Scan key01 to key10.
Expected behavior
The scan returns the value 1, or the commit results in an error.
Actual behavior
The scan returns an empty result and the commit succeeds.
Additional context
Linux and Cassandra 3 on Docker.
Possible fixes
Use the EXTRA_WRITE strategy, or treat it as a conflict if keys in the write set overlap the scanned range.
Backport of #931 to branch(3) failed
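The second suggested fix, detecting when keys in the write set overlap the scanned range, amounts to a range check. A self-contained sketch (class and method names are illustrative, not ScalarDB's actual code; assumes lexicographically ordered text keys):

```java
import java.util.*;

public class WriteSetOverlap {
  /** True if any written key falls within the scanned [start, end] range (inclusive). */
  static boolean overlapsScannedRange(Collection<String> writeSetKeys, String start, String end) {
    for (String key : writeSetKeys) {
      if (key.compareTo(start) >= 0 && key.compareTo(end) <= 0) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // Reproduction above: put at key05, then scan key01..key10.
    Set<String> writeSet = Set.of("key05");
    // The overlap means the scan result is stale, so the transaction should
    // raise a conflict instead of committing silently.
    System.out.println(overlapsScannedRange(writeSet, "key01", "key10")); // prints: true
  }
}
```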
Backport of #1005 to branch(3.7) failed
Backport of #1055 to branch(3.10) failed
Describe the bug
In log traces, ScalarDB prints ? characters instead of values, e.g.:
query to prepare : [UPDATE codingjedi.partitions_of_a_tag SET tx_id=?,tx_state=?,tx_prepared_at=?,partition_info=?,before_tx_prepared_at=?,before_tx_id=?,before_tx_state=?,before_tx_committed_at=?,before_tx_version=?,before_partition_info=?,tx_version=? WHERE tag=? IF tx_version=? AND tx_id=?;].
To Reproduce
Steps to reproduce the behavior:
Increase tracing level to highest and do Put operation
Expected behavior
The values should be printed
Desktop (please complete the following information):
Windows
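The ? characters are not lost values: they are the bind markers of a prepared statement, and the driver sends the bound values separately (the "X is bound to N" DEBUG lines elsewhere in these logs show them). A self-contained sketch of a display-only helper that interleaves a query with its bound values for tracing (helper name hypothetical; the substituted string is for humans, never for execution, since it does no SQL escaping):

```java
import java.util.*;

public class TraceFormat {
  /**
   * Substitutes bound values into a parameterized query string for logging only.
   * Values are naively single-quoted; a '?' inside a quoted literal would be
   * wrongly replaced, which is acceptable for a trace but not for execution.
   */
  static String withBoundValues(String query, List<Object> values) {
    StringBuilder sb = new StringBuilder();
    int valueIndex = 0;
    for (char c : query.toCharArray()) {
      if (c == '?' && valueIndex < values.size()) {
        sb.append('\'').append(values.get(valueIndex++)).append('\'');
      } else {
        sb.append(c);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    String q = "UPDATE t SET tx_id=? WHERE tag=?";
    // prints: UPDATE t SET tx_id='abc' WHERE tag='x'
    System.out.println(withBoundValues(q, List.of("abc", "x")));
  }
}
```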
Backport of #1004 to branch(3.6) failed
Backport of #1020 to branch(3.8) failed
It is easy to forget to add a necessary condition when adding support for a new RDB engine.
Use a strategy pattern, where an interface has detectDuplicateKeyError(e) and concrete classes implement the logic to check SQLSTATE, for example.
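The proposed strategy might look like the following self-contained sketch (interface and class names are illustrative; the error codes are the standard ones: PostgreSQL reports unique_violation as SQLSTATE 23505, and MySQL reports ER_DUP_ENTRY as vendor error code 1062):

```java
import java.sql.SQLException;

interface RdbEngineStrategy {
  // Each engine implements its own duplicate-key detection,
  // so adding a new engine forces this question to be answered.
  boolean detectDuplicateKeyError(SQLException e);
}

class PostgresEngine implements RdbEngineStrategy {
  @Override
  public boolean detectDuplicateKeyError(SQLException e) {
    return "23505".equals(e.getSQLState()); // unique_violation
  }
}

class MySqlEngine implements RdbEngineStrategy {
  @Override
  public boolean detectDuplicateKeyError(SQLException e) {
    return e.getErrorCode() == 1062; // ER_DUP_ENTRY
  }
}

public class StrategyDemo {
  public static void main(String[] args) {
    RdbEngineStrategy pg = new PostgresEngine();
    System.out.println(pg.detectDuplicateKeyError(new SQLException("duplicate key", "23505"))); // prints: true
  }
}
```

With this shape, a new engine that forgets to implement the method simply fails to compile, instead of silently missing a condition in a shared if-chain.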
Describe the bug
I am running a test case in which I get a record to check that it doesn't exist, then I put the record, then I get it to see it was successfully added, then I update it (by calling put again), and then I get it again to see that the value was updated. I get a "No Mutation applied" ERROR:
2020-09-25 12:11:15,225 [TRACE] from repository.AnswersTransactionRepository in ScalaTest-run-running-AllRepositorySpecs - putting answer Put{namespace=Optional[codingjedi], table=Optional[answer_by_user_id_and_question_id], partitionKey=Key{TextValue{name=answered_by_user, value=Optional[11111111-1111-1111-1111-111111111111]}, TextValue{name=question_id, value=Optional[11111111-1111-1111-1111-111111111111]}}, clusteringKey=Optional.empty, values={answer_id=TextValue{name=answer_id, value=Optional[11111111-1111-1111-1111-111111111111]}, image=TextValue{name=image, value=Optional[{"image":["image1binarydata","image2binarydata"]}]}, answer=TextValue{name=answer, value=Optional[{"answer":[{"filename":"c.js","answer":"some answer"}]}]}, creation_year=BigIntValue{name=creation_year, value=2019}, creation_month=BigIntValue{name=creation_month, value=12}, notes=TextValue{name=notes, value=Optional[some notesupdated]}}, consistency=SEQUENTIAL, condition=Optional[com.scalar.db.api.PutIfExists@2e057637]}
2020-09-25 12:11:15,225 [DEBUG] from com.scalar.db.storage.cassandra.Cassandra in ScalaTest-run-running-AllRepositorySpecs - executing batch-mutate operation with [Put{namespace=Optional[codingjedi], table=Optional[answer_by_user_id_and_question_id], partitionKey=Key{TextValue{name=answered_by_user, value=Optional[11111111-1111-1111-1111-111111111111]}, TextValue{name=question_id, value=Optional[11111111-1111-1111-1111-111111111111]}}, clusteringKey=Optional.empty, values={tx_id=TextValue{name=tx_id, value=Optional[5239a8db-07c9-4b9b-ba25-732875af2475]}, tx_state=IntValue{name=tx_state, value=1}, tx_prepared_at=BigIntValue{name=tx_prepared_at, value=1601032275225}, answer_id=TextValue{name=answer_id, value=Optional[11111111-1111-1111-1111-111111111111]}, image=TextValue{name=image, value=Optional[{"image":["image1binarydata","image2binarydata"]}]}, answer=TextValue{name=answer, value=Optional[{"answer":[{"filename":"c.js","answer":"some answer"}]}]}, creation_year=BigIntValue{name=creation_year, value=2019}, creation_month=BigIntValue{name=creation_month, value=12}, notes=TextValue{name=notes, value=Optional[some notesupdated]}, tx_version=IntValue{name=tx_version, value=1}}, consistency=LINEARIZABLE, condition=Optional[com.scalar.db.api.PutIfNotExists@21bf308]}]
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.Cassandra in ScalaTest-run-running-AllRepositorySpecs - executing put operation with Put{namespace=Optional[codingjedi], table=Optional[answer_by_user_id_and_question_id], partitionKey=Key{TextValue{name=answered_by_user, value=Optional[11111111-1111-1111-1111-111111111111]}, TextValue{name=question_id, value=Optional[11111111-1111-1111-1111-111111111111]}}, clusteringKey=Optional.empty, values={tx_id=TextValue{name=tx_id, value=Optional[5239a8db-07c9-4b9b-ba25-732875af2475]}, tx_state=IntValue{name=tx_state, value=1}, tx_prepared_at=BigIntValue{name=tx_prepared_at, value=1601032275225}, answer_id=TextValue{name=answer_id, value=Optional[11111111-1111-1111-1111-111111111111]}, image=TextValue{name=image, value=Optional[{"image":["image1binarydata","image2binarydata"]}]}, answer=TextValue{name=answer, value=Optional[{"answer":[{"filename":"c.js","answer":"some answer"}]}]}, creation_year=BigIntValue{name=creation_year, value=2019}, creation_month=BigIntValue{name=creation_month, value=12}, notes=TextValue{name=notes, value=Optional[some notesupdated]}, tx_version=IntValue{name=tx_version, value=1}}, consistency=LINEARIZABLE, condition=Optional[com.scalar.db.api.PutIfNotExists@21bf308]}
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.StatementHandler in ScalaTest-run-running-AllRepositorySpecs - query to prepare : [INSERT INTO codingjedi.answer_by_user_id_and_question_id (answered_by_user,question_id,tx_id,tx_state,tx_prepared_at,answer_id,image,answer,creation_year,creation_month,notes,tx_version) VALUES (?,?,?,?,?,?,?,?,?,?,?,?) IF NOT EXISTS;].
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.StatementHandler in ScalaTest-run-running-AllRepositorySpecs - there was a hit in the statement cache for [INSERT INTO codingjedi.answer_by_user_id_and_question_id (answered_by_user,question_id,tx_id,tx_state,tx_prepared_at,answer_id,image,answer,creation_year,creation_month,notes,tx_version) VALUES (?,?,?,?,?,?,?,?,?,?,?,?) IF NOT EXISTS;].
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[11111111-1111-1111-1111-111111111111] is bound to 0
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[11111111-1111-1111-1111-111111111111] is bound to 1
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[5239a8db-07c9-4b9b-ba25-732875af2475] is bound to 2
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - 1 is bound to 3
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - 1601032275225 is bound to 4
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[11111111-1111-1111-1111-111111111111] is bound to 5
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[{"image":["image1binarydata","image2binarydata"]}] is bound to 6
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[{"answer":[{"filename":"c.js","answer":"some answer"}]}] is bound to 7
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - 2019 is bound to 8
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - 12 is bound to 9
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - Optional[some notesupdated] is bound to 10
2020-09-25 12:11:15,226 [DEBUG] from com.scalar.db.storage.cassandra.ValueBinder in ScalaTest-run-running-AllRepositorySpecs - 1 is bound to 11
2020-09-25 12:11:15,241 [WARN] from com.scalar.db.transaction.consensuscommit.CommitHandler in ScalaTest-run-running-AllRepositorySpecs - preparing records failed
com.scalar.db.exception.storage.NoMutationException: no mutation was applied.
at com.scalar.db.storage.cassandra.MutateStatementHandler.handle(MutateStatementHandler.java:47)
at com.scalar.db.storage.cassandra.Cassandra.put(Cassandra.java:108)
To Reproduce
Steps to reproduce the behavior:
| answered_by_user text,
| question_id text,
| answer_id text,
| before_answer_id text,
| answer text,
| before_answer text,
| creation_month bigint,
| before_creation_month bigint,
| creation_year bigint,
| before_creation_year bigint,
| image text,
| before_image text,
| notes text,
| before_notes text,
| tx_id TEXT,
| before_tx_id TEXT,
| tx_prepared_at BIGINT,
| before_tx_prepared_at BIGINT,
| tx_committed_at BIGINT,
| before_tx_committed_at BIGINT,
| tx_state INT,
| before_tx_state INT,
| tx_version INT,
| before_tx_version INT,
| PRIMARY KEY ((answered_by_user, question_id))
|);
2. create three methods (add, get, update).
def update(transaction:DistributedTransaction, answer:AnswerOfAPracticeQuestion) = {
logger.trace(s"updating answer value ${answer}")
add(transaction,answer, new PutIfExists)
}
---------------
def add(transaction:DistributedTransaction,answer:AnswerOfAPracticeQuestion,mutationCondition:MutationCondition = new PutIfNotExists()) = {
logger.trace(s"adding answer ${answer} with mutation state ${mutationCondition}")
val pAnswerKey = new Key(new TextValue("answered_by_user", answer.answeredBy.get.answerer_id.toString),
new TextValue("question_id",answer.question_id.toString))
//to check duplication, both partition and clustering keys need to be present
//val cAnswerKey = new Key(new TextValue("answer_id",answer.answer_id.toString))
//logger.trace(s"created keys. ${pAnswerKey}, ${cAnswerKey}")
val imageData = answer.image.map(imageList=>imageList).getOrElse(List())
logger.trace(s"will check in ${keyspaceName},${tablename}")
val putAnswer: Put = new Put(pAnswerKey/*,cAnswerKey*/)
.forNamespace(keyspaceName)
.forTable(tablename)
.withCondition(mutationCondition)
.withValue(new TextValue("answer_id", answer.answer_id.get.toString))
.withValue(new TextValue("image", convertImageToString(imageData)))
.withValue(new TextValue("answer", convertAnswersFromModelToString(answer.answer)))
.withValue(new BigIntValue("creation_year", answer.creationYear.getOrElse(0)))
.withValue(new BigIntValue("creation_month", answer.creationMonth.getOrElse(0)))
.withValue(new TextValue("notes", answer.notes.getOrElse("")))
logger.trace(s"putting answer ${putAnswer}")
transaction.put(putAnswer)
}
-------------------
def get(transaction:DistributedTransaction,key:AnswerKeys):Either[AnswerNotFoundException,AnswerOfAPracticeQuestion] = {
//Create the transaction//Create the transaction
logger.trace("checking if answer exists for" + key);
//Perform the operations you want to group in the transaction
val pAnswerKey = new Key(new TextValue("answered_by_user",key.answerer_id.toString),
new TextValue("question_id",key.question_id.toString))
// val cAnswerKey = if(key.answer_id.isDefined) Some(new Key(new TextValue("answer_id",key.answer_id.get.toString))) else None
//logger.trace(s"created user keys ${pAnswerKey},${cAnswerKey}")
logger.trace(s"getting answer from ${keyspaceName}, ${tablename} using keys ${pAnswerKey}")
val get:Get = new Get(pAnswerKey/*,cAnswerKey.get*/)
.forNamespace(keyspaceName)
.forTable(tablename)
val result:Optional[Result] = transaction.get(get);
logger.trace(s"got result ${result}")
if(result.isPresent){
logger.trace(s"found answer ${result}")
//checktest-get an answer
Right(rowToModel(result))
} else {
//checktest-not get answer if answer doesn't exist
logger.error(s"Answer doesn't exist")
Left(AnswerNotFoundException())
}
}
3. Execute the get, add, get, update calls in order, as in the following test case:
"update an answer if the answer exists" in {
beforeEach()
embeddedCassandraManager.executeStatements(cqlStartupStatements) //set up tables
val cassandraConnectionService = CassandraConnectionManagementService() //set db connection
val (cassandraSession, cluster) = cassandraConnectionService.connectWithCassandra("cassandra://localhost:9042/codingjedi", "codingJediCluster")
//TODOM - pick the database and keyspace names from config file.
cassandraConnectionService.initKeySpace(cassandraSession.get, "codingjedi")
val transactionService = cassandraConnectionService.connectWithCassandraWithTransactionSupport("localhost", "9042", "codingJediCluster" /*,dbUsername,dbPassword*/)
val repository = new AnswersTransactionRepository("codingjedi", "answer_by_user_id_and_question_id")
val answerKey = AnswerKeys(repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.answer_id.get,
repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.question_id,
Some(repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.answer_id.get))
logger.trace(s"checking if answer already exists")
val distributedTransactionBefore = transactionService.get.start()
val resultBefore = repository.get(distributedTransactionBefore, answerKey)
distributedTransactionBefore.commit()
resultBefore.isLeft mustBe true
resultBefore.left.get.isInstanceOf[AnswerNotFoundException] mustBe true
logger.trace(s"no answer found. adding answer")
val distributedTransactionDuring = transactionService.get.start()
repository.add(distributedTransactionDuring, repoTestEnv.answerTestEnv.answerOfAPracticeQuestion)
distributedTransactionDuring.commit()
logger.trace(s"answer added")
val distributedTransactionAfter = transactionService.get.start()
val result = repository.get(distributedTransactionAfter, answerKey)
distributedTransactionAfter.commit()
result mustBe (Right(repoTestEnv.answerTestEnv.answerOfAPracticeQuestion))
logger.trace(s"got answer from repo ${result}")
val updatedNotes = if(repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.notes.isDefined)
Some(repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.notes.get+"updated") else Some("updated notes")
val updatedAnswer = repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.copy(notes=updatedNotes)
logger.trace(s"old notes ${repoTestEnv.answerTestEnv.answerOfAPracticeQuestion.notes} vs new notes ${updatedNotes}")
logger.trace(s"updated answer ${updatedAnswer}")
val distributedTransactionForUpdate = transactionService.get.start()
val resultOfupdate = repository.update(distributedTransactionForUpdate,updatedAnswer)
distributedTransactionForUpdate.commit()
logger.trace(s"update done. getting answer again")
val distributedTransactionAfterUpdate = transactionService.get.start()
val resultAfterUpdate = repository.get(distributedTransactionAfterUpdate, answerKey)
distributedTransactionForUpdate.commit()
resultAfterUpdate mustBe (Right(updatedAnswer))
logger.trace(s"got result after update ${resultAfterUpdate}")
afterEach()
}
**Expected behavior**
The update should be successful.
**Desktop (please complete the following information):**
scala, scalar db
A record was unexpectedly committed by a Put that has a clustering-key column as a value.
This commit should fail because the Cassandra driver throws InvalidQueryException.
This issue happens when the record is the initial record for its primary key: a Put used for insertion is not checked by the driver for whether it has the correct keys.
When the insertion succeeds, the preparation phase of the transaction also succeeds.
The commit of the transaction for the record then fails because of the lack of clustering keys, but the record is eventually committed anyway because the record commit phase runs after the transaction state has been updated.
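The ordering problem described above can be sketched with a self-contained toy model (hypothetical names, not the actual ConsensusCommit implementation): once the transaction state has been flipped to COMMITTED, a failure in the later record commit phase is too late to abort the transaction.

```java
public class CommitOrderDemo {
    // Simulates the phase ordering described above: prepare, then the state
    // update, then the per-record commit. A record-commit failure after the
    // state update cannot roll the transaction back.
    static String run(boolean recordCommitFails) {
        boolean prepared = true;      // the invalid insert passed the prepare phase
        String txState = "PREPARED";
        if (prepared) {
            txState = "COMMITTED";    // state update happens first
        }
        // The record commit runs last; a failure here is too late to abort.
        boolean recordWritten = !recordCommitFails;
        if (txState.equals("COMMITTED") && !recordWritten) {
            return "COMMITTED (record eventually committed by recovery)";
        }
        return txState;
    }

    public static void main(String[] args) {
        System.out.println(run(true)); // prints COMMITTED (record eventually committed by recovery)
    }
}
```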
Backport of #1005 to branch(3.8) failed
Backport of #943 to branch(3.8) failed
Backport of #1020 to branch(3.7) failed
Backport of #1005 to branch(3.6) failed
Backport of #1140 to branch(3.7) failed
CREATE TRANSACTION TABLE foo.foo (
id TEXT PARTITIONKEY,
sub_id TEXT CLUSTERINGKEY,
value INT
);
Initial state: no records
Steps
1. T1 scans the records with foo as a partition key and calculates a sum of value (name the sum A). A is 0 since the table is empty.
2. T2 scans the records with foo as a partition key and calculates a sum of value (name the sum B). B is 0 since the table is empty.
3. T1 puts a record (foo, t1) with value A + 1 = 1 and commits.
4. T2 puts a record (foo, t2) with value B + 1 = 1 and commits.
Actual state:
id | sub_id | value |
---|---|---|
foo | t1 | 1 |
foo | t2 | 1 |
The above state can NOT be produced either by executing T1 then T2 or by executing T2 then T1, as shown below, which means the execution is not serializable.
T1 then T2:
id | sub_id | value |
---|---|---|
foo | t1 | 1 |
foo | t2 | 2 |
T2 then T1:
id | sub_id | value |
---|---|---|
foo | t1 | 2 |
foo | t2 | 1 |
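The interleaving above can be reproduced with a small self-contained simulation (plain Java maps stand in for the table; this is not the ScalarDB API): both transactions read the sum before either writes, so both write 1.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AnomalyDemo {
    // Sum of the "value" column over the partition.
    static int sum(Map<String, Integer> rows) {
        return rows.values().stream().mapToInt(Integer::intValue).sum();
    }

    // Both transactions read before either writes (the reported interleaving).
    static Map<String, Integer> interleaved() {
        Map<String, Integer> rows = new LinkedHashMap<>();
        int a = sum(rows);      // T1 reads A = 0
        int b = sum(rows);      // T2 reads B = 0
        rows.put("t1", a + 1);  // T1 writes A + 1 = 1
        rows.put("t2", b + 1);  // T2 writes B + 1 = 1
        return rows;
    }

    // Serial execution: T1 commits before T2 starts.
    static Map<String, Integer> serial() {
        Map<String, Integer> rows = new LinkedHashMap<>();
        rows.put("t1", sum(rows) + 1);  // t1 = 1
        rows.put("t2", sum(rows) + 1);  // t2 = 2
        return rows;
    }

    public static void main(String[] args) {
        System.out.println(interleaved()); // {t1=1, t2=1} -- the anomalous state
        System.out.println(serial());      // {t1=1, t2=2} -- a serializable outcome
    }
}
```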
Backport of #954 to branch(3.8) failed
Backport of #954 to branch(3.6) failed
Describe the bug
A typecast issue occurs in Scalar DB while reading data from Cosmos DB.
To Reproduce
Steps to reproduce the behavior:
Click the "Submit a Question" button. The message "An error occurred while looking up the question" appears after creating the question.
Expected behavior
Read data from cosmos db using scalardb without any error.
Screenshots
Typecast issue exists while reading data from cosmos db.
Backend Details
Additional context
Issue related logs
2020-10-08 23:12:59.355 ERROR 8243 --- [0.1-8090-exec-3] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long] with root cause
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
at com.scalar.db.storage.cosmos.ResultImpl.convert(ResultImpl.java:142) ~[scalardb-2.2.0.jar:na]
at com.scalar.db.storage.cosmos.ResultImpl.add(ResultImpl.java:120) ~[scalardb-2.2.0.jar:na]
at com.scalar.db.storage.cosmos.ResultImpl.lambda$interpret$0(ResultImpl.java:101) ~[scalardb-2.2.0.jar:na]
at com.google.common.collect.ImmutableSortedMap.forEach(ImmutableSortedMap.java:588) ~[guava-24.1-jre.jar:na]
at java.util.Collections$UnmodifiableMap.forEach(Collections.java:1505) ~[na:1.8.0_152-ea]
at com.scalar.db.storage.cosmos.ResultImpl.interpret(ResultImpl.java:99) ~[scalardb-2.2.0.jar:na]
at com.scalar.db.storage.cosmos.ResultImpl.<init>(ResultImpl.java:44) ~[scalardb-2.2.0.jar:na]
at com.scalar.db.storage.cosmos.ScannerIterator.next(ScannerIterator.java:34) ~[scalardb-2.2.0.jar:na]
at com.scalar.db.storage.cosmos.ScannerIterator.next(ScannerIterator.java:9) ~[scalardb-2.2.0.jar:na]
at com.example.qa.dao.question.QuestionDao.scan(QuestionDao.java:49) ~[main/:na]
at com.example.qa.service.question.QuestionServiceForStorage.get(QuestionServiceForStorage.java:130) ~[main/:na]
at com.example.qa.controller.question.QuestionController.getQuestions(QuestionController.java:54) ~[main/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_152-ea]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_152-ea]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_152-ea]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_152-ea]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) ~[spring-web-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:877) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:783) ~[spring-webmvc-5.0.7.RELEASE.jar:5.0.7.RELEASE]
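The root cause shown in the log is a direct cast of a boxed Integer to Long, which always throws ClassCastException; widening through Number avoids it. A self-contained reproduction (plain Java, independent of ScalarDB; the Integer stands in for a small numeric value read back from Cosmos DB):

```java
public class CastDemo {
    // Direct cast: fails whenever the stored object is an Integer.
    static boolean directCastFails(Object stored) {
        try {
            Long unused = (Long) stored;
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    // Widening through Number works for any boxed numeric type.
    static long widen(Object stored) {
        return ((Number) stored).longValue();
    }

    public static void main(String[] args) {
        Object stored = Integer.valueOf(42);
        System.out.println(directCastFails(stored)); // true
        System.out.println(widen(stored));           // 42
    }
}
```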
Scalar DB Schema Tools cannot be built with make.
The reason is that the interface of the parser library used by the tool has changed: participle.UseLookahead now requires an int argument, but the tool calls it with no arguments, so an error occurs at make time.
■Error detail
$ cd tools/schema
$ make
go get github.com/alecthomas/kingpin
go get github.com/alecthomas/participle
- _/home/ec2-user/scalardb/tools/schema/internal/parser
internal/parser/parser.go:42:68: not enough arguments in call to participle.UseLookahead
have ()
want (int)
make: *** Error 2
■Parser Tool
https://github.com/alecthomas/participle
Backport of #1020 to branch(3.6) failed
Is your feature request related to a problem? Please describe.
For example, given a table like the following,
id | name | tel |
---|---|---|
we want to load data using an INSERT statement that supplies values for only id and name. Currently, data must be prepared for every column defined in the table.
Describe the solution you'd like
When loading data into a table using data_loader, we want to be able to INSERT records even when only some of the columns have values.
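In SQL terms, the requested behavior corresponds to an INSERT that names only a subset of columns (a sketch; data_loader's actual input format may differ, and the table name here is illustrative):

```sql
-- Hypothetical sketch: supply only id and name; tel is left unset.
INSERT INTO example_table (id, name) VALUES ('1', 'Alice');
```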
Backport of #1004 to branch(3.7) failed
Backport of #943 to branch(3.7) failed
Describe the bug
DynamoAdmin.namespaceExists() checks only the prefixes of namespaces.
scalardb/core/src/main/java/com/scalar/db/storage/dynamo/DynamoAdmin.java
Lines 1145 to 1146 in 4f6113f
To Reproduce
Expected behavior
The test above passes (but fails at 2nd assertion).
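The pitfall can be illustrated with a self-contained sketch (a hypothetical helper; the real DynamoAdmin logic differs): a bare prefix match reports a namespace as existing whenever a longer namespace shares the prefix, while checking up to the "." separator avoids the false positive.

```java
import java.util.List;

public class PrefixCheckDemo {
    // Buggy check: any full table name starting with the namespace string matches.
    static boolean existsByPrefix(List<String> fullTableNames, String namespace) {
        return fullTableNames.stream().anyMatch(t -> t.startsWith(namespace));
    }

    // Safer check: require the namespace to end at the '.' separator.
    static boolean existsExact(List<String> fullTableNames, String namespace) {
        return fullTableNames.stream().anyMatch(t -> t.startsWith(namespace + "."));
    }

    public static void main(String[] args) {
        List<String> tables = List.of("ns1.tbl");
        System.out.println(existsByPrefix(tables, "ns"));  // true  (false positive: "ns" does not exist)
        System.out.println(existsExact(tables, "ns"));     // false (correct)
        System.out.println(existsExact(tables, "ns1"));    // true  (correct)
    }
}
```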
Backport of #975 to branch(3.9) failed
Backport of #1140 to branch(3.6) failed
Is your feature request related to a problem? Please describe.
Suppose a program executes a Put request targeting a Cassandra table named FooTable using the transaction mode. When Scalar DB fails to execute the Put because FooTable was not created in Cassandra, the stack trace looks like this:
no table information found
com.scalar.db.exception.storage.StorageRuntimeException: no table information found
at com.scalar.db.storage.cassandra.ClusterManager.getMetadata(ClusterManager.java:83)
at com.scalar.db.storage.cassandra.Cassandra.getTableMetadata(Cassandra.java:185)
at com.scalar.db.storage.cassandra.Cassandra.get(Cassandra.java:79)
at com.scalar.db.transaction.consensuscommit.RollbackMutationComposer.getLatestResult(RollbackMutationComposer.java:145)
at com.scalar.db.transaction.consensuscommit.RollbackMutationComposer.add(RollbackMutationComposer.java:60)
at com.scalar.db.transaction.consensuscommit.Snapshot.lambda$to$0(Snapshot.java:126)
at java.util.concurrent.ConcurrentHashMap$EntrySetView.forEach(ConcurrentHashMap.java:4795)
at com.scalar.db.transaction.consensuscommit.Snapshot.to(Snapshot.java:122)
at com.scalar.db.transaction.consensuscommit.RecoveryHandler.rollback(RecoveryHandler.java:58)
at com.scalar.db.transaction.consensuscommit.CommitHandler.commit(CommitHandler.java:44)
at com.scalar.db.transaction.consensuscommit.ConsensusCommit.commit(ConsensusCommit.java:121)
at com.scalar.ist.api.ScalarDBTest.put(ScalarDBTest.java:71)
...
Since the stack trace originates from the faulty user code, I tend to think that FooTable is missing, which is correct in that case. But consider the case where FooTable was created and the coordinator.state table is missing instead. The exception then looks like this:
no table information found
com.scalar.db.exception.storage.StorageRuntimeException: no table information found
at com.scalar.db.storage.cassandra.ClusterManager.getMetadata(ClusterManager.java:83)
at com.scalar.db.storage.cassandra.Cassandra.getTableMetadata(Cassandra.java:185)
at com.scalar.db.storage.cassandra.Cassandra.checkIfPrimaryKeyExists(Cassandra.java:191)
at com.scalar.db.storage.cassandra.Cassandra.put(Cassandra.java:107)
at com.scalar.db.transaction.consensuscommit.Coordinator.put(Coordinator.java:101)
at com.scalar.db.transaction.consensuscommit.Coordinator.putState(Coordinator.java:49)
at com.scalar.db.transaction.consensuscommit.CommitHandler.commitState(CommitHandler.java:114)
at com.scalar.db.transaction.consensuscommit.CommitHandler.commit(CommitHandler.java:62)
at com.scalar.db.transaction.consensuscommit.ConsensusCommit.commit(ConsensusCommit.java:121)
at com.scalar.ist.api.ScalarDBTest.put(ScalarDBTest.java:71)
...
The exception message is identical to the missing-FooTable case, and the stack trace only partly differs, so it is not obvious that the coordinator.state table is the missing one rather than FooTable.
I happened to spend some time trying to understand what was wrong because I was wrongly convinced that Scalar DB could not find FooTable for some reason, even though the coordinator table was the one missing.
I guess new Scalar DB users who are not yet fully aware that the coordinator table is required may be quite confused as well.
Describe the solution you'd like
The exception message below could be updated to include the missing table's keyspace and name:
com.scalar.db.exception.storage.StorageRuntimeException: no table information found
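A minimal sketch of the suggested message (a hypothetical helper, not the actual StorageRuntimeException construction in ScalarDB): including the fully qualified table name makes a missing coordinator.state table distinguishable from a missing user table.

```java
public class MissingTableMessageDemo {
    // Build the improved message with the fully qualified table name.
    static String noTableInformationFound(String keyspace, String table) {
        return "no table information found: " + keyspace + "." + table;
    }

    public static void main(String[] args) {
        System.out.println(noTableInformationFound("coordinator", "state"));
        // no table information found: coordinator.state
    }
}
```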