try_git's Issues
backup-restore : failure_durring_restore_no_corrupt_data
Check that we can recover from a failure during restore
1. Use a single node and create a keyspace + table
2. Insert data
3. Create snapshot and save files
4. Drop keyspace
5. Create keyspace + table + populate different data
6. Start restore data
7. Kill node
8. Start node
9. Check that the non-restored data still exists
10. Check if any restored data exists
11. Restore data
12. Check that all data exists
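The steps above can be sketched as a toy model (not the real test harness): a "node" holds rows keyed by id, a snapshot is a saved copy of the sstable files, and a restore copies files back one at a time. All names and data here are made up for illustration.

```python
snapshot = [{"k1": "old1"}, {"k2": "old2"}]   # saved sstable files (step 3)
node = {"k3": "new3"}                          # data written after the drop (step 5)

def restore(files, node, fail_after=None):
    """Copy snapshot files into the node; optionally 'crash' partway through."""
    for i, sstable in enumerate(files):
        if fail_after is not None and i >= fail_after:
            return False                       # node killed mid-restore (step 7)
        node.update(sstable)
    return True

# Interrupted restore: only part of the snapshot landed, nothing is corrupt.
assert not restore(snapshot, node, fail_after=1)
assert node["k3"] == "new3"                    # non-restored data survives (step 9)
assert node["k1"] == "old1" and "k2" not in node  # partial restore only (step 10)

# Re-running the restore to completion makes all data visible (steps 11-12).
assert restore(snapshot, node)
assert node == {"k1": "old1", "k2": "old2", "k3": "new3"}
```

The key property being tested is that a half-finished restore leaves the node in a consistent, re-restorable state rather than a corrupt one.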
repair : repair_while_node_is_decomissioned_test
Check that repair is accomplished while a node is decommissioned
1. Create a cluster of 3 nodes with rf=2
2. Stop node 2
3. Insert data
4. Start node 2
5. Start repair
6. Decommission node 3
7. Stop node 1 - check that node 2 holds all the data
repair : repair_while_new_node_is_added_test
Check that repair is accomplished while a new node is added
1. Create a cluster of 2 nodes with rf=2
2. Stop node 2
3. Insert data
4. Start node 2
5. Start repair
6. Create a new node and start it
repair : test keyspace, column family parameter
repair : fail_node_responding_to_repair_test
Check that killing a node that is responding to repair does not cause additional failures
1. Create a cluster of 2 nodes with rf=2
2. Stop node 2
3. Insert data
4. Start node 2
5. In a loop
a. Start node 1 if it is down
b. Start repair
c. Kill node 1
d. Check that the cluster is available (reads/writes)
repair : repair_fixes_updates_to_cells_test
Check that repair fixes a few update to cells contents
1. Create a cluster of 2 nodes with rf=2, disable read_repair, hinted_handoff
2. Insert data
3. Shutdown node 2
4. Update some cells
5. Start node 2
6. Run repair on node 2
7. Shutdown node 1 - check that all data exists on node 2
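A minimal sketch of why the repair in the steps above converges: each cell carries a write timestamp, and repair keeps the newest version per cell. This models only the data flow, not the database's actual Merkle-tree repair protocol; keys and values are made up.

```python
def repair(a, b):
    """Merge two replicas cell-by-cell, last-write-wins by timestamp."""
    merged = dict(a)
    for key, (value, ts) in b.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged

node1 = {"x": ("v1", 1), "y": ("v1", 1)}       # both replicas start in sync
node2 = dict(node1)                            # node 2 goes down (step 3)
node1["x"] = ("v2", 2)                         # update lands only on node 1 (step 4)

node2 = repair(node2, node1)                   # node 2 comes back and repairs
assert node2["x"] == ("v2", 2)                 # the newer update won
assert node2 == node1                          # node 2 now holds all the data (step 7)
```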
repair : test dc awareness
repair : test partitioner range option
repair : repair_fixes_deletion_of_range_of_cells_test
Check that repair fixes a deletion of cell range
1. Create a cluster of 2 nodes with rf=2, disable read_repair, hinted_handoff
2. Insert data
3. Shutdown node 2
4. Delete a range of some cells
5. Start node 2
6. Run repair on node 2
7. Shutdown node 1 - check that all data exists on node 2
backup-restore : restore_snapshot_using_old_schema
Check that we can restore snapshot files that use old schema
1. Use a single node and create a keyspace + table
2. Insert data
3. Create snapshot and save files
4. Drop keyspace
5. Create keyspace + table
6. Alter table
7. Restore data
8. Check that all data exists
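The schema-evolution expectation above can be sketched as follows: rows restored from a pre-ALTER snapshot simply have no value for the added column, so reads return null for it. The column names here are hypothetical, for illustration only.

```python
old_rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]  # snapshot data (step 3)
new_schema = ["id", "name", "email"]           # table after an assumed ALTER ... ADD email

def read(row, schema):
    # Missing columns surface as None, mirroring CQL's null for absent cells.
    return {col: row.get(col) for col in schema}

restored = [read(r, new_schema) for r in old_rows]
assert all(r["email"] is None for r in restored)   # the new column reads as null
assert [r["id"] for r in restored] == [1, 2]       # original data is intact (step 8)
```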
repair : repair_of_cluster_all_nodes_are_out_of_sync
Check that repair fixes all inconsistencies in data
1. Create a cluster of 3 nodes with rf=3, disable read_repair, hinted_handoff
2. Shutdown node 2,3
3. Insert data set A
4. Start node 2 Shutdown node 1
5. Insert data set B (A != B)
6. Start node 3 Shutdown node 2
7. Insert data set C (A != B, B != C, A != C)
8. Start node 1,2
9. Run repair on node 3
10. Shutdown node 1,2 check all data on node 3
11. Shutdown node 1,3 check all data on node 2
12. Shutdown node 2,3 check all data on node 1
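The three-way divergence above can be modeled as a toy simulation: each node missed a different write window, and a full repair is the last-write-wins merge of all replicas. Timestamps stand in for the real write clocks; data sets are made up.

```python
def merge(*replicas):
    """Last-write-wins merge across any number of replicas."""
    out = {}
    for rep in replicas:
        for key, (val, ts) in rep.items():
            if key not in out or ts > out[key][1]:
                out[key] = (val, ts)
    return out

node1 = {"a": ("A", 1)}                        # only node 1 saw data set A (step 3)
node2 = {"b": ("B", 2)}                        # only node 2 saw data set B (step 5)
node3 = {"c": ("C", 3)}                        # only node 3 saw data set C (step 7)

repaired = merge(node1, node2, node3)          # repair on node 3 (step 9)
node1 = node2 = node3 = repaired
# Any single surviving node now answers for all three data sets (steps 10-12).
assert set(node3) == {"a", "b", "c"}
```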
repair : repair_while_nodes_are_down_2_test
Check that repair is able to complete if one replica of data exists
1. Create a cluster of 4 nodes with rf=3
2. Stop node 2
3. Insert data
4. Stop node 3
5. Start node 2
6. Start repair on node 2
7. Check that the repair is successful
repair : test keyspace parameter
repair : full_repair_of_node_initiated_on_node_without_data
Check that repair transfers all the data in case none exists
1. Create a cluster of 2 nodes with rf=2, disable read_repair, hinted_handoff
2. Shutdown node 2 (prior to schema creation)
3. Insert data
4. Start node 2
5. Run repair on node 2
6. Shutdown node 1 - check that all data exists on node 2
repair : repair_fixes_remove_of_keys_test
Check that repair fixes a few removed keys
1. Create a cluster of 2 nodes with rf=2, disable read_repair, hinted_handoff
2. Insert data
3. Shutdown node 2
4. Remove some keys
5. Start node 2
6. Run repair on node 2
7. Shutdown node 1 - check that all data exists on node 2
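A sketch of why the removal in step 4 propagates: a plain merge of live cells would resurrect the removed keys from node 2, so deletes are modeled as tombstones (a None value with a timestamp) that win over older live cells. This is a simplified illustration of tombstone semantics, not the real implementation.

```python
def repair(a, b):
    """Last-write-wins merge; a (None, ts) tombstone beats an older live cell."""
    merged = dict(a)
    for key, (val, ts) in b.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (val, ts)
    return merged

node1 = {"k1": ("v", 1), "k2": ("v", 1)}
node2 = dict(node1)                            # node 2 shut down (step 3)
node1["k1"] = (None, 2)                        # remove key = tombstone on node 1 (step 4)

node2 = repair(node2, node1)                   # repair on node 2 (step 6)
assert node2["k1"] == (None, 2)                # the removal reached node 2
assert node2["k2"] == ("v", 1)                 # untouched keys survive (step 7)
```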
repair : repair_while_nodes_are_down_1_test
Check that repair is not able to complete if no replicas for data exist
1. Create a cluster of 3 nodes with rf=2
2. Stop node 2
3. Insert data
4. Stop node 3
5. Start node 2
6. Start repair on node 2
7. Check whether the repair succeeds
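The rule that both repair_while_nodes_are_down tests probe can be sketched as: repair of a token range can only proceed if at least one live node still holds a replica of that range. The replica placements below are made up for illustration, not the cluster's actual token assignment.

```python
def can_repair(range_replicas, down):
    """A range is repairable if any of its replicas is still up."""
    return any(n not in down for n in range_replicas)

# rf=2 on 3 nodes: suppose some range is replicated only on nodes 1 and 3.
assert not can_repair({1, 3}, down={1, 3})     # no live replica -> repair cannot complete
# rf=3 on 4 nodes: every range keeps a live replica even with 2 nodes down.
assert can_repair({1, 2, 3}, down={1, 3})      # node 2 still holds the range
```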
repair : repair_fixes_deletion_of_cells_test
Check that repair fixes a deletion of cells
1. Create a cluster of 2 nodes with rf=2, disable read_repair, hinted_handoff
2. Insert data
3. Shutdown node 2
4. Delete some cells
5. Start node 2
6. Run repair on node 2
7. Shutdown node 1 - check that all data exists on node 2
backup-restore : incremental_backup
Check that incremental backup works as expected
1. Use a single node
2. Enable incremental_backup
3. Create a keyspace + table
4. Insert data
5. Check that while sstables are flushed - incremental backups are created
6. Run compact, forcing all sstables to be merged
7. Check that the backups hold all the old files and the new compacted file
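The invariant in steps 5-7 can be sketched with a toy model: every flushed sstable is copied into the backup set, and a later compaction replaces the live sstables but never removes the backed-up copies. (Per step 7 the compaction output lands in the backups as well; the data is made up.)

```python
live_sstables, backups = [], []

def flush(rows):
    """A memtable flush produces an sstable and an incremental backup of it."""
    live_sstables.append(rows)
    backups.append(rows)                       # incremental backup on flush (step 5)

def compact():
    """Merge all live sstables into one; backups are never touched."""
    merged = {}
    for sst in live_sstables:
        merged.update(sst)
    live_sstables[:] = [merged]                # one compacted sstable remains (step 6)
    backups.append(merged)                     # compacted file is backed up too (step 7)

flush({"k1": "v1"})
flush({"k2": "v2"})
compact()
assert len(live_sstables) == 1                 # live data fully merged
# Backups hold all the old files plus the new compacted file.
assert backups == [{"k1": "v1"}, {"k2": "v2"}, {"k1": "v1", "k2": "v2"}]
```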
backup-restore : restore_snapshot_using_different_smp_setting
Check that we can restore snapshot files that used a different smp setting
1. Use a single node with smp=1 and create a keyspace + table
2. Insert data
3. Create snapshot and save files
4. Drop keyspace
5. Stop node, start it with smp=2
6. Create keyspace + table
7. Restore data
8. Check that all data exists
repair : full_repair_of_node_initiated_on_node_with_latest_data_test
Check that repair transfers all the data in case none exists on the peer
1. Create a cluster of 2 nodes with rf=2, disable read_repair, hinted_handoff
2. Shutdown node 2 (prior to schema creation)
3. Insert data
4. Start node 2
5. Run repair on node 1
6. Shutdown node 1 - check that all data exists on node 2
repair : test start token / end token option
repair : repair_while_data_is_updated_test
Check that data can be updated while repair is running
1. Create a cluster of 2 nodes with rf=2
2. Stop node 2
3. Insert data
4. Start node 2
5. In a loop update part of data with CL=2
6. Start repair
7. Stop node 1
8. Check that all the data is up to date
repair : fail_node_initiating_repair_test
Check that killing the node that initiated repair does not cause additional failures
1. Create a cluster of 2 nodes with rf=2
2. Stop node 2
3. Insert data
4. In a loop
a. Start node 2
b. Start repair
c. Kill node 2
d. Check that the cluster is available (reads/writes)
repair : large data repair
Repair a cluster that holds a large data set (1 TB)
repair : repair_fixes_update_of_ttl_test
Check that repair fixes updates to ttl
CQL: UPDATE table USING TTL <ttl value> where key=X
1. Create a cluster of 2 nodes with rf=2, disable read_repair, hinted_handoff
2. Insert data
3. Shutdown node 2
4. Update ttl of some cells
5. Start node 2
6. Run repair on node 2
7. Shutdown node 1 - check that all data exists on node 2
repair : test sequential repair
backup-restore : replay_restore_no_additional_data
Check that replaying a restore does not add additional data
1. Use a single node and create a keyspace + table
2. Insert data
3. Create snapshot and save files
4. Drop keyspace
5. Create keyspace + table
6. Restore data
7. Check that all data exists
8. Restore data
9. Check that all data exists
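The replay expectation above can be sketched as: restoring the same snapshot twice is idempotent, so the second run adds no extra data. A toy model with made-up keys:

```python
snapshot = {"k1": "v1", "k2": "v2"}            # saved snapshot files (step 3)
node = {}

def restore(node, snapshot):
    """Apply a snapshot to the node; re-applying equal keys is a no-op."""
    node.update(snapshot)
    return dict(node)

first = restore(node, snapshot)                # step 6: restore
second = restore(node, snapshot)               # step 8: replay the restore
assert first == second == snapshot             # no duplicates, no additional data
```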
backup-restore : restore_snapshot_from_cassandra
Check that we can restore snapshot files that have been created by cassandra
1. Use a single node and create a keyspace + table
2. Restore data from a cassandra snapshot
3. Check that all data exists
backup-restore : restore_snapshot_using_old_token_ownership
Check that we can restore snapshot files that use a non updated token ownership
1. Use a single node and create a keyspace + table
2. Insert data
3. Create snapshot and save files
4. Add an additional node
5. Drop keyspace
6. Create keyspace + table
7. Restore data
8. Check that all data exists
repair : test parallel repair
repair : test_multiple_repair_test
Check that repair is accomplished when multiple repairs are initiated in parallel
1. Create a cluster of 3 nodes with rf=3
2. Insert data
3. Stop node 2
4. Insert data
5. Stop node 3
6. Insert data
7. Start node 2, Start node 3
8. Start repair on node 2, node 3
9. Stop node 1, node 3 - check that node 2 holds all the data
10. Stop node 1, node 2 - check that node 3 holds all the data
backup-restore : failure_durring_snapshot_no_corrupt_data
Check that we can recover from a failure during snapshot
1. Use a single node and create a keyspace + table
2. Insert data
3. Start create snapshot
4. Kill node
5. Start node
6. Check that all data exists