Comments (6)
Hi, thanks for the notes. The scripts actually use the disks I explicitly create for the PVs/PVCs. What is actually redundant is the line "kubectl apply -f ../resources/gce-ssd-storageclass.yaml" in generate.sh, which creates a storage class. That is only needed for dynamic PVCs, but in my example I create the disks explicitly (partly to ensure XFS is used - that may not be required any more?). I've commented out that line with an explanation (I've left it in place, with a comment, as it's a useful reference point for people to see). Both methods result in persistent volume claims that are tolerant of recycling StatefulSet pods; I've just made it more explicit which method I'm using (static PVs vs dynamic PVs). See below, which shows how the explicitly created disks are bound, via PVs, to the PVCs. Thanks, Paul
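For context, the commented-out StorageClass referenced above looks roughly like this (a sketch from memory; the apiVersion and parameter values are assumptions, not copied from the repo):

```yaml
# Sketch of a GCE SSD StorageClass for dynamic provisioning.
# Only needed for dynamic PVCs - not for the static-PV approach used here.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd   # back dynamically provisioned volumes with SSD persistent disks
```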
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
data-volume-4g-1 4Gi RWO Retain Bound default/mongo-configdb-persistent-storage-claim-mongod-configdb-2 fast 2m
data-volume-4g-2 4Gi RWO Retain Bound default/mongo-configdb-persistent-storage-claim-mongod-configdb-0 fast 2m
data-volume-4g-3 4Gi RWO Retain Bound default/mongo-configdb-persistent-storage-claim-mongod-configdb-1 fast 2m
data-volume-8g-1 8Gi RWO Retain Bound default/mongo-shard1-persistent-storage-claim-mongod-shard1-0 fast 2m
data-volume-8g-2 8Gi RWO Retain Bound default/mongo-shard2-persistent-storage-claim-mongod-shard2-2 fast 2m
data-volume-8g-3 8Gi RWO Retain Bound default/mongo-shard2-persistent-storage-claim-mongod-shard2-0 fast 2m
data-volume-8g-4 8Gi RWO Retain Bound default/mongo-shard3-persistent-storage-claim-mongod-shard3-1 fast 2m
data-volume-8g-5 8Gi RWO Retain Bound default/mongo-shard1-persistent-storage-claim-mongod-shard1-1 fast 2m
data-volume-8g-6 8Gi RWO Retain Bound default/mongo-shard2-persistent-storage-claim-mongod-shard2-1 fast 2m
data-volume-8g-7 8Gi RWO Retain Bound default/mongo-shard3-persistent-storage-claim-mongod-shard3-2 fast 2m
data-volume-8g-8 8Gi RWO Retain Bound default/mongo-shard1-persistent-storage-claim-mongod-shard1-2 fast 2m
data-volume-8g-9 8Gi RWO Retain Bound default/mongo-shard3-persistent-storage-claim-mongod-shard3-0 fast 2m
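Note the STORAGECLASS column above: every PV is statically defined against the "fast" class and bound to a pre-created disk. A minimal sketch of one such static PV, assuming a GCE persistent disk was already created with gcloud (the disk name and fsType below are illustrative, not taken from the repo):

```yaml
# Sketch of a statically provisioned PV backed by an explicitly created disk.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-volume-4g-1
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast
  gcePersistentDisk:
    pdName: pd-ssd-disk-4g-1   # hypothetical name of the pre-created GCE disk
    fsType: xfs                # disk pre-formatted as XFS when it was created
```

A StatefulSet's volumeClaimTemplate requesting storageClassName "fast" and 4Gi will then bind to one of these PVs instead of triggering dynamic provisioning.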
from gke-mongodb-shards-demo.
What Kubernetes version are you using? It behaves completely differently for me:
root@vps:~/gke-mongodb-shards-demo/scripts# k get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
data-volume-4g-1 4Gi RWO Retain Available 3d
data-volume-4g-2 4Gi RWO Retain Available 3d
data-volume-4g-3 4Gi RWO Retain Available 3d
data-volume-50g-1 50Gi RWO Retain Available 3d
data-volume-50g-2 50Gi RWO Retain Available 3d
data-volume-50g-3 50Gi RWO Retain Available 3d
data-volume-50g-4 50Gi RWO Retain Available 3d
data-volume-50g-5 50Gi RWO Retain Available 3d
data-volume-50g-6 50Gi RWO Retain Available 3d
pvc-4934d8ee-d5be-11e7-a4b9-42010a8401b3 4Gi RWO Delete Bound default/mongo-configdb-persistent-storage-claim-mongod-configdb-0 3d
pvc-49a57c43-d5be-11e7-a4b9-42010a8401b3 50Gi RWO Delete Bound default/mongo-shard1-persistent-storage-claim-mongod-shard1-0 3d
pvc-501bc8b4-d5be-11e7-a4b9-42010a8401b3 50Gi RWO Delete Bound default/mongo-shard2-persistent-storage-claim-mongod-shard2-0 3d
pvc-5600bd65-d5be-11e7-a4b9-42010a8401b3 4Gi RWO Delete Bound default/mongo-configdb-persistent-storage-claim-mongod-configdb-1 3d
pvc-56851c5b-d5be-11e7-a4b9-42010a8401b3 50Gi RWO Delete Bound default/mongo-shard3-persistent-storage-claim-mongod-shard3-0 3d
pvc-5bc633d4-d5be-11e7-a4b9-42010a8401b3 50Gi RWO Delete Bound default/mongo-shard2-persistent-storage-claim-mongod-shard2-1 3d
pvc-61dd56df-d5be-11e7-a4b9-42010a8401b3 50Gi RWO Delete Bound default/mongo-shard3-persistent-storage-claim-mongod-shard3-1 3d
pvc-63215897-d5be-11e7-a4b9-42010a8401b3 4Gi RWO Delete Bound default/mongo-configdb-persistent-storage-claim-mongod-configdb-2 3d
pvc-87c17743-d60a-11e7-a4b9-42010a8401b3 50Gi RWO Delete Bound default/mongo-shard1-persistent-storage-claim-mongod-shard1-1 3d
I'm just using current GKE defaults.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.8-gke.0", GitCommit:"a7061d4b09b53ab4099e3b5ca3e80fb172e1b018", GitTreeState:"clean", BuildDate:"2017-10-10T18:48:45Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
$ gcloud container get-server-config
Fetching server config for europe-west1-b
defaultClusterVersion: 1.7.8-gke.0
Are you running the project on a Kubernetes platform that is not GKE?
# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.8-gke.0", GitCommit:"a7061d4b09b53ab4099e3b5ca3e80fb172e1b018", GitTreeState:"clean", BuildDate:"2017-10-10T18:48:45Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
# gcloud container get-server-config
Fetching server config for europe-west1-d
defaultClusterVersion: 1.7.8-gke.0
Could it be the client version?
I upgraded my client kubectl version:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.8-gke.0", GitCommit:"a7061d4b09b53ab4099e3b5ca3e80fb172e1b018", GitTreeState:"clean", BuildDate:"2017-10-10T18:48:45Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
and now the PersistentVolumes are created as intended.
We can close this issue now :) Thanks.
Great, thanks for testing!