
helm-charts's People

Contributors

88plug, andy108369, arno01, boz, chainzero, cloud-j-luna, sacreman, sadfroghk, troian


helm-charts's Issues

Create a price script that only accepts "large enough" deployments

With these variables, we can give providers the option to accept only deployments that are large enough. 0.1 vCPU / 512 Mi / 512 Mi deployments can cause more trouble than they're worth, and sometimes the provider wants to fill all of their resources. There's a user in the Discord channel right now who has 1.1 PB of storage. While this could be distributed across the network with NFS (assuming that's possible on Akash), sometimes it's worth more to simply attract large deployments.

cpu_requested
memory_requested
ephemeral_storage_requested
hdd_pers_storage_requested
ssd_pers_storage_requested
nvme_pers_storage_requested

I'm thinking we could have these values, plus boolean flags for providers that don't care whether a single resource gets utilized to the max.

10   <= cpu_mem_correlation  <= 12.8 # GB of memory per thread
10   <= cpu_disk_correlation <= 12.8 # GB of disk (ephemeral + persistent) per thread
0.78 <= mem_disk_correlation <= 1.28 # GB of disk (ephemeral + persistent) per GB of memory

Which are dictated by:

min_cpu  = 10  # minimum number of threads used on a filled machine
min_mem  = 100 # minimum memory used on a filled machine (GB)
min_disk = 100 # minimum ephemeral + persistent disk used on a filled machine (GB)

So in this case, one would have to rent a minimum of 10 GB RAM and 10 GB disk (any kind) to get a bid back for a single thread. This could create some inconsistencies, so we would also have to use some max values:

max_cpu  = 10  # maximum number of threads used on a filled machine
max_mem  = 128 # maximum memory used on a filled machine (GB)
max_disk = 128 # maximum ephemeral + persistent disk used on a filled machine (GB)

Now nobody is allowed more than 12.8 GB of memory or disk (any kind) per thread, or less than 10 GB of memory or disk (any kind) per thread. This could still be made more dynamic and improved, but it would let a provider feel safe that all of their resources will be used evenly across the machine (minimizing idle capacity), without having to worry about overselling any single resource. The example is a bit flawed because the range is so small, but it's just an example.
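A minimal sketch of what this check could look like inside a bid price script, assuming the requested totals (threads and GB) have already been extracted from the order earlier in the script; the variable names, example numbers, and the exit-1-to-decline convention are all illustrative, not part of the existing price_script_generic.sh:

#!/bin/bash
# Illustrative shape check (hypothetical); *_requested values are assumed to be
# computed earlier from the order's resource totals.
cpu_requested=2         # threads
memory_requested=24     # GB
disk_requested=22       # GB, ephemeral + persistent (all classes)

min_cpu=10;   max_cpu=10     # threads on a filled machine
min_mem=100;  max_mem=128    # GB
min_disk=100; max_disk=128   # GB

# allowed memory/disk per thread, derived from the bounds above
mem_low=$(echo "$min_mem / $max_cpu" | bc -l)     # 10
mem_high=$(echo "$max_mem / $min_cpu" | bc -l)    # 12.8
disk_low=$(echo "$min_disk / $max_cpu" | bc -l)   # 10
disk_high=$(echo "$max_disk / $min_cpu" | bc -l)  # 12.8

mem_per_cpu=$(echo "$memory_requested / $cpu_requested" | bc -l)
disk_per_cpu=$(echo "$disk_requested / $cpu_requested" | bc -l)

# within VALUE LOW HIGH -> success when LOW <= VALUE <= HIGH
within() { echo "$1 >= $2 && $1 <= $3" | bc -l | grep -q 1; }

if ! within "$mem_per_cpu" "$mem_low" "$mem_high" || \
   ! within "$disk_per_cpu" "$disk_low" "$disk_high"; then
  exit 1   # decline to bid: deployment shape is outside the accepted ratios
fi
# ... otherwise fall through to the normal price calculation ...

A provider that does not care about one of the resources could simply skip the corresponding within call, which is where the boolean flags mentioned above would come in.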

Cluster domain not configurable

It also looks hard-coded in the images it uses.

E[2023-10-24|07:47:04.177] dns discovery failed                         cmp=provider cmp=service-discovery-agent error="lookup _status._TCP.akash-hostname-operator.akash-services.svc.cluster.local on 10.96.0.10:53: no such host" portName=status service-name=akash-hostname-operator namespace=akash-services

As a workaround I've had to edit CoreDNS like so:

        ready
        kubernetes k8s.sfxworks cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
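For triage, the cluster domain a pod actually resolves against can be read from its resolver config; this is a generic Kubernetes check (reusing the app=akash-provider label selector used elsewhere in these issues), not something specific to the charts:

POD=$(kubectl -n akash-services get pods -l app=akash-provider -o jsonpath='{.items[0].metadata.name}')
# The "search" line ends with the real cluster domain (e.g. cluster.local or k8s.sfxworks).
# If it differs from cluster.local, names such as
# akash-hostname-operator.akash-services.svc.cluster.local will not resolve
# without a CoreDNS alias like the workaround above.
kubectl -n akash-services exec "$POD" -- cat /etc/resolv.conf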

[akash-provider] update provider txs are expensive and may drain the account

Each time the akash-provider pod restarts for whatever reason, it spends about 1.6 AKT on a /akash.provider.v1beta2.MsgUpdateProvider transaction.

https://www.mintscan.io/akash/txs/1096C12AD0FA61E9CA7527ABEDCE21B9FE729D2F0BE07F6E0E5CFFCB6E164B20

Gas settings:

# kubectl -n akash-services exec -ti $(kubectl -n akash-services get pods -l app=akash-provider --output jsonpath='{.items[0].metadata.name}') -- env | grep -E 'AKASH_GAS'
AKASH_GAS_PRICES=0.025uakt
AKASH_GAS=auto
AKASH_GAS_ADJUSTMENT=1.25

akash version: 0.16.4-rc0

Related code lines in akash-provider:

account drainage

Normally one would not restart their provider often, but in the event of an issue like this one: https://discord.com/channels/747885925232672829/771909807359262751/971372088948588576

Users may get their entire account drained really quickly.

Suggestions

    1. [automated] Query the blockchain first and only issue the akash tx provider create provider.yaml & akash tx provider update provider.yaml transactions if the provider is missing or out of date on the blockchain, similarly to how I want to tackle issue #9 (though one of the proposed methods there is to leverage persistent storage); see the sketch below.
    2. [manual] Make provider & certificate creation/attribute updates a separate manual step, as suggested here.
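A rough sketch of the automated option, assuming the provider address is available in $AKASH_FROM and that akash query provider get returns the record as JSON; the jq/awk field names are assumptions and should be verified against real output before relying on them:

# Only spend gas when the on-chain provider record is missing or stale.
if ! akash query provider get "$AKASH_FROM" -o json > /tmp/onchain-provider.json 2>/dev/null; then
  # no provider record on chain yet -> the only case where create is needed
  akash tx provider create provider.yaml -y
  exit $?
fi

# Provider already exists: only send the (expensive) update when something changed.
onchain_uri=$(jq -r '.host_uri' /tmp/onchain-provider.json)    # assumed field name
local_uri=$(awk '$1 == "host:" {print $2}' provider.yaml)      # assumed provider.yaml layout

if [ "$onchain_uri" != "$local_uri" ]; then
  akash tx provider update provider.yaml -y
else
  echo "on-chain provider record matches provider.yaml; skipping MsgUpdateProvider"
fi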

do not create a new cert each time akash-provider gets restarted

After checking with @88plug, it looks like akash-provider creates new certs each time it gets restarted:

$ akash query cert list --owner akash10fl5f6ukr8kc03mtmf8vckm6kqqwqpc04eruqa -o json | jq | grep state | sort | uniq -c
     58         "state": "revoked",
     27         "state": "valid",
$ akash query txs --events "message.sender=akash10fl5f6ukr8kc03mtmf8vckm6kqqwqpc04eruqa&message.action=cert-create-certificate" --page 1 --limit 100  | jq -r '.txs[] | [ .txhash, .timestamp, (.tx.body.messages[] | [  ."@type", .cert, .owner, .host_uri ] )[] ] | @csv' | awk -F',' -v OFS=',' '{tmp="echo " $4 " | openssl base64 -A -d | openssl x509 -ext subjectAltName -noout | xargs"; tmp | getline cksum; $4=cksum; print;}'

...
...
"4B22C17F35A7B08D87E2CC7DAA6AC4D474B4E9B4FFAEE3F3640C1B7F94DD4EE8","2022-02-16T16:30:53Z","/akash.cert.v1beta1.MsgCreateCertificate",X509v3 Subject Alternative Name: DNS:provider.akash.world,"akash10fl5f6ukr8kc03mtmf8vckm6kqqwqpc04eruqa",
"26844B77B5539385059AEF901F0F03AC23F87C459F2006EFB963F6D0E3790290","2022-02-16T16:39:38Z","/akash.cert.v1beta1.MsgCreateCertificate",X509v3 Subject Alternative Name: DNS:provider.akash.world,"akash10fl5f6ukr8kc03mtmf8vckm6kqqwqpc04eruqa",
"480F683DDA2CCDB22B73FEDC13BDD221108F44758386A8F4121EAF0B7274A112","2022-02-16T17:47:15Z","/akash.cert.v1beta1.MsgCreateCertificate",X509v3 Subject Alternative Name: DNS:provider.akash.world,"akash10fl5f6ukr8kc03mtmf8vckm6kqqwqpc04eruqa",
...
...

based on the configmap-boot
https://github.com/ovrclk/helm-charts/blob/688b55b5/charts/akash-provider/templates/configmap-boot.yaml#L64

    /bin/akash tx cert create server provider.{{ .Values.domain }} $POPTS || exit 1

I think it should really save the ~/.akash/<akash1....>.pem file on a local volume somewhere, so it won't attempt to recreate that cert each time it gets restarted; instead, it would first detect that the cert is already present.

Local volumes can be added this way:
https://kubernetes.io/docs/concepts/storage/volumes/#local

There are other alternatives in K8s as well; most of them are described on that page.
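A rough sketch of what such a guard could look like in the boot script, assuming the provider's address is available in $AKASH_OWNER, the default ~/.akash home, and $DOMAIN standing in for {{ .Values.domain }}; whether the stored cert is also still valid on chain would still need a separate check (see the cert expiration issue further below):

CERT_FILE="$HOME/.akash/${AKASH_OWNER}.pem"

if [ -f "$CERT_FILE" ]; then
  # cert already present on the local volume -> do not issue a new MsgCreateCertificate
  echo "reusing existing certificate at $CERT_FILE"
else
  /bin/akash tx cert create server "provider.${DOMAIN}" $POPTS || exit 1
fi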

state-sync variables must be passed to the main pod

Or state-sync must pull the initial data off of the state-sync server until the node catches up first.

While the init pod sets the correct AKASH_STATESYNC_TRUST_HEIGHT & AKASH_STATESYNC_TRUST_HASH exports, they do not make it to the main pod:

  • init pod
++ jq -r .result.block_id.hash
+ TRUST_HASH=1BC0C03ED5425221E2695889DB09EF0A6698BEEC676812A941F15C89445A6C3B
+ echo 'TRUST HEIGHT: 13277740'
+ TRUST_HASH=1BC0C03ED5425221E2695889DB09EF0A6698BEEC6768
+ echo 'TRUST HASH: 1BC0C03ED5425221E2695889DB09EF0A6698BEEC676812A941F15C89445A6C3B'
+ export AKASH_STATESYNC_TRUST_HEIGHT=13277740
+ AKASH_STATESYNC_TRUST_HEIGHT=13277740
+ export AKASH_STATESYNC_TRUST_HASH=1BC0C03ED5425221E2695889DB09EF0A6698BEEC676812A941F15C89445A6C3B
+ AKASH_STATESYNC_TRUST_HASH=1BC0C03ED5425221E2695889DB09EF0A6698BEEC676812A941F15C89445A6C3B
+ [[ false == \t\r\u\e ]]
TRUST HEIGHT: 13277740
TRUST HASH: 1BC0C03ED5425221E2695889DB09EF0A6698BEEC676812A941F15C89445A6C3B
  • main pod
INF service start impl=ConsensusReactor module=consensus msg={}
INF Reactor  module=consensus waitSync=true
INF Starting state sync module=statesync
Error: failed to start state sync: failed to set up light client state provider: invalid TrustOptions: negative or zero height

reported by: @88plug
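One way to get the values across is to have the init container write them to a file on a volume shared with the main container; a minimal sketch assuming an emptyDir mounted at /shared in both containers (the chart does not do this today):

# init container: persist the computed trust values
cat > /shared/statesync.env <<EOF
export AKASH_STATESYNC_TRUST_HEIGHT=${AKASH_STATESYNC_TRUST_HEIGHT}
export AKASH_STATESYNC_TRUST_HASH=${AKASH_STATESYNC_TRUST_HASH}
EOF

# main container entrypoint: source the file (if present) before starting the node,
# so state-sync gets non-zero TrustOptions instead of failing as in the log above
[ -f /shared/statesync.env ] && . /shared/statesync.env
exec akash start   # or whatever the chart's usual node start command is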

[generic bid price script] set better pricing for storage

I think we should adjust these values so we are at least cheaper than AWS:
https://github.com/ovrclk/helm-charts/blob/provider-0.173.0/charts/akash-provider/scripts/price_script_generic.sh#L50-L56

Amazon charges for EBS storage (GB-month):

  • $0.015 sc1 (Cold HDD)
  • $0.045 st1 (Throughput Optimized HDD)
  • $0.08 gp3 (General Purpose SSD)
  • $0.10 gp2 (General Purpose SSD)
  • $0.125 io1, io2 (Provisioned IOPS SSD)

https://aws.amazon.com/ebs/pricing/
https://calculator.aws/#/createCalculator/EBS

I am thinking of making the following adjustments to the USD prices per GB-month:

before

TARGET_HD_EPHEMERAL="0.25" # previously TARGET_HD
TARGET_HD_PERS_HDD="0.30"  # beta1
TARGET_HD_PERS_SSD="0.40"  # beta2
TARGET_HD_PERS_NVME="0.45" # beta3

currently this gives the following monthly prices for 16 TiB of storage:

  • $4096 a month for ephemeral;
  • $4915 a month for HDD (beta1) persistent;
  • $6553 a month for SSD (beta2) persistent;
  • $7372 a month for NVME (beta3) persistent;

To calculate this, multiply the number of GB by the TARGET price to get the result in USD, e.g. 16*1024*0.25 = 4096.

after

TARGET_HD_EPHEMERAL="0.08" # previously TARGET_HD
TARGET_HD_PERS_HDD="0.10"  # beta1
TARGET_HD_PERS_SSD="0.12"  # beta2
TARGET_HD_PERS_NVME="0.14" # beta3

so that the price for 16 TiB of storage on Akash will be:

  • $1310 a month for ephemeral;
  • $1638 a month for HDD (beta1) persistent;
  • $1966 a month for SSD (beta2) persistent;
  • $2293 a month for NVME (beta3) persistent;
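For reference, the same arithmetic in shell form, following the issue's convention of treating 16 TiB as 16*1024 GB:

# monthly storage price = size in GB * target price per GB-month
SIZE_GB=$((16 * 1024))

for target in 0.08 0.10 0.12 0.14; do
  echo "$target USD/GB-month -> $(echo "$SIZE_GB * $target" | bc) USD/month for $SIZE_GB GB"
done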

[provider] move user supplied values into values.yaml to reduce the amount of `--set` we use

  • the defaults will be passed via a user-supplied -f <sandbox|mainnet>.yaml file;
  • we will also ship default skeleton files for <sandbox|mainnet>.yaml which the user can open, read & modify to their needs;
  • on the bid price script:
    • the bid price script can still be added this way: -f <sandbox|mainnet>.yaml --set bidpricescript="$(cat price_script_generic.sh | openssl base64 -A)" (see the sketch below);
  • and update the documentation accordingly.
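A sketch of what the resulting install could look like, assuming the chart is installed from a local checkout; the release name, chart path, and the values-mainnet.yaml skeleton file name are illustrative:

# copy the skeleton, edit it, then install with a single -f plus one --set
cp charts/akash-provider/values-mainnet.yaml mainnet.yaml   # hypothetical skeleton name
$EDITOR mainnet.yaml

helm upgrade --install akash-provider ./charts/akash-provider \
  -n akash-services \
  -f mainnet.yaml \
  --set bidpricescript="$(cat price_script_generic.sh | openssl base64 -A)"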

more descriptive attribute names

These top-level attributes should have more descriptive names and/or be grouped together by function:

wallet_key_name: x
wallet_key_password: y
akash_chain_id: z
akash_node: a

or

akash:
  key_name: x
  key_password: y
  chain_id: z
  node: a

For a couple of reasons:

  1. logically grouping related attributes together makes reading configuration easier.
  2. short top-level symbols can easily collide with other symbols (key, from, etc... are getting very close to default template functions)

provider: check cert expiration locally

We currently verify whether the certificate has expired by checking on chain, as indicated in this section of the code: https://github.com/akash-network/helm-charts/blob/provider-4.3.7/charts/akash-provider/scripts/init.sh#L116.

However, there is a potential issue where a worker node may cache an already expired certificate. The expired certificate would be stored at ~root/.akash/k8s-config/provider.pem on the worker node. This becomes especially problematic when there are multiple worker nodes where the akash-provider pod could be deployed.

Therefore, it's advisable to also check for certificate expiration locally.
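A minimal local check could look like the following, run on the worker node (or from the init script) against the cached file; this assumes the certificate block in provider.pem is readable by openssl x509 directly (if the key is bundled first, it may need to be extracted beforehand):

CERT=~root/.akash/k8s-config/provider.pem

# openssl exits non-zero from -checkend when the cert expires within the given
# number of seconds; 0 means "already expired or expiring right now"
if [ -f "$CERT" ] && ! openssl x509 -noout -checkend 0 -in "$CERT"; then
  echo "cached provider cert is expired; removing it so a fresh one gets issued"
  rm -vf "$CERT"
fi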


The workaround until then would be:

  1. remove the provider cert on all worker nodes:
rm -vf ~root/.akash/k8s-config/provider.pem
  2. bounce the provider pod:
kubectl -n akash-services delete pods -l app=akash-provider

[akash-node & akash-e2e] rpc, grpc, p2p, api, e2e are exposed to WAN by default

Do we really want to expose these to the WAN?

AFAIK, exposing the akash-node isn't optimal, as one can easily DDoS it.
akash-node is just the RPC node, which should be used within the K8s cluster network only.

If you deploy the akash-provider from the helm charts right now, you'll see these are exposed to the WAN:

root@node1:~# kubectl get ing -A
NAMESPACE        NAME                      CLASS   HOSTS                  ADDRESS                       PORTS   AGE
akash-services   akash-ingress-e2e         nginx   e2e.decloud.pro        134.122.72.70,134.122.72.91   80      4m26s

akash-services   akash-ingress-node-api    nginx   api.decloud.pro        134.122.72.70,134.122.72.91   80      4m26s
akash-services   akash-ingress-node-grpc   nginx   grpc.decloud.pro       134.122.72.70,134.122.72.91   80      4m26s
akash-services   akash-ingress-node-p2p    nginx   p2p.decloud.pro        134.122.72.70,134.122.72.91   80      4m26s
akash-services   akash-ingress-node-rpc    nginx   rpc.decloud.pro        134.122.72.70,134.122.72.91   80      4m26s

akash-services   akash-ingress-provider    nginx   provider.decloud.pro   134.122.72.70,134.122.72.91   80      4m26s  << OK
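Until the charts gain a switch for this, a workaround is to delete the node-facing ingresses after install; the names below are taken straight from the listing above. Note Helm may recreate them on the next upgrade, so a chart-level toggle would still be the proper fix:

# remove the WAN-facing node/e2e ingresses; the provider ingress stays
kubectl -n akash-services delete ingress \
  akash-ingress-e2e \
  akash-ingress-node-api \
  akash-ingress-node-grpc \
  akash-ingress-node-p2p \
  akash-ingress-node-rpc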

Provider upgrade does not see old leases

bdl.computer (akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp) is showing multiple active leases before upgrade:

Found a lease with akash12rc45a8x0j4jdl5u2fcsytmtlqt25d4mff58vq
Found a lease with akash17ndnqvz4kx6aaf4l4tp6zsny0xkwqyl69egxw2
Found a lease with akash1xe342jdk25wugvrhlcsl65csr2z48etxp32mpt
Found a lease with akash1p83uq6sr8nqrv68cce7yjndrleswy7j55e3y29
Found a lease with akash1sum0e5ncl2d0kdeudy6w6x99ah5dy4u35x0wmf
Found a lease with akash1c9x3x96gdvkn3pq4u3qfnvv9gyzacvaymqqhpm
Found a lease with akash1w5uscl3jje6ydgq8gyw93qlmmgh9t3xjmamq7s
Found a lease with akash130est2p07yrfddsnmgyu4ejq6h5nadac0u7qhg
Found a lease with akash1m94c5jla8z0dm98ccn0gxhcehfke4pry3jqfum

Followed upgrade instructions and provider does not see any of the active leases before the upgrade.

I noticed that all the Manifest/ProviderHost/ProviderLeasedIP resources were not restored; they are blank after the upgrade. Uninstalling the old provider as per the instructions does not keep the manifests via the helm.sh/resource-policy=keep annotation; it appears they are deleted regardless when running helm -n akash-services uninstall akash-provider.

I had to run kubectl apply -f manifests-backup.yaml and kubectl apply -f providerhosts-backup.yaml and bounce the provider pod to see the old leases. But then there was even more trouble:
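For anyone hitting this, a sketch of the backup step taken before uninstalling the old chart; manifests.akash.network appears in the log below, while the providerhosts/providerleasedips CRD names are assumptions based on the resources mentioned above and should be verified with kubectl get crd:

# back up the Akash provider CRD objects before the upgrade
kubectl get manifests.akash.network -A -o yaml         > manifests-backup.yaml
kubectl get providerhosts.akash.network -A -o yaml     > providerhosts-backup.yaml      # assumed CRD name
kubectl get providerleasedips.akash.network -A -o yaml > providerleasedips-backup.yaml  # assumed CRD name

# after the upgrade: restore and bounce the provider pod
kubectl apply -f manifests-backup.yaml
kubectl apply -f providerhosts-backup.yaml
kubectl -n akash-services delete pods -l app=akash-provider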


I[2022-11-27|03:16:34.578] using in cluster kube config                 cmp=provider
I[2022-11-27|03:16:34.690] using in cluster kube config                 cmp=provider
D[2022-11-27|03:16:34.695] service being found via autodetection        cmp=provider service=hostname-operator
I[2022-11-27|03:16:34.696] dns discovery success                        cmp=provider cmp=service-discovery-agent addrs="[{Target:akash-hostname-operator.akash-services.svc.cluster.local. Port:8188 Priority:0 Weight:100}]" portName=status service-name=akash-hostname-operator namespace=akash-services
D[2022-11-27|03:16:34.696] satisfying pending requests                  cmp=provider cmp=service-discovery-agent qty=1
I[2022-11-27|03:16:34.705] check result                                 cmp=provider operator=hostname status=200
I[2022-11-27|03:16:34.705] ready                                        cmp=provider cmp=waiter waitable="<*operatorclients.hostnameOperatorClient 0xc0006846c0>"
I[2022-11-27|03:16:34.705] all waitables ready                          cmp=provider cmp=waiter
I[2022-11-27|03:16:34.707] starting with existing reservations          module=provider-cluster cmp=provider cmp=service cmp=inventory-service qty=13
D[2022-11-27|03:16:34.710] found existing hostname                      module=provider-cluster cmp=provider cmp=service hostname=62jl13jhf5cj3anie7gs84b958.ingress.bdl.computer id=akash12rc45a8x0j4jdl5u2fcsytmtlqt25d4mff58vq/8371090/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:34.710] found existing hostname                      module=provider-cluster cmp=provider cmp=service hostname=blocklessghost.akash.pro id=akash1w5uscl3jje6ydgq8gyw93qlmmgh9t3xjmamq7s/8436789/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:34.710] found existing hostname                      module=provider-cluster cmp=provider cmp=service hostname=hlj6e8ntcdb4l7avv9aide98a4.ingress.bdl.computer id=akash17ndnqvz4kx6aaf4l4tp6zsny0xkwqyl69egxw2/8510193/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:34.710] found existing hostname                      module=provider-cluster cmp=provider cmp=service hostname=jsrltmfl8teo9a1t01rcqoaf50.ingress.bdl.computer id=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:34.710] found existing hostname                      module=provider-cluster cmp=provider cmp=service hostname=n8ps10lhihckhbrqnspm3vqm68.ingress.bdl.computer id=akash1w5uscl3jje6ydgq8gyw93qlmmgh9t3xjmamq7s/8436789/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:34.710] found existing hostname                      module=provider-cluster cmp=provider cmp=service hostname=op5mo9gn5dbprfuigng0gqbv18.ingress.bdl.computer id=akash1xe342jdk25wugvrhlcsl65csr2z48etxp32mpt/8286524/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:34.710] found existing hostname                      module=provider-cluster cmp=provider cmp=service hostname=re6sr3kt11akd4ake9tnpne19c.ingress.bdl.computer id=akash1sum0e5ncl2d0kdeudy6w6x99ah5dy4u35x0wmf/8523185/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
E[2022-11-27|03:16:34.752] node in poor condition                       cmp=provider client=kube node=k3s-wildponyexpress condition=MemoryPressure status=Unknown
E[2022-11-27|03:16:34.752] node in poor condition                       cmp=provider client=kube node=k3s-wildponyexpress condition=DiskPressure status=Unknown
E[2022-11-27|03:16:34.752] node in poor condition                       cmp=provider client=kube node=k3s-wildponyexpress condition=PIDPressure status=Unknown
D[2022-11-27|03:16:34.859] inventory ready                              module=provider-cluster cmp=provider cmp=service cmp=inventory-service
D[2022-11-27|03:16:34.859] cluster resources                            module=provider-cluster cmp=provider cmp=service cmp=inventory-service dump="{\"nodes\":[{\"name\":\"k3s-rs2\",\"allocatable\":{\"cpu\":32000,\"memory\":50618888192,\"storage_ephemeral\":192163573200},\"available\":{\"cpu\":30890,\"memory\":49962479616,\"storage_ephemeral\":191626702288}},{\"name\":\"akash-rpc-node\",\"allocatable\":{\"cpu\":24000,\"memory\":16766521344,\"storage_ephemeral\":264429877246},\"available\":{\"cpu\":23690,\"memory\":16500183040,\"storage_ephemeral\":264429877246}},{\"name\":\"k3s-c27\",\"allocatable\":{\"cpu\":16000,\"memory\":42168496128,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":15890,\"memory\":42048958464,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-ct21\",\"allocatable\":{\"cpu\":40000,\"memory\":101349371904,\"storage_ephemeral\":276607029646},\"available\":{\"cpu\":37890,\"memory\":94787383296,\"storage_ephemeral\":276607029646}},{\"name\":\"k3s-ct22\",\"allocatable\":{\"cpu\":40000,\"memory\":50615308288,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":38890,\"memory\":49958899712,\"storage_ephemeral\":12926937896}},{\"name\":\"k3s-rs4\",\"allocatable\":{\"cpu\":32000,\"memory\":50618884096,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":31890,\"memory\":50499346432,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-ct30\",\"allocatable\":{\"cpu\":12000,\"memory\":8334454784,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":11890,\"memory\":8214917120,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-node12\",\"allocatable\":{\"cpu\":128000,\"memory\":101226201088,\"storage_ephemeral\":4031878577677},\"available\":{\"cpu\":127790,\"memory\":100972445696,\"storage_ephemeral\":4031878577677}},{\"name\":\"k3s-rs3\",\"allocatable\":{\"cpu\":32000,\"memory\":18859696128,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":30890,\"memory\":18203287552,\"storage_ephemeral\":13426060072}},{\"name\":\"k3s-wildgamer\",\"allocatable\":{\"cpu\":32000,\"memory\":10396827648,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":30890,\"memory\":9740419072,\"storage_ephemeral\":13962930984}},{\"name\":\"k3s-chia1\",\"allocatable\":{\"cpu\":12000,\"memory\":2038071296,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":11880,\"memory\":1884979200,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-wildfan\",\"allocatable\":{\"cpu\":32000,\"memory\":29429399552,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":31890,\"memory\":29309861888,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-wilditx\",\"allocatable\":{\"cpu\":32000,\"memory\":10429444096,\"storage_ephemeral\":421090118326},\"available\":{\"cpu\":31890,\"memory\":10309906432,\"storage_ephemeral\":421090118326}},{\"name\":\"k3s-wildpony\",\"allocatable\":{\"cpu\":32000,\"memory\":33658916864,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":31890,\"memory\":33539379200,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-wildprox\",\"allocatable\":{\"cpu\":12000,\"memory\":8336101376,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":11890,\"memory\":8216563712,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-ct10\",\"allocatable\":{\"cpu\":12000,\"memory\":29435994112,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":11890,\"memory\":29316456448,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-ct26\",\"allocatable\":{\"cpu\":24000,\"memory\":8310390784,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":23890,\"memory\":819085312
0,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-ct28\",\"allocatable\":{\"cpu\":12000,\"memory\":50625253376,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":11890,\"memory\":50505715712,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-ct29\",\"allocatable\":{\"cpu\":24000,\"memory\":52734746624,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":23890,\"memory\":52615208960,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-wildtaichi\",\"allocatable\":{\"cpu\":32000,\"memory\":29397020672,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":31890,\"memory\":29277483008,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-chia4\",\"allocatable\":{\"cpu\":24000,\"memory\":8342425600,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":23890,\"memory\":8222887936,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-ct20\",\"allocatable\":{\"cpu\":40000,\"memory\":101349376000,\"storage_ephemeral\":276607029646},\"available\":{\"cpu\":35890,\"memory\":92639903744,\"storage_ephemeral\":169232847246}},{\"name\":\"k3s-ct23\",\"allocatable\":{\"cpu\":40000,\"memory\":50615308288,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":39390,\"memory\":50361552896,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-node11\",\"allocatable\":{\"cpu\":128000,\"memory\":101223075840,\"storage_ephemeral\":4095691946254},\"available\":{\"cpu\":123790,\"memory\":96271699968,\"storage_ephemeral\":4074217109774}},{\"name\":\"k3s-chia2\",\"allocatable\":{\"cpu\":24000,\"memory\":4112039936,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":23890,\"memory\":3992502272,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-ct25\",\"allocatable\":{\"cpu\":24000,\"memory\":20993912832,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":22890,\"memory\":20337504256,\"storage_ephemeral\":13426060072}},{\"name\":\"k3s-rs1\",\"allocatable\":{\"cpu\":32000,\"memory\":8340492288,\"storage_ephemeral\":421090118326},\"available\":{\"cpu\":31890,\"memory\":8220954624,\"storage_ephemeral\":421090118326}},{\"name\":\"k3s-wildatxm\",\"allocatable\":{\"cpu\":24000,\"memory\":29431472128,\"storage_ephemeral\":195002651086},\"available\":{\"cpu\":22890,\"memory\":28775063552,\"storage_ephemeral\":193928909262}},{\"name\":\"k3s-wildbro\",\"allocatable\":{\"cpu\":32000,\"memory\":16738611200,\"storage_ephemeral\":14499801896},\"available\":{\"cpu\":31890,\"memory\":16619073536,\"storage_ephemeral\":14499801896}},{\"name\":\"k3s-wildsfo\",\"allocatable\":{\"cpu\":36000,\"memory\":67394703360,\"storage_ephemeral\":195002651086},\"available\":{\"cpu\":34890,\"memory\":65127682048,\"storage_ephemeral\":186412716494}}],\"total_allocatable\":{\"cpu\":1016000,\"memory\":1113891405824,\"storage_ephemeral\":10659559610413},\"total_available\":{\"cpu\":994790,\"memory\":1084623552512,\"storage_ephemeral\":10516252825645}}\n"
I[2022-11-27|03:16:35.048] hostnames withheld                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1m94c5jla8z0dm98ccn0gxhcehfke4pry3jqfum/8449748/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=argoPlacement cnt=0
I[2022-11-27|03:16:35.055] hostnames withheld                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1p83uq6sr8nqrv68cce7yjndrleswy7j55e3y29/8418358/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=argoPlacement cnt=0
E[2022-11-27|03:16:35.056] lease not active, not deploying              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast
E[2022-11-27|03:16:35.057] execution error                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast state=deploy-active err="inactive Lease: akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp"
D[2022-11-27|03:16:35.058] purged ips                                   module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast
D[2022-11-27|03:16:35.059] purged hostnames                             module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast
E[2022-11-27|03:16:35.061] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast err="namespaces \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
I[2022-11-27|03:16:35.068] hostnames withheld                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1sum0e5ncl2d0kdeudy6w6x99ah5dy4u35x0wmf/8523185/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash cnt=0
I[2022-11-27|03:16:35.076] hostnames withheld                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash12rc45a8x0j4jdl5u2fcsytmtlqt25d4mff58vq/8371090/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash cnt=0
D[2022-11-27|03:16:35.077] deploy complete                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1m94c5jla8z0dm98ccn0gxhcehfke4pry3jqfum/8449748/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=argoPlacement
D[2022-11-27|03:16:35.081] deploy complete                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1p83uq6sr8nqrv68cce7yjndrleswy7j55e3y29/8418358/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=argoPlacement
I[2022-11-27|03:16:35.084] hostnames withheld                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash130est2p07yrfddsnmgyu4ejq6h5nadac0u7qhg/8449829/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=argoPlacement cnt=0
E[2022-11-27|03:16:35.088] node in poor condition                       cmp=provider client=kube node=k3s-wildponyexpress condition=MemoryPressure status=Unknown
E[2022-11-27|03:16:35.088] node in poor condition                       cmp=provider client=kube node=k3s-wildponyexpress condition=DiskPressure status=Unknown
E[2022-11-27|03:16:35.088] node in poor condition                       cmp=provider client=kube node=k3s-wildponyexpress condition=PIDPressure status=Unknown
I[2022-11-27|03:16:35.100] hostnames withheld                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1c9x3x96gdvkn3pq4u3qfnvv9gyzacvaymqqhpm/8449775/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=argoPlacement cnt=0
I[2022-11-27|03:16:35.102] hostnames withheld                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash17ndnqvz4kx6aaf4l4tp6zsny0xkwqyl69egxw2/8510193/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=dcloud cnt=0
I[2022-11-27|03:16:35.102] declaring hostname                           cmp=provider client=kube lease=akash1sum0e5ncl2d0kdeudy6w6x99ah5dy4u35x0wmf/8523185/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp service-name=app external-port=80 host=re6sr3kt11akd4ake9tnpne19c.ingress.bdl.computer
I[2022-11-27|03:16:35.104] declaring hostname                           cmp=provider client=kube lease=akash12rc45a8x0j4jdl5u2fcsytmtlqt25d4mff58vq/8371090/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp service-name=tronwallet external-port=80 host=62jl13jhf5cj3anie7gs84b958.ingress.bdl.computer
D[2022-11-27|03:16:35.105] deploy complete                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1sum0e5ncl2d0kdeudy6w6x99ah5dy4u35x0wmf/8523185/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash
D[2022-11-27|03:16:35.106] deploy complete                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash12rc45a8x0j4jdl5u2fcsytmtlqt25d4mff58vq/8371090/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash
I[2022-11-27|03:16:35.109] hostnames withheld                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1xe342jdk25wugvrhlcsl65csr2z48etxp32mpt/8286524/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash cnt=0
E[2022-11-27|03:16:35.111] lease not active, not deploying              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash
E[2022-11-27|03:16:35.111] execution error                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash state=deploy-active err="inactive Lease: akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp"
E[2022-11-27|03:16:35.146] lease not active, not deploying              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash
E[2022-11-27|03:16:35.146] execution error                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash state=deploy-active err="inactive Lease: akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp"
E[2022-11-27|03:16:35.146] lease not active, not deploying              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner
D[2022-11-27|03:16:35.146] purged ips                                   module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash
E[2022-11-27|03:16:35.146] execution error                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner state=deploy-active err="inactive Lease: akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp"
D[2022-11-27|03:16:35.146] purged hostnames                             module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash
D[2022-11-27|03:16:35.148] purged ips                                   module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash
I[2022-11-27|03:16:35.148] hostnames withheld                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1w5uscl3jje6ydgq8gyw93qlmmgh9t3xjmamq7s/8436789/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash cnt=0
E[2022-11-27|03:16:35.149] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
D[2022-11-27|03:16:35.150] purged ips                                   module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner
D[2022-11-27|03:16:35.151] deploy complete                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash130est2p07yrfddsnmgyu4ejq6h5nadac0u7qhg/8449829/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=argoPlacement
D[2022-11-27|03:16:35.151] purged hostnames                             module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner
D[2022-11-27|03:16:35.151] purged hostnames                             module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash
E[2022-11-27|03:16:35.151] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:35.152] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner err="namespaces \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
E[2022-11-27|03:16:35.247] teardown lease: unable to delete manifest    cmp=provider client=kube ns=u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92 error="manifests.akash.network \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
E[2022-11-27|03:16:35.247] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast err="namespaces \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
D[2022-11-27|03:16:35.250] deploy complete                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1c9x3x96gdvkn3pq4u3qfnvv9gyzacvaymqqhpm/8449775/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=argoPlacement
E[2022-11-27|03:16:35.251] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u error="manifests.akash.network \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:35.251] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:35.253] teardown lease: unable to delete manifest    cmp=provider client=kube ns=6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4 error="manifests.akash.network \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:35.253] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
I[2022-11-27|03:16:35.254] declaring hostname                           cmp=provider client=kube lease=akash17ndnqvz4kx6aaf4l4tp6zsny0xkwqyl69egxw2/8510193/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp service-name=node external-port=80 host=hlj6e8ntcdb4l7avv9aide98a4.ingress.bdl.computer
E[2022-11-27|03:16:35.254] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq error="manifests.akash.network \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
E[2022-11-27|03:16:35.254] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner err="namespaces \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
D[2022-11-27|03:16:35.256] deploy complete                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash17ndnqvz4kx6aaf4l4tp6zsny0xkwqyl69egxw2/8510193/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=dcloud
I[2022-11-27|03:16:35.259] declaring hostname                           cmp=provider client=kube lease=akash1xe342jdk25wugvrhlcsl65csr2z48etxp32mpt/8286524/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp service-name=tetris external-port=80 host=op5mo9gn5dbprfuigng0gqbv18.ingress.bdl.computer
I[2022-11-27|03:16:35.260] declaring hostname                           cmp=provider client=kube lease=akash1w5uscl3jje6ydgq8gyw93qlmmgh9t3xjmamq7s/8436789/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp service-name=ghost external-port=80 host=n8ps10lhihckhbrqnspm3vqm68.ingress.bdl.computer
D[2022-11-27|03:16:35.261] deploy complete                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1xe342jdk25wugvrhlcsl65csr2z48etxp32mpt/8286524/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash
I[2022-11-27|03:16:35.263] declaring hostname                           cmp=provider client=kube lease=akash1w5uscl3jje6ydgq8gyw93qlmmgh9t3xjmamq7s/8436789/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp service-name=ghost external-port=80 host=blocklessghost.akash.pro
D[2022-11-27|03:16:35.265] deploy complete                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1w5uscl3jje6ydgq8gyw93qlmmgh9t3xjmamq7s/8436789/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash
E[2022-11-27|03:16:35.277] adjust inventory for pending reservation     module=provider-cluster cmp=provider cmp=service cmp=inventory-service error="insufficient capacity"
E[2022-11-27|03:16:35.451] teardown lease: unable to delete manifest    cmp=provider client=kube ns=u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92 error="manifests.akash.network \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
E[2022-11-27|03:16:35.451] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast err="namespaces \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
E[2022-11-27|03:16:35.454] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u error="manifests.akash.network \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:35.454] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:35.456] teardown lease: unable to delete manifest    cmp=provider client=kube ns=6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4 error="manifests.akash.network \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:35.456] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:35.456] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq error="manifests.akash.network \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
E[2022-11-27|03:16:35.456] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner err="namespaces \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
I[2022-11-27|03:16:35.746] found orders                                 module=bidengine-service cmp=provider count=11
D[2022-11-27|03:16:35.746] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1qqzwc5d7hynl67nsmn9jukvwqp3vzdl6j2t7lk/2528509/1/1
D[2022-11-27|03:16:35.746] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1qqmhmrx9sd75kkfyyg9uncm326l8kclc5tckcu/5545245/1/1
D[2022-11-27|03:16:35.746] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1qpygf7nphng80k6w3j9ljqy3ksnkk2gdcmwe55/4185046/1/1
D[2022-11-27|03:16:35.746] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1qtj2uhj324wxg0083j5fkld7nv54dg8auksut6/2252060/1/1
D[2022-11-27|03:16:35.746] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8654676/1/1
D[2022-11-27|03:16:35.746] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1qsrzvymyw06z4egr9w9w84jtxcy0na3rrtexc4/6445173/1/1
D[2022-11-27|03:16:35.746] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1qk5pw9g348gl76e2m2esuh0j4wnvqjus9ts3c6/2249866/1/1
D[2022-11-27|03:16:35.746] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1qch2ewwv4mm54ucn9z8y7ym43svg9gzk4g45pk/8274414/1/1
D[2022-11-27|03:16:35.746] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/2/1/2
D[2022-11-27|03:16:35.746] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/3/1/1
D[2022-11-27|03:16:35.747] creating catchup order                       module=bidengine-service cmp=provider order=order/akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/4/1/1
I[2022-11-27|03:16:35.846] found existing bid                           module=bidengine-order cmp=provider
I[2022-11-27|03:16:35.846] found existing bid                           module=bidengine-order cmp=provider
E[2022-11-27|03:16:35.846] bid in unexpected state                      module=bidengine-order cmp=provider bid-state=closed
I[2022-11-27|03:16:35.846] fetched provider attributes                  module=bidengine-service cmp=provider provider=akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
E[2022-11-27|03:16:35.846] bid in unexpected state                      module=bidengine-order cmp=provider bid-state=closed
I[2022-11-27|03:16:35.846] group fetched                                module=bidengine-order cmp=provider order=akash1qqzwc5d7hynl67nsmn9jukvwqp3vzdl6j2t7lk/2528509/1/1
I[2022-11-27|03:16:35.846] shutting down                                module=bidengine-order cmp=provider order=akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/2/1/2
I[2022-11-27|03:16:35.846] found existing bid                           module=bidengine-order cmp=provider
I[2022-11-27|03:16:35.846] shutting down                                module=bidengine-order cmp=provider order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8654676/1/1
E[2022-11-27|03:16:35.846] bid in unexpected state                      module=bidengine-order cmp=provider bid-state=closed
I[2022-11-27|03:16:35.846] found existing bid                           module=bidengine-order cmp=provider
I[2022-11-27|03:16:35.846] group fetched                                module=bidengine-order cmp=provider order=akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/3/1/1
I[2022-11-27|03:16:35.846] shutting down                                module=bidengine-order cmp=provider order=akash1qqmhmrx9sd75kkfyyg9uncm326l8kclc5tckcu/5545245/1/1
E[2022-11-27|03:16:35.846] bid in unexpected state                      module=bidengine-order cmp=provider bid-state=closed
I[2022-11-27|03:16:35.846] shutting down                                module=bidengine-order cmp=provider order=akash1qk5pw9g348gl76e2m2esuh0j4wnvqjus9ts3c6/2249866/1/1
I[2022-11-27|03:16:35.846] fetching provider auditor attributes         module=bidengine-service cmp=provider auditor=akash1qqzwc5d7hynl67nsmn9jukvwqp3vzdl6j2t7lk provider=akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:35.846] unable to fulfill: incompatible provider attributes module=bidengine-order cmp=provider order=akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/3/1/1
D[2022-11-27|03:16:35.846] declined to bid                              module=bidengine-order cmp=provider order=akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/3/1/1
I[2022-11-27|03:16:35.846] shutting down                                module=bidengine-order cmp=provider order=akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/3/1/1
I[2022-11-27|03:16:35.849] found existing bid                           module=bidengine-order cmp=provider
E[2022-11-27|03:16:35.849] bid in unexpected state                      module=bidengine-order cmp=provider bid-state=closed
I[2022-11-27|03:16:35.849] shutting down                                module=bidengine-order cmp=provider order=akash1qpygf7nphng80k6w3j9ljqy3ksnkk2gdcmwe55/4185046/1/1
I[2022-11-27|03:16:35.851] found existing bid                           module=bidengine-order cmp=provider
E[2022-11-27|03:16:35.851] bid in unexpected state                      module=bidengine-order cmp=provider bid-state=closed
I[2022-11-27|03:16:35.851] shutting down                                module=bidengine-order cmp=provider order=akash1qch2ewwv4mm54ucn9z8y7ym43svg9gzk4g45pk/8274414/1/1
I[2022-11-27|03:16:35.854] group fetched                                module=bidengine-order cmp=provider order=akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/4/1/1
D[2022-11-27|03:16:35.854] unable to fulfill: incompatible provider attributes module=bidengine-order cmp=provider order=akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/4/1/1
D[2022-11-27|03:16:35.854] declined to bid                              module=bidengine-order cmp=provider order=akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/4/1/1
I[2022-11-27|03:16:35.854] shutting down                                module=bidengine-order cmp=provider order=akash1q6y7dnh2mlluh5w8v03asxqthe3m354x289asr/4/1/1
E[2022-11-27|03:16:35.856] teardown lease: unable to delete manifest    cmp=provider client=kube ns=u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92 error="manifests.akash.network \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
E[2022-11-27|03:16:35.856] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast err="namespaces \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
I[2022-11-27|03:16:35.856] found existing bid                           module=bidengine-order cmp=provider
E[2022-11-27|03:16:35.856] bid in unexpected state                      module=bidengine-order cmp=provider bid-state=closed
I[2022-11-27|03:16:35.856] shutting down                                module=bidengine-order cmp=provider order=akash1qtj2uhj324wxg0083j5fkld7nv54dg8auksut6/2252060/1/1
E[2022-11-27|03:16:35.857] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u error="manifests.akash.network \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:35.857] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:35.858] teardown lease: unable to delete manifest    cmp=provider client=kube ns=6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4 error="manifests.akash.network \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:35.858] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:35.859] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq error="manifests.akash.network \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
E[2022-11-27|03:16:35.859] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner err="namespaces \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
I[2022-11-27|03:16:35.859] found existing bid                           module=bidengine-order cmp=provider
E[2022-11-27|03:16:35.859] bid in unexpected state                      module=bidengine-order cmp=provider bid-state=closed
I[2022-11-27|03:16:35.859] shutting down                                module=bidengine-order cmp=provider order=akash1qsrzvymyw06z4egr9w9w84jtxcy0na3rrtexc4/6445173/1/1
D[2022-11-27|03:16:35.861] attribute signature requirements not met     module=bidengine-order cmp=provider order=akash1qqzwc5d7hynl67nsmn9jukvwqp3vzdl6j2t7lk/2528509/1/1
D[2022-11-27|03:16:35.861] declined to bid                              module=bidengine-order cmp=provider order=akash1qqzwc5d7hynl67nsmn9jukvwqp3vzdl6j2t7lk/2528509/1/1
I[2022-11-27|03:16:35.861] shutting down                                module=bidengine-order cmp=provider order=akash1qqzwc5d7hynl67nsmn9jukvwqp3vzdl6j2t7lk/2528509/1/1
E[2022-11-27|03:16:36.660] teardown lease: unable to delete manifest    cmp=provider client=kube ns=u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92 error="manifests.akash.network \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
E[2022-11-27|03:16:36.660] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast err="namespaces \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
E[2022-11-27|03:16:36.660] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u error="manifests.akash.network \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:36.660] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:36.661] teardown lease: unable to delete manifest    cmp=provider client=kube ns=6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4 error="manifests.akash.network \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:36.661] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:36.662] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq error="manifests.akash.network \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
E[2022-11-27|03:16:36.662] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner err="namespaces \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
2022-11-27 03:16:36.780742 I | http: TLS handshake error from 10.0.21.151:44846: EOF
D[2022-11-27|03:16:37.863] lease is out of funds                        module=provider-service cmp=provider cmp=balance-checker lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:37.863] sending withdraw                             module=provider-service cmp=provider cmp=balance-checker lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
E[2022-11-27|03:16:37.874] failed to do lease withdrawal                module=provider-service cmp=provider cmp=balance-checker err="rpc error: code = Unknown desc = rpc error: code = Unknown desc = failed to execute message; message index: 0: payment closed [cosmos/[email protected]/baseapp/baseapp.go:781] With gas wanted: '0' and gas used: '48650' : unknown request" LeaseID=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:38.155] lease is out of funds                        module=provider-service cmp=provider cmp=balance-checker lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:38.155] sending withdraw                             module=provider-service cmp=provider cmp=balance-checker lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
E[2022-11-27|03:16:38.165] failed to do lease withdrawal                module=provider-service cmp=provider cmp=balance-checker err="rpc error: code = Unknown desc = rpc error: code = Unknown desc = failed to execute message; message index: 0: payment closed [cosmos/[email protected]/baseapp/baseapp.go:781] With gas wanted: '0' and gas used: '48638' : unknown request" LeaseID=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
E[2022-11-27|03:16:38.263] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u error="manifests.akash.network \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:38.263] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:38.263] teardown lease: unable to delete manifest    cmp=provider client=kube ns=6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4 error="manifests.akash.network \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:38.263] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:38.263] teardown lease: unable to delete manifest    cmp=provider client=kube ns=u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92 error="manifests.akash.network \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
E[2022-11-27|03:16:38.263] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast err="namespaces \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
E[2022-11-27|03:16:38.265] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq error="manifests.akash.network \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
E[2022-11-27|03:16:38.265] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner err="namespaces \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
D[2022-11-27|03:16:40.078] running check                                module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1xe342jdk25wugvrhlcsl65csr2z48etxp32mpt/8286524/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash cmp=deployment-monitor attempt=1
I[2022-11-27|03:16:40.085] check result                                 module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1xe342jdk25wugvrhlcsl65csr2z48etxp32mpt/8286524/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash cmp=deployment-monitor ok=true attempt=1
D[2022-11-27|03:16:40.085] reservation status update                    module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash1xe342jdk25wugvrhlcsl65csr2z48etxp32mpt/8286524/1/1 resource-group=akash allocated=true
E[2022-11-27|03:16:40.097] node in poor condition                       cmp=provider client=kube node=k3s-wildponyexpress condition=MemoryPressure status=Unknown
E[2022-11-27|03:16:40.097] node in poor condition                       cmp=provider client=kube node=k3s-wildponyexpress condition=DiskPressure status=Unknown
E[2022-11-27|03:16:40.097] node in poor condition                       cmp=provider client=kube node=k3s-wildponyexpress condition=PIDPressure status=Unknown
D[2022-11-27|03:16:40.753] lease is out of funds                        module=provider-service cmp=provider cmp=balance-checker lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
D[2022-11-27|03:16:40.753] sending withdraw                             module=provider-service cmp=provider cmp=balance-checker lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
E[2022-11-27|03:16:40.762] failed to do lease withdrawal                module=provider-service cmp=provider cmp=balance-checker err="rpc error: code = Unknown desc = rpc error: code = Unknown desc = failed to execute message; message index: 0: payment closed [cosmos/[email protected]/baseapp/baseapp.go:781] With gas wanted: '0' and gas used: '48644' : unknown request" LeaseID=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp
E[2022-11-27|03:16:41.268] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq error="manifests.akash.network \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
E[2022-11-27|03:16:41.268] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1kccah0qqt0fsmve74900ha7en44c3qwsv72att/8042485/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=pkt-miner err="namespaces \"jo9uinc9s7q0l8r1ca8d12nolg92m9r7dm4mh7e79v7pq\" not found"
E[2022-11-27|03:16:41.268] teardown lease: unable to delete manifest    cmp=provider client=kube ns=6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4 error="manifests.akash.network \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:41.268] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8476291/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"6r3h1ci17kj2d2n48f4nftkdu62o7ji1nh98d61jek3v4\" not found"
E[2022-11-27|03:16:41.268] teardown lease: unable to delete manifest    cmp=provider client=kube ns=jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u error="manifests.akash.network \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:41.268] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash18u6zew2sg9kfwlk4rurh3m8vxlylp9jmzr25ah/8525559/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=akash err="namespaces \"jeaesjdj77ipj44ujmmgqdv36h37oofbs99v48eb7be3u\" not found"
E[2022-11-27|03:16:41.268] teardown lease: unable to delete manifest    cmp=provider client=kube ns=u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92 error="manifests.akash.network \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"
E[2022-11-27|03:16:41.268] lease teardown failed                        module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1fpaarnccrmrz5vfwqyyjd70x4uq7l5vnasay5r/8381375/1/1/akash19yhu3jgw8h0320av98h8n5qczje3pj3u9u2amp manifest-group=westcoast err="namespaces \"u4fctraths8fvqb2l74h3ercdhvfbktrktsb9i7gabe92\" not found"

remove hard-coded network-specific variables from chart-specific `values.yaml`.

CRD manifest not found with new provider setup

Deployed the latest helm charts to create a new provider. The RPC node and provider start, the provider bids on a workload and receives the manifest, but then hits an error.

The deployment never starts and the lease remains open.



I[2022-11-21|17:56:14.414] requesting reservation                       module=bidengine-order cmp=provider order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1
D[2022-11-21|17:56:14.415] reservation requested                        module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1 resources="group_id:<owner:\"akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v\" dseq:8579389 gseq:1 > state:open group_spec:<name:\"akash\" requirements:<signed_by:<> > resources:<resources:<cpu:<units:<val:\"1000\" > > memory:<quantity:<val:\"536870912\" > > storage:<name:\"default\" quantity:<val:\"536870912\" > > endpoints:<> > count:1 price:<denom:\"uakt\" amount:\"10000000000000000000000\" > > > created_at:8579391 "
D[2022-11-21|17:56:14.415] reservation count                            module=provider-cluster cmp=provider cmp=service cmp=inventory-service cnt=1
I[2022-11-21|17:56:14.415] Reservation fulfilled                        module=bidengine-order cmp=provider order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1
D[2022-11-21|17:56:15.782] submitting fulfillment                       module=bidengine-order cmp=provider order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1 price=58.000000000000000000uakt
I[2022-11-21|17:56:20.556] bid complete                                 module=bidengine-order cmp=provider order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1
D[2022-11-21|17:56:36.656] cluster resources                            module=provider-cluster cmp=provider cmp=service cmp=inventory-service dump="{\"nodes\":[{\"name\":\"den-sm2027tr-h72rf-5-1\",\"allocatable\":{\"cpu\":24000,\"memory\":135043149824,\"storage_ephemeral\":1793383324904},\"available\":{\"cpu\":23700,\"memory\":134741159936,\"storage_ephemeral\":1793383324904}}],\"total_allocatable\":{\"cpu\":24000,\"memory\":135043149824,\"storage_ephemeral\":1793383324904},\"total_available\":{\"cpu\":23700,\"memory\":134741159936,\"storage_ephemeral\":1793383324904}}\n"
2022-11-21 17:56:42.191403 I | http: TLS handshake error from 10.0.0.137:54892: EOF
I[2022-11-21|17:56:45.331] order detected                               module=bidengine-service cmp=provider order=order/akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
I[2022-11-21|17:56:45.335] group fetched                                module=bidengine-order cmp=provider order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
I[2022-11-21|17:56:45.335] fetching provider auditor attributes         module=bidengine-service cmp=provider auditor=akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63 provider=akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0
I[2022-11-21|17:56:45.336] got auditor attributes                       module=bidengine-service cmp=provider auditor=akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63 size=9
I[2022-11-21|17:56:45.336] fetching provider auditor attributes         module=bidengine-service cmp=provider auditor=akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4 provider=akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0
I[2022-11-21|17:56:45.337] requesting reservation                       module=bidengine-order cmp=provider order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
D[2022-11-21|17:56:45.338] reservation requested                        module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1 resources="group_id:<owner:\"akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw\" dseq:8579394 gseq:1 > state:open group_spec:<name:\"akash\" requirements:<signed_by:<any_of:\"akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63\" any_of:\"akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4\" > > resources:<resources:<cpu:<units:<val:\"100\" > > memory:<quantity:<val:\"100663296\" > > storage:<name:\"default\" quantity:<val:\"6291456\" > > endpoints:<> > count:1 price:<denom:\"uakt\" amount:\"10000000000000000000000\" > > > created_at:8579396 "
D[2022-11-21|17:56:45.338] reservation count                            module=provider-cluster cmp=provider cmp=service cmp=inventory-service cnt=2
I[2022-11-21|17:56:45.338] Reservation fulfilled                        module=bidengine-order cmp=provider order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
D[2022-11-21|17:56:46.215] submitting fulfillment                       module=bidengine-order cmp=provider order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1 price=6.000000000000000000uakt
I[2022-11-21|17:56:51.344] bid complete                                 module=bidengine-order cmp=provider order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
D[2022-11-21|17:56:51.487] ignoring group                               module=bidengine-order cmp=provider order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1 group=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1
I[2022-11-21|17:56:51.487] lease won                                    module=bidengine-order cmp=provider order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1 lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0
I[2022-11-21|17:56:51.487] shutting down                                module=bidengine-order cmp=provider order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1
I[2022-11-21|17:56:51.487] lease won                                    module=provider-manifest cmp=provider lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0
I[2022-11-21|17:56:51.487] new lease                                    module=manifest-manager cmp=provider deployment=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389 lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0
D[2022-11-21|17:56:51.487] watchdog start                               module=provider-manifest cmp=provider leaseID=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0
I[2022-11-21|17:56:51.492] data received                                module=manifest-manager cmp=provider deployment=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389 version=8a4849176214875d2d3a7fe27e813ffac8dc00f8efc297aed1c28b4f3d93839f
D[2022-11-21|17:56:51.492] emit received events skipped                 module=manifest-manager cmp=provider deployment=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389 data="{Deployment:{DeploymentID:akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389 State:active Version:[138 72 73 23 98 20 135 93 45 58 127 226 126 129 63 250 200 220 0 248 239 194 151 174 209 194 139 79 61 147 131 159] CreatedAt:8579391} Groups:[{GroupID:akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1 State:open GroupSpec:{Name:akash Requirements:{SignedBy:{AllOf:[] AnyOf:[]} Attributes:[]} Resources:[{Resources:{CPU:units:<val:\"1000\" >  Memory:quantity:<val:\"536870912\" >  Storage:[{Name:default Quantity:{Val:536870912} Attributes:[]}] Endpoints:[{Kind:SHARED_HTTP SequenceNumber:0}]} Count:1 Price:10000.000000000000000000uakt}]} CreatedAt:8579391}] EscrowAccount:{ID:{Scope:deployment XID:akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389} Owner:akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v State:open Balance:5000000.000000000000000000uakt Transferred:0.000000000000000000uakt SettledAt:8579397 Depositor:akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v Funds:0.000000000000000000uakt}}" manifests=0
I[2022-11-21|17:56:59.282] manifest received                            module=manifest-manager cmp=provider deployment=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389
I[2022-11-21|17:56:59.282] watchdog done                                module=provider-manifest cmp=provider lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389
I[2022-11-21|17:56:59.287] data received                                module=manifest-manager cmp=provider deployment=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389 version=8a4849176214875d2d3a7fe27e813ffac8dc00f8efc297aed1c28b4f3d93839f
D[2022-11-21|17:56:59.287] requests valid                               module=manifest-manager cmp=provider deployment=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389 num-requests=1
D[2022-11-21|17:56:59.287] publishing manifest received                 module=manifest-manager cmp=provider deployment=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389 num-leases=1
D[2022-11-21|17:56:59.287] publishing manifest received for lease       module=manifest-manager cmp=provider deployment=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389 lease_id=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0
I[2022-11-21|17:56:59.288] manifest received                            module=provider-cluster cmp=provider cmp=service lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0
I[2022-11-21|17:56:59.296] hostnames withheld                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash cnt=0
W1121 17:56:59.379063    3423 warnings.go:70] unknown field "status"
E[2022-11-21|17:56:59.379] applying manifest                            cmp=provider client=kube err="namespaces \"lease\" not found" lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0
I[2022-11-21|17:56:59.382] declaring hostname                           cmp=provider client=kube lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 service-name=fast external-port=80 host=ho317c95mhd1v1o9jm9fpdao5o.ingress.akash.rocks
W1121 17:56:59.438531    3423 warnings.go:70] unknown field "status"
E[2022-11-21|17:56:59.438] execution error                              module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash state=deploy-active err="namespaces \"lease\" not found"
D[2022-11-21|17:56:59.442] purged ips                                   module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
D[2022-11-21|17:56:59.443] purged hostnames                             module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
E[2022-11-21|17:56:59.447] teardown lease: unable to delete manifest    cmp=provider client=kube ns=vuhjk7lrvdmn2gq39cfdfm9hi87fk0u2sckqcp9gjab04 error="manifests.akash.network \"vuhjk7lrvdmn2gq39cfdfm9hi87fk0u2sckqcp9gjab04\" not found"
D[2022-11-21|17:56:59.447] teardown complete                            module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
D[2022-11-21|17:56:59.447] shutting down                                module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
D[2022-11-21|17:56:59.447] waiting on dm.wg                             module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
I[2022-11-21|17:56:59.447] shutting down unclean, running teardown now  module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
D[2022-11-21|17:56:59.451] purged ips                                   module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
D[2022-11-21|17:56:59.452] purged hostnames                             module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
E[2022-11-21|17:56:59.454] teardown lease: unable to delete manifest    cmp=provider client=kube ns=vuhjk7lrvdmn2gq39cfdfm9hi87fk0u2sckqcp9gjab04 error="manifests.akash.network \"vuhjk7lrvdmn2gq39cfdfm9hi87fk0u2sckqcp9gjab04\" not found"
I[2022-11-21|17:56:59.454] shutdown complete                            module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
D[2022-11-21|17:56:59.454] hostnames released                           module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
D[2022-11-21|17:56:59.455] sending manager into channel                 module=provider-cluster cmp=provider cmp=service cmp=deployment-manager lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0 manifest-group=akash
I[2022-11-21|17:56:59.455] manager done                                 module=provider-cluster cmp=provider cmp=service lease=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1/akash1qhjtxmacslmefm3v4sn5ggq6ed9jn83cy2rjd0
D[2022-11-21|17:56:59.455] unreserving capacity                         module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1
I[2022-11-21|17:56:59.455] attempting to removing reservation           module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1
I[2022-11-21|17:56:59.455] removing reservation                         module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1
I[2022-11-21|17:56:59.455] unreserve capacity complete                  module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash1q0m0kz83qwpuc5ss39y8sf25mq85a43ffjfd3v/8579389/1/1
D[2022-11-21|17:56:59.455] reservation count                            module=provider-cluster cmp=provider cmp=service cmp=inventory-service cnt=1
I[2022-11-21|17:57:04.558] CRD manifest not found                       cmp=provider client=kube lease-ns=vuhjk7lrvdmn2gq39cfdfm9hi87fk0u2sckqcp9gjab04
I[2022-11-21|17:57:21.743] lease lost                                   module=bidengine-order cmp=provider order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1 lease=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1/akash1u5cdg7k3gl43mukca4aeultuz8x2j68mgwn28e
I[2022-11-21|17:57:21.743] shutting down                                module=bidengine-order cmp=provider order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
D[2022-11-21|17:57:21.743] unreserving reservation                      module=bidengine-order cmp=provider order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
D[2022-11-21|17:57:21.743] unreserving capacity                         module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
I[2022-11-21|17:57:21.743] attempting to removing reservation           module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
I[2022-11-21|17:57:21.743] removing reservation                         module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
I[2022-11-21|17:57:21.743] unreserve capacity complete                  module=provider-cluster cmp=provider cmp=service cmp=inventory-service order=akash13k6shlz0m25mpcn3f4kj0lum990yat5nve6fsw/8579394/1/1
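
One way to narrow this down is to confirm that the Akash CRDs and the namespaces the provider expects actually exist on the cluster — the "namespaces \"lease\" not found" and "manifests.akash.network ... not found" errors above point in that direction. A diagnostic sketch (the CRD manifest URL is an assumption based on the provider repository layout, not taken from this issue):

# Check that the Akash CRDs are installed; the provider expects manifests.akash.network
kubectl get crd | grep akash.network

# Check that the namespace the manifest is applied into exists
kubectl get ns lease

# If the CRDs are missing, re-apply the ones shipped with the provider
# (URL/path is an assumption; verify against the provider release in use)
kubectl apply -f https://raw.githubusercontent.com/akash-network/provider/main/pkg/apis/akash.network/crd.yaml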

akash-provider is slowly draining users' funds when it is out of gas

https://www.mintscan.io/akash/account/akash1fjsgc7uy2vy23juehaja3f5wzfqu8wd6rwck7p

https://www.mintscan.io/akash/txs/0D5E387041830524381FCA5D607E6918F66C69CE8FE9A8A7B320C92317182324

User remek is having issues with his Akash provider due to a gas-related problem;
I'm currently trying to figure it out here: https://discord.com/channels/747885925232672829/771909807359262751/950722800086319144

The point is that akash-provider (or rather the script it uses) should retry the akash tx with an exponential back-off delay (10s, 20s, 40s, 80s, ...), probably capped at 8 hours, so it won't keep draining uakt. A minimal sketch of such a retry loop is shown below.
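
A minimal sketch of that back-off as a shell wrapper; the wrapped transaction command is passed in as arguments and is a placeholder here, not the actual broadcast call used by the provider script:

#!/bin/bash
# retry-tx.sh — retry a command with exponential back-off, capped at 8 hours.
# Usage (hypothetical): ./retry-tx.sh provider-services tx ...

DELAY=10          # initial delay in seconds
MAX_DELAY=28800   # cap at 8 hours

until "$@"; do
  echo "tx failed, retrying in ${DELAY}s" >&2
  sleep "$DELAY"
  DELAY=$(( DELAY * 2 ))
  [ "$DELAY" -gt "$MAX_DELAY" ] && DELAY="$MAX_DELAY"
done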

addressing log file growth and performance impact in `akash-provider` pods

Issue

The akash-provider has been running for nearly a week on a sizable provider (approximately 100 vCPUs), generating a log file (provider.log - inside the pod) that has grown to 1.7GiB in size. This poses a risk of exhausting the available space on nodefs, which commonly shares the root filesystem (/).

Additionally, as the log file expands, the livenessProbe - which uses grep to scan /var/log/provider.log - takes increasingly longer to execute, especially on systems with slower disk speeds.

Rationale

Implementing a log rotation mechanism for provider.log presents several challenges. Achieving atomic log rotation without disrupting the running service or incurring data loss is a complex task, particularly when using a shell script. On the other hand, implementing log rotation at the container level might be redundant. This is because the container's logs (captured from stdout and stderr) are stored in the container's logPath, located at /var/log/pods/<namespace>_<pod-name>_<pod-uuid>/<container-name>/. These logs are ephemeral and only exist for the duration of the pod's lifecycle. Therefore, if long-term log retention is needed, an external logging solution would be the more appropriate approach.
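
For reference, those node-level container logs can be inspected directly on the host or via kubectl (the exact pod and container names below are assumptions):

# On the node: kubelet-managed container logs for the provider pod
ls /var/log/pods/akash-services_akash-provider-*/

# Or via kubectl, which reads the same files
kubectl -n akash-services logs -l app=akash-provider --tail=100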

Potential solutions

A. Implement Space Limits for the akash-provider Pod:
Pros: The akash-provider pod will automatically restart when the log file reaches the set ephemeral storage limit.
Cons: Brief service disruption expected, typically lasting less than a minute.

B. Modify Script for Conditional Log Truncation
Pros: No disruption to the akash-provider service.
Cons: None.

B would be the preferable solution based on the rationale presented above.

B - solution PoC

The PoC updates the run.sh script as follows (yet to be tested):

#!/bin/bash

# Define log file
LOG_FILE="/var/log/provider.log"

# Function to truncate the log when it reaches a certain size (100 MiB)
rotate_logs() {
  # 104857600 bytes = 100 MiB
  MAX_LOG_SIZE=104857600

  while true; do
    sleep 300  # Check every 5 minutes

    # Skip the check until the log file has been created by the tee below
    [ -f "$LOG_FILE" ] || continue
    LOG_SIZE=$(stat -c %s "$LOG_FILE")

    if [ "$LOG_SIZE" -ge "$MAX_LOG_SIZE" ]; then
      # Truncate the log file to zero size (or rotate as needed)
      truncate -s 0 "$LOG_FILE"
    fi
  done
}

# Start the log rotation function in the background
rotate_logs &

# livenessProbe is going to check the provider log for errors
exec &> >(tee -a "$LOG_FILE")

# Install apps required by the bid price script
apt -qq update && DEBIAN_FRONTEND=noninteractive apt -qq -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" --no-install-recommends install curl jq bc mawk ca-certificates

# fail fast should there be a problem installing curl / jq packages
type curl || exit 1
type jq || exit 1
type awk || exit 1
type bc || exit 1

exec provider-services run
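
If truncation alone isn't enough, the livenessProbe cost can also be bounded by grepping only the tail of the log instead of the whole file; a sketch (the 10 MiB window and the error pattern are assumptions based on the existing probe):

#!/bin/bash
# Liveness check sketch: scan only the last 10 MiB of the provider log.
LOG_FILE="/var/log/provider.log"
if tail -c 10485760 "$LOG_FILE" | grep -Eq 'account sequence mismatch'; then
  exit 1   # error found -> probe fails, pod gets restarted
fi
exit 0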

1.7GiB log file evidence

pod[akash-provider]:/# stat /proc/1
  File: /proc/1
  Size: 0         	Blocks: 0          IO Block: 1024   directory
Device: 30005fh/3145823d	Inode: 52354372    Links: 9
Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2023-08-21 01:14:58.501505888 +0000
Modify: 2023-08-21 01:14:58.501505888 +0000
Change: 2023-08-21 01:14:58.501505888 +0000
 Birth: -

pod[akash-provider]:/# stat /var/log/provider.log 
  File: /var/log/provider.log
  Size: 1732027990	Blocks: 3382880    IO Block: 4096   regular file
Device: 300054h/3145812d	Inode: 16945398    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2023-08-27 21:33:34.259246170 +0000
Modify: 2023-08-27 21:33:38.275187578 +0000
Change: 2023-08-27 21:33:38.275187578 +0000
 Birth: 2023-08-21 01:14:58.513505726 +0000

pod[akash-provider]:/# ls -lah /var/log/provider.log
-rw-r--r-- 1 root root 1.7G Aug 27 21:33 /var/log/provider.log

pod[akash-provider]:/# time grep -Eq 'account sequence mismatch' /var/log/provider.log

real	0m0.509s
user	0m0.369s
sys	0m0.141s

operator-ip: recover from failure until akash-network/support#105 is solved

The provider pod won't respond over the 8443 (/status) and 8444 (gRPC) status endpoints if operator-ip is used and the latter failed to initialize. This typically happens on the first worker node re/boot: the network hasn't fully initialized yet, so the operator can't reach DNS (169.254.25.10:53: server misbehaving):

Operator-ip Logs:

$ kubectl -n akash-services logs deployment/operator-ip
I[2024-04-20|08:48:29.791] using in cluster kube config                 cmp=provider
D[2024-04-20|08:48:41.089] service being found via autodetection        operator=ip service=metal-lb
I[2024-04-20|08:48:41.093] clients                                      operator=ip kube="kube client 0xc001168000 ns=lease" metallb="metal LB client 0xc001168060"
I[2024-04-20|08:48:41.093] HTTP listening                               operator=ip address=:8080
I[2024-04-20|08:48:41.093] ip operator start                            operator=ip
I[2024-04-20|08:48:41.093] associated provider                          operator=ip addr=akash15tl6v6gd0nte0syyxnv57zmmspgju4c3xfmdhk
I[2024-04-20|08:48:41.093] fetching existing IP passthroughs            operator=ip
E[2024-04-20|08:48:41.105] dns discovery failed                         operator=ip cmp=service-discovery-agent error="lookup _monitoring._TCP.controller.metallb-system.svc.cluster.local on 169.254.25.10:53: server misbehaving" portName=monitoring service-name=controller namespace=metallb-system
D[2024-04-20|08:48:41.105] satisfying pending requests                  operator=ip cmp=service-discovery-agent qty=1
E[2024-04-20|08:48:41.105] observation stopped                          operator=ip err="lookup _monitoring._TCP.controller.metallb-system.svc.cluster.local on 169.254.25.10:53: server misbehaving"
I[2024-04-20|08:48:41.105] delaying                                     operator=ip
I[2024-04-20|08:48:44.107] associated provider                          operator=ip addr=akash15tl6v6gd0nte0syyxnv57zmmspgju4c3xfmdhk
I[2024-04-20|08:48:44.107] fetching existing IP passthroughs            operator=ip
E[2024-04-20|08:49:44.118] observation stopped                          operator=ip err="context deadline exceeded"
I[2024-04-20|08:49:44.119] associated provider                          operator=ip addr=akash15tl6v6gd0nte0syyxnv57zmmspgju4c3xfmdhk
I[2024-04-20|08:49:44.119] fetching existing IP passthroughs            operator=ip
E[2024-04-20|08:50:44.126] observation stopped                          operator=ip err="context deadline exceeded"
I[2024-04-20|08:50:44.126] associated provider                          operator=ip addr=akash15tl6v6gd0nte0syyxnv57zmmspgju4c3xfmdhk
I[2024-04-20|08:50:44.126] fetching existing IP passthroughs            operator=ip
E[2024-04-20|08:51:12.567] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:14.567] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:16.568] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:18.569] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:20.570] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:22.572] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:24.572] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:26.574] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:28.576] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:30.576] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:32.577] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:34.578] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:36.580] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:38.580] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:40.581] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:42.582] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:44.133] observation stopped                          operator=ip err="context deadline exceeded"
I[2024-04-20|08:51:44.133] associated provider                          operator=ip addr=akash15tl6v6gd0nte0syyxnv57zmmspgju4c3xfmdhk
I[2024-04-20|08:51:44.133] fetching existing IP passthroughs            operator=ip
E[2024-04-20|08:51:44.584] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:46.585] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:48.585] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:50.587] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:52.588] barrier is locked, can't service request     operator=ip path=/health
E[2024-04-20|08:51:54.589] barrier is locked, can't service request     operator=ip path=/health

Provider logs:

$ kubectl -n akash-services logs akash-provider-0 --tail=100 -f
...
I[2024-04-20|08:51:13.666] found orders                                 module=bidengine-service cmp=provider count=28
I[2024-04-20|08:51:13.667] fetched provider attributes                  module=bidengine-service cmp=provider provider=akash15tl6v6gd0nte0syyxnv57zmmspgju4c3xfmdhk
I[2024-04-20|08:51:13.669] grpc listening on "0.0.0.0:8444"             cmp=provider
I[2024-04-20|08:51:14.567] check result                                 cmp=provider operator=ip status=503
E[2024-04-20|08:51:14.567] not yet ready                                cmp=provider cmp=waiter waitable="<*ip.client 0xc00167d6c0>" error="ip operator is not yet alive"

Fix:

$ kubectl -n akash-services rollout restart deployment/operator-ip
deployment.apps/operator-ip restarted

We probably need a workaround that restarts the operator-ip pod when it finds the "barrier is locked, can't service request" message in its logs, similarly to how it's done in the akash-provider pod. However, in some cases this message might indicate a real error, and we don't want operator-ip restarting too often, so an exponential back-off mechanism is worth investigating; a sketch is given below.

OR, ideally, if operator-ip exposes a healthz-style endpoint, that could be queried instead.
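
A possible stopgap for the first option is a small watchdog that restarts the operator-ip deployment when the message keeps appearing, with an exponential back-off between restarts so a genuine error doesn't turn into a restart loop. A sketch (thresholds are assumptions):

#!/bin/bash
# Watchdog sketch: restart operator-ip while the barrier stays locked.
DELAY=60          # first back-off: 1 minute
MAX_DELAY=3600    # cap at 1 hour

while true; do
  if kubectl -n akash-services logs deployment/operator-ip --since=5m 2>/dev/null \
       | grep -q "barrier is locked, can't service request"; then
    kubectl -n akash-services rollout restart deployment/operator-ip
    sleep "$DELAY"
    DELAY=$(( DELAY * 2 )); [ "$DELAY" -gt "$MAX_DELAY" ] && DELAY="$MAX_DELAY"
  else
    DELAY=60      # reset the back-off once the operator looks healthy
    sleep 60
  fi
done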

Refs.
akash-network/support#105

akash-ingress: make akash-node ingress hosts and configmap (port mapping) optional

We get akash-node-1 (as well as akash-node-[234]) configured by default even when we aren't using it, e.g. when installing with helm <install/upgrade> ... --set node="http://rpc.edgenet-1.ewr1.aksh.pw:26657" (pointing the provider at an external RPC node).

helm-charts$ git grep akash-node-1 charts/akash-ingress
charts/akash-ingress/templates/configmap-tcp.yaml:  1317: "akash-services/akash-node-1:1317"
charts/akash-ingress/templates/configmap-tcp.yaml:  9090: "akash-services/akash-node-1:9090"
charts/akash-ingress/templates/configmap-tcp.yaml:  26656: "akash-services/akash-node-1:26656"
charts/akash-ingress/templates/configmap-tcp.yaml:  26657: "akash-services/akash-node-1:26657"
charts/akash-ingress/templates/ingress-node-api.yaml:                name: akash-node-1
charts/akash-ingress/templates/ingress-node-grpc.yaml:                name: akash-node-1
charts/akash-ingress/templates/ingress-node-p2p.yaml:                name: akash-node-1
charts/akash-ingress/templates/ingress-node-rpc.yaml:                name: akash-node-1

This leads to these errors:

root@vmi791458:~# kubectl -n akash-services describe ing | grep akash-node-1
                         /   akash-node-1:9090 (<error: endpoints "akash-node-1" not found>)
                        /   akash-node-1:1317 (<error: endpoints "akash-node-1" not found>)
                        /   akash-node-1:26656 (<error: endpoints "akash-node-1" not found>)
                        /   akash-node-1:26657 (<error: endpoints "akash-node-1" not found>)
root@vmi791458:~# kubectl -n ingress-nginx logs $(kubectl -n ingress-nginx get pods -l app.kubernetes.io/component=controller --output jsonpath='{.items[0].metadata.name}') --tail=150 | grep akash-node-1
...
...
W0310 21:24:25.556084       8 controller.go:359] Error getting Service "akash-services/akash-node-1": no object matching key "akash-services/akash-node-1" in local store
W0310 21:24:25.556112       8 controller.go:359] Error getting Service "akash-services/akash-node-1": no object matching key "akash-services/akash-node-1" in local store
W0310 21:24:25.556120       8 controller.go:359] Error getting Service "akash-services/akash-node-1": no object matching key "akash-services/akash-node-1" in local store
W0310 21:24:25.556139       8 controller.go:359] Error getting Service "akash-services/akash-node-1": no object matching key "akash-services/akash-node-1" in local store

KUBE_CONFIG needs to go

Looks like KUBE_CONFIG needs to go.

It appears to be neither valid nor required, since we set AKASH_CLUSTER_K8S=true.


akash-provider Helm-Charts

helm-charts$ git grep -A1 KUBE_CONFIG
charts/akash-provider/templates/deployment.yaml:            - name: KUBE_CONFIG
charts/akash-provider/templates/deployment.yaml-              value: "{{ .Values.home }}/.kube/config"

akash-provider Pod:

$ kubectl -n akash-services describe pod $(kubectl -n akash-services get pods -l app=akash-provider --output jsonpath='{.items[0].metadata.name}') | grep -i config
      KUBE_CONFIG:                            /root/.akash/.kube/config
$ kubectl -n akash-services exec -ti $(kubectl -n akash-services get pods -l app=akash-provider --output jsonpath='{.items[0].metadata.name}') -- bash
root@akash-provider-855f68b48f-rl7xj:/# ps -ef|grep provider
root        2986       1 20 Jul20 ?        1-11:05:05 /bin/akash provider run --cluster-k8s

root@akash-provider-855f68b48f-rl7xj:/# cat /proc/2986/environ | xargs -0 -n1 |grep -i kube_config
KUBE_CONFIG=/root/.akash/.kube/config

root@akash-provider-855f68b48f-rl7xj:/# ls -la /root/.akash/.kube/config /root/.akash/.kube 
ls: cannot access '/root/.akash/.kube/config': No such file or directory
ls: cannot access '/root/.akash/.kube': No such file or directory

root@akash-provider-855f68b48f-rl7xj:/# find / -xdev |grep -i kube\/config 
root@akash-provider-855f68b48f-rl7xj:/# 
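
A quick way to test that hypothesis before touching the chart is to drop the variable from the live deployment and confirm the provider keeps running on the in-cluster config (the deployment name is assumed from the chart above):

# Remove the KUBE_CONFIG env var from the running deployment (the trailing "-" unsets it)
kubectl -n akash-services set env deployment/akash-provider KUBE_CONFIG-

# Confirm the provider still runs with --cluster-k8s / AKASH_CLUSTER_K8S=true
kubectl -n akash-services logs -l app=akash-provider --tail=20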
