
Comments (4)

dalehamel commented on August 17, 2024

Had a chat with @sirupsen and I think that the approach we've settled on boils down to:

Have a new semaphore, based on the maximum number of tickets, that tracks the tickets per worker (it needs to be unique per process or thread, probably keyed on the parent PID or pid_threadid).

The difference between this and the configured global maximum is the number of tickets currently available for that resource.

Based on updates to this semaphore, we can dynamically adjust the number of available RPC tickets as some fraction of it. If the floor of that fraction differs from the previous floor, do a thread-safe update of the ticket count.

from semian.
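A minimal Ruby sketch of the floor comparison described above (the name `tickets_for` and the quota value are illustrative, not Semian's API):

```ruby
# Derive the ticket count as a fraction ("quota") of the number of
# registered workers, and only touch the semaphore when the floor changes.
def tickets_for(worker_count, quota)
  (worker_count * quota).floor
end

old_tickets = tickets_for(35, 0.5)  # 17
new_tickets = tickets_for(36, 0.5)  # a worker registered: 18
# The (comparatively expensive) thread-safe semaphore update is only
# needed when the floor actually moved:
puts "update ticket count" if new_tickets != old_tickets
```

Because the fraction is floored, small fluctuations in the worker count often leave the ticket count unchanged, so most registrations are free.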

sirupsen commented on August 17, 2024

@csfrancis @casperisfine any comments on this approach? Namely, the newest comment from Dale.


dalehamel commented on August 17, 2024

@csfrancis asked me:

in k8s are sysv semaphores shared across the entire physical host? that seems strange

and this brings up a point of clarification: the reason any of this is necessary is that we have to use hostIPC for logging. Because we log to a SysV message queue, we are forced into using the host IPC namespace.


sirupsen commented on August 17, 2024

K, @csfrancis and I had a short conversation and he's on board with the solution of dynamically adjusting ticket counts. Some excellent points Scott made:

  • We should call the configuration option quota, not tickets, to avoid confusing the two. An ArgumentError should be raised if both are set.
  • When workers unregister themselves (they're killed or stop, and SEM_UNDO does its thing), something needs to adjust the ticket count. We can cache the worker count in a semaphore in the resource's semaphore set; on #acquire, if it differs from the live count, we call update_ticket_count. For this reason it seems better to do this at #acquire time rather than at #register time.
  • A problem with the alternative approach of Semian["#{Process.ppid}_#{Thread.id}_#{resource_name}"] that Scott pointed out is that you'll basically have to GC it. By default, the limit on semaphore sets is 32,000 on Linux. We have about 100 resources at Shopify. If we run, say, 10 pods per host, it'll take 32,000 / (100 * 10) = 32 deploys before we exhaust this space.
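Scott's back-of-the-envelope arithmetic for the last point can be sketched as follows (the resource and pod counts are the rough figures quoted above):

```ruby
# Under the per-deploy naming scheme, each (resource, pod) pair leaks one
# SysV semaphore set per deploy, and Linux caps the total number of sets
# (SEMMNI, 32,000 by default).
semmni        = 32_000
resources     = 100   # approximate number of Semian-protected resources
pods_per_host = 10

deploys_until_exhaustion = semmni / (resources * pods_per_host)
puts deploys_until_exhaustion # => 32
```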

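The acquire-time resync from Scott's second point might look roughly like this (the class and method names are hypothetical, not Semian's implementation):

```ruby
# Cache the worker count; on acquire, if the live count has drifted
# (e.g. workers died and SEM_UNDO released their registrations), resync
# the ticket count to floor(workers * quota) before taking a ticket.
class QuotaResource
  attr_reader :tickets

  def initialize(quota)
    @quota = quota
    @cached_workers = 0
    @tickets = 0
  end

  def acquire(live_worker_count)
    if live_worker_count != @cached_workers
      @cached_workers = live_worker_count
      update_ticket_count((live_worker_count * @quota).floor)
    end
    # ... then take a ticket as usual ...
  end

  private

  def update_ticket_count(count)
    @tickets = count # stand-in for the thread-safe semaphore update
  end
end

resource = QuotaResource.new(0.5)
resource.acquire(40)
puts resource.tickets # => 20
```

Doing the check at #acquire rather than #register means a dying worker's tickets are reclaimed the next time any surviving worker acquires, without needing a separate cleanup path.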
