
Comments (12)

Jeffwan avatar Jeffwan commented on May 18, 2024 1

@ericl We did some analysis and noticed it's quite hard to start the monitor while keeping it in exactly the same pattern as in ray/core. I do think we need some changes to provide a smooth and pluggable experience. Let us add more details in the issue, and then we can have the discussion.

from kuberay.

pcmoritz avatar pcmoritz commented on May 18, 2024 1

I wrote a design doc fleshing out the above proposals a bit more:

https://docs.google.com/document/d/1I2CYu2-hTQUJ29wPonMvCZgEiRPs1-KeqT1mzrC6LXY

Please let us know about the direction and any suggestions or improvements you might have :)


ericl avatar ericl commented on May 18, 2024

It would be great to see support for in-tree autoscaling! Are there any API changes to the in-tree autoscaler or proto APIs that might make this easier to implement / maintain?

(I'm happy to work together on this issue)


ericl avatar ericl commented on May 18, 2024

Cc @DmitriGekhtman, who maintains the in-tree operator.


DmitriGekhtman avatar DmitriGekhtman commented on May 18, 2024

@Jeffwan could you say more about why having the autoscaler run in the head pod is preferable for the use-cases you are considering?

If I understand right, you'd also prefer the autoscaler to directly interact with K8s api server, rather than acting on a custom resource and delegating pod management to the operator.

Just curious if there are particular reasons this way of doing things works best for you, besides the fact that the Ray autoscaler is currently set up to favor this deployment strategy.


DmitriGekhtman avatar DmitriGekhtman commented on May 18, 2024

I guess "in-tree autoscaler" mostly means "monitor.py" from the main Ray project.
One way to make it work is to write a NodeProvider implementation whose "create node" and "terminate node" methods act on the scale fields of the RayCluster CR.
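A minimal sketch of that idea follows. This is not KubeRay's or Ray's actual implementation; the class and field names (e.g. `RayClusterCRNodeProvider`, `workerGroupSpecs`) are illustrative, and the real Ray `NodeProvider` interface lives in `ray.autoscaler.node_provider` with a richer signature.

```python
class RayClusterCRNodeProvider:
    """Sketch of a NodeProvider-style adapter: 'creating' or 'terminating'
    a node only edits the scale fields of an (in-memory) RayCluster CR;
    the operator remains the sole owner of the actual pods."""

    def __init__(self, cluster_cr: dict):
        # A real implementation would patch the CR through the K8s API
        # server; here we mutate a dict so the sketch is self-contained.
        self.cr = cluster_cr

    def _group(self, group_name: str) -> dict:
        for group in self.cr["spec"]["workerGroupSpecs"]:
            if group["groupName"] == group_name:
                return group
        raise KeyError(f"no worker group named {group_name!r}")

    def create_node(self, group_name: str, count: int = 1) -> None:
        # Scale up: bump replicas; the operator reconciles pods to match.
        self._group(group_name)["replicas"] += count

    def terminate_node(self, group_name: str, count: int = 1) -> None:
        # Scale down, never below the group's declared minimum.
        group = self._group(group_name)
        group["replicas"] = max(group["replicas"] - count,
                                group.get("minReplicas", 0))
```

The point of the design is that the autoscaler never touches pods directly; it only edits the CR's scale fields, and pod creation/deletion stays with the operator's reconcile loop.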


Jeffwan avatar Jeffwan commented on May 18, 2024

@Jeffwan could you say more about why having the autoscaler run in the head pod is preferable for the use-cases you are considering?

@DmitriGekhtman I missed your last comment. Scoping the autoscaler at the cluster level matches our expectation. Since the autoscaler may have different policies etc. in the future, this gives us enough flexibility to customize the autoscaler per cluster and per Ray version. (We are not end users, and version upgrades take time; it's common to have multiple versions running at the same time in the cluster.)

If I understand right, you'd also prefer the autoscaler to directly interact with K8s api server, rather than acting on a custom resource and delegating pod management to the operator.

I actually prefer to have the autoscaler update the Kubernetes CRD, so there's always a single owner of the pods and the responsibility is clear.


Jeffwan avatar Jeffwan commented on May 18, 2024

I guess "in-tree autoscaler" mostly means "monitor.py" from the main Ray project.
One way to make it work is to write a NodeProvider implementation whose "create node" and "terminate node" methods act on the scale fields of the RayCluster CR.

That's correct. We did a POC like the one below to verify the functionality, but feel there are some upstream changes to make. Currently, we are not yet using autoscaling in our environments.

  1. CRD -> a config file the autoscaler can recognize.
  2. The operator converts the CRD to that config, creates a ConfigMap, and mounts it into the head node.
  3. The head node starts the monitoring process, which reads the config.
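Step 1 above is essentially a translation function. A rough sketch of what it might look like, with simplified, illustrative field names on both sides (not the exact KubeRay or Ray schemas):

```python
def cr_to_autoscaler_config(cr: dict) -> dict:
    # Translate a RayCluster CR into a flat config dict of the kind the
    # monitor process could consume. Keys such as "available_node_types"
    # are modeled loosely on the Ray autoscaler config; treat them as
    # placeholders rather than the real schema.
    config = {
        "cluster_name": cr["metadata"]["name"],
        "max_workers": 0,
        "available_node_types": {},
    }
    for group in cr["spec"].get("workerGroupSpecs", []):
        max_workers = group.get("maxReplicas", group["replicas"])
        config["available_node_types"][group["groupName"]] = {
            "min_workers": group.get("minReplicas", 0),
            "max_workers": max_workers,
        }
        config["max_workers"] += max_workers
    return config
```

In the POC flow, the operator would serialize the returned dict (e.g. as YAML) into the ConfigMap that gets mounted into the head node.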


DmitriGekhtman avatar DmitriGekhtman commented on May 18, 2024

All of this makes sense.
I think it might be advantageous to deploy the autoscaler as a separate deployment (scoped to a single Ray cluster). That gives more flexibility. Also, it's better for resource management -- we've observed the autoscaler using up a lot of memory under certain conditions.

Mounting a config map works. Another option is to have the autoscaler read the custom resource and do the translation to a suitable format itself, once per autoscaler iteration. This has the advantage that changes to the CR propagate faster to the autoscaler -- mounted config maps take a while to update.
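The read-the-CR-directly alternative can be sketched as a polling loop; `fetch_cr` and `reconcile` here are hypothetical injected callables (in practice a GET against the K8s API server and one autoscaler step, respectively), not real Ray or KubeRay functions:

```python
import time

def run_autoscaler_loop(fetch_cr, reconcile, iterations: int,
                        interval_s: float = 0.0) -> None:
    # Re-read the custom resource at the start of every iteration, so
    # spec changes propagate in one loop period instead of waiting for
    # a mounted ConfigMap volume to refresh.
    for _ in range(iterations):
        reconcile(fetch_cr())
        time.sleep(interval_s)
```

Bounding the loop by `iterations` keeps the sketch testable; a real autoscaler would run it indefinitely.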


Jeffwan avatar Jeffwan commented on May 18, 2024

ray-project/ray#21086
ray-project/ray#22348

Ray upstream already has the support. With the current implementation, the kuberay operator's work becomes easier: the operator should take action on this field to orchestrate the autoscaler. The entire process should be transparent to users.

// EnableInTreeAutoscaling indicates whether operator should create in tree autoscaling configs
EnableInTreeAutoscaling *bool `json:"enableInTreeAutoscaling,omitempty"`

Still, version management is tricky. We should not support the autoscaler for earlier Ray versions.


DmitriGekhtman avatar DmitriGekhtman commented on May 18, 2024

Yep, I agree that we don't need to support the Ray autoscaler with earlier Ray versions.


Jeffwan avatar Jeffwan commented on May 18, 2024

Major implementation is done. Let's create separate issues to track future improvements.

