Comments (32)
This is discouraged, and it only adds to the confusion. Users don't look at CRD definitions.
OK, I don't want to delay things. I am fine with keeping the x-k8s.io domain just because it is used by other subprojects (it doesn't make it less weird, though), and revisiting when we go to Beta.
from kueue.
/kind feature
/priority backlog
Using sigs.k8s.io as the domain isn't necessary and doesn't add value. If this ever moves to core k8s, it would likely be under batch.k8s.io anyway.
For the time being, and to avoid the awkward x-, it seems reasonable to use the shorter kueue.sh domain.
I think the value is the direct association with the Kubernetes project. I prefer having it in the API name. But what do others think?
Anyway, it will be moved to core k8s in the future, so why not use batch.k8s.io directly? That would save users the work of changing API groups later.
Assuming that you don't mean batch, which would require the API to be part of k/k.
Any k8s.io domain name requires an API review. This will take a week or two. Should we do it, or just release the alpha APIs as-is? I think I prefer to release them sooner for more chances to get real-world feedback. Then we can go for an API review when we aim for beta.
I think the value is the direct association with the Kubernetes project. I prefer having it in the API name.
The x- makes it look weird.
Any k8s.io domain name requires an API review. This will take a week or two.
If that is a one-time thing, then we can pursue it; but I am not sure we want it if it requires a review for every change we make.
It probably does. But maybe we can just rename to batch.k8s.io and mark it as unapproved for now: https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/2337-k8s.io-group-protection#proposal
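For context, that group-protection mechanism works through an annotation on the CRD itself. A minimal sketch, assuming a hypothetical ClusterQueue CRD renamed into batch.k8s.io (the api-approved.kubernetes.io annotation key comes from the KEP linked above; everything else here is illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Hypothetical name after a rename from kueue.x-k8s.io to batch.k8s.io.
  name: clusterqueues.batch.k8s.io
  annotations:
    # Required for any *.k8s.io group; a value starting with "unapproved"
    # marks the API as not yet having passed Kubernetes API review.
    api-approved.kubernetes.io: "unapproved, experimental alpha API"
spec:
  group: batch.k8s.io
  scope: Cluster
  names:
    kind: ClusterQueue
    listKind: ClusterQueueList
    plural: clusterqueues
    singular: clusterqueue
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```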
+1. We can mark it unapproved for the alpha version.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle stale
maybe for 0.3.0 :)
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@alculquicondor: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
/lifecycle frozen
we need to get back to this when targeting a v1beta1 API.
@alculquicondor: Reopened this issue.
In response to this:
/reopen
/lifecycle frozen
we need to get back to this when targeting a v1beta1 API.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'm starting a list of potential changes to the API. See Issue description.
cc @kerthcet for feedback (and anyone already in the thread, of course)
Move admission from Workload spec into status (from "Enforce timeout for podsReady" #498)
I'm OK with this; admission represents the actual state of a workload.
Rename min, max into something easier to understand.
Before, we named them requests/limits? Anyway, I think reading the documentation is always needed, whatever the names. So I'm fine with min/max; another pair could be guarantee/capacity?
Support queue name as a label, in addition to annotation (makes it easier to filter workloads by queue).
Can you provide some more context on why we need this? It sounds like looking up workloads via a label selector.
Before, we named them requests/limits?
No, that would have been very confusing because they already mean something in the pod spec. I'll start a separate doc to discuss options.
Can you provide some more context on why we need this? It sounds like looking up workloads via a label selector.
It's actually about filtering Jobs using the queue name.
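To illustrate the motivation: label selectors are evaluated server-side, while annotations cannot be used in selectors at all, so with the queue name only as an annotation there is no direct way to filter. A sketch, assuming a hypothetical kueue.x-k8s.io/queue-name label key:

```shell
# With the queue name as a label, Jobs can be filtered server-side:
kubectl get jobs -l kueue.x-k8s.io/queue-name=team-a-queue

# With only an annotation, filtering has to happen client-side, e.g. with jq:
kubectl get jobs -o json |
  jq -r '.items[]
         | select(.metadata.annotations["kueue.x-k8s.io/queue-name"] == "team-a-queue")
         | .metadata.name'
```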
No, that would have been very confusing because they already mean something in the pod spec. I'll start a separate doc to discuss options.
When I joined the kueue project, one of the most confusing parts of the kueue specification was the relationship between min/max and cohort.
IMO, guarantee/capacity looks better than min/max.
It's actually about filtering Jobs using the queue name.
It made me think of adding queueName to the Job's spec (kubernetes/enhancements#3397); we could then filter Jobs with a field selector.
Yes, that would be ideal, but that KEP got significant pushback, so I don't see it happening anytime soon.
We might also need to add ObjectMeta to each PodSet template. The cluster autoscaler needs the metadata to properly scale up.
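For illustration, adding ObjectMeta to each PodSet template could look like this in a Workload manifest (a sketch only; the label shown and the exact field layout are illustrative, not a final design):

```yaml
apiVersion: kueue.x-k8s.io/v1alpha1
kind: Workload
metadata:
  name: sample-workload
spec:
  podSets:
    - name: main
      count: 4
      template:
        # Hypothetical: ObjectMeta on the template, so the cluster autoscaler
        # can read labels/annotations when scaling up in advance.
        metadata:
          labels:
            topology.kubernetes.io/zone: us-central1-a
        spec:
          containers:
            - name: worker
              image: registry.example/worker:latest
```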
Is this for scaling up in advance? Or does the autoscaler only watch unschedulable pods, which already contain the metadata?
Yes, to scale up in advance.
I've created a summary doc with the proposed changes as we graduate to beta (also available in the issue description): https://docs.google.com/document/d/1Uu4hfGxux4Wh_laqZMLxXdEVdty06Sb2DwB035hj700/edit?usp=sharing&resourcekey=0-b7mU7mGPCkEfhjyYDsXOBg
Some of the enhancements come from UX study sessions that we have conducted, see notes here: https://docs.google.com/document/d/1xbN46OLuhsXXHeqZrKrl9I57kpFQ2yqiYdOx0sHZC4Q/edit?usp=sharing
I have a WIP in #532
/assign @alculquicondor