So the question is: why not cut a release branch every two years, and at
the same time keep unstable/testing alive?
https://wiki.debian.org/ReleaseProposals
It also aligns the incentives for enough people to make sure that we can
successfully make a release in a finite time - even developers who don't
really care about releases and just want the latest versions are
incentivized to fix enough things to make the next release happen, so
that the freeze will end and they can get back to uploading the latest
versions to unstable.
However, the problem with freezing testing but not freezing unstable is
that if you do that, all updates to testing during the freeze (to fix
the release-critical bugs that stop it from already being ready for
release) have to go into testing via testing-proposed-updates, which
approximately nobody uses.

Having code changes for our next stable release be essentially untested
is not great from a QA perspective - if nobody is trying out those new
versions except for their maintainer, then nobody can find and report
the (potentially serious) bugs that only happen in system configurations
that differ from the maintainer's system. That's why the release team
strongly discourages packages going into testing via
testing-proposed-updates, and encourages packages going into testing via
unstable.
Hi Simon,
For me, the long freezes are very problematic. They may span 6 months,
which is how long it takes for a new OpenStack release to show up, and
then I don't know where to upload it... :/
As a result, the Wallaby release of OpenStack (released last spring)
never fully migrated to testing, for example, because I had already
uploaded Xena (released last October).
Anyways, here's my reply inline below...
On 10/18/21 6:54 PM, Simon McVittie wrote:
> It also aligns the incentives for enough people to make sure that we
> can successfully make a release in a finite time - even developers who
> don't really care about releases and just want the latest versions
> are incentivized to fix enough things to make the next release happen,
> so that the freeze will end and they can get back to uploading the
> latest versions to unstable.
I don't see how using testing-proposed-updates instead of unstable would
suddenly demotivate everyone who cares about the next stable release.
Could you explain?
On 10/18/21 6:54 PM, Simon McVittie wrote:
> However, the problem with freezing testing but not freezing unstable is
> that if you do that, all updates to testing during the freeze (to fix the
> release-critical bugs that stop it from already being ready for release)
> have to go into testing via testing-proposed-updates, which approximately
> nobody uses.
We don't use it, because we're told to use unstable...
If we were told that it's ok to upload changes to unstable during the
freeze, and upload to testing-proposed-updates, we'd do it (and IMO,
it'd be a very good move from the release team).
> Having code changes for our next stable release be essentially untested
> is not great from a QA perspective - if nobody is trying out those new
> versions except for their maintainer, then nobody can find and report the
> (potentially serious) bugs that only happen in system configurations that
> differ from the maintainer's system. That's why the release team strongly
> discourages packages going into testing via testing-proposed-updates, and
> encourages packages going into testing via unstable.
If we were, during the freeze, directed to upload fixes to
testing-proposed-updates, then more people would add it to their
sources.list during the freeze.
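For reference, opting in would only be one extra line in sources.list; a
sketch (the mirror URL is an example, the suite name is
testing-proposed-updates as published in the Debian archive):

```
# Follow updates queued for testing during the freeze (example mirror):
deb http://deb.debian.org/debian testing-proposed-updates main
```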
Cheers,
Thomas Goirand (zigo)
Hello,
On 20.10.21 02:43, Thomas Goirand wrote:
> Hi Simon,
>
> For me, the long freezes are very problematic. They may span 6 months,
> which is how long it takes for a new OpenStack release to show up, and
> then I don't know where to upload it... :/
You can upload it to experimental.
On Wed, Oct 20, 2021 at 02:43:47AM +0200, Thomas Goirand wrote:
> > However, the problem with freezing testing but not freezing unstable is
> > that if you do that, all updates to testing during the freeze (to fix the
> > release-critical bugs that stop it from already being ready for release)
> > have to go into testing via testing-proposed-updates, which approximately
> > nobody uses.
>
> We don't use it, because we're told to use unstable...

It's about using it on machines, not about uploading.

> If we were told that it's ok to upload changes to unstable during the
> freeze, and upload to testing-proposed-updates, we'd do it (and IMO,
> it'd be a very good move from the release team).

The RT position on this was always "nobody uses t-p-u, so please no", as
repeated every freeze when someone asks why we don't do this.
> You can upload it to experimental.

That's obviously what I'm doing. But when there are two releases during
the freeze, it means one of them will never reach unstable.
Thomas Goirand wrote [Wed, Oct 20, 2021 at 09:11:13AM +0200]:
> > You can upload it to experimental.
>
> That's obviously what I'm doing. But when there are two releases during
> the freeze, it means one of them will never reach unstable.
Right, which makes perfect sense.
The group of people interested in having always the latest OpenStack
will be able to install from your packages in experimental.
I guess very few will, but if it's needed, it's available -- and the
work for you when the freeze is done is much smaller (just re-target the
changelog, re-build, re-upload).
What do you lose by those uploads not reaching unstable?
On 10/20/21 7:50 PM, Gunnar Wolf wrote:
> Thomas Goirand wrote [Wed, Oct 20, 2021 at 09:11:13AM +0200]:
> > > You can upload it to experimental.
> >
> > That's obviously what I'm doing. But when there are two releases during
> > the freeze, it means one of them will never reach unstable.
>
> Right, which makes perfect sense.
>
> The group of people interested in having always the latest OpenStack
> will be able to install from your packages in experimental.
Mostly, OpenStack is consumed using the unofficial backports we provide through osbpo.debian.net, which contains backports from Jessie to
Bullseye, for 14 OpenStack releases so far. I'd love to make it an
official Debian channel on debian.org, through the official Debian
backports repositories if only I could have 4 or 5 repos per Debian
release. I had hope in 2014 when Ganneff described his vision of
Bikesheds, but it's not happening, unfortunately.
Consuming OpenStack from experimental, while probably doable, doesn't
look easy, at least.
> I guess very few will, but if it's needed, it's available -- and the
> work for you when the freeze is done is much smaller (just re-target the
> changelog, re-build, re-upload).
>
> What do you lose by those uploads not reaching unstable?
Very simple: an upgrade path. In most OpenStack projects, you cannot
skip an OpenStack release, at least because of the db schema upgrades.
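To make the constraint concrete, here is a minimal sketch (hypothetical
release names and structure, not OpenStack's actual migration code) of
why chained schema migrations force you through every intermediate
release:

```python
# Hypothetical sketch: each release ships only the migration step from
# its immediate predecessor, so the upgrade path must visit every
# intermediate release; you cannot jump Rocky -> Victoria directly.
MIGRATIONS = {
    "rocky": None,     # base schema for this example
    "stein": "rocky",  # each release's migration assumes the previous schema
    "train": "stein",
    "ussuri": "train",
    "victoria": "ussuri",
}

def upgrade_path(installed, target):
    """Return the ordered list of releases whose migrations must run."""
    path = []
    cur = target
    while cur != installed:
        if cur is None:
            raise ValueError("no upgrade path")
        path.append(cur)
        cur = MIGRATIONS[cur]
    return list(reversed(path))
```

With this model, going from Rocky to Victoria still has to run the
Stein, Train, and Ussuri migrations in order, which is exactly why the
intermediate release must reach unstable.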
Cheers,
Thomas Goirand (zigo)
On 2021-10-20 22:51:59 +0200 (+0200), Thomas Goirand wrote:
> [...]
> In most OpenStack projects, you cannot skip an OpenStack release, [...]
> at least because of the db schema upgrades.
Upstream, I want to keep pushing on what we referred to as
"skip-level upgrades" which would be something akin to embedding
just the routines needed to upgrade data structures for earlier
versions into each later version. The "fast-forward upgrades" we
worked out (where you at least don't need to start any services on
the intermediate versions) is certainly an improvement, but not a
desirable end state in my opinion.
Granted, from a Debian perspective, this would be akin to upgrading
from buster to bookworm without installing bullseye's packages along
the way. Albeit not as vast a collection of software, it's still
hundreds of projects which need upgrading and need to be able to
"skip" between arbitrary numbers of intermediate releases, so not
trivial either.
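The two models contrasted above could be sketched like this (the helper
names and release list are hypothetical, not actual OpenStack tooling):

```python
# Hypothetical sketch contrasting the two upgrade models.
RELEASES = ["rocky", "stein", "train", "ussuri", "victoria"]  # example names

def fast_forward_upgrade(installed, target, run_migrations):
    """Fast-forward: run each intermediate release's data migrations in
    order, but never start that release's services."""
    i, j = RELEASES.index(installed), RELEASES.index(target)
    for release in RELEASES[i + 1 : j + 1]:
        run_migrations(release)  # services for `release` are never started

def skip_level_upgrade(installed, target, run_embedded):
    """Skip-level: the target release embeds the upgrade routines for
    all earlier versions, so one invocation covers an arbitrary gap."""
    run_embedded(target, since=installed)
```

Fast-forward still executes one migration step per intermediate release;
skip-level collapses the whole gap into a single step, which is why it
is the more desirable end state.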
What I don't know is how far OpenStack has gone in supporting release
skipping. For example, would it work to upgrade from Rocky (in Buster)
to Victoria (in Bullseye) directly? For which projects?
Simon> However, the problem with freezing testing but not freezing
Simon> unstable is that if you do that, all updates to testing
Simon> during the freeze (to fix the release-critical bugs that stop
Simon> it from already being ready for release) have to go into
Simon> testing via testing-proposed-updates, which approximately
Simon> nobody uses.
Have we ever looked into getting more people to use TPU so it's a viable path?