Is it normal and ok to upload a new major release of a library to
unstable, without either a) testing that reverse dependencies do not
break, or b) coordinating with maintainers of reverse dependencies
_before_ such upload?
Sure, accidents happen - but does the label "unstable" only mean that
accidents can happen, or also that coordination/warning is optional?
People are expected to do so (coordination/testing etc).
- Mistakes happen.
BUT:
- Apparently some people forgot this and deliberately don't follow (and
I don't mean the can-happen accidents).
(In the specific case I have in mind, the maintainer just added a
Breaks: without telling anyone, so the only "communication" was d-d-c
and/or failing autopkgtests..)
there's also a problem of resources: let's take the example of numpy,
which has 500+ rdeps. am i expected to:
* rebuild all its reverse dependencies with the new version
* evaluate which packages failed, and whether the failures are due to
the new version of numpy or to an already existing/independent cause
* provide fixes that are compatible with both the current version and
the new one (because we can't break what we currently have and we
need to prepare for the new version)
* wait for all of the packages with issues to have applied the patch
and been uploaded to unstable
* finally upload the new version of numpy to unstable
?
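(even just the first step of that list needs real tooling: a rough
sketch, assuming a working sbuild setup, with the numpy version purely
hypothetical, would be

    # list the binary packages that build-depend on python3-numpy
    # (build-rdeps is in devscripts)
    build-rdeps python3-numpy

    # build the new numpy in a clean unstable chroot, then let ratt
    # rebuild the reverse dependencies against the result
    sbuild -d unstable numpy_1.22.0-1.dsc
    ratt numpy_1.22.0-1_amd64.changes

and ratt alone can keep a machine busy for days on a package this
size.)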
that's unreasonably long, time-consuming and work-intensive for
several reasons:
* first and foremost, rebuilding 500 packages takes hardware resources
not every dd is expected to have at hand (or pay for, like a cloud
account), so until there's a ratt-as-a-service
(https://github.com/Debian/ratt) kind of solution available to every
DD, do not expect that for any sizable package, but maybe only for the
ones with the smallest package "networks" (which are also the ones
causing the least "damage" if something goes wrong),
* one maintainer vs many maintainers, one for each affected pkg;
distribute the load (pain?)
* upload to experimental and use autopkgtests, you say? sure, that's
one way, but tracker.d.o currently doesn't show experimental excuses
(#944737, #991237), so you don't immediately see which packages
failed, and many packages still don't have autopkgtests, so that's not
really covering everything anyway
* sometimes i ask Lucas to do an archive rebuild with a new version,
but that's still relying on a single person to run the tests, parse
the build logs, and then open bugs for the failed packages; maybe most
of it is automated, but not all of it (and you can't really do this
for every pkg in debian, because the archive rebuild tool needs 2
config files for each package you wanna test: 1. how to set up the
build env to use the new package, 2. the list of packages to rebuild
in that env).
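(for reference, the experimental route would look roughly like the
following; the package name and version are hypothetical, the test
chroot needs an experimental apt source, and --pin-packages depends
on your autopkgtest version:

    # upload a build whose d/changelog targets experimental
    dput ftp-master numpy_1.22.0-1_amd64.changes

    # then run one rdep's test suite against the experimental numpy
    autopkgtest --pin-packages=experimental=src:numpy scipy \
      -- schroot sid

which still leaves you doing this by hand, one rdep at a time.)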
what exactly are you expecting from other DDs?
unstable is unstable for a reason, breakage will happen, nobody wants
to intentionally break (i hope?) other people's work/packages, but
until we come up with a simple, effective technical solution to the
"build the rdeps and see what breaks" issue, we will upload to
unstable and see what breaks *right there*.
Maybe it's just lazy on my part, but there needs to be a cutoff
between making changes/progress and dealing with the consequences, and
walking on eggshells every time there's a new upstream release (or
even a patch!) and you need to upload a new pkg.

i choose making progress
It's not an either-or.

Generally, the Release Team should coordinate the timing of
transitions. New libraries should be staged in experimental first.
Maintainers of rdepends should be alerted to the impending transition
so they can check if they are ready.

Debian is developed by a team and we should work together to move
things forward. Particularly for a big transition like numpy, we all
need to work together to get the work done.

It's true that breakage will happen in unstable. We shouldn't be
afraid of it, but we should also work to keep it manageable.
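(staging in experimental is mostly a matter of the changelog target;
a minimal sketch, with the version made up:

    # point the upload at experimental in debian/changelog
    dch -v 1.22.0-1 -D experimental "New upstream release"

    # source-only upload as usual
    dput ftp-master numpy_1.22.0-1_source.changes

and the actual transition is then requested by filing a bug against
the release.debian.org pseudo-package.)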
let's not get hung up on the details of numpy; what if the package to
update is a small library, with say 20 rdeps, but one of them is llvm
or gcc or libreoffice, and maybe only for their docs? Are we really
asking the maintainer of that library to rebuild all the rdeps, which
can require considerable time, memory and disk space not readily
available (whereas we can assume the rdeps' maintainers have figured
out their own resource availability, and so they'd be able to rebuild
their packages easily)?
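(sizing up that rdep "network" before an upload is at least cheap; a
sketch, with a made-up library name:

    # approximate count of installed-package reverse dependencies
    # (the first two output lines of apt-cache rdepends are headers)
    apt-cache rdepends libfoo1 | tail -n +3 | sort -u | wc -l

    # the reverse *build* dependencies are what a rebuild would hit
    build-rdeps libfoo1

but knowing the count is 20 doesn't tell you that one of them is
llvm.)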
and let's use numpy once again: 2 days ago i uploaded 1.21.5 to
replace 1.21.4 in unstable. should i have instead uploaded to
experimental and asked the RT for a transition slot? how do i know if
a transition is required, in this and in all other cases, for all
packages? while it's only a patch release, there's a non-zero chance
that a regression or an incompatible change was released with it,
which can only be discovered by an rdeps rebuild, and so we go back
to my previous mail.
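(one cheap, if incomplete, sanity check for a point release like that
is diffing the two uploads with debdiff from devscripts; the Debian
revisions here are made up:

    # eyeball the upstream diff for API-relevant changes
    debdiff numpy_1.21.4-1.dsc numpy_1.21.5-1.dsc | less

    # compare the shipped file lists of the two binary packages
    debdiff python3-numpy_1.21.4-1_amd64.deb \
            python3-numpy_1.21.5-1_amd64.deb

but of course a behavioural regression won't show up in either.)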
That's true. However, I think it is reasonable to expect a
maintainer to
* look at the release notes for documented API breakage,
* rebuild a few reverse dependencies (ideally the ones which
  exercise the most functionality, but a random pick is probably
  fine, too; see the sketch below),
* file bugs if you find any issues, and
* monitor the PTS and check for autopkgtest failures, so you can
  help figure out (or even fix) what broke.
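A spot-check like that can stay small; a rough sketch, with the
sample size and the picked rdep purely illustrative:

    # pick a few reverse build-dependencies at random
    build-rdeps python3-numpy | shuf | head -n 5

    # for each pick: rebuild it and run its autopkgtests in sid
    sbuild -d unstable scipy
    autopkgtest scipy -- schroot sid

Both sbuild and autopkgtest will fetch the source themselves when
given a plain package name.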
I believe if you are the maintainer of an important package with many
reverse dependencies, you should spend more time to avoid breakage,
because you have a huge lever effect. For instance, if you can cut
corners to save 10 hours of work, but 100 other DDs will need to
spend 30 minutes each to fix the breakage as a result, it is still a
bad tradeoff.