• releasing major library change to unstable without coordination

    From Jonas Smedegaard@21:1/5 to All on Wed Dec 22 23:40:01 2021
    Hi fellow developers,

    Is it normal and ok to upload a new major release of a library to
    unstable, without either a) testing that reverse dependencies do not
    break, or b) coordinating with maintainers of reverse dependencies
    _before_ such upload?

    Sure, accidents happen - but does the label "unstable" only mean that
    accidents can happen, or also that coordination/warning is optional?

    The reason for my question is bug #1001591, where (apart from my failures
    in getting my points across as clearly as I would have liked) the involved
    package maintainer and I seem to have very different views on the matter,
    and I would like to understand more generally whether I am living in some
    fantasy world different from common practice in Debian.


    Regards,

    - Jonas

    --
    * Jonas Smedegaard - idealist & Internet-arkitekt
    * Tlf.: +45 40843136 Website: http://dr.jones.dk/

    [x] quote me freely [ ] ask before reusing [ ] keep private
  • From Samuel Thibault@21:1/5 to All on Wed Dec 22 23:50:01 2021
    Jonas Smedegaard, on Thu 23 Dec 2021 00:45:23 +0100, wrote:
    Is it normal and ok to upload a new major release of a library to
    unstable, without either a) testing that reverse dependencies do not
    break, or b) coordinating with maintainers of reverse dependencies
    _before_ such upload?

    Usually I'd upload to experimental first, so people can easily check and
    fix their rdeps packages, and notify them with an "important" bug. Then,
    after some time, raise it to "serious" and upload to unstable.
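    (For concreteness, a minimal sketch of that workflow, assuming a source
    package named libfoo and a made-up bug number; the commands and version
    are only illustrative:)

        # stage the new upstream release in experimental first
        dch -v 2.0-1 -D experimental "New upstream release (API break)"
        dpkg-buildpackage -S
        dput ftp-master libfoo_2.0-1_source.changes

        # file an "important" bug against each affected rdep (e.g. with
        # reportbug); once the unstable upload is imminent, raise it:
        bts severity 999999 serious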

    Samuel

  • From Rene Engelhard@21:1/5 to All on Thu Dec 23 00:50:02 2021
    Hi,

    On 23.12.21 at 00:45, Jonas Smedegaard wrote:
    Is it normal and ok to upload a new major release of a library to
    unstable, without either a) testing that reverse dependencies do not
    break, or b) coordinating with maintainers of reverse dependencies
    _before_ such upload?

    People are expected to do that (coordination, testing, etc.).


    - Mistakes happen.


    BUT:


    - Apparently some people forgot this and deliberately don't follow it
    (and I don't mean the accidents that can always happen).

    (In the specific case I have in mind, the maintainer just added a
    Breaks: without telling anyone, so the only "communication" was d-d-c
    and/or failing autopkgtests.)
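    (That is, nothing more than a stanza along these lines in debian/control;
    the package names and version are made up here:)

        Package: libfoo1
        Breaks: bar (<< 2.0~)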


    Sure, accidents happen - but does the label "unstable" only mean that
    accidents can happen, or also that coordination/warning is optional?

    I don't think it is.


    Regards,


    Rene

  • From Sandro Tosi@21:1/5 to All on Thu Dec 23 01:30:01 2021
    People are expected to do that (coordination, testing, etc.).


    - Mistakes happen.


    BUT:


    - Apparently some people forgot this and deliberately don't follow it
    (and I don't mean the accidents that can always happen).

    (In the specific case I have in mind, the maintainer just added a
    Breaks: without telling anyone, so the only "communication" was d-d-c
    and/or failing autopkgtests.)

    There's also a problem of resources: let's take the example of numpy,
    which has 500+ rdeps. Am I expected to:

    * rebuild all its reverse dependencies with the new version
    * evaluate which packages failed, and whether each failure is due to
    the new version of numpy or an already existing/independent cause
    * provide fixes that are compatible with the current version and the
    new one (because we can't break what we currently have and we need to
    prepare for the new version)
    * wait for all of the packages with issues to have applied the patch
    and been uploaded to unstable
    * finally upload the new version of numpy to unstable

    ?

    That's unreasonably long, time-consuming and work-intensive, for several reasons:

    * first and foremost, rebuilding 500 packages takes hardware resources
    not every DD is expected to have at hand (or pay for, like a cloud
    account), so until there's a ratt-as-a-service (https://github.com/Debian/ratt) kind of solution available to every DD
    (a rough sketch of the manual version follows after this list), do not
    expect that for any sizable package, but maybe only for the ones with
    the smallest package "networks" (which are also the ones causing the
    least "damage" if something goes wrong),
    * one maintainer vs many maintainers, one for each affected pkg;
    distribute the load (pain?)
    * upload to experimental and use autopkgtests, you say? sure, that's
    one way, but tracker.d.o currently doesn't show experimental excuses
    (#944737, #991237), so you don't immediately see which packages failed,
    and many packages still don't have autopkgtests, so that's not really
    covering everything anyway
    * sometimes I ask Lucas to do an archive rebuild with a new version,
    but that's still relying on a single person to run the tests, parse
    the build logs, and then open bugs for the failed packages; maybe most
    of it is automated, but not all of it (and you can't really do this for
    every pkg in Debian, because the archive rebuild tool needs 2 config
    files for each package you want to test: 1. how to set up the build env
    to use the new package, 2. the list of packages to rebuild in that
    env).
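    (That manual version, roughly sketched; the package and version in the
    .changes file name are only an example, and ratt drives sbuild, so a
    working sbuild chroot is assumed:)

        # list the (build-)rdeps that would be affected
        apt-cache rdepends python3-numpy
        build-rdeps python3-numpy      # from devscripts; needs deb-src entries

        # rebuild all of them against the not-yet-uploaded .debs;
        # this is the part that needs serious hardware for 500+ packages
        ratt numpy_1.22.0-1_amd64.changes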

    What exactly are you expecting from other DDs?

    Unstable is unstable for a reason; breakage will happen. Nobody wants
    to intentionally break (I hope?) other people's work/packages, but
    until we come up with a simple, effective technical solution to the
    "build the rdeps and see what breaks" issue, we will upload to
    unstable and see what breaks *right there*.

    Maybe it's just laziness on my part, but there needs to be a cutoff
    between making changes/progress and dealing with the consequences, and
    walking on eggshells every time there's a new upstream release (or
    even a patch!) and you need to upload a new pkg.

    I choose making progress.

    Cheers,
    --
    Sandro "morph" Tosi
    My website: http://sandrotosi.me/
    Me at Debian: http://wiki.debian.org/SandroTosi
    Twitter: https://twitter.com/sandrotosi

  • From Scott Kitterman@21:1/5 to Sandro Tosi on Thu Dec 23 03:40:01 2021
    On December 23, 2021 12:24:16 AM UTC, Sandro Tosi <morph@debian.org> wrote:
    [...]

    Maybe it's just laziness on my part, but there needs to be a cutoff
    between making changes/progress and dealing with the consequences, and
    walking on eggshells every time there's a new upstream release (or
    even a patch!) and you need to upload a new pkg.

    It's not an either/or.

    Generally, the Release Team should coordinate the timing of transitions. New libraries should be staged in experimental first. Maintainers of rdepends should be alerted to the impending transition so they can check whether they are ready.

    Debian is developed by a team and we should work together to move things forward. Particularly for a big transition like numpy, we all need to work together to get the work done.

    It's true that breakage will happen in unstable. We shouldn't be afraid of it, but we should also work to keep it manageable.

    Scott K

  • From Sandro Tosi@21:1/5 to All on Thu Dec 23 05:10:01 2021
    It's not an either/or.

    Generally, the Release Team should coordinate the timing of transitions. New libraries should be staged in experimental first. Maintainers of rdepends should be alerted to the impending transition so they can check whether they are ready.

    Debian is developed by a team and we should work together to move things forward. Particularly for a big transition like numpy, we all need to work together to get the work done.

    It's true that breakage will happen in unstable. We shouldn't be afraid of it, but we should also work to keep it manageable.

    Let's not get hung up on the details of numpy; what if the package to
    update is a small library, with say 20 rdeps, but one of them is llvm
    or gcc or libreoffice, and maybe only for their docs? Are we really
    asking the maintainer of that library to rebuild all the rdeps, which
    can require considerable time, memory and disk space not readily
    available (whereas we can assume the rdeps maintainers have figured out
    their resource availability and so would be able to rebuild their
    packages easily)?

    And let's use numpy once again: two days ago I uploaded 1.21.5 to
    replace 1.21.4 in unstable. Should I instead have uploaded to
    experimental and asked the RT for a transition slot? How do I know
    whether a transition is required, in this and in all other cases, for
    all packages? While it is only a patch release, there's a non-zero
    chance that a regression or an incompatible change was shipped with it,
    which can only be discovered by an rdeps rebuild, and so we are back to
    my previous mail.

    Regards,
    --
    Sandro "morph" Tosi
    My website: http://sandrotosi.me/
    Me at Debian: http://wiki.debian.org/SandroTosi
    Twitter: https://twitter.com/sandrotosi

  • From Niels Thykier@21:1/5 to All on Thu Dec 23 05:40:01 2021
    Sandro Tosi:
    And let's use numpy once again: two days ago I uploaded 1.21.5 to
    replace 1.21.4 in unstable. [...]

    Regards,

    Hi,

    If you feel discussing patch releases is worth a topic of its own, I
    think we should start a separate thread for that, because the process is
    likely to be considerably different compared to a *major library change*
    (which is what Jonas asked about).

    Thanks,
    ~Niels

  • From Scott Kitterman@21:1/5 to All on Thu Dec 23 06:10:01 2021
    On Wednesday, December 22, 2021 11:07:51 PM EST Sandro Tosi wrote:
    [...]

    And let's use numpy once again: two days ago I uploaded 1.21.5 to
    replace 1.21.4 in unstable. Should I instead have uploaded to
    experimental and asked the RT for a transition slot? How do I know
    whether a transition is required, in this and in all other cases, for
    all packages? While it is only a patch release, there's a non-zero
    chance that a regression or an incompatible change was shipped with it,
    which can only be discovered by an rdeps rebuild, and so we are back to
    my previous mail.

    For things like major Python packages (other languages too), it's not as simple as for C (and to a lesser extent C++) libraries, where either it's binary compatible, there is no SONAME change and no transition is needed, or it isn't.
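
    (For a C library that check is mechanical; a rough sketch, with libfoo
    and the paths standing in for the real thing:)

        # unchanged SONAME: rdeps keep working, no transition needed;
        # changed SONAME: a transition is needed
        objdump -p /usr/lib/x86_64-linux-gnu/libfoo.so.1 | grep SONAME

        # optionally compare the ABI in more detail (abigail-tools)
        abidiff old/libfoo.so.1 new/libfoo.so.2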

    I sympathize, really. I think that for things that are supposed to be
    backward compatible, uploading to unstable is generally fine. More extensive work is appropriate for larger, more "major" upgrades.

    Scott K

  • From Paul Gevers@21:1/5 to All on Thu Dec 23 09:50:02 2021
    Hi,

    [I've read the rest of the thread so far, answering the transition
    question].

    On 23-12-2021 00:45, Jonas Smedegaard wrote:
    > Is it normal and ok to upload a new major release of a library to
    > unstable, without either a) testing that reverse dependencies do not
    > break, or b) coordinating with maintainers of reverse dependencies
    > _before_ such upload?

    What we (the Release Team) expect/hope for is documented on the wiki
    [1]. A lot of that page is also valid for uploads that are known,
    expected or suspected to be intrusive. In case of doubt, please always
    coordinate with the Release Team (but don't overdo your doubt ;)). To
    answer your question, most people do what you expect, but not all.

    We (again, the Release Team) don't expect maintainers of libraries/core
    packages to fix all regressions caused by their uploads. What we do
    expect is communication: either a plain warning (well) in advance (easy,
    but less effective), or uploading to experimental, checking reverse
    (test) dependencies and filing (important) bugs, which are raised to
    serious once the upload to unstable happens. In either case, when
    breakage is expected, please give your reverse dependencies some time to
    prepare, so their packages are not instantly RC buggy. Maybe an
    interesting note to all: if the bug has already aged at lower severity,
    the autoremoval counter starts counting immediately once the bug is
    raised to serious.

    The main point of my message may be: if we break unstable too much, we
    may not be able to have packages migrate to testing, because it gets
    harder and harder to find sets that are ready together.

    Having said all that, unstable is unstable. Expect breakage there. But I
    hope we can try to avoid unannounced large breakage when the breakage is
    expected.

    Paul

    [1] https://wiki.debian.org/Teams/ReleaseTeam/Transitions


  • From Timo Röhling@21:1/5 to All on Thu Dec 23 10:50:01 2021
    Hi Sandro!

    * Sandro Tosi <morph@debian.org> [2021-12-22 19:24]:
    There's also a problem of resources: let's take the example of numpy,
    which has 500+ rdeps. Am I expected to:

    * rebuild all its reverse dependencies with the new version
    * evaluate which packages failed, and whether each failure is due to
    the new version of numpy or an already existing/independent cause
    * provide fixes that are compatible with the current version and the
    new one (because we can't break what we currently have and we need to
    prepare for the new version)
    * wait for all of the packages with issues to have applied the patch
    and been uploaded to unstable
    * finally upload the new version of numpy to unstable

    ?

    that's unreasonably long, time-consuming and work-intensive for
    several reasons

    That's true. However, I think it is reasonable to expect a
    maintainer to
    * look at the release notes for documented API breakage,
    * rebuild a few reverse dependencies (ideally the ones which
      exercise the most functionality, but a random pick is probably
      fine, too; a rough sketch of such a spot check follows after this list),
    * file bugs if you find any issues, and
    * monitor the PTS and check for autopkgtest failures, so you can
      help figure out (or even fix) what broke.
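
    (Such a spot check could look roughly like this; python3-numpy is just
    the example from above, the rdepends output lists binary packages that
    may need mapping to source packages, and a working sbuild setup for
    unstable is assumed:)

        # pick five reverse dependencies at random and rebuild them
        apt-cache rdepends python3-numpy | tail -n +3 | shuf -n 5 |
        while read pkg; do
            sbuild -d unstable "$pkg"
        done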

    Personally, I also like to run something like
    `git diff upstream/<old> upstream/<new> -- '*.h'` or
    `git diff upstream/<old> upstream/<new> -- '*.py'` to get an idea
    how much has changed, and if I find breakage (either through
    inspection or rebuilding), look for other usage of the broken API
    with sources.debian.org.

    Maybe it's just laziness on my part, but there needs to be a cutoff
    between making changes/progress and dealing with the consequences, and
    walking on eggshells every time there's a new upstream release (or
    even a patch!) and you need to upload a new pkg.

    I choose making progress.

    I believe that if you are the maintainer of an important package with many
    reverse dependencies, you should spend more time to avoid breakage
    because you have a huge lever effect. For instance, if you can cut
    corners to save 10 hours of work, but 100 other DDs will need to
    spend 30 minutes each to fix the breakage as a result, it is still a
    bad tradeoff.

    OTOH, as a maintainer of an unpopular leaf package, I can get away
    with atrocious uploads because nobody but me will notice or care.


    Cheers
    Timo


    --
    Timo Röhling
    9B03 EBB9 8300 DF97 C2B1 23BF CC8C 6BDD 1403 F4CA

  • From Rene Engelhard@21:1/5 to All on Thu Dec 23 11:40:03 2021
    Hi,

    On 23.12.21 at 10:44, Timo Röhling wrote:
    That's true. However, I think it is reasonable to expect a
    maintainer to
    * look at the release notes for documented API breakage,
    * rebuild a few reverse dependencies (ideally the ones which
      exercise the most functionality, but a random pick is probably
      fine, too),
    * file bugs if you find any issues, and
    * monitor the PTS and check for autopkgtest failures, so you can
      help figure out (or even fix) what broke.

    Full ACK.


    And not to deliberately break packages (deliberately because he knew
    it would break) without any information except a Breaks:.

    No bug or any info whatsoever.


    The first failure might be an accident; but if you point that out and
    another upload causes the same failure mode again, adding a silent
    Breaks: that people only notice because _no direct r-dep's_
    autopkgtest fails, that is nothing else than deliberate.

    I believe if you are maintainer of an important package with many
    reverse dependencies, you should spend more time to avoid breakage
    because you have a huge lever effect. For instance, if you can cut
    corners to save 10 hours of work, but 100 other DDs will need to
    spend 30 minutes each to fix the breakage as a result, it is still a
    bad tradeoff.

    Or the release team will blame the victim of the change (not the
    maintainer doing that uncoordinated change)

    for the breakage...


    Regards,


    Rene

  • From Rene Engelhard@21:1/5 to All on Thu Dec 23 11:50:01 2021
    Hi,

    On 23.12.21 at 01:24, Sandro Tosi wrote:
    There's also a problem of resources: let's take the example of numpy,
    which has 500+ rdeps. Am I expected to:

    * rebuild all its reverse dependencies with the new version
    * evaluate which packages failed, and whether each failure is due to
    the new version of numpy or an already existing/independent cause
    * provide fixes that are compatible with the current version and the
    new one (because we can't break what we currently have and we need to
    prepare for the new version)

    You can fix it for the new version only and be done with it. (That
    wouldn't necessarily work for libraries, though,

    but (build-)deps can be tightened. And if the fix is trivial, one
    doesn't need to provide a patch either (although I personally often do
    anyway).
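    (Tightening a build-dep is just a versioned constraint in the rdep's
    debian/control; the version here is made up:)

        Build-Depends: debhelper-compat (= 13),
                       python3-numpy (>= 1:1.22),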

    * wait for all of the packages with issues to have applied the patch
    and been uploaded to unstable

    No, you just need a sensible waiting time depending on the issue, and
    then you make the already existing bug RC / you file it as RC to begin
    with.

    No one says "wait for eternity until the last maintainer fixes the
    package, or someone fixes the package if it's unmaintained".

    that's unreasonably long, time-consuming and work-intensive for
    several reasons

    I agree that you need to take this with a grain of salt and it depends
    on the circumstances, yes.

    And yes, it can be long, time-consuming and work-intensive...


    * first and foremost, rebuilding 500 packages takes hardware resources
    not every DD is expected to have at hand (or pay for, like a cloud
    account), so until there's a ratt-as-a-service (https://github.com/Debian/ratt) kind of solution available to every DD,
    do not expect that for any sizable package, but maybe only for the
    ones with the smallest package "networks" (which are also the ones
    causing the least "damage" if something goes wrong),

    I've done that for dependencies involving firefox, chromium, libreoffice
    and other big stuff. But yeah, I get the point.


    In many cases it's not that much, though, especially not if you already
    know a package which _will definitely_ be broken by your change.

    * upload to experimental and use autopkgtests, you say? sure, that's
    one way, but tracker.d.o currently doesn't show experimental excuses
    (#944737, #991237), so you don't immediately see which packages failed,
    and many packages still don't have autopkgtests, so that's not really
    covering everything anyway

    Yeah


    Unstable is unstable for a reason; breakage will happen. Nobody wants
    to intentionally break (I hope?) other people's work/packages, but

    Evidence shows that (in the example I have in mind) this was uploaded in
    the full knowledge that it would break things.


    Regards,


    Rene

  • From Stéphane Blondon@21:1/5 to All on Thu Dec 23 15:00:03 2021
    On Thu 23 Dec 2021 at 01:25, Sandro Tosi <morph@debian.org> wrote:

    rebuilding 500 packages takes hardware resources not
    every DD is expected to have at hand (or pay for, like a cloud
    account), so until there's a ratt-as-a-service (https://github.com/Debian/ratt) kind of solution available to every DD


    If providing DDs with an easy way to rebuild the reverse dependencies
    could solve the problem, what blocks us from doing it? We could spawn a
    container including ratt; the DD uploads the new package; rdepends are
    built; a compilation report is sent to the DD; the container is deleted.
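    (A single run of such a service could look roughly like this; the image,
    the .changes file name and the ratt packaging are all assumptions, and
    ratt drives sbuild, so the container would also need an sbuild chroot
    set up inside:)

        podman run --rm -v "$PWD":/work -w /work debian:sid bash -c '
            apt-get update &&
            apt-get install -y ratt &&
            ratt mylib_2.0-1_amd64.changes > rebuild-report.txt'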

    I guess it's a lack of time and volunteers, but I wonder if there are
    other problems.

    I wonder if it would help to use Debian money to pay for hosting?


    --
    Stéphane Blondon



  • From Paul Gevers@21:1/5 to All on Thu Dec 23 19:00:01 2021
    Hi,

    On 23-12-2021 15:03, Alexis Murzeau wrote:
    > Isn't ci.debian.net doing automated builds with experimental versions of
    > dependencies?

    ci.debian.net doesn't do builds, except for autopkgtests that have the
    "needs-build" restriction, which we discourage unless really needed.

    Paul

