• Enabling salsa-ci on all Debian Python Team repos

    From Emanuele Rocca@21:1/5 to All on Mon Sep 19 12:50:01 2022
    Hello debian-salsa-ci and debian-python!

    I was wondering if it would make sense to enable CI/CD on Salsa for all projects owned by the Debian Python Team, or if there's any concern
    about scaling issues in terms of pipeline workers (or anything else
    really).

    For the past few days I've been enabling CI/CD on Salsa for various
    packages owned by the DPT. I've been doing this on a case-by-case basis:
    if the package I wanted to work on (for reasons unrelated to CI) did not
    have CI/CD yet, I'd add [1] as the pipeline configuration file and carry
    on with my work.

    Perhaps there's an opportunity to automate this and get wider CI usage.

    Thanks,
    Emanuele

    [1] recipes/debian.yml@salsa-ci-team/pipeline

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Iñaki Malerba@21:1/5 to Emanuele Rocca on Mon Sep 19 14:20:02 2022
    Hello Emanuele!

    On Mon, Sep 19, 2022 at 12:51:00PM +0200, Emanuele Rocca wrote:
    > Hello debian-salsa-ci and debian-python!
    >
    > I was wondering if it would make sense to enable CI/CD on Salsa for
    > all projects owned by the Debian Python Team, or if there's any
    > concern about scaling issues in terms of pipeline workers (or anything
    > else really).

    That would be great!

    Mass-enabling CI on groups might be something that you want to
    discuss with the Salsa admins first. On previous occasions they pushed
    back on this kind of steep adoption due to the workload increase,
    but I'm not sure what the stance is right now on that matter.


    > For the past few days I've been enabling CI/CD on Salsa for various
    > packages owned by the DPT. I've been doing this on a case-by-case
    > basis: if the package I wanted to work on (for reasons unrelated to
    > CI) did not have CI/CD yet, I'd add [1] as the pipeline configuration
    > file and carry on with my work.
    >
    > Perhaps there's an opportunity to automate this and get wider CI usage.

    One of the biggest issues we had when a team adopted the pipeline was
    DDoSing of the instance because of the multiple pipelines generated when
    pushing the .gitlab-ci.yml file to all the projects.

    If you're planning to do this, please:

    - Use the API and configure the 'CI/CD configuration file' project
    field, as you mentioned in the email. This won't generate a pipeline
    when configured, only on the next push.

    - If you need to create the .gitlab-ci.yml file, please use the
    `ci.skip`[1] push option.
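    For concreteness, a minimal sketch of both suggestions. The project
    path and token variable below are hypothetical placeholders, and the
    echo keeps the snippet side-effect free; drop it to actually perform
    the request.

```shell
# Set the pipeline definition via the GitLab projects API instead of
# committing a .gitlab-ci.yml. Changing ci_config_path does not start a
# pipeline by itself; one only runs on the next push.
SALSA_API="https://salsa.debian.org/api/v4"
PROJECT="python-team%2Fpackages%2Fexample"   # URL-encoded path, hypothetical
CONFIG="recipes/debian.yml@salsa-ci-team/pipeline"

echo curl --request PUT \
    --header "PRIVATE-TOKEN: \$SALSA_TOKEN" \
    --data "ci_config_path=${CONFIG}" \
    "${SALSA_API}/projects/${PROJECT}"

# And if you do have to commit a .gitlab-ci.yml, skip the pipeline on
# that one push:
echo git push -o ci.skip
```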


    Thanks, and good luck :)

    Iñaki

    1_ https://docs.gitlab.com/ee/user/project/push_options.html#push-options-for-gitlab-cicd


    > Thanks,
    >    Emanuele
    >
    > [1] recipes/debian.yml@salsa-ci-team/pipeline

    --
    Debian-salsa-ci mailing list
    Debian-salsa-ci@alioth-lists.debian.net https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/debian-salsa-ci

  • From Julian Gilbey@21:1/5 to All on Mon Sep 19 17:10:01 2022
    On Mon, Sep 19, 2022 at 01:52:09PM +0200, Iñaki Malerba wrote:
    [...]
    > > Perhaps there's an opportunity to automate this and get wider CI usage.
    >
    > One of the biggest issues we had when a team adopted the pipeline was
    > DDoSing of the instance because of the multiple pipelines generated
    > when pushing the .gitlab-ci.yml file to all the projects.
    >
    > If you're planning to do this, please:
    >
    > - Use the API and configure the 'CI/CD configuration file' project
    >   field, as you mentioned in the email. This won't generate a pipeline
    >   when configured, only on the next push.

    Indeed; setting the configuration file to
    recipes/debian.yml@salsa-ci-team/pipeline
    will avoid any need to touch the actual repository.

    > - If you need to create the .gitlab-ci.yml file, please use the
    >   `ci.skip`[1] push option.

    And that should only be needed if the configuration is non-standard.

    > Thanks, and good luck :)

    Best wishes,

    Julian

  • From Louis-Philippe Véronneau@21:1/5 to All on Mon Sep 19 19:20:01 2022
    On 2022-09-19 06 h 51, Emanuele Rocca wrote:
    > Hello debian-salsa-ci and debian-python!
    >
    > I was wondering if it would make sense to enable CI/CD on Salsa for
    > all projects owned by the Debian Python Team, or if there's any
    > concern about scaling issues in terms of pipeline workers (or anything
    > else really).
    >
    > For the past few days I've been enabling CI/CD on Salsa for various
    > packages owned by the DPT. I've been doing this on a case-by-case
    > basis: if the package I wanted to work on (for reasons unrelated to
    > CI) did not have CI/CD yet, I'd add [1] as the pipeline configuration
    > file and carry on with my work.
    >
    > Perhaps there's an opportunity to automate this and get wider CI usage.
    >
    > Thanks,
    >    Emanuele
    >
    > [1] recipes/debian.yml@salsa-ci-team/pipeline

    Hi,

    I was told "please don't" 3 years ago and although I've pushed a number
    of times (in private and in public), I have had no replies:

    https://salsa.debian.org/salsa/support/-/issues/170

    --
    Louis-Philippe Véronneau
    pollo@debian.org / veronneau.org

  • From Sandro Tosi@21:1/5 to All on Mon Sep 19 20:20:01 2022
    > I was wondering if it would make sense to enable CI/CD on Salsa for
    > all projects owned by the Debian Python Team, or if there's any
    > concern about scaling issues in terms of pipeline workers (or anything
    > else really).

    what would the team get out of doing this?

    --
    Sandro "morph" Tosi
    My website: http://sandrotosi.me/
    Me at Debian: http://wiki.debian.org/SandroTosi
    Twitter: https://twitter.com/sandrotosi

  • From Emanuele Rocca@21:1/5 to Sandro Tosi on Tue Sep 20 10:40:01 2022
    Hi Sandro,

    On 19/09 02:14, Sandro Tosi wrote:
    > what would the team get out of doing this?

    The way I see it, CI on Salsa is so useful that it should be enabled by
    default unless there are good reasons not to.

    ciao,
    Emanuele

  • From Sandro Tosi@21:1/5 to ema@debian.org on Tue Sep 20 14:40:02 2022
    On Tue, Sep 20, 2022 at 4:33 AM Emanuele Rocca <ema@debian.org> wrote:
    > On 19/09 02:14, Sandro Tosi wrote:
    > > what would the team get out of doing this?
    >
    > The way I see it, CI on Salsa is so useful that it should be enabled
    > by default unless there are good reasons not to.

    the way i worded my initial question was so that you could list the
    reasons that make it so useful, in detail: would you like to do so?

    --
    Sandro "morph" Tosi
    My website: http://sandrotosi.me/
    Me at Debian: http://wiki.debian.org/SandroTosi
    Twitter: https://twitter.com/sandrotosi

  • From Emanuele Rocca@21:1/5 to Sandro Tosi on Tue Sep 20 16:20:01 2022
    Hi Sandro,

    On Tue, Sep 20, 2022 at 08:31:14AM -0400, Sandro Tosi wrote:
    > the way i worded my initial question was so that you could list the
    > reasons that make it so useful, in detail: would you like to do so?

    Salsa CI is useful because it automatically performs binary/source
    builds and arm64 crossbuilds, and it runs various pretty important
    tests such as lintian, piuparts, reproducible build testing, and more.
    It also runs autopkgtest in LXC.

    Sure you can do all this manually on your own, but it's better to live in a world where the machines work for us rather than the other way around. :-)

  • From Carsten Schoenert@21:1/5 to All on Tue Sep 20 17:40:01 2022
    Hi,

    On 20.09.22 at 16:13, Emanuele Rocca wrote:
    > Salsa CI is useful because it automatically performs binary/source
    > builds and arm64 crossbuilds, and it runs various pretty important
    > tests such as lintian, piuparts, reproducible build testing, and
    > more. It also runs autopkgtest in LXC.

    Most of these steps I usually need to do locally anyway before I upload
    any packages. So I see no real gain in running a pipeline by default;
    for me this would just be burning energy in CPU cycles "because we can".

    CI/CD makes sense to me in a broader context, such as checking that a
    version upgrade of package A doesn't break other packages, e.g. whether
    all packages that now need to use a new version of pytest, setuptools,
    django etc. still work. But that's not covered by the way the default
    CI pipeline currently works (in my POV).

    So no, for me it currently makes no sense to enable CI for ALL packages
    by default!

    We have automatic Lintian checks, the buildds themselves, and also the
    autopkgtest infrastructure; why duplicate such things? That's a waste
    of energy and resources! The packages don't get better by running
    tests multiple times within the same environment.
    And given my experience in other teams and groups, nobody really cares
    about packages that fail some tests within a CI run. I strongly doubt
    it would be better here.

    > Sure you can do all this manually on your own, but it's better to
    > live in a world where the machines work for us rather than the other
    > way around. :-)

    I still don't see why this is a benefit.
    Using a CI option within your own namespace is another thing; doing so
    makes sense to me to prepare a new version for uploading.

    --
    Regards
    Carsten

  • From Sandro Tosi@21:1/5 to All on Tue Sep 20 21:20:01 2022
    > On 20.09.22 at 16:13, Emanuele Rocca wrote:
    > > Salsa CI is useful because it automatically performs binary/source
    > > builds and arm64 crossbuilds, and it runs various pretty important
    > > tests such as lintian, piuparts, reproducible build testing, and
    > > more. It also runs autopkgtest in LXC.
    >
    > Most of these steps I usually need to do locally anyway before I
    > upload any packages. So I see no real gain in running a pipeline by
    > default; for me this would just be burning energy in CPU cycles
    > "because we can".

    exactly this.

    the vast majority of team members (based on the commit emails i
    receive) are uploading the package to the archive at the same time as
    they are pushing a full set of changes to salsa (and sometimes only
    *after* the package has been ACCEPTED); in this case CI runs too late,
    and it has 0 benefit for that specific upload. For future ones? maybe,
    but that's to be proven, and the burden of proof is on the proponent.

    Someone with upload rights still needs to verify (and build!) a package
    locally, so what would be the advantage of this CI for our packages,
    given only a very, very tiny number of MRs are submitted?

    i could see the benefit for projects that receive external
    contributions and/or are released out-of-sync with such contributions
    (say dh-python) but for /packages/, as Carsten said, it's a waste of
    CPU time to enable CI, IMO

    --
    Sandro "morph" Tosi
    My website: http://sandrotosi.me/
    Me at Debian: http://wiki.debian.org/SandroTosi
    Twitter: https://twitter.com/sandrotosi

  • From Emanuele Rocca@21:1/5 to Carsten Schoenert on Wed Sep 21 12:00:01 2022
    Hallo Carsten,

    On 2022-09-20 05:35, Carsten Schoenert wrote:
    > On 20.09.22 at 16:13, Emanuele Rocca wrote:
    > > Salsa CI is useful because it automatically performs binary/source
    > > builds and arm64 crossbuilds, and it runs various pretty important
    > > tests such as lintian, piuparts, reproducible build testing, and
    > > more. It also runs autopkgtest in LXC.
    >
    > Most of these steps I usually need to do locally anyway before I
    > upload any packages.

    Well, but that's the whole point of automated testing. There's no
    *need* to do it locally if it's already done by Salsa for you. What is
    already automated and working pretty well is:

    - amd64 build
    - i386 build
    - source build
    - autopkgtest
    - blhc
    - lintian
    - piuparts
    - reprotest
    - arm64 crossbuild

    That's a pretty time consuming list of things to go through for a human!

    The only work left to be done on your machine is a binary build to see
    if the packages look good, perhaps some specific manual testing [1],
    source build and upload. Isn't that better?

    [1] though that may be an opportunity for writing a new autopkgtest!
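    As an aside, a minimal autopkgtest really is small: for a Python module
    package it can be a single stanza in debian/tests/control (the package
    and module names below are made up for illustration):

```
Test-Command: python3 -c "import example"
Depends: python3-example
Restrictions: superficial
```

    Even a superficial import test like this catches missing dependencies
    and broken installs on every CI run.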

  • From Samuel Thibault@21:1/5 to All on Wed Sep 21 12:40:01 2022
    Hello,

    Emanuele Rocca, on Wed 21 Sep 2022 12:01:21 +0200, wrote:
    > The only work left to be done on your machine is a binary build to see
    > if the packages look good, perhaps some specific manual testing [1],
    >
    > [1] though that may be an opportunity for writing a new autopkgtest!

    Yes, nowadays autopkgtest does more testing than what I was previously
    doing :)

    (and it prevents other packages from breaking mine).

    Samuel

  • From Emanuele Rocca@21:1/5 to Sandro Tosi on Wed Sep 21 12:20:02 2022
    Hi,

    On 2022-09-20 03:09, Sandro Tosi wrote:
    > the vast majority of team members (based on the commit emails i
    > receive) are uploading the package to the archive at the same time as
    > they are pushing a full set of changes to salsa (and sometimes only
    > *after* the package has been ACCEPTED); in this case CI runs too late,
    > and it has 0 benefit for that specific upload.

    Very interesting, I was missing this piece of information. So you first
    do all the work locally, perform all the testing manually, upload the
    package to ftp-master and *then*, when you're finished, push to Salsa?

    What's wrong with pushing your work before uploading to ftp-master
    instead? :-)

    If you're worried about breaking things, that's what git revert and/or
    branches are for. I can imagine that one doesn't like frequent
    merge requests and merge commits, but you can skip those too: just use
    a remote branch for testing and only push to master once happy.

    My workflow is roughly:

    - while not done:
      - a few local commits, binary build, basic local testing
      - git push
      - see if the pipeline is green
    - source build, sign, upload
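    Spelled out as a dry run (sbuild, debsign and dput are assumed tool
    choices, not something this thread prescribes; run() prints each
    command instead of executing it, so the sketch is side-effect free):

```shell
# Dry-run sketch of the loop above; run() is a stub that prints the
# command it would execute.
run() { printf '+ %s\n' "$*"; }

done_yet=false
until $done_yet; do
    run git commit -a                  # a few local commits
    run sbuild                         # binary build, basic local testing
    run git push                       # push; the Salsa pipeline starts
    # ...check that the pipeline is green on salsa.debian.org...
    done_yet=true                      # stub exit; really: loop until happy
done
run dpkg-buildpackage -S               # source build
run debsign ../example_1.0-1_source.changes
run dput ftp-master ../example_1.0-1_source.changes
```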

    To me this seems a better approach in terms of team collaboration too.
    While you iterate on your work it's clear to other team members that
    someone is on the package, which may help in terms of avoiding
    duplicated efforts.

  • From Arnaud Ferraris@21:1/5 to All on Wed Sep 21 13:20:01 2022
    Hi,

    On 20/09/2022 at 17:35, Carsten Schoenert wrote:
    > Hi,
    >
    > On 20.09.22 at 16:13, Emanuele Rocca wrote:
    > > Salsa CI is useful because it automatically performs binary/source
    > > builds and arm64 crossbuilds, and it runs various pretty important
    > > tests such as lintian, piuparts, reproducible build testing, and
    > > more. It also runs autopkgtest in LXC.
    >
    > Most of these steps I usually need to do locally anyway before I
    > upload any packages. So I see no real gain in running a pipeline by
    > default; for me this would just be burning energy in CPU cycles
    > "because we can".
    >
    > CI/CD makes sense to me in a broader context, such as checking that a
    > version upgrade of package A doesn't break other packages, e.g.
    > whether all packages that now need to use a new version of pytest,
    > setuptools, django etc. still work. But that's not covered by the way
    > the default CI pipeline currently works (in my POV).
    >
    > So no, for me it currently makes no sense to enable CI for ALL
    > packages by default!

    It all depends on your workflow indeed, and I assume nothing would
    prevent anyone from disabling CI on a per-repo basis (or, depending on
    the final consensus, enabling it on a per-repo basis instead).

    Just to give some feedback on this, both the DebianOnMobile and Mobian
    teams have CI enabled for all repos, along with 2 group runners for
    specific things (native arm64 builds and some non-packaging-related
    jobs needing KVM).

    Over the past 2 years this has proven extremely useful for the
    following reasons:
    - it suits our workflow (develop locally, test that it builds, then
    push and let CI handle the rest)
    - it doesn't require developers to manually run autopkgtest, lintian,
    piuparts or blhc (which they could forget or not have the time to do),
    so all those tests are executed anyway, bringing immediate attention
    should any issue arise
    - we also get the benefit of reprotests, which would be heavier to do
    locally
    - a significant portion of the team members have little experience with
    Debian packaging, so having all these checks automated allows them to
    focus on quality packaging rather than implementing a complete workflow
    including tests etc.

    Opinions may differ of course, and both the aforementioned teams are
    very small (both in terms of members and packages) compared to the
    Python team, but in our case we would definitely miss CI if it weren't
    there.

    Cheers,
    Arnaud


    > We have automatic Lintian checks, the buildds themselves, and also
    > the autopkgtest infrastructure; why duplicate such things? That's a
    > waste of energy and resources! The packages don't get better by
    > running tests multiple times within the same environment.
    > And given my experience in other teams and groups, nobody really
    > cares about packages that fail some tests within a CI run. I strongly
    > doubt it would be better here.
    >
    > > Sure you can do all this manually on your own, but it's better to
    > > live in a world where the machines work for us rather than the
    > > other way around. :-)
    >
    > I still don't see why this is a benefit.
    > Using a CI option within your own namespace is another thing; doing
    > so makes sense to me to prepare a new version for uploading.


  • From Brian May@21:1/5 to Emanuele Rocca on Fri Sep 23 01:10:01 2022
    Emanuele Rocca <ema@debian.org> writes:

    > What's wrong with pushing your work before uploading to ftp-master
    > instead? :-)

    I am learning to do this with my packages.

    Because otherwise, when I push to git, I often find I forgot to do a
    pull beforehand, and there are changes in salsa that are not reflected
    in the upload I just did, and as a result I have a bit of a mess to try
    and resolve.

    But still, I need to remember to do it in this order.

    The normal solution would be to get the CI to upload the new changes
    automatically, but I imagine there are going to be problems here
    regarding control of the GPG key required to sign the changes file.
    --
    Brian May <brian@linuxpenguins.xyz>
    https://linuxpenguins.xyz/brian/

  • From Sandro Tosi@21:1/5 to All on Fri Sep 23 04:40:01 2022
    > Well, but that's the whole point of automated testing. There's no
    > *need* to do it locally if it's already done by Salsa for you. What
    > is already automated and working pretty well is:
    >
    > - amd64 build
    > - i386 build
    > - source build
    > - autopkgtest
    > - blhc
    > - lintian
    > - piuparts
    > - reprotest
    > - arm64 crossbuild
    >
    > That's a pretty time consuming list of things to go through for a
    > human!
    >
    > The only work left to be done on your machine is a binary build to
    > see if the packages look good, perhaps some specific manual testing
    > [1], source build and upload. Isn't that better?

    sure it's better; now let's assume something in those tests fails: how
    do you debug and fix it? you still need to repeat it locally = you
    wasted time.

    In conclusion, you're not only proposing a technical change (add CI to
    all our packages), but also a workflow change (push to salsa before an
    upload). experience dictates that's never a good idea, and in such a
    heterogeneous team like ours, it's simply not gonna bear the fruit
    you think it will.

    I still think it's a waste of time, and an addition of emails that
    we're going to simply ignore (or not receive at all, if you're not
    subscribed to tracker.d.o, which I suspect is the vast majority of
    team members), but if the majority of the core contributors want it,
    sure, go ahead

    --
    Sandro "morph" Tosi
    My website: http://sandrotosi.me/
    Me at Debian: http://wiki.debian.org/SandroTosi
    Twitter: https://twitter.com/sandrotosi

  • From Carsten Schoenert@21:1/5 to All on Fri Sep 23 07:10:01 2022
    Hello Emanuele,

    On 21.09.22 at 12:01, Emanuele Rocca wrote:
    > Well, but that's the whole point of automated testing. There's no
    > *need* to do it locally if it's already done by Salsa for you. What
    > is already automated and working pretty well is:
    >
    > - amd64 build
    > - i386 build
    > - source build
    > - autopkgtest
    > - blhc
    > - lintian
    > - piuparts
    > - reprotest
    > - arm64 crossbuild
    >
    > That's a pretty time consuming list of things to go through for a
    > human!

    Sure, that's a killer argument that I can't really argue against. But
    that is not the point for me.

    For all these checks we already have existing infrastructure; running
    the same things also in a pipeline job isn't helping at all if it's not
    clear how to handle the fallout (as we have already seen in other
    places too!).

    As Sandro and Arnaud have pointed out, it's probably mostly a matter of
    the workflow for a package upload. And for me the CI pipeline stuff
    right now doesn't really fit into the package upload workflow that is
    typically used.

    Using the CI stuff in your own namespace is perfectly fine and I'm
    using this option from time to time. But there I also use the
    possibility to do heavy force-pushing so as not to blow up the git tree
    with dozens of fixup commits! In the 'official' git tree this is a
    no-go of course.

    Nobody is perfect, and every Python package will have its own small
    differences. As long as we don't have *one* Debian way to build
    packages and a helpful way to deal with breakage in any of the test
    runs, it will always be a waste of time and energy to run CI for all
    packages at all times!

    If the decision is to do this step I will simply need to ignore any
    errors that are not RC.

    > The only work left to be done on your machine is a binary build to
    > see if the packages look good, perhaps some specific manual testing
    > [1], source build and upload. Isn't that better?

    I do all package builds locally as an all/any build run.
    As written above, I like atomic git commits that do things "correctly",
    so that by looking at a commit it's clear why it was done.
    I have to "fight" enough at my day job with colleagues who commit every
    forward and backward step without cleaning up locally before pushing
    their stuff, so I need to spend a lot of time working out their changes
    and what they basically mean. You would end up with the same in the
    packages here, as people would commit again and again to fix up the
    packages.

    I stand by my thinking: it's not helpful to enable a global CI for all
    packages. Doing this case by case is absolutely fine with me.

    Arnaud Ferraris has written about the usage of a CI option in Debian
    Mobile etc.
    His writing affirms my view, as I have had the same experience within
    the PureOS ecosystem. People there work the same way I described:
    packages are prepared in the personal namespace and, if CI runs
    successfully there, then a push to the team namespace is done.

    --
    Regards
    Carsten

  • From Stefano Rivera@21:1/5 to All on Fri Sep 23 08:30:01 2022
    Hi Carsten (2022.09.23_05:01:05_+0000)
    > Sure, that's a killer argument that I can't really argue against. But
    > that is not the point for me.
    >
    > For all these checks we already have existing infrastructure; running
    > the same things also in a pipeline job isn't helping at all if it's
    > not clear how to handle the fallout (as we have already seen in other
    > places too!).

    Yeah, it's similar for me. I test build locally, my sbuild setup does
    most (but not all) of the same checks as gitlab CI. Then when I'm happy
    I push and upload. If there is any gitlab CI, it runs too late. And if
    it fails, I usually don't even bother to investigate, because I trust my
    local setup implicitly. Anything that's failing in gitlab CI is almost
    certain to be a failure specific to gitlab CI.

    I do see a value in having it enabled globally, for the team, though.

    1. It can make the team packages friendlier to new contributor team
       members who don't have a setup like that.
       I would like to see our team act more like a team and have people
       contribute to packages that they don't regularly maintain.
    2. Getting a test failure on a merge request catches contributor
       mistakes early. I love having CI on incoming patches like that.

    SR

    --
    Stefano Rivera
    http://tumbleweed.org.za/
    +1 415 683 3272

  • From Nicolas Chauvat@21:1/5 to All on Fri Sep 23 09:00:02 2022
    Hi Carsten, Hi List,

    On Fri, Sep 23, 2022 at 07:01:05AM +0200, Carsten Schoenert wrote:
    > heavy force-pushing so as not to blow up the git tree with dozens of
    > fixup commits! In the 'official' git tree this is a no-go of course.

    Would doing the work in a git branch and 'git merge --squash' at the
    end be a solution to this problem?
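    That is one way to do it in git; a self-contained sketch of the squash
    workflow (repository, file and branch names are made up):

```shell
# Messy work happens on a throwaway branch; master gets one clean commit.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b master
git config user.email you@example.org && git config user.name You
echo base > pkg.txt && git add pkg.txt && git commit -qm "initial packaging"

git switch -q -c wip-ci-fixes            # iterate here, force-push freely
echo fix1 >> pkg.txt && git commit -qam "fixup: try 1"
echo fix2 >> pkg.txt && git commit -qam "fixup: try 2"

git switch -q master
git merge --squash -q wip-ci-fixes       # stage the net change, no commit yet
git commit -qm "Fix CI failures"         # one clean commit on master
git log --format=%s                      # newest subject first
```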

    I have the same issue when trying to use CI to run tests instead of
    running them locally, but since I use Mercurial, I just 'hg amend' them
    and end up with a clean history.

    <shamelessplug>
    With Mercurial and its concept of obsolete commit combined with the
    evolve extension, a team can amend commits and share these amended
    commits without anyone losing work.

    I never found the equivalent in git, where rewriting history to clean
    it up once the dust has settled breaks every repository that has
    already pulled those commits.

    In other words, Mercurial allows you to work in a decentralized fashion
    both on your source and on the history of your source.
    </shamelessplug>

    --
    Nicolas Chauvat

    logilab.fr - scientific computing and knowledge management services
