Hello debian-salsa-ci and debian-python!
I was wondering if it would make sense to enable CI/CD on Salsa for all projects owned by the Debian Python Team, or if there's any concern
about scaling issues in terms of pipeline workers (or anything else
really).
For the past few days I've been enabling CI/CD on Salsa for various
packages owned by the DPT. I've been doing this on a case-by-case basis:
if the package I wanted to work on (for reasons unrelated to CI) did not
have CI/CD yet, I'd add [1] as the pipeline configuration file and carry
on with my work.
Perhaps there's an opportunity to automate this and get wider CI usage.
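To make that concrete, here's a minimal sketch of what the automation
could look like, using the GitLab REST API and the value of [1] below.
It assumes the team's packages live under the python-team/packages
group on Salsa and a token with 'api' scope in SALSA_TOKEN; group path,
token handling and the overall approach are assumptions, nothing the
team has settled on:

    # Sketch only: set the Salsa CI pipeline configuration on every
    # team project that doesn't have one yet.
    import os
    import requests

    API = "https://salsa.debian.org/api/v4"
    HEADERS = {"PRIVATE-TOKEN": os.environ["SALSA_TOKEN"]}
    CI_CONFIG = "recipes/debian.yml@salsa-ci-team/pipeline"

    def group_projects(group="python-team%2Fpackages"):
        # Yield all projects in the group, following GitLab's pagination.
        page = 1
        while True:
            r = requests.get(f"{API}/groups/{group}/projects",
                             headers=HEADERS,
                             params={"per_page": 100, "page": page})
            r.raise_for_status()
            batch = r.json()
            if not batch:
                return
            yield from batch
            page += 1

    for project in group_projects():
        if not project.get("ci_config_path"):
            # Setting ci_config_path does not trigger a pipeline by
            # itself; CI only runs on the next push.
            r = requests.put(f"{API}/projects/{project['id']}",
                             headers=HEADERS,
                             json={"ci_config_path": CI_CONFIG})
            r.raise_for_status()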
Thanks,
Emanuele
[1] recipes/debian.yml@salsa-ci-team/pipeline
> [...]
> Perhaps there's an opportunity to automate this and get wider CI usage.
One of the biggest issues we had when a team adopted the pipeline was
the DDoSing of the instance by the multiple pipelines generated when
pushing the .gitlab-ci.yml file to all the projects.
If you're planning to do this, please:
- Use the API to configure the 'CI/CD configuration file' project
field, as you mentioned in the email. This won't generate a pipeline
when configured, only on the next push.
- If you need to create the .gitlab-ci.yml file, please use the
`ci.skip` [1] push option (see the sketch below).
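For the second point, a minimal sketch of what that could look like,
assuming a local clone and a remote that honours GitLab push options;
the branch name, commit message, and the raw include URL (mirroring the
recipe from the first mail) are illustrative assumptions:

    # Sketch only: commit a .gitlab-ci.yml and push it without
    # triggering a pipeline, via GitLab's 'ci.skip' push option.
    import subprocess

    CI_YML = """include:
      - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
    """

    def push_ci_config(repo_dir, branch="debian/master"):
        with open(f"{repo_dir}/.gitlab-ci.yml", "w") as f:
            f.write(CI_YML)
        for cmd in (["git", "add", ".gitlab-ci.yml"],
                    ["git", "commit", "-m", "Add Salsa CI configuration"],
                    # 'ci.skip' tells GitLab not to start a pipeline for
                    # this push; CI will run on later pushes as usual.
                    ["git", "push", "-o", "ci.skip", "origin", branch]):
            subprocess.run(cmd, cwd=repo_dir, check=True)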
Thanks, and good luck :)
> I was wondering if it would make sense to enable CI/CD on Salsa for
> all projects owned by the Debian Python Team, or if there's any
> concern about scaling issues in terms of pipeline workers (or anything
> else really).
what would the team get out of doing this?
On 19/09 02:14, Sandro Tosi wrote:
> what would the team get out of doing this?
The way I see it, CI on Salsa is so useful that it should be enabled by default unless there are good reasons not to.
the way I worded my initial question was so that you could list the
reasons that make it so useful, in detail: so, would you like to do that?
Salsa CI is useful because it automatically performs binary/source builds, arm64 crossbuilds, and it runs various pretty important tests such as lintian,
piuparts, reproducible build testing, and more. It also runs autopkgtest in LXC.
Sure you can do all this manually on your own, but it's better to live in a world where the machines work for us rather than the other way around. :-)
On 20.09.22 at 16:13, Emanuele Rocca wrote:
>>> Salsa CI is useful because it automatically performs binary/source
>>> builds, arm64 crossbuilds, and it runs various pretty important
>>> tests such as lintian, piuparts, reproducible build testing, and
>>> more. It also runs autopkgtest in LXC.
>> Most of these steps I usually need to do locally anyway before I
>> upload any packages.
> The only work left to be done on your machine is a binary build to see
> if the packages look good, perhaps some specific manual testing [1],
> [1] though that may be an opportunity for writing a new autopkgtest!
the vast majority of the team members (based on the commit emails I
receive) are uploading the package to the archive at the same time as
they are pushing a full set of changes to salsa (and sometimes only
*after* the package has been ACCEPTED); in this case CI runs too late,
and it has zero benefit for that specific upload.
Hi,
On 20.09.22 at 16:13, Emanuele Rocca wrote:
> Salsa CI is useful because it automatically performs binary/source
> builds, arm64 crossbuilds, and it runs various pretty important tests
> such as lintian, piuparts, reproducible build testing, and more. It
> also runs autopkgtest in LXC.
Most of these steps I usually need to do locally anyway before I
upload any packages. So I see no real gain in running any pipeline by
default; for me this would just be burning energy in CPU cycles
"because we can".
CI/CD makes sense to me within a greater view, such as checking that a
version upgrade of package A doesn't break stuff in other packages,
e.g. whether all the packages that now need to use a new version of
pytest or setuptools, django etc. still work. But that's not possible
with the way the default CI pipeline currently works (in my POV).
So no, for me it currently makes no sense to enable a CI thingy for ALL
packages by default!
We have automatic Lintian checks, the buildds themselves, and also the
autopkgtest infrastructure; why duplicate such things? That's a waste
of energy and resources! The packages don't get better by running tests
multiple times within the same environment.
And given my experience in other teams and groups, nobody really cares
about packages that fail some tests within a CI run. I strongly believe
it wouldn't be any better here.
> Sure you can do all this manually on your own, but it's better to
> live in a world where the machines work for us rather than the other
> way around. :-)
I still don't see why this is a benefit.
Using CI within your own namespace is another thing; doing so makes
sense to me for preparing a new version for uploading.
What's wrong with pushing your work before uploading to ftp-master
instead? :-)
Well but that's the whole point of automated testing. There's no *need*
to do it locally if it's already done by Salsa for you. What is already automated and working pretty well is:
- amd64 build
- i386 build
- source build
- autopkgtest
- blhc
- lintian
- piuparts
- reprotest
- arm64 crossbuild
That's a pretty time consuming list of things to go through for a human!
The only work left to be done on your machine is a binary build to see
if the packages look good, perhaps some specific manual testing [1],
source build and upload. Isn't that better?
[1] though that may be an opportunity for writing a new autopkgtest!
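As for the concern that CI runs too late when pushes and uploads happen
together: one could make a habit of checking the latest pipeline right
before uploading. A minimal sketch, assuming a token in SALSA_TOKEN and
a debian/master packaging branch (both assumptions, not team policy):

    # Sketch only: look up the most recent pipeline for a project's
    # packaging branch before uploading.
    import os
    import requests

    API = "https://salsa.debian.org/api/v4"
    HEADERS = {"PRIVATE-TOKEN": os.environ["SALSA_TOKEN"]}

    def latest_pipeline_status(project_path, ref="debian/master"):
        # project_path must be URL-encoded,
        # e.g. "python-team%2Fpackages%2Ffoo" (hypothetical package).
        r = requests.get(f"{API}/projects/{project_path}/pipelines",
                         headers=HEADERS,
                         params={"ref": ref, "per_page": 1})
        r.raise_for_status()
        pipelines = r.json()
        return pipelines[0]["status"] if pipelines else None

    # e.g. refuse to upload unless the last pipeline succeeded:
    # assert latest_pipeline_status(...) == "success"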
> Well but that's the whole point of automated testing. There's no
> *need* to do it locally if it's already done by Salsa for you. What is
> already automated and working pretty well is:
> - amd64 build
> - i386 build
> - source build
> - autopkgtest
> - blhc
> - lintian
> - piuparts
> - reprotest
> - arm64 crossbuild
> That's a pretty time consuming list of things to go through for a
> human!
> The only work left to be done on your machine is a binary build to see
> if the packages look good, perhaps some specific manual testing [1],
> source build and upload. Isn't that better?
Sure, that's a killer argument that I can't really argue against. But
that is not the point for me.
For all these checks we already have existing infrastructure; running
the same things again in a pipeline job doesn't help at all if it's not
clear how to handle the fallout (as we have already seen in other
places!).
In your own namespace you can also iterate by heavily force-pushing so
as not to blow up the git tree with dozens of fixup commits! In the
'official' git tree this is a no-go, of course.