Jonas Smedegaard <jonas@jones.dk> writes:
I just don't think the solution is to ignore copyright or licensing statements.
That's not the goal. The question, which keeps being raised in part
because I don't think it's gotten a good answer, is what the basis is for treating copyright and licensing bugs differently than any other bug in Debian?
The need for pre-screening was obvious when we had export control issues,
but my understanding is that those have gone away. Are we working from
legal advice telling us that this pre-screening is required for some legal purpose? If so, is it effective for the legal purpose at which it is
aimed? Is this system left over from old advice? Have we checked our assumptions recently?
NEW processing is a lot of friction for the project as a whole and a lot
of work for the ftp team. If we were able to do less work at the cost of
a minimal increase in bugs, or at the cost of handling bugs a bit differently, maybe that would be a good thing?
In other words, it's unclear what requirements we're attempting to meet
and what the basis of those requirements is, which makes it hard to have a conversation about whether the current design is the best design for the problem we're trying to solve.
Maybe an intermediate step would be to not hide packages in the NEW queue
but to expose them as an apt source. If I'm correct, this is currently not done because it could have legal consequences for the project if code with
certain non-free licenses were downloadable from some debian.org
address. Maybe NEW could be considered a kind of pre-non-free area, as
long as it is not checked and those legal consequences no longer apply to
us. But I'm not educated in international law - just asking
whether somebody might know better.
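For illustration only: if the project ever decided to expose NEW this way, the opt-in apt source might look like the sketch below. The host name and suite label are invented; no such archive exists today.

  # /etc/apt/sources.list.d/new-queue.list -- hypothetical, for illustration
  # Everything here would be unreviewed; treat it like pre-screened non-free.
  deb [trusted=no] https://new-queue.debian.org/debian new main

Whether such a source could be offered without the legal exposure described above is exactly the open question.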
On Tue, Jan 25, 2022 at 01:45:11PM -0800, Russ Allbery wrote: [...]
I'm CCing debian-legal for this branch of the discussion (but I do not
read that list, so I think keeping debian-devel in the loop is a good idea).
I thought the basis was the fact that copyright and licensing bugs may
have bad legal consequences (lawsuits against the Project for
distributing legally undistributable packages, things like that), while technical bugs do not cause issues with lawyers and are, in this sense, "easier" to fix.
I am under the impression that the pre-screening in the NEW queue is an attempt to catch legal issues *before* the package is introduced into
the archive.
Personally, I think the legal pre-screening by the FTP masters in the
NEW queue is useful and should be kept.
In fact, I wish the pre-screening were stricter.
I've seen cases where a bug is reported against a legally
undistributable package and the issue is left unaddressed for ages, with nobody apparently caring enough.
I do think that the amount of effort that the project puts into this pre-screening is of sufficiently high magnitude that it would be worth
paying a lawyer for a legal opinion about whether or not we need to do
it. The savings to the project if we found out that we didn't, or that we could do something simpler and more easily automated, would be more than
the cost of the legal opinion.
Francesco Poli <invernomuto@paranoici.org> writes:
I thought the basis was the fact that copyright and licensing bugs may
have bad legal consequences (lawsuits against the Project for
distributing legally undistributable packages, things like that), while
technical bugs do not cause issues with lawyers and are, in this sense,
"easier" to fix.
Sure, everyone says this, but is this *true*?
The free software community has a tendency to assume a lot of things about laws that aren't actually true. Sometimes this is useful conservatism,
since there are a lot of legal areas for which the answer is not
clear-cut, and if it doesn't matter much either way, better to avoid any sharp corners. But in this case, this assumption has a very high cost for the project, so maybe it's worth finding out whether it actually matters.
As people have pointed out in the numerous previous iterations of this discussion, it's not like the ftp-master screen eliminates all copyright
and licensing bugs on project services. I'm sure that we've accidentally pushed non-distributable material to Salsa, we did to Alioth before that, ftp-master will occasionally make mistakes, etc.
We should act with alacrity to remedy serious copyright and licensing
bugs; no one is arguing against that. But is it legally necessary to take the very specific measure that we are currently taking against them?
On Sun, Jan 30, 2022 at 8:35 PM Russ Allbery <rra@debian.org> wrote:
I do think that the amount of effort that the project puts into this
pre-screening is of sufficiently high magnitude that it would be worth
paying a lawyer for a legal opinion about whether or not we need to do
it. The savings to the project if we found out that we didn't, or that we
could do something simpler and more easily automated, would be more than
the cost of the legal opinion.
+1
Looking at the last financial numbers I found [1], we have at least
~750k USD we could use for this purpose. I don't really know how
expensive lawyers are, but I doubt that this would cost more than 10k.
Heck, we could even hire two lawyers without any significant financial
impact (maybe in the US and EU as I think these are probably the most prominent areas for potential copyright lawsuits), even if it costs
50k.
IMHO that would be totally worth it. And instead of investing scarce man-hours into pre-screening, we could create a money pool for
financial support in case there is a copyright lawsuit. The
pre-screening in NEW doesn't prevent someone from claiming copyright infringement anyway, there is just a smaller chance that the lawsuit
is justified. But sadly even winning a lawsuit can still cost a
significant amount of money.
On Sun, Jan 30, 2022 at 8:35 PM Russ Allbery <rra@debian.org> wrote: [...]
Even if a lawyer says A, it doesn't buy us anything if J Robert DD
gets sued and the judge says B, or "not A".
If I compare how other mediums handle copyright violations, most
services have a "report infringed copyright here" button on their
site (e.g. YouTube). For example, we could write a DMCA policy like
GitHub's [2], hyperlink it in the footer of all our official websites,
make a small "debian-dmca" tool that is always available in our builds
to file claims, and provide infrastructure to process such claims.
I highly doubt that anyone will ever directly start a lawsuit instead of sending a cease-and-desist letter first; I'm not even sure whether it is
legal to start a lawsuit without doing this first.
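A minimal sketch of what such a hypothetical "debian-dmca" helper could look like, in Python; the claims endpoint, field names, and response format are all invented for illustration, since no such service exists:

  #!/usr/bin/env python3
  """Sketch of a claim-filing helper; the service it talks to is hypothetical."""
  import argparse
  import json
  import urllib.request

  CLAIMS_URL = "https://dmca.debian.org/api/claims"  # invented endpoint

  def file_claim(package: str, work: str, contact: str) -> str:
      """Submit a claim and return the tracking ID the (invented) service assigns."""
      payload = json.dumps({
          "package": package,   # Debian package alleged to infringe
          "work": work,         # description of the copyrighted work
          "contact": contact,   # claimant contact address
      }).encode()
      req = urllib.request.Request(CLAIMS_URL, data=payload,
                                   headers={"Content-Type": "application/json"})
      with urllib.request.urlopen(req) as resp:
          return json.load(resp)["claim_id"]

  if __name__ == "__main__":
      parser = argparse.ArgumentParser(description="File a copyright claim (sketch)")
      parser.add_argument("package")
      parser.add_argument("work")
      parser.add_argument("contact")
      args = parser.parse_args()
      print(file_claim(args.package, args.work, args.contact))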
Even if a lawyer says A, it doesn't buy us anything if J Robert DD gets
sued and the judge says B, or "not A".
On Mon, Jan 31 2022 at 10:07:32 AM +0100, Stephan Lachnit <stephanlachnit@debian.org> wrote:
Is there any precedent of a lawsuit against Debian due to copyrighted [...]
On Sun, Jan 30, 2022 at 8:35 PM Russ Allbery <rra@debian.org> wrote: [...]
I agree. We should get real lawyers involved, pay, and settle this issue
once and for all. As a maintainer who maintains a large number of
packages, the NEW queue is a big friction point for me personally and I'd
be very happy to see a solution for it other than the status quo. Even
if the status quo is correct, I'd like this to be backed by a legal
opinion that we can rely on.
[...] - Linux Foundation
Ultimately, Debian is not bound to a particular territory? United Nations and its satellites [5] [...]
A lawyer cannot make that risk trade-off decision for us. We'll have to
make it as a project. But my hope would be that they could help put a
number on the likely legal cost in the worst-case scenario and provide
some input into the likelihood of that scenario, and some context in terms
of what other organizations do and what risks it's common to accept and remediate if they become a problem.
Marc Haber <mh+debian-devel@zugschlus.de> writes:
Even if a lawyer says A, it doesn't buy us anything if J Robert DD
gets sued and the judge says B, or "not A".
Yes, a legal opinion cannot fully resolve the question,
unfortunately, since it's a risk judgment. Copyright law is murky
enough that it's unlikely that any lawyer will be willing to
guarantee that we won't lose a lawsuit, and of course no one can
guarantee that we won't be sued.
What a lawyer can do is give us a better risk analysis. How *likely*
is it that we would be sued over such a thing, and if we were, what
would happen then? How much would it cost us to dispose of the
resulting lawsuit?
I think it's useful to view this as a price. We're paying a quite substantial price right now to implement pre-screening. If we
increase the risk that we may temporarily distribute something that
we shouldn't until we discover that and fix it, that comes with some corresponding increased risk of a legal cost. But in the meantime
we'd be saving a substantial pre-screening cost.
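To make the price framing concrete, here is a toy comparison; every number below is an invented placeholder, not an estimate anyone in this thread has made:

  # Toy model: annual cost of pre-screening vs. expected legal cost without it.
  # All figures are invented placeholders for illustration.
  screening_hours_per_year = 2000        # volunteer time spent on NEW review
  hour_value_usd = 50                    # notional value of a volunteer hour
  screening_cost = screening_hours_per_year * hour_value_usd

  lawsuit_probability = 0.02             # assumed chance per year of a costly dispute
  lawsuit_cost_usd = 250_000             # assumed worst-case legal cost
  expected_legal_cost = lawsuit_probability * lawsuit_cost_usd

  print(f"pre-screening: ~${screening_cost:,.0f}/yr")
  print(f"expected legal cost without it: ~${expected_legal_cost:,.0f}/yr")

The point of a legal opinion would be to replace the two assumed numbers with something better than guesses.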
I am not on the inside of these things, certainly, but I have kept my
eyes open from the outside, and I am not aware of there being any
mechanism for removing something root-and-branch - across all affected versions, however far back those may stretch - from these repositories
and archive locations once it's made it in. In order to avoid continuing
to distribute something which we once accepted but which has since been deemed legally undistributable (and thus exposing ourselves to copyright-infringement lawsuits), we would need to have such a
mechanism.
As for getting legal advice, we do have an existing contract with Aaron
K. Williamson of Williamson Legal, PLLC (https://www.akwlc.com/). His specialty is Open Source software, technology, licensing and contracts,
so he would be a good person to ask specific questions about this, and
since we have an existing contract with him, it's really easy to set up
a video call / email thread with him if anyone wants to get some advice
from him. So if anyone has some time / energy to put together some
concrete questions / examples (and ideally also recruit one or more
people from FTP team to be involved), then I'd be happy to do the introductions and set that up.
On Mon, Jan 31, 2022 at 10:47 AM Jonathan Carter <jcc@debian.org> wrote:
As for getting legal advice, we do have an existing contract with Aaron
K. Williamson of Williamson Legal, PLLC (https://www.akwlc.com/). [...]
Thanks for this information, Jonathan.
I would volunteer to gather and formulate questions and put
them to Aaron Williamson.
I suggest the best approach would be to start with an IRC or video-call
session for everyone interested, to issue a "call for questions to ask",
look through the questions, and formulate a document we can send
to Aaron Williamson. Then we can discuss the questions in a video call,
and again formulate a document with the answers from Aaron Williamson.
The next step would then probably be a GR.
However, I am currently in my exam phase until February 14th; after
that I have quite a lot of time.
Regards,
Stephan
I seem to remember we retained actual outside counsel, last I knew. Is that
still the case?
This request ought to come from the ftp team if we do do this, fwiw.
For what it is worth I concur with everything that Russ has written, and would like to have us look at this again (and that's honestly not particularly because I currently have the honour of the 6th-oldest
package in NEW (8 months) :-)). In general I have found NEW valuable as FTP-masters sometimes spot things that I missed, but the delay, and
perhaps worse, the highly uncertain length of the delay (anything from a
day to a year), is a significant cost and drag, and it seems
increasingly anachronistic as the rest of the software ecosystem seems
to accelerate around us (not entirely a good thing, of course). Who
needs quality when you can have updates, eh?
On Tue, Feb 01, 2022 at 09:18:07AM -0800, Russ Allbery wrote:
I would hate to entirely lose the quality review that we get via NEW,
but I wonder if we could regain many of those benefits by setting up some
sort of peer review system for new packages that is less formal and
less bottlenecked on a single team than the current NEW processing
setup.
What do you think, would it be more or less staffed than the current RFS review process?
Wookey <wookey@wookware.org> writes:
For what it is worth I concur with everything that Russ has written, and would like to have us look at this again [...]
I would hate to entirely lose the quality review that we get via NEW, but
I wonder if we could regain many of those benefits by setting up some sort of peer review system for new packages that is less formal and less
bottlenecked on a single team than the current NEW processing setup.
On Tuesday, February 1, 2022 12:18:07 PM EST Russ Allbery wrote:
Wookey <wookey@wookware.org> writes: [...]
I would hate to entirely lose the quality review that we get via NEW,
but I wonder if we could regain many of those benefits by setting up some
sort of peer review system for new packages [...]
It's my impression that review of copyright and license considerations
when not going through New is not a priority for most. I doubt making
New go away will make it more so.
Has anyone on the actual FTP team responded to this thread yet? (sorry, I can't remember who that is currently)
Either on Andreas's original simple question: 'Do we still _have_ to keep the binary-NEW thing?'
Or this more complex question: Is NEW really giving us a pain:risk ratio that is appropriate?
Andreas tried hard to get someone to just stick to the first matter
and answer that. I don't recall seeing an answer from FTP-master yet?
Hi Wookey,
On Tue, Feb 01, 2022 at 02:07:21PM +0000, Wookey wrote:
Has anyone on the actual FTP team responded to this thread yet? [...]
Andreas tried hard to get someone to just stick to the first matter
and answer that. I don't recall seeing an answer from FTP-master yet?
Me neither. In my eyes it's a problem that it is hard to communicate
with the ftpmaster team. I tried on IRC as well, but I prefer the mailing
list since this is recorded online.
On Wed, Feb 02, 2022 at 09:39:02AM -0600, John Goerzen wrote:
On Tue, Feb 01 2022, Russ Allbery wrote:
I would hate to entirely lose the quality review that we get via
NEW, but I wonder if we could regain many those benefits by
setting up some sort of peer review system for new packages that
is less formal and less bottlenecked on a single team than the
current NEW processing setup.
This is a fantastic idea.
In fact, it wouldn't have to bottleneck packages at all. I mean,
if a quality issue is found in NEW, wouldn't the same be an RC bug
preventing a transition to testing?
I'm not sure "nobody ever looked at this" is a suitable criteria for inclusion in a stable release. We sort of have that problem now in
crusty corners of the archive if someone uploads a bad change, but at
least there's been one review at some point in the package's
lifetime.
I'm not sure "nobody ever looked at this" is a suitable criteria for inclusion in a stable release. We sort of have that problem now in
crusty corners of the archive if someone uploads a bad change, but at
least there's been one review at some point in the package's
lifetime.
Doesn't that, then, lead to the suggestion that any package entering
unstable without having undergone NEW review (which, in the revised
model, might be every new package) should automatically have a bug filed against it requesting suitable review, and that bug should be treated as
a blocker for entering testing?
That wouldn't help the "someone uploads a bad change" problem for already-accepted packages, but it would seem to avoid the "nobody ever
looked at this" situation.
It would also increase the number of automatically-filed bugs by quite a
lot, I suspect, which would itself be some degree of downside...
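A sketch of what such an automatically filed review bug might contain, using the standard debbugs mail interface; the bot address, the severity choice, and the usertag are assumptions, not an agreed process:

  import email.message

  def review_bug(source: str, version: str) -> email.message.EmailMessage:
      """Compose a hypothetical auto-filed bug meant to block testing migration."""
      msg = email.message.EmailMessage()
      msg["To"] = "submit@bugs.debian.org"
      msg["From"] = "review-bot@example.org"   # hypothetical bot address
      msg["Subject"] = f"{source}: needs post-upload peer review"
      msg.set_content(f"""\
Package: {source}
Version: {version}
Severity: serious
User: review-bot@example.org
Usertags: unreviewed-new

This package entered unstable without NEW review; under the proposed
process it needs one reviewer other than the uploader before it may
migrate to testing.
""")
      return msg

  # import smtplib; smtplib.SMTP("localhost").send_message(review_bug("foo", "1.0-1"))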
On Wed, Feb 02, 2022 at 11:39:11AM -0500, The Wanderer wrote:
Doesn't that, then, lead to the suggestion that any package entering
unstable without having undergone NEW review (which, in the revised
model, might be every new package) should automatically have a bug filed
against it requesting suitable review, and that bug should be treated as
a blocker for entering testing?
Not really, since anyone in the world could close said bug (including the uploader).
On Wed, Feb 02, 2022 at 12:12:30PM -0500, Michael Stone wrote:
Not really, since anyone in the world could close said bug (including
the uploader).
This applies to any RC bug.
On Wed, Feb 02, 2022 at 10:16:36PM +0500, Andrey Rahmatullin wrote:
This applies to any RC bug.
Yes, but in this case it means that we wouldn't have that minimal
standard of at least one person other than the uploader having ever
reviewed the package--which I think is a fairly low bar that we should
meet. (It would be even better if we could add reviews for changes, but
at any rate I don't think we should go backward here.)
Dear list,
On 02/02/2022 18:46, Michael Stone wrote:
Yes, but in this case it means that we wouldn't have that minimal
standard of at least one person other than the uploader having ever
reviewed the package [...]
This is basically a question of social contracts and tooling. It can
IMHO certainly be done.
But isn't this discussion of details moot until we clear out the
fundamentals, such as the legal risk/cost analysis of dropping the NEW
queue in its current form, i.e., the topic of this thread?
And not least, some input from the ftp-masters team -- this discussion
is about a huge change in a process they currently manage.
I am a member of the FTP Team and have been participating, at least a bit, in
this thread. I am not, however, speaking for the team.
I would certainly not support the notion that we have too few licensing documentation bugs in the archive and we can afford to dismantle the one process we have in place that actually makes a difference in this area.
On Thu, Feb 03, 2022 at 09:43:16AM -0500, Scott Kitterman wrote:
I am a member of the FTP Team and have been participating, at least a bit, in this thread. I am not, however, speaking for the team.
Hello Scott, thank you for taking the time to follow this thread, there
are two very specific questions outstanding that those outside the FTP
team would like an answer to - if you're not willing to speak for the
team on these then please can you encourage internal discussion and announcement of the team's opinion.
1. Is it ftpmaster's opinion and policy that there is no difference in
NEW queue review process between bin and src?
Namely that a full copyright review is necessary to catch the kind of issues you noticed and so it is unhelpful to ping a mention on e.g. IRC
that something only needs a lighter review.
Alternatively, is it true that bin-NEW is primarily about
non-copyright checks and only if something looks egregiously wrong it
becomes subject to a full review which may take more time.
https://lists.debian.org/debian-devel/2022/01/msg00226.html
I would certainly not support the notion that we have too few licensing documentation bugs in the archive and we can afford to dismantle the one process we have in place that actually makes a difference in this area.
That is not the challenge being made here. I don't believe anyone is
arguing that licensing documentation bugs would be anything other than
RC bugs according to policy 2.3, just that NEW processing is not the
only possible mitigation for the Debian project's legal risk.
2. Is the ftpmaster team willing and able to select someone to represent
the team in a collaboration with non-team members to seek further legal counsel on the current NEW copyright practices?
Specifically, to compile a list of questions in advance and join a
call where these questions are put, communicate the results to the team
and obviously have buy-in that any changes needed can be worked with.
As examples, there are doubts over: the "abundance of caution"
approach to avoiding redistribution during the review; the above
mentioned copyright review for bin-NEW; whether RC licensing bugs should
be treated differently to other RC bugs.
https://lists.debian.org/debian-devel/2022/01/msg00359.html
I really hope you can help get the answers to these two questions,
because without it there doesn't seem to be a way forward for those with
time available outside the ftpmaster team.
My impression is that people are tired of waiting on New, but no one
really seems to be interested in doing any work on any alternative
other than more bugs.
On Thursday, February 3, 2022 2:40:08 PM EST Phil Morrell wrote:
That is not the challenge being made here. I don't believe anyone is arguing that licensing documentation bugs would be anything other than
RC bugs according to policy 2.3, just that NEW processing is not the
only possible mitigation for the Debian project's legal risk.
Right, but my point is that anyone who wants to work on identifying licensing
and copyright documentation issues in the archive is free to do so today. Anyone can file them and, given appropriate deference to the NMU procedures, anyone can fix them. Nothing the FTP Team is doing or not doing prevents that.
If someone thinks that there is a viable alternate method, then they should demonstrate it. You do not need anyone's permission.
Scott Kitterman <debian@kitterman.com> writes:
...
My impression is that people are tired of waiting on New, but no one
really seems to be interested in doing any work on any alternative
other than more bugs.
Part of the problem is that New processing is a bit of a black box, so
it's not clear to those of us outside the team how we could help.
(or at least, not clear to me -- links welcome).
As a random example, I noticed John Goerzen's post[1] about Yggdrasil on planet.d.o last month. John has since uploaded a package.
As I write it's still in New[2], which is no great shock, as it's only
been a couple of weeks.
I'm quite keen to give it a spin.
So, what can we do with that enthusiasm:
I could grab the source and build it locally.
I could squander my enthusiasm by waiting for New.
I could complain about not being able to download from New, and if
the FTP team listened to me, the enthusiasm would immediately become
unavailable to others, as I'd not be feeling frustrated any more.
If I knew how to do it, and there were some obvious method, I could
contribute to reviewing the package I want to see in the archive.
On reflection, I think that removing the bottle-neck of New would be a mistake, as it would remove the itch we all want to scratch.
Instead please just provide us with the ability to scratch that itch and
you may find that you suddenly have quite a few more volunteers.
In the example above, eventually I will get sufficiently bored of
waiting to build the package myself, which will probably be rather more effort than reviewing it would have been, so I'd much rather be able to
apply that effort in a way that benefits more than just me.
What I read Scott as having been suggesting, by contrast, is that people instead do copyright review for packages already in Debian, which may
well have had changes that did not have to pass through NEW and that
might not have been able to pass the NEW copyright review.
If a practice of doing the latter were established and sufficiently widespread, then it would not be as important to do the review for every package in NEW, and the FTP team might feel less of a need to insist
that the review take place at that stage of things.
The Wanderer <wanderer@fastmail.fm> writes:
What I read Scott as having been suggesting, by contrast, is that people instead do copyright review for packages already in Debian [...]
Various people have different reactions to and opinions about the
necessity of this review, which I understand and which is great for broadening the discussion. But I feel like we're starting to lose track
of my original point, namely that I don't see why we are prioritizing this particular category of bugs over every other type of bug in Debian. The justification has always been dire consequences if we don't stamp out all
of these bugs, but to be honest I think this is wildly unlikely.
In other words, this thread is once again drifting into a discussion of
how to do copyright review *better*, when my original point is that we
should seriously consider not doing the current type of incredibly tedious and nit-picky copyright review *at all*, and instead rely more on
upstream's assertions, automated tools, and being reactive in solving the bugs that people actually care about (i.e., notice).
In other words, what if, when upstream said "this whole package is covered
by the MIT license," we just defaulted to believing them? And if there's some file buried in there that's actually covered by the GPL, we fixed
that when someone brought it to our attention, or when we were able to
detect it with automated tools, but we didn't ask people to spend hours reviewing the license headers on every source file? What, concretely,
would go wrong?
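As one concrete shape the "automated tools" half could take, here is a crude sketch that trusts upstream's declared license but flags files whose headers look like a different one. The two patterns are deliberately simplistic; real tools such as licensecheck do far more:

  import re
  from pathlib import Path

  # Crude header patterns -- illustration only, not a real license matcher.
  LICENSE_PATTERNS = {
      "GPL": re.compile(r"GNU General Public License", re.I),
      "MIT": re.compile(r"Permission is hereby granted, free of charge", re.I),
  }

  def flag_mismatches(tree: Path, declared: str):
      """Yield files whose header suggests a license other than the declared one."""
      for path in tree.rglob("*"):
          if not path.is_file():
              continue
          try:
              head = path.read_text(errors="replace")[:2048]  # headers sit near the top
          except OSError:
              continue
          for name, pattern in LICENSE_PATTERNS.items():
              if name != declared and pattern.search(head):
                  yield path, name

  for path, found in flag_mismatches(Path("."), declared="MIT"):
      print(f"{path}: header looks like {found}, but upstream declared MIT")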
Scott correctly points out that there are a ton of copyright bugs in
Debian *anyway*, despite NEW review. He sees this as a reason for not relaxing our review standards. I see it as the exact opposite: evidence
that our current review standards are not achieving the 100% correctness
we have claimed to be striving for, and the nearly complete lack of
practical consequences for that failure. It really seems to me like
evidence that this task is not as important as we think it is.
Since we're doing strawman arguments in this thread: I disagree with the notion that it's not a problem to put crap packages in the archive and
fix them later if anyone happens to notice.
Scott Kitterman <debian@kitterman.com> writes:
Since we're doing strawman arguments in this thread: I disagree with the notion that it's not a problem to put crap packages in the archive and
fix them later if anyone happens to notice.
No, that's fine, that's not a strawman argument. That is, in fact, my argument: I think it should be okay to put crap packages in the unstable archive and fix them later, and I would rather put more effort into the "noticing" part than in the pre-review part.
We may quibble about what "crap" means and we may disagree about how much
of this potentially could be weeded by automated tools (and I want to be
very clear that I'm not opposed to automated checks and indeed think we should make them stricter), but I think this is a blunt but fair characterization of my position.
To be clear on the nuance I see here, I don't mean that this is "okay" in
the sense that people should feel fine about doing it. I think we should
all aspire to not do that, of course. But I think it should be "okay" in
the sense that I don't think we should invest the level of resources we're currently investing in trying to avoid it, because I think that's causing other significant problems for the project.
My argument in favor of this position is that while it's very obvious to
see the harm from having crap packages in the archive, we're overlooking
the very substantial cost we're paying with our current method of trying
to reduce the frequency with which this happens. I think we're underestimating just how costly and demoralizing dealing with NEW delays
is for project work, and also how much of a drain and competition for resources that is with other archive work that we could be doing.
For example, in just the past two months I have seen two *extremely experienced* Debian developers who maintain extremely high-quality
packages express qualms about package architectures that would fix other
bugs in Debian *solely* because they would force more trips through NEW
and the trip through NEW is so disruptive to their work that it was at
least tempting to accept other bugs in order to avoid that disruption. To me, this indicates that we may have our priorities out of alignment.
Now, all of that being said, I also want to say that I'm sketching out one end of the argument because I think that end has been underrepresented. I don't think this is an all-or-nothing binary choice. We could, for
example, focus our review on only packages that are viewed as riskier
(only packages with maintainer scripts or that start daemons, for
instance, or stop doing NEW review for packages uploaded under the
auspices of well-established Debian teams, or stop doing NEW review for shared libraries whose source packages are already in Debian), all of
which would be improvements from my perspective. We could also do some
parts of NEW review and not others and see if that makes it more
attractive for other people to volunteer. (The manual review for
d/copyright correctness is certainly the part of NEW review that I can't imagine volunteering to do, and I suspect I'm not alone.)
To be clear, as long as the rules in Debian are what they are, I will of course follow them as I promised to do when I became a Debian Developer.
If the project continues to believe that it is of primary importance for
us to be the copyright notice and license catalog review system for the entire free software ecosystem (which is honestly what it feels like we've currently decided to volunteer to do on top of our goal of building a distribution), then I will do my part with the packages that I upload so
that I don't put unnecessary load on the folks doing NEW review. But when we've collectively been doing something for so long, we can lose track of
the fact that it's a choice, and other choices are possible. It's worth revisiting those choices consciously from time to time.
In my opinion, this is a very important train of thought, because [...]
The FTP team review should focus on these types of mistakes, and not
only with new packages: any "suspicious" upload should be rerouted to a POLICY queue for additional verification. There is some prior art with
the auto-REJECT on Lintian errors, which could be extended to a
three-way decision (ACCEPT, POLICY REVIEW, REJECT).
For instance, we could flag epoch bumps or major version numbers which
skip ahead significantly (think 020220101 instead of 0.20220101). We can certainly continue to flag new binaries and potential copyright
violations (e.g., packages with incompatible licenses or files with "(?i)(?:do|must) not distribute" in their headers), or anything else we consider important. The automated checks need not be perfect as long as
we avoid false negatives on the critical issues.
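A sketch of such a three-way classifier using exactly the heuristics named above; the decision names, the fields describing an upload, and the version-skip threshold are assumptions:

  import re
  from enum import Enum

  class Decision(Enum):
      ACCEPT = "accept"
      POLICY_REVIEW = "policy review"   # reroute to a POLICY queue for human eyes
      REJECT = "reject"

  DO_NOT_DISTRIBUTE = re.compile(r"(?i)(?:do|must) not distribute")

  def classify(upload: dict) -> Decision:
      """Toy triage of an upload described by a dict of assumed fields."""
      if upload["lintian_errors"]:          # prior art: auto-REJECT on Lintian errors
          return Decision.REJECT
      if any(DO_NOT_DISTRIBUTE.search(h) for h in upload["file_headers"]):
          return Decision.POLICY_REVIEW     # likely copyright problem
      if upload["epoch"] > upload["prev_epoch"]:
          return Decision.POLICY_REVIEW     # flag epoch bumps
      # Flag versions that skip ahead wildly (think 020220101 vs. 0.20220101):
      old = len(upload["prev_upstream_version"].split(".", 1)[0])
      new = len(upload["upstream_version"].split(".", 1)[0])
      if new > old + 2:                     # assumed threshold
          return Decision.POLICY_REVIEW
      return Decision.ACCEPT

As the paragraph above says, false negatives on the critical checks are what matter; a false positive just costs one human look.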
Currently the only answer is to join the FTP Team as a trainee when there
is a call for volunteers. I totally get the frustration.
Scott Kitterman <debian@kitterman.com> writes:
...
Currently the only answer is join the FTP Team as a trainee when there
is a call for volunteers. I totally get the frustration.
People could always just send additional data points to the relevant ITP
bug, like this:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1004021#10
If that's actually useful for FTP team members, it could be encouraged
on the New queue page.
A link to a wiki page with suggestions of what to check, and how best to submit reports in order to make them most useful would probably do the
trick.
Would that actually help?
On Friday, February 4, 2022 6:24:56 PM EST Philip Hands wrote:
People could always just send additional data points to the relevant ITP
bug [...] Would that actually help?
I'm not sure what the best solution is as far as notification goes. Generally
we don't look at the ITP when reviewing packages (ITP isn't even required, so
it's really outside our scope).
A comment to the ITP should get to the original packager. If it's a worthwhile issue they can fix and re-upload.
Most packages are currently available on Salsa even though not directly available through the New queue. A copy of the New queue does exist on coccia, but it's not currently readable for non-FTP Team members. It's probably not too hard to change that if it turns out having other DDs review things is useful.
Writing to the ITP bug and mentioning it on #debian-ftp might not be a bad way to start experimenting with external reviews.
On 2022-02-04 18:39, Russ Allbery wrote:
In other words, this thread is once again drifting into a discussion of
how to do copyright review *better*, when my original point is that we should seriously consider not doing the current type of incredibly tedious and nit-picky copyright review *at all*, and instead rely more on upstream's assertions, automated tools, and being reactive in solving the bugs that people actually care about (i.e., notice).
If we're honest, that's basically how the rest of the open source world already operates in general. Our level of scrutiny is a burden that I
don't see many others sharing.
Of course "everybody's doing it" doesn't make something right. However,
when things go wrong, they don't seem to go wrong in the dramatic ways
we might anticipate them to.
If GitHub (a Microsoft-owned entity and thus an attractive target for a lawsuit) is OK with distributing files uploaded by third parties without subjecting them to a manual review process, perhaps we have been
overthinking the risks here.
There's a huge amount of software that's undistributable: Debian's
good faith attempt to review this is one of the crucial arguments I
have with $DAYJOB about the benefits of a curated distribution,
however fallible we may be.
I think we should use automated tools where available, query with
upstream where practicable, and continue doing what we're doing as
far as possible, in my humble opinion.
Reproducible builds and DEP-5 / SPDX are also crucial in improving
everyone's quality - I don't see commercial/enterprise distributions
doing this valuable public service but I very much value the fact
that Debian does it, for example.
Just because someone else can't be bothered to do licence review checking doesn't mean that Debian shouldn't.
I'd much rather that packages were removed in NEW than that they got installed in unstable and we then had to tell people that they had
gone.
When we treat any of the above just like other RC bugs, we are accepting
a lower likelihood that the bugs will be found, and also that they will
be fixed....
If we can't do anything else, I suspect we can reduce project
friction a lot if we only subject packages to copyright hazing when it
is a NEW source package, and not when there is a NEW binary package
caused by some upstream maintainers not being able to maintain ABI
backwards compatibility.
On Mon, Feb 07 2022, Theodore Ts'o wrote:
If we can't do anything else, I suspect we can reduce project
friction a lot if we only subject packages to copyright hazing when it
is a NEW source package, and not when there is a NEW binary package
caused by some upstream maintainers not being able to maintain ABI
backwards compatibility.
Yes.
Also, with backports. When packaging up Go packages, which require all
their little dependencies to be independent packages, I have probably
gone through more than 50 NEW reviews in the past few months: unstable,
bullseye, and buster backports. This process is not yet finished for
some packages.
Another related problem is with languages like Go; when a package adds a
dependency, suddenly I can't upload new releases to unstable properly
until all the deps have made it through NEW.
Hello,
On Mon 07 Feb 2022 at 12:00PM -05, Theodore Ts'o wrote:
On Mon, Feb 07, 2022 at 12:06:24AM -0700, Sean Whitton wrote:
When we treat any of the above just like other RC bugs, we are accepting
a lower likelihood that the bugs will be found, and also that they will
be fixed....
Another part of this discussion which shouldn't be lost is the
probability that these bugs will even *exist* (since if they don't
exist, they can't be found :-P) in the case where there is a NEW
binary package caused by a shared library version bump (and so we have
libflakey12 added and libflakey11 dropped as binary packages) and a
NEW source package.
Which category of bugs do you mean? I distinguished three.
The argument why a package which has an upstream-induced shared
library version bump, has to go through the entire NEW gauntlet [...]
On Mon, Feb 07, 2022 at 09:28:16PM -0500, Theodore Ts'o wrote:
The argument why a package which has an upstream-induced shared
library version bump, has to go through the entire NEW gauntlet [...]
I hear your frustration, but don't you think that language like "gauntlet"
makes it, uhm, very hard for the "gauntlet team" to reply, and even more
importantly, reason with you?
IOW: how can we get to 'no NEW (or a much lighter one) for new binary packages'
or how can we communicate this if we already have this, maybe also?
'cause I think the latter could very well also be true, or very close
to it.
On Fri, Feb 04, 2022 at 09:39:09AM -0800, Russ Allbery wrote:
Various people have different reactions to and opinions about the
necessity of this review, which I understand and which is great for broadening the discussion. But I feel like we're starting to lose track
of my original point,
Apropos losing track. ;-)
namely that I don't see why we are prioritizing this
particular category of bugs over every other type of bug in Debian. The justification has always been dire consequences if we don't stamp out all of these bugs, but to be honest I think this is wildly unlikely.
I fully subscribe to this.
I'd also like to thank Scott for
a) his speedy processing of onetbb (which was the package triggering
this thread), and
b) taking part in the discussion (as the only member of the ftp team
to do so, TTBOMK).
I think the point of Scott in this discussion is clear. However, to my understanding it is in contrast to the posting I originally pointed
to[1]. I would like to hear some official statement for the specific
case of packages that are in NEW.
Specific remark on onetbb: I'd fully agree with Scott that this might
be a good example to stress his point, since the package was obviously
not OK and was in serious need of some extra work. But in the same way
it serves to support Russ' point as well: while d/copyright was not OK,
it was probably not OK only according to our strict standards (which I
subscribe to as well) and nothing someone would sue Debian over. The
delay in the NEW queue delayed the Python 3.10 migration and kept
several other friction points open. So how do we want to weigh "just
another RC bug that should be fixed" against the delay of development
in other areas?
I'd be super happy if this discussion led to some statement
in some document we could use as a reference, which would probably avoid
further discussions of this kind. I'd even volunteer to draft such
a document ... if only I knew what should be written.
Kind regards
Andreas.
[1] https://lists.debian.org/debian-devel/2021/07/msg00231.html
From my point of view, treating something like other common classes of RC bugs means that the project is producing tools and processes to make detection of such bugs more automated to remove them from the archive, that developers are actively looking for them, and that they are routinely fixed in the normal course of Debian development.
Hi,
Release Team member hat on, but not speaking on behalf of the team. I
haven't consulted anybody on the idea I mention below.
On 08-02-2022 14:59, Scott Kitterman wrote:
If people want licensing and copyright issues to be treated like other RC bugs, I think the first step is to treat them like other RC bugs[1].
I have recently heard about somebody who wanted to do archive-wide
scanning as a service. At least I am open to adding support to britney to
block migration on license and copyright issues reported by such a
service. Obviously the service would need to have a reasonably small
number of false positives, and we should have an accepted process to
handle those false positives.
Paul
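A sketch of the glue this would need; the scanner's JSON report format is invented, and the output merely mimics the release team's existing block-hint style (whether britney would actually consume scanner results this way is exactly what would have to be designed):

  import json
  import sys

  def hints_from_scan(report_path: str) -> list:
      """Turn a hypothetical license-scanner JSON report into block-style hints."""
      with open(report_path) as f:
          findings = json.load(f)  # e.g. [{"source": "foo", "issue": "GPL file in MIT package"}]
      hints = []
      for finding in findings:
          # One auditable comment per finding so false positives can be
          # reviewed and overridden through an accepted process.
          hints.append(f"# {finding['issue']}")
          hints.append(f"block {finding['source']}")
      return hints

  if __name__ == "__main__":
      print("\n".join(hints_from_scan(sys.argv[1])))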
On Mon, Feb 07, 2022 at 12:06:24AM -0700, Sean Whitton wrote: [...]
The argument why a package which has an upstream-induced shared
library version bump has to go through the entire NEW gauntlet is
that it is a Good Thing to check whether it has any copyright
or licensing issue. But if you have a different package which doesn't
have an upstream-induced shared library bump, it doesn't go through the
same kind of copyright and license hazing. And I believe this isn't
fair.
Either we should force every single package to go through a manual copyright/licensing recheck, because Debian Cares(tm) about copyright,
or "copyright/licensing concerns are an existential threat to the
project" (I disagree with both arguments), or a package such as
libflakey which is going through constant shared library version bumps
should not go through the NEW gauntlet just because it has new binary packages (libflakey11, libflakey12, libflakey13, etc.) at every single upstream release.
The fact that the FTP team applies license/copyright review as part of their review of source packages has grounding in a number of goals of Debian as a project. The existence of a binary NEW queue is also sensible, as the FTP team have to manage the namespace of the packages in the archive. But the application of license/copyright review by the FTP team for existing
source packages as part of binary NEW processing /and at no other time/ is arbitrary. It is, at best, a historical accident that has taken on the authority of precedent.
Guarding against debian/copyright drift is a useful goal. But it is harmful to the velocity of the project to either block or reject new binary packages in the archive because of this linkage to license review.
Actively-developed library packages with ABI changes are not fundamentally more likely to have license drift than any other package in the archive, so focusing FTP team time on reviewing the licenses of these packages in particular is a misapplication of resources.
The responses I've seen from the FTP team to this can, I believe, be roughly paraphrased as "we would like all debian/copyright in the archive to be clean, but we don't have capacity to do that, so we're doing this instead".
I assert that it is much, much worse to continue doing this than to do *no* license/copyright review as part of binary NEW. It does not achieve the
goal of having clean debian/copyright across the archive; it slows down the binary NEW queue due to (self-imposed) workload of the FTP team; and it deters developers interested in this problem space from innovating better (and more systematic) solutions outside the small FTP team.
I'm sorry to be responding only a month later, but I think there are
some reasons why binNEW is not the worst place to be doing these extra checks: packages with SONAME bumps are typically C or C++ projects and
these are (i) large, such that d/copyright is more likely to drift
simply because of the volume of files; and (ii) often contain embedded
code copies with different copyright and licensing. My own NEW
experience is that I've consistently found more problems in binNEW
packages than anywhere else.
Hi Sean,
On Wed, Mar 02, 2022 at 08:33:35AM -0700, Sean Whitton wrote:
I'm sorry to be responding only a month later, but I think there are
some reasons why binNEW is not the worst place to be doing these extra
checks [...]
Thanks a lot for your insight into this topic. I'd like to stress
(again) that, besides my naive assumption about the checks done on
packages that are passing NEW due to binary package changes (which are
not only due to changed SONAMEs), my main point is that I've found
a discrepancy in statements from the ftpmaster team. My question whether
we agree on status A or B[1] was not yet answered (or I missed some
explicit answer).
Kind regards
Andreas.
PS: I'm currently considering writing up some summary of the bunch
of threads that was born out of my initial mail.
[1] https://lists.debian.org/debian-devel/2022/01/msg00226.html
PS: I'm currently considering writing up some summary of the bunch
of threads that was born out of my initial mail.
[1] https://lists.debian.org/debian-devel/2022/01/msg00226.html
Assuming I'm not misreading, the ftpteam currently thinks (B).
On Thu, Mar 03, 2022 at 09:57:26AM -0700, Sean Whitton wrote:
Assuming I'm not misreading, the ftpteam currently thinks (B).
What do you mean by "misreading"? My mail linked above, or some
ftpmaster statements I'm not aware of (or which I was misreading)?