On Thu, Jul 20, 2023 at 05:27:57PM +0200, Dmitry Vyukov wrote:
On Thu, 5 Jan 2023 at 17:45, Viacheslav Dubeyko <slava@dubeyko.com> wrote:
On Wed, Jan 04, 2023 at 08:37:16PM -0800, Viacheslav Dubeyko wrote:
Also, as far as I can see, the available volume in the report (mount_0.gz) is somehow already corrupted:
Syzbot generates deliberately-corrupted (aka fuzzed) filesystem images. So basically, you can't trust anything you read from the disc.
If the volume has been deliberately corrupted, then there is no guarantee that the file system
driver will behave nicely. Technically speaking, an inode write operation should never
happen for a corrupted volume, because the corruption should be detected at
b-tree node initialization time. If we would like to achieve such a robust state for the HFS/HFS+
drivers, then it requires a lot of refactoring/implementation effort. I am not sure that
it is worth doing, because not many people really use HFS/HFS+ as their main file
system under Linux.
Most popular distros will happily auto-mount HFS/HFS+ from anything inserted into a USB port (e.g. what one may think is a charger). This creates interesting security consequences for most Linux users.
An image may also be corrupted non-deliberately, which will lead to
random memory corruptions if the kernel trusts it blindly.
Then we should delete the HFS/HFS+ filesystems. They're orphaned in MAINTAINERS and if distros are going to do such a damnfool thing,
then we must stop them.
On Thu, Jul 20, 2023 at 07:50:47PM +0200, John Paul Adrian Glaubitz wrote:
> Then we should delete the HFS/HFS+ filesystems. They're orphaned in MAINTAINERS and if distros are going to do such a damnfool thing,
> then we must stop them.
Both HFS and HFS+ work perfectly fine. And if distributions or users are so sensitive about security, it's up to them to blacklist individual features in the kernel.
Both HFS and HFS+ have been the default filesystem on MacOS for 30 years and I don't think it's justified to introduce such a hard compatibility breakage just because some people are worried about theoretical evil
maid attacks.
HFS/HFS+ is mandatory if you want to boot Linux on a classic Mac or PowerMac, and I don't think it's okay to break all these systems running Linux.
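For distributions or users who do want to opt out, the blacklisting mentioned above is a standard modprobe.d configuration. This is a generic sketch of that mechanism, not something prescribed in this thread; note that `blacklist` alone only stops alias-based auto-loading, so the `install` override is needed to also block mount-triggered module loading:

```conf
# /etc/modprobe.d/hfs-blacklist.conf  (illustrative path/name)
# Stop hfs/hfsplus from being auto-loaded via module aliases:
blacklist hfs
blacklist hfsplus
# Also fail explicit load requests (e.g. triggered by mount):
install hfs /bin/false
install hfsplus /bin/false
```

With this in place, an auto-mounter that probes an inserted USB device can no longer pull the drivers in, while the code stays available to systems that genuinely need it.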
If they're so popular, then it should be no trouble to find somebody
to volunteer to maintain those filesystems. Except they've been
marked as orphaned since 2011 and effectively were orphaned several
years before that (the last contribution I see from Roman Zippel is
in 2008, and his last contribution to hfs was in 2006).
On Fri, 21 Jul 2023, Matthew Wilcox wrote:
> You've misunderstood. Google have decided to subject the entire kernel
> (including obsolete unmaintained filesystems) to stress tests that it's
> never had before. IOW these bugs have been there since the code was
> merged. There's nothing to back out. There's no API change to blame.
> It's always been buggy and it's never mattered before.
I'm not blaming the unstable API for the bugs, I'm blaming it for the workload. A stable API (like a userspace API) decreases the likelihood
that overloaded maintainers have to orphan a filesystem implementation.
A much more sensible answer. Thank you Ted.

On Fri, Jul 21, 2023 at 06:14:04PM +1000, Finn Thain wrote:
> I'm not blaming the unstable API for the bugs, I'm blaming it for the
> workload. A stable API (like a userspace API) decreases the likelihood
> that overloaded maintainers have to orphan a filesystem implementation.
You are incorrect. The HFS file system has gotten zero development
attention and the bugs were not the result of the API changes. The
main issue here is that the HFS file system does not have a maintainer,
and decreasing the workload will not magically make someone appear
with deep knowledge of that particular part of the code base.
It's also the case that the actual amount of work on the "overloaded maintainers" caused by API changes is minimal --- it's dwarfed by
syzbot noise (complaints from syzbot that aren't really bugs, or for
really outré threat models).
API changes within the kernel are the responsibility of the people
making the change. For example, consider all of the folio changes
that have been landing in the kernel; the amount of extra work on the
part of most file system maintainers is minimal, because it's the
people making the API changes who update the file system. I won't say
that it's _zero_ work, because file system maintainers review the
changes, and we run regression tests, and we sometimes need to point
out when a bug has been introduced --- at which point the person
making the API change has the responsibility of fixing or reverting
the change.
An unstable API is much more painful for out-of-tree kernel code. But
upstream kernel developers aren't really concerned with out-of-tree
kernel code, except to point out that the work of the people who
promulgate out-of-tree modules would be much less if they actually
got them cleaned up and made acceptable for upstream inclusion.
- Ted