• Re: [syzbot] [hfs?] WARNING in hfs_write_inode

    From John Paul Adrian Glaubitz@21:1/5 to Matthew Wilcox on Thu Jul 20 20:00:01 2023
    Hello!

    On Thu, 2023-07-20 at 18:30 +0100, Matthew Wilcox wrote:
    On Thu, Jul 20, 2023 at 05:27:57PM +0200, Dmitry Vyukov wrote:
    On Thu, 5 Jan 2023 at 17:45, Viacheslav Dubeyko <slava@dubeyko.com> wrote:
    On Wed, Jan 04, 2023 at 08:37:16PM -0800, Viacheslav Dubeyko wrote:
    Also, as far as I can see, the available volume in the report (mount_0.gz) is somehow already corrupted:

    Syzbot generates deliberately-corrupted (aka fuzzed) filesystem images. So basically, you can't trust anything you read from the disc.


    If the volume has been deliberately corrupted, then there is no guarantee that the
    file system driver will behave nicely. Technically speaking, an inode write operation
    should never happen for a corrupted volume, because the corruption should be detected
    at b-tree node initialization time. If we would like to achieve such a nice state of the
    HFS/HFS+ drivers, then it requires a lot of refactoring/implementation effort. I am not
    sure that it is worth doing, because not that many people really use HFS/HFS+ as their
    main file system under Linux.
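
    (For illustration only: a minimal C sketch of the kind of early sanity
    check meant here, validating a B-tree node descriptor before any of its
    fields are trusted. The struct mirrors the on-disk BTNodeDescriptor
    layout, but the helper, its limits, and the assumption that fields were
    already byte-swapped from big-endian are hypothetical, not the actual
    fs/hfs code.)

        #include <stdbool.h>
        #include <stdint.h>

        /* Mirrors the on-disk BTNodeDescriptor layout; fields are
         * assumed to have been byte-swapped from big-endian already. */
        struct btnode_desc {
                uint32_t fLink;      /* forward sibling node number */
                uint32_t bLink;      /* backward sibling node number */
                int8_t   kind;       /* leaf, index, header, or map node */
                uint8_t  height;     /* depth of this node in the tree */
                uint16_t numRecords; /* records stored in this node */
                uint16_t reserved;
        };

        /* Hypothetical validator: reject a descriptor that cannot be
         * internally consistent, so later record-offset lookups cannot
         * be steered outside the node buffer by a fuzzed image. */
        static bool btnode_desc_valid(const struct btnode_desc *d,
                                      uint32_t total_nodes,
                                      uint16_t max_records)
        {
                /* Sibling links must point inside the tree. */
                if (d->fLink >= total_nodes || d->bLink >= total_nodes)
                        return false;
                /* Record count must fit what one node can hold. */
                if (d->numRecords > max_records)
                        return false;
                return true;
        }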


    Most popular distros will happily auto-mount HFS/HFS+ from anything inserted into USB (e.g. what one may think is a charger). This creates interesting security consequences for most Linux users.
    An image may also be corrupted non-deliberately, which will lead to
    random memory corruptions if the kernel trusts it blindly.

    Then we should delete the HFS/HFS+ filesystems. They're orphaned in MAINTAINERS and if distros are going to do such a damnfool thing,
    then we must stop them.

    Both HFS and HFS+ work perfectly fine. And if distributions or users are so sensitive about security, it's up to them to blacklist individual features
    in the kernel.
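
    (As a sketch of what that blacklisting can look like, assuming a distro
    that reads modprobe.d; the file name here is arbitrary. Note that
    "blacklist" alone only stops automatic loading, so the "install" lines
    are what actually defeat an explicit or on-demand modprobe.)

        # /etc/modprobe.d/no-hfs.conf (illustrative name)
        blacklist hfs
        blacklist hfsplus
        # Map the modules to /bin/false so explicit loads fail too:
        install hfs /bin/false
        install hfsplus /bin/false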

    Both HFS and HFS+ have been the default filesystem on MacOS for 30 years
    and I don't think it's justified to introduce such a hard compatibility breakage just because some people are worried about theoretical evil
    maid attacks.

    HFS/HFS+ is mandatory if you want to boot Linux on a classic Mac or PowerMac,
    and I don't think it's okay to break all these systems running Linux.

    Thanks,
    Adrian

    --
     .''`.  John Paul Adrian Glaubitz
    : :' :  Debian Developer
    `. `'   Physicist
      `-    GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913

  • From Jeffrey Walton@21:1/5 to willy@infradead.org on Fri Jul 21 00:00:01 2023
    On Thu, Jul 20, 2023 at 2:39 PM Matthew Wilcox <willy@infradead.org> wrote:

    On Thu, Jul 20, 2023 at 07:50:47PM +0200, John Paul Adrian Glaubitz wrote:
    Then we should delete the HFS/HFS+ filesystems. They're orphaned in MAINTAINERS and if distros are going to do such a damnfool thing,
    then we must stop them.

    Both HFS and HFS+ work perfectly fine. And if distributions or users are so sensitive about security, it's up to them to blacklist individual features in the kernel.

    Both HFS and HFS+ have been the default filesystem on MacOS for 30 years and I don't think it's justified to introduce such a hard compatibility breakage just because some people are worried about theoretical evil
    maid attacks.

    HFS/HFS+ is mandatory if you want to boot Linux on a classic Mac or PowerMac and I don't think it's okay to break all these systems running Linux.

    If they're so popular, then it should be no trouble to find somebody
    to volunteer to maintain those filesystems. Except they've been
    marked as orphaned since 2011 and effectively were orphaned several
    years before that (the last contribution I see from Roman Zippel is
    in 2008, and his last contribution to hfs was in 2006).

    One data point may help: I've been running Linux on an old PowerMac
    and an old Intel MacBook since about 2014 or 2015. I have needed
    HFS/HFS+ filesystem support for about 9 years now (including that
    "blessed" support for the Apple Boot partition).

    There's never been a problem with Linux and the Apple filesystems.
    Maybe it speaks to the maturity/stability of the code that already
    exists. The code does not need a lot of attention nowadays.

    Maybe the orphaned status is the wrong metric to use to determine
    removal. Maybe a better metric would be the installed base, i.e., how
    many users actually use the filesystem.

    Jeff

  • From gene heskett@21:1/5 to Finn Thain on Fri Jul 21 14:10:01 2023
    On 7/21/23 04:31, Finn Thain wrote:
    On Fri, 21 Jul 2023, Matthew Wilcox wrote:


    You've misunderstood. Google have decided to subject the entire kernel
    (including obsolete unmaintained filesystems) to stress tests that it's
    never had before. IOW these bugs have been there since the code was
    merged. There's nothing to back out. There's no API change to blame.
    It's always been buggy and it's never mattered before.


    My oar in this resembles a toothpick.

    That does change the complexion of this problem quite a bit. So the
    folks in charge should first find out how many actual users there
    are, considering the last commit was roughly a decade after the last
    machine was built. From my experience with them, which forced a fire
    extinguisher into every edit bay containing a pair of them, their
    survival rate might total 10 on this pale blue dot.

    The rest have had a fan fail, which started a fire, and they wound up in
    the dumpster. If by some stroke of good luck there are more, work out
    a backup that can be recovered on some other known-good filesystem,
    advise the users of the existence of that method of updating to a newer
    filesystem, disco the old one, and get a good night's sleep.

    Frankly, support for NTFS-3.51, if it exists, should also join the parade
    going out the door. Its housekeeping had no problem deleting its main
    .dll, much to M$'s delight, as at the time it was another $400 sale to
    restore the machine, and anybody who asked about it was called a pirate
    by support.

    I'm not blaming the unstable API for the bugs, I'm blaming it for the workload. A stable API (like a userspace API) decreases the likelihood
    that overloaded maintainers have to orphan a filesystem implementation.


    Cheers, Gene Heskett.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis
    Genes Web page <http://geneslinuxbox.net:6309/>

  • From gene heskett@21:1/5 to Theodore Ts'o on Fri Jul 21 16:20:01 2023
    On 7/21/23 09:19, Theodore Ts'o wrote:
    On Fri, Jul 21, 2023 at 06:14:04PM +1000, Finn Thain wrote:

    I'm not blaming the unstable API for the bugs, I'm blaming it for the
    workload. A stable API (like a userspace API) decreases the likelihood
    that overloaded maintainers have to orphan a filesystem implementation.

    You are incorrect. The HFS file system has gotten zero development
    attention and the bugs were not the result of the API changes. The
    main issue here is that the HFS file system does not have a maintainer,
    and decreasing the workload will not magically make someone appear
    with deep knowledge of that particular part of the code base.

    It's also the case that the actual amount of work on the "overloaded maintainers" caused by API changes is minimal --- it's dwarfed by
    syzbot noise (complaints from syzbot that aren't really bugs, or for
    really outré threat models).

    API changes within the kernel are the responsibility of the people
    making the change. For example, consider all of the folio changes
    that have been landing in the kernel; the amount of extra work on the
    part of most file system maintainers is minimal, because it's the
    people making the API changes who update the file system. I won't say
    that it's _zero_ work, because file system maintainers review the
    changes, and we run regression tests, and we sometimes need to point
    out when a bug has been introduced --- at which point the person
    making the API change has the responsibility of fixing or reverting
    the change.
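
    (To make the shape of such a change concrete, here is a hypothetical
    before/after fragment of the mechanical page-to-folio conversion the
    folio series carries through each file system; the surrounding
    variables are assumed, and it is not taken from hfs itself.)

        /* Before: page-based read of a metadata block. */
        struct page *page = read_mapping_page(mapping, index, NULL);
        if (IS_ERR(page))
                return PTR_ERR(page);
        kaddr = kmap_local_page(page);
        /* ... read the metadata through kaddr ... */
        kunmap_local(kaddr);
        put_page(page);

        /* After: the folio equivalent, same logic, new type. */
        struct folio *folio = read_mapping_folio(mapping, index, NULL);
        if (IS_ERR(folio))
                return PTR_ERR(folio);
        kaddr = kmap_local_folio(folio, 0);
        /* ... read the metadata through kaddr ... */
        kunmap_local(kaddr);
        folio_put(folio);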

    An unstable API is much more painful for out-of-tree kernel code. But
    upstream kernel developers aren't really concerned with out-of-tree
    kernel code, except to point out that the work of the people who are
    promulgating out-of-tree modules would be much less if they actually
    got them cleaned up and made acceptable for upstream inclusion.

    - Ted

    A much more sensible answer. Thank you, Ted.


    Cheers, Gene Heskett.
    --
    "There are four boxes to be used in defense of liberty:
    soap, ballot, jury, and ammo. Please use in that order."
    -Ed Howdershelt (Author, 1940)
    If we desire respect for the law, we must first make the law respectable.
    - Louis D. Brandeis
    Genes Web page <http://geneslinuxbox.net:6309/>
