• NFS behaving weirdly

    From William Unruh@2:250/0 to All on Thu Feb 20 01:41:42 2020
    NFS is behaving weirdly on a bunch of my systems.

    I have an NFS server (running MGA5) which is supposed to export a number
    of directories and partitions to other machines. One of them is called
    /local, a local partition mounted on the server (let me call it server)
    and exported to remote machines. exportfs has it listed.
    But when I mount that directory on a remote machine, it mounts but the
    contents are bizarre. On server, the directory contents look like
    ls /local
    ../ .autofsck http/ lost+found/ .mozilla/ opt/
    unruhhome/
    .../ crypt1 jc mageia/ oldinfo/ Notebook1/ usershome/ .xdg_menu_cache/
    archive/ encrypt/ .kde/ mandriva/ CameraPicturesJul30-2019/ persuasion/ usrlocal/

    But when mounted on the remote machine ( well on each of 4 machines) the contents of the mounted directory are

    ls /local
    ./ ../ /unruh

    Not only is /unruh NOT a directory in /local/ on server, nothing that IS
    in /local on server shows up in the mounted version.

    What could be going on here?




    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From J.O. Aho@2:250/0 to All on Thu Feb 20 05:39:43 2020
    On 20/02/2020 02.41, William Unruh wrote:
    NFS is behaving weirdly on a bunch of my systems.

    it mounts but the
    contents are bizarre. On server, the directory contents look like
    ls /local
    ./ .autofsck http/ lost+found/ .mozilla/ opt/
    unruhhome/
    ../ crypt1 jc mageia/ oldinfo/
    Notebook1/ usershome/ .xdg_menu_cache/
    archive/ encrypt/ .kde/ mandriva/ CameraPicturesJul30-2019/
    persuasion/ usrlocal/

    But when mounted on the remote machine ( well on each of 4 machines) the contents of the mounted directory are

    ls /local
    ./ ../ /unruh

    Not only is /unruh NOT a directory in /local/ on server, nothing that IS
    in /local on server is not on the mounted version.

    What could be going on here?

    The exported directory ain't the one you think you export. Without your
    configuration it's not possible to say exactly what went wrong. There
    are slight differences in how you do things depending on whether you are
    using nfs or nfs4; especially if you have gone from nfs to nfs4 without
    changing the configuration, you may end up with something unexpected.


    --

    //Aho


    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From Aragorn@2:250/0 to All on Thu Feb 20 08:42:57 2020
    On 20.02.2020 at 01:41, William Unruh scribbled:

    NFS is behaving weirdly on a bunch of my systems.

    I have an NFS server (running MGA5) which is supposed to export a
    number of directories and partitions to other machines. One of them
    is called /local, which is the mounting from a local partition on the
    server (let me call it server) to remote machines. exportfs has it
    listed. But when I mount that directory on a remote machine, it
    mounts but the contents are bizarre. On server, the directory
    contents look like
    ls /local
    ./ .autofsck http/ lost+found/ .mozilla/
    opt/ unruhhome/ ../ crypt1 jc mageia/
    oldinfo/ Notebook1/ usershome/ .xdg_menu_cache/ archive/ encrypt/ .kde/ mandriva/ CameraPicturesJul30-2019/ persuasion/ usrlocal/

    But when mounted on the remote machine ( well on each of 4 machines)
    the contents of the mounted directory are

    ls /local
    ./ ../ /unruh

    Not only is /unruh NOT a directory in /local/ on server, nothing that
    IS in /local on server is not on the mounted version.

    What could be going on here?

    Quite obviously, it's not mounted. Either ./unruh is a directory on
    one of the target machines itself, or something other than what you
    intended is being mounted on the target directory.


    --
    With respect,
    = Aragorn =


    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Strider (2:250/0@fidonet)
  • From Jasen Betts@2:250/0 to All on Thu Feb 20 10:24:36 2020
    On 2020-02-20, William Unruh <unruh@invalid.ca> wrote:
    NFS is behaving weirdly on a bunch of my systems.

    I have an NFS server (running MGA5) which is supposed to export a number
    of directories and partitions to other machines. One of them is called /local, which is the mounting from a local partition on the server (let
    me call it server) to remote machines. exportfs has it listed.
    But when I mount that directory on a remote machine, it mounts but the contents are bizarre. On server, the directory contents look like
    ls /local
    ./ .autofsck http/ lost+found/ .mozilla/ opt/
    unruhhome/
    ../ crypt1 jc mageia/ oldinfo/
    Notebook1/ usershome/ .xdg_menu_cache/
    archive/ encrypt/ .kde/ mandriva/ CameraPicturesJul30-2019/
    persuasion/ usrlocal/

    But when mounted on the remote machine ( well on each of 4 machines) the contents of the mounted directory are

    ls /local
    ./ ../ /unruh

    Not only is /unruh NOT a directory in /local/ on server, nothing that IS
    in /local on server is not on the mounted version.

    What could be going on here?

    hard to say...
    does "exportfs -s" on the server give sensible results ?
    does "ip addr show" ?

    what about "findmnt" on the client? and on the server?
    what about "readlink -f /local/."

    which NFS server software is it?

    --
    Jasen.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: JJ's own news server (2:250/0@fidonet)
  • From Carlos E. R.@2:250/0 to All on Thu Feb 20 14:52:01 2020
    On 20/02/2020 02.41, William Unruh wrote:
    NFS is behaving weirdly on a bunch of my systems.

    I have an NFS server (running MGA5) which is supposed to export a number
    of directories and partitions to other machines. One of them is called /local, which is the mounting from a local partition on the server (let
    me call it server) to remote machines. exportfs has it listed.
    But when I mount that directory on a remote machine, it mounts but the contents are bizarre. On server, the directory contents look like
    ls /local
    ./ .autofsck http/ lost+found/ .mozilla/ opt/
    unruhhome/
    ../ crypt1 jc mageia/ oldinfo/
    Notebook1/ usershome/ .xdg_menu_cache/
    archive/ encrypt/ .kde/ mandriva/ CameraPicturesJul30-2019/
    persuasion/ usrlocal/

    But when mounted on the remote machine ( well on each of 4 machines) the contents of the mounted directory are

    ls /local
    ./ ../ /unruh

    Not only is /unruh NOT a directory in /local/ on server, nothing that IS
    in /local on server is not on the mounted version.

    What could be going on here?

    Please show the /etc/exports of server and /etc/fstab of client. And
    mount output. And "exportfs -s"


    --
    Cheers,
    Carlos E.R.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Thu Feb 20 16:14:50 2020
    On 2020-02-20, Jasen Betts <jasen@xnet.co.nz> wrote:
    On 2020-02-20, William Unruh <unruh@invalid.ca> wrote:
    NFS is behaving weirdly on a bunch of my systems.

    I have an NFS server (running MGA5) which is supposed to export a number
    of directories and partitions to other machines. One of them is called
    /local, which is the mounting from a local partition on the server (let
    me call it server) to remote machines. exportfs has it listed.
    But when I mount that directory on a remote machine, it mounts but the
    contents are bizarre. On server, the directory contents look like
    ls /local
    ./ .autofsck http/ lost+found/ .mozilla/ opt/
    unruhhome/
    ../ crypt1 jc mageia/ oldinfo/ Notebook1/ usershome/ .xdg_menu_cache/
    archive/ encrypt/ .kde/ mandriva/ CameraPicturesJul30-2019/ persuasion/ usrlocal/

    But when mounted on the remote machine ( well on each of 4 machines) the
    contents of the mounted directory are

    ls /local
    ./ ../ /unruh

    Not only is /unruh NOT a directory in /local/ on server, nothing that IS
    in /local on server is not on the mounted version.

    What could be going on here?

    hard to say...
    does "exportfs -s" on the server give sensible results ?

    Yes.
    mount on the client shows
    server:/local on /local type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=142.123.234.78,local_lock=none,addr=142.123.234.56)
    and those are all the right ip addresses for the server and the client.

    does "ip addr show" ?

    They show the right addresses on both the server and the client



    what about "findmnt" on the client? and on the server?

    On the client it shows the same info as the mount above.
    on the server
    ├─/local /dev/sdb1 ext4
    rw,noatime,data=ordered
    And there is no /local/unruh or /local/unruh/mail mounted from anywhere.






    what about "readlink -f /local/."
    On both server and client
    /local



    which NFS server software is it?

    Not sure what you mean. It is the standard nfs software from mga5
    (server) and mga6 (on some clients) and mga7 (on others)
    On the client
    rpm -qa|grep nfs
    nfs-utils-1.3.4-4.mga6
    lib64nfs8-1.11.0-1.mga6
    lib64nfsidmap0-0.27-3.mga6

    and on the server
    rpm -qa |grep nfs
    nfs-utils-1.3.0-6.mga5
    libnfsidmap-doc-0.25-8.mga5
    lib64nfsidmap0-0.25-8.mga5


    Note this all worked properly until a couple of weeks ago when
    everything had to be shut down and rebooted due to electrical work.







    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From Bit Twister@2:250/0 to All on Thu Feb 20 18:05:24 2020
    On Thu, 20 Feb 2020 16:14:50 -0000 (UTC), William Unruh wrote:
    On 2020-02-20, Jasen Betts <jasen@xnet.co.nz> wrote:
    On 2020-02-20, William Unruh <unruh@invalid.ca> wrote:
    NFS is behaving weirdly on a bunch of my systems.

    I have an NFS server (running MGA5) which is supposed to export a number
    of directories and partitions to other machines. One of them is called
    /local, which is the mounting from a local partition on the server (let
    me call it server) to remote machines. exportfs has it listed.
    But when I mount that directory on a remote machine, it mounts but the
    contents are bizarre. On server, the directory contents look like
    ls /local
    ./ .autofsck http/ lost+found/ .mozilla/ opt/
    unruhhome/
    ../ crypt1 jc mageia/ oldinfo/ Notebook1/ usershome/ .xdg_menu_cache/
    archive/ encrypt/ .kde/ mandriva/ CameraPicturesJul30-2019/ persuasion/ usrlocal/

    But when mounted on the remote machine ( well on each of 4 machines) the
    contents of the mounted directory are

    ls /local
    ./ ../ /unruh

    Not only is /unruh NOT a directory in /local/ on server, nothing that IS
    in /local on server is not on the mounted version.

    What could be going on here?

    I have seen something like that before.

    There were files in the mount point that should not have been there.
    Once the mount point was actually mounted, then the correct files were available.

    Solution was to umount it, delete the files to regain the space and
    reduce future confusion, then mount it correctly.
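
    A minimal sketch of that clean-up on a client, with a purely illustrative
    leftover path and assuming /local has an fstab entry:

    umount /local            # get the NFS mount out of the way
    ls -la /local            # whatever is still listed lives on the client's own disk
    rm -ri /local/leftover   # remove the stale files hiding under the mount point
    mount /local             # remount; the exported contents should show through again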



    hard to say...
    does "exportfs -s" on the server give sensible results ?

    Yes.
    mount on the client shows
    server:/local on /local type nfs4
    (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=142.123.234.78,local_lock=none,addr=142.123.234.56)
    and those are all the right ip addresses for the server and the client.

    What is the client stanza in the server's /etc/exports or /etc/exports.d/* file?

    In my server (wb) I have
    $ grep -v ^# /etc/exports.d/* | grep tb
    /spare tb(no_root_squash,sync,no_subtree_check,rw)

    $ showmount -e
    Export list for wb.home.test:
    /spare tb.home.test

    And on the client (tb) I have
    $ showmount -e wb
    Export list for wb:
    /spare tb.home.test

    And yet my client mount has ,soft,
    $ mount | grep :
    wb:/spare on /wb_spare type nfs4 (rw,nosuid,nodev,noexec,relatime,vers=4.2,rsize=32768,wsize=32768,\
    namlen=255,soft,proto=tcp,timeo=14,retrans=2,sec=sys,\
    clientaddr=192.168.11.100,local_lock=none,addr=192.168.11.132)

    whereas yours has ,hard,


    does "ip addr show" ?

    They show the right addresses on both the server and the client



    what about "findmnt" on the client? and on the server?

    On the client it shows the same info as the mount above.
    on the server
    ├─/local /dev/sdb1 ext4
    rw,noatime,data=ordered
    And there is no /local/unruh or /local/unruh/mail mounted from anywhere.

    what about "readlink -f /local/."
    On both server and client
    /local



    which NFS server software is it?

    Not sure what you mean. It is the standard nfs software from mga5
    (server) and mga6 (on some clients) and mga7 (on others)
    On the client
    rpm -qa|grep nfs
    nfs-utils-1.3.4-4.mga6
    lib64nfs8-1.11.0-1.mga6
    lib64nfsidmap0-0.27-3.mga6

    and on the server
    rpm -qa |grep nfs
    nfs-utils-1.3.0-6.mga5
    libnfsidmap-doc-0.25-8.mga5
    lib64nfsidmap0-0.25-8.mga5


    Note this all worked properly until a couple of weeks ago when
    everything had to be shutdown and rebooted due to electrical work.


    That would lead me to check that ip addresses still match node names.
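
    For example, something like this on both ends (the names are placeholders):

    getent hosts server      # what this box resolves "server" to
    getent hosts client1     # what the server resolves the client to
    ip addr show             # compare with the addresses actually on the interfaces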

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Thu Feb 20 20:11:41 2020
    On 2020-02-20, Bit Twister <BitTwister@mouse-potato.com> wrote:
    On Thu, 20 Feb 2020 16:14:50 -0000 (UTC), William Unruh wrote:
    On 2020-02-20, Jasen Betts <jasen@xnet.co.nz> wrote:
    On 2020-02-20, William Unruh <unruh@invalid.ca> wrote:
    NFS is behaving weirdly on a bunch of my systems.

    I have an NFS server (running MGA5) which is supposed to export a number
    of directories and partitions to other machines. One of them is called
    /local, which is the mounting from a local partition on the server (let
    me call it server) to remote machines. exportfs has it listed.
    But when I mount that directory on a remote machine, it mounts but the
    contents are bizarre. On server, the directory contents look like
    ls /local
    ./ .autofsck http/ lost+found/ .mozilla/ opt/
    unruhhome/
    ../ crypt1 jc mageia/ oldinfo/ Notebook1/ usershome/ .xdg_menu_cache/
    archive/ encrypt/ .kde/ mandriva/ CameraPicturesJul30-2019/ persuasion/ usrlocal/

    But when mounted on the remote machine ( well on each of 4 machines) the
    contents of the mounted directory are

    ls /local
    ./ ../ /unruh

    Not only is /unruh NOT a directory in /local/ on server, nothing that IS
    in /local on server is not on the mounted version.

    What could be going on here?

    I have seen something like that before.

    There were files in the mount point that should not have been there.
    Once the mount point was actually mounted, then the correct files were available.

    No. As I mentioned, before mounting /local was completely empty. After
    mounting there was one directory tree in /local, namely unruh/mail
    with 400 files in there. That subdirectory IS a subdirectory that exists
    on server, but way inside /local, not at the top of the /local directory.

    Solution was to umount it, delete the files to regain the space and
    reduce future confusion, then mount it correctly.

    I have unmounted and remounted it about 10 times on various machines by
    now. No change in the behaviour. Somehow the server is completely
    confused.





    hard to say...
    does "exportfs -s" on the server give sensible results ?

    Yes.
    mount on the client shows
    server:/local on /local type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=142.123.234.78,local_lock=none,addr=142.123.234.56)
    and those are all the right ip addresses for the server and the client.

    What is the client stanza in the server's /etc/exports or /etc/exports.d/*
    file?


    /local 142.123.234.0/26


    In my server (wb) I have
    $ grep -v ^# /etc/exports.d/* | grep tb
    /spare tb(no_root_squash,sync,no_subtree_check,rw)

    $ showmount -e
    Export list for wb.home.test:
    /spare tb.home.test

    And on the client (tb) I have
    $ showmount -e wb
    Export list for wb:
    /spare tb.home.test

    And yet my client mount has ,soft,
    $ mount | grep :
    wb:/spare on /wb_spare type nfs4
    (rw,nosuid,nodev,noexec,relatime,vers=4.2,rsize=32768,wsize=32768,\
    namlen=255,soft,proto=tcp,timeo=14,retrans=2,sec=sys,\
    clientaddr=192.168.11.100,local_lock=none,addr=192.168.11.132)

    whereas yours has ,hard,


    does "ip addr show" ?

    They show the right addresses on both the server and the client



    what about "findmnt" on the client? and on the server?

    On the client it shows the same info as the mount above.
    on the server
    ├─/local /dev/sdb1 ext4
    rw,noatime,data=ordered
    And there is no /local/unruh or /local/unruh/mail mounted from anywhere.

    what about "readlink -f /local/."
    On both server and client
    /local



    which NFS server software is it?

    Not sure what you mean. It is the standard nfs software from mga5
    (server) and mga6 (on some clients) and mga7 (on others)
    On the client
    rpm -qa|grep nfs
    nfs-utils-1.3.4-4.mga6
    lib64nfs8-1.11.0-1.mga6
    lib64nfsidmap0-0.27-3.mga6

    and on the server
    rpm -qa |grep nfs
    nfs-utils-1.3.0-6.mga5
    libnfsidmap-doc-0.25-8.mga5
    lib64nfsidmap0-0.25-8.mga5


    Note this all worked properly until a couple of weeks ago when
    everything had to be shutdown and rebooted due to electrical work.


    That would lead me to be checking that ip addresses match node names.

    I ssh between them regularly. I ping them regularly. And in all cases I
    get the right machine. Now maybe nfs somehow is getting the wrong
    address. I tried with the ip address instead of the name. Same
    problem.


    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Thu Feb 20 20:20:53 2020
    On 2020-02-20, William Unruh <unruh@invalid.ca> wrote:
    On 2020-02-20, Bit Twister <BitTwister@mouse-potato.com> wrote:
    On Thu, 20 Feb 2020 16:14:50 -0000 (UTC), William Unruh wrote:
    On 2020-02-20, Jasen Betts <jasen@xnet.co.nz> wrote:
    On 2020-02-20, William Unruh <unruh@invalid.ca> wrote:
    NFS is behaving weirdly on a bunch of my systems.

    I have an NFS server (running MGA5) which is supposed to export a number
    of directories and partitions to other machines. One of them is called
    /local, which is the mounting from a local partition on the server (let
    me call it server) to remote machines. exportfs has it listed.
    But when I mount that directory on a remote machine, it mounts but the
    contents are bizarre. On server, the directory contents look like
    ls /local
    ./ .autofsck http/ lost+found/ .mozilla/ opt/ unruhhome/
    ../ crypt1 jc mageia/ oldinfo/ Notebook1/ usershome/ .xdg_menu_cache/
    archive/ encrypt/ .kde/ mandriva/ CameraPicturesJul30-2019/ persuasion/ usrlocal/

    But when mounted on the remote machine ( well on each of 4 machines) the
    contents of the mounted directory are

    ls /local
    ./ ../ /unruh

    Not only is /unruh NOT a directory in /local/ on server, nothing that IS
    in /local on server is not on the mounted version.

    What could be going on here?

    I have seen something like that before.

    There were files in the mount point that should not have been there.
    Once the mount point was actually mounted, then the correct files were
    available.

    No. As I mentioned before mounting, /local was completely empty. After mounting there was one directory thread in /local, namely unruh/mail
    with 400 files in there. That subdirectory WAS a subdirectory which was mounted on server, but way inside /local, not at the top of the /local directory.

    Solution was to umount it, delete the files to regain the space and
    reduce future confusion, then mount it correctly.

    I have unmounted and remounted it about 10 times on various machines by
    now. No change in the behaviour. Somehow the server is completely
    confused.





    hard to say...
    does "exportfs -s" on the server give sensible results ?

    Yes.
    mount on the client shows
    server:/local on /local type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=142.123.234.78,local_lock=none,addr=142.123.234.56)
    and those are all the right ip addresses for the server and the client.

    What is the client stanza in the server's /etc/exports or /etc/exports.d/* file?


    /local 142.123.234.0/26


    In my server (wb) I have
    $ grep -v ^# /etc/exports.d/* | grep tb
    /spare tb(no_root_squash,sync,no_subtree_check,rw)

    $ showmount -e
    Export list for wb.home.test:
    /spare tb.home.test

    And on the client (tb) I have
    $ showmount -e wb
    Export list for wb:
    /spare tb.home.test

    And yet my client mount has ,soft,
    $ mount | grep :
    wb:/spare on /wb_spare type nfs4 (rw,nosuid,nodev,noexec,relatime,vers=4.2,rsize=32768,wsize=32768,\
    namlen=255,soft,proto=tcp,timeo=14,retrans=2,sec=sys,\
    clientaddr=192.168.11.100,local_lock=none,addr=192.168.11.132)

    whereas yours has ,hard,


    does "ip addr show" ?

    They show the right addresses on both the server and the client



    what about "findmnt" on the client? and on the server?

    On the client it shows the same info as the mount above.
    on the server
    ├─/local /dev/sdb1 ext4 rw,noatime,data=ordered
    And there is no /local/unruh or /local/unruh/mail mounted from anywhere.
    what about "readlink -f /local/."
    On both server and client
    /local



    which NFS server software is it?

    Not sure what you mean. It is the standard nfs software from mga5
    (server) and mga6 (on some clients) and mga7 (on others)
    On the client
    rpm -qa|grep nfs
    nfs-utils-1.3.4-4.mga6
    lib64nfs8-1.11.0-1.mga6
    lib64nfsidmap0-0.27-3.mga6

    and on the server
    rpm -qa |grep nfs
    nfs-utils-1.3.0-6.mga5
    libnfsidmap-doc-0.25-8.mga5
    lib64nfsidmap0-0.25-8.mga5


    Note this all worked properly until a couple of weeks ago when
    everything had to be shutdown and rebooted due to electrical work.


    That would lead me to be checking that ip addresses match node names.

    I ssh between them regularly. I ping them regularly. And in all cases i
    get the right machine. Now maybe nfs somehow is getting the wrong
    address. I will try with the ip address instead of the name. Same
    problem.


    OK, it is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From David W. Hodgins@2:250/0 to All on Thu Feb 20 21:15:19 2020
    On Thu, 20 Feb 2020 15:11:41 -0500, William Unruh <unruh@invalid.ca> wrote:
    On 2020-02-20, Bit Twister <BitTwister@mouse-potato.com> wrote:
    What is the client stanza in the server's /etc/exports or /etc/exports.d/* file?

    /local 142.123.234.0/26

    On my Mageia 7 nfs server, in a configuration setup using mcc, I have ...
    # cat /etc/exports
    # generated by drakhosts.pl
    /etc/profile.d/ 192.168.10.0/8(no_all_squash,async,secure,no_subtree_check,ro)

    Shouldn't matter, but the trailing slash on the directory name may make a difference
    (haven't tested to see if it does or not).

    I don't have any Mageia 5 installs still around to check to see if there have been changes in the options used by mcc, or the man page.

    I strongly recommend a new install of Mageia 7 on that system.

    Regards, Dave Hodgins

    --
    Change dwhodgins@nomail.afraid.org to davidwhodgins@teksavvy.com for
    email replies.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Thu Feb 20 21:37:10 2020
    On 2020-02-20, David W. Hodgins <dwhodgins@nomail.afraid.org> wrote:
    On Thu, 20 Feb 2020 15:11:41 -0500, William Unruh <unruh@invalid.ca> wrote:
    On 2020-02-20, Bit Twister <BitTwister@mouse-potato.com> wrote:
    What is the client stanza in the server's /etc/exports or /etc/exports.d/* file?

    /local 142.123.234.0/26

    On my Mageia 7 nfs server, in a configuration setup using mcc, I have ...
    # cat /etc/exports
    # generated by drakhosts.pl
    /etc/profile.d/
    192.168.10.0/8(no_all_squash,async,secure,no_subtree_check,ro)

    Shouldn't matter, but the trailing slash on the directory name may make a
    difference
    (haven't tested to see if it does or not).

    I don't have any Mageia 5 installs still around to check to see if there
    have
    been changes in the options used by mcc, or the man page.

    Yes, the trouble is that it is the main server, and is the main system I
    use, and I have found that reinstalling costs me about a week of time.
    Since the almost universal recommendation is to reinstall rather than
    upgrade on a major version change, trying to get the new system back to
    where the old one was, in terms of software and configuration, is a real
    long-term pain. I might make a copy of my / for 5 and try upgrading
    that instead of installing 7 on that partition.


    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From Bit Twister@2:250/0 to All on Thu Feb 20 22:10:12 2020
    On Thu, 20 Feb 2020 21:37:10 -0000 (UTC), William Unruh wrote:


    Yes, the trouble is that it is the main server, and is the main system I
    use, and I have found that reinstalling costs me about a week of time.
    Since the almost universal recommendation is to reinstall rather than
    upgrade on a major version change, trying to get the new system back to
    where the old one was, in terms of software, and configuration is a real
    real longterm pain. I might make a copy of my / for 5 and try upgrading
    that instead of installing 7 on that partition.

    Hmm, seems to me that there were problems for people trying upgrades
    and skipping releases like going from 5 to 7 instead of 5 to 6 to 7.

    Running unsupported releases could get someone in deep legal dodo.

    Saw a cisco security update a week or so ago. Did not get around to
    installing new firmware on my ~$100 VOIP cisco 7811 phone at home.
    Tuesday, I noticed phone did not have dial tone. Phone no longer works
    even after reset to factory defaults.



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From David W. Hodgins@2:250/0 to All on Thu Feb 20 22:25:57 2020
    On Thu, 20 Feb 2020 16:37:10 -0500, William Unruh <unruh@invalid.ca> wrote:

    Yes, the trouble is that it is the main server, and is the main system I
    use, and I have found that reinstalling costs me about a week of time.
    Since the almost universal recommendation is to reinstall rather than
    upgrade on a major version change, trying to get the new system back to
    where the old one was, in terms of software, and configuration is a real
    real longterm pain. I might make a copy of my / for 5 and try upgrading
    that instead of installing 7 on that partition.

    My current "main" install started as Mageia 3, and has been upgraded each release. It's currently running Mageia 7.

    I do recommend upgrading as soon as possible after a new release comes out, as future updates may cause conflicts that were not found during upgrade testing prior to release.

    If you do try upgrading, make sure it's upgrading to 6, and then 7. There will still end up being a number of .rpmnew and .rpmsave files in /etc to sort out, and there may be config file changes needed in /home too.

    If problems are found now upgrading from 5 to 6, it will be difficult to help as no-one else will be able to recreate the problem, and new updates for Mageia 6
    will not be provided. Resolving conflicts will require manually removing offending
    packages, keeping a list of what's removed, and eventually installing the Mageia 7
    versions (where appropriate), after the upgrading has finished.

    Regards, Dave Hodgins

    --
    Change dwhodgins@nomail.afraid.org to davidwhodgins@teksavvy.com for
    email replies.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Fri Feb 21 17:40:10 2020
    On 2020-02-20, Bit Twister <BitTwister@mouse-potato.com> wrote:
    On Thu, 20 Feb 2020 21:37:10 -0000 (UTC), William Unruh wrote:


    Yes, the trouble is that it is the main server, and is the main system I
    use, and I have found that reinstalling costs me about a week of time.
    Since the almost universal recommendation is to reinstall rather than
    upgrade on a major version change, trying to get the new system back to
    where the old one was, in terms of software, and configuration is a real
    real longterm pain. I might make a copy of my / for 5 and try upgrading
    that instead of installing 7 on that partition.

    Hmm, seems to me that there were problems for people trying upgrades
    and skipping releases like going from 5 to 7 instead of 5 to 6 to 7.


    I tried to upgrade from 5 to 6 and then was going to go from 6 to 7.
    Unfortunately it refused to work. It complained bitterly about not being
    able to get rid of libraries which something else depended on. I did not
    keep a record. I finally gave up and reinstalled 7 and have the complete
    mess I feared. For example shorewall has been redefined, so that the
    structure of the files has been changed, and options have been dropped
    or redefined (like DROP). I.e., it is a complete mess as I feared. So far
    one complete day wasted.


    Running unsupported releases could get someone in deep legal dodo.

    Saw a cisco security update a week or so ago. Did not get around to installing new firmware on my ~$100 VOIP cisco 7811 phone at home.
    Tuesday, I noticed phone did not have dial tone. Phone no longer works
    even after reset to factory defaults.



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From Bit Twister@2:250/0 to All on Fri Feb 21 18:52:13 2020
    On Fri, 21 Feb 2020 17:40:10 -0000 (UTC), William Unruh wrote:
    On 2020-02-20, Bit Twister <BitTwister@mouse-potato.com> wrote:


    I tried to upgrade the 5 to 6 and then was going to go from 6 to 7. Unfortunately it refused to work. It complained bitterly about not being
    able to get rid of libraries which something else depended on.

    Yep, been there done that. Package names can change and cause grief like that. Clean installs do not have that problem.


    I did not keep a record.

    I find it handy to use the "script" command for install/updates.
    For example to pull down all rpms and check if they will install.

    script -c "urpmi --downloader wget --wait-lock --replacefiles \
    --auto-update --auto --download-all --test" pull_updates.log

    Then do the update
    script -c "urpmi --downloader wget --wait-lock --replacefiles \
    --auto-update --auto --download-all " install_updates.log

    I do have a script to check the logs for errors rather than me
    having to read the logs manually.

    I finally gave up and reinstalled 7 and have the complete
    mess I feared. For example shorewall has been redefined, so that the structure of the files has been changed, and options have been dropped
    or redefined (like DROP). Ie, it is a complete mess as I feared. So far
    one complete day wasted.

    Hmmm, I can not remember drop as a shorewall problem. As an example:
    grep -i drop /etc/shorewall/*
    /etc/shorewall/policy:net all DROP info

    In rules I use REJECT. Example:

    REJECT net:$LAN_GATEWAY all udp netbios-ns
    REJECT $FW net udp mdns


    I would have expected the same problem on your current mga6 installs.




    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From Carlos E.R.@2:250/0 to All on Fri Feb 21 19:33:46 2020
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    For example, if two exports have the same number, they get confused and
    things like you describe happen. So ensure the different entries have an
    explicit, different number.

    --
    Cheers, Carlos.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sat Feb 22 06:03:35 2020
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?



    For example, if two exports have the same number, they get confused and things like you describe happen. So ensure the different entries have a explicit different number.

    And how do I ensure they have different numbers?


    I want to (and do) export subdirectories of a mounted filesystem. For
    example /local has a subdirectory /local/usrlocal. I want to mount
    server:/local/usrlocal onto /usr/local of the client. I also want to
    mount /local on the client.

    Is this liable to confuse nfs? Is it legal to do so? Or can I only
    export mountpoints (i.e. only /local which is mounted from /dev/sdb1 on
    the server)?

    The problems I seem to be having are on an SSD drive on the
    server.


    The weird thing is that when I ran Mga5 I never had any problems with
    the nfs. It is only with Mga7 that everything seems to be going to hell.




    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From J.O. Aho@2:250/0 to All on Sat Feb 22 10:11:58 2020
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?



    For example, if two exports have the same number, they get confused and
    things like you describe happen. So ensure the different entries have a
    explicit different number.

    And how do I ensure they have different numbers?

    You should set fsid=0 for the root export directory in your exports;
    for the subdirectories you don't need to specify anything, nfs4 takes
    care of it for you.

    for example:
    /nfs 192.168.1.0/24(rw,async,wdelay,no_subtree_check,no_root_squash,fsid=0)
    /nfs/subdir1 192.168.1.0/24(rw,async,wdelay,no_subtree_check,nohide,no_root_squash)
    /nfs/subdir2 192.168.1.0/24(rw,async,wdelay,no_subtree_check,nohide,no_root_squash)

    Use bind-mount to get the directories to the proper location in your nfs-export directory.
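
    A minimal sketch of that in /etc/fstab on the server, with purely
    illustrative paths (the /nfs export root and the bound directories must
    already exist as directories):

    # bind the real locations into the nfs export root
    /local           /nfs/local     none  bind  0 0
    /local/usrlocal  /nfs/usrlocal  none  bind  0 0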



    I want to (and do) export subdirectories of a mouted filesystem. For
    example /local has a subdiretory /local/usrlocal. I want to mount server:/local/usrlocal onto /usr/local of the client. I also want to
    mount /local on the client

    Just export the root; you can then mount the subdirectories on the clients:

    server:
    /nfs/local 192.168.1.0/24(rw,async,wdelay,no_subtree_check,nohide,no_root_squash)

    client:
    nfsserver:/local/usrlocal /usr/local nfs4 defaults,noatime 0 0



    The weird thing is that when I ran Mga5 I never had any problems with
    the nfs. It is only with Mga7 that everything seems to be going to hell.

    I don't know Mga, but guessing from the version numbers there are at
    least a few years between the versions, so I guess Mga5 used nfs3 and
    Mga7 uses nfs4. They do have differences in how to write a proper export
    configuration; sure, you will not get errors if you take a config written
    for nfs3 and use it with nfs4, but you may get some oddness, so you should
    rewrite your config.

    Here is the head of my exports file; it's originally from RedHat 7.3
    (Community version), so it's quite an old file, and I still use it even
    though I have switched distros a few times since the early 2000s:

    # /etc/exports - exports(5) - directories exported to NFS clients
    #
    # Example for NFSv2 and NFSv3:
    # /srv/home hostname1(rw,sync) hostname2(ro,sync)
    # Example for NFSv4:
    # /srv/nfs4 hostname1(rw,sync,fsid=0)
    # /srv/nfs4/home hostname1(rw,sync,nohide)
    # Using Kerberos and integrity checking:
    # /srv/nfs4 *(rw,sync,sec=krb5i,fsid=0)
    # /srv/nfs4/home *(rw,sync,sec=krb5i,nohide)
    #
    # Use `exportfs -arv` to reload.

    I know that a lot of people do not care about having a single export
    location, as this wasn't really required in nfs3, but I really do
    recommend you do so; in the above example the /srv/nfs4 will be / on the
    client.
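
    For example, with that layout a client mounts paths relative to the
    fsid=0 pseudo-root, something like this (hostname and mount point are
    illustrative):

    # /srv/nfs4 is the pseudo-root, so the client asks for /home, not /srv/nfs4/home
    mount -t nfs4 hostname1:/home /mnt/home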

    --

    //Aho

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From Carlos E.R.@2:250/0 to All on Sat Feb 22 12:26:33 2020
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/ 192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)




    For example, if two exports have the same number, they get confused and
    things like you describe happen. So ensure the different entries have a
    explicit different number.

    And how do I ensure they have different numbers?

    You write them.



    I want to (and do) export subdirectories of a mouted filesystem. For
    example /local has a subdiretory /local/usrlocal. I want to mount server:/local/usrlocal onto /usr/local of the client. I also want to
    mount /local on the client

    Is this liable to confuse nfs? Is it legal to do so? Or can I only
    export mountpoints (ie only /local which is mounted from /dev/sdb1 on
    the server)?

    Better post the exports lines of the server and the fstab of the client,
    so that I can see better.


    The problems I seem to be having seem to be on an SSID drive on the
    server.


    The weird thing is that when I ran Mga5 I never had any problems with
    the nfs. It is only with Mga7 that everything seems to be going to hell.

    nfs version 4 on both?


    --
    Cheers, Carlos.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sat Feb 22 18:45:22 2020
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/ 192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)

    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option? I.e., finding out what it is assigning?






    For example, if two exports have the same number, they get confused and
    things like you describe happen. So ensure the different entries have a
    explicit different number.

    And how do I ensure they have different numbers?

    You write them.



    I want to (and do) export subdirectories of a mouted filesystem. For
    example /local has a subdiretory /local/usrlocal. I want to mount
    server:/local/usrlocal onto /usr/local of the client. I also want to
    mount /local on the client

    Is this liable to confuse nfs? Is it legal to do so? Or can I only
    export mountpoints (ie only /local which is mounted from /dev/sdb1 on
    the server)?

    Better post the exports lines of the server and the fstab of the client,
    so that I can see better.


    The problems I seem to be having seem to be on an SSID drive on the
    server.


    The weird thing is that when I ran Mga5 I never had any problems with
    the nfs. It is only with Mga7 that everything seems to be going to hell.

    nfs version 4 on both?



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From J.O. Aho@2:250/0 to All on Sat Feb 22 19:40:00 2020
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)

    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    If you haven't configured them, then the file system uuid will be used,
    if the file system supports uuids and has one set; if not, it may be
    set to 0, which means you may have multiple export roots. Exporting
    multiple mount points on the same file system will also give you the
    same fsid, which is why it's recommended to just export the common root
    and then mount the right directory on the client side.

    --

    //Aho

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From Carlos E.R.@2:250/0 to All on Sat Feb 22 20:47:06 2020
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.
    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)

    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each exported directory.
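
    Something along these lines, reusing your own paths and network purely
    as an illustration:

    /local          142.123.234.0/26(rw,fsid=1,no_subtree_check)
    /local/usrlocal 142.123.234.0/26(rw,fsid=2,no_subtree_check)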

    --
    Cheers, Carlos.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sat Feb 22 21:11:54 2020
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)

    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    If you haven't configured them, then the file system uuid will be used,
    if the file system supports uuid and has one set, if not then it may be
    set to 0 which means you may have multiple export roots. Also exporting

    OK, I do have UUIDs on those. So it is probable that it is using the
    same fsid for each of those subdirectories that I am exporting from the
    main partition.

    multiple mount points on the file system will also generate that you
    have same fsid, why it's recommended to just export the common root and
    then on the client side mount the right directory.

    This is really really insane. The exportfs program should assign
    different fsids to each distinct export.
    And exporting the whole partition could be very dangerous if there are
    sensitive things inside that partition. For example I have an encrypted
    file on /local. While the encryption should protect the contents, it is
    insane to give people on all of the clients access to it.

    Also I do not know what the purpose is of exporting the root filesystem.
    There are lots of other partitions mounted on that root, and it is those partitions which I want to use on the other machines, not /.

    Or I presume what you meant was that I should export each partition's mountpoint location.

    This all strikes me as a real kludge and a very buggy one. Especially as
    this is NOT spelled out in the documentation on nfs.

    Again, is there any way of finding out what the fsid is that is assigned
    to the various exported stuff by exportfs?


    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From Carlos E.R.@2:250/0 to All on Sat Feb 22 21:33:06 2020
    On 22/02/2020 22.11, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)

    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    If you haven't configured them, then the file system uuid will be used,
    if the file system supports uuid and has one set, if not then it may be
    set to 0 which means you may have multiple export roots. Also exporting

    OK, I do have UUIDs on those. So it is probable that it is using the
    same fsid for each of those subdirectories that I am exporting from the
    main partition.

    multiple mount points on the file system will also generate that you
    have same fsid, why it's recommended to just export the common root and
    then on the client side mount the right directory.

    This is really really insane. The exportfs program should assign
    different fsid s to each distinct export.
    And exporting the whole partition could be very dangerous if there are sensitive things inside those partition. For example I have an encrypted
    file on /local. While the encryption should protect the contents, it is insane to given people on all of the clients access to it.

    Also I do not know what the purpose is of exporting the root filesystem. There are lots of other partitions mounted on that root, and it is those partitions which I want to use on the other machines, not /.

    You do not *have* to export the root filesystem, you just can. For
    backup on another machine, for instance. Whatever.


    Or I presume what you meant was that I should export each partition's mountpoint location.

    This all strikes me as a real kludge and a very buggy one. Especially as
    this is NOT spelled out in the documentation on nfs.


    It is. Man exports.

    fsid=num|root|uuid
    NFS needs to be able to identify each filesystem that it exports.
    Normally it will use a UUID for the filesystem (if the filesystem
    has such a thing) or the device number of the device holding the
    filesystem (if the filesystem is stored on the device).

    As not all filesystems are stored on devices, and not all
    filesystems have UUIDs, it is sometimes necessary to explicitly
    tell NFS how to identify a filesystem. This is done with the
    fsid= option.

    For NFSv4, there is a distinguished filesystem which is the root
    of all exported filesystems. This is specified with fsid=root or
    fsid=0 both of which mean exactly the same thing.

    Other filesystems can be identified with a small integer, or a
    UUID which should contain 32 hex digits and arbitrary punctuation.

    Linux kernels version 2.6.20 and earlier do not understand the
    UUID setting so a small integer must be used if an fsid option
    needs to be set for such kernels. Setting both a small number and
    a UUID is supported so the same configuration can be made to work
    on old and new kernels alike.



    Again, is there any way of finding out what the fsid is that is assigend
    to the various exported stuff by exportfs?


    Telcontar:~ # exportfs -s
    /data/storage_c/repositorios_zypp 127.0.0.1(rw,sync,wdelay,nohide,no_subtree_check,fsid=1234,sec=sys,insecure,no_root_squash,no_all_squash)
    /data/storage_c/repositorios_zypp ::1(rw,sync,wdelay,nohide,no_subtree_check,fsid=1234,sec=sys,insecure,no_root_squash,no_all_squash)
    /data/storage_c/repositorios_zypp 192.168.1.0/24(rw,sync,wdelay,nohide,no_subtree_check,fsid=1235,sec=sys,insecure,no_root_squash,no_all_squash)

    --
    Cheers, Carlos.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From J.O. Aho@2:250/0 to All on Sat Feb 22 23:23:30 2020
    On 22/02/2020 22.11, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)

    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    If you haven't configured them, then the file system uuid will be used,
    if the file system supports uuid and has one set, if not then it may be
    set to 0 which means you may have multiple export roots. Also exporting

    OK, I do have UUIDs on those. So it is probable that it is using the
    same fsid for each of those subdirectories that I am exporting from the
    main partition.

    Most likely, and I guess you don't specify a root (fsid=0) under which
    the other exported folders are found.


    multiple mount points on the file system will also generate that you
    have same fsid, why it's recommended to just export the common root and
    then on the client side mount the right directory.

    This is really really insane. The exportfs program should assign
    different fsid s to each distinct export.
    And exporting the whole partition could be very dangerous if there are sensitive things inside those partition.

    No one was saying you should export the whole partition.


    For example I have an encrypted
    file on /local. While the encryption should protect the contents, it is insane to given people on all of the clients access to it.

    Encryption is only there to protect data at rest, nothing else.


    Also I do not know what the purpose is of exporting the root filesystem. There are lots of other partitions mounted on that root, and it is those partitions which I want to use on the other machines, not /.

    Then export them properly.


    Or I presume what you meant was that I should export each partition's mountpoint location.

    It's always better to export partitions separately and also keep in mind
    that if you export a mount point before you have mounted the partition,
    then the directory will be empty for the client.

    The recommendation is to have an nfs export root (the path to this can
    be whatever you want), bind mount the exports under it, and set an fsid
    on each export if you want nfs to be completely sure about the exports.


    This all strikes me as a real kludge and a very buggy one. Especially as
    this is NOT spelled out in the documentation on nfs.

    Strange, there is this line for fsid:

    For NFSv4, there is a distinguished filesystem which is the root of all
    exported filesystem. This is specified with fsid=root or fsid=0 both
    of which mean exactly the same thing.

    This also gives you a vital hint how you should set up your nfs server
    regarding exports. This is a big difference between nfs4 and nfs3 and
    earlier.


    Again, is there any way of finding out what the fsid is that is assigend
    to the various exported stuff by exportfs?

    You need to check two things, assuming you are using a newer kernel than
    2.6.20: what you configure in your exports file, and the output of
    blkid for the partitions you export.
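
    For instance (the device name is illustrative), on the server:

    blkid /dev/sdb1         # the filesystem UUID that nfs falls back on
    exportfs -v             # the exports as currently applied, with their options
    cat /var/lib/nfs/etab   # the expanded export table handed to the kernel (path on most setups)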


    --

    //Aho

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From J.O. Aho@2:250/0 to All on Sat Feb 22 23:24:55 2020
    On 22/02/2020 21.47, Carlos E.R. wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.
    So it is definitely nfs that has gotten completely and utterly
    confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)

    Is there some way of checking what numbers are assigned if I do not have
    the fsid=  as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each exported directory.

    fsid=0 for the export root and then +1 for each further export; this is
    mentioned in the nfs documentation.
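
    So, schematically (paths and network are only illustrative, following the
    /srv/nfs4 example earlier in the thread):

    /srv/nfs4        192.168.1.0/24(rw,sync,fsid=0,no_subtree_check)
    /srv/nfs4/local  192.168.1.0/24(rw,sync,fsid=1,nohide,no_subtree_check)
    /srv/nfs4/extra  192.168.1.0/24(rw,sync,fsid=2,nohide,no_subtree_check)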

    --

    //Aho

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 03:32:00 2020
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.
    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)

    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each exported directory.

    I have. But I would still like to know what was happening before. It
    wasted hours of my time to no purpose.



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 03:34:02 2020
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 21.47, Carlos E.R. wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.
    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/ 192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)

    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each
    exported directory.

    fsid=0 for the export root and the rest +1 for each export, this is mentioned in the nfs documentation.

    As an option, with completely unknown usefulness; nothing says it is
    absolutely crucial if nfs is to work. It is important enough that it
    should be in the sample exports file that is shipped with nfs.



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 03:47:53 2020
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 22.11, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:

    It is an option on the exports file.

    /data/ 192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)

    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    If you haven't configured them, then the file system uuid will be used,
    if the file system supports uuid and has one set; if not, it may be
    set to 0, which means you may have multiple export roots. Also exporting

    OK, I do have UUIDs on those. So it is probable that it is using the
    same fsid for each of those subdirectories that I am exporting from the
    main partition.

    Most likely, and I guess you don't specify a root (fsid=0) under which
    the other exported folders are found.


    multiple mount points on the file system will also result in you having
    the same fsid, which is why it's recommended to just export the common
    root and then mount the right directory on the client side.

    This is really, really insane. The exportfs program should assign
    different fsids to each distinct export.
    And exporting the whole partition could be very dangerous if there are
    sensitive things inside that partition.

    No one was saying you should export the whole partition.

    And if I do not, then exporting the subdirectories in that partition
    gives me the mess I had to wade through.



    For example, I have an encrypted
    file on /local. While the encryption should protect the contents, it is
    insane to give people on all of the clients access to it.

    Encryption only protects data at rest, nothing else.

    I agree. But not making it available in the first place is one more
    barrier against it being compromised.



    Also I do not know what the purpose is of exporting the root filesystem.
    There are lots of other partitions mounted on that root, and it is those
    partitions which I want to use on the other machines, not /.

    Then export them properly.

    Yes, except that it is not clear what "export them properly" means. I
    thought I had exported them properly. It worked for 20 years. Then it
    failed to work and landed me in a swamp.



    Or I presume what you meant was that I should export each partition's
    mountpoint location.

    It's always better to export partitions separately and also keep in mind that if you export a mount point before you have mounted the partition,
    then the directory will be empty for the client.

    The recommendation is to have a nfs export root (path to this can be
    what ever you want), bind mount the exports here and set fsid to each
    export if you want to be sure that nfs will be completely sure about the exports.


    This all strikes me as a real kludge and a very buggy one. Especially as
    this is NOT spelled out in the documentation on nfs.

    Strange, there is this line for fsid:

    For NFSv4, there is a distinguished filesystem which is the root of all
    exported filesystem. This is specified with fsid=root or fsid=0 both
    of which mean exactly the same thing.

    That, I am afraid, is pretty opaque to anyone who does not already know
    it. For me, root is the / partition. It is not some arbitrary partition.
    The root of all the exported filesystems IS / on my system.
    But you are saying, I believe, that I can set up some arbitrary filesystem
    somewhere, say /This/Is/my/NFSroot/,
    create directories in it for each of the filesystems I
    might want to export, bind mount the directories into there, define
    /This/Is/my/NFSroot/
    as fsid=0 in /etc/exports, and in exports define each of the directories I
    want to export relative to this root. And that is not a kludge?
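
    For what it's worth, the payoff of that layout shows up on the client
    side: an NFSv4 client addresses paths relative to the fsid=0 root rather
    than the server's real paths. Roughly, reusing the hypothetical path from
    the question above:

    # mount the whole pseudo-root
    mount -t nfs4 server:/ /mnt
    # or mount a single export, named relative to the root,
    # not as server:/This/Is/my/NFSroot/local
    mount -t nfs4 server:/local /local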


    This do also give you a vital hint how you should setup your nfs server regarding to exports. This is a big difference between nfs4 and nfs3 and earlier.

    Yes, and "earlier" also no longer works.



    Again, is there any way of finding out what the fsid is that is assigend
    to the various exported stuff by exportfs?

    You need to do two things, assuming you are using a newer kernel than 2.6.20, check what you configure in your exports file and the use of
    blkid for the partitions you export.

    kernel 5.5

    ??? What "use of blkid"?

    Again, it appears to be true, from the absence of anyone being able to
    tell me how, that it is impossible to know what fsid exportfs assigns to
    an exported directory if I do not explicitly enter a fsid= option in
    /etc/exports for each directory I export.
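
    For anyone hitting the same wall: two things that can at least be checked
    on the server are exportfs -v and /var/lib/nfs/etab, which show the option
    string in effect for each export. An explicitly set fsid= shows up there;
    a default derived from the filesystem UUID may not, so this is only a
    partial answer, not a guarantee:

    exportfs -v
    cat /var/lib/nfs/etab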





    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 04:06:04 2020
    On 2020-02-23, William Unruh <unruh@invalid.ca> wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 22.11, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure) >>>>>
    Is there some way of checking what numbers are assigned if I do not have >>>>> the fsid= as an option. Ie, finding out what it is assigning?

    If you haven't configured them, then the file system uuid will be used, >>>> if the file system supports uuid and has one set, if not then it may be >>>> set to 0 which means you may have multiple export roots. Also exporting >>>
    OK, I do have UUIDs on those. So it is probable that it is using the
    same fsid for each of those subdirectories that I am exporting from the
    main partition.

    Most likely and I guess you don't specify a root (fsid 0) in which you
    find the other exported folders.


    multiple mount points on the file system will also generate that you
    have same fsid, why it's recommended to just export the common root and >>>> then on the client side mount the right directory.

    This is really really insane. The exportfs program should assign
    different fsid s to each distinct export.
    And exporting the whole partition could be very dangerous if there are
    sensitive things inside those partition.

    No one was saying you should export the whole partition.

    And if I do not, then exporting the subdirectories in that partition
    gives me the mess I had to wade through.



    For example I have an encrypted
    file on /local. While the encryption should protect the contents, it is
    insane to given people on all of the clients access to it.

    Encryption is only to protect data in rest, nothing else.

    I agree. But not making it available in the first place is one more
    barrier against it being comprimised.



    Also I do not know what the purpose is of exporting the root filesystem. >>> There are lots of other partitions mounted on that root, and it is those >>> partitions which I want to use on the other machines, not /.

    The export them properly.

    Yes. except that it is not clear what "export them properly" means. I
    thought I had exported them properly. It worked for 20 years. Then it
    failed to work and landed me in a swamp.



    Or I presume what you meant was that I should export each partition's
    mountpoint location.

    It's always better to export partitions separately and also keep in mind
    that if you export a mount point before you have mounted the partition,
    then the directory will be empty for the client.

    The recommendation is to have a nfs export root (path to this can be
    what ever you want), bind mount the exports here and set fsid to each
    export if you want to be sure that nfs will be completely sure about the
    exports.


    This all strikes me as a real kludge and a very buggy one. Especially as >>> this is NOT spelled out in the documentation on nfs.

    Strange, there is this line for fsid:

    For NFSv4, there is a distinguished filesystem which is the root of all
    exported filesystem. This is specified with fsid=root or fsid=0 both
    of which mean exactly the same thing.

    That I am afraid is pretty opaque to anyone who does not already know
    it. For me, root is the / partition. It is not some arbitrary partition.
    The root of all the exported filesystems IS / on my system.
    But you are saying I believe that I can set up some arbitrary filesystem somewhere, say /This/Is/my/NFSroot/
    create directories in this directory which are each of the filesystems I might want to export, bind mount the directories into there, define
    /This/Is/my/NFSroot/
    as fsid=0 in /exports, and in exports define each of the directories I
    want to export relative to this root. And that is not a kludge?


    This do also give you a vital hint how you should setup your nfs server
    regarding to exports. This is a big difference between nfs4 and nfs3 and
    earlier.

    Yes, and "earlier" also no longer works.



    Again, is there any way of finding out what the fsid is that is assigend >>> to the various exported stuff by exportfs?

    You need to do two things, assuming you are using a newer kernel than
    2.6.20, check what you configure in your exports file and the use of
    blkid for the partitions you export.

    kernel 5.5

    ??? What "use of blkid"?

    Again, it appears to be true, from the absense of anyone being able to
    tell me how, that it is impossible to know what fsid exportfs assigns to
    an exported directory if I do not explicitly enter a fsid= option in /etc/exports for each directory I export.

    To clarify things for me:
    I have a partition from an SSD mounted on /fastlocal. What default fsid
    will be assigned if I do not explicitly assign it an fsid in the exports file?
    This directory has a subdirectory /fastlocal/usrlocal which I also
    export. What default fsid will be assigned to this subdirectory by
    exportfs if I do not assign any fsid= for it in /etc/exports?
    There is another subdirectory, /fastlocal/unruh/mail, which is
    mounted from an encrypted file package via cryptmount. If I place /fastlocal/unruh/mail/ into /etc/exports with no fsid= line, what fsid
    will exportfs assign to this directory? Are any of these liable to have
    the same fsid assigned by default by exportfs?
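
    One way to see which filesystem, and hence which UUID, each of those paths
    would fall back to is to ask findmnt which mount the path lives on (paths
    taken from the question above; column support may vary with the util-linux
    version):

    findmnt -T /fastlocal            -o TARGET,SOURCE,UUID
    findmnt -T /fastlocal/usrlocal   -o TARGET,SOURCE,UUID
    findmnt -T /fastlocal/unruh/mail -o TARGET,SOURCE,UUID

    The first two should report the same source device and UUID (one
    filesystem); the cryptmount-backed path should report a /dev/mapper
    device of its own.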






    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From J.O. Aho@2:250/0 to All on Sun Feb 23 08:33:20 2020
    On 23/02/2020 04.47, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 22.11, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:

    Also I do not know what the purpose is of exporting the root filesystem. >>> There are lots of other partitions mounted on that root, and it is those >>> partitions which I want to use on the other machines, not /.

    The export them properly.

    Yes. except that it is not clear what "export them properly" means. I
    thought I had exported them properly. It worked for 20 years. Then it
    failed to work and landed me in a swamp.

    Yes, but you used an older version of nfs; I did my switch some 20
    years ago. Back then there was a lot of good online documentation on how
    to configure nfs4. I guess nowadays you may find some not-so-well-written
    documentation by people from a country that spits out more than 2
    million software developers each year.


    Or I presume what you meant was that I should export each partition's
    mountpoint location.

    It's always better to export partitions separately and also keep in mind
    that if you export a mount point before you have mounted the partition,
    then the directory will be empty for the client.

    The recommendation is to have a nfs export root (path to this can be
    what ever you want), bind mount the exports here and set fsid to each
    export if you want to be sure that nfs will be completely sure about the
    exports.


    This all strikes me as a real kludge and a very buggy one. Especially as >>> this is NOT spelled out in the documentation on nfs.

    Strange, there is this line for fsid:

    For NFSv4, there is a distinguished filesystem which is the root of all >> exported filesystem. This is specified with fsid=root or fsid=0 both
    of which mean exactly the same thing.

    That I am afraid is pretty opaque to anyone who does not already know
    it. For me, root is the / partition. It is not some arbitrary partition.
    The root of all the exported filesystems IS / on my system.
    But you are saying I believe that I can set up some arbitrary filesystem somewhere, say /This/Is/my/NFSroot/
    create directories in this directory which are each of the filesystems I might want to export, bind mount the directories into there, define
    /This/Is/my/NFSroot/
    as fsid=0 in /exports, and in exports define each of the directories I
    want to export relative to this root. And that is not a kludge?

    For me it felt quite clear and I did point this out in earlier posts.


    This do also give you a vital hint how you should setup your nfs server
    regarding to exports. This is a big difference between nfs4 and nfs3 and
    earlier.

    Yes, and "earlier" also no longer works.

    You can't expect configurations to just work when switching major
    versions of an application; it's like expecting a modern car to
    be started with a hand crank like the early gas-engine cars.


    Again, is there any way of finding out what the fsid is that is assigend >>> to the various exported stuff by exportfs?

    You need to do two things, assuming you are using a newer kernel than
    2.6.20, check what you configure in your exports file and the use of
    blkid for the partitions you export.

    kernel 5.5

    ??? What "use of blkid"?

    It's one of the commands that come with util-linux; it will tell you a
    block device's UUID, and the man page will tell you more.


    Again, it appears to be true, from the absense of anyone being able to
    tell me how, that it is impossible to know what fsid exportfs assigns to
    an exported directory if I do not explicitly enter a fsid= option in /etc/exports for each directory I export.

    This is something nfs handles internally; it doesn't expose it. That's why
    there isn't an easy way to see what nfs has picked, and
    that is why you need to look at your exports and use blkid.

    --

    //Aho

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From J.O. Aho@2:250/0 to All on Sun Feb 23 08:38:56 2020
    On 23/02/2020 05.06, William Unruh wrote:

    To clarify things for me:
    I have a partiton from and ssdisk mounted on /fastlocal. What will the default fsid be assigned if I do not explicitly assign it an fsid in the exports file?

    use blkid to see the UUID of the partition that you mounted to /fastlocal.

    This directory has a subdirectory /fastlocal/usrlocal which I also
    export. What will the default fsid be assigned to this subdirectory by exports if I do not assign any fsid= for it in /etc/fstab.

    Use blkid to see the UUID for the partition; I would guess it would
    be the same.

    There is another subdirectory directory, /fastlocal/unruh/mail which is mounted from an encryped file package via cryptmount. If I place /fastlocal/unruh/mail/ into /etc/exports with no fsid= line, what fsid
    will exportfs assign to this directory. Are any of these liable to have
    the same fsid assigned by default by exportfs?

    Use blkid and see the UUID of the encrypted partition. In theory it
    could be the same as for the above, but then your system is quite crap at
    random values, or you changed it.

    Keep in mind that if you do a lot of dd and don't bother changing the
    UUID, you will end up with many file systems with the same UUID.
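
    As an aside, if that ever happens (say, after dd'ing an ext2/3/4
    filesystem onto another partition), the copy can be given a fresh UUID;
    the device name below is a placeholder:

    tune2fs -U random /dev/sdb1   # new random UUID for an ext2/3/4 filesystem
    # (xfs_admin -U generate would be the rough XFS equivalent)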

    --

    //Aho

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From J.O. Aho@2:250/0 to All on Sun Feb 23 08:40:38 2020
    On 23/02/2020 04.34, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 21.47, Carlos E.R. wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first >>>>>>>> subdirectory, instead of about 20 files or directories as /local has. >>>>>>>>
    So it is definitely nfs that has gotten completely and utterly >>>>>>>> confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure) >>>>>

    Is there some way of checking what numbers are assigned if I do not have >>>> the fsid=  as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each
    exported directory.

    fsid=0 for the export root and the rest +1 for each export, this is
    mentioned in the nfs documentation.

    As an option, with completely unknown usefullness. Not that it is
    absolutely crucial if nfs is to work. It is important enough that it
    should be in the sample exports file that is shipped with nfs

    This has been the recommendation for nfs4 for the last 20 years; the case
    was different with nfs2/3.

    --

    //Aho

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From Carlos E.R.@2:250/0 to All on Sun Feb 23 12:22:54 2020
    On 23/02/2020 04.32, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first >>>>>>> subdirectory, instead of about 20 files or directories as /local has. >>>>>>>
    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure) >>>
    Is there some way of checking what numbers are assigned if I do not have >>> the fsid= as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each
    exported directory.

    I have. But I would still like to know what was happening before. It
    wasted hours of my time to no purpose.

    Well, I told you what was happening :-)
    Because I had the same problem once.

    Simply give the parent of all your exports number zero, and different
    numbers to each one of the rest. No need to really export the zero. I don't.
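
    Read that way, the exports file just carries a distinct number on every
    line, with the imagined parent never actually listed; a sketch using
    paths from this thread (the numbers are arbitrary, as long as they differ):

    /local               192.168.1.0/24(rw,fsid=1,no_root_squash,no_subtree_check)
    /fastlocal           192.168.1.0/24(rw,fsid=2,no_root_squash,no_subtree_check)
    /fastlocal/usrlocal  192.168.1.0/24(rw,fsid=3,no_root_squash,no_subtree_check)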

    --
    Cheers, Carlos.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 16:08:02 2020
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 04.32, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first >>>>>>>> subdirectory, instead of about 20 files or directories as /local has. >>>>>>>>
    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure) >>>>
    Is there some way of checking what numbers are assigned if I do not have >>>> the fsid= as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each
    exported directory.

    I have. But I would still like to know what was happening before. It
    wasted hours of my time to no purpose.

    Well, I told you what was happening :-)
    Because I had the same problem once.

    Simply give the parent of all your exports number zero, and different numbers to each one of the rest. No need to really export the zero. I don't.

    I have no idea what "the parent" is or means. My exports are scattered
    amongst the subdirectories on the server. So the only "parent" is /
    which I sure do not want to export or even have in /etc/exports.



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 16:23:50 2020
    On 2020-02-23, J.O. Aho <user@example.net> wrote:
    On 23/02/2020 05.06, William Unruh wrote:

    To clarify things for me:
    I have a partiton from and ssdisk mounted on /fastlocal. What will the
    default fsid be assigned if I do not explicitly assign it an fsid in the
    exports file?

    use blkid to see the UUID of the partition that you mounted to /fastlocal.

    And as I said, I am not just exporting partitions, I am also exporting
    subdirectories of those partitions. What is the default for their fsid?
    I am also exporting filesystems held in encrypted file containers on one
    of the partitions. What is their default fsid? If the fsid is important,
    then it should be possible to see what default fsid is being
    assigned. Apparently you are saying that this is impossible.
    A properly written exportfs would make sure that all of the fsids are
    unique for each exported partition. That is what defaults are all about.


    This directory has a subdirectory /fastlocal/usrlocal which I also
    export. What will the default fsid be assigned to this subdirectory by
    exports if I do not assign any fsid= for it in /etc/fstab.

    use blkid to see the UUID for the partition, I would hint with it would
    be the same.

    There is another subdirectory directory, /fastlocal/unruh/mail which is
    mounted from an encryped file package via cryptmount. If I place
    /fastlocal/unruh/mail/ into /etc/exports with no fsid= line, what fsid
    will exportfs assign to this directory. Are any of these liable to have
    the same fsid assigned by default by exportfs?

    Use blkid and see the UUID of the encrypted partition, in theory it
    could the same as for the above, but then your system is quite crap on random values or you changed.

    I have no worries that randomly assigned uuids are unique. However,
    those are NOT encrypted partitions. They are encrypted files on a
    partition, mounted using cryptmount. They do apparently have a blkid, but
    it is those mount points I had the most trouble with. I would request a
    mount of one of the subdirectory partitions (/local/unruhhome) and instead
    would get one of the encrypted filesystems mounted rather than the
    partition I requested, and that in a subdirectory. nfs was totally
    confused about something.


    keep in mind that if you do a lot of dd and don't care of changing UUID,
    you will end up with many file systems with the same UUID.

    Nope, all the partition UUIDs are different according to blkid.

    Anyway, I do NOT want to infer what the fsids assigned by default were. I
    want to know what they were. Again, you are skirting the question. How do
    I check what the default fsids are? Not "How do I read a badly written
    documentation and infer what they are." And again, using inference, what
    are the fsids of subdirectories of a mounted partition? Are they the
    fsids of that mounted partition, or is there some other default
    assignment?



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 16:37:39 2020
    On 2020-02-23, J.O. Aho <user@example.net> wrote:
    On 23/02/2020 04.47, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 22.11, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:

    Also I do not know what the purpose is of exporting the root filesystem. >>>> There are lots of other partitions mounted on that root, and it is those >>>> partitions which I want to use on the other machines, not /.

    The export them properly.

    Yes. except that it is not clear what "export them properly" means. I
    thought I had exported them properly. It worked for 20 years. Then it
    failed to work and landed me in a swamp.

    Yes, but you used another older version of nfs, I did my switch like 20

    I used the default nfs of Mandrake/Mandriva/Mageia. I have no idea what
    they used.

    years ago. Back then there was many good online documentations how to configure nfs4, I guess nowadays you may find some not that well written

    IF fsids are important for the running of nfs, and if all holy hell can
    break out if they are not assigned properly, then it is incumbent on the
    writers of nfs to a) make sure the defaults are proper and do not allow
    that hell to break out, b) state this clearly in, for example, the man
    pages, and c) put a clear sentence in the example file. The current man
    page is entirely opaque and ambiguous.

    documentations by people from a country that spits out more than 2
    million software developers each year.


    Or I presume what you meant was that I should export each partition's
    mountpoint location.

    It's always better to export partitions separately and also keep in mind >>> that if you export a mount point before you have mounted the partition,
    then the directory will be empty for the client.

    The recommendation is to have a nfs export root (path to this can be
    what ever you want), bind mount the exports here and set fsid to each
    export if you want to be sure that nfs will be completely sure about the >>> exports.


    This all strikes me as a real kludge and a very buggy one. Especially as >>>> this is NOT spelled out in the documentation on nfs.

    Strange, there is this line for fsid:

    For NFSv4, there is a distinguished filesystem which is the root of all >>> exported filesystem. This is specified with fsid=root or fsid=0 both >>> of which mean exactly the same thing.

    That I am afraid is pretty opaque to anyone who does not already know
    it. For me, root is the / partition. It is not some arbitrary partition.
    The root of all the exported filesystems IS / on my system.
    But you are saying I believe that I can set up some arbitrary filesystem
    somewhere, say /This/Is/my/NFSroot/
    create directories in this directory which are each of the filesystems I
    might want to export, bind mount the directories into there, define /This/Is/my/NFSroot/
    as fsid=0 in /exports, and in exports define each of the directories I
    want to export relative to this root. And that is not a kludge?

    For me it felt quite clear and I did point this out in earlier posts.


    This do also give you a vital hint how you should setup your nfs server
    regarding to exports. This is a big difference between nfs4 and nfs3 and >>> earlier.

    Yes, and "earlier" also no longer works.

    You can't expect configurations to just work when switching major
    versions of an application, it's like expecting that a modern car has to
    be started with a hand crank like the early gas engine cars.

    Yes, I can expect that the defaults "just work". If I do not assign
    fsids in the exports file, then the system assigns them by default.
    Those defaults should "just work" whether I come to them from 30 years
    of using Linux, or I am a newbie just coming to them today.
    I had nothing in my exports file which said anything about fsid. That is
    the default, and the default should work properly. I did NOT have any
    non-unique UUIDs, as I have just checked.



    Again, is there any way of finding out what the fsid is that is assigend >>>> to the various exported stuff by exportfs?

    You need to do two things, assuming you are using a newer kernel than
    2.6.20, check what you configure in your exports file and the use of
    blkid for the partitions you export.

    kernel 5.5

    ??? What "use of blkid"?

    It's one of the commands that come with linux-utils, it will tell a
    block device UUID, the man page will tell you more.

    Yes, I do and did know "blkid". I am wondering what your phrase "use of
    blkid for the partitions you export" meant. And as I have stated, I do
    NOT just export partitions. I export subdirectories of those
    partitions. What is the fsid of one of those subdirectories?



    Again, it appears to be true, from the absense of anyone being able to
    tell me how, that it is impossible to know what fsid exportfs assigns to
    an exported directory if I do not explicitly enter a fsid= option in
    /etc/exports for each directory I export.

    This is something nfs handles internally, it don't expose it, that's why
    you there ain't an easy way to do it and see what nfs has picked and
    that is why you need to look at your exports and use blkid.

    Apparently it does it badly. Of course I am assuming it is because of non-unique fsids that I had the trouble I had, and apparently there is
    no way of finding out.



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From Carlos E.R.@2:250/0 to All on Sun Feb 23 20:40:29 2020
    On 23/02/2020 17.08, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 04.32, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first >>>>>>>>> subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure) >>>>>
    Is there some way of checking what numbers are assigned if I do not have >>>>> the fsid= as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each >>>> exported directory.

    I have. But I would still like to know what was happening before. It
    wasted hours of my time to no purpose.

    Well, I told you what was happening :-)
    Because I had the same problem once.

    Simply give the parent of all your exports number zero, and different
    numbers to each one of the rest. No need to really export the zero. I don't.

    I have no idea what "the parent" is or means. My exports are scattered amongst the subdirectories on the server. So the only "parent" is /
    which I sure do not want to export or even have in /etc/exports.

    Then don't export it, but it figuratively is number zero.



    --
    Cheers, Carlos.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From J.O. Aho@2:250/0 to All on Sun Feb 23 20:47:43 2020
    On 23/02/2020 17.37, William Unruh wrote:
    On 2020-02-23, J.O. Aho <user@example.net> wrote:
    On 23/02/2020 04.47, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 22.11, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:

    Also I do not know what the purpose is of exporting the root filesystem. >>>>> There are lots of other partitions mounted on that root, and it is those >>>>> partitions which I want to use on the other machines, not /.

    The export them properly.

    Yes. except that it is not clear what "export them properly" means. I
    thought I had exported them properly. It worked for 20 years. Then it
    failed to work and landed me in a swamp.

    Yes, but you used another older version of nfs, I did my switch like 20

    I used the default nfs of Mandrake/Mandriva/Mageia. I have no idea what
    they used.

    years ago. Back then there was many good online documentations how to
    configure nfs4, I guess nowadays you may find some not that well written

    IF fsid are important for the running of nfs, and if all holy hell can
    brek out if they are not assigned properly, then it is incumbent on the writers of nfs to a) make sure the defaults are proper and do not allow
    that hell to break out, b) are clearly stated in for example the man
    pages, and c) have a clear sentence in the example file. The current man
    page is entirely opaque and ambiguous.

    documentations by people from a country that spits out more than 2
    million software developers each year.


    Or I presume what you meant was that I should export each partition's >>>>> mountpoint location.

    It's always better to export partitions separately and also keep in mind >>>> that if you export a mount point before you have mounted the partition, >>>> then the directory will be empty for the client.

    The recommendation is to have a nfs export root (path to this can be
    what ever you want), bind mount the exports here and set fsid to each
    export if you want to be sure that nfs will be completely sure about the >>>> exports.


    This all strikes me as a real kludge and a very buggy one. Especially as >>>>> this is NOT spelled out in the documentation on nfs.

    Strange, there is this line for fsid:

    For NFSv4, there is a distinguished filesystem which is the root of all
    exported filesystem. This is specified with fsid=root or fsid=0 both >>>> of which mean exactly the same thing.

    That I am afraid is pretty opaque to anyone who does not already know
    it. For me, root is the / partition. It is not some arbitrary partition. >>> The root of all the exported filesystems IS / on my system.
    But you are saying I believe that I can set up some arbitrary filesystem >>> somewhere, say /This/Is/my/NFSroot/
    create directories in this directory which are each of the filesystems I >>> might want to export, bind mount the directories into there, define /This/Is/my/NFSroot/
    as fsid=0 in /exports, and in exports define each of the directories I
    want to export relative to this root. And that is not a kludge?

    For me it felt quite clear and I did point this out in earlier posts.


    This do also give you a vital hint how you should setup your nfs server >>>> regarding to exports. This is a big difference between nfs4 and nfs3 and >>>> earlier.

    Yes, and "earlier" also no longer works.

    You can't expect configurations to just work when switching major
    versions of an application, it's like expecting that a modern car has to
    be started with a hand crank like the early gas engine cars.

    Yes, I can expect that the defaults "just work".

    Just that your default ain't the default for the tool you are using; it's
    a "default" for an older version of a tool that requires rpc to work.


    If I do not assign
    fsids in the exports file, then the system assigns them by default.

    Yes, it does, but it expects you to have a common root for the nfs export.

    Those defaults should "just work" whether I come to them from 30 years
    of using Linux, or I am a newbe just coming to them today.

    nfs4 does work with default configuration, if you make an nfs4
    configuration. Don't forget that those who switched from nfs2 to nfs3 also
    saw strange behavior; they also had to change configuration if they
    wanted things to keep working roughly as before.


    I had nothing in my exports file which said anything about fsid. That is
    the default, and the default should work properly. I did NOT have any non-unique UUIDs as I have just checked.

    But your exports didn't follow the nfs4 standard, so it didn't manage to
    understand what you tried to do, and it did as well as it could.


    Again, is there any way of finding out what the fsid is that is assigend >>>>> to the various exported stuff by exportfs?

    You need to do two things, assuming you are using a newer kernel than
    2.6.20, check what you configure in your exports file and the use of
    blkid for the partitions you export.

    kernel 5.5

    ??? What "use of blkid"?

    It's one of the commands that come with linux-utils, it will tell a
    block device UUID, the man page will tell you more.

    Yes, I do and did know "blkid". I am wondering what your phrase "use of
    blkid for the partitions you export" meant. And as I have stated, I do
    NOT just export partitions. I export subdirectories of those
    partitions. What is the fsid of one of those subdirectories?

    Yes, and as they are on the same file system, they will have the same
    UUID. I can say nfs4 handles duplicates as long as you have a common
    root for your exports (fsid=0); just bind mount all the directories
    you want to export there. Sure, this differs from nfs3, but they are
    similar rather than the same, and thus the recommended config is also a
    bit different.



    Again, it appears to be true, from the absense of anyone being able to
    tell me how, that it is impossible to know what fsid exportfs assigns to >>> an exported directory if I do not explicitly enter a fsid= option in
    /etc/exports for each directory I export.

    This is something nfs handles internally, it don't expose it, that's why
    you there ain't an easy way to do it and see what nfs has picked and
    that is why you need to look at your exports and use blkid.

    Apparently it does it badly. Of course I am assuming it is because of non-unique fsids that I had the trouble I had, and apparently there is
    no way of finding out.

    Things work badly because you use an nfs3 export, which ain't what you are
    supposed to be using with nfs4. Just do the transformation you should
    do, which some of us did already 20 years ago.

    --

    //Aho


    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From J.O. Aho@2:250/0 to All on Sun Feb 23 20:54:10 2020
    On 23/02/2020 17.23, William Unruh wrote:

    Anyway, I do NOT want to infer what the fsid assigned by default were. I
    want to
    know what they were. Again, you are skirting the question. HOw do I
    check what the default fsids are. Not "How do I read a badly written documentation and infer what they are" And again, using inference, what
    are the fsids of subdirectories of a mounted partitions? Are they the
    fsids of that mounted partition, or is there some other default
    assignment?

    It's what you get for the partition with blkid, unless you configured an
    fsid for the export. The important thing is that you assign fsid 0/root to
    the nfs export root directory; then things are generally handled well. Just
    make sure that you have mounted all the file systems you want to export
    before you start your nfs daemon, or else you may export empty directories.
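
    A quick way to sanity-check that ordering on the server (generic commands,
    not specific to this setup):

    mount -a        # bring up everything in fstab, bind mounts included
    exportfs -ra    # re-read /etc/exports and re-export
    exportfs -v     # confirm what is actually exported, and with which options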

    --

    //Aho

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 20:56:10 2020
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 17.08, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 04.32, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first >>>>>>>>>> subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure) >>>>>>
    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each >>>>> exported directory.

    I have. But I would still like to know what was happening before. It
    wasted hours of my time to no purpose.

    Well, I told you what was happening :-)
    Because I had the same problem once.

    Simply give the parent of all your exports number zero, and different
    numbers to each one of the rest. No need to really export the zero. I don't.

    I have no idea what "the parent" is or means. My exports are scattered
    amongst the subdirectories on the server. So the only "parent" is /
    which I sure do not want to export or even have in /etc/exports.

    Then don't export it, but it figuratively is number zero.

    But if / is in /etc/exports (where I assume you would assign fsid=0 to
    it) then you are exporting it. How do you include a directory in /etc/exports and then not export it, but still assign an fsid to it?




    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 21:12:36 2020
    On 2020-02-23, J.O. Aho <user@example.net> wrote:
    On 23/02/2020 17.37, William Unruh wrote:
    On 2020-02-23, J.O. Aho <user@example.net> wrote:
    On 23/02/2020 04.47, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 22.11, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:

    Also I do not know what the purpose is of exporting the root filesystem.
    There are lots of other partitions mounted on that root, and it is those
    partitions which I want to use on the other machines, not /.

    The export them properly.

    Yes. except that it is not clear what "export them properly" means. I
    thought I had exported them properly. It worked for 20 years. Then it
    failed to work and landed me in a swamp.

    Yes, but you used another older version of nfs, I did my switch like 20

    I used the default nfs of Mandrake/Mandriva/Mageia. I have no idea what
    they used.

    years ago. Back then there was many good online documentations how to
    configure nfs4, I guess nowadays you may find some not that well written

    IF fsid are important for the running of nfs, and if all holy hell can
    brek out if they are not assigned properly, then it is incumbent on the
    writers of nfs to a) make sure the defaults are proper and do not allow
    that hell to break out, b) are clearly stated in for example the man
    pages, and c) have a clear sentence in the example file. The current man
    page is entirely opaque and ambiguous.

    documentations by people from a country that spits out more than 2
    million software developers each year.


    Or I presume what you meant was that I should export each partition's >>>>>> mountpoint location.

    It's always better to export partitions separately and also keep in mind >>>>> that if you export a mount point before you have mounted the partition, >>>>> then the directory will be empty for the client.

    The recommendation is to have a nfs export root (path to this can be >>>>> what ever you want), bind mount the exports here and set fsid to each >>>>> export if you want to be sure that nfs will be completely sure about the >>>>> exports.


    This all strikes me as a real kludge and a very buggy one. Especially as
    this is NOT spelled out in the documentation on nfs.

    Strange, there is this line for fsid:

    For NFSv4, there is a distinguished filesystem which is the root of all
    exported filesystem. This is specified with fsid=root or fsid=0 both
    of which mean exactly the same thing.

    That I am afraid is pretty opaque to anyone who does not already know
    it. For me, root is the / partition. It is not some arbitrary partition. >>>> The root of all the exported filesystems IS / on my system.
    But you are saying I believe that I can set up some arbitrary filesystem >>>> somewhere, say /This/Is/my/NFSroot/
    create directories in this directory which are each of the filesystems I >>>> might want to export, bind mount the directories into there, define /This/Is/my/NFSroot/
    as fsid=0 in /exports, and in exports define each of the directories I >>>> want to export relative to this root. And that is not a kludge?

    For me it felt quite clear and I did point this out in earlier posts.


    This do also give you a vital hint how you should setup your nfs server >>>>> regarding to exports. This is a big difference between nfs4 and nfs3 and >>>>> earlier.

    Yes, and "earlier" also no longer works.

    You can't expect configurations to just work when switching major
    versions of an application, it's like expecting that a modern car has to >>> be started with a hand crank like the early gas engine cars.

    Yes, I can expect that the defaults "just work".

    Just that your default ain't default for the tool you are using, it's a "default" for a older version of a tool that require rpc to work.


    If I do not assign
    fsids in the exports file, then the system assigns them by default.

    Yes, it does, but it expect you to have a common root for the nfs export.

    Those defaults should "just work" whether I come to them from 30 years
    of using Linux, or I am a newbe just coming to them today.

    nfs4 do work with default configuration, if you make a nfs4
    configuration. Don't forget those who switched from nfs2 to nfs3 had
    also strange behavior, they also had to change configuration if they
    wanted things to keep on working kind of as before.


    I had nothing in my exports file which said anything about fsid. That is
    the default, and the default should work properly. I did NOT have any
    non-unique UUIDs as I have just checked.

    But your exports didn't follow the nfs4 standard, so it didn't manage to understand what you tried to do, so it did as well it could.


    Again, is there any way of finding out what the fsid is that is assigend
    to the various exported stuff by exportfs?

    You need to do two things, assuming you are using a newer kernel than >>>>> 2.6.20, check what you configure in your exports file and the use of >>>>> blkid for the partitions you export.

    kernel 5.5

    ??? What "use of blkid"?

    It's one of the commands that come with linux-utils, it will tell a
    block device UUID, the man page will tell you more.

    Yes, I do and did know "blkid". I am wondering what your phrase "use of
    blkid for the partitions you export" meant. And as I have stated, I do
    NOT just export partitions. I export subdirectories of those
    partitions. What is the fsid of one of those subdirectories?

    Yes, and as they are on the same file system, they will have the same
    UUID. I can say nfs4 handles duplicates as long as you do have a common
    root for your exports (fsid=0), then just bind mount all the directories
    you want to export there, sure this differs from nfs3, but they are
    similar but not the same and those the recommended config is also a bit different.

    Sorry, nowhere in the docs does it say that you MUST have a root
    filesystem. Nowhere does it say that it will assign the same fsid to two
    different exports. Nowhere does it say what you say it does
    (and it is stupid to assign the same fsid to two different exports). It
    is a bug, which you seem intent on making into a feature. Features usually
    accomplish something, no matter how remote; if you are arguing that
    completely messing up mounting on clients is "accomplishing something"
    (increasing the world's entropy, wasting sysadmins' time), then I must beg
    to differ with your definition of "accomplishing something".

    It is a bug. It is a nasty bug. And it is not a bug that I think your
    kludge will in all cases work around. It is a bug that, it seems, one
    can avoid if one explicitly assigns a unique fsid to all of the exports
    (although that also might not have been the explanation for the mess it
    created for me; so far it seems to be a solution, but then again, the
    old system worked for me for 30 years or so and then crashed miserably).

    By the way, I have no idea why your kludge would work. The UUID for the
    bind mounts will surely be either the UUID of the root mount (fsid=0) or
    the UUID of the partitions which they originally came from. So there
    will be huge numbers of exports with the same fsid. Or are you also
    saying that you should also assign an fsid to each of those exports and
    not rely on the "default" assignment?

    How do you know that all your fsids are different since there is nothing
    that can tell you what they are?



    Again, it appears to be true, from the absense of anyone being able to >>>> tell me how, that it is impossible to know what fsid exportfs assigns to >>>> an exported directory if I do not explicitly enter a fsid= option in
    /etc/exports for each directory I export.

    This is something nfs handles internally, it don't expose it, that's why >>> you there ain't an easy way to do it and see what nfs has picked and
    that is why you need to look at your exports and use blkid.

    Apparently it does it badly. Of course I am assuming it is because of
    non-unique fsids that I had the trouble I had, and apparently there is
    no way of finding out.

    Things works badly for you use a nfs3 export, which ain't what you are supposed to be using with nfs4. just do the transformation you should
    do, which some of us did already 20 years ago.


    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From Carlos E.R.@2:250/0 to All on Sun Feb 23 21:16:22 2020
    On 23/02/2020 21.56, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 17.08, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 04.32, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote: >>>>>>>>>> On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first >>>>>>>>>>> subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure) >>>>>>>
    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each >>>>>> exported directory.

    I have. But I would still like to know what was happening before. It >>>>> wasted hours of my time to no purpose.

    Well, I told you what was happening :-)
    Because I had the same problem once.

    Simply give the parent of all your exports number zero, and different
    numbers to each one of the rest. No need to really export the zero. I don't.

    I have no idea what "the parent" is or means. My exports are scattered
    amongst the subdirectories on the server. So the only "parent" is /
    which I sure do not want to export or even have in /etc/exports.

    Then don't export it, but it figuratively is number zero.

    But if / is in /etc/exports (where I assume you would assign fsid=0 to
    it) then you are exporting it. How do you include a directory in
    /etc/exports
    and then not export it, but still assign an fsid to it?

    Sigh...

    Just imagine you wrote it. But don't write it.



    --
    Cheers, Carlos.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 22:09:13 2020
    On 2020-02-23, J.O. Aho <user@example.net> wrote:
    On 23/02/2020 17.23, William Unruh wrote:

    Anyway, I do NOT want to infer what the fsids assigned by default were. I
    want to know what they were. Again, you are skirting the question. How do
    I check what the default fsids are? Not "How do I read badly written
    documentation and infer what they are". And again, using inference, what
    are the fsids of subdirectories of a mounted partition? Are they the
    fsids of that mounted partition, or is there some other default
    assignment?

    It's what you get for the partition with blkid, unless you configured an
    fsid for the export. The important thing is that you assign fsid=0 (the
    root) to the nfs export root directory; then things are generally handled
    well. Just see to it that you have mounted all the file systems you want
    to export before you start your nfs daemon, or else you may export empty
    directories.
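    Roughly speaking, the ordering being described is the following (the
    mount points are only examples, and the systemd unit name can differ
    between distributions):

      # mount everything that will be exported, first
      mount /local
      mount /data
      # only then start (or restart) the NFS server
      systemctl restart nfs-server
      # or, if it is already running, re-read /etc/exports and re-export
      exportfs -ra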

    I was just reading a post which says that while it used to be that an
    fsid=0 was needed for nfs4, it is no longer needed.

    As I said, things used to be handled well. And suddenly they stopped
    being handled well. Nothing changed in nfs on the server or the client,
    but it stopped working properly. So the experience that it works well
    does not mean that it will continue working well.



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Sun Feb 23 22:11:32 2020
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 21.56, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 17.08, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 04.32, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)
    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning?

    Then edit the file and write the option now. Different number for each
    exported directory.

    I have. But I would still like to know what was happening before. It
    wasted hours of my time to no purpose.

    Well, I told you what was happening :-)
    Because I had the same problem once.

    Simply give the parent of all your exports number zero, and different
    numbers to each one of the rest. No need to really export the zero. I don't.

    I have no idea what "the parent" is or means. My exports are scattered >>>> amongst the subdirectories on the server. So the only "parent" is /
    which I sure do not want to export or even have in /etc/exports.

    Then don't export it, but it figuratively is number zero.

    But if / is in /etc/exports (where I assume you would assign fsid=0 to
    it) then you are exporting it. How do you include a directory in /etc/exports
    and then not export it, but still assign an fsid to it?

    Sigh...

    Just imagine you wrote it. But don't write it.

    Ah, just as I did when it stopped working, and created a mess.
    OK, I have always been doing what you suggest, if I now understand it.
    Although maybe you mean that it is my imagining I did it that makes it
    work :-)





    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)
  • From Carlos E.R.@2:250/0 to All on Mon Feb 24 02:50:43 2020
    On 23/02/2020 23.11, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 21.56, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 17.08, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 04.32, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)
    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning? >>>>>>>>
    Then edit the file and write the option now. Different number for each
    exported directory.

    I have. But I would still like to know what was happening before. It
    wasted hours of my time to no purpose.

    Well, I told you what was happening :-)
    Because I had the same problem once.

    Simply give the parent of all your exports number zero, and different
    numbers to each one of the rest. No need to really export the zero. I don't.

    I have no idea what "the parent" is or means. My exports are scattered
    amongst the subdirectories on the server. So the only "parent" is /
    which I sure do not want to export or even have in /etc/exports.

    Then don't export it, but it figuratively is number zero.

    But if / is in /etc/exports (where I assume you would assign fsid=0 to
    it) then you are exporting it. How do you include a directory in /etc/exports
    and then not export it, but still assign an fsid to it?

    Sigh...

    Just imagine you wrote it. But don't write it.

    Ah, just as I did when it stopped working, and created a mess.
    OK, I have always been doing what you suggest, if I now understand it.
    Although maybe you mean that it is my imagining I did it that makes it
    work :-)

    But you do have to write proper entries for nfs version 4, remember. And
    you need to write an fsid for every entry in there. Like 1, 2, 3, 4. Just
    number them. And reserve 0 for the root, even if you don't export it.

    Just stop complaining and write them numbers. You do not need to wonder
    about what would be the automatic number. Just write them.
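
    As a concrete illustration (the directories and the network below are
    placeholders, not anyone's real setup), that advice amounts to an
    /etc/exports along these lines:

      # every exported directory gets its own small, unique fsid;
      # no line is written for / itself: fsid=0 is simply kept in reserve
      /local    192.168.1.0/24(fsid=1,rw,no_root_squash,no_subtree_check)
      /home     192.168.1.0/24(fsid=2,rw,no_root_squash,no_subtree_check)
      /archive  192.168.1.0/24(fsid=3,rw,no_root_squash,no_subtree_check)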

    --
    Cheers, Carlos.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From Carlos E.R.@2:250/0 to All on Mon Feb 24 02:53:05 2020
    On 23/02/2020 22.12, William Unruh wrote:
    On 2020-02-23, J.O. Aho <user@example.net> wrote:
    On 23/02/2020 17.37, William Unruh wrote:
    On 2020-02-23, J.O. Aho <user@example.net> wrote:
    On 23/02/2020 04.47, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:
    On 22/02/2020 22.11, William Unruh wrote:
    On 2020-02-22, J.O. Aho <user@example.net> wrote:


    ??? What "use of blkid"?

    It's one of the commands that come with util-linux; it will tell you a
    block device's UUID, and the man page will tell you more.

    Yes, I do and did know "blkid". I am wondering what your phrase "use of
    blkid for the partitions you export" meant. And as I have stated, I do
    NOT just export partitions. I export subdirectories of those
    partitions. What is the fsid of one of those subdirectories?

    Yes, and as they are on the same file system, they will have the same
    UUID. I can say nfs4 handles the duplicates as long as you have a common
    root for your exports (fsid=0); then just bind mount all the directories
    you want to export there. Sure, this differs from nfs3; they are similar
    but not the same, and thus the recommended config is also a bit
    different.
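
    A sketch of the bind-mount arrangement being described, with an invented
    /srv/nfs root and invented source paths:

      # /etc/fstab: bind the real directories under one common export root
      /local        /srv/nfs/local   none  bind  0 0
      /data/stuff   /srv/nfs/stuff   none  bind  0 0

    The export root (/srv/nfs here) then gets fsid=0 in /etc/exports and each
    bind-mounted directory under it gets its own entry, as in the earlier
    examples.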

    Sorry, nowhere in the docs does it say that you MUST have a root
    filesystem. Nowhere does it say that it will assign the same fsid to two
    different exports. Nowhere does it say what you say it does
    (and it is stupid to assign the same fsid to two different exports). It
    is a bug, which you seem intent on making into a feature (features usually

    It is a bug simply because you do not understand it. Stop bitching and
    write them numbers.

    And no, don't ask me to explain. I don't know and I don't care.

    --
    Cheers, Carlos.

    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: Air Applewood, The Linux Gateway to the UK & Eire (2:250/0@fidonet)
  • From William Unruh@2:250/0 to All on Mon Feb 24 05:45:02 2020
    On 2020-02-24, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 23.11, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 21.56, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 17.08, William Unruh wrote:
    On 2020-02-23, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 23/02/2020 04.32, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 19.45, William Unruh wrote:
    On 2020-02-22, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 22/02/2020 07.03, William Unruh wrote:
    On 2020-02-21, Carlos E.R. <robin_listas@es.invalid> wrote:
    On 20/02/2020 21.20, William Unruh wrote:
    OK, It is getting weirder. On server, if I do
    mount server:/local /media
    I get the same weird behaviour, /media now has unruh as its first
    subdirectory, instead of about 20 files or directories as /local has.

    So it is definitely nfs that has gotten completely and utterly confused
    on server.

    Verify the fsid field on the server.

    How do I do that?

    It is an option on the exports file.

    /data/
    192.168.1.0/24(fsid=1235,rw,no_root_squash,nohide,no_subtree_check,insecure)
    Is there some way of checking what numbers are assigned if I do not have
    the fsid= as an option. Ie, finding out what it is assigning? >>>>>>>>>
    Then edit the file and write the option now. Different number for each
    exported directory.

    I have. But I would still like to know what was happening before. It
    wasted hours of my time to no purpose.

    Well, I told you what was happening :-)
    Because I had the same problem once.

    Simply give the parent of all your exports number zero, and different
    numbers to each one of the rest. No need to really export the zero. I don't.

    I have no idea what "the parent" is or means. My exports are scattered
    amongst the subdirectories on the server. So the only "parent" is /
    which I sure do not want to export or even have in /etc/exports.

    Then don't export it, but it figuratively is number zero.

    But if / is in /etc/exports (where I assume you would assign fsid=0 to
    it) then you are exporting it. How do you include a directory in /etc/exports
    and then not export it, but still assign an fsid to it?

    Sigh...

    Just imagine you wrote it. But don't write it.

    Ah, just as I did when it stopped working, and created a mess.
    OK, I have always been doing what you suggest, if I now understand it.
    Although maybe you mean that it is my imagining I did it that makes it
    work :-)

    But you do have to write proper entries for nfs version 4, remember. And
    you need to write an fsid for every entry in there. Like 1, 2, 3, 4. Just
    number them. And reserve 0 for the root, even if you don't export it.


    As I said, I have already done that. I am now trying to figure out why
    it behaved the way it did, and worrying that fsid was not the reason.
    After all, things worked for many years without my ever using fsid. And
    suddenly they did not, after all the machines had to be shut down
    because it was decided that all the power to the building had to be shut
    off for 12 hours.

    Just stop complaining and write them numbers. You do not need to wonder
    about what would be the automatic number. Just write them.

    I believe it is a bug. Bugs should be reported. I really would like to
    know if the problems I had were due to bad fsids, and am astonished that
    the default behaviour of nfs would produce such a mess, astonished that
    there is nowhere which says you should provide fsids, and astonished
    that, if that was the cause of the mess, there exists no way of
    finding out if the fsids were duplicated by default, since there is no
    way of discovering what fsids were assigned by exportfs.
    I am a scientist by training, and mysteries bring out the desire to
    understand them. And I suspect others have wasted days of their time
    trying to figure out why nfs did not work for them, and would like to
    save them the time.



    --- MBSE BBS v1.0.7.13 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/0@fidonet)