• Re: nfs has wrong mode

    From Dennis Lee Bieber@3:770/3 to All on Tue Oct 6 10:12:51 2020
    On Tue, 6 Oct 2020 15:38:14 +0200, Hans-Werner Kneitinger <hans-werner.kneitinger@gmx.de> declaimed the following:


If I mount the NFS manually and check the dirs/files with MC, user and group are right, but the mode is always 0777. It should be 0640 or 0644. Has somebody an idea what's going wrong?

    Is this applicable? https://linux.die.net/man/1/rsync
    """
    -p, --perms preserve permissions
    """


    --
    Wulfraed Dennis Lee Bieber AF6VN
    wlfraed@ix.netcom.com http://wlfraed.microdiversity.freeddns.org/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Hans-Werner Kneitinger@3:770/3 to All on Tue Oct 6 15:38:14 2020
Hello,
I am running an RPi4 with the latest Raspbian OS. I want to back up some dirs/files to a Synology NAS via NFS and rsync. It is working, but the mode attributes are not correct. User/group are right.

    This is in my fstab:
    192.168.2.5:/volume1/hoshi_ext/opt/fhem /mnt/ds9_fhem nfs nfsvers=3,sync,noauto 0 0

    A cron job is running this script:
    ---
#!/bin/bash
# Mount the backup drive
mount /mnt/ds9_fhem && sleep 2

# Back up the FHEM log files
rsync --delete -avzu /opt/fhem/log/ /mnt/ds9_fhem/

# Unmount the backup drive
sleep 2 && umount /mnt/ds9_fhem
    ---
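A sketch of the same three steps with each one guarded, so a failed mount can never lead to rsync writing into an unmounted /mnt directory. The function name and the generic command parameters are illustrative, not from the thread:

```shell
# Sketch only: run mount, sync, umount in sequence, but abort before the
# sync step if the mount fails, and always attempt the unmount afterwards.
backup_with_mount() {
    local mount_cmd=$1 sync_cmd=$2 umount_cmd=$3
    $mount_cmd || { echo "mount failed, skipping backup" >&2; return 1; }
    $sync_cmd                 # e.g. the rsync call from the script above
    local rc=$?               # remember rsync's exit status
    $umount_cmd               # always try to unmount
    return $rc                # report the sync result, not the umount's
}
```

Called as, for example, `backup_with_mount "mount /mnt/ds9_fhem" "rsync --delete -avzu /opt/fhem/log/ /mnt/ds9_fhem/" "umount /mnt/ds9_fhem"`.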

If I mount the NFS manually and check the dirs/files with MC, user and
group are right, but the mode is always 0777. It should be 0640 or 0644. Has somebody an idea what's going wrong?
    --
    cu
    hawe

  • From Anssi Saari@3:770/3 to Hans-Werner Kneitinger on Tue Oct 6 17:20:02 2020
    Hans-Werner Kneitinger <hans-werner.kneitinger@gmx.de> writes:

If I mount the NFS manually and check the dirs/files with MC, user and group are right, but the mode is always 0777. It should be 0640 or 0644. Has somebody an idea what's going wrong?

    Could it be the file system on the NAS which you don't mention?
    Something that doesn't have the concept of permissions at all or they
    aren't enabled? Or permissions that don't map to Unix that well
    especially when shared over NFS? Just guessing here. Maybe provide more
    info?

  • From Andy Burns@3:770/3 to Anssi Saari on Tue Oct 6 15:30:29 2020
    Anssi Saari wrote:

    Could it be the file system on the NAS which you don't mention?
    Something that doesn't have the concept of permissions at all or they
    aren't enabled?

    I believe a Synology supports either ext4 or btrfs ...

  • From Jim Jackson@3:770/3 to Dennis Lee Bieber on Tue Oct 6 16:54:29 2020
    On 2020-10-06, Dennis Lee Bieber <wlfraed@ix.netcom.com> wrote:
    On Tue, 6 Oct 2020 15:38:14 +0200, Hans-Werner Kneitinger
    <hans-werner.kneitinger@gmx.de> declaimed the following:


If I mount the NFS manually and check the dirs/files with MC, user and group are right, but the mode is always 0777. It should be 0640 or 0644. Has somebody an idea what's going wrong?

    Is this applicable? https://linux.die.net/man/1/rsync
    """
    -p, --perms preserve permissions
    """

He has that set - it's included in '-a'.

    from man entry...

    -a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)

It's a bit baffling, as I do almost exactly the same over NFS version 3.
But to another Linux machine that I set up - not some NAS box.

One thing I have added recently is the --numeric-ids option, as I found
that while I keep all user uids equal across my systems, some of the
system uids - those less than 1000, e.g. for various daemon services -
change depending on the distro and its version. So some of my backups
had their uids changed to match the same username on the NFS server. When
I copied back after a disk failure I got a few niggles. I don't know why I
thought it should have mapped back - but who knows. --numeric-ids
means I don't have to care how I put the files back.
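Concretely, the option just slots into the rsync call from the original script (hedged sketch, not tested against the poster's NAS), and a quick look at how names pair with numbers on any one host shows why name-based mapping is fragile:

```shell
# The poster's rsync call with --numeric-ids added: transfer the raw
# uid/gid numbers instead of matching them up by username on the receiver.
#   rsync --delete -avzu --numeric-ids /opt/fhem/log/ /mnt/ds9_fhem/

# Name<->number pairs are per-host; only uid 0 (root) is fixed everywhere:
id -u root                                    # always 0
getent passwd 1000 | cut -d: -f1,3 || true    # first regular user, varies by distro
```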

  • From druck@3:770/3 to Jim Jackson on Tue Oct 6 19:47:01 2020
    On 06/10/2020 17:54, Jim Jackson wrote:
One thing I have added recently is the --numeric-ids option, as I found
that while I keep all user uids equal across my systems, some of the
system uids - those less than 1000, e.g. for various daemon services -
change depending on the distro and its version. So some of my backups
had their uids changed to match the same username on the NFS server. When
I copied back after a disk failure I got a few niggles. I don't know why I
thought it should have mapped back - but who knows. --numeric-ids means
I don't have to care how I put the files back.

    I had exactly the same issue when I tried to recover a Pi from a backup
    for the first time, and it took a bit of sorting out.

    Although most of my Pi's have been cloned from a previous one, the ids
    vary depending on the order certain services are installed, so they can
    quickly get out of step.

I did think of harmonising them across all dozen Pis, but it would be a
lot of work to change all the files on each to match the new values. Using --numeric-ids is a much quicker solution.

    ---druck

  • From Martin Gregorie@3:770/3 to Hans-Werner Kneitinger on Tue Oct 6 20:29:15 2020
    On Tue, 06 Oct 2020 15:38:14 +0200, Hans-Werner Kneitinger wrote:

Hello,
I am running an RPi4 with the latest Raspbian OS. I want to back up some dirs/files to a Synology NAS via NFS and rsync. It is working, but the mode attributes are not correct. User/group are right.


    Hmm.

    I do something superficially similar:

    1) attach a USB disk to my house server and mount it.
    There is one partition on this disk.

2) for each of the four machines being backed up (three over my LAN, the
4th is the house server):

    2a) launch rsync, run as root, on the house server to use an sshd
    connection to read changed files from each machine to update
    files on the USB disk. The files from each machine are backed up
    to a separate root directory, named to match each host being backed
    up.

3) When done, unmount the USB disk and store it offline.

    When I've needed to recover files from the backup, I've mounted the USB
    disk on the host containing the master files and copied the files and directories back off the backup disk. I've never had to correct user or
    group IDs unless I'm restoring to a freshly formatted disk which hasn't
    yet had user names restored, and even then, putting users and groups back
    with adduser etc after restoring the files needs no tweaking of file permissions provided I have a record of the corresponding user and group
    ids and use them when recreating /etc/passwd and /etc/group.

    I also do an overnight backup from the house server to another USB disk,
    but this uses rsnapshot and takes about 8 minutes a night plus the same
    time again for the weekly snapshot, which just shuffles file images on
    the backup disk.

    It looks like the main difference is that I don't mount the backup disk
    on the host being backed up, which AFAIK has the side effect of
    completely decoupling the user and group IDs in the two filing systems.

    Maybe you could try that?

The rsync SSHD transfer may be a bit slower - it can take between 10 and
40 minutes per host, depending on the size of the weekly distro update,
because I keep two generations of backup disk, which means that each
weekly backup sees two weeks' worth of data that needs to be copied
across - but at least I have no user/group id issues.


    --
    Martin | martin at
    Gregorie | gregorie dot org

  • From Hans-Werner Kneitinger@3:770/3 to All on Wed Oct 7 07:13:09 2020
On 06.10.20 at 18:54, Jim Jackson wrote:

    from man entry...

    -a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)

    I thought so.

It's a bit baffling, as I do almost exactly the same over NFS version 3.
But to another Linux machine that I set up - not some NAS box.

One thing I have added recently is the --numeric-ids option, as I found
that while I keep all user uids equal across my systems, some of the system
...
thought it should have mapped back - but who knows. --numeric-ids means I don't have to care how I put the files back.

Can you explain a little more how to do this, please?

    --
    cu
    hawe

  • From Hans-Werner Kneitinger@3:770/3 to All on Wed Oct 7 07:23:15 2020
On 06.10.20 at 16:20, Anssi Saari wrote:

    Could it be the file system on the NAS which you don't mention?
    Something that doesn't have the concept of permissions at all or they
    aren't enabled? Or permissions that don't map to Unix that well
    especially when shared over NFS? Just guessing here. Maybe provide more
    info?

The NAS HDDs are ext4. The NFS export settings are set via the GUI:
limited to specific hosts, rw privileges, no squash, sync, privileged ports only.
Users have access to the dirs.

    --
    cu
    hawe

  • From Hans-Werner Kneitinger@3:770/3 to All on Wed Oct 7 07:33:19 2020
On 06.10.20 at 22:29, Martin Gregorie wrote:
    On Tue, 06 Oct 2020 15:38:14 +0200, Hans-Werner Kneitinger wrote:

Hello,
I am running an RPi4 with the latest Raspbian OS. I want to back up some
dirs/files to a Synology NAS via NFS and rsync. It is working, but the mode
attributes are not correct. User/group are right.


    Hmm.

    I do something superficially similar:
    ...
3) When done, unmount the USB disk and store it offline.

My home server is the NAS. I want to store some files there periodically.
The NAS is backed up to USB.



    The rsync SSHD transfer may be a bit slower

That I have never got working. I tried it for some weeks, but no success.

    --
    cu
    hawe

  • From Hans-Werner Kneitinger@3:770/3 to All on Wed Oct 7 10:05:00 2020
On 07.10.20 at 09:43, Anssi Saari wrote:

    So I guess the next question is, does this happen only with rsync or
    other programs too if you copy files over?

I am logging in as the admin user on the client. I mounted the NFS manually
and started Midnight Commander (MC) on the client. I could not change
the mode from 777 to anything else. The same when I am connected
to the client via Webmin.

mkdir -p /mnt/ds9_vmail/test -> same, the mode is 0777, user:group root:root,
and it can be changed via chown to anything else.


touch /mnt/ds9_vmail/test/test.txt -> same as above.
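For comparison, the same two commands on a plain local ext4 directory with the usual umask of 022 give 755/644, which points a constant 0777 on the mount at the server's export settings rather than the client tools. A sketch using a throwaway temp directory:

```shell
# Baseline check on a local filesystem: with umask 022, mkdir creates
# directories with mode 755 and touch creates files with mode 644. If the
# NFS mount shows 0777 for the same commands, the server side is
# overriding the modes.
tmp=$(mktemp -d)
( umask 022
  mkdir -p "$tmp/test"
  touch "$tmp/test/test.txt"
  stat -c '%a %U:%G' "$tmp/test" "$tmp/test/test.txt" )
rm -rf "$tmp"
```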

    --
    cu
    hawe

  • From Anssi Saari@3:770/3 to Hans-Werner Kneitinger on Wed Oct 7 10:43:06 2020
    Hans-Werner Kneitinger <hans-werner.kneitinger@gmx.de> writes:

The NAS HDDs are ext4. The NFS export settings are set via the GUI:
limited to specific hosts, rw privileges, no squash, sync, privileged ports only.
Users have access to the dirs.

    So I guess the next question is, does this happen only with rsync or
    other programs too if you copy files over?

  • From Hans-Werner Kneitinger@3:770/3 to All on Wed Oct 7 10:47:24 2020
On 07.10.20 at 09:43, Anssi Saari wrote:

    So I guess the next question is, does this happen only with rsync or
    other programs too if you copy files over?

Thank you for the hint; that solved the problem.

    --
    cu
    hawe

  • From Hans-Werner Kneitinger@3:770/3 to All on Wed Oct 7 10:58:11 2020
On 07.10.20 at 10:46, Chris Green wrote:

    I always try and maintain the same UID and GID across systems on my
    LAN for this reason, it makes things work more smoothly.

It is not so easy.

I have no access to the UIDs on the NAS.
Not every user on the client has an account on the NAS.

    --
    cu
    hawe

  • From Chris Green@3:770/3 to Hans-Werner Kneitinger on Wed Oct 7 09:46:11 2020
    Hans-Werner Kneitinger <hans-werner.kneitinger@gmx.de> wrote:
On 07.10.20 at 09:43, Anssi Saari wrote:

    So I guess the next question is, does this happen only with rsync or
    other programs too if you copy files over?

I am logging in as the admin user on the client. I mounted the NFS manually
and started Midnight Commander (MC) on the client. I could not change
the mode from 777 to anything else. The same when I am connected
to the client via Webmin.

mkdir -p /mnt/ds9_vmail/test -> same, the mode is 0777, user:group root:root,
and it can be changed via chown to anything else.


touch /mnt/ds9_vmail/test/test.txt -> same as above.

    Is it simply down to different UIDs and GIDs on the two systems?

    I always try and maintain the same UID and GID across systems on my
    LAN for this reason, it makes things work more smoothly.

    --
    Chris Green
    ·

  • From Chris Green@3:770/3 to Hans-Werner Kneitinger on Wed Oct 7 11:43:45 2020
    Hans-Werner Kneitinger <hans-werner.kneitinger@gmx.de> wrote:
On 07.10.20 at 10:46, Chris Green wrote:

    I always try and maintain the same UID and GID across systems on my
    LAN for this reason, it makes things work more smoothly.

    It is not so easy.

    No access on UIDs on the NAS.
    Not every user on the Client has an account on the NAS.

But you're using NFS, which means the NFS drive is mounted on the
system you're connecting from as if it's a local drive, so there has to be
some sort of strategy for handling UIDs and GIDs. What do you
expect/hope it will do?

    --
    Chris Green
    ·

  • From Hans-Werner Kneitinger@3:770/3 to All on Wed Oct 7 19:43:54 2020
On 07.10.20 at 12:43, Chris Green wrote:

But you're using NFS, which means the NFS drive is mounted on the
system you're connecting from as if it's a local drive, so there has to be
some sort of strategy for handling UIDs and GIDs. What do you
expect/hope it will do?

The RPis are data collectors or control systems. Every application has
its own user:group settings. The system user:group ids differ a lot between
the NAS and different distributions. The application itself has no need to
access the NFS.

user:group were always OK, but not the MODE permissions. The mode was 0777.
After setting the correct NAS NFS switch for Linux rather than Windows ACL
mode, the permissions are correct too.

There is no need to restore the backup to another distribution. Only
when an RPi (or its SD card) dies do I want to set up a fresh system and
restore the collected data to the fresh installation. For that I have a
spare RPi and an image of every fresh installation.

It works like this:

The script is executed by the admin user and does an rsync -a* over NFS. It's
all local, no internet, protected by firewalls, never accessed by the public.

user:group are shown correctly by name if I look from the client.
user:group are shown correctly by number if I look directly on the NAS.
That's OK for me; it is a backup. This backup, and all the others too,
including the NAS itself, are backed up via the NAS backup tool to a USB
drive connected to the NAS.

I don't know better.
    --
    cu
    hawe

  • From druck@3:770/3 to Hans-Werner Kneitinger on Wed Oct 7 21:22:34 2020
    On 07/10/2020 06:13, Hans-Werner Kneitinger wrote:
On 06.10.20 at 18:54, Jim Jackson wrote:
    One thing I have added recently is --numeric-ids option as I found that
    while I keep all user uids equal across my system, some of the system
    ...
thought it should have mapped back - but who knows. --numeric-ids
means I don't have to care how I put the files back.

Can you explain a little more how to do this, please?

For me, things went wrong when backing up a Raspberry Pi with rsync to an
SD card image file on another machine, where the ids for users and
system processes were different.

For example, on the Raspberry Pi you have user A with id 1001 and user B
with id 1002, but on the other machine those users were created with ids 1004
and 1005. When rsync backs up, it sees files with ids 1001 and 1002 owned
by A and B, and 'helpfully' writes them out with the 'correct' ids for users A
and B on that machine, which are 1004 and 1005. The problem comes when
you take that backup image and put it back on the original Pi, as it
knows nothing about ids 1004 and 1005.

The solution is to use the --numeric-ids argument to rsync; then when
backing up it will write the same ids as it reads, in the case above 1001
and 1002, even though those ids correspond to completely different users
on that machine. But now, when you copy the backup image back to the Pi,
it has retained the correct ids.
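The remapping can be sketched as two tiny lookup functions (the ids and usernames are the invented ones from the example above): plain rsync maps by the shared username, while --numeric-ids is simply the identity mapping.

```shell
# Toy model of the example: Pi has A=1001, B=1002; the other machine
# created A=1004, B=1005.
map_by_name() {          # what plain rsync does: match the username,
    case $1 in           # then emit that host's number for it
        1001) echo 1004 ;;   # A on the Pi -> A on the backup machine
        1002) echo 1005 ;;   # B on the Pi -> B on the backup machine
    esac
}
map_by_number() {        # what --numeric-ids does: keep the number as-is
    echo "$1"
}
map_by_name 1001         # prints 1004: the image no longer matches the Pi
map_by_number 1001       # prints 1001: restoring puts the exact ids back
```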

    ---druck

  • From Hans-Werner Kneitinger@3:770/3 to All on Thu Oct 8 08:23:34 2020
On 07.10.20 at 22:22, druck wrote:

For me, things went wrong when backing up a Raspberry Pi with rsync to an
SD card image file on another machine, where the ids for users and
system processes were different.

Thank you for the explanation, but my use case seems to be a little different.
It's for backup/recovery only; no need for file sharing between clients,
all local only. The NAS is the central backup storage. No RPi user has
an account on the NAS. I think there is no need for that, and it's better to
have as few accounts as possible.

The RPis are data collectors or controllers. I have a backup image from
every fresh setup and a spare RPi.

1. If an RPi or its SD card dies, I restore from the backup image and then
restore the latest data and settings from the NAS.

2. If a distro and/or hardware update is required, I do a fresh
installation and a data restore from the NAS.

3. No RPi user has access to the NAS except the special backup user.

4. My backup strategy is:
Backup of the fresh SD card as an image -> NAS -> external USB drive.
Latest data -> NAS -> external USB drive.

    --
    cu
    hawe

  • From Hans-Werner Kneitinger@3:770/3 to All on Thu Oct 8 12:19:11 2020
On 08.10.20 at 11:45, druck wrote:

    But the NAS doesn't know about those users and groups, and their
    relationship to the backup user. To allow the backup user to access
    those files, they have to be set to world readable and writeable
    i.e. 777

The problem is solved. It was a NAS NFS setting that had to be corrected.
The NAS was set to Windows ACL permissions for the shares, but no Windows
or Mac shares are used. After switching to NFS permissions, it works as wanted.

    --
    cu
    hawe

  • From druck@3:770/3 to Hans-Werner Kneitinger on Thu Oct 8 10:45:30 2020
    On 08/10/2020 07:23, Hans-Werner Kneitinger wrote:
Thank you for the explanation, but my use case seems to be a little different.
It's for backup/recovery only; no need for file sharing between clients,
all local only. The NAS is the central backup storage. No RPi user has
an account on the NAS. I think there is no need for that, and it's better to
have as few accounts as possible.

The RPis are data collectors or controllers. I have a backup image from
every fresh setup and a spare RPi.

1. If an RPi or its SD card dies, I restore from the backup image and then
restore the latest data and settings from the NAS.

2. If a distro and/or hardware update is required, I do a fresh
installation and a data restore from the NAS.

3. No RPi user has access to the NAS except the special backup user.

That's probably your problem. Your backup user is writing files which
have various Pi user and group ids to the NAS, and the NAS stores those
ids, as otherwise every file would belong to the backup user, which
isn't what you want.

    But the NAS doesn't know about those users and groups, and their
    relationship to the backup user. To allow the backup user to access
    those files, they have to be set to world readable and writeable
    i.e. 777

    If you created matching users and groups on the NAS, and made sure the
    backup user was part of those groups, the NAS would then know who was
    allowed to access what, and the file permissions could then also be
    stored correctly.

    This is my interpretation of how NFS works, I may be wrong.

    ---druck

  • From The Natural Philosopher@3:770/3 to druck on Thu Oct 8 11:56:52 2020
    On 08/10/2020 10:45, druck wrote:
    On 08/10/2020 07:23, Hans-Werner Kneitinger wrote:
Thank you for the explanation, but my use case seems to be a little different.
It's for backup/recovery only; no need for file sharing between clients,
all local only. The NAS is the central backup storage. No RPi user has
an account on the NAS. I think there is no need for that, and it's better to
have as few accounts as possible.

The RPis are data collectors or controllers. I have a backup image from
every fresh setup and a spare RPi.

1. If an RPi or its SD card dies, I restore from the backup image and then
restore the latest data and settings from the NAS.

2. If a distro and/or hardware update is required, I do a fresh
installation and a data restore from the NAS.

3. No RPi user has access to the NAS except the special backup user.

That's probably your problem. Your backup user is writing files which
have various Pi user and group ids to the NAS, and the NAS stores those
ids, as otherwise every file would belong to the backup user, which
isn't what you want.

But the NAS doesn't know about those users and groups, and their
relationship to the backup user. To allow the backup user to access
those files, they have to be set to world readable and writeable,
i.e. 777.

    If you created matching users and groups on the NAS, and made sure the
    backup user was part of those groups, the NAS would then know who was
    allowed to access what, and the file permissions could then also be
    stored correctly.

    This is my interpretation of how NFS works, I may be wrong.

    ---druck

My NFS always spreads UIDs, GIDs and permissions *exactly*, using e.g.
this style of export line in /etc/exports:

/home/spare *(rw,sync,no_root_squash,no_subtree_check)

However, obviously, to propagate these to the NFS-mounted remote file
system, rsync *must* run as root.

    There can be no 'backup user'
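Sketched end to end (the host, network range and client paths are illustrative, not from the thread): the export line on the server, then a root-run rsync on the client so uids, gids and modes land verbatim.

```shell
# On the NFS server, /etc/exports (illustrative, per the line above):
#
#   /home/spare  *(rw,sync,no_root_squash,no_subtree_check)
#
# then, as root on the server:
#
#   exportfs -ra          # re-read /etc/exports
#   exportfs -v           # verify the active exports and their options
#
# On the client, also as root:
#
#   mount -t nfs server:/home/spare /mnt/spare
#   rsync -a --numeric-ids /home/ /mnt/spare/
```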

    --
    "Corbyn talks about equality, justice, opportunity, health care, peace, community, compassion, investment, security, housing...."
    "What kind of person is not interested in those things?"

    "Jeremy Corbyn?"

  • From Andy Burns@3:770/3 to The Natural Philosopher on Thu Oct 8 12:45:21 2020
    The Natural Philosopher wrote:

My NFS always spreads UIDs, GIDs and permissions *exactly*,

    But how well does it handle "so-called posix" ACLs?

  • From The Natural Philosopher@3:770/3 to Andy Burns on Thu Oct 8 15:46:36 2020
    On 08/10/2020 12:45, Andy Burns wrote:
    The Natural Philosopher wrote:

My NFS always spreads UIDs, GIDs and permissions *exactly*,

    But how well does it handle "so-called posix" ACLs?

Dunno. Never bothered with 'em.

I assume that with *nix on either side, NFS would propagate those. Remember, it
was developed so that Sun clusters could use central disks as if they
were locally connected.

Only rarely have I had it behave in a slightly weird fashion with respect
to permissions, and I never found out why.

    I tend not to use file permissions as a security measure, much, anyway.

    I am more concerned about wide area than local area security.


    --
    New Socialism consists essentially in being seen to have your heart in
    the right place whilst your head is in the clouds and your hand is in
    someone else's pocket.
