• My darn NAS...

    From paul lee@1:105/420 to All on Sun Jan 10 17:46:28 2021
    I had set up Open Media Vault on a Raspberry Pi 3, which I know doesn't have gigabit ethernet...

    I use cheap USB Hard Drives, 2 x 4TB Seagates...

    I run a PLEX server and, although I'm sure I don't have awesome 4k content [Probably not even all 1080p!], it seems to work just fine for streaming to ONE television at a time.

    However, both on my Samba Shares and NFS Shares, I'm getting around 10mb/Sec transfer rates. Sometimes they'll bump up to +/-18mb but not often; I'm sure this is just the particular instance reporting wrong.... I'm around 10-12mb constantly.

    So... I thought my bottle-neck was the Pi, and not having gigabit - I threw a Pi 4 8GB RAM model at it today... I reinstalled fully and set up from scratch. I've only pushed over one of my drives SO FAR because... wouldn't ya know it, the transfer rate is the EXACT same as on the Pi 3!!

    I did some dd and hdparm commands:

    'dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync'
    @ 184 MB/s

    'dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync'
    @ 27.5 MB/s

    (Which, I don't even get if I tested the right drive, because the HDD is at /dev/sda1 / /dev/sda.)

    'hdparm -t /dev/sda1'
    @178 MB/sec
    @180 MB/sec
    @177 MB/sec

    (hdparm gave an error of SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 00 24 00 00 00 00 00 00 00 00 00 00 00 00), which I didn't understand but...

    So I *think* my HD is reading at 180MB-ish and writing at 28MB???
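    (A side note for anyone repeating those tests: /tmp normally lives on the Pi's root filesystem - the SD card - or possibly a tmpfs in RAM, so the dd figures above are probably not measuring the Seagate at all. To test the USB drive itself, aim dd at its mount point. A minimal sketch, assuming the drive is mounted at a hypothetical /media/usb - check 'df -h' for the real path:

    dd if=/dev/zero of=/media/usb/test.img bs=1M count=1024 oflag=dsync
    rm /media/usb/test.img

    hdparm -t /dev/sda1, by contrast, does read the raw disk, so the ~180MB/sec read figure should be genuine.)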

    Where is my bottle-neck? I *thought* with a gigabit connection I could get over 100MB/s on a shared folder transfer over my network. I am doing this WiFi to my ThinkPad laptop - hmmmm... maybe this old ThinkPad has a sucky WiFi card? Maybe I should plug the laptop into ethernet and see what the transfer rates are then???

    'Lost in NAS-land, pretty decent for a newbie but... where's the beef?'



    |07p|15AULIE|1142|07o
    |08.........

    --- Mystic BBS v1.12 A47 2021/01/05 (Raspberry Pi/32)
    * Origin: 2o fOr beeRS bbS>>20ForBeers.com:1337 (1:105/420)
  • From The Natural Philosopher@3:770/3 to paul lee on Mon Jan 11 04:10:53 2021
    On 10/01/2021 04:46, paul lee wrote:
    Where is my bottle-neck? I *thought* with a gigabit connection I could get over 100MB/s on a shared folder transfer over my network. I am doing this WiFi to my ThinkPad laptop - hmmmm... maybe this old ThinkPad has a sucky WiFi card? Maybe I should plug the laptop into ethernet and see what the transfer rates are then???

    You said gigabit Ethernet and now you are saying Wifi?

    Wifi is, to put it bluntly, utter shit designed for morons. Especially on laptops with no proper antennae.

    I have NEVER gotten more than 10Mbps *actual transfer rate* out of a basic 2.4GHz wifi link, even feet away from the router. Even when it said it was connected at 65Mbps or 72Mbps.

    Remember wifi is half duplex. Every time you send an ack back, it stops the forward channel.

    And if any other device is on the wlan, you are sharing the link speed with that, too.

    It is worse than old coaxial Ethernet was at 10Mbps. It is, to put it bluntly, consumer crap for morons. Like StupidPhones™.


    Using iwconfig I have watched connection rates and attenuation vary by
    3:1 for no apparent reason whatsoever. Or simply stop working altogether
    until reconnected. Yes, I have foil in all my walls and that makes for a
    tricky wifi environment, but even so.




    --
    “The urge to save humanity is almost always only a false face for the
    urge to rule it.”
    – H. L. Mencken

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to All on Mon Jan 11 10:15:55 2021
    paul lee <nospam.paul.lee@f420.n105.z1.binkp.net> wrote:
    [snip speeds etc.]

    FWIW here are some figures I just got copying a large file across my
    network to my Pi 'NAS':-

    chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:/bak/esprimo/cur/home/chris
    2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38
    chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:
    2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38

    As you can see I was using scp; the first copy is to an external USB3 hard drive, the second is to the Pi's SD card. The speed is identical, which suggests to me that it's limited almost entirely by the network rather than the Pi's internals.

    It's all Gigabit (I checked), out of 'esprimo' which is a desktop machine, via a switch near my desktop, along buried UTP to another switch in the garage and thence to the Pi.

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From druck@3:770/3 to paul lee on Mon Jan 11 10:25:12 2021
    On 10/01/2021 04:46, paul lee wrote:
    I had set up Open Media Vault on a Raspberry Pi 3, which I know doesn't have gigabit ethernet...

    I use cheap USB Hard Drives, 2 x 4TB Seagates...

    I run a PLEX server and, although I'm sure I don't have awesome 4k content [Probably not even all 1080p!], it seems to work just fine for streaming to ONE television at a time.

    However, both on my Samba Shares and NFS Shares, I'm getting around 10mb/Sec transfer rates. Sometimes they'll bump up to +/-18mb but not often; I'm sure this is just the particular instance reporting wrong... I'm around 10-12mb constantly.

    Is that megabits or megaBytes per second?

    So... I thought my bottle-neck was the Pi, and not having gigabit - I threw a Pi 4 8GB RAM model at it today... I reinstalled fully and set up from scratch. I've only pushed over one of my drives SO FAR because... wouldn't ya know it, the transfer rate is the EXACT same as on the Pi 3!!

    I'm using a 4B with a USB 3.1 HD. The HD does about 100MB/s (megaBytes) read/write locally, and using CrystalDiskMark over Samba it's showing a maximum transfer rate of 72MB/s read and 58MB/s write. When backing up from my other Pis over NFS I'm seeing rates of 40-50MB/s.

    Check you are getting 1000Mb/s Full duplex using the command:-
    ethtool eth0
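    On a healthy gigabit link the output should include lines like these (abridged - exact fields vary by driver, and the command may need sudo):

    Speed: 1000Mb/s
    Duplex: Full
    Link detected: yes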

    Also try another Ethernet cable (at least Cat 5e); it wasn't until I started using that Pi as a NAS that I found its upload speed was very poor. Up to then it was only used for web browsing and its download speed was fine. I think the cable had been kinked; replacing it restored the upload speed.

    ---druck

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to Chris Green on Mon Jan 11 15:16:39 2021
    Chris Green <cl@isbd.net> wrote:
    paul lee <nospam.paul.lee@f420.n105.z1.binkp.net> wrote:
    [snip speeds etc.]

    FWIW here are some figures I just got copying a large file across my
    network to my Pi 'NAS':-

    chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:/bak/esprimo/cur/home/chris
    2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38
    chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:
    2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38

    The above speeds are wrong; I think one of my switches was playing up. Revised speeds as follows:-

    Desktop to backup SD card:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 20.6MB/s 00:41
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 21.3MB/s 00:39

    Desktop to backup external USB3 hard drive:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 47.3MB/s 00:17
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 45.3MB/s 00:18

    Backup to desktop, from SD card:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 36.9MB/s 00:23

    Backup to desktop, from external USB3 hard drive:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 39.2MB/s 00:21

    So, getting on for half the theoretical speed over a Gigabit network in the best case.

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Chris Green on Mon Jan 11 15:31:15 2021
    On 11/01/2021 15:16, Chris Green wrote:
    Chris Green <cl@isbd.net> wrote:
    paul lee <nospam.paul.lee@f420.n105.z1.binkp.net> wrote:
    [snip speeds etc.]

    FWIW here are some figures I just got copying a large file across my
    network to my Pi 'NAS':-

    chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:/bak/esprimo/cur/home/chris
    2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38
    chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:
    2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38

    The above speeds are wrong; I think one of my switches was playing up. Revised speeds as follows:-

    Desktop to backup SD card:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 20.6MB/s 00:41
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 21.3MB/s 00:39

    Desktop to backup external USB3 hard drive:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 47.3MB/s 00:17
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 45.3MB/s 00:18

    Backup to desktop, from SD card:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 36.9MB/s 00:23

    Backup to desktop, from external USB3 hard drive:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 39.2MB/s 00:21

    So, getting on for half the theoretical speed over a Gigabit network in the best case.

    Mmm. I get pretty close to my 100Mbps network against a linux server
    with NFS.


    --
    Canada is all right really, though not for the whole weekend.

    "Saki"

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From NY@3:770/3 to paul lee on Mon Jan 11 15:39:22 2021
    "paul lee" <nospam.paul.lee@f420.n105.z1.binkp.net> wrote in message news:497943163@f420.n105.z1.binkp.net...
    I had set up Open Media Vault on a Raspberry Pi 3, which I know doesn't have gigabit ethernet...

    I use cheap USB Hard Drives, 2 x 4TB Seagates...

    I run a PLEX server and, although I'm sure I don't have awesome 4k content [Probably not even all 1080p!], it seems to work just fine for streaming to ONE television at a time.

    Do you find that the RPi3 acting as a Plex server is powerful enough to transcode recordings from the format in which you recorded them (MPEG TS or H264 TS) into whatever esoteric format Plex clients require?

    I found that even an RPi4 gets very hot and runs at very high CPU % if it has to do any transcoding. Using the Plex client on a Roku box, it seems that it will play SD recordings (MPEG TS) natively but has to transcode HD recordings (H264 TS). I suppose this is an improvement over Windows, where it seems to transcode everything.

    Why did Plex devise a client-server architecture where the client cannot
    play the files in their native form, but must instead get the server to transcode them? Is there a format that recordings could be converted to (offline) which allows them to be played without transcoding? What a faff having to do that for every file of my several TB of recordings...
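    (One hedged possibility, if the video track is already H.264: most Plex clients can direct-play H.264 in an MP4 container, so remuxing the TS file offline - no video transcode, just a container change - may be enough. A sketch with a hypothetical filename:

    ffmpeg -i recording.ts -c:v copy -c:a aac recording.mp4

    -c:v copy keeps the video stream untouched; only the audio is converted to AAC.)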

    At least on RPi4, VLC will run fast enough to play either SD or HD TS files, either from a local disk or over Ethernet/SMB. All I need to do is to get the sound to work - either via the analogue output or via the HDMI lead to my TV. I may end up binning Plex server and playing to the TV over VLC on the Pi - at least VLC can play whatever file format is available.

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to The Natural Philosopher on Mon Jan 11 18:05:18 2021
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 11/01/2021 15:16, Chris Green wrote:
    Chris Green <cl@isbd.net> wrote:
    paul lee <nospam.paul.lee@f420.n105.z1.binkp.net> wrote:
    [snip speeds etc.]

    FWIW here are some figures I just got copying a large file across my
    network to my Pi 'NAS':-

    chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip
    backup:/bak/esprimo/cur/home/chris
    2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38
    chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:
    2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38

    The above speeds are wrong; I think one of my switches was playing up. Revised speeds as follows:-

    Desktop to backup SD card:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 20.6MB/s 00:41
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 21.3MB/s 00:39

    Desktop to backup external USB3 hard drive:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 47.3MB/s 00:17
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 45.3MB/s 00:18

    Backup to desktop, from SD card:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 36.9MB/s 00:23

    Backup to desktop, from external USB3 hard drive:-
    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 39.2MB/s 00:21

    So, getting on for half the theoretical speed over a Gigabit network in
    the best case.

    Mmm. I get pretty close to my 100Mbps network against a linux server
    with NFS.

    From desktop to/from laptop I too get something over 100MB/s on my Gigabit network. The Pi4 (as can be seen above) is somewhat slower, but not hugely... and the Pi doesn't have a particularly fast disk, unlike my desktop and laptop which both have NVMe SSDs.

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From paul lee@1:105/420 to The Natural Philosopher on Tue Jan 12 00:28:51 2021
    You said gigabit Ethernet and now you are saying Wifi?

    Wifi is, to put it bluntly, utter shit designed for morons. Especially on laptops with no proper antennae.

    I have NEVER gotten more than 10Mbps *actual transfer rate* out of a basic 2.4GHz wifi link, even feet away from the router. Even when it said it was connected at 65Mbps or 72Mbps.

    Remember wifi is half duplex. Every time you send an ack back, it stops the forward channel.

    And if any other device is on the wlan, you are sharing the link speed with that, too.

    It is worse than old coaxial Ethernet was at 10Mbps. It is, to put it bluntly, consumer crap for morons. Like StupidPhones™.

    Using iwconfig I have watched connection rates and attenuation vary by 3:1 for no apparent reason whatsoever. Or simply stop working altogether until reconnected. Yes, I have foil in all my walls and that makes for a tricky wifi environment, but even so.


    Ok... I do appreciate this reply. :P Thanks.

    So, yes... my daily driver machine is an older T430s ThinkPad laptop, probably with a less than current WiFi chip/card... however, I was running my NAS on a Pi 3, and just upgraded (actually still running both) to a Pi 4. My LAN/ethernet network is all gigabit+ hardware.

    So, what I think I should do is simply plug my ThinkPad into the ethernet port and retest both the Pi 3 and the Pi 4 NAS systems and see what I get then. I, being a fairly versed and knowledgeable NEWBIE, didn't realize I should be 'happy' with 12-14mb over my laptop's WiFi (which again, is probably less than current since the laptops I run are from 2012).

    I'll connect over ethernet to both the Pi NAS systems and post again with the results, but... is this a fair and valid test that I should pursue?

    I thought I would get better speeds OVER that WiFi connection I'm speaking of, but... I understand what you've stated here. :P I am, as you can tell, still learning...


    THANK YOU.



    |07p|15AULIE|1142|07o
    |08.........

    --- Mystic BBS v1.12 A47 2021/01/05 (Raspberry Pi/32)
    * Origin: 2o fOr beeRS bbS>>20ForBeers.com:1337 (1:105/420)
  • From paul lee@1:105/420 to Chris Green on Tue Jan 12 00:31:38 2021
    FWIW here are some figures I just got copy a large file across my
    network to my Pi 'NAS':-

    chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:/bak/esprimo/cur/home/chris
    2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38
    chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:
    2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38

    As you can see I was using scp; the first copy is to an external USB3 hard drive, the second is to the Pi's SD card. The speed is identical, which suggests to me that it's limited almost entirely by the network rather than the Pi's internals.

    It's all Gigabit (I checked), out of 'esprimo' which is a desktop machine, via a switch near my desktop, along buried UTP to another switch in the garage and thence to the Pi.

    --
    Chris Green

    Thanks for your reply... I was told by another poster that, since the laptop I use connects via WiFi, I should consider my 12mb speeds normal -

    So I am going to connect said laptop to the ethernet connection and test both Pi's again. (pi 3 and pi 4...)

    Thanks for your info; that is about what I'm getting from BOTH the Pi 3 and the Pi 4, which are both connected via ethernet, but pulling TO my ThinkPad T430s laptop which is on its older WiFi chip/card.

    Thanks again...



    |07p|15AULIE|1142|07o
    |08.........

    --- Mystic BBS v1.12 A47 2021/01/05 (Raspberry Pi/32)
    * Origin: 2o fOr beeRS bbS>>20ForBeers.com:1337 (1:105/420)
  • From paul lee@1:105/420 to druck on Tue Jan 12 00:35:31 2021
    I'm using a 4B with a USB 3.1 HD. The HD does about 100MB/s (megaBytes) read/write locally, and using CrystalDiskMark over Samba it's showing a maximum transfer rate of 72MB/s read and 58MB/s write. When backing up from my other Pis over NFS I'm seeing rates of 40-50MB/s.

    Are *all* of your computers connected via ethernet? I am connecting FROM a laptop on WiFi, and was told that it's not as fast/reliable... so THAT'S probably my bottleneck. :P

    Check you are getting 1000Mb/s Full duplex using the command:-
    ethtool eth0

    On the Pi 3, which is connected via ethernet - I'm NOT.
    On the Pi 4, which is connected via ethernet - I AM.

    Also try another Ethernet cable (at least Cat 5e); it wasn't until I started using that Pi as a NAS that I found its upload speed was very poor. Up to then it was only used for web browsing and its download speed was fine. I think the cable had been kinked; replacing it restored the upload speed.

    ---druck

    I think I was just misinformed, since I was trying to get the higher speeds TO a Thinkpad T430s laptop connected via WiFi. I am going to connect said laptop TO the ethernet and then retest on both the Pi 3 and Pi 4 NAS systems.... I think this just might be me... user error.

    :P

    But, I wouldn't have known and would have been pulling out my hair had I not asked so... Hope this all makes sense to you, too; and I'm learning. I will post the results with the laptop CONNECTED TO ETHERNET shortly.



    |07p|15AULIE|1142|07o
    |08.........

    --- Mystic BBS v1.12 A47 2021/01/05 (Raspberry Pi/32)
    * Origin: 2o fOr beeRS bbS>>20ForBeers.com:1337 (1:105/420)
  • From The Natural Philosopher@3:770/3 to paul lee on Tue Jan 12 10:26:21 2021
    On 11/01/2021 11:28, paul lee wrote:
    I'll connect over ethernet to both the Pi NAS systems and post again with the results, but... is this a fair and valid test that I should pursue?

    Absolutely.


    --
    Socialism is the philosophy of failure, the creed of ignorance and the
    gospel of envy.

    Its inherent virtue is the equal sharing of misery.

    Winston Churchill

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to paul lee on Tue Jan 12 10:57:04 2021
    paul lee <nospam.paul.lee@f420.n105.z1.binkp.net> wrote:

    So, yes... my daily driver machine is an older T430s ThinkPad laptop, probably with a less than current WiFi chip/card... however, I was running my NAS on a Pi 3, and just upgraded (actually still running both) to a Pi 4. My LAN/ethernet network is all gigabit+ hardware.

    I had a T430 that died (very unusual for Lenovo T series), I now have
    a T470 I bought used off eBay for rather less than I expected, lovely!


    So, what I think I should do is simply plug my ThinkPad into the ethernet port and retest both the Pi 3 and the Pi 4 NAS systems and see what I get then. I, being a fairly versed and knowledgeable NEWBIE, didn't realize I should be 'happy' with 12-14mb over my laptop's WiFi (which again, is probably less than current since the laptops I run are from 2012).

    I'll connect over ethernet to both the Pi NAS systems and post again with the results, but... is this a fair and valid test that I should pursue?

    I thought I would get better speeds OVER that WiFi connection I'm speaking of, but... I understand what you've stated here. :P I am, as you can tell, still learning...

    My WiFi connection reports that it is 300Mb/s but the real speed is
    never anything like that. Here are my results sending from T470
    laptop (WiFi connection, reports as 300Mb/s) to desktop:-

    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 16.2MB/s 00:52

    So I get about half the 'expected' speed, also quite similar to your
    speeds I think.

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Dennis Lee Bieber@3:770/3 to All on Tue Jan 12 12:16:03 2021
    On Tue, 12 Jan 2021 10:57:04 +0000, Chris Green <cl@isbd.net> declaimed the following:



    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 16.2MB/s 00:52

    So I get about half the 'expected' speed, also quite similar to your
    speeds I think.

    Are you taking into account the IP header size, the TCP (or UDP) header size, and MTU size? The latter will tend to determine how many packets need
    to be sent (and for TCP, ACKed). Also, does your transfer method apply any
    sort of CRC or ECC logic, which will also consume some space in those
    packets?

    https://networkengineering.stackexchange.com/questions/19976/trying-to-find-out-exact-tcp-overhead-cost
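    (For a rough feel of the numbers: with the standard 1500-byte MTU, a full-sized TCP segment carries 1460 bytes of payload in roughly 1538 bytes on the wire, once the TCP/IP headers, Ethernet header and FCS, preamble and inter-frame gap are counted. So header overhead alone costs only about 5%: 1000Mb/s x 1460/1538 = ~949Mb/s = ~118MB/s best-case goodput on gigabit Ethernet. Anything much below that is coming from somewhere else - disk, CPU, or the link itself.)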


    --
    Wulfraed Dennis Lee Bieber AF6VN
    wlfraed@ix.netcom.com http://wlfraed.microdiversity.freeddns.org/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Dennis Lee Bieber@3:770/3 to All on Tue Jan 12 12:19:43 2021
    On Tue, 12 Jan 2021 00:31:38 +1300, nospam.paul.lee@f420.n105.z1.binkp.net (paul lee) declaimed the following:


    Thanks for your reply... I was told by another poster that I, since the laptop
    I use connects via wifi, should consider my 12mb speeds normal -

    https://www.speedguide.net/faq/what-is-the-actual-real-life-speed-of-wireless-374




    --
    Wulfraed Dennis Lee Bieber AF6VN
    wlfraed@ix.netcom.com http://wlfraed.microdiversity.freeddns.org/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Dennis Lee Bieber@3:770/3 to All on Tue Jan 12 12:25:27 2021
    On Tue, 12 Jan 2021 00:35:31 +1300, nospam.paul.lee@f420.n105.z1.binkp.net (paul lee) declaimed the following:


    On the Pi 3, which is connected via ethernet - I'm NOT.

    As I recall, the R-Pi 3 Ethernet is internally a USB dongle, so throughput will be comparable to USB-2... <30MB/s

    On the Pi 4, which is connected via ethernet - I AM.

    Real Ethernet on R-Pi 4


    --
    Wulfraed Dennis Lee Bieber AF6VN
    wlfraed@ix.netcom.com http://wlfraed.microdiversity.freeddns.org/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to Dennis Lee Bieber on Tue Jan 12 18:04:51 2021
    Dennis Lee Bieber <wlfraed@ix.netcom.com> wrote:
    On Tue, 12 Jan 2021 10:57:04 +0000, Chris Green <cl@isbd.net> declaimed the following:



    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 16.2MB/s 00:52

    So I get about half the 'expected' speed, also quite similar to your speeds I think.

    Are you taking into account the IP header size, the TCP (or UDP) header size, and MTU size? The latter will tend to determine how many packets need to be sent (and for TCP, ACKed). Also, does your transfer method apply any sort of CRC or ECC logic, which will also consume some space in those packets?


    https://networkengineering.stackexchange.com/questions/19976/trying-to-find-out-exact-tcp-overhead-cost

    When I copy from laptop to desktop (both quite fast machines with fast
    disks) I get something quite a bit over 100MB/s on wired Gigabit
    connections. So the overhead isn't that great given that the
    theoretical maximum would be 1000/8 which is 125MB/s. So on a 300Mb/s
    wireless link between the same two machines one would, sort of, expect something a bit more than 30MB/s whereas in reality one gets about
    half of that.


    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Ahem A Rivet's Shot@3:770/3 to Chris Green on Tue Jan 12 18:38:54 2021
    On Tue, 12 Jan 2021 18:04:51 +0000
    Chris Green <cl@isbd.net> wrote:

    So on a 300Mb/s wireless link between the same two machines one would, sort of, expect something a bit more than 30MB/s whereas in reality one gets about half of that.

    Which starts to make more sense when you realise the link is essentially half-duplex and subject to noise induced retries.

    --
    Steve O'Hara-Smith                   | Directable Mirror Arrays
    C:\>WIN                              | A better way to focus the sun
    The computer obeys and wins.         | licences available see
    You lose and Bill collects.          | http://www.sohara.org/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From druck@3:770/3 to Dennis Lee Bieber on Tue Jan 12 19:32:00 2021
    On 12/01/2021 17:25, Dennis Lee Bieber wrote:
    On Tue, 12 Jan 2021 00:35:31 +1300, nospam.paul.lee@f420.n105.z1.binkp.net (paul lee) declaimed the following:


    On the Pi 3, which is connected via ethernet - I'm NOT.

    As I recall, the R-Pi 3 Ethernet is internally a USB dongle, so throughput will be comparable to USB-2... <30MB/s

    The 3B is 100BaseT, but the 3B+ has gigabit Ethernet over USB2 which in practice gives around 330Mb/s, so 30MB/s is about right.

    On the Pi 4, which is connected via ethernet - I AM.

    Real Ethernet on R-Pi 4

    Testing with iperf, I get about 996Mb/s, which is close to the maximum.
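    (For anyone wanting to repeat that test - a minimal sketch, assuming iperf3 is installed on both ends and the Pi answers to the hypothetical hostname 'pi4': run 'iperf3 -s' on the Pi, then 'iperf3 -c pi4' from the desktop. It measures raw TCP throughput, so it takes disks and file-sharing protocols out of the picture.)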

    ---druck

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From paul lee@1:105/420 to Chris Green on Tue Jan 12 13:20:46 2021
    I had a T430 that died (very unusual for Lenovo T series), I now have
    a T470 I bought used off eBay for rather less than I expected, lovely!

    I love the ThinkPad hardware; and while the T430/T440 series have SOME of the cool stuff from the old days, they are just beginning to be a little long in the tooth for me. I'm not very hardware intensive, but... I will be looking at some other ThinkPad models in the future. I might just bite the bullet and go CURRENT T-series, but I haven't decided just yet.

    My WiFi connection reports that it is 300Mb/s but the real speed is
    never anything like that. Here are my results sending from T470
    laptop (WiFi connection, reports as 300Mb/s) to desktop:-

    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 16.2MB/s 00:52

    So I get about half the 'expected' speed, also quite similar to your speeds I think.

    Understood... however, you are getting a LITTLE better speeds than me; I wonder what type of WiFi chip/card is in the T470 vs what is CURRENT??? Maybe it would be worth it, for me, to upgrade the WiFi chip/card in my T430s to the best it will take, OR whatever is current in 2021...

    Even if I won't get the 100mb/s that I had THOUGHT, I'd probably see a better transfer rate anyway. Hmmmmm...


    I get around 80mb down on average, with my Thinkpad...



    |07p|15AULIE|1142|07o
    |08.........

    --- Mystic BBS v1.12 A47 2021/01/05 (Raspberry Pi/32)
    * Origin: 2o fOr beeRS bbS>>20ForBeers.com:1337 (1:105/420)
  • From Chris Green@3:770/3 to paul lee on Wed Jan 13 09:46:47 2021
    paul lee <nospam.paul.lee@f420.n105.z1.binkp.net> wrote:
    I had a T430 that died (very unusual for Lenovo T series), I now have
    a T470 I bought used off eBay for rather less than I expected, lovely!

    I love the ThinkPad hardware; and while the T430/T440 series have SOME of the cool stuff from the old days, they are just beginning to be a little long in the tooth for me. I'm not very hardware intensive, but... I will be looking at some other ThinkPad models in the future. I might just bite the bullet and go CURRENT T-series, but I haven't decided just yet.

    My WiFi connection reports that it is 300Mb/s but the real speed is never anything like that. Here are my results sending from T470
    laptop (WiFi connection, reports as 300Mb/s) to desktop:-

    bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 16.2MB/s 00:52

    So I get about half the 'expected' speed, also quite similar to your speeds I think.

    Understood... however, you are getting a LITTLE better speeds than me; I wonder what type of WiFi chip/card is in the T470 vs what is CURRENT??? Maybe it would be worth it, for me, to upgrade the WiFi chip/card in my T430s to the best it will take, OR whatever is current in 2021...

    Even if I won't get the 100mb/s that I had THOUGHT, I'd probably see a better transfer rate anyway. Hmmmmm...

    Is it really that important or significant? I.e. does it really
    matter if a transfer takes 15 seconds rather than 10 seconds?

    I run my (incremental, so rarely really huge) backups overnight via
    anacron so whether they take 10 minutes or 30 minutes doesn't matter
    at all. As long as they complete before I wake up in the morning it's
    fine.

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Dennis Lee Bieber on Wed Jan 13 12:09:49 2021
    On 12/01/2021 17:19, Dennis Lee Bieber wrote:
    On Tue, 12 Jan 2021 00:31:38 +1300, nospam.paul.lee@f420.n105.z1.binkp.net (paul lee) declaimed the following:


    Thanks for your reply... I was told by another poster that I, since the laptop
    I use connects via wifi, should consider my 12mb speeds normal -


    https://www.speedguide.net/faq/what-is-the-actual-real-life-speed-of-wireless-374




    That does not jibe with what is reported on my gear. Top raw speed even rammed up against the router is 72Mbps.


    --
    The theory of Communism may be summed up in one sentence: Abolish all
    private property.

    Karl Marx

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Chris Green on Wed Jan 13 12:20:16 2021
    On 12/01/2021 18:04, Chris Green wrote:
    When I copy from laptop to desktop (both quite fast machines with fast
    disks) I get something quite a bit over 100MB/s on wired Gigabit
    connections. So the overhead isn't that great given that the
    theoretical maximum would be 1000/8 which is 125MB/s. So on a 300Mb/s wireless link between the same two machines one would, sort of, expect something a bit more than 30MB/s whereas in reality one gets about
    half of that.

    In general I have found that on a good link, byte-level speeds of a little over 1/10th of the Mbps rate are obtained, so overhead is not that heavy a penalty.

    Probably ~10%.

    That's on a *full duplex* link. Broadband is full duplex. Ethernet of the cat 5 sort is full duplex.

    Wifi is NOT full duplex.

    That means that any ACK packets going back share bandwidth with the forward data stream, in a fairly nasty 'wait till the stream packet size is exceeded, then send an ack, oh dear, collisions/backoffs/try again...' sort of way.

    When my Pi Zero link was going titsup before I slapped in an access point 5 feet away, although it *said* it was connected at 5Mbps, it couldn't support a 128kbps stream of audio. My so-called 72Mbps links couldn't reliably handle HD TV, which is around 5Mbps I think.

    I now have an Ethernet cable to where the laptop lives






    --
    The theory of Communism may be summed up in one sentence: Abolish all
    private property.

    Karl Marx

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Chris Green on Wed Jan 13 12:21:45 2021
    On 13/01/2021 09:46, Chris Green wrote:
    Is it really that important or significant? I.e. does it really
    matter if a transfer takes 15 seconds rather than 10 seconds?

    It starts to matter when live streams - audio or video - start to fail
    and stutter...

    --
    “Those who can make you believe absurdities, can make you commit atrocities.”

    ― Voltaire, Questions sur les Miracles à M. Claparede, Professeur de Théologie à Genève, par un Proposant: Ou Extrait de Diverses Lettres de
    M. de Voltaire

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to The Natural Philosopher on Wed Jan 13 12:42:41 2021
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 12/01/2021 18:04, Chris Green wrote:
    When I copy from laptop to desktop (both quite fast machines with fast disks) I get something quite a bit over 100MB/s on wired Gigabit connections. So the overhead isn't that great given that the
    theoretical maximum would be 1000/8 which is 125MB/s. So on a 300Mb/s wireless link between the same two machines one would, sort of, expect something a bit more than 30MB/s whereas in reality one gets about
    half of that.

    In general I have found that on a good link, byte-level speeds of a little over 1/10th of the Mbps rate are obtained, so overhead is not that heavy a penalty.

    Probably ~10%.

    Yes, that's exactly what I was trying to say really: on a wired connection one gets something over 1/10th of the Mb/s figure in MB/s.


    That's on a *full duplex* link. Broadband is full duplex. Ethernet of
    the cat 5 sort is full duplex.

    Wifi is NOT full duplex.

    That means that any ACK packets going back share bandwidth with the forward data stream, in a fairly nasty 'wait till the stream packet size is exceeded, then send an ack, oh dear, collisions/backoffs/try again...' sort of way.

    Simply having to interleave the ACKs with the data going in the other
    direction will slow things down considerably.


    When my Pi Zero link was going titsup before I slapped in an access point 5 feet away, although it *said* it was connected at 5Mbps, it couldn't support a 128kbps stream of audio. My so-called 72Mbps links couldn't reliably handle HD TV, which is around 5Mbps I think.

    I use WiFi for as little as I possibly can. About the only major use
    is using my laptop interactively like now, replying to usenet posts
    and such.

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Ahem A Rivet's Shot@3:770/3 to The Natural Philosopher on Wed Jan 13 12:39:43 2021
    On Wed, 13 Jan 2021 12:20:16 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    That means that any ACK packets going back share bandwidth with the forward data stream, in a fairly nasty 'wait till the stream packet size is exceeded, then send an ack, oh dear, collisions/backoffs/try again...' sort of way.

    Your mission, should you choose to accept it, is to devise a better approach that allows full duplex wifi. As always, should you or any of your
    IM Force be caught or killed, the Secretary will disavow any knowledge of
    your actions. This post will self-destruct in five seconds.

    --
    Steve O'Hara-Smith                   | Directable Mirror Arrays
    C:\>WIN                              | A better way to focus the sun
    The computer obeys and wins.         | licences available see
    You lose and Bill collects.          | http://www.sohara.org/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Ahem A Rivet's Shot on Wed Jan 13 14:23:18 2021
    On 13/01/2021 12:39, Ahem A Rivet's Shot wrote:
    On Wed, 13 Jan 2021 12:20:16 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    That means that any ACK packets going back share bandwidth with the forward data stream, in a fairly nasty 'wait till the stream packet size is exceeded, then send an ack, oh dear, collisions/backoffs/try again...' sort of way.

    Your mission, should you choose to accept it, is to devise a better approach that allows full duplex wifi. As always, should you or any of your IM Force be caught or killed, the Secretary will disavow any knowledge of your actions. This post will self-destruct in five seconds.

    Shannon tells you it's a hiding to nothing. If you use, say, two frequency bands, you would get better performance just adding them together and using collision detection.

    The radio spectrum is limited and precious. Go up to light frequencies
    and there's lots more speed available. But that doesn't punch through
    solid walls..

    The best bet is to put a wifi point in every room, and feed them all via cable, and manage them so that StupidDevices™ are only allowed to log on to the nearest one.

    --
    Religion is regarded by the common people as true, by the wise as
    foolish, and by the rulers as useful.

    (Seneca the Younger, 65 AD)

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Scott Alfter@3:770/3 to tnp@invalid.invalid on Wed Jan 13 17:33:42 2021
    In article <rtmoi1$ofa$1@dont-email.me>,
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    My so-called 72Mbps links couldn't reliably handle HD TV, which is around 5Mbps I think.

    Depends on the codecs involved. If you're slinging around MPEG-2 streams
    that were broadcast OTA, they could take up to 20 Mbps, though 8-12 for the primary stream is closer to typical.

    Years ago, I wanted to connect a MythTV backend over WiFi. 802.11g wouldn't cut it. There theoretically should've been enough bandwidth, but I think
    there was too much other nearby crap on the 2.4-GHz band that interfered. I was able to get decently reliable streaming of recorded TV when I switched
    to 802.11a, as hardly anybody else (and nobody nearby) was using 5 GHz at
    the time.

    _/_
    / v \ Scott Alfter (remove the obvious to send mail)
    (IIGS( https://alfter.us/ Top-posting!
    \_^_/ >What's the most annoying thing on Usenet?

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Ahem A Rivet's Shot@3:770/3 to The Natural Philosopher on Wed Jan 13 18:21:13 2021
    On Wed, 13 Jan 2021 14:23:18 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    The radio spectrum is limited and precious. Go up to light frequencies
    and there's lots more speed available. But that doesn't punch through
    solid walls..

    Ah we have the solution - modulated X-Rays.

    --
    Steve O'Hara-Smith                   | Directable Mirror Arrays
    C:\>WIN                              | A better way to focus the sun
    The computer obeys and wins.         | licences available see
    You lose and Bill collects.          | http://www.sohara.org/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Ahem A Rivet's Shot on Wed Jan 13 19:25:26 2021
    On 13/01/2021 18:21, Ahem A Rivet's Shot wrote:
    On Wed, 13 Jan 2021 14:23:18 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    The radio spectrum is limited and precious. Go up to light frequencies
    and there's lots more speed available. But that doesn't punch through
    solid walls..

    Ah we have the solution - modulated X-Rays.

    well you may well laugh...why stop there. Gamma rays?


    --
    "What do you think about Gay Marriage?"
    "I don't."
    "Don't what?"
    "Think about Gay Marriage."

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From paul lee@1:105/420 to Chris Green on Wed Jan 13 16:20:58 2021
    Even if I won't get the 100mb/s that I had THOUGHT, I'd probably see a better transfer rate anyway. Hmmmmm...
    Is it really that important or significant? I.e. does it really
    matter if a transfer takes 15 seconds rather than 10 seconds?

    I run my (incremental, so rarely really huge) backups overnight via anacron so whether they take 10 minutes or 30 minutes doesn't matter
    at all. As long as they complete before I wake up in the morning it's fine.

    While I suppose you're right about the bigger backups being at night, the files I work with WOULD benefit from any and all transfer increases. I mean... a lot of 4k movies & videos, backups of 1TB drives, etc etc.

    Yes, nighttime backups can only go so far... but I do sit and wait while things transfer. A lot.

    I still haven't gone to the network room and put the laptop on the ethernet - I'm going to... maybe today.

    I'm thinking that if, while connected to ethernet, I get good transfer rates, then I'll just connect to ethernet when I'm going to do a bunch of big transfers - and I might check into getting the latest WiFi chips across my entire network so that when on WiFi it's at least a LITTLE quicker. If I can do both of those things I think I'll be pretty well set up.



    |07p|15AULIE|1142|07o
    |08.........

    --- Mystic BBS v1.12 A47 2021/01/05 (Raspberry Pi/32)
    * Origin: 2o fOr beeRS bbS>>20ForBeers.com:1337 (1:105/420)
  • From paul lee@1:105/420 to The Natural Philosopher on Wed Jan 13 16:22:01 2021
    I now have an Ethernet cable to where the laptop lives

    I have a feeling that, in the end, this is gonna be the ticket.



    |07p|15AULIE|1142|07o
    |08.........

    --- Mystic BBS v1.12 A47 2021/01/05 (Raspberry Pi/32)
    * Origin: 2o fOr beeRS bbS>>20ForBeers.com:1337 (1:105/420)
  • From Chris Green@3:770/3 to paul lee on Thu Jan 14 08:56:20 2021
    paul lee <nospam.paul.lee@f420.n105.z1.binkp.net> wrote:
    Even if I won't get the 100mb/s that I had THOUGHT, I'd probably see a better transfer rate anyway. Hmmmmm...
    Is it really that important or significant? I.e. does it really
    matter if a transfer takes 15 seconds rather than 10 seconds?

    I run my (incremental, so rarely really huge) backups overnight via anacron so whether they take 10 minutes or 30 minutes doesn't matter
    at all. As long as they complete before I wake up in the morning it's fine.

    While I suppose you're right about the bigger backups being at night, the files I work with WOULD benefit from any and all transfer increases. I mean... a lot of 4k movies & videos, backups of 1TB drives, etc etc.

    When you back up a 1TB drive do you actually copy the whole 1TB? It's a huge waste of time and space and you can't keep so many backups. Use some form of incremental backup and also back up *selectively*.

    I just back up /home, /etc and a few other odds and ends that have customisation or configuration in them. There are ready-made incremental backup systems like rsnapshot, which I used for a while, but then I wrote my own (quite similar but does *exactly* what I want). There's no need to back up /usr as you can simply reinstall everything there.
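    (The trick at the heart of rsnapshot and most of its relatives is rsync's --link-dest option: files unchanged since the previous snapshot become hard links into it, so every snapshot looks like a complete copy but only changed files take space. A minimal sketch, with hypothetical paths, run after rotating daily.0 to daily.1:

    rsync -a --delete --link-dest=/backup/daily.1 /home/ /backup/daily.0/

    Deleting an old snapshot then only frees the blocks that no newer snapshot still links to.)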

    For example on my desktop machine I keep short-term incremental
    backups on a separate drive, my 1TB /home is 38% full but my
    multilevel incremental backups only occupy 20% of the 1TB backup
    drive. 'Multilevel' means I have hourly backups for the last 9 hours,
    daily backups for the last 7 days and weekly backups for 5 weeks.

    My longer term (daily) backups go to an offsite machine, a typical
    incremental backup of my 38% full /home plus the other bits and pieces
    only takes a couple of minutes because only the *changes* since the
    day before are saved.

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Chris Green on Thu Jan 14 11:06:48 2021
    On 14/01/2021 08:56, Chris Green wrote:
    When you back up a 1TB drive do you actually copy the whole 1TB? It's a huge waste of time and space and you can't keep so many backups. Use some form of incremental backup and also back up *selectively*.

    Depends on what you want. I rsync huge amounts of data. Disk space is cheap. Recovering from data loss is not. Working out what is important and what is not is even more expensive.





    --
    The theory of Communism may be summed up in one sentence: Abolish all
    private property.

    Karl Marx

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From TimS@3:770/3 to All on Thu Jan 14 11:43:24 2021
    On 14 Jan 2021 at 11:06:48 GMT, The Natural Philosopher <tnp@invalid.invalid> wrote:

    On 14/01/2021 08:56, Chris Green wrote:
    When you back up a 1TB drive do you actually copy the whole 1TB? It's a huge waste of time and space and you can't keep so many backups. Use some form of incremental backup and also back up *selectively*.

    Depends on what you want. I rsync huge amounts of data. Disk space is cheap. Recovering from data loss is not. Working out what is important and what is not is even more expensive.

    Incremental backup as done by Time Machine allows many more backups. It's done with hard links, so a file is backed up the first time, but hard links are created for subsequent backups. This means that what is presented to me when I want to do a restore from a selected date just looks like an ordinary folder, as it would appear on the Desktop. I highlight one or more files/folders with the mouse and click Restore. No farting about with command line options that I have no interest in remembering.

    Disk space may be cheap, but then you have to manage it. And remember - if you make backup/restore complicated then noddy users won't do it.

    --
    Tim

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Richard Falken@1:123/115 to The Natural Philosopher on Thu Jan 14 06:36:06 2021
    Re: Re: My darn NAS...
    By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am

    Depends on what you want. I rsync huge amounts of data. Disk space is cheap. Recovering from data loss is not. Working out what is important and what is not is even more expensive.


    I agree with this position.

    I know that just backing up the data that is not easily reproducible suffices, in theory. However, if you only back the data up without the applications and the OS stack, your recovery consists of a sysadmin installing software for a week and swearing at his notebook.


    --
    gopher://gopher.richardfalken.com/1/richardfalken
    --- SBBSecho 3.12-Linux
    * Origin: Palantir * palantirbbs.ddns.net * Pensacola, FL * (1:123/115)
  • From Ahem A Rivet's Shot@3:770/3 to TimS on Thu Jan 14 12:22:21 2021
    On 14 Jan 2021 11:43:24 GMT
    TimS <timstreater@greenbee.net> wrote:

    Disk space may be cheap, but then you have to manage it. And remember -
    if you make backup/restore complicated then noddy users won't do it.

    You can make it as easy as you like and they still won't. A long time ago I set up a system for a customer with an overnight backup schedule and prepared a box of QIC tapes labelled Mon, Tue, Wed, Thu, Fri, Fri, Fri and left instructions to change the tape daily and keep all but one of the Fri tapes offsite, cycling them round each week. Many months later the hard disc failed during the nightly backup, so after replacing the drive and finding the backup corrupt I asked for the previous night's tape - it emerged that they had *never* changed the tape.

    We all learned something that day.

    --
    Steve O'Hara-Smith                   | Directable Mirror Arrays
    C:\>WIN                              | A better way to focus the sun
    The computer obeys and wins.         | licences available see
    You lose and Bill collects.          | http://www.sohara.org/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to TimS on Thu Jan 14 12:36:35 2021
    On 14/01/2021 11:43, TimS wrote:
    On 14 Jan 2021 at 11:06:48 GMT, The Natural Philosopher
    <tnp@invalid.invalid>
    wrote:

    On 14/01/2021 08:56, Chris Green wrote:
    When you back up a 1TB drive do you actually copy the whole 1TB? It's a huge waste of time and space and you can't keep so many backups. Use some form of incremental backup and also back up *selectively*.

    Depends on what you want. I rsync huge amounts of data. Disk space is cheap. Recovering from data loss is not. Working out what is important and what is not is even more expensive.

    Incremental backup as done by Time Machine allows many more backups. It's done with hard links, so a file is backed up the first time, but hard links are created for subsequent backups. This means that what is presented to me when I want to do a restore from a selected date just looks like an ordinary folder, as it would appear on the Desktop. I highlight one or more files/folders with the mouse and click Restore. No farting about with command line options that I have no interest in remembering.

    Disk space may be cheap, but then you have to manage it. And remember - if you make backup/restore complicated then noddy users won't do it.

    Well, what I have on my SECOND drive is complete directory trees of three machines, and it's all exported by NFS, so all I have to do is mount it, navigate to the file I need and move it across to where it needs to go.

    Hardly onerous!

    --
    "I am inclined to tell the truth and dislike people who lie consistently.
    This makes me unfit for the company of people of a Left persuasion, and
    all women"

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Ahem A Rivet's Shot on Thu Jan 14 12:43:03 2021
    On 14/01/2021 12:22, Ahem A Rivet's Shot wrote:
    On 14 Jan 2021 11:43:24 GMT
    TimS <timstreater@greenbee.net> wrote:

    Disk space may be cheap, but then you have to manage it. And remember -
    if you make backup/restore complicated then noddy users won't do it.

    You can make it as easy as you like and they still won't. A long time ago I set up a system for a customer with an overnight backup schedule and prepared a box of QIC tapes labelled Mon, Tue, Wed, Thu, Fri, Fri, Fri and left instructions to change the tape daily and keep all but one of the Fri tapes offsite, cycling them round each week. Many months later the hard disc failed during the nightly backup, so after replacing the drive and finding the backup corrupt I asked for the previous night's tape - it emerged that they had *never* changed the tape.

    We all learned something that day.

    Exactly. My rsyncs are done by a 3 am cronjob. I have two VPSes out there in internet land, and one big server in house. They all get copied onto a secondary disk. If any disk goes, I have full backups - except if the secondary disk goes, in which case I have a BIG cronjob the night and day after a new one goes in :-)
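    (A minimal sketch of that sort of arrangement, with hypothetical paths - one line in root's crontab:

    0 3 * * * rsync -a --delete /home/ /mnt/backup/home/

    i.e. mirror /home to the secondary disk at 3 am every day.)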

    I have never really successfully restored from a tape. And the drive
    costs more than 4TB of disk.

    So far I've had one backup drive die on me - well, nearly; it started giving errors.

    And I've accidentally deleted a file and restored it from last night's backup half a dozen times, and rebuilt *this* desktop completely using the backup as a source of remembering what config changes I had made to a raw install.

    Works for me. YMMV


    --
    In todays liberal progressive conflict-free education system, everyone
    gets full Marx.

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Richard Falken on Thu Jan 14 12:53:20 2021
    On 13/01/2021 17:36, Richard Falken wrote:
    Re: Re: My darn NAS...
    By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am

    > Depends on what you want. I rsync huge amounts of data. Disk space is
    > cheap. Recovering from data loss is not. Working out what is important
    > and what is not is even more expensive.
    >

    I agree with this position.

    I know that just backing up the data that is not easily reproducible suffices, in theory. However, if you only back the data up without the applications and the OS stack, your recovery consists of a sysadmin installing software for a week and swearing at his notebook.


    Well, I do reinstall all apps, BUT remembering what the config files were called, what changes were made and where they were, is something I prefer to leave for that recovery phase.

    In general a well crashed primary disk is an excuse to upgrade everything...



    --
    There’s a mighty big difference between good, sound reasons and reasons
    that sound good.

    Burton Hillis (William Vaughn, American columnist)

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From TimS@3:770/3 to Daniel James on Thu Jan 14 13:10:01 2021
    On 14 Jan 2021 at 12:58:22 GMT, Daniel James <daniel@me.invalid> wrote:

    In article <rtnhf6$gjc$1@dont-email.me>, The Natural Philosopher wrote:
    Ah we have the solution - modulated X-Rays.

    well you may well laugh...why stop there. Gamma rays?

    You do know that X-Rays and Gamma rays are essentially the same thing?

    They occupy much the same part of the EM spectrum. The distinction
    often made between them is to do with their means of production. Gamma
    rays are produced inside the atomic nucleus while X-Rays are created by relaxation of highly excited electrons outside the nucleus.

    https://en.wikipedia.org/wiki/X-ray#Gamma_rays

    Both are ionising, so can cause radiation damage. Gamma rays are much more energetic, however. Stuff like UV, visible light, and whatever 5G uses is not ionising.

    --
    Tim

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Daniel James@3:770/3 to The Natural Philosopher on Thu Jan 14 12:58:22 2021
    In article <rtnhf6$gjc$1@dont-email.me>, The Natural Philosopher wrote:
    Ah we have the solution - modulated X-Rays.

    well you may well laugh...why stop there. Gamma rays?

    You do know that X-Rays and Gamma rays are essentially the same thing?

    They occupy much the same part of the EM spectrum. The distinction
    often made between them is to do with their means of production. Gamma
    rays are produced inside the atomic nucleus while X-Rays are created by relaxation of highly excited electrons outside the nucleus.

    https://en.wikipedia.org/wiki/X-ray#Gamma_rays

    --
    Cheers,
    Daniel.

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to The Natural Philosopher on Thu Jan 14 13:17:25 2021
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 14/01/2021 08:56, Chris Green wrote:
    When you back up a 1TB drive do you actually copy the whole 1TB? It's
    a huge waste of time and space and you can't keep so many backups.
    Use some form of incremental backup and also back up *selectively*.

    depends on what you want. I rsync huge amounts of data. Disk space is
    cheap. Recovering from data loss is not. Working out what is important
    and what is not is even more expensive.

    I don't go into much detail, I just don't save stuff in cache
    directories or in directories named tmp. It doesn't take long, and the
    rules are kept in a file called .rsync-filter which rsync can use automatically.

    Having automatic, hourly and daily backups for the past several hours
    and days saves *a lot* of time! Once set up it takes none of my time
    at all.
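
    A minimal sketch of that kind of setup - the filter rules, hostname and
    schedule below are invented for illustration, not the actual config:

      # ~/.rsync-filter - per-directory rules rsync picks up with -F
      - .cache/
      - tmp/

      # crontab: hourly and nightly runs (real setups rotate dated directories)
      0 * * * *  rsync -aF --delete /home/ nas:/backups/hourly/
      30 2 * * * rsync -aF --delete /home/ nas:/backups/nightly/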

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Martin Gregorie@3:770/3 to Richard Falken on Thu Jan 14 13:24:26 2021
    On Thu, 14 Jan 2021 06:36:06 +1300, Richard Falken wrote:

    Re: Re: My darn NAS...
    By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am

    depends on what you want. I rsync huge amounts of data. Disk space is cheap. Recovering from data loss is not. Working out what is
    important and what is not is even more expensive.


    I agree with this position.

    I know that just backing up the data that is not easily reproducible
    suffices, in theory. However, if you only back the data up without the
    applications and the OS stack, your recovery consists of a sysadmin
    installing software for a week and swearing at his notebook.

    There's a simple tweak that fixes most of that stuff: move /usr/local to
    /home/local and replace it with a symlink to /home/local.
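
    Spelled out, that's just (a sketch - run as root, ideally with nothing
    using /usr/local at the time):

      mv /usr/local /home/local        # relocate the tree onto the backed-up filesystem
      ln -s /home/local /usr/local     # the old path keeps working via the symlink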

    I've done the equivalent with my (large) PostgreSQL databases and my
    local Apache-based website (by default these live in /var), so I changed
    their configurations to put those files in /home too.

    Everything continues to work as before, but now I've secured almost all
    of my own work and customisation by backing up /home.

    The only thing that's not safeguarded now is the contents of /etc, so
    either back that up along with /home or keep copies of everything in /etc
    that you've explicitly changed in, say, your normal home login. I do the
    latter but of course ymmv. Changes in /etc made by software updates don't
    need backing up because they'll be automatically reapplied when you're rebuilding the failed device that holds your filing system.


    --
    Martin | martin at
    Gregorie | gregorie dot org

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to Ahem A Rivet's Shot on Thu Jan 14 13:29:09 2021
    Ahem A Rivet's Shot <steveo@eircom.net> wrote:
    On 14 Jan 2021 11:43:24 GMT
    TimS <timstreater@greenbee.net> wrote:

    Disk space may be cheap, but then you have to manage it. And remember -
    if you make backup/restore complicated then noddy users won't do it.

    You can make it as easy as you like and they still won't.

    So you make it automatic. I back up my wife's laptop with incremental
    backups; she doesn't have to do anything. Any time her laptop is
    connected to our LAN overnight (quite often) it gets backed up to the
    NAS in the garage. It works just the same for my systems (desktop,
    laptop, pi server), they get backed up automatically every night. I'm
    far too lazy to actually do any backups that require action on my part
    (and I suspect most people are the same).

    Since they're incremental backups they don't eat space very fast; my
    8TB NAS disk is only 5% full since moving to it from a 3TB one. The
    3TB one was about 5 years old (backups back to 2015) and was 50% full,
    though that wasn't *all* incrementals.
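
    A sketch of one way to wire that up - the hostname and paths are
    invented, not the actual setup:

      # root crontab on the backup host: pull the laptop's /home if it answers a ping
      15 3 * * * ping -c1 -W2 laptop >/dev/null 2>&1 && rsync -a --delete laptop:/home/ /srv/backups/laptop/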

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to The Natural Philosopher on Thu Jan 14 13:19:52 2021
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 13/01/2021 17:36, Richard Falken wrote:
    Re: Re: My darn NAS...
    By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am

    > depends on what you want. I rsync huge amounts of data. Disk space is
    > cheap. Recovering from data loss is not. Working out what is important
    > and what is not is even more expensive.
    >

    I agree with this position.

    I know that just backing up the data that is not easily reproducible
    suffices, in theory. However, if you only back the data up without the
    applications and the OS stack, your recovery consists of a sysadmin
    installing software for a week and swearing at his notebook.


    Well I do reinstall all apps BUT remembering what the config files were called, what changes were made and where they were, is something I
    prefer to leave for that recovery phase.

    I make very sure that all the configuration is either in /home or
    /etc; most programs do behave properly and keep their configurations
    in the right place.


    In general a well crashed primary disk is an excuse to upgrade everything...

    Yes, so why would one back up /usr ??

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to TimS on Thu Jan 14 13:22:44 2021
    TimS <timstreater@greenbee.net> wrote:
    On 14 Jan 2021 at 11:06:48 GMT, The Natural Philosopher
    <tnp@invalid.invalid>
    wrote:

    On 14/01/2021 08:56, Chris Green wrote:
    When you back up a 1TB drive do you actually copy the whole 1TB? It's
    a huge waste of time and space and you can't keep so many backups.
    Use some form of incremental backup and also back up *selectively*.

    depends on what you want. I rsync huge amounts of data. Disk space is cheap. Recovering from data loss is not. Working out what is important
    and what is not is even more expensive.

    Incremental backup as done by Time Machine allows many more backups. It's
    done with hard links, so a file is backed up the first time, but hard
    links are created for subsequent backups. This means that what is
    presented to me when I want to do a restore from a selected date just
    looks like an ordinary folder as it would appear on the Desktop. I
    highlight one or more files/folders with the mouse and click Restore. No
    farting about with command line options that I have no interest in
    remembering.

    Exactly, my backups are like that. Every one of them looks exactly
    like my normal home directory, just copy the files back as needed.


    Disk space may be cheap, but then you have to manage it. And remember -
    if you make backup/restore complicated then noddy users won't do it.

    Yes, again very true. I am just about my only user (possibly 'noddy'),
    but I'm lazy; automatic backups are the only reliable ones!

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Daniel James on Thu Jan 14 13:35:44 2021
    On 14/01/2021 12:58, Daniel James wrote:
    In article <rtnhf6$gjc$1@dont-email.me>, The Natural Philosopher wrote:
    Ah we have the solution - modulated X-Rays.

    well you may well laugh...why stop there. Gamma rays?

    You do know that X-Rays and Gamma rays are essentially the same thing?

    They occupy much the same part of the EM spectrum. The distinction
    often made between them is to do with their means of production. Gamma
    rays are produced inside the atomic nucleus while X-Rays are created by relaxation of highly excited electrons outside the nucleus.

    https://en.wikipedia.org/wiki/X-ray#Gamma_rays

    Gamma rays are shorter waves than X-rays, in my book.




    --
    Future generations will wonder in bemused amazement that the early
    twenty-first century’s developed world went into hysterical panic over a globally average temperature increase of a few tenths of a degree, and,
    on the basis of gross exaggerations of highly uncertain computer
    projections combined into implausible chains of inference, proceeded to contemplate a rollback of the industrial age.

    Richard Lindzen

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Martin Gregorie on Thu Jan 14 13:39:26 2021
    On 14/01/2021 13:24, Martin Gregorie wrote:
    On Thu, 14 Jan 2021 06:36:06 +1300, Richard Falken wrote:

    Re: Re: My darn NAS...
    By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am

    > depends on what you want. I rsync huge amounts of data. Disk space is
    > cheap. Recovering from data loss is not. Working out what is
    > important and what is not is even more expensive.
    >
    >
    I agree with this position.

    I know that just backing up the data that is not easily reproducible
    suffices, in theory. However, if you only back the data up without the
    applications and the OS stack, your recovery consists of a sysadmin
    installing software for a week and swearing at his notebook.

    There's a simple tweak that fixes most of that stuff: move /usr/local to
    /home/local and replace it with a symlink to /home/local.

    I've done the equivalent with my (large) PostgreSQL databases and my
    local Apache-based website (by default these live in /var), so I changed
    their configurations to put those files in /home too.

    Everything continues to work as before, but now I've secured almost all
    of my own work and customisation by backing up /home.

    The only thing that's not safeguarded now is the contents of /etc, so
    either back that up along with /home or keep copies of everything in
    /etc that you've explicitly changed in, say, your normal home login. I
    do the latter but of course ymmv. Changes in /etc made by software
    updates don't need backing up because they'll be automatically reapplied
    when you're rebuilding the failed device that holds your filing system.


    what about /var, which contains all the web servers and MySQL databases
    by default? /opt has stuff in it as well. /boot has grub configs.





    --
    “Some people like to travel by train because it combines the slowness of
    a car with the cramped public exposure of an airplane.”

    Dennis Miller

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to The Natural Philosopher on Thu Jan 14 14:07:45 2021
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 14/01/2021 13:24, Martin Gregorie wrote:
    On Thu, 14 Jan 2021 06:36:06 +1300, Richard Falken wrote:

    Re: Re: My darn NAS...
    By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am

    > depends on what you want. I rsync huge amounts of data. Disk space is
    > cheap. Recovering from data loss is not. Working out what is
    > important and what is not is even more expensive.
    >
    I agree with this position.

    I know that just backing up the data that is not easily reproducible
    suffices, in theory. However, if you only back the data up without the
    applications and the OS stack, your recovery consists of a sysadmin
    installing software for a week and swearing at his notebook.

    There's a simple tweak that fixes most of that stuff: move /usr/local to /home/local and replace it with a symlink to /home/local.

    I've done the equivalent with my (large) PostgreSQL databases and my local Apache-based website (by default these live in /var), so I changed their configurations to put those files in /home too.

    Everything continues to work as before, but now I've secured almost all of my own work and customisation by backing up /home.

    The only thing that's not safeguarded now is the contents of /etc, so
    either back that up along with /home or keep copies of everything in /etc that you've explicitly changed in, say, your normal home login. I do the latter but of course ymmv. Changes in /etc made by software updates don't need backing up because they'll be automatically reapplied when you're rebuilding the failed device that holds your filing system.


    what about /var, which contains all the web servers and MySQL databases
    by default? /opt has stuff in it as well. /boot has grub configs.

    I used to back up /var but I don't have any MySQL databases now,
    partly for this reason, and anyway backing up a database file while
    the server is running isn't a very good idea. The web server stuff I
    have symbolically linked to my home directory so it's backed up that
    way.

    I have nothing in /opt - I've checked - and if I did I would add it to
    my backups. I *do* back up /usr/local. /boot configs are generated
    automatically at installation in general; I've not manually changed
    them.

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Chris Green on Thu Jan 14 14:29:17 2021
    On 14/01/2021 13:19, Chris Green wrote:
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 13/01/2021 17:36, Richard Falken wrote:
    Re: Re: My darn NAS...
    By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am

    > depends on what you want. I rsync huge amounts of data. Disk space is
    > cheap. Recovering from data loss is not. Working out what is important
    > and what is not is even more expensive.
    >

    I agree with this position.

    I know that just backing up the data that is not easily reproducible
    suffices, in theory. However, if you only back the data up without the
    applications and the OS stack, your recovery consists of a sysadmin
    installing software for a week and swearing at his notebook.


    Well I do reinstall all apps BUT remembering what the config files were
    called, what changes were made and where they were, is something I
    prefer to leave for that recovery phase.

    I make very sure that all the configuration is either in /home or
    /etc; most programs do behave properly and keep their configurations
    in the right place.


    In general a well crashed primary disk is an excuse to upgrade everything...

    Yes, so why would one back up /usr ??

    Because /usr/local and /usr/lib are full of nice stuff like fonts and
    screensaver backgrounds and the like.


    --
    In a Time of Universal Deceit, Telling the Truth Is a Revolutionary Act.

    - George Orwell

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to The Natural Philosopher on Thu Jan 14 14:52:29 2021
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 14/01/2021 13:19, Chris Green wrote:
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 13/01/2021 17:36, Richard Falken wrote:
    Re: Re: My darn NAS...
    By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am

    > depends on what you want. I rsync huge amounts of data. Disk space is
    > cheap. Recovering from data loss is not. Working out what is important
    > and what is not is even more expensive.
    >

    I agree with this position.

    I know that just backing up the data that is not easily reproducible
    suffices, in theory. However, if you only back the data up without the
    applications and the OS stack, your recovery consists of a sysadmin
    installing software for a week and swearing at his notebook.


    Well I do reinstall all apps BUT remembering what the config files were
    called, what changes were made and where they were, is something I
    prefer to leave for that recovery phase.

    I make very sure that all the configuration is either in /home or
    /etc; most programs do behave properly and keep their configurations
    in the right place.


    In general a well crashed primary disk is an excuse to upgrade
    everything...

    Yes, so why would one back up /usr ??

    Because /usr/local and /usr/lib are full of nice stuff like fonts and screensaver backgrounds and the like.

    I do actually back up /usr/local; as far as I'm aware there's nothing
    in my /usr/lib that isn't simply a package I can download from the repositories.

    I do keep a record of everything that I have installed in addition to
    a standard basic install of xubuntu; as well as my own record, I use
    synaptic, which also keeps a history of what has been installed.
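
    On a Debian-family system such as xubuntu, the stock tools can take the
    same sort of record (the file names here are invented):

      apt-mark showmanual > ~/manual-packages.txt        # packages installed by hand
      dpkg --get-selections > ~/package-selections.txt   # the full selection list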

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Martin Gregorie@3:770/3 to The Natural Philosopher on Thu Jan 14 15:22:14 2021
    On Thu, 14 Jan 2021 13:39:26 +0000, The Natural Philosopher wrote:

    On 14/01/2021 13:24, Martin Gregorie wrote:
    On Thu, 14 Jan 2021 06:36:06 +1300, Richard Falken wrote:

    Re: Re: My darn NAS...
    By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am

    > depends on what you want. I rsync huge amounts of data. Disk space
    > is cheap. Recovering from data loss is not. Working out what is
    > important and what is not is even more expensive.
    >
    >
    I agree with this position.

    I know that just backing up the data that is not easily reproducible
    suffices, in theory. However, if you only back the data up without the
    applications and the OS stack, your recovery consists of a sysadmin
    installing software for a week and swearing at his notebook.

    There's a simple tweak that fixes most of that stuff: move /usr/local to
    /home/local and replace it with a symlink to /home/local.

    I've done the equivalent with my (large) PostgreSQL databases and
    my local Apache-based website (by default these live in /var), so I
    changed their configurations to put those files in /home too.

    Everything continues to work as before, but now I've secured almost all
    of my own work and customisation by backing up /home.

    The only thing that's not safeguarded now is the contents of /etc, so
    either back that up along with /home or keep copies of everything in
    /etc that you've explicitly changed in, say, your normal home login. I
    do the latter but of course ymmv. Changes in /etc made by software
    updates don't need backing up because they'll be automatically
    reapplied when you're rebuilding the failed device that holds your
    filing system.


    what about /var, which contains all the web servers and MySQL databases
    by default? /opt has stuff in it as well. /boot has grub configs.

    As I said, I don't need to back up /var because I moved the stuff that
    defaults to /var that I've explicitly set up (PostgreSQL database,
    Apache website) into dedicated logins in /home and changed the
    PostgreSQL and Apache configurations accordingly. Copies of those
    configuration files are in my main login directory, which is, of
    course, in /home and so automatically backed up along with everything
    else in it.

    I've never made changes in /opt, so I don't need to back it up: a
    reinstall will fix it.

    Similarly I haven't made any changes to the grub configuration, so don't
    need to back it up because the Fedora 'install over the net with dnf'
    will restore that automatically.



    --
    Martin | martin at
    Gregorie | gregorie dot org

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Richard Falken@1:123/115 to TimS on Thu Jan 14 11:45:19 2021
    Re: Re: My darn NAS...
    By: TimS to All on Thu Jan 14 2021 11:43 am

    Incremental backup as done by Time Machine allows many more backups. It's
    done with hard links, so a file is backed up the first time, but hard
    links are created for subsequent backups. This means that what is
    presented to me when I want to do a restore from a selected date just
    looks like an ordinary folder as it would appear on the Desktop. I
    highlight one or more files/folders with the mouse and click Restore. No
    farting about with command line options that I have no interest in
    remembering.

    Hard link based incremental backups are great. I do a lot of it with rsync. There is something worth mentioning, though:

    You may use hard link based backups in order to make a snapshot per
    week, but if a file remains unchanged for long, all your hard links will
    be pointing to the same file in your backup drive. This means if the
    file gets corrupted you have no copies of it despite having 500+
    "images". I have seen it happen and it is not pretty.

    It didn't happen to me, thankfully :-P But it pays to run some integrity
    checks from time to time, or at least have backups of the backup.
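
    For reference, a minimal sketch of the hard-link snapshot pattern being
    described, plus one way to check it - the dates and paths are invented:

      # today's snapshot hard-links anything unchanged since yesterday's
      rsync -a --delete --link-dest=/backup/2021-01-13 /home/ /backup/2021-01-14/

      # occasional integrity check: -c compares full checksums, -n only reports
      rsync -anc --delete /home/ /backup/2021-01-14/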

    --
    gopher://gopher.richardfalken.com/1/richardfalken
    --- SBBSecho 3.12-Linux
    * Origin: Palantir * palantirbbs.ddns.net * Pensacola, FL * (1:123/115)
  • From druck@3:770/3 to Chris Green on Thu Jan 14 17:31:46 2021
    On 14/01/2021 13:19, Chris Green wrote:
    I make very sure that all the configuration is either in /home or
    /etc, most programs do behave properly and keep their configurations
    in the right place.

    The etckeeper package is very useful for keeping the changes to config
    files in a git repo, which can be pushed up to your backup server on a
    cron job.

    Being git, you can put a commit message on any change, so you know why
    you had to make it - very useful for knowing what to change on a fresh
    system too, or for reverting it when no longer needed.

    It will also automatically commit the changes from updates, making it
    very easy to solve problems caused by config files being overwritten by
    the package manager.
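
    A quick sketch of what that looks like in practice - the commit message
    and the 'backup' remote are invented:

      sudo apt install etckeeper                         # turns /etc into a git repo
      sudo etckeeper commit "sshd: key-only logins"      # record a change and the reason
      sudo git -C /etc log --oneline ssh/sshd_config     # why did this file change?
      sudo git -C /etc push backup master                # cron-able once a remote exists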

    ---druck

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From druck@3:770/3 to Chris Green on Thu Jan 14 17:42:45 2021
    On 14/01/2021 13:29, Chris Green wrote:
    So you make it automatic. I back up my wife's laptop with incremental
    backups; she doesn't have to do anything. Any time her laptop is
    connected to our LAN overnight (quite often) it gets backed up to the
    NAS in the garage. It works just the same for my systems (desktop,
    laptop, pi server), they get backed up automatically every night. I'm
    far too lazy to actually do any backups that require action on my part
    (and I suspect most people are the same).

    That's the way it should be.

    Since they're incremental backups they don't eat space very fast; my
    8TB NAS disk is only 5% full since moving to it from a 3TB one. The
    3TB one was about 5 years old (backups back to 2015) and was 50% full,
    though that wasn't *all* incrementals.

    The difference with my dozen or so Pi's is I do the incremental backup
    to the NAS, not directly onto its filing system, but into an image file
    which was created from the Pi's SD card. This means that if any of the
    Pi's SD cards fail, I can just get a card of the same size and write the
    image file straight back on to it, and be up and running again in minutes.
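
    A sketch of how that sort of image-file backup can be done - the image
    name and device names are placeholders:

      losetup -Pf --show pi.img             # attach the image; prints e.g. /dev/loop0
      mount /dev/loop0p2 /mnt/img           # the Pi's root partition
      mount /dev/loop0p1 /mnt/img/boot      # the Pi's boot partition
      rsync -aAXx --delete root@pi:/ /mnt/img/          # -x skips /proc, /sys, /dev
      rsync -a --delete root@pi:/boot/ /mnt/img/boot/   # boot is a separate filesystem
      umount -R /mnt/img && losetup -d /dev/loop0

      # after a card failure, write the image straight back (double-check sdX!)
      dd if=pi.img of=/dev/sdX bs=4M conv=fsync status=progress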

    ---druck

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to Richard Falken on Thu Jan 14 18:05:17 2021
    Richard Falken <nospam.Richard.Falken@f1.n770.z6840.fidonet.org> wrote:
    Re: Re: My darn NAS...
    By: TimS to All on Thu Jan 14 2021 11:43 am

    Incremental backup as done by Time Machine allows many more backups. It's
    done with hard links, so a file is backed up the first time, but hard
    links are created for subsequent backups. This means that what is
    presented to me when I want to do a restore from a selected date just
    looks like an ordinary folder as it would appear on the Desktop. I
    highlight one or more files/folders with the mouse and click Restore. No
    farting about with command line options that I have no interest in
    remembering.

    Hard link based incremental backups are great. I do a lot of it with
    rsync. There is something worth mentioning, though:

    You may use hard link based backups in order to make a snapshot per
    week, but if a file remains unchanged for long, all your hard links will
    be pointing to the same file in your backup drive. This means if the
    file gets corrupted you have no copies of it despite having 500+
    "images". I have seen it happen and it is not pretty.

    It didn't happen to me, thankfully :-P But it pays to run some integrity
    checks from time to time, or at least have backups of the backup.

    Yes, of course that's true; it's one reason why I keep more than one
    incremental backup of my system (and they're on different machines).

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From TimS@3:770/3 to All on Thu Jan 14 18:29:32 2021
    On 13 Jan 2021 at 22:45:19 GMT, Richard Falken <Richard Falken> wrote:

    Re: Re: My darn NAS...
    By: TimS to All on Thu Jan 14 2021 11:43 am

    Incremental backup as done by Time Machine allows many more backups. It's
    done with hard links, so a file is backed up the first time, but hard
    links are created for subsequent backups. This means that what is
    presented to me when I want to do a restore from a selected date just
    looks like an ordinary folder as it would appear on the Desktop. I
    highlight one or more files/folders with the mouse and click Restore. No
    farting about with command line options that I have no interest in
    remembering.

    Hard link based incremental backups are great. I do a lot of it with
    rsync. There is something worth mentioning, though:

    You may use hard link based backups in order to make a snapshot per
    week, but if a file remains unchanged for long, all your hard links will
    be pointing to the same file in your backup drive. This means if the
    file gets corrupted you have no copies of it despite having 500+
    "images". I have seen it happen and it is not pretty.

    I don't know whether Time Machine does this or not, or perhaps limits the number of hard links to any file and creates a new complete backup of the file and starts again.

    On my main file machine I've set TM to use a second disk; it alternates
    between them, so this is some protection.

    --
    Tim

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Chris Green@3:770/3 to TimS on Thu Jan 14 18:35:35 2021
    TimS <timstreater@greenbee.net> wrote:
    On 13 Jan 2021 at 22:45:19 GMT, Richard Falken <Richard Falken> wrote:

    Re: Re: My darn NAS...
    By: TimS to All on Thu Jan 14 2021 11:43 am

    Incremental backup as done by Time Machine allows many more backups. It's
    done with hard links, so a file is backed up the first time, but hard
    links are created for subsequent backups. This means that what is
    presented to me when I want to do a restore from a selected date just
    looks like an ordinary folder as it would appear on the Desktop. I
    highlight one or more files/folders with the mouse and click Restore. No
    farting about with command line options that I have no interest in
    remembering.

    Hard link based incremental backups are great. I do a lot of it with
    rsync. There is something worth mentioning, though:

    You may use hard link based backups in order to make a snapshot per
    week, but if a file remains unchanged for long, all your hard links will
    be pointing to the same file in your backup drive. This means if the
    file gets corrupted you have no copies of it despite having 500+
    "images". I have seen it happen and it is not pretty.

    I don't know whether Time Machine does this or not, or perhaps limits
    the number of hard links to any file and creates a new complete backup
    of the file and starts again.

    On my main file machine I've set TM to use a second disk; it alternates between them, so this is some protection.

    That's a rather neat idea, I might get my backup system to do it.

    --
    Chris Green
    ·

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Scott Alfter@3:770/3 to tnp@invalid.invalid on Thu Jan 14 20:08:17 2021
    In article <rtphie$881$1@dont-email.me>,
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    what about /var that contains all the webs servers and Mysql databases
    by default? /opt as well has stuff in it. /boot has grub configs

    Unless you're doing the stop/snapshot/restart thing, you shouldn't back
    up the files in the /var/lib/mysql directory. It'd be better to dump
    them with mysqldump and back up those files, as mysqldump will only run
    once there are no transactions in flight (no need to stop/restart the
    server, either). Restoring from a copy of /var/lib/mysql can leave
    databases in an inconsistent state.
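
    For example (a sketch; --single-transaction gives a consistent snapshot
    for InnoDB tables - MyISAM tables would still need a lock):

      # consistent dump without stopping the server
      mysqldump --all-databases --single-transaction --quick | gzip > /backup/mysql-$(date +%F).sql.gz

      # restore
      gunzip < /backup/mysql-2021-01-14.sql.gz | mysql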

    _/_
    / v \ Scott Alfter (remove the obvious to send mail)
    (IIGS( https://alfter.us/ Top-posting!
    \_^_/ >What's the most annoying thing on Usenet?

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From The Natural Philosopher@3:770/3 to Scott Alfter on Fri Jan 15 08:34:06 2021
    On 14/01/2021 20:08, Scott Alfter wrote:
    Restoring from a copy of /var/lib/mysql can leave databases in an inconsistent state.
    With C-ISAM it is a useable state.


    --
    Of what good are dead warriors? … Warriors are those who desire battle
    more than peace. Those who seek battle despite peace. Those who thump
    their spears on the ground and talk of honor. Those who leap high the
    battle dance and dream of glory … The good of dead warriors, Mother, is
    that they are dead.
    Sheri S Tepper: The Awakeners.

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From TimS@3:770/3 to Chris Green on Fri Jan 15 11:25:21 2021
    On 14 Jan 2021 at 18:35:35 GMT, Chris Green <cl@isbd.net> wrote:

    TimS <timstreater@greenbee.net> wrote:

    I don't know whether Time Machine does this or not, or perhaps limits the
    number of hard links to any file and creates a new complete backup of the
    file and starts again.

    I'm told that, in fact, TM does have this issue.

    On my main file machine I've set TM to use a second disk; it alternates
    between them, so this is some protection.

    That's a rather neat idea, I might get my backup system to do it.

    I also discovered that TM can use any number of drives and uses them round-robin. If the next one is not mounted it just skips it. So I've added a third drive that was just kicking around. It's a USB SSD so easy to remove.

    --
    Tim

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Pancho@3:770/3 to The Natural Philosopher on Sat Jan 16 12:30:15 2021
    On 14/01/2021 11:06, The Natural Philosopher wrote:
    On 14/01/2021 08:56, Chris Green wrote:
    When you back up a 1TB drive do you actually copy the whole 1TB? It's
    a huge waste of time and space and you can't keep so many backups.
    Use some form of incremental backup and also back up *selectively*.

    depends on what you want. I rsync huge amounts of data. Disk space is
    cheap. Recovering from data loss is not, Working out what is important
    and what is not is even more expensive.


    One of the nice aspects of running apps/services in docker containers is
    that it encourages you to define data volumes and hence to immediately
    know what needs to be backed up.
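
    The usual trick for backing such a volume up is a throwaway container
    that tars it out to the host - 'mydata' is an invented volume name:

      docker run --rm -v mydata:/data -v "$PWD":/backup alpine \
          tar czf /backup/mydata-$(date +%F).tar.gz -C /data .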

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From paul lee@1:105/420 to Chris Green on Mon Jan 18 17:57:22 2021
    When you back up a 1TB drive do you actually copy the whole 1TB? It's
    a huge waste of time and space and you can't keep so many backups.
    Use some form of incremental backup and also back up *selectively*.

    No... I mean on my BBS box, I do back up all /files and... it's literally 500GB or so. But of course, for my Linux I'm just backing up /home and a few other spots where I hold my personal files. I also use a package that takes a 'snapshot' or basically a LISTING of every installed package on the system.

    Sometimes I'll forget what setup I have going, and I can go thru that listing and select what to reinstall very quickly.
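
    If such a listing came from dpkg, the restore side is just as quick -
    the file name here is invented:

      sudo dpkg --set-selections < package-selections.txt
      sudo apt-get dselect-upgrade    # installs everything marked in the list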

    But, still, I'm backing up enough that speeds matter.
    I mean... don't speeds kinda always matter, anyway?

    :P



    |07p|15AULIE|1142|07o
    |08.........

    --- Mystic BBS v1.12 A47 2021/01/16 (Raspberry Pi/32)
    * Origin: 2o fOr beeRS bbS>>20ForBeers.com:1337 (1:105/420)
  • From Martin Gregorie@3:770/3 to paul lee on Tue Jan 19 13:37:20 2021
    On Mon, 18 Jan 2021 17:57:22 +1300, paul lee wrote:

    No... I mean on my BBS box, I do backup all /files and... it's literally 500GB or so. But of course, for my Linux I'm just backing up /home and a
    few other spots where I hold my personal files. I also use a package
    that takes a 'snapshot' or basically a LISTING of every installed
    package on the system.

    What OS do you use on the BBS box?

    But, still, I'm backing up enough that speeds matter.
    I mean... don't speeds kinda always matter, anyway?

    Have you tried rsync and/or rsnapshot?

    I used to back up my house server using tar with the compress (gzip)
    option and a relatively small group of files/directories skipped such as
    /tmp - that took 3.5 hours a night, backing up to a USB hard drive. Now
    I'm using rsnapshot to keep 7 daily backups plus another 4 weeklies and
    the typical backup time has dropped to 8 minutes for the daily run and 9 minutes for the weekly one.
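
    The rsnapshot side of that is only a few config lines plus cron - the
    retention counts match the ones above, the paths are invented (note
    that rsnapshot.conf wants TABs between fields):

      # /etc/rsnapshot.conf
      snapshot_root   /mnt/backup/snapshots/
      retain  daily   7
      retain  weekly  4
      backup  /home/  localhost/
      backup  /etc/   localhost/

      # /etc/cron.d/rsnapshot
      30 2 * * *  root  /usr/bin/rsnapshot daily
      0  4 * * 1  root  /usr/bin/rsnapshot weekly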


    --
    Martin | martin at
    Gregorie | gregorie dot org

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From paul lee@1:105/420 to Martin Gregorie on Thu Jan 21 18:06:39 2021
    No... I mean on my BBS box, I do backup all /files and... it's literally 500GB or so. But of course, for my Linux I'm just backing up /home and a few other spots where I hold my personal files. I also use a package that takes a 'snapshot' or basically a LISTING of every installed package on the system.

    What OS do you use on the BBS box?

    Raspberry Pi OS

    But, still, I'm backing up enough that speeds matter.
    I mean... don't speeds kinda always matter, anyway?

    Have you tried rsync and/or rsnapshot?

    I use rsync and yea, it only takes minutes per night unless I've added tons of files to the bases...



    |07p|15AULIE|1142|07o
    |08.........

    --- Mystic BBS v1.12 A47 2021/01/16 (Raspberry Pi/32)
    * Origin: 2o fOr beeRS bbS>>20ForBeers.com:1337 (1:105/420)