• Network speed problem

    From Geoffrey Baxendale@21:1/5 to All on Fri Jun 26 14:20:26 2020
    Hi,

    I recently upgraded my network switch from a 100Mb unit to a Gigabit
    device. This had an unfortunate effect in that transfer speeds from my
    NAS to the Titanium using Sunfish plummeted. (20secs or more for a 5MB
    file.) Eventually I got the latest versions of !Omni, NFS and LanMan
    working and things were better but still slower than with the 100Mb
    switch. I realise RISC OS is not good on network speed, but why should
    it be slower with Gigabit set?

    I have found a workaround by setting the Titanium interface to 100Mb
    using *configure ECPAdvertise 0 100 full. Speed is now back to normal
    (<2secs per 5MB). This is using LanMan. What is going on?

    TTFN
    --
    Geoff.
    Using Elesar Titanium.
    Oxymoron of the day: "Military intelligence"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Steve Fryatt@21:1/5 to Geoffrey Baxendale on Fri Jun 26 15:08:40 2020
    On 26 Jun, Geoffrey Baxendale wrote in message
    <6f8cad8658.thebears@thebears.onetel.com>:

    > I recently upgraded my network switch from a 100Mb unit to a Gigabit
    > device. This had an unfortunate effect in that transfer speeds from my
    > NAS to the Titanium using Sunfish plummeted. (20secs or more for a 5MB
    > file.) Eventually I got the latest versions of !Omni, NFS and LanMan
    > working and things were better but still slower than with the 100Mb
    > switch. I realise RISC OS is not good on network speed, but why should
    > it be slower with Gigabit set?
    >
    > I have found a workaround by setting the Titanium interface to 100Mb
    > using *configure ECPAdvertise 0 100 full. Speed is now back to normal
    > (<2secs per 5MB). This is using LanMan. What is going on?

    Possibly that the Titanium and the new switch are getting confused while negotiating the link speed. I had this issue on a Gigabit switch, and the solution (IIRC) was to

    *Configure ECPLink <n> Auto
    *Configure ECPAdvertise <n> 1000

    on the Titanium so that it only admitted to doing one speed. <n> is the interface number.

    Robert Sprowson was very helpful with diagnosing the problem, so if he's
    reading this and I've mis-remembered what I did, I'm sure he'll provide
    the correct advice.

    --
    Steve Fryatt - Leeds, England

    http://www.stevefryatt.org.uk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Geoffrey Baxendale@21:1/5 to Steve Fryatt on Fri Jun 26 16:10:10 2020
    In message <mpro.qcjdyb03h7bm807qt.news@stevefryatt.org.uk>
    Steve Fryatt <news@stevefryatt.org.uk> wrote:

    > On 26 Jun, Geoffrey Baxendale wrote in message
    > <6f8cad8658.thebears@thebears.onetel.com>:
    >
    > > I recently upgraded my network switch from a 100Mb unit to a Gigabit
    > > device. This had an unfortunate effect in that transfer speeds from
    > > my NAS to the Titanium using Sunfish plummeted. (20secs or more for a
    > > 5MB file.) Eventually I got the latest versions of !Omni, NFS and
    > > LanMan working and things were better but still slower than with the
    > > 100Mb switch. I realise RISC OS is not good on network speed, but why
    > > should it be slower with Gigabit set?
    > >
    > > I have found a workaround by setting the Titanium interface to 100Mb
    > > using *configure ECPAdvertise 0 100 full. Speed is now back to normal
    > > (<2secs per 5MB). This is using LanMan. What is going on?
    >
    > Possibly that the Titanium and the new switch are getting confused
    > while negotiating the link speed. I had this issue on a Gigabit switch,
    > and the solution (IIRC) was to
    >
    > *Configure ECPLink <n> Auto
    > *Configure ECPAdvertise <n> 1000
    >
    > on the Titanium so that it only admitted to doing one speed. <n> is the
    > interface number.
    >
    > Robert Sprowson was very helpful with diagnosing the problem, so if
    > he's reading this and I've mis-remembered what I did, I'm sure he'll
    > provide the correct advice.

    Hi Steve,

    Many thanks for your reply. Unfortunately that didn't work for me. 5MB
    file took 6 secs. Reverted to advertise 100 and it's back to under
    2secs.

    I looked at what speed the i/f was set to and it was 1000 with your
    settings as it was with full auto at the beginning of this saga. (With
    the old switch it was 100)

    With my settings it says 100.

    TTFN
    --
    Geoff.
    Using Elesar Titanium.
    Oxymoron of the day: "Political Science"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin@21:1/5 to Geoffrey Baxendale on Fri Jun 26 17:19:59 2020
    On 26 Jun in article <1298b78658.thebears@thebears.onetel.com>,
    Geoffrey Baxendale <thebears@onetel.com> wrote:
    > Many thanks for your reply. Unfortunately that didn't work for me.
    > 5MB file took 6 secs. Reverted to advertise 100 and it's back to
    > under 2secs.
    >
    > I looked at what speed the i/f was set to and it was 1000 with your
    > settings as it was with full auto at the beginning of this saga.
    > (With the old switch it was 100)

    Have you checked *both* links in the connection between Ti and NAS?

    Are there any LEDs (usually next to the Ethernet socket) which
    indicate what speed the connection is?

    My router has them, and my Netgear switch. The Titanium has some, but
    I have never found out what they mean!

    They usually identify whether the link is running at 100Mb/s or 1Gb/s,
    and flash when in use.

    If one link is at 100Mb/s and the other at 1Gb/s, then it does not
    sound good, and I am sure others can explain.

    --
    Martin Avison
    Note that unfortunately this email address will become invalid
    without notice if (when) any spam is received.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From druck@21:1/5 to Geoffrey Baxendale on Fri Jun 26 18:52:58 2020
    On 26/06/2020 14:20, Geoffrey Baxendale wrote:
    > Hi,
    >
    > I recently upgraded my network switch from a 100Mb unit to a Gigabit
    > device. This had an unfortunate effect in that transfer speeds from my
    > NAS to the Titanium using Sunfish plummeted. (20secs or more for a 5MB
    > file.) Eventually I got the latest versions of !Omni, NFS and LanMan
    > working and things were better but still slower than with the 100Mb
    > switch. I realise RISC OS is not good on network speed, but why should
    > it be slower with Gigabit set?
    >
    > I have found a workaround by setting the Titanium interface to 100Mb
    > using *configure ECPAdvertise 0 100 full. Speed is now back to normal
    > (<2secs per 5MB). This is using LanMan. What is going on?

    The RISC OS network stack is pretty poor, but it is capable of better
    than 100Mb/s performance on a gigabit connection - just. My ARMx6 Mini.m
    on a gigabit switch, with Ethernet configured to auto, can manage about
    14MB/s down and 18MB/s up. When set to 100 full or half duplex, download
    speed drops to about 7MB/s.

    My guess is that your cabling is sufficient for 100Mb but inadequate for
    gigabit. Try different leads on each segment of the network between the
    machines, and make sure they are marked Cat 5e for gigabit; plain Cat 5
    is only rated for 100Mb/s and will sometimes work and sometimes not.
    Check that there aren't any sharp bends in the cable, as this will also
    have more effect on gigabit networking.

    Also check your network card's information command, *E<something>Info,
    for any fields which indicate errors or collisions are occurring.
    *inetstat -s will give information from the higher levels of the stack;
    look out for error or retransmission values. Also try the equivalent
    command on the machine at the other end of the connection, as things may
    only show up in one direction.
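    If the other end is a typical Linux-based NAS, the rough equivalent of
    *inetstat -s is `netstat -s`. A small sketch of pulling the TCP
    retransmission counter out of that output for comparison with the RISC
    OS end; note the exact wording of the counter line varies between
    kernel/net-tools versions, so the pattern here is an assumption:

```python
# Hedged sketch: extract the TCP retransmitted-segment count from the text
# produced by `netstat -s` on a Linux box, so both ends of the link can be
# compared. The regex matches the common "N segments retransmi..." wording.
import re
import subprocess

def retransmitted_segments(netstat_output):
    """Return the retransmitted-segment count from `netstat -s` text, or None."""
    m = re.search(r"(\d+)\s+segments retransmi", netstat_output)
    return int(m.group(1)) if m else None

def sample_from_system():
    """Run `netstat -s` locally (Linux only) and parse it - illustration."""
    out = subprocess.run(["netstat", "-s"], capture_output=True, text=True).stdout
    return retransmitted_segments(out)
```

    A steadily climbing count on the NAS side while the Titanium shows
    clean stats would point at loss in the NAS-to-Titanium direction.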

    ---druck

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From druck@21:1/5 to druck on Sat Jun 27 18:59:32 2020
    On 26/06/2020 18:52, druck wrote:
    > Also check your network card's information command, *E<something>Info,
    > for any fields which indicate errors or collisions are occurring.
    > *inetstat -s will give information from the higher levels of the
    > stack; look out for error or retransmission values. Also try the
    > equivalent command on the machine at the other end of the connection,
    > as things may only show up in one direction.

    *ShowStat will also show you what speed and duplex settings are
    currently in use.

    ---druck

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Alan Adams@21:1/5 to druck on Sat Jun 27 20:09:13 2020
    In message <rd81e5$g6i$1@dont-email.me>
    druck <news@druck.org.uk> wrote:

    > On 26/06/2020 18:52, druck wrote:
    > > Also check your network card's information command,
    > > *E<something>Info, for any fields which indicate errors or
    > > collisions are occurring. *inetstat -s will give information from
    > > the higher levels of the stack; look out for error or retransmission
    > > values. Also try the equivalent command on the machine at the other
    > > end of the connection, as things may only show up in one direction.
    >
    > *ShowStat will also show you what speed and duplex settings are
    > currently in use.

    or what the local computer thinks they are. It's possible that the other
    end of the link (e.g. the switch) may disagree, which could well explain
    the problems. If the switch has a web page, it may be possible to find out
    from there what the switch thinks the link is running at.



    --
    Alan Adams, from Northamptonshire
    alan@adamshome.org.uk
    http://www.nckc.org.uk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From news@sprow.co.uk@21:1/5 to Steve Fryatt on Mon Jun 29 00:00:53 2020
    On Friday, June 26, 2020 at 3:15:02 PM UTC+1, Steve Fryatt wrote:
    > On 26 Jun, Geoffrey Baxendale wrote in message
    > <6f8cad8658.thebears@thebears.onetel.com>:
    >
    > > I recently upgraded my network switch from a 100Mb unit to a Gigabit
    > > device. This had an unfortunate effect in that transfer speeds from
    > > my NAS to the Titanium using Sunfish plummeted.
    >
    > Robert Sprowson was very helpful with diagnosing the problem, so if
    > he's reading this and I've mis-remembered what I did, I'm sure he'll
    > provide the correct advice.

    Steve and Druck both speak the truth - there are two aspects to consider:
    * Is the 1000baseT link stable?
    There's no designated master or slave in Ethernet, they use small 'blips' on
    the line in quiet periods to signal speed and direction (to correct for
    cable crossover too if you've wired 2 computers together but not used a
    crossover cable "Auto MDI-X"). This relies on good quality cables, which need
    all 8 conductors for 1000baseT, unlike 10/100 which only needs 4 of 8.
    Some brands of equipment get into an oscillatory pattern as each end decides
    to negotiate the link, and neither end agrees. My 3Com router at home does
    that which results in the link LED flashing continuously even though I'm
    not sending any data.
    * Is RISC OS able to handle the packets?
    The network stack is 23 years old, but while we're saving up for a new one
    https://www.riscosopen.org/bounty/polls/29
    there are some buffers which are sized for a much slower era. If those are
    overrun, the data will try its best to get through using TCP retries, but
    if you look on a packet tracer each of those retries has a backoff period
    which will hamper the throughput (compare a turnaround time of 2ms with
    0.5s). It is possible to increase the buffers, and *ShowStat as already
    mentioned elsewhere will show if there are dropped packets or mbuf exhaustion.
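    Sprow's 2ms-versus-0.5s point can be put into rough numbers with a toy
    model: assume every dropped packet stalls the stream for one backoff
    period. The loss rate, packet size and wire rate below are invented
    purely for illustration, not measured from any RISC OS machine:

```python
# Back-of-envelope model of the backoff argument: each dropped packet
# stalls the stream for one retransmission backoff, so even a small loss
# rate is crippling when the backoff is long. Figures are illustrative.

def effective_throughput(link_mbytes_per_s, packet_bytes, loss_rate, backoff_s):
    """Mean MB/s when a fraction `loss_rate` of packets each cost `backoff_s`."""
    packets_per_mb = (1024 * 1024) / packet_bytes
    # Time to move 1 MB: wire time plus the expected stalls from losses.
    wire_time = 1.0 / link_mbytes_per_s
    stall_time = packets_per_mb * loss_rate * backoff_s
    return 1.0 / (wire_time + stall_time)

# 1% loss, 1500-byte packets, ~100 MB/s wire rate:
fast = effective_throughput(100.0, 1500, 0.01, 0.002)  # 2 ms turnaround
slow = effective_throughput(100.0, 1500, 0.01, 0.5)    # 0.5 s backoff
```

    With a 0.5s backoff the same loss rate drags the link down to well
    under 1MB/s, which is broadly the shape of the slowdown reported here.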

    Or, in short: need more info to be sure,
    Sprow.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Geoffrey Baxendale@21:1/5 to news@sprow.co.uk on Mon Jun 29 11:59:24 2020
    Thanks to all who replied.

    In message <10e17c0a-866b-4522-b75b-f903e15de841o@googlegroups.com>
    news@sprow.co.uk wrote:

    > On Friday, June 26, 2020 at 3:15:02 PM UTC+1, Steve Fryatt wrote:
    > > On 26 Jun, Geoffrey Baxendale wrote in message
    > > <6f8cad8658.thebears@thebears.onetel.com>:
    > >
    > > > I recently upgraded my network switch from a 100Mb unit to a
    > > > Gigabit device. This had an unfortunate effect in that transfer
    > > > speeds from my NAS to the Titanium using Sunfish plummeted.
    > >
    > > Robert Sprowson was very helpful with diagnosing the problem, so if
    > > he's reading this and I've mis-remembered what I did, I'm sure he'll
    > > provide the correct advice.
    >
    > Steve and Druck both speak the truth - there are two aspects to
    > consider:
    > * Is the 1000baseT link stable?
    > There's no designated master or slave in Ethernet, they use small
    > 'blips' on the line in quiet periods to signal speed and direction (to
    > correct for cable crossover too if you've wired 2 computers together
    > but not used a crossover cable "Auto MDI-X"). This relies on good
    > quality cables, which need all 8 conductors for 1000baseT, unlike
    > 10/100 which only needs 4 of 8.

    All cables are CAT5e.

    > Some brands of equipment get into an oscillatory pattern as each end
    > decides to negotiate the link, and neither end agrees. My 3Com router
    > at home does that which results in the link LED flashing continuously
    > even though I'm not sending any data.

    I get this all the time with the SKY router, wondered what was
    happening!

    > * Is RISC OS able to handle the packets?
    > The network stack is 23 years old, but while we're saving up for a new
    > one
    > https://www.riscosopen.org/bounty/polls/29
    > there are some buffers which are sized for a much slower era. If those
    > are overrun, the data will try its best to get through using TCP
    > retries, but if you look on a packet tracer each of those retries has
    > a backoff period which will hamper the throughput (compare a
    > turnaround time of 2ms with 0.5s). It is possible to increase the
    > buffers, and *ShowStat as already mentioned elsewhere will show if
    > there are dropped packets or mbuf exhaustion.

    Set to 1000Mb/s, the copy window shows the download stopping and
    starting; sounds to me as if it is a buffer problem.

    > Or, in short: need more info to be sure,
    > Sprow.
    Here are the stats:

    *showstat
    DCI4 Statistics Display 0.02 (17-Jan-03)
    Copyright (C) Element 14 Ltd. 1999. All rights reserved.

    Interface name : ecp
    Unit number : 1
    Hardware address : 70:b3:d5:03:f0:8b
    Location : Motherboard
    Driver module : EtherCP
    Supported features : Multicast reception is supported
    : Promiscuous reception is supported
    : Interface can receive erroneous packets
    : Interface has a hardware address
    : Driver can alter interface's hardware address
    : Driver supplies standard statistics
    MTU : 1500
    Interface type : 10baseT
    Link status : Interface faulty
    Active status : Interface is active
    Receive mode : Direct
    Interface mode : Half-duplex
    Polarity : Correct
    TX frames : 0
    RX frames : 0

    Interface name : ecp
    Unit number : 0
    Hardware address : 70:b3:d5:03:f0:8a
    Location : Motherboard
    Driver module : EtherCP
    Supported features : Multicast reception is supported
    : Promiscuous reception is supported
    : Interface can receive erroneous packets
    : Interface has a hardware address
    : Driver can alter interface's hardware address
    : Driver supplies standard statistics
    MTU : 1500
    Interface type : 1000baseT
    Link status : Interface OK
    Active status : Interface is active
    Receive mode : Direct, broadcast and multicast
    Interface mode : Full duplex
    Polarity : Correct
    TX frames : 2032
    TX bytes : 130431
    RX unwanted frames : 355
    RX frames : 4056
    RX bytes : 5363044

    Module MbufManager is an mbuf manager

    Mbuf Manager : System wide memory buffer (mbuf) memory management
    Active sessions : 3
    Sessions opened : 6
    Sessions closed : 3
    Memory pool size : 360448
    Small block size : 128
    Large block size : 1536
    Mbuf exhaustions : 0
    Small mbufs in use : 2
    Small mbufs free : 510
    Large mbufs in use : 0
    Large mbufs free : 192
    *
    Thanks for the reply Sprow.

    TTFN
    --
    Geoff.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Geoffrey Baxendale@21:1/5 to druck on Mon Jun 29 11:50:01 2020
    In message <rd5clr$jb2$1@dont-email.me>
    druck <news@druck.org.uk> wrote:

    > On 26/06/2020 14:20, Geoffrey Baxendale wrote:
    > > Hi,
    > >
    > > I recently upgraded my network switch from a 100Mb unit to a Gigabit
    > > device. This had an unfortunate effect in that transfer speeds from
    > > my NAS to the Titanium using Sunfish plummeted. (20secs or more for a
    > > 5MB file.) Eventually I got the latest versions of !Omni, NFS and
    > > LanMan working and things were better but still slower than with the
    > > 100Mb switch. I realise RISC OS is not good on network speed, but why
    > > should it be slower with Gigabit set?
    > >
    > > I have found a workaround by setting the Titanium interface to 100Mb
    > > using *configure ECPAdvertise 0 100 full. Speed is now back to normal
    > > (<2secs per 5MB). This is using LanMan. What is going on?

    > The RISC OS network stack is pretty poor, but it is capable of better
    > than 100Mb/s performance on a gigabit connection - just. My ARMx6
    > Mini.m on a gigabit switch, with Ethernet configured to auto, can
    > manage about 14MB/s down and 18MB/s up. When set to 100 full or half
    > duplex, download speed drops to about 7MB/s.
    >
    > My guess is that your cabling is sufficient for 100Mb but inadequate
    > for gigabit. Try different leads on each segment of the network between
    > the machines, and make sure they are marked Cat 5e for gigabit; plain
    > Cat 5 is only rated for 100Mb/s and will sometimes work and sometimes
    > not. Check that there aren't any sharp bends in the cable, as this will
    > also have more effect on gigabit networking.

    All cables are CAT5e and quite short.

    > Also check your network card's information command, *E<something>Info,
    > for any fields which indicate errors or collisions are occurring.
    > *inetstat -s will give information from the higher levels of the
    > stack; look out for error or retransmission values. Also try the
    > equivalent command on the machine at the other end of the connection,
    > as things may only show up in one direction.

    Can't see any errors on *ecpinfo.
    *inetstat looks OK I think.
    *inetstat -s
    ip:
    3793 total packets received
    0 bad header checksums
    0 with size smaller than minimum
    0 with data size < data length
    0 with header length < data size
    0 with data length < header length
    0 with bad options
    0 with incorrect version number
    0 fragments received
    0 fragments dropped (dup or out of space)
    0 fragments dropped after timeout
    0 packets reassembled ok
    3735 packets for this host
    3 packets for unknown/unsupported protocol
    0 packets forwarded
    0 packets not forwardable
    55 packets received for unknown multicast group
    0 redirects sent
    2056 packets sent from this host
    0 packets sent with fabricated ip header
    0 output packets dropped due to no bufs, etc.
    1 output packet discarded due to no route
    0 output datagrams fragmented
    0 fragments created
    0 datagrams that can't be fragmented
    icmp:
    0 calls to icmp_error
    0 errors not generated 'cuz old message was icmp
    0 messages with bad code fields
    0 messages < minimum length
    0 bad checksums
    0 messages with bad length
    0 message responses generated
    ICMP address mask responses are disabled
    igmp:
    3 messages received
    0 messages received with too few bytes
    0 messages received with bad checksum
    3 membership queries received
    0 membership queries received with invalid field(s)
    0 membership reports received
    0 membership reports received with invalid field(s)
    0 membership reports received for groups to which we belong
    0 membership reports sent
    tcp:
    2016 packets sent
    361 data packets (23099 bytes)
    0 data packets (0 bytes) retransmitted
    0 resends initiated by MTU discovery
    623 ack-only packets (17 delayed)
    0 URG only packets
    0 window probe packets
    1021 window update packets
    11 control packets
    3662 packets received
    370 acks (for 23097 bytes)
    6 duplicate acks
    0 acks for unsent data
    3026 packets (4114973 bytes) received in-sequence
    2 completely duplicate packets (2896 bytes)
    0 old duplicate packets
    0 packets with some dup. data (0 bytes duped)
    596 out-of-order packets (847052 bytes)
    0 packets (0 bytes) of data after window
    0 window probes
    0 window update packets
    1 packet received after close
    0 discarded for bad checksums
    0 discarded for bad header offset fields
    0 discarded because packet too short
    6 connection requests
    0 connection accepts
    0 bad connection attempts
    0 listen queue overflows
    6 connections established (including accepts)
    5 connections closed (including 0 drops)
    0 connections updated cached RTT on close
    0 connections updated cached RTT variance on close
    0 connections updated cached ssthresh on close
    0 embryonic connections dropped
    370 segments updated rtt (of 371 attempts)
    0 retransmit timeouts
    0 connections dropped by rexmit timeout
    0 persist timeouts
    0 connections dropped by persist timeout
    0 keepalive timeouts
    0 keepalive probes sent
    0 connections dropped by keepalive
    7 correct ACK header predictions
    2687 correct data packet header predictions
    udp:
    73 datagrams received
    0 with incomplete header
    0 with bad data length field
    0 with bad checksum
    0 dropped due to no socket
    18 broadcast/multicast datagrams dropped due to no socket
    0 dropped due to full socket buffers
    55 delivered
    38 datagrams output
    *
    TTFN
    --
    Geoff.
    Using Elesar Titanium.
    Oxymoron of the day: "Genuine Imitation"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)