Hi
I'm using all this free time to work on my slalom competition scoring system. It consists of a database server and a significant number of
client applications, running anywhere on a local network of RISC OS computers, mostly Raspberry Pis. The connections between them use TCP/IP.
I'm stress-testing the system at the moment by running the maximum number
of clients. In addition to exposing several latent bugs that only get hit very infrequently, it has shown up limits in a couple of pieces of
software that aren't mine. A couple of my bugs showed up after a 2-hour
run with the maximum number of clients on the network. The last 3 days
have been spent finding more quickly reproducible ways to trigger them, so that I could find out what was happening. (They all turned out to be
problems reading data around the wrap in a ring buffer.)
I use SocketWatch to monitor incoming sockets and return poll reason 13 (pollword non-zero) in order to get the data read quickly. SocketWatch uses
a bitmap to identify the sockets, and it's a 32-bit integer, meaning I can only monitor 32 sockets. This in turn limits the server to 28 clients, as
4 bits are already in use. (The hardware timing system uses two to
indicate that GPIO events have taken place, and there are a couple of others already tied up.)
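The arithmetic behind the 28-client limit can be sketched as a bitmask scan. This is a minimal illustration in Python of the 32-bit pollword idea, not SocketWatch's actual API; the particular reserved bit positions are assumptions:

```python
# Sketch: each set bit in a 32-bit pollword flags a socket with pending
# data. Which bits are reserved is an assumption for illustration.
RESERVED_BITS = 0b1111            # e.g. 2 GPIO-event bits plus 2 others
MAX_SOCKETS = 32                  # the pollword is a 32-bit integer

def ready_sockets(pollword):
    """Return the bit numbers set in the pollword, skipping reserved bits."""
    return [bit for bit in range(MAX_SOCKETS)
            if pollword & (1 << bit) and not (1 << bit) & RESERVED_BITS]

# With 4 bits already in use, only 28 remain for client connections.
assert MAX_SOCKETS - bin(RESERVED_BITS).count("1") == 28
```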
Each client has 3 sockets - one to broadcast a request for the server, one to listen to incoming data, and one to send data to the server. The server uses one broadcast, and one listen, plus one send socket per client.
Running all this on one machine, as I have been doing in testing, can tie
up 112 sockets. I also have Hermes and Messenger Pro running, plus
ShareFS, and those together use the first 10 sockets.
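One way the 112 figure works out is four sockets per client pair (three on the client plus the server's per-client send socket); a quick tally, with the server's own broadcast and listen sockets, Hermes, Messenger Pro and ShareFS on top of this:

```python
# Tally of sockets when everything runs on one machine, assuming the
# per-client breakdown described above.
clients = 28                      # limited by the 28 free pollword bits
client_side = 3                   # broadcast + listen + send on each client
server_send = 1                   # the server's send socket for that client
total = clients * (client_side + server_send)
print(total)                      # 112
```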
It turns out Hermes and NewsHound both create a socket when they fetch,
and delete it afterwards. This fails if socket number 95 is already in
use, so I assume they are written with data allocated for handling 96 sockets.
I think I can reduce my socket usage by creating the broadcast sockets
when needed and deleting them afterwards. That will need some testing of course. It's not just the initial connection that uses them - if a connection drops or times out, the system attempts to reconnect automatically.
--
Alan Adams, from Northamptonshire
alan@adamshome.org.uk
http://www.nckc.org.uk/
On 20/04/2020 19:25, Alan Adams wrote:
> Each client has 3 sockets - one to broadcast a request for the server,
> one to listen to incoming data, and one to send data to the server. The
> server uses one broadcast, and one listen, plus one send socket per
> client.
[Snip]
Why do you need separate send and receive sockets? A single TCP socket can do
both; you can even send out-of-band data to implement asynchronous notifications.
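The point that one stream socket is full-duplex can be seen with a socketpair (a local connected pair standing in for a TCP connection); both ends send and receive on the same descriptor. A generic sketch, not tied to the scoring system's protocol:

```python
import socket

# A connected stream socket carries data in both directions, so a client
# needs only one socket to both send to and hear from the server.
client_end, server_end = socket.socketpair()

client_end.sendall(b"score update")      # client -> server
print(server_end.recv(64))               # b'score update'

server_end.sendall(b"ack")               # server -> client, same socket
print(client_end.recv(64))               # b'ack'

client_end.close()
server_end.close()
```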
> I think I can reduce my socket usage by creating the broadcast sockets
> when needed and deleting them afterwards. That will need some testing of
> course. It's not just the initial connection that uses them - if a
> connection drops or times out, the system attempts to reconnect
> automatically.
The client's broadcast socket can be created only when needed. Once
you've sent a broadcast to identify the server, you should be able to reconnect to the same address after that.
If you really need more clients, consider switching to UDP rather than
TCP, as the server can receive data from any number of clients on a
single socket. You can chuck around far more data with UDP, but you have
to take care of the integrity yourself.
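The single-socket UDP server described above can be sketched like this: recvfrom() reports which client each datagram came from, but delivery and ordering are not guaranteed, so integrity (sequence numbers, acknowledgements) is your problem. The function name and payloads are illustrative:

```python
import socket

def serve_once(server_sock, handlers=2):
    """Receive datagrams from any number of clients on one socket and
    acknowledge each one back to whichever address it came from."""
    for _ in range(handlers):
        data, addr = server_sock.recvfrom(512)
        server_sock.sendto(b"ACK:" + data, addr)

# Usage sketch: bind one UDP socket; every client sends to the same port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
```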
---druck
In message <r7ld1j$fn3$1@dont-email.me>
druck <news@druck.org.uk> wrote:
> On 20/04/2020 19:25, Alan Adams wrote:
>> Each client has 3 sockets - one to broadcast a request for the server,
>> one to listen to incoming data, and one to send data to the server. The
>> server uses one broadcast, and one listen, plus one send socket per
>> client.
>> I think I can reduce my socket usage by creating the broadcast sockets
>> when needed and deleting them afterwards. That will need some testing of
>> course. It's not just the initial connection that uses them - if a
>> connection drops or times out, the system attempts to reconnect
>> automatically.
> The client's broadcast socket can be created only when needed. Once
> you've sent a broadcast to identify the server, you should be able to
> reconnect to the same address after that.
> ---druck