Michael J. Ryan wrote to GitLab issue in main/sbbs <=-
open https://gitlab.synchro.net/main/sbbs/-/issues/199
I know there was a discussion on this previously in dovenet, and
am not sure what came of it.
Would be nice if the node directories were easier to move to a
shared common subdirectory for easier volume mount/usage with
Docker specifically. Instead of `../Node1` it would be
`../Node/1` though it would make a nicer default imo, would be a
hiccup for migration of existing servers, and the relative path
being an extra layer deep, but it might be worth exploring.
Re: NodeX to Node/X directories
By: Michael J. Ryan to GitLab issue in main/sbbs on Sun Jan 03 2021 10:31 am
Howdy,
Would be nice if the node directories were easier to move to a shared common subdirectory for easier volume mount/usage with
Docker specifically. Instead of `../Node1` it would be `../Node/1` though it would make a nicer default imo, would be a
hiccup for migration of existing servers, and the relative path being an extra layer deep, but it might be worth exploring.
Yeah, I brought this up previously. Unfortunately it might be somewhat breaking for existing users who upgrade.
There are 2 workarounds:
* Rob made it easier to change a line in the source code before you compile, that will have the default as you want.
* Create your own default "main.cnf" with the right paths and include that in your image (that you copy to the ctrl dir on first startup). (You still need this, even if you change the source code above.)
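Roughly, the first-start logic I mean looks like this (the /sbbs/defaults and /sbbs/exec/sbbs paths here are just illustrative, not anything Synchronet mandates):

#!/bin/sh
# Hypothetical container entrypoint: seed the mounted ctrl dir with the
# image's pre-edited *.cnf defaults (node paths already set to ../nodes/nodeX)
# only when no main.cnf exists yet, then start Synchronet.
if [ ! -f /sbbs/ctrl/main.cnf ]; then
    cp /sbbs/defaults/*.cnf /sbbs/ctrl/
fi
exec /sbbs/exec/sbbs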
There are 2 workarounds:
* Rob made it easier to change a line in the source code before you compile, that will have the default as you want.
What source code change are you referring to?
Re: NodeX to Node/X directories
By: Digital Man to deon on Sun Jan 03 2021 05:46 pm
Howdy,
There are 2 workarounds:
* Rob made it easier to change a line in the source code before you compile, that will have the default as you want.
What source code change are you referring to?
This one:
diff --git a/src/sbbs3/scfg/scfgnode.c b/src/sbbs3/scfg/scfgnode.c
index 54c80f74e..d6db3beeb 100644
--- a/src/sbbs3/scfg/scfgnode.c
+++ b/src/sbbs3/scfg/scfgnode.c
@@ -118,7 +118,7 @@ void node_menu()
     SAFECOPY(cfg.node_dir,cfg.node_path[cfg.sys_nodes-1]);
     i=cfg.sys_nodes+1;
     load_node_cfg(&cfg,error);
-    sprintf(str,"../node%d/",i);
+    sprintf(str,"../nodes/node%d/",i);
     sprintf(tmp,"Node %d Directory",i);
     uifc.helpbuf=node_path_help;
     j=uifc.input(WIN_MID,0,0,tmp,str,50,K_EDIT);
diff --git a/src/sbbs3/scfgsave.c b/src/sbbs3/scfgsave.c
index 9f40f4a4c..a36ad0ce4 100644
--- a/src/sbbs3/scfgsave.c
+++ b/src/sbbs3/scfgsave.c
@@ -173,7 +173,7 @@ BOOL DLLCALL write_main_cfg(scfg_t* cfg, int backup_level)
     put_int(cfg->sys_nodes,stream);
     for(i=0;i<cfg->sys_nodes;i++) {
         if(cfg->node_path[i][0] == 0)
-            SAFEPRINTF(cfg->node_path[i], "../node%u", i + 1);
+            SAFEPRINTF(cfg->node_path[i], "../nodes/node%u", i + 1);
That just changes the default string in the edit box. Simply typing "../nodes/nodeX" would accomplish the same thing with no code change.
Neither of those code changes appear necessary.
Re: NodeX to Node/X directories
By: Digital Man to deon on Mon Jan 04 2021 11:09 am
That just changes the default string in the edit box. Simply typing "../nodes/nodeX" would accomplish the same thing with no
code change.
Neither of those code changes appear necessary.
Sure, but when you are running in a container, you want the "defaults" to be as correct as possible and have less chance of making mistakes - and then figuring out why things are not working as intended. This achieves it.
It would be nice if this hack could be achieved programmatically, during build time, or set in some ini so that the patch is not required. That includes making the "initial" main.cnf with the right paths, without having to edit them on first run for a new instance.
There are hundreds, possibly thousands of default settings in SBBS. You want every one of those settings to be similar to your desired BBS configuration? That doesn't compute.
What "initial" main.cnf - you mean the one from Git?setting up a new BBS every time you open your BBS
I still understand why the would would be "required" by anyone. Are you
container?
What "initial" main.cnf - you mean the one from Git?
Re: NodeX to Node/X directories
By: Digital Man to deon on Mon Jan 04 2021 07:32 pm
What "initial" main.cnf - you mean the one from Git?
Oh, I didn't answer this.
The main.cnf you have in git references node paths as ../node1, ../node2, which are thus in the parent of the "ctrl" directory.
From what I can gather (I did ask previously), this main.cnf is not "created" at compile time, but rather is a static file with "defaults" for a new install. So folks who want their "initial" nodes in a "nodes" subdirectory would need to change this on first startup.
And back to the docker discussion, my workaround is to provide my own "main.cnf" in the image so that these values don't need to be changed on a first time install.
And back to the docker discussion, my workaround is to provide my own "main.cnf" in the image so that these values don't need to be changed on a first time install.
Okay, so if your own main.cnf includes node paths of "../node/nodeX", why do you then need the code change? Is it just in case the sysop wants more than 4 nodes (or however many you have configured), they're not confused and accept the "../nodeX" default?
It's a pretty trivial change to have additional node default paths be derived from whatever was configured for the previous node (or node 1). I'll make that change and let me know how that works for you. I don't see any reason for the suggested change to scfgsave.c however.
deon wrote to Digital Man <=-
A docker image takes away *all* that complexity, is easier to
support, standardises installation but still lets end users
configure it how they want. Folks wouldnt even see or use git at
all. And since you now use gitlab, images can be built
automatically from a commit. That's why I use docker images.
It's a pretty trivial change to have additional node default paths be derived from whatever was configured for the previous node (or node 1). I'll make that change and let me know how that works for you. I don't see any reason for the suggested change to scfgsave.c however.
I haven't tried your change, but I looked at the commit, and it looks like it will help immensely - thank you. I agree: with it, my patch is no longer required.
Another thought on this topic, if I may.
Could the node directories be considered ephemeral (with the exception of node.cnf)? IE: When sbbs is stopped, if those node dirs were deleted, there is nothing lost? The contents of a node dir is only useful while somebody is actually on the node?
If they are ephemeral, then the only problem I have with a tmpfs strategy is the creation of node.cnf in the nodeX directory. I've noticed that scfg creates them, but sbbs will abort if it doesn't exist. (So you need to run scfg before you start sbbs - or I need to copy a "default" node.cnf in place before starting sbbs.)
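Roughly, the tmpfs idea plus the node.cnf seeding would look something like this (paths, node count and the /sbbs/defaults location are illustrative only):

# Hypothetical: keep the shared node parent on tmpfs, so it is recreated
# empty on every container start.
docker run --name sbbs \
  --tmpfs /sbbs/nodes \
  -v /host-path/to/ctrl:/sbbs/ctrl \
  ... \
  synchronet/sbbs:3.18b

# Then, in the container's startup script, copy a default node.cnf into each
# nodeX dir before launching sbbs, since sbbs aborts when node.cnf is missing.
for n in 1 2 3 4; do
  mkdir -p "/sbbs/nodes/node$n"
  cp /sbbs/defaults/node.cnf "/sbbs/nodes/node$n/"
done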
deon wrote to Digital Man <=-
A docker image takes away *all* that complexity, is easier to
support, standardises installation but still lets end users
configure it how they want. Folks wouldnt even see or use git at
all. And since you now use gitlab, images can be built
automatically from a commit. That's why I use docker images.
An "already configured" image like that isn't of any use to people
that don't use Docker.
Digital Man wrote to Gamgee <=-
deon wrote to Digital Man <=-
A docker image takes away *all* that complexity, is easier to
support, standardises installation but still lets end users
configure it how they want. Folks wouldnt even see or use git at
all. And since you now use gitlab, images can be built
automatically from a commit. That's why I use docker images.
An "already configured" image like that isn't of any use to people
that don't use Docker.
And Docker containers run in VMs which come with their own
performance penalties and issues (e.g. DOSemu compatibility). The
idea is not appealing to me.
I came upon Linux "snaps" a couple of years ago and created a
SyncTERM snap (or tried to) and gave up in frustration after a
few days of work into it. I think you're just trading one set of
issues for another with these container systems.
An "already configured" image like that isn't of any use to people
that don't use Docker.
How many people use Docker with SBBS?
I'd guess that 99% of SBBS users don't use Docker.
And Docker containers run in VMs which come with their own performance penalties and issues (e.g. DOSemu compatibility). The idea is not appealing to me.
Nor to me. Don't even see the point of it, other than over-worrying about "security".
deon wrote to Gamgee <=-
An "already configured" image like that isn't of any use to people
that don't use Docker.
By definition, docker images are not "configured".
How many people use Docker with SBBS?
I'd guess that 99% of SBBS users don't use Docker.
Sure, maybe.
deon wrote to Gamgee <=-
Nor to me. Don't even see the point of it, other than over-worrying
about "security".
What security do you worry about?
Re: Re: NodeX to Node/X directories
By: Digital Man to Gamgee on Tue Jan 05 2021 11:17 am
And Docker containers run in VMs which come with their own performance penalties and issues (e.g. DOSemu compatibility). The
idea is not appealing to me.
Sorry, not true (the "containers run in VMs" part).
Performance penalties are subjective, but probably true when using docker NAT. Not really an issue for a BBS environment. And "issues" is subjective, because things need to be done differently. IMHO, it's a good different - it forces system admins to do things better, with an impact if you don't.
Not sure what the dosemu compatibility is a reference to - but I run my 4 node 1995 Ezycom in a docker container with dosemu.
This might be helpful: https://containerjournal.com/features/docker-not-faster-vms-just-efficient/
-cost-of-a-docker-container
By definition, docker images are not "configured".
Well, I'm confused then, because I thought you were talking about getting an SBBS configured just the way you want it, running in a Docker container, and then "saving" that image so it can be easily re-deployed. I'd call that "configuring" an image.
Right. So why alter SBBS code to better suit Docker use, when almost nobody uses that?
Nor to me. Don't even see the point of it, other than over-worrying about "security".
What security do you worry about?
Well, the same as most folks, I guess. Root exploits, basic firewall security, and so forth. But I don't stress over it too much.
My counter-question to you is: Why would I want/need to run SBBS in a Docker container? What benefit does that offer?
Do you run it that way to improve security? (I might be way off, but I thought I remember that as one of the reasons for using containers.)
Sorry, not true (the "containers run in VMs" part).
Sorry, "OS-level virtualization". :-)
"Virtualization" of any kind to me just implies more layers (be it software or hardware) and those layers incur some amount of performance penalty. Maybe it's immeasurable with Docker, I don't know.
Those articles confirm there is a performance penalty. It does appear negligible, but still measurable.
Re: Re: NodeX to Node/X directories
By: Digital Man to deon on Tue Jan 05 2021 08:42 pm
Sorry not true (the "contains run in VMs part").Sorry, "OS-level virtualization". :-)
"virtualization" of any kind to me just implies more layers (be it software or hardware) and those layer incur some amount
of performance penalty. Maybe it's immeasurable with Docker, I don't know.
But there is no virtualisation with docker.
Sadly if you want to run Docker on Windows, then yes, you need to run Hyper-V and that is virtualisation, but on Linux,
processes in a container are just another process running from the host.
Those articles confirm there is a performance penalty. It does appear negligible, but still measurable.
Imagine me saying "tomato" with an Australian accent, and "tomato" with a US one...
Sorry, "OS-level virtualization". :-)
"Virtualization" of any kind to me just implies more layers (be it software or hardware) and those layers incur some amount of performance penalty. Maybe it's immeasurable with Docker, I don't know.
Wikipedia thinks there is:
Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.
Yup, but they're each virtualizing an OS (in software). It's still virtualization.
deon wrote to Gamgee <=-
By definition, docker images are not "configured".
Well, I'm confused then, because I thought you were talking about getting an SBBS configured just the way you want it, running in a Docker container, and then "saving" that image so it can be easily re-deployed. I'd call that "configuring" an image.
I wondered if that would be confusing.
There are 2 concepts of "configuration" in play here.
1) Altering configuration items so that the application works.
2) Personalising the application with "my" data.
An example of 1) is changing the nodes so that they appear in a
nodes/ sub directory, so that the nodes/ sub directory can be a
mount from the host (or as per my later discussion, tmpfs). This
makes it work without any workarounds.
An example of 2) is calling my instance of Synchronet "Alterant
BBS", so that when it is stopped and started it is still called "Alterant", not ("My BBS" or whatever SBBS's default name is).
Right. So why alter SBBS code to better suit Docker use, when almost nobody uses that?
Or rather, why not make it configurable, so that for the default
99% use cases, nobody needs to change that configuration item,
and for the 1% that do, they can change that config item, without
changing the source code - as another valid use case?
deon wrote to Gamgee <=-
My counter-question to you is: Why would I want/need to run SBBS in a Docker container? What benefit does that offer?
I can't say why you would want to or not. I do it because of a
multitude of reasons, and the net of them is easier backup,
improved portability, built-in HA, more efficient use of
resources and fewer OSes to manage.
Would be nice if the node directories were easier to move to a
shared common subdirectory for easier volume mount/usage with
Docker specifically. Instead of `../Node1` it would be
`../Node/1` though it would make a nicer default imo, would be a
hiccup for migration of existing servers, and the relative path
being an extra layer deep, but it might be worth exploring.
How many people run SBBS in Docker? My guess is < 10.
It would be nice that this hack could be achieved programatically,
during build time, or set in some ini so that the patch is not
required. That includes making the "initial" main.cnf with the
right paths, without having to edit them on first run for a new
instance.
What "initial" main.cnf - you mean the one from Git?
I still don't understand why that would be "required" by anyone.
Are you setting up a new BBS every time you open your BBS container?
How many people run a BBS? My guess is less than 10000, how many people
use Docker, my guess is greater than 10000. ;-)
Also, more might do it if it were easier to do.
(And sure, the above statement references Linux installs -
it probably doesn't help Windows users, but I'm sure there are
tools on Windows that would make deployment easier.)
And Docker containers run in VMs which come with their
own performance penalties and issues (e.g. DOSemu
compatibility). The idea is not appealing to me.
What security do you worry about?
Well, the same as most folks, I guess. Root exploits, basic firewall security, and so forth. But I don't stress over it too much.
My counter-question to you is: Why would I want/need to run SBBS in a
Docker container? What benefit does that offer? Do you run it that
way to improve security? (I might be way off, but I thought I remember
that as one of the reasons for using containers). If I am mistaken
about that, please enlighten me as to how a container is beneficial.
Sorry not true (the "contains run in VMs part").
Sorry, "OS-level virtualization". :-)
"virtualization" of any kind to me just implies more
layers (be it software or hardware) and those layer
incur some amount of performance penalty. Maybe it's
immeasurable with Docker, I don't know.
But there is no virtualisation with docker.
Sadly if you want to run Docker on Windows, then yes, you
need to run Hyper-V and that is virtualisation, but on Linux,
processes in a container are just another process running
from the host.
Yup, but they're each virtualizing an OS (in software). It's still virtualization.
Running a BBS depends on the SysOp and their experience. Running one in Windows is a hell of a lot easier than running one
in *nix if you're a newbie. I've actually contemplated seeing if I can get Synchronet to compile on AIX (doubt it), Solaris
x86 (maybe), or on Red Hat on IBM POWER9 (potentially). I think I know what I'm going to try this weekend... :)
How many people run a BBS? My guess is less than 10000, how many people use Docker, my guess is greater than 10000. ;-)
Also, more might do it if it were easier to do.
I have an S/390 environment here that I was going to try and compile SBBS (just because). Might be fun getting Synchronet running under a z/VM environment, where the network is made up of CTC devices :)
On 01-21-21 21:25, Tracker1 wrote to Gamgee <=-
First run...
docker run --name sbbs \
-d --restart unless-stopped \
-v /host-path/to/data:/sbbs/data \
... \
synchronet/sbbs:3.17b
Oh, time to upgrade...
docker stop sbbs
docker rm sbbs
docker run --name sbbs ... synchronet/sbbs:3.18b
On 01-22-21 08:09, Nightfox wrote to Tracker1 <=-
Docker is something I just became aware of within the past couple
years. I don't know any stats on usage of Docker offhand, but
personally I haven't used Docker in my own personal projects.
On 1/5/2021 11:14 PM, Digital Man wrote:
Yup, but they're each virtualizing an OS (in software). It's still virtualization.
No, they don't virtualize an OS...
Docker containers are *NOT* VMs... it's more like a BSD Jail or a
Solaris Container, the overhead is *MUCH* lighter, commands are not translated through a VM. Nothing like DOSemu overhead.
My point was: there are performance penalties when using Docker.
Wikipedia calls Docker a product that uses "OS-level virtualization". I suppose that is semantically different than "virtualizing an OS", but not really relevant to my point.
My point was: there are performance penalties when using Docker.
That was my understanding as well. My understanding is that Docker is a way to package a program along with any prerequisites it has. That way, you don't necessarily have to install the prerequisites, you can just start up the program in the Docker image.
Running a BBS depends on the SysOp and their experience.
Running one in Windows is a hell of a lot easier than
running one in *nix if you're a newbie. I've actually
contemplated seeing if I can get Synchronet to compile
on AIX (doubt it), Solaris x86 (maybe), or on Red Hat on
IBM POWER9 (potentially). I think I know what I'm going
to try this weekend... :)
How many people run a BBS? My guess is less than 10000, how
many people use Docker, my guess is greater than 10000. ;-)
Also, more might do it if it were easier to do.
Docker is something I just became aware of within the past
couple years. I don't know any stats on usage of Docker
offhand, but personally I haven't used Docker in my own
personal projects.
My point was: there are performance penalties when using Docker.
Took some inspirations from I think it was your repo, as well as my
prior version. I just updated my container scripts for sbbs...
Took some inspirations from I think it was your repo, as well as my
prior version. I just updated my container scripts for sbbs...
Good stuff ;)
You might like to change your build deps install, build, and build deps removal into one long chained command.
Your dep removal "# Cleanup libraries", while results in those devs
not appearing in the later layers of the filesystem, they are still
present in earlier layers - so your image size doesnt reduce. (Will
impact "docker pull" bandwidth and data transferred.)
(Not sure if docker hub is doing some flattening, or if you are)
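Something along these lines (Alpine-style package names and the make invocation are only illustrative, not your actual build recipe):

# Hypothetical Dockerfile fragment: install build deps, build, and remove the
# deps all in ONE RUN layer, so the dev packages never persist in any image layer.
RUN apk add --no-cache --virtual .build-deps build-base linux-headers \
 && make -C /sbbs/src/sbbs3 RELEASE=1 \
 && apk del .build-deps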
For your upgrade, I wouldn't clobber xtrn - there would be game
config (or editors?) that folks would use, which is not dependent
on Synchronet's version.
In fact, I would not call the clobbering of everything else (esp
ctrl) an "upgrade", but rather a "reset" - and for that matter,
a reset could be the user removing the ctrl directory so that it
gets re-populated when the container starts (which is my approach).
https://github.com/bbs-io/synchronet-docker
For your upgrade, I wouldnt clobber xtrn - there would be
game config (or editor?) that folks would use, which is no
dependant on Synchronet's versions.
In fact, I would not call the clobbering of everything else
(esp ctrl) as an "upgrade", but rather a "reset" - and for
that matter, a reset could be the user removing the ctrl
directory so that it gets re-populated when the container
starts (which is my approach).
Re: Re: NodeX to Node/X directories
By: Tracker1 to deon on Mon Jan 25 2021 10:37 am
Had another quick look and noticed you weren't doing any special mounts with the node dirs?
I run one of my SBBS servers in a docker swarm, with "replicas: 2". One instance is responsible for nodes 1-5, and the other nodes 6-10. (It's my game server.) In my config, I'm sharing the nodes/ dir across the instances.
I'm not sure what is needed for one node to see another node's files. Chat maybe?
Rob, in case you are reading this, it would be great for things like spy, which rely on sockets, if either:
* Spying on another node (that is on another host) has Spy say "no", because it cannot spy using the socket. In my case, nodes 6-10 know nodes 1-5 are another SBBS instance.
or
* Spying on another node (that is on another host) is done over IP?
Had another quick look and noticed you werent doing any
special mounts with the node dirs?
I run one of my SBBS servers in a docker swarm, with
"replicas: 2". One instance is responsible for nodes 1-5,
and the other nodes 6-10. (Its my game server). In my
config, I'm sharing the nodes/ dir across the instances.
I'm not sure what is needed for one node to see another
nodes files. Chat maybe?
Yeah, I didn't bother with multiple instances as it just
seems to complicate some things.. I am running the ircd and
the two different web UIs on separate entries though as
Yeah, I didn't bother with multiple instances as it just
seems to complicate some things.. I am running the ircd and
the two different web UIs on separate entries though as
Ahh, I actually found it quite easy - leveraging the "hostname"
in the ini files.
So my game servers' "hostnames" are bbs_game-1, bbs_game-2 -
which are generated by the swarm. If I changed replicas to 3,
the 3rd one would be called bbs_game-3, etc.
With haproxy in front (and connected to the swarm network), I
can either redirect traffic directly to a specific instance
- eg "bbs_game-1", or to any instance using "bbs_game".
So, sbbs.ini is renamed to sbbs.bbs_game-1.ini, etc and only
bbs_game-1 starts the services.
I've not run into any issues so far.
(For this to work across hosts, you need a shared filesystem,
which is probably a little more complex for some, but NFS should
work. I'm not a fan of NFS, so I use a proprietary cross host
filesystem.)
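Roughly, the shape of it as a swarm service (the bbs_game name and /scale/docker/sbbs path are mine from above; the rest is illustrative):

# Hypothetical swarm service: two replicas, each with a predictable hostname
# (bbs_game-1, bbs_game-2) via the task-slot template, sharing one nodes/
# directory that lives on the shared filesystem visible to every host.
docker service create --name bbs_game \
  --replicas 2 \
  --hostname 'bbs_game-{{.Task.Slot}}' \
  --mount type=bind,src=/scale/docker/sbbs/nodes,dst=/sbbs/nodes \
  synchronet/sbbs:3.18b

Each replica then picks up its own sbbs.bbs_game-N.ini, per the renaming trick above.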
Just curious, what FS sharing are you using? And have you had
issues with multi-node DOSemu doors... I haven't even started
setting up any doors yet.
Are you using a base volume container for the network mount(s)
on the docker host systems? Or mounting on the host?
Just curious, what FS sharing are you using? And have you had
issues with multi-node DOSemu doors... I haven't even started
setting up any doors yet.
Are you using a base volume container for the network mount(s)
on the docker host systems? Or mounting on the host?
I'm using scale, where it's a visible filesystem on the host (all
hosts), with each running container effectively a directory,
eg: /scale/docker/sbbs.
I've also used portworx (a while ago), where each container gets
a volume as it starts - but I stopped using it because it was
kinda buggy (and I kept having to fsck the volumes - which was
a challenge). I also lost some data - it didn't handle nodes
going offline very well (and it's supposed to).
On the Pis I've used gluster; in fact I was using gluster on
the Intel server as well, but it kept core dumping - on the Pi
it's a lot more reliable, although it doesn't do the I/O. I keep
hearing that gluster is dead now, but I don't think that's
actually true.
For dosemu, I haven't seen any issues with multinode, but then I've
only tried multinode on a couple of doors. I only just recently
set it up, and the new work that's been committed recently has made
it super easy. (Kudos to Rob and Mike(?), I think, who've worked on
that.)
Gotcha... I'm just using a single host with host volumes. Mostly
using Docker to ease deployments/movement/backup etc... I was able to completely switch hosts/environments a couple times without issue.