So, a really basic one here.
What's the current best practice to match terminal types between VMS and ssh clients? OpenVMS doesn't seem to understand a termtype of xterm, and I'm
not sure if it recognises termtypes from an (inbound session) OpenSSH
config file.
It would probably be good for VSI to spend a bit of time documenting
this, maybe creating some client configs to try.
xterm is software, not a terminal type.
On 4/16/2024 8:19 PM, motk wrote:
[...]
If xterm supports VT200/VT300/VT400 then set terminal to
one of those.
In theory /INQUIRE should work, but reality can be different.
On 2024-04-17, motk <meh@meh.meh> wrote:
[...]
In my .Xresources, I have:
XTerm*VT100.decTerminalID: vt102
Then on OpenVMS, $ SET TERM/INQUIRE correctly identifies the device
type. This does _not_ affect the TERM environment variable that gets set
in Linux (e.g. `echo $TERM` still reports xterm-256color).
As others have mentioned, something like vt240 or vt420 would be ideal. xterm's implementation of vt420 ends up making SET TERM/INQUIRE hang for several seconds, which is annoying since I have SET TERM/INQUIRE in my LOGIN.COM. This used to happen with vt240 as well, I think, but that has
been fixed and I just tested it: vt240 should be safe now.
I don't remember why I've left my xterm set to vt102... I have vague
memories that 'upgrading' to vt240 caused a problem with something not
at all related to VMS, and vt102 works fine in VMS, so I haven't changed
it.
Instead of setting X resources for your whole X session, you can also
launch xterm with "-ti vt240" to override what's in your X resources
database for that invocation of xterm.
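Pulling the pieces together, here is a minimal ~/.Xresources fragment for the vt240 route (vt240 being what is reported as safe above; this is a user-level sketch, not VSI guidance):

```
! Make xterm answer terminal-ID queries as a VT240, so that
! $ SET TERMINAL/INQUIRE on the VMS side identifies the device type.
XTerm*VT100.decTerminalID: vt240
```

Load it with xrdb -merge ~/.Xresources, or bypass the resources database for a single session with xterm -ti vt240 as described.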
I believe that VT200/VT300/VT400 and 8bit gives the "best"
VMS experience.
Arne
On 18/4/24 07:14, Arne Vajhøj wrote:
I believe that VT200/VT300/VT400 and 8bit gives the "best"
VMS experience.
Thanks all - it's worth a bit more thought, given that X11 is basically
dead, and that most people just open a Windows or Linux default terminal
and 'ssh foo@bar'. Windows Terminal does do a lot of work on
terminal emulation and by default presents as xterm.
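A rough sketch of the client-side matching being discussed: a shell helper that maps whatever the emulator advertises in TERM to a type VMS understands before connecting. The mapping table is purely my assumption for illustration; nothing here comes from VSI documentation.

```shell
#!/bin/sh
# Hypothetical helper: pick a DEC terminal type from the local $TERM.
# ssh passes the client's TERM to the remote side in its pty request,
# so overriding TERM before invoking ssh changes what VMS sees.
vms_termtype() {
  case "$1" in
    vt[0-9]*)             printf '%s\n' "$1" ;;    # already a DEC type: pass through
    xterm*|screen*|tmux*) printf 'vt240\n' ;;      # xterm-family emulators
    *)                    printf 'vt100\n' ;;      # conservative fallback
  esac
}

vms_termtype "xterm-256color"
```

A connection could then be started as TERM=$(vms_termtype "$TERM") ssh user@vmshost (hostname hypothetical), leaving SET TERM/INQUIRE on the VMS side to do the rest.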
On 19/4/24 09:36, Arne Vajhøj wrote:
It is my impression that the majority of VMS terminal
users today use Putty.
Putty is kind of ancient, and a huge pain in a11y and general usability.
It's time to look beyond it.
That is why I always use PuTTY - even on Linux. I run my console sessions
from that, then use virsh console <VM name> under KVM/QEMU.
It is my impression that the majority of VMS terminal
users today use Putty.
Arne
On 19/4/24 09:39, Chris Townley wrote:
That is why I always use PuTTY - even on Linux. I run my console
sessions from that, then use virsh console <VM name> under KVM/QEMU.
The Proxmox serial console works very well too.
PuTTY may be old, but it is well maintained, and it works.
On 4/18/24 19:06, motk wrote:
Putty is kind of ancient, and a huge pain in a11y and general usability.
PuTTY is younger than TCP/IP (both v4 and v6) and Ethernet.
It's time to look beyond it.
Is it time to look past TCP/IP (both v4 and v6) and Ethernet?
N.B. 802.11 WiFi is effectively Ethernet.
PuTTY, X11, XTerm, OpenVMS, yes they are all still actively maintained
and still work well.
Some may be pushing to replace X11 with Wayland (ugh), but most Unix desktops are still based on X11 at core, with the user desktop GUI
sitting on top of that, and will be for the foreseeable future.
If it ain't broke, why fix it ?...
Chris
On 19/4/24 13:09, Grant Taylor wrote:
PuTTY, X11, XTerm, OpenVMS, yes they are all still actively maintained
and still work well.
X11 is literally abandoned, apart from the X11 wayland bits. Please be
real.
On 4/19/24 05:10, motk wrote:
... what?
Just because something is not new doesn't mean that it's bad.
OpenVMS is not new by any stretch of the imagination.
It's an old technology that's still actively being maintained.
Old and actively maintained is okay.
Old and unmaintained is going to become a problem, it's only a question
of when.
On 4/19/2024 6:15 AM, motk wrote:
[...]
X11 is literally abandoned, apart from the X11 wayland bits. Please be
real.
I guess it depends on what you really mean by abandoned.
Is the X11 software still being maintained? Yes it is.
In article <uvtvhs$31urj$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
[...]
I guess it depends on what you really mean by abandoned.
Is the X11 software still being maintained? Yes it is.
This is factually accurate, but the pace of maintenance
is glacial. I'd say it's on life support, but not much
more than that. The assertion was that X11 maintenance
is active; that's only true in so far that some modicum
of it exists.
On 4/19/2024 11:57 AM, Dan Cross wrote:
[...]
This is factually accurate, but the pace of maintenance
is glacial. I'd say it's on life support, but not much
more than that. The assertion was that X11 maintenance
is active; that's only true in so far that some modicum
of it exists.
People can take a look:
https://gitlab.freedesktop.org/xorg/lib/libx11/-/commits/master
https://gitlab.freedesktop.org/xorg/xserver/-/commits/master
If it ain't broke, why fix it ?...
Because it's _broken_. ssh X11 forwarding has been deliberately
broken by _design_ for a _decade_; if you're found using X11 in a
corporate environment you will get an earnest conversation with
people with no sense of humour. The world is real and here and now.
On 19/4/24 13:12, Grant Taylor wrote:
N.B. 802.11 WiFi is effectively Ethernet.
I am aware of 802.foo, yes.
On 4/19/2024 10:17 AM, Grant Taylor wrote:
On 4/19/24 05:10, motk wrote:
... what?
Just because something is not new doesn't mean that it's bad.
Yep! And the wheel is still doing well ...
Worth clarifying that “Ethernet” is actually covered under 802.3,
not 802.11. The parts they share in common (MAC addresses and the
“frame” concept) are defined in 802.2. These parts are also shared
with the other 802.x specs.
In article <uvtjbq$2ujbg$3@dont-email.me>, yep@yep.yep (motk) wrote:
If it ain't broke, why fix it ?...
Because it's _broken_. ssh X11 forwarding has been deliberately
broken by _design_ for a _decade_; if you're found using X11 in a
corporate environment you will get an earnest conversation with
people with no sense of humour. The world is real and here and now.
That's an extremely sweeping statement. I'm working for a very large and paranoid corporation. I wouldn't try using X11 across the internet, but
for working with a lot of different Linuxes, macOS and Solaris in a
secured development lab, it is truly excellent, and nobody is trying to
stop me.
It lets me have editors and terminal windows on lots of different Linuxes without needing to deal with their different GUIs and desktop
environments. As far as I can see, Wayland doesn't offer that unless you
slap on a remote desktop protocol. I actively don't want remote desktop:
it is not useful to me, it will suck bandwidth, and it gives me much,
much more setup to do.
I produce closed-source commercial shared libraries that have to work on
as many Linuxes as possible. The list of ones I have running in the lab
isn't ludicrous, but nor is it short:
x86-64: CentOS 7.9, RHEL 8.9, Rocky 8.9, Alma 8.9, Alma 9.3, SLES12sp5, SLES15sp5, Ubuntu LTS 20.04 and 22.04. I need to add Ubuntu LTS 24.04
soon, of course, and I'm getting extended support on the CentOS 7.9s so
that products released on them can serve out their maintenance lives.
Aarch64: Ubuntu 20.04, Amazon Linux 2, and RHEL 8.9. I need to add Amazon Linux 2023, Ubuntu LTS 22.04 and 24.04.
Would you want to set up desktops for all those different Linuxes?
John
Absolutely, Solaris, Linux, Freebsd and even cygwin + X + xfce4, all
depend on X11 at core. VMS as well, though not sure of current status.
As with systemd, Wayland looks like yet another attempt at power
grab ...
On 4/19/24 17:09, Lawrence D'Oliveiro wrote:
Worth clarifying that “Ethernet” is actually covered under 802.3, not
802.11. The parts they share in common (MAC addresses and the “frame”
concept) are defined in 802.2. These parts are also shared with the
other 802.x specs.
My understanding is that 802.11 is heavily influenced by Ethernet 802.3.
Something akin to generational evolution.
On 18/4/24 07:14, Arne Vajhøj wrote:
I believe that VT200/VT300/VT400 and 8bit gives the "best"
VMS experience.
Thanks all - it's worth a bit more thought, given that X11 is basically dead, and that most people just open a windows or linux default terminal and they 'ssh foo@bar'. Windows Terminal does do a lot of work on terminal emulation and by default presents as xterm.
Arne
[...]
As dead as it looks, it's still what I'm using today... Just a PuTTY
login to start a good old VMS session manager, and I'm in business.
Mostly DECterms, but also LSE. I must admit that the lack of support
for the mouse wheel in DECterm sucks. Otherwise it's still the terminal
that best fits my needs. Oh, and on the PC side, still using Excursion
too. Free, works all the time, even on my Windows 10 PC. What I miss is
the VT emulator that DEC made a long time ago. Was that VT320.EXE ? Can
it still be found somewhere ?
On 4/19/2024 8:05 PM, chrisq wrote:
Absolutely, Solaris, Linux, Freebsd and even cygwin + X + xfce4, all
depend on X11 at core. VMS as well, though not sure of current status.
That's a pretty pliable definition of "depend on" regarding VMS, since
there is literally no dependency on X11 within VMS.
Yeah, if you want a non-character cell interface, then X11 is the only
option on VMS, but to claim a dependency is a bit much. Even then, your X11
experience will be using a lot of DECterm windows most of the time.
On Sat, 20 Apr 2024 01:05:41 +0100, chrisq wrote:
Absolutely, Solaris, Linux, Freebsd and even cygwin + X + xfce4, all
depend on X11 at core. VMS as well, though not sure of current status.
DEC was a key contributor in the development of X11. But that was then.
As with systemd, Wayland looks like yet another attempt at power
grab ...
I wonder who you think is “grabbing” this “power”. Both systemd and Wayland are open-source projects, created by people who see a problem and
are trying to fix it. Those in the community who see value in these
efforts adopt their solutions, others don’t. There is no Monopolistic™ BigCorp® forcing any of these things down our throats. If you don’t want to use them, don’t use them.
On 4/20/24 02:23, Robert A. Brooks wrote:
[...]
That's a pretty pliable definition of "depend on" regarding VMS, since
there is literally no dependency on X11 within VMS.
Yeah, if you want a non-character cell interface, then X11 is the only
option
on VMS, but to claim a dependency is a bit much. Even then, your X11
experience will be using a lot of DECterm windows most of the time.
Perhaps a poor choice of words, but are there any desktop / GUI systems,
other than Windows, that do not depend on X11? The point was that it
is a standard, and not optional. Can't remember if the old Motif-based
CDE used X11 libs or not, but that's long gone anyway. VWS, ditto...
Nothing wrong with DECTerm. But there was a lot more you could do
with DECWindows. Worked great back when I had labs of X-terminals to
support both VMS and SunOS for the students (and yes, faculty liked
it, too.)
A suffocating carbuncle on what was an elegant OS that really didn't
need it...
Marc Van Dyck wrote on 20.04.2024 at 11:24:
As dead as it looks, it's still what I'm using today... Just a PuTTY
login to start a good old VMS session manager, and I'm in business.
Mostly DECterms, but also LSE. I must admit that the lack of support
for the mouse wheel in DECterm sucks. Otherwise it's still the terminal
that best fits my needs. Oh, and on the PC side, still using Excursion
too. Free, works all the time, even on my Windows 10 PC. What I miss is
the VT emulator that DEC made a long time ago. Was that VT320.EXE ? Can
it still be found somewhere ?
Yes, it's on the freeware CD #7:
https://www.digiater.nl/openvms/freeware/v70/vtstar/
I'm still using it in my internal environment, mainly for VMS nodes and serial consoles. Excellent VT terminal emulation.
On 4/20/24 07:32, bill wrote:
Nothing wrong with DECTerm. But there was a lot more you could do
with DECWindows. Worked great back when I had labs of X-terminals to
support both VMS and SunOS for the students (and yes, faculty liked
it, too.)
I think that X11's ability to work across platforms is something that's
unmet by any other protocol that I'm aware of.
GUI applications could run on their native / optimal platform and
display on whatever platform the user wanted to use.
No, I don't consider web based interfaces to be comparable.
On 4/20/2024 11:40 AM, Grant Taylor wrote:
I think that X11's ability to work across platforms is something
that's unmet by any other protocol that I'm aware of.
GUI applications could run on their native / optimal platform and
display on whatever platform the user wanted to use.
Cool feature.
But how many really need it?
In article <v00lph$3nik6$2@dont-email.me>, chrisq <devzero@nospam.com> wrote:
[snip]
A suffocating carbunkle on what was an elegant os that really didn't
need it...
Linux is a lot of things: incredibly useful, very powerful, and
arguably the most important software project in the world. But
"elegant" is not something that comes to mind when I look
closely at it.
- Dan C.
On 4/20/24 01:37, Lawrence D'Oliveiro wrote:
[...]
I wonder who you think is “grabbing” this “power”. Both systemd and
Wayland are open-source projects, created by people who see a problem
and are trying to fix it. Those in the community who see value in these
efforts adopt their solutions, others don’t. There is no Monopolistic™
BigCorp® forcing any of these things down our throats. If you don’t
want to use them, don’t use them.
systemd originally came from redhat. I rest my case.
On Sat, 20 Apr 2024 23:01:17 +0100, chrisq wrote:
Which is why I dumped Linux for FreeBSD a few years ago now, systemd
really was the last straw.
I started with BSD in about 1978, and FreeBSD in about 1991. Never
touched Linux!
On 20 Apr 2024 22:21:08 GMT, Bob Eager wrote:
On Sat, 20 Apr 2024 23:01:17 +0100, chrisq wrote:
Which is why I dumped Linux for FreeBSD a few years ago now, systemd
really was the last straw.
I started with BSD in about 1978, and FreeBSD in about 1991. Never
touched Linux!
There are maybe half a dozen BSD variants still undergoing some kind of development, versus about 50× that number of Linux distros. Yet it is
easier to move between Linux distros than it is to move between BSD
variants.
Linux is able to offer a great deal of variety with minimal
fragmentation, while the BSDs have more fragmentation and less variety.
Which is why I dumped Linux for FreeBSD a few years ago now, systemd
really was the last straw.
On Sat, 20 Apr 2024 16:08:33 +0100, chrisq wrote:
On 4/20/24 01:37, Lawrence D'Oliveiro wrote:
On Sat, 20 Apr 2024 01:05:41 +0100, chrisq wrote:
As with systemd, Wayland looks like yet another attempt at power grab
...
I wonder who you think is “grabbing” this “power”. Both systemd and
Wayland are open-source projects, created by people who see a problem
and are trying to fix it. Those in the community who see value in these
efforts adopt their solutions, others don’t. There is no Monopolistic™
BigCorp® forcing any of these things down our throats. If you don’t
want to use them, don’t use them.
systemd originally came from redhat. I rest my case.
Most Linux users don’t use Red Hat. It seems to be mainly a North American thing.
On Sat, 20 Apr 2024 10:40:01 -0500, Grant Taylor wrote:
No, I don't consider web based interfaces to be comparable.
At least some at Microsoft seem to think they’re “good enough”.
Look at Visual Studio Code: they could have used their own .NET to build it, but no, instead they built it on Electron.
On Sat, 20 Apr 2024 23:01:17 +0100, chrisq wrote:
Which is why I dumped Linux for FreeBSD a few years ago now, systemd
really was the last straw.
Open Source is all about choice. Not clear why you had to dump all of
“Linux” just because some distros use systemd.
And some BSD folks feel the need to work on their own systemd-lookalike,
too. It’s called “InitWare”.
On 4/20/24 18:26, Scott Dorsey wrote:
The problem is not systemd. Systemd is a symptom of the problem.
I can agree to that.
The problem is change for change's sake. Let's rewrite this thing and
make it different... not better, just different.
I feel like there is a HUGE dose of ignorance on some contemporary
developers and they are repeating old mistakes and making new mistakes.
I know that some oft maligned changes are actually rooted in good
reason. I'm thinking about the deprecation of ifconfig, netstat, and route. The kernel grew, changed, and gained a LOT of new options that
the old tools had no idea how to work with. I can get behind that.
What I can't stand is why there aren't new versions of ifconfig,
netstat, and route that use the new framework while providing command compatibility with nearly 50 years of Unix and Unix like OS history. Not providing a compatible wrapper is stupid in my opinion.
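As a sketch of the kind of compatibility wrapper being asked for, here is a tiny reformatter that turns modern `ip route` lines back into old netstat -r style columns. The sample input is hard-coded so the sketch is self-contained; a real wrapper would pipe `ip route show` into it, and the column layout is my guess at the old format, not an exact reproduction.

```shell
#!/bin/sh
# Hypothetical wrapper: translate `ip route` output lines such as
#   default via 192.168.1.1 dev eth0
# into netstat -r style "Destination Gateway Iface" columns.
to_netstat_style() {
  awk '{
    dest = $1; gw = "*"; dev = "-"
    for (i = 1; i < NF; i++) {
      if ($i == "via") gw  = $(i + 1)   # gateway, if the route has one
      if ($i == "dev") dev = $(i + 1)   # outgoing interface
    }
    printf "%-18s %-18s %s\n", dest, gw, dev
  }'
}

# Self-contained demo with a canned route line:
echo "default via 192.168.1.1 dev eth0" | to_netstat_style
```

On a live system one would run `ip route show | to_netstat_style`; a full wrapper would also need the flags, metric, and reference-count columns that the historical tools printed.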
There's some argument for a service manager. But a service manager
should not replace everything with one big monolithic chunk. I am not
a fan of service managers and I didn't like when Solaris implemented
it, but I can see some arguments in favor.
At least Solaris stopped SMF at managing services and didn't try to take
over DNS, NTP, and many other things.
The problem is not systemd. Systemd is a symptom of the problem.
The problem is change for change's sake. Let's rewrite this thing
and make it different... not better, just different.
There's some argument for a service manager. But a service manager
should not replace everything with one big monolithic chunk. I am not
a fan of service managers and I didn't like when Solaris implemented
it, but I can see some arguments in favor.
On 4/20/2024 6:27 PM, Lawrence D'Oliveiro wrote:
There are maybe half a dozen BSD variants still undergoing some kind of
development, versus about 50× that number of Linux distros. Yet it is
easier to move between Linux distros than it is to move between BSD
variants.
You are comparing BSDs that are different OSes (they do share
a lot of code, but that is pick and choose) with
Linux distros that all run the same kernel but are available
in many different bundles.
RHEL is the big one in on-prem enterprise Linux.
RHEL clones are some of the major gratis Linux distros (among a bunch of others).
Why is it the Linux distros are able to maintain a common kernel, but the BSDs are not? Aren’t the BSD kernels flexible enough for such different uses? Which aren’t even that different, compared to how distinct the various Linux distros can be?
On Sat, 20 Apr 2024 18:41:40 -0400, Arne Vajhj wrote:
RHEL is the big one in on-prem enterprise Linux.
Like I said, that seems to be a North American thing.
Most Linux distros are offshoots of Debian, not Red Hat.
Why is it the Linux distros are able to maintain a common kernel, but the
BSDs are not? Aren’t the BSD kernels flexible enough for such different
uses? Which aren’t even that different, compared to how distinct the
various Linux distros can be?
Because Linus owns the trademark and controls what can be called
Linux and what cannot be. Linus decides what goes into the kernel,
and therefore if he wants there to be one Linux kernel, there is.
If he wanted there to be two, he could do that too.
I think the problem is that they grew up in a Windows dominated world,
not like us greybeards.
+1
On 4/21/24 13:17, Scott Dorsey wrote:
[...]
On one hand I agree. But on the other hand I disagree.
Given that the Linux kernel is released as source code, people can
reconfigure it as they want. People can even add patches to it to add
additional functionality that's not in the upstream vanilla kernel
source. OpenZFS and some binary blob drivers from vendors being perfect
examples of such things not in the upstream vanilla kernel source.
As with systemd, Wayland looks like yet another attempt at power grab,
and even after years, still doesn't work properly, nor is it complete compared to X functionality. Who cares if X isn't completely secure,
just use it accordingly...
Chris
They can also look at the number of outstanding bugs that
are not getting fixed.
On So 21 Apr 2024 at 02:08, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Why is it the Linux distros are able to maintain a common kernel, but
the BSDs are not? Aren’t the BSD kernels flexible enough for such
different uses? Which aren’t even that different, compared to how
distinct the various Linux distros can be?
Because they do not want to! They have different objectives and they are really different projects. Not at all comparable to Linux distributions.
All the BSDs are great! I'm especially fond of FreeBSD, but even they
have issues with service (rather than process) management.
On Sun, 21 Apr 2024 11:20:03 +0200, Andreas Eder wrote:
I think the problem is that they grew up in a Windows dominated world,
not like us greybeards.
If only Windows had Linux-style service management, don’t you think? Imagine being able to add/remove, enable/disable and start/stop individual services without having to reboot the entire system!
Deeply unserious summation. Feel free to keep writing spaghetti Bourne
in rc.d, but the actual working world will just leave it behind.
I don't think macOS uses X either (for native macOS
applications - it can run X applications).
systemd originally came from redhat. I rest my case.
A suffocating carbuncle on what was an elegant OS that really didn't
need it...
This is what I'm saying here, it's possible to neckbeard yourself into irrelevance and then you end up looking like some dude complaining
there's no such thing as rock and roll anymore. It's just cringe.
It probably didn't hurt that the product they intended to
displace was also using Electron (Atom).
Arne
A client of mine has office routers running pfSense, which is based on FreeBSD. I find some oddities, compared to Linux. For example, the “route”
command (for maintaining the routing table) has no option to list the contents of the routing table: instead, you have to use an entirely
different command, “netstat -r”, for that.
There was a time when the BSDs had a much superior network stack to Linux. Those days are gone.
And by the way, the BSD world is working on its own systemd-lookalike,
too. It’s called “InitWare”.
At least Solaris stopped SMF at managing services and didn't try to take
over DNS, NTP, and many other things.
But my impression is that it missed on the main criteria: keeping
things simple.
To illustrate the point and move somewhat back to VMS, let me confess something: I really like SYS$MANAGER:SYSTARTUP_VMS.COM for managing
what gets started on VMS.
VMS starts the stuff that has to run, and one puts what one
wants to start in SYS$MANAGER:SYSTARTUP_VMS.COM, usually in the
form of @SYS$STARTUP:something$STARTUP.COM.
A simple text file that after a little cleanup will typically
be only 20-50 lines. Easy to understand. Easy to edit.
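A hedged sketch of what such a trimmed SYSTARTUP_VMS.COM tends to look like; the product startup names below are examples only and vary per site:

```
$! SYS$MANAGER:SYSTARTUP_VMS.COM -- illustrative sketch only
$ @SYS$STARTUP:TCPIP$STARTUP.COM       ! TCP/IP Services
$ @SYS$STARTUP:APACHE$STARTUP.COM      ! web server, if installed
$ EXIT
```

The whole startup policy is visible in one small file, which is exactly the appeal being described.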
On Mon, 22 Apr 2024 08:45:09 +1000, motk wrote:
This is what I'm saying here, it's possible to neckbeard yourself into
irrelevance and then you end up looking like some dude complaining
there's no such thing as rock and roll anymore. It's just cringe.
I see that around me all the time. systemd is one obvious trigger for
them, but there are others (e.g. Wayland, iproute2). Some of these people
may be physically no older than me (I’m guessing), but mentally it seems they already have one foot in the grave.
XQuartz is supplied by Apple and provides X under MacOS which works pretty reliably, although slower than native MacOS graphics. It's used by a lot of applications from Matlab to the various open-source ports to MacOS.
--scott
I see that around me all the time. systemd is one obvious trigger for
them, but there are others (e.g. Wayland, iproute2). Some of these people
may be physically no older than me (I’m guessing), but mentally it seems they already have one foot in the grave.
Why is it the Linux distros are able to maintain a common kernel, but the BSDs are not? Aren’t the BSD kernels flexible enough for such different uses? Which aren’t even that different, compared to how distinct the various Linux distros can be?
There are multiple arguments against systemd, and if you listen to the arguments that people actually make, you might learn something about them.
There are people like me who complain that systemd is not necessary at all, that it doesn't solve a problem on most systems.
There are other people who like the idea of a service manager, but they don't like the idea of one monolithic clump that is the service manager but also cron and also has fingers into other places. These people justifiably claim that this is contrary to the traditional Unix philosophy of modularity.
There are also people who are okay with the idea of one monolithic clump
but are offended by the non-unix-style non-human-readable configuration files.
Pay attention to the specific arguments people make rather than just putting them all into one category.
--scott
On 21/04/2024 12:08 pm, Lawrence D'Oliveiro wrote:
Why is it the Linux distros are able to maintain a common kernel, but the
BSDs are not? Aren’t the BSD kernels flexible enough for such different
uses? Which aren’t even that different, compared to how distinct the
various Linux distros can be?
The BSDs have a different philosophy; they consider a system to be kernel+userspace and that they should all be built together, ie RELEASE.
On 20/04/2024 10:37 am, Lawrence D'Oliveiro wrote:
I wonder who you think is “grabbing” this “power”. Both systemd and Wayland are open-source projects, created by people who see a problem and
are trying to fix it. Those in the community who see value in these
efforts adopt their solutions, others don’t. There is no Monopolistic™ BigCorp® forcing any of these things down our throats. If you don’t want to use them, don’t use them.
If you've ever spent two days awake chasing races in shell scripts
across pacemaker/corosync you'd do anything for systemd or something
similar, like the Solaris stuff.
This is what I'm saying here, it's possible to neckbeard yourself into irrelevance and then you end up looking like some dude complaining
there's no such thing as rock and roll anymore. It's just cringe.
One could argue that if you are chasing races in scripts, there's
something else wrong in the basic system design :-).
Having used Solaris for decades, their svcadm and other service
management tools seemed quite lightweight, in that all the config
scripts were in the usual places. Still in plain text format, as were
the log files, which could be manually edited with no ill effect. In
essence, a layer on top of what was already there. The FreeBSD service framework seems to work in the same way, a lightweight layer on top of
what was already there. Limited experience, but the AIX smit etc tools
also seem to work the same way, layered software design.
Now compare such an approach with that of systemd, and tell us why
such opaque complexity is a good thing...
On 21/04/2024 1:37 am, Arne Vajhøj wrote:
I don't think macOS uses X either (for native macOS
applications - it can run X applications).
I think it dropped X support a decade ago. They could probably do a
headless thing but nobody needs it. There are third party apps of course.
I think the problem is that they grew up in a Windows dominated world,
not like us greybeards.
On 4/21/2024 7:07 PM, Lawrence D'Oliveiro wrote:
On Sun, 21 Apr 2024 11:20:03 +0200, Andreas Eder wrote:
I think the problem is that they grew up in a Windows dominated world,
not like us greybeards.
If only Windows had Linux-style service management, don’t you think?
Imagine being able to add/remove, enable/disable and start/stop
individual services without having to reboot the entire system!
They don't need to imagine. They have been doing that for decades.
The BSDs have a different philosophy; they consider a system to be kernel+userspace and that they should all be built together, ie RELEASE.
It's not one or the other. Both completely valid viewpoints.
Still in plain text format, as were the log files, which could be
manually edited with no ill effect.
Now compare such an approach with that of systemd, and tell us why such opaque complexity is a good thing...
On Sun, 21 Apr 2024 19:14:59 -0400, Arne Vajhøj wrote:
On 4/21/2024 7:07 PM, Lawrence D'Oliveiro wrote:
If only Windows had Linux-style service management, don’t you think?
Imagine being able to add/remove, enable/disable and start/stop
individual services without having to reboot the entire system!
They don't need to imagine. They have been doing that for decades.
Windows can’t even update a DLL that is in use by running processes. I suppose it inherited that file-locking mentality from VMS.
If I seem intemperate sometimes, I apologise, but I am a grumpy old
bugger.
But does not matter so much for the users as they would want to reboot anyway.
The unix concept of modularity?
On Sun, 21 Apr 2024 21:50:55 -0400, Arne Vajhøj wrote:
But does not matter so much for the users as they would want to reboot
anyway.
But you said Windows has been doing that kind of thing without rebooting
for decades.
If only Windows had Linux-style service management, don’t you think? Imagine being able to add/remove, enable/disable and start/stop individual
services without having to reboot the entire system!
but are offended by the non-unix-style non-human-readable configuration
files.
[stares in sendmail]
Pay attention to the specific arguments people make rather than just putting them all into one category.
You don't have an argument, you're just justifying your dislike of
change. Nothing wrong with disliking change of course but please
understand that things like smf and systemd came to exist because they
were needed for modern infrastructure. The Old Ways just no longer cut
the mustard, and people who do this for a living for vital
infrastructure were sick of dealing with them.
add/remove, enable/disable and start/stop of services does not require overwriting DLLs.
Because it's _broken_. ssh x11 forwarding has been deliberately broken
by _design_ for a _decade_, if you're found using X11 in a corporate environment you will get an earnest conversation with people with no
sense of humour, the world is real and here and now.
On 4/20/2024 11:40 AM, Grant Taylor wrote:
On 4/20/24 07:32, bill wrote:
Nothing wrong with DECTerm. But there was a lot more you could do
with DECWindows. Worked great back when I had labs of X-terminals to
support both VMS and SunOS for the students (and yes, faculty liked
it, too.)
I think that X11's ability to work across platforms is something that's
unmet by any other protocol that I'm aware of.
GUI applications could run on their native / optimal platform and
display on whatever platform the user wanted to use.
No, I don't consider web based interfaces to be comparable.
Cool feature.
But how many really need it?
On 21/04/2024 8:58 am, Arne Vajhøj wrote:
It probably didn't hurt that the product they intended to
displace was also using Electron (Atom).
It'll probably move on to react/fluent/webview2, like Teams did.
Using a display engine used in anger on billions of devices is probably
a good call; there are a lot more React coders in the world than Qt or
similar.
On 4/21/24 04:20, Andreas Eder wrote:
I think the problem is that they grew up in a Windows dominated world,
not like us greybeards.
I really would like to agree. However we have some 20 year old
developers that have been using Linux their entire life making these
types of questionable decisions.
I think it's more apt to say that they have grown up with frameworks
that abstract things away from them and they have no idea how the
underlying infrastructure works.
On 20/04/2024 7:44 am, John Dallman wrote:
That's an extremely sweeping statement. I'm working for a very large and
paranoid corporation. I wouldn't try using X11 across the internet, but
for working with a lot of different Linuxes, macOS and Solaris in a
secured development lab, it is truly excellent, and nobody is trying to
stop me.
I'd love that sort of gig myself, but everywhere I've been for the past twenty years would have conniptions if I asked to open firewall holes
for X, or to add stuff to /etc/skel for xhosts, or to add selinux
policy, etc etc.
It lets me have editors and terminal windows on lots of different Linuxes
without needing to deal with their different GUIs and desktop
environments. As far as I can see, Wayland doesn't offer that unless you
slap on a remote desktop protocol. I actively don't want remote desktop:
it is not useful to me, it will suck bandwidth, and it gives me much,
much more setup to do.
Wayland is not X. It was never designed to do that, and I've personally berated Keith et al about that decision. RDP pretty much exploded and
all interest in replicated X in that way vanished, but there is yet
hope, ie https://gitlab.freedesktop.org/mstoeckl/waypipe/
On 2024-04-21, Grant Taylor <gtaylor@tnetconsulting.net> wrote:
On 4/21/24 04:20, Andreas Eder wrote:
I think the problem is that they grew up in a Windows dominated world,
not like us greybeards.
I really would like to agree. However we have some 20 year old
developers that have been using Linux their entire life making these
types of questionable decisions.
I think it's more apt to say that they have grown up with frameworks
that abstract things away from them and they have no idea how the
underlying infrastructure works.
I would hope that this generation of programmers still knows some of
the basics and (for example) still knows what a device or CPU register is...
On 2024-04-21, motk <meh@meh.meh> wrote:
Wayland is not X. It was never designed to do that, and I've personally
berated Keith et al about that decision. RDP pretty much exploded and
all interest in replicated X in that way vanished, but there is yet
hope, ie https://gitlab.freedesktop.org/mstoeckl/waypipe/
When running in a lower-bandwidth situation, what are the bandwidth requirements with the above approach, versus running the X11 protocol directly over ssh?
Sendmail.cf was hardly typical of most Unix configuration files,
but surely you already know that. Indeed, I think one could
make a strong argument that sendmail's design, not to mention
its configuration, wasn't very Unix-y at all. At this point, I
imagine that Eric would agree.
But a fair counter argument to the "but it's not Unix!" cries is
that Unix lacked a robust configuration language that was
ubiquitous across systems and packages. That was a bit of a
shame, but perhaps inevitable: some programs had very domain
specific requirements for configuration that would be difficult
to express in a generic configuration language (lookin' at you,
sendmail). Surely any given universal language would either be
insufficient to express the full generality required for all
use cases, or it would be too baroque for simple, common cases.
Anyway. I can get behind the idea that modern service
management is essential for server operation. But it doesn't
follow that the expression of that concept in systemd is a great
example of how to do it.
On 2024-04-20, Arne Vajhøj <arne@vajhoej.dk> wrote:
On 4/20/2024 11:40 AM, Grant Taylor wrote:
On 4/20/24 07:32, bill wrote:
Nothing wrong with DECTerm. But there was a lot more you could do
with DECWindows. Worked great back when I had labs of X-terminals to
support both VMS and SunOS for the students (and yes, faculty liked
it, too.)
I think that X11's ability to work across platforms is something that's
unmet by any other protocol that I'm aware of.
GUI applications could run on their native / optimal platform and
display on whatever platform the user wanted to use.
No, I don't consider web based interfaces to be comparable.
Cool feature.
But how many really need it?
Everyone who needs to run a GUI application on an embedded Linux box
or everyone who needs to run a GUI application on various Linux servers
from the comfort of your own desk and workstation.
Even at home, I routinely use this capability.
Also note that the GUI application could be anything, including a debugger.
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
Sendmail.cf was hardly typical of most Unix configuration files,
but surely you already know that. Indeed, I think one could
make a strong argument that sendmail's design, not to mention
its configuration, wasn't very Unix-y at all. At this point, I
imagine that Eric would agree.
This is true although the extensive use of regexps and rewrite rules is
very Unixlike.
Sendmail was a thing that started out clean and small and accreted more and more crap as time went by, until it got to the point where it just was not really much good anymore. And then it got replaced (for the most part) by more modular and maintainable systems.
But a fair counter argument to the "but it's not Unix!" cries is
that Unix lacked a robust configuration language that was
ubiquitous across systems and packages. That was a bit of a
shame, but perhaps inevitable: some programs had very domain
specific requirements for configuration that would be difficult
to express in a generic configuration language (lookin' at you,
sendmail). Surely any given universal language would either be insufficient to express the full generality required for all
use cases, or it would be too baroque for simple, common cases.
This is true, although with JSON things are changing a bit.
Anyway. I can get behind the idea that modern service
management is essential for server operation. But it doesn't
follow that the expression of that concept in systemd is a great
example of how to do it.
IF you believe this, and I am not sure that I do, then it seems to me
that the Solaris approach is far, far better than the systemd approach. Certainly a good argument can be made for service management and there are certainly some systems where it is a good idea, but that does not mean
that systemd is a good idea.
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
That said, some of the greybeards have no idea (and I mean none
whatsoever) of how modern systems _actually_ work under the hood
themselves. Those in glass houses....
I am certainly in that category and believe me it absolutely terrifies me.
It terrifies me even more that when I ask people how things work inside, nobody else seems to know either!
What worries me is that we have a generation of people who don't really care. Or maybe more than one generation.
--scott
That said, some of the greybeards have no idea (and I mean none
whatsoever) of how modern systems _actually_ work under the hood
themselves. Those in glass houses....
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
That said, some of the greybeards have no idea (and I mean none
whatsoever) of how modern systems _actually_ work under the hood themselves. Those in glass houses....
I am certainly in that category and believe me it absolutely terrifies me.
It terrifies me even more that when I ask people how things work inside, nobody else seems to know either!
What worries me is that we have a generation of people who don't really care. Or maybe more than one generation.
On Sun, 21 Apr 2024 11:16:07 +0200, Andreas Eder wrote:
On So 21 Apr 2024 at 02:08, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Why is it the Linux distros are able to maintain a common kernel, but
the BSDs are not? Aren’t the BSD kernels flexible enough for such
different uses? Which aren’t even that different, compared to how
distinct the various Linux distros can be?
Because they do not want to! They have different objectives and they
are really different projects. Not at all comparable to Linux
distributions.
And yet the range of variety they are able to offer, with their
fragmented kernels, is only a small fraction of what Linux distros have achieved, with their unified kernel base.
It's not even that people don't care, it's that the entire thing is so ridiculously complex that it's beyond the understanding of a single
person, and much of it is hidden from the OS, buried under layers
of firmware blobs running on hidden cores outside of the visibility,
let alone control, of an operating system.
I would hope that this generation of programmers still knows some
of the basics and (for example) still knows what a device or CPU
register is...
What worries me is that we have a generation of people who don't really care. Or maybe more than one generation.
--scott
I've found that X11 is one of the fatter remote GUI protocols. RDP and
VNC tend to be lighter.
But, RDP and VNC tend to imply a full desktop whereas X easily has
programs from different hosts display as windows on a single X server.
There are some hacks to emulate this with RDP and VNC, but they are not native and not reliable.
If anyone is doing anything with xhost for X11 these days, they are
doing it _very_ wrong. :-)
The only acceptable way to run X11 remotely is over ssh (and that is
with "ssh -X" and not "ssh -Y").
When running in a lower-bandwidth situation, what are the bandwidth requirements with the above approach, versus running the X11 protocol directly over ssh ?
Sendmail.cf was hardly typical of most Unix configuration files,
You may have a point, but to suggest that anyone who objects to systemd doesn't "have an argument" or is reactionarily change averse is going
too far. There are valid arguments against systemd, in particular.
I'll concede that modern Unix systems (including Linux systems), that
work in terms of services, need a robust service management subsystem.
If one takes a step back and thinks about what such a service
management framework actually has to do, a few things pop out: managing
the processes that implement the service, including possibly running
commands both before the "main" process starts and possibly after
it ends. It must manage dependencies between services; arguably it
should manage the resources assigned to services.
So this suggests that it should expose some way to express
inter-service dependencies, presumably with some sort of
human-maintainable representation; it must support some sort of
inter-service scheduling to satisfy those dependencies; and it
must work with the operating system to enforce resource management constraints.
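For concreteness, here is a hedged sketch of how systemd expresses those three requirements in a unit file. The directives shown are real systemd directives, but the service name and paths are invented for illustration:

```ini
[Unit]
Description=Example daemon (name and paths are hypothetical)
# An inter-service dependency, in a human-maintainable representation
Requires=network.target
After=network.target

[Service]
# Commands run before the main process, then the main process itself
ExecStartPre=/usr/local/bin/exampled-prep
ExecStart=/usr/local/bin/exampled
Restart=on-failure
# A resource constraint the service manager enforces through the OS (cgroups)
MemoryMax=512M

[Install]
WantedBy=multi-user.target
```

Whether this particular expression of the idea is a good one is, of course, exactly what the thread is arguing about.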
But what else should it do? Does it necessarily need to handle
logging, or network interface management, or provide name or time
services? Probably not.
SMF didn't do all of that (it did sort of support logging, but not
the rest), and that was fine.
And is it enough? I would argue not really, and this is really the
issue with the big monolithic approach that something like systemd
takes. What does it mean for each and every service to be "up"?
Is systemd able to express that sufficiently richly in all cases?
How does one express the sorts of probes that would be used to test,
anyway?
The counter that things like NTP can drag in big dependencies that
aren't needed (for something that's arguably table stakes, like
time) feels valid, but that's more of an indictment of the upstream
NTP projects, rather than justification for building it all into
a monolith.
Anyway. I can get behind the idea that modern service management
is essential for server operation. But it doesn't follow that the
expression of that concept in systemd is a great example of how to
do it.
For the sake of discussion, please explain why traditional SysV init
scripts aren't a service management subsystem / facility / etc.
There is way too much apathy going around.
Can the greybeards please stop sooking about it not being 1999 anymore. Once upon a time, knowing inscrutable regex and gluing stuff
together with sed and awk made you a sage superhero. Now people look at
you funny for gluing in tech debt, and they are right to do so.
Quantity does not equal quality. :-)
Simon.
If anyone is doing anything with xhost for X11 these days, they are doing
it _very_ wrong. :-) The only acceptable way to run X11 remotely is over
ssh (and that is with "ssh -X" and not "ssh -Y").
When running in a lower-bandwidth situation, what are the bandwidth requirements with the above approach, versus running the X11 protocol directly over ssh ?
Simon.
On 2024-04-19, motk <yep@yep.yep> wrote:
Because it's _broken_. ssh x11 forwarding has been deliberately broken
by _design_ for a _decade_, if you're found using X11 in a corporate
environment you will get an earnest conversation with people with no
sense of humour, the world is real and here and now.
In what way has it been deliberately broken (and in which versions of software) ?
BTW, to anyone using X11 over ssh, it is a standard recommendation to use "ssh -X" instead of "ssh -Y" so that additional security restrictions
are active. See the ssh man page for details.
Simon.
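That recommendation can be pinned in a client-side config rather than remembered per invocation. A hedged sketch of a ~/.ssh/config entry (the host name is an example); ForwardX11 corresponds to -X and ForwardX11Trusted to -Y:

```
# ~/.ssh/config -- example host entry
Host labhost
    ForwardX11 yes
    # keep untrusted (-X) semantics; do not grant -Y-style full trust
    ForwardX11Trusted no
```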
Once upon a time, knowing inscrutable regex and gluing stuff together
with sed and awk made you a sage superhero.
I dunno, I'm not unintelligent, but have you seen how much stress a
browser engine has to endure? Thousands of people with PhDs smash these things to bits on the regular. Hundreds of thousands of people use electron/react/whatever apps every day and never notice. Grousing about
this isn't a good look anymore.
I hear people say that systemd and smf are service management things and
that traditional SysV style init scripts aren't. But they never explain
why the former is and the latter isn't.
I was genuinely trying to learn something.
Grant Taylor <gtaylor@tnetconsulting.net> wrote:
I hear people say that systemd and smf are service management things and that traditional SysV style init scripts aren't. But they never explain why the former is and the latter isn't.
The one thing that smf and systemd have is the ability to watch a process
and restart it if it crashes. Many people find this very important, although personally I suspect that if your service is crashing a lot you should fix it rather than rely on something else to restart it.
Something else that they do provide is automated management of dependencies to start everything in order without the admin having to manually set the order of execution up. This can be a benefit and if done well can also speed boot times, but I am not sure that this is necessary to call a startup mechanism a service manager.
I was genuinely trying to learn something.
This is not likely to be a thread in which anyone will learn much, I am sorry to say.
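The dependency-management point above can be made concrete in a few lines of Python. The service names here are invented; graphlib (standard library, Python 3.9+) does the ordering that SysV admins encoded by hand with rc script numbering:

```python
# Minimal sketch of dependency-ordered startup: the scheduling core
# that service managers add over plain init scripts.
from graphlib import TopologicalSorter

# service -> set of services it depends on (names are invented)
deps = {
    "network": set(),
    "db": {"network"},
    "httpd": {"network"},
    "app": {"db", "httpd"},
}

# static_order() yields each service only after all its dependencies
order = list(TopologicalSorter(deps).static_order())
print(order)  # network comes first, app last; db/httpd in between
```

A real manager would start each service as it becomes ready (possibly in parallel) rather than computing one flat list, but the ordering problem is the same.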
On 4/21/24 21:37, Dan Cross wrote:
Sendmail.cf was hardly typical of most Unix configuration files,
I'll argue that sendmail.cf or sendmail.mc aren't as much configuration
files as they are source code for a domain specific language used to
impart configuration on the sendmail binary. In some ways it's closer
to modifying a Makefile / header file for a program than a more typical configuration file.
You may have a point, but to suggest that anyone who objects to systemd
doesn't "have an argument" or is reactionarily change averse is going
too far. There are valid arguments against systemd, in particular.
Agreed.
I'll concede that modern Unix systems (including Linux systems), that
work in terms of services, need a robust service management subsystem.
For the sake of discussion, please explain why traditional SysV init
scripts aren't a service management subsystem / facility / etc.
If one takes a step back and thinks about what such a service
management framework actually has to do, a few things pop out: managing
the processes that implement the service, including possibly running
commands both before the "main" process starts and possibly after
it ends. It must manage dependencies between services; arguably it
should manage the resources assigned to services.
I feel like the pre / post commands should not be part of the system management ${PICK_YOUR_TERM}. Instead there should be a command (script
or binary) that can be called to start / stop / restart / etc. a service
and that it is the responsibility of that command (...) to run the pre
and / or post commands related to the actual primary program executable.
I feel like the traditional SysV / /etc/init.d scripts did the pre and /
or post commands fairly well.
What the SysV init system didn't do is manage dependencies. Instead
that dependency management was offloaded to the system administrator.
So this suggests that it should expose some way to express
inter-service dependencies, presumably with some sort of
human-maintainable representation; it must support some sort of
inter-service scheduling to satisfy those dependencies; and it
must work with the operating system to enforce resource management
constraints.
I'm okay with that in spirit. But I'm not okay with what I've witnessed of the execution of this. I've seen a service restart, when a HUP would
suffice, cause multiple other things to stop and restart because of the dependency configuration.
Yes, things like a web server and an email server probably really do
need networking. But that doesn't mean that they need the primary
Ethernet interface to be up/up. The loopback / localhost and other
Ethernet interfaces are probably more than sufficient to keep the
servers happy while I re-configure the primary Ethernet interface.
But what else should it do? Does it necessarily need to handle
logging, or network interface management, or provide name or time
services? Probably not.
I think almost certainly not. Or more specifically I think that -- what
I colloquially call -- an init system should keep its bits off name resolution and network interface management.
SMF didn't do all of that (it did sort of support logging, but not
the rest), and that was fine.
The only bits of logging that I've seen in association with SMF was
logging of SMF's processing of starting / stopping / etc. services. The
rest of the logging was handled by the standard system logging daemon.
And is it enough? I would argue not really, and this is really the
issue with the big monolithic approach that something like systemd
takes. What does it mean for each and every service to be "up"?
Is systemd able to express that sufficiently richly in all cases?
How does one express the sorts of probes that would be used to test,
anyway?
I would argue that this is a status / ping operation that a venerable
init script should provide and manage.
If the system management framework wants to periodically call the init
script to check the status of the process, fine. Let the service's init script manage what tests are done and how to do them. The service's
init script almost certainly knows more about the service than a generic
init / service lifecycle manager thing.
I feel like there are many layering violations in the pursuit of a service lifecycle manager.
Here's a thought, have a separate system that does monitoring / health
checks of things and have it report its findings and possibly try to
restart the unhealthy service using the init / SMF / etc. system in the
event that is necessary.
Multiple sub-systems should work in concert with each other. No single subsystem should try to do multiple subsystems' jobs.
The counter that things like NTP can drag in big dependencies that
aren't needed (for something that's arguably table stakes, like
time) feels valid, but that's more of an indictment of the upstream
NTP projects, rather than justification for building it all into
a monolith.
+10
Anyway. I can get behind the idea that modern service management
is essential for server operation. But it doesn't follow that the
expression of that concept in systemd is a great example of how to
do it.
+1
On 4/21/24 21:37, Dan Cross wrote:
I'll concede that modern Unix systems (including Linux systems), that
work in terms of services, need a robust service management subsystem.
For the sake of discussion, please explain why traditional SysV init
scripts aren't a service management subsystem / facility / etc.
What the SysV init system didn't do is manage dependencies.
On 4/22/24 19:15, motk wrote:
I dunno, I'm not unintelligent, but have you seen how much stress a
browser engine has to endure? Thousands of people with PhDs smash these
things to bits on the regular. Hundreds of thousands of people use
electron/react/whatever apps every day and never notice. Grousing about
this isn't a good look anymore.
How much more productive work is done with a contemporary web browser in
2024 than in 2004 or even in 1998 (save for encryption)?
How much more productive work are computers doing in general in 2024
than in 1994?
Have the frameworks and fancy things that are done in 2024 actually
improved things?
I feel like there is massively disproportionately more computation power
/ resources consumed for very questionable things with not much to show
for it. Think what could have been done in the mid '90s with today's computing resources.
As such, I believe that there is some room for grousing about many questionable practices today.
The one thing that smf and systemd have is the ability to watch a
process and restart it if it crashes.
Many people find this very important, although personally I suspect
that if your service is crashing a lot that you should fix it rather
than rely on something else to restart it.
Something else that they do provide is automated management of
dependencies to start everything in order without the admin having
to manually set the order of execution up. This can be a benefit
and if done well can also speed boot times, but I am not sure that
this is necessary to call a startup mechanism a service manager.
This is not likely to be a thread in which anyone will learn much,
I am sorry to say.
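The two capabilities discussed above (restarting a crashed process and ordering startup by dependency) map onto a couple of directives in a systemd unit file. A minimal illustrative sketch, with a hypothetical service name and binary path:

```ini
# /etc/systemd/system/exampled.service -- hypothetical unit showing
# the two features under discussion: dependency ordering and
# restart-on-crash.
[Unit]
Description=Example daemon
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/exampled
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

`After=`/`Wants=` handle the ordering; `Restart=on-failure` handles the supervision.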
init has been watching processes and restarting them if they crash for
the 25 years that I've been messing with Linux.
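The classic mechanism referred to here is the `respawn` action in /etc/inittab. A minimal illustrative entry (hypothetical daemon path):

```
# /etc/inittab entry format is id:runlevels:action:process.
# "respawn" tells SysV init to restart the process whenever it exits.
ex:2345:respawn:/usr/local/bin/exampled
```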
On 4/22/24 19:13, motk wrote:
Can the greybeards please stop sooking about it not being 1999 anymore
please. Once upon a time, knowing inscrutable regex and gluing stuff
together with sed and awk made you a sage superhero. Now people look
at you funny for gluing in tech debt, and they are right to do so.
That's a non-answer.
I was hoping to see a ${SERVICE_MANAGEMENT_THING} provides:
- this
- this
- that
- and this
- don't forget about that
As such, I believe that there is some room for grousing about many questionable practices today.
Then perhaps be a little more open minded and do some inquiry of your
own as well.
world.
On 23/4/24 15:18, Lawrence D'Oliveiro wrote:
systemd-haters are like the anti-fluoridationists of the Open Source
world.
Well, I think that's unfair ...
Eh, JSON has its own problems; since the representation of
numbers is specified to be compatible with floats, it's possible
to lose data by translating it through JSON (I've seen people
put e.g. machine addresses in JSON and then be surprised when
the low bits disappear: floating point representations are not
exact over the range of 64-bit integers!).
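The precision claim is easy to check. The sketch below uses Python only to show the double arithmetic; any language with IEEE-754 doubles behaves the same:

```python
# IEEE-754 doubles have a 53-bit significand, so not every 64-bit
# integer is representable. A JSON pipeline that parses numbers as
# doubles silently rounds large values to the nearest representable one.
n = 100000000000000001      # 18 digits, fits comfortably in int64
d = float(n)                # what a double-based JSON parser would store
print(int(d))               # 100000000000000000 -- the low bits are gone
print(int(d) == n)          # False
```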
On 2024-04-22, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
Eh, JSON has its own problems; since the representation of
numbers is specified to be compatible with floats, it's possible
to lose data by translating it through JSON (I've seen people
put e.g. machine addresses in JSON and then be surprised when
the low bits disappear: floating point representations are not
exact over the range of 64-bit integers!).
I would consider that to be strictly a programmer error. That's the
kind of thing that should be stored as a string value unless you are
using a JSON library that preserves integer values unless decimal data
is present in the input data (and hence silently converts it to a float).
I don't expect people to write their own JSON library (although I hope
they can given how relatively simple JSON is to parse), but I do expect
them to know what values they can use in libraries in general without experiencing data loss.
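The string-value fix described here can be sketched in a few lines of Python (the `addr` field name is illustrative):

```python
import json

# Serialize the sensitive integer as a string so that no double-based
# intermediate tool can reinterpret (and round) it as a number.
record = {"addr": str(100000000000000001)}
wire = json.dumps(record)

# The consumer converts back explicitly; the value survives exactly.
restored = int(json.loads(wire)["addr"])
print(restored)   # 100000000000000001
```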
It's just cringe.
In modern languages, one can often derive JSON serialization and deserialization methods from the source data type, transparent
to the programmer. Those may decide to use the JSON numeric
type for numeric data; this has surprised a few people I know
(who are extraordinarily competent programmers). Sure, the fix
is generally easy (there's often a way to annotate a datum to
say "serialize this as a string"), but that doesn't mean that
even very senior people don't get caught out at times.
But the problem is even more insidious than that; popular tools
like `jq` can take properly serialized source data and silently
make lossy conversions. So you might have properly written,
value preserving libraries at both ends and still suffer loss
due to some intermediate tool.
JSON is dangerous. Caveat emptor.
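The intermediate-tool hazard can be simulated without jq itself: Python's `json.loads` accepts a `parse_int` hook, and forcing integers through `float` reproduces what a double-based tool does internally. A minimal sketch:

```python
import json

s = '{"v": 100000000000000001}'

# Python's json module preserves integers exactly by default.
faithful = json.loads(s)["v"]

# Forcing integers through float mimics a double-based intermediate.
lossy = int(json.loads(s, parse_int=float)["v"])

print(faithful)   # 100000000000000001
print(lossy)      # 100000000000000000
```

So even with value-preserving producers and consumers, one double-based hop in the middle is enough to corrupt the data.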
On 2024-04-23, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In modern languages, one can often derive JSON serialization and
deserialization methods from the source data type, transparent
to the programmer. Those may decide to use the JSON numeric
type for numeric data; this has surprised a few people I know
(who are extraordinarily competent programmers). Sure, the fix
is generally easy (there's often a way to annotate a datum to
say "serialize this as a string"), but that doesn't mean that
even very senior people don't get caught out at times.
But the problem is even more insidious than that; popular tools
like `jq` can take properly serialized source data and silently
make lossy conversions. So you might have properly written,
value preserving libraries at both ends and still suffer loss
due to some intermediate tool.
JSON is dangerous. Caveat emptor.
JSON is fine. What _is_ dangerous are the incredibly arrogant people
who think they can design the above libraries in a way that silently
alter someone's data in that way.
On 4/21/2024 6:45 PM, motk wrote:
It's just cringe.
So is using "cringe" as a noun.
Hunter
On 24/4/24 10:23, Lawrence D'Oliveiro wrote:
Heh, you used “burn” as a noun, too. ;)
It's what all the cool kids do now, verbing nouns and nounifying verbs.
Heh, you used “burn” as a noun, too. ;)
I shall never recover from this burn.
Not the source I would refer to. While it has recently been used in
slang as a noun, it's still only a verb or an adjective to me. And I'm
sure someone will tell me it's an example of English being a living
language that's growing, etc. Whatever. I'm old, and I cringe at people
using it as a noun.
Hunter
Not the source I would refer to.
On Tue, 23 Apr 2024 21:50:43 -0400, Hunter Goatley wrote:
Not the source I would refer to.
You are certainly no “source” I would treat as credible.
On Sun, 21 Apr 2024 19:14:59 -0400, Arne Vajhøj wrote:
On 4/21/2024 7:07 PM, Lawrence D'Oliveiro wrote:
On Sun, 21 Apr 2024 11:20:03 +0200, Andreas Eder wrote:
I think the problem is that they grew up in a Windows dominated world, not like us greybeards.
If only Windows had Linux-style service management, don’t you think?
Imagine being able to add/remove, enable/disable and start/stop
individual services without having to reboot the entire system!
They don't need to imagine. They have been doing that for decades.
Windows can’t even update a DLL that is in use by running processes. I suppose it inherited that file-locking mentality from VMS.
Look at the long list of reasons why Windows needs a reboot here, from Microsoft itself: <https://learn.microsoft.com/en-us/troubleshoot/windows-server/installing-updates-features-roles/why-prompted-restart-computer>.
If anything, Linux has acquired Windows NT style service
management and logging with systemd.
Linux is no different here. Want to upgrade your X server? Going to have
to at least restart all X11 apps.
For example, the csrss.exe mentioned in the article is the user-space
chunk of the Win32 environment subsystem - the thing that allows Windows
NT to run regular windows software.
On Wed, 24 Apr 2024 17:04:36 +1200, David Goodwin wrote:
If anything, Linux has acquired Windows NT style service
management and logging with systemd.
Did you know that systemd uses text-based config files in .INI format? The same format Microsoft invented for Windows back in the 1980s, then junked
in favour of that stinking cesspool known as the Registry?
Linux is no different here. Want to upgrade your X server? Going to have
to at least restart all X11 apps.
That’s just logging out of a GUI session and logging in again.
And remember, you can have multiple GUI sessions logged in at once.
For example, the csrss.exe mentioned in the article is the user-space
chunk of the Win32 environment subsystem - the thing that allows Windows
NT to run regular windows software.
There’s nothing like that needed on Linux.
On 24/4/24 12:34, Lawrence D'Oliveiro wrote:
On Tue, 23 Apr 2024 21:50:43 -0400, Hunter Goatley wrote:
Not the source I would refer to.
You are certainly no “source” I would treat as credible.
How I've missed you, usenet.
The thing is, when you're working at scale, managing services
across tens of thousands of machines, you quickly discover that
shit happens. Things sometimes crash randomly; often this is
due to a bug, but sometimes it's just because the OOM killer got
greedy due to the delayed effects of a poor scheduling decision,
or there was a dip on one of the voltage rails and a DIMM lost a
bit, or a job landed on a machine that's got some latent
hardware fault and it just happened to wiggle things in just the
right way so that a 1 turned into a 0 (or vice versa), or any
number of other things that may or may not have anything to do
with the service itself.
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
The thing is, when you're working at scale, managing services
across tens of thousands of machines, you quickly discover that
shit happens. Things sometimes crash randomly; often this is
due to a bug, but sometimes it's just because the OOM killer got
greedy due to the delayed effects of a poor scheduling decision,
or there was a dip on one of the voltage rails and a DIMM lost a
bit, or a job landed on a machine that's got some latent
hardware fault and it just happened to wiggle things in just the
right way so that a 1 turned into a 0 (or vice versa), or any
number of other things that may or may not have anything to do
with the service itself.
Oh, I understand this completely. I have stood in the middle of a large
colocation facility and listened to Windows reboot sounds every second or
two coming from different places in the room each time.
What I don't necessarily understand is why people consider this acceptable.
People today just seem to think this is the normal way of doing business.
Surely we can do better.
On Wed, 24 Apr 2024 21:52:36 +1200, David Goodwin wrote:
The difference here is that Windows NT isn't limited to a single
userspace API/personality, historically it provided three (Win32, OS/2
and POSIX) in addition to its own Native API.
That’s the theory. In practice, it doesn’t seem to have worked very well. The POSIX “personality” for example, was essentially unusable.
When the Windows engineers were working on WSL1, emulating Linux kernel
APIs on Windows, you would think they would have used this “personality” system. But they did not.
I suspect it had already bitrotted into nonfunctionality by that point.
In the end, they had to give up, and bring in an honest-to-goodness Linux kernel, in WSL2.
The difference here is that Windows NT isn't limited to a single
userspace API/personality, historically it provided three (Win32, OS/2
and POSIX) in addition to its own Native API.
In article <v088m8$1juj9$1@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2024-04-22, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
Eh, JSON has its own problems; since the representation of
numbers is specified to be compatible with floats, it's possible
to lose data by translating it through JSON (I've seen people
put e.g. machine addresses in JSON and then be surprised when
the low bits disappear: floating point representations are not
exact over the range of 64-bit integers!).
I would consider that to be strictly a programmer error. That's the
kind of thing that should be stored as a string value unless you are
using a JSON library that preserves integer values unless decimal data
is present in the input data (and hence silently converts it to a float).
I don't expect people to write their own JSON library (although I hope
they can given how relatively simple JSON is to parse), but I do expect
them to know what values they can use in libraries in general without
experiencing data loss.
In modern languages, one can often derive JSON serialization and deserialization methods from the source data type, transparent
to the programmer. Those may decide to use the JSON numeric
type for numeric data; this has surprised a few people I know
(who are extraordinarily competent programmers). Sure, the fix
is generally easy (there's often a way to annotate a datum to
say "serialize this as a string"), but that doesn't mean that
even very senior people don't get caught out at times.
But the problem is even more insidious than that; popular tools
like `jq` can take properly serialized source data and silently
make lossy conversions. So you might have properly written,
value preserving libraries at both ends and still suffer loss
due to some intermediate tool.
JSON is dangerous. Caveat emptor.
On 4/22/24 19:15, motk wrote:
I dunno, I'm not unintelligent but have you seen how much stress a
browser engine has to endure? Thousands of people with phds smash
these things to bits on the regular. Hundreds of thousands of people
use electron/react/whatever apps every day and never notice. Grousing
about this isn't a good look anymore.
How much more productive work is done with a contemporary web browser in
2024 than in 2004 or even in 1998 (save for encryption)?
How much more productive work are computers doing in general in 2024
than in 1994?
Have the frameworks and fancy things that are done in 2024 actually
improved things?
I feel like there is massively disproportionately more computation power
/ resources consumed for very questionable things with not much to show
for it. Think what could have been done in the mid '90s with today's computing resources.
On 4/24/2024 6:15 PM, Lawrence D'Oliveiro wrote:
On Wed, 24 Apr 2024 21:52:36 +1200, David Goodwin wrote:
The difference here is that Windows NT isn't limited to a single
userspace API/personality, historically it provided three (Win32, OS/2
and POSIX) in addition to its own Native API.
That’s the theory. In practice, it doesn’t seem to have worked very
well. The POSIX “personality” for example, was essentially unusable.
My impression is that it worked fine.
When the Windows engineers were working on WSL1, emulating Linux kernel
APIs on Windows, you would think they would have used this
“personality” system. But they did not.
Lowest common denominator for Unix APIs from the early 1990s was
probably not interesting.
This will be a relatively long post. Sorry.
The problem at hand has nothing to do with JSON. It is
a string to numeric and data types problem.
JSON:
{ "v": 100000000000000001 }
XML:
<data>
<v>100000000000000001</v>
</data>
YAML:
v: 100000000000000001
All expose the same problem.
The value cannot be represented as is in some very common
data types like 32 bit integers and 64 bit floating point.
The fact that it ultimately is the developers responsibility
to select proper data types does not mean that programming languages
and JSON libraries can not help catch errors.
If it is obvious that an unexpected/useless result is being
produced then it should be flagged (return error code or throw
exception depending on technology).
Let us go back to the example with 100000000000000001.
Trying to stuff that into a 32 bit integer by, say, parsing
it as a 64 bit integer and returning the lower 32 bits
is in my opinion an error. Nobody wants to get an int
of 1569325057 from retrieving a 32 bit integer
from "100000000000000001". It should give an error.
The case with a 64 bit floating point is more tricky. One
could argue that 100000000000000001.0 is the expected
result and that 100000000000000000.0 should be considered
an error. And it probably would be an error in the majority
of cases. But there is actually the possibility that
someone who understands floating point is reading the JSON
and expects what is happening and does not care because
there is some uncertainty in the underlying data. And
raising a false error for people who understand FP data
types to prevent those who do not understand FP data types
from shooting themselves in the foot is not good.
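Both cases walked through above can be reproduced directly. A Python sketch; the 32-bit case uses explicit masking to mimic a library that returns only the low bits:

```python
n = 100000000000000001

# The int case: keeping only the low 32 bits yields the "crazy value"
# nobody wants.
print(n & 0xFFFFFFFF)    # 1569325057

# The 64-bit floating point case: the nearest representable double is
# the "slightly off" value 100000000000000000.0.
print(int(float(n)))     # 100000000000000000
```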
On 4/23/2024 9:03 AM, Dan Cross wrote:
In article <v088m8$1juj9$1@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2024-04-22, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
Eh, JSON has its own problems; since the representation of
numbers is specified to be compatible with floats, it's possible
to lose data by translating it through JSON (I've seen people
put e.g. machine addresses in JSON and then be surprised when
the low bits disappear: floating point representations are not
exact over the range of 64-bit integers!).
I would consider that to be strictly a programmer error. That's the
kind of thing that should be stored as a string value unless you are
using a JSON library that preserves integer values unless decimal data
is present in the input data (and hence silently converts it to a float).
I don't expect people to write their own JSON library (although I hope
they can given how relatively simple JSON is to parse), but I do expect
them to know what values they can use in libraries in general without
experiencing data loss.
In modern languages, one can often derive JSON serialization and
deserialization methods from the source data type, transparent
to the programmer. Those may decide to use the JSON numeric
type for numeric data; this has surprised a few people I know
(who are extraordinarily competent programmers). Sure, the fix
is generally easy (there's often a way to annotate a datum to
say "serialize this as a string"), but that doesn't mean that
even very senior people don't get caught out at times.
But the problem is even more insidious than that; popular tools
like `jq` can take properly serialized source data and silently
make lossy conversions. So you might have properly written,
value preserving libraries at both ends and still suffer loss
due to some intermediate tool.
JSON is dangerous. Caveat emptor.
This will be a relatively long post. Sorry.
The problem at hand has nothing to do with JSON. It is
a string to numeric and data types problem.
[snip]
But selecting an appropriate data type for a given piece
of data based on its possible values and usage is
core responsibility for a developer.
Let us see some code.
I picked Groovy as demo language, because I am a Java guy
and Groovy allows me to demo a lot.
As a JVM language there are several possible data types to
pick from:
int
long
double
String
BigInteger
BigDecimal
And obviously int and double have problems with 100000000000000001
while the rest can store it OK.
To illustrate the option of different JSON libraries I will
test with both GSON (Google JSON library) and Jackson (probably
the most widely used JSON library in the Java world).
Let us first look at the model where the JSON is parsed to
a tree.
We see:
* the expected slightly off value for double
* the crazy value for int (no exception)
* other data types are fine
Now mapping/binding to class.
Very similar behavior except that now I do get the exception for
int that I so much prefer.
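The Groovy/GSON/Jackson code itself is not reproduced in the post. As a rough analog, the Python sketch below shows the same contrast: tree-style parsing hands back the exact value, while binding to a fixed 32-bit field raises instead of silently truncating, the behavior preferred above:

```python
import json
import struct

v = json.loads('{"v": 100000000000000001}')["v"]
print(v)   # tree parse keeps the exact value: 100000000000000001

# Binding to a 32-bit signed field: struct refuses rather than truncate.
try:
    struct.pack("<i", v)
except struct.error:
    print("rejected: value does not fit in a 32 bit integer")
```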
In article <v0c6t7$2jtvn$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
The problem at hand has nothing to do with JSON. It is
a string to numeric and data types problem.
It is a problem with a format that permits silently lossy
conversions.
[snip]
But selecting an appropriate data type for a given piece
of data based on its possible values and usage is
core responsibility for a developer.
One hopes one also has tools that permit detection of
truncation. You also ignored the point about how third-party
tools for manipulating JSON will silently lose data.
On Wed, 24 Apr 2024 19:13:44 -0400, Arne Vajhøj wrote:
On 4/24/2024 6:15 PM, Lawrence D'Oliveiro wrote:
On Wed, 24 Apr 2024 21:52:36 +1200, David Goodwin wrote:
The difference here is that Windows NT isn't limited to a single
userspace API/personality, historically it provided three (Win32, OS/2 and POSIX) in addition to its own Native API.
That’s the theory. In practice, it doesn’t seem to have worked very
well. The POSIX “personality” for example, was essentially unusable.
My impression is that it worked fine.
You got to be kidding <https://www.youtube.com/watch?v=BOeku3hDzrM>.
But it seems like this person did not really understand the
Posix standard (from that time).
His main argument seems to be that it is not useful.
I believe that.
But that is the limitation of Posix (from that time).
VMS Posix was not considered useful by many either.
On 4/24/2024 8:36 PM, Dan Cross wrote:
In article <v0c6t7$2jtvn$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
The problem at hand has nothing to do with JSON. It is
a string to numeric and data types problem.
It is a problem with a format that permits silently lossy
conversions.
Trying to stuff 100000000000000001 into a 32 bit integer
or a 64 bit floating point creates a problem.
The developer made a design error. And the language/library
did not object.
But none of that is a problem in the format. The JSON has the
correct value.
[snip]
But selecting an appropriate data type for a given piece
of data based on its possible values and usage is
core responsibility for a developer.
One hopes one also has tools that permit detection of
truncation. You also ignored the point about how third-party
tools for manipulating JSON will silently lose data.
You must have missed this part:
# The fact that it ultimately is the developers responsibility
# to select proper data types does not mean that programming languages
# and JSON libraries can not help catch errors.
Why go to all the effort reimplementing the whole
Linux syscall interface and keeping it up-to-date when you could just
run the Linux kernel in a VM.
Are you trying to argue that moving code out of the kernel into
userspace is a bad idea?
On Fri, 26 Apr 2024 16:36:41 +1200, David Goodwin wrote:
Why go to all the effort reimplementing the whole
Linux syscall interface and keeping it up-to-date when you could just
run the Linux kernel in a VM.
Because that was what the whole “personality” system was supposed to be
for. In practice, it didn’t work.
Are you trying to argue that moving code out of the kernel into
userspace is a bad idea?
It would have been great if they could have implemented the Linux API that way, wouldn’t it? But they couldn’t do it.
On Wed, 24 Apr 2024 21:52:36 +1200, David Goodwin wrote:
The difference here is that Windows NT isn't limited to a single
userspace API/personality, historically it provided three (Win32, OS/2
and POSIX) in addition to its own Native API.
That’s the theory. In practice, it doesn’t seem to have worked very well.
The POSIX “personality” for example, was essentially unusable.
When the Windows engineers were working on WSL1, emulating Linux kernel
APIs on Windows, you would think they would have used this “personality” system. But they did not. I suspect it had already bitrotted into nonfunctionality by that point.
In the end, they had to give up, and bring in an honest-to-goodness Linux kernel, in WSL2.
WSLv1 exists and it does work surprisingly well.
WSLv2 works better though, and it is no doubt far easier to
maintain.
Still not sure what you're arguing here though. Are you suggesting
Windows NT should have used a monolithic kernel for some reason?
Or that a flexible design was a bad idea because it didn't work out
perfectly in one scenario over 30 years later?
On Fri, 26 Apr 2024 17:05:21 +1200, David Goodwin wrote:
WSLv1 exists and it does work surprisingly well.
But never quite good enough. And it is now abandoned.
WSLv2 works better though, and it is no doubt far easier to
maintain.
Still not sure what you're arguing here though. Are you suggesting
Windows NT should have used a monolithic kernel for some reason?
You tell me: are two monolithic kernels better than one?
Or that a flexible design was a bad idea because it didn't work out perfectly in one scenario over 30 years later?
In the one scenario where it could have achieved something genuinely
useful, implementing an API which is amply documented and even comes with
a full open-source reference implementation, it completely failed to
deliver the goods.
On Fri, 26 Apr 2024 16:36:41 +1200, David Goodwin wrote:
Why go to all the effort reimplementing the whole
Linux syscall interface and keeping it up-to-date when you could just
run the Linux kernel in a VM.
Because that was what the whole “personality” system was supposed to be for. In practice, it didn’t work.
I'm also not sure why you think WSL is a failure. Have you not used it?
On 2024-04-26, Arne Vajhøj <arne@vajhoej.dk> wrote:
When the subsystem concept was invented virtualization technology
was not where it is today.
Are you sure ?
https://en.wikipedia.org/wiki/VM_(operating_system)
When the subsystem concept was invented virtualization technology
was not where it is today.
On 4/26/2024 12:43 AM, Lawrence D'Oliveiro wrote:
On Fri, 26 Apr 2024 16:36:41 +1200, David Goodwin wrote:
Why go to all the effort reimplementing the whole Linux syscall
interface and keeping it up-to-date when you could just run the Linux
kernel in a VM.
Because that was what the whole “personality” system was supposed to be for. In practice, it didn’t work.
When the subsystem concept was invented virtualization technology was
not where it is today.
In article <v0fel2$3grqf$1@dont-email.me>, ldo@nz.invalid says...
On Fri, 26 Apr 2024 17:05:21 +1200, David Goodwin wrote:
WSLv1 exists and it does work surprisingly well.
But never quite good enough. And it is now abandoned.
It's still there and still works. And importantly it's still supported.
You tell me: are two monolithic kernels better than one?
NT isn't generally considered to have a monolithic kernel.
Windows NT started life as a next-generation portable high-end 32-bit
OS/2 implementation known as NT-OS/2.
If converting the entire userspace personality from one OS to another in
a year without any significant architectural changes doesn't validate
the design I don't know what would.
I'm also not sure why you think WSL is a failure.
On Fri, 26 Apr 2024 19:53:31 +1200, David Goodwin wrote:
In article <v0fel2$3grqf$1@dont-email.me>, ldo@nz.invalid says...
On Fri, 26 Apr 2024 17:05:21 +1200, David Goodwin wrote:
WSLv1 exists and it does work surprisingly well.
But never quite good enough. And it is now abandoned.
It's still there and still works. And importantly it's still supported.
You know how they like to use marketing-speak to avoid coming out and
saying something is EOL?
From <https://devblogs.microsoft.com/commandline/the-windows-subsystem-for-linux-in-the-microsoft-store-is-now-generally-available-on-windows-10-and-11/>:
Additionally, the in-Windows version of WSL will still receive
critical bug fixes, but the Store version of WSL is where new
features and functionality will be added.
So it’s quite clear: no more “new features and functionality”. And
when was the last time you saw a “critical bug fix” for WSL1, by the
way?
You tell me: are two monolithic kernels better than one?
NT isn't generally considered to have a monolithic kernel.
It has the GUI inextricably entwined into it.
It doesn’t have a
virtual filesystem layer--most filesystem features seem to be
specifically tied into NTFS.
It doesn’t have pluggable security
modules. Does it even have loadable modules at all?
And its “personality” system seems a lot more unwieldy and clumsy than Linux’s pluggable “binfmt” system.
Windows NT started life as a next-generation portable high-end 32-bit
OS/2 implementation known as NT-OS/2.
I know. Note that “32-bit”: it was never designed to make a transition
to 64-bit easy.
Also note that “portable” nonsense--that was another
abject failure.
As for “next-generation” ... drive letters, that’s all I need to say.
If converting the entire userspace personality from one OS to another in
a year without any significant architectural changes doesn't validate
the design I don't know what would.
Has anybody demonstrated OS/2 software actually running under NT? Just curious.
I'm also not sure why you think WSL is a failure.
WSL1 certainly is. Else there would not have been WSL2, would there?
In article <v0hdf0$3v35b$3@dont-email.me>, ldo@nz.invalid says...
On Fri, 26 Apr 2024 19:53:31 +1200, David Goodwin wrote:
NT isn't generally considered to have a monolithic kernel.
It has the GUI inextricably entwined into it.
The GUI actually lives in the Win32 Environment Subsystem. Until NT 4.0
it was all implemented entirely in userspace. For performance reasons
window management and a few other bits were moved into a kernel driver (win32k.sys) in NT 4.0. This driver is loaded by the Win32 Environment Subsystem (csrss.exe) when it starts.
So the kernel itself knows nothing about GUIs and until the Session
Manager Subsystem launches the Win32 Environment Subsystem, Windows NT
is an entirely text-mode operating system. This is why on-boot
filesystem checks aren't graphical: the check disk tool is implemented
using the Native API and run before the Win32 subsystem has started
meaning there is no GUI for it to use.
I'm also not sure why you think WSL is a failure.
WSL1 certainly is. Else there would not have been WSL2, would there?
WSLv2 mostly improves filesystem performance and gives you support for
linux kernel modules. The downside is accessing files outside of the
linux filesystem requires using the 9P protocol and the same is true for accessing linux files from Windows.
WSLv1 runs on top of the regular Windows filesystem storing all its
files in NTFS so there is no performance penalty for accessing stuff
outside the "linux area" beyond the general filesystem performance
penalty WSLv1 itself imposes.
I believe the original design goal for WSLv1 was actually for running
Android apps on Windows 10 Mobile. A more limited variety of
applications on hardware that would not necessarily be appropriate for running a hypervisor and linux kernel. This code was later repurposed as WSLv1. Perhaps if general linux binary compatibility for desktop PCs was
the initial goal WSLv1 would have been designed differently.
MS own explanation is:
<quote>
WSL 2 is a major overhaul of the underlying architecture and uses virtualization technology and a Linux kernel to enable new features. The primary goals of this update are to increase file system performance and
add full system call compatibility.
</quote>
https://learn.microsoft.com/en-us/windows/wsl/compare-versions
I have always believed the second one was the key - emulating an OS
95% or 99% or 99.9% is doable - emulating an OS 100% is difficult.
In article <v06t9m$fni$4@tncsrv09.home.tnetconsulting.net>,
Grant Taylor <gtaylor@tnetconsulting.net> wrote:
Regardless, I wouldn't consider sendmail's config stuff anywhere
analogous to a Makefile or header; more like APL source code
perhaps.
On Di 23 Apr 2024 at 02:07, cross@spitfire.i.gajendra.net (Dan Cross) wrote:
In article <v06t9m$fni$4@tncsrv09.home.tnetconsulting.net>,
Grant Taylor <gtaylor@tnetconsulting.net> wrote:
Regardless, I wouldn't consider sendmail's config stuff anywhere
analogous to a Makefile or header; more like APL source code
perhaps.
And what is so bad about APL?
On Tue, 23 Apr 2024 15:07:32 +1000, motk wrote:
Then perhaps be a little more open minded and do some inquiry of your
own as well.
systemd-haters are like the anti-fluoridationists of the Open Source
world.
Discuss.
On 22/04/2024 10:03 am, chrisq wrote:
One could argue that if you are chasing races in scripts, there's
something else wrong in the basic system design :-).
Sure, but it's not always my job to fix that. Sometimes you have to deal
with what you get.
Having used Solaris for decades, their svcadm and other service
management tools seemed quite lightweight, in that all the config
scripts were in the usual places. Still in plain text format, as were
the log files, which could be manually edited with no ill effect. In
essence, a layer on top of what was already there. The FreeBSD service
framework seems to work in the same way, a lightweight layer on top of
what was already there. Limited experience, but the AIX smit etc tools
also seem to work the same way, layered software design.
Now compare such an approach with that of systemd, and tell us why
such opaque complexity is a good thing...
What? Where is the opaqueness, where is the complexity? I'm absolutely baffled here. What plain text files do you need to edit? What are the usual places? I use this stuff daily, and it manifestly makes my working
life easier. At no point does it replace any of the traditional unix stuff
like bind or whatever *unless you specifically ask it to*. None of the
major distributions build it that way, except perhaps for people using
zfs root, or iot/cloud builds where they *want* a monolithic init.
It really sounds like you just need to sit down, read the documentation
from the systemd and distro side, and work out what works for you.
Sitting around carping at what is now an established industry standard because 'elegance' isn't how I choose to spend my time.
If you had ever worked on serious system design, you would realise that reliable system design depends on strict partitioning and encapsulation. Layered functionality, with defined interfaces.
No, I don't need to sit down and spend hours reading docs on
something I don't need for my work and that is wrong by design ...
But then, I never had a problem with sendmail.cf either and I ran
sendmail based mailservers for many years.
Perhaps some really can't see a need for [systemd] ...
On Sat, 27 Apr 2024 16:50:06 +0100, chrisq wrote:
Perhaps some really can't see a need for [systemd] ...
That’s entirely fair. That’s why we have so many Linux distros that don’t
use it. Open Source is all about choice.
What is inexplicable is the hostility from those who simply seem to hate
the fact that systemd exists, and that it is quite popular. The noise they make is entirely out of proportion to their numbers. Kind of like anti-fluoridationists.
On Sat, 27 Apr 2024 10:18:41 -0400, bill wrote:
But then, I never had a problem with sendmail.cf either and I ran
sendmail based mailservers for many years.
Even with the Sendmail book in hand, I was still never entirely sure what
I was doing. A later version brought in configuration with m4 macros, but trying to move to that seemed to cause more problems than it solved.
Then, by mutual agreement with the client, we moved to Postfix. And that
made so much more sense, I never looked back.
On Sat, 27 Apr 2024 17:13:05 +0100, chrisq wrote:
If you had ever worked on serious system design, you would realise that
reliable system design depends on strict partitioning and encapsulation.
Layered functionality, with defined interfaces.
All of which applies to systemd, its config definition system and its
APIs. It is very much modular. The irreducible core is the systemd init process, journald and udevd. That’s it. There are 69 individual binaries
if you choose to build everything, but all except that core are optional.
And the fact they are separate binaries should tell you how modular everything is.
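For what it's worth, the configuration side is just declarative plain text. A minimal service unit (unit and binary names here are hypothetical, purely for illustration) looks like this:

```ini
[Unit]
Description=Example nightly report job (hypothetical)
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nightly-report

[Install]
WantedBy=multi-user.target
```

Drop that in /etc/systemd/system/, run systemctl daemon-reload, and it is managed like any other service. No opaque binary config anywhere in that path.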
No, I don't need to sit down and spend hours reading docs on
something I don't need for my work and that is wrong by design ...
<http://0pointer.de/blog/projects/the-biggest-myths.html>
(Posted here not for the person I’m replying to, but for the sake of those willing to learn.)
What is inexplicable is the hostility from those who simply seem to hate
the fact that systemd exists, and that it is quite popular. The noise they make is entirely out of proportion to their numbers. Kind of like anti-fluoridationists.
On 4/27/24 23:55, Lawrence D'Oliveiro wrote:
On Sat, 27 Apr 2024 16:50:06 +0100, chrisq wrote:
Perhaps some really can't see a need for [systemd] ...
That’s entirely fair. That’s why we have so many Linux distros that
don’t use it. Open Source is all about choice.
Perhaps, but even Devuan still has traces of it, which suggests that
it's very difficult to get rid of.
... for example, why are log files in binary ?...
... not for those who can't be bothered making the effort..
On Sat, 27 Apr 2024 09:10:14 -0400, Arne Vajhøj wrote:
I suspect docker is a good example of something that requires a real
Linux kernel.
Microsoft were trying to implement Docker natively on Windows at one
point; wonder why they gave up?
I suspect docker is a good example of something that requires a real
Linux kernel.
In article <v0hdf0$3v35b$3@dont-email.me>, ldo@nz.invalid says...
So it’s quite clear: no more “new features and functionality”. And
when was the last time you saw a “critical bug fix” for WSL1, by the
way?
What makes you say they aren't fixing critical bugs?
[NT] has the GUI inextricably entwined into it.
The GUI actually lives in the Win32 Environment Subsystem.
It doesn’t have a
virtual filesystem layer--most filesystem features seem to be
specifically tied into NTFS.
I'm not sure what you mean by filesystem features being tied into
NTFS ...
It doesn’t have pluggable security
modules. Does it even have loadable modules at all?
It does have pluggable security modules. These are managed by the Local Security Authority Subsystem Service. Kerberos, SSL and NTLM modules
are provided out of the box ...
You can also completely replace the login screen if you like by reimplementing the GINA interface ...
And its “personality” system seems a lot more unwieldy and clumsy than Linux’s pluggable “binfmt” system.
It also goes beyond what binfmt does.
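For comparison, the Linux side being referred to is a one-line registration in /proc. A sketch (the handler name, magic string, and interpreter path are made up for illustration; this needs root and a mounted binfmt_misc):

```shell
# Register a handler: files beginning with the bytes "#!vms" get run via a
# hypothetical interpreter. Rule format is
# :name:type:offset:magic:mask:interpreter:flags: (type M = match by magic)
echo ':vmsexe:M::#!vms::/usr/local/bin/vms-run:' \
    > /proc/sys/fs/binfmt_misc/register

# Inspect the registration, then remove it again
cat /proc/sys/fs/binfmt_misc/vmsexe
echo -1 > /proc/sys/fs/binfmt_misc/vmsexe
```

After registration the kernel hands any matching binary straight to the listed interpreter on exec, which is how things like Wine and qemu-user get wired in transparently.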
Note that “32-bit”: it was never designed to make a transition
to 64-bit easy.
I ported C-Kermit for Windows in about a day IIRC.
Also note that “portable” nonsense--that was another
abject failure.
Windows NT has been publicly released for: MIPS R4000, x86, Alpha,
PowerPC, Itanium, x86-64, ARM, ARM64
It has been ported to, but not released on: i860XR, MIPS R3000,
Clipper, PA-RISC, 64bit Alpha. And work was started on a SPARC port, but it doesn't appear to have got far.
Not seeing the failure here.
As for “next-generation” ... drive letters, that’s all I need to say.
Drive letters are a feature of the Win32 environment subsystem - the
Win32 namespace. This is implemented on top of the NT namespace which
is provided by the Object Manager.
WSL1 certainly is [a failure]. Else there would not have been WSL2,
would there?
WSLv2 mostly improves ...
Perhaps if general linux binary compatibility for desktop PCs was
the initial goal WSLv1 would have been designed differently.
If we instead of looking at Linux emulation under Windows go to the
opposite direction Windows emulation under Linux, then we have Wine and https://appdb.winehq.org/ that lists what works and what does not work.
A lot works just fine. But for some "a lot" is not good enough.
On Sat, 27 Apr 2024 13:52:06 +1200, David Goodwin wrote:
In article <v0hdf0$3v35b$3@dont-email.me>, ldo@nz.invalid says...
So it’s quite clear: no more “new features and functionality”. And
when was the last time you saw a “critical bug fix” for WSL1, by the
way?
What makes you say they aren't fixing critical bugs?
You were the one who claimed it was “still supported”, not me. It is up to you to prove that point, if you can.
[NT] has the GUI inextricably entwined into it.
The GUI actually lives in the Win32 Environment Subsystem.
But it is not modular and replaceable, like on Linux. It took them a long time even to offer anything resembling a “headless” setup, and that only
for Windows Server. So it’s only a choice between Microsoft’s GUI, or no
GUI at all. There are no APIs to make anything else work.
It doesn’t have a
virtual filesystem layer--most filesystem features seem to be
specifically tied into NTFS.
I'm not sure what you mean by filesystem features being tied into
NTFS ...
Mount points as an alternative to drive letters--only work with NTFS. I
think also system booting only works with NTFS.
It doesn’t have pluggable security
modules. Does it even have loadable modules at all?
It does have pluggable security modules. These are managed by the Local Security Authority Subsystem Service. Kerberos, SSL and NTLM modules
are provided out of the box ...
Those are all for network security, not local security. I’m talking about things like SELinux and AppArmor. And containers.
You can also completely replace the login screen if you like by reimplementing the GINA interface ...
How wonderful. So they (partially) reinvented GUI login display managers
that *nix systems have had since the 1990s. Have they figured out how to
add that little menu that offers you a choice of GUI environment to run,
as well?
And its “personality” system seems a lot more unwieldy and clumsy than
Linux’s pluggable “binfmt” system.
It also goes beyond what binfmt does.
Does it indeed? Weren’t you making apologies about its limitations, due to its being 30 years old, elsewhere?
Note that “32-bit”: it was never designed to make a transition
to 64-bit easy.
I ported C-Kermit for Windows in about a day IIRC.
Not sure why that’s relevant.
Also note that “portable” nonsense--that was another
abject failure.
Windows NT has been publicly released for: MIPS R4000, x86, Alpha,
PowerPC, Itanium, x86-64, ARM, ARM64
It has been ported to, but not released on: i860XR, MIPS R3000,
Clipper, PA-RISC, 64bit Alpha. And work was started on a SPARC port, but it doesn't appear to have got far.
Not seeing the failure here.
All those ports are gone. All the non-x86 ones, except the ARM one, which continues to struggle.
As for “next-generation” ... drive letters, that’s all I need to say.
Drive letters are a feature of the Win32 environment subsystem - the
Win32 namespace. This is implemented on top of the NT namespace which
is provided by the Object Manager.
Which is somehow specifically tied into NTFS. Try this: create and mount a FAT volume, mount it somewhere other than a drive letter, create a
directory within it, and mount some other non-NTFS volume on that
directory.
The Linux VFS layer doesn’t care: it all works. No drive letters, no
special dependency on one particular filesystem.
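The Linux sequence being described would be roughly this (device names are assumed for illustration; the commands need root):

```shell
mkfs.vfat /dev/sdb1              # a FAT volume
mkdir -p /mnt/fat
mount /dev/sdb1 /mnt/fat         # mounted on a plain directory, no drive letter
mkdir /mnt/fat/inner
mount /dev/sdc1 /mnt/fat/inner   # a second non-NTFS volume nested inside the first
```

The VFS treats every mounted filesystem identically, so nesting arbitrary filesystem types works without any of them being privileged.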
WSL1 certainly is [a failure]. Else there would not have been WSL2,
would there?
WSLv2 mostly improves ...
“Compatibility”. Go on, say it: WSL1 could not offer good enough compatibility with Linux.
Perhaps if general linux binary compatibility for desktop PCs was
the initial goal WSLv1 would have been designed differently.
How on earth do you design a “personality” that does not properly support
the APIs being emulated? What exactly is it supposed to run, if not “binaries” from the platform being emulated?
On Sun, 28 Apr 2024 00:59:11 +0100, chrisq wrote:
On 4/27/24 23:55, Lawrence D'Oliveiro wrote:
On Sat, 27 Apr 2024 16:50:06 +0100, chrisq wrote:
Perhaps some really can't see a need for [systemd] ...
That’s entirely fair. That’s why we have so many Linux distros that
don’t use it. Open Source is all about choice.
Perhaps, but even Devuan still has traces of it, which suggests that
it's very difficult to get rid of.
You mean, the much-ballyhooed “systemd-free” distro isn’t so “systemd-free” after all?
... for example, why are log files in binary ?...
Let me see if I can count the ways:
* Easy logfile rotation, just by deleting expired records instead of
  having to rewrite the whole file
* Quick lookup of entries by attributes, including ones that cannot be
  faked by services themselves
* Timestamps that can be interpreted in any timezone
Actually, there’s lots more. Have a look at the design document <https://docs.google.com/document/d/1IC9yOXj7j6cdLLxWEBAGRL6wl97tFxgjLUEHIX3MSTs/pub>.
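The indexed-lookup point is easy to see from the query side. Standard journalctl usage; the unit name here is just an example:

```shell
journalctl -u sshd --since "09:00" --until "17:00"  # filter by unit and time window
journalctl _UID=1000 -p warning                     # filter by a kernel-verified field
journalctl -o short-full --utc                      # render the same records in UTC
```

None of that requires grepping and re-parsing flat text files, and the _UID-style fields are attached by journald itself, so a compromised service can't forge them.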
Just a pro tip: the time to ask questions is *before* you start
spouting off about how terrible something is, not *after*.
A lot of it comes from the fact that due to corporate standards or
due to support of commercial applications, many Linux users are
forced into running RH or an RH-alike and they don't actually have
choice.
Which isn't really systemd's fault, or really even RH's fault, but
it is a definite failing of the linux community in some ways.
On Sun, 28 Apr 2024 00:54:38 +0100, chrisq wrote:
... not for those who can't be bothered making the effort..
Says the one who has already said he can’t be bothered learning why he’s wrong.
A cynic might say that the whole idea of systemd is to deskill
system management, so that the half brain dead can be employed, rather
than people who actually understand how systems work.
As someone who produces commercial software for Linux, I obviously want
to support as many of the distributions that my customers use as possible
with the same build. Once you get into multiple builds and multiple sets
of testing, costs and inconvenience have a tendency to combinatorial
explosion.
Doing that is just /easier/ if you build on RHEL, or its work-alikes, and
allows you access to updated compilers (via the GCC Toolsets) more
swiftly than other stable and supported distributions. None of the
software I produce goes anywhere near systemd.
Yes! And the odds are that if I got your binary distribution I
could probably make it run fine on Slackware (without systemd)
after spending an afternoon or two playing with libraries and
moving files around. Because Linux distributions don't vary
THAT much.
But, were I to do that, if I called you for support on your
software and explained I was running it on Slackware, the odds
are that the first thing you would do would be to tell me to
move it to a supported RH system.
And that is why.... I and many thousands of others run RH when we
actually don't like the direction RH is heading at all.
In article <v0lm82$2gs$1@panix2.panix.com>, kludge@panix.com (Scott
Dorsey) wrote:
And that is why.... I and many thousands of others run RH when we
actually don't like the direction RH is heading at all.
I don't like it either. It's legal, but it's not in the spirit of the GPL.
Note that the RH topic now seems to have moved from their push of
systemd ...
I read the article linked to elsewhere. Something like >60 separate
binaries for all the functionality. Its entrails extend into every aspect
of the system, far more than is needed for init.
If you had ever worked on serious system design, you would realise
that reliable system design depends on strict partitioning and
encapsulation. Layered functionality, with defined interfaces.
Strangely enough, but "elegance" really does apply in many such
cases, and is what many software engineers strive for.
It really sounds like you just need to sit down, read the documentation from the systemd and distro side, and work out what works for you.
Sitting around carping at what is now an established industry standard because 'elegance' isn't how I choose to spend my time.
No, I don't need to sit down and spend hours reading docs on
something I don't need for my work and that is wrong by design,
though I guess design elegance does depend on personal opinion.
Those who need to, will learn how to drive it and there are probably
dozens of sites online demystifying the process. Computing is a complex subject, not for those who can't be bothered making the effort..
On 28/04/2024 8:57 pm, chrisq wrote:
[absolute nonsense]
OK, you should probably just not express any opinions here. You're
clearly beyond the horizon.
On 28/04/2024 10:05 am, chrisq wrote:
I read the article linked to elsewhere. Something like >60 separate
binaries for all the functionality. Its entrails extend into every aspect
of the system, far more than is needed for init.
Are you okay? Why are you ignoring the clear explanations given, and
just sitting on your bum grumping like Eeyore?
This is just sad.
On 4/30/24 06:23, motk wrote:
On 28/04/2024 8:57 pm, chrisq wrote:
[absolute nonsense]
OK, you should probably just not express any opinions here. You're
clearly beyond the horizon.
So, what did you disagree with in what was written, or perhaps you
think large corporations are full of altruistic love first, rather
than grasping for profit ?. Just having a fit is not a valid
response :-).
Anyway, didn't red hat sell out to ibm just recently, and didn't
pottying go to work for Microsoft ?.
On 28/04/2024 2:13 am, chrisq wrote:
If you had ever worked on serious system design, you would realise
that reliable system design depends on strict partitioning and
encapsulation. Layered functionality, with defined interfaces.
Strangely enough, but "elegance" really does apply in many such
cases, and is what many software engineers strive for.
Plan9 has ruined so many brains.
Nice veiled insult, I guess?
It really sounds like you just need to sit down, read the documentation
from the systemd and distro side, and work out what works for you.
Sitting around carping at what is now an established industry standard because 'elegance' isn't how I choose to spend my time.
No, I don't need to sit down and spend hours reading docs on
something I don't need for my work and that is wrong by design,
though I guess design elegance does depend on personal opinion.
You sure have plenty of opinions to chuck over the fence though.
3000 lines of source, 116 Kbytes.
It's quite clear that what is driving the adoption of systemd is
corporate interests ...
... that want / need a unified system management framework.
Really, just for networking ?
On 4/27/2024 10:15 PM, Lawrence D'Oliveiro wrote:
On Sat, 27 Apr 2024 09:10:14 -0400, Arne Vajhøj wrote:
I suspect docker is a good example of something that requires a real
Linux kernel.
Microsoft were trying to implement Docker natively on Windows at one
point; wonder why they gave up?
Docker for Windows is available.
The way that so much of the unix infrastructure has been replaced,
coercing the system to suit the needs of systemd ...
In article <v0lm82$2gs$1@panix2.panix.com>, kludge@panix.com (Scott
Dorsey) wrote:
Yes! And the odds are that if I got your binary distribution I
could probably make it run fine on Slackware (without systemd)
after spending an afternoon or two playing with libraries and
moving files around. Because Linux distributions don't vary
THAT much.
But, were I to do that, if I called you for support on your
software and explained I was running it on Slackware, the odds
are that the first thing you would do would be to tell me to
move it to a supported RH system.
I can do better than that. Assuming Distrowatch's page on Slackware is
accurate, I can tell you that it should run on Slackware 15.0 or later
and definitely won't run on 14.2 or earlier. That much, I can get from
the glibc and gcc versions.
If it won't run for you on 15.0 or later, then I'll ask you to try a RHEL
work-alike, to eliminate the possibility that it's something about your
local setup.
If it works on Rocky and not on Slackware 15.0, then I'll start asking
more detailed questions and getting a Slackware VM set up.
All I can say is that you're a lot more helpful toward customers
than the folks at Mathworks are....
and I suspect you are linking in a lot fewer libraries than
their bloated code does.
In article <v0s562$ehh$1@panix2.panix.com>, kludge@panix.com (Scott
Dorsey) wrote:
All I can say is that you're a lot more helpful toward customers
than the folks at Mathworks are....
I'm supplying mathematical modelling libraries to ISVs. I have to be reasonably helpful.
and I suspect you are linking in a lot fewer libraries than
their bloated code does.
You're right. I'm only using glibc and the GCC language run-times. It
makes life a lot simpler.
On 5/1/2024 4:29 AM, John Dallman wrote:
In article <v0s562$ehh$1@panix2.panix.com>, kludge@panix.com
I'm supplying mathematical modelling libraries to ISVs. I have to
be reasonably helpful.
and I suspect you are linking in a lot fewer libraries than
their bloated code does.
You're right. I'm only using glibc and the GCC language run-times.
It makes life a lot simpler.
No BLAS, LAPACK, Octave etc.?
On Sun, 28 Apr 2024 12:22:03 +0100, chrisq wrote:
The way that so much of the unix infrastructure has been replaced,
coercing the system to suit the needs of systemd ...
For example?
Here’s the kind of stuff we have to deal with in a modern network
stack:
On 4/30/24 18:00, Chris Townley wrote:
On 30/04/2024 17:36, chrisq wrote:
Anyway, didn't red hat sell out to ibm just recently, and didn't
pottying go to work for Microsoft ?.
IBM completed the purchase of Red Hat in 2019
Thanks, no doubt some cooperation long before that, just as red hat
are also connected to Oracle for their Linux offering.
On 30/04/2024 17:36, chrisq wrote:
On 4/30/24 06:23, motk wrote:
On 28/04/2024 8:57 pm, chrisq wrote:
[absolute nonsense]
OK, you should probably just not express any opinions here. You're
clearly beyond the horizon.
So, what did you disagree with in what was written, or perhaps you
think large corporations are full of altruistic love first, rather
than grasping for profit ?. Just having a fit is not a valid
response :-).
Anyway, didn't red hat sell out to ibm just recently, and didn't
pottying go to work for Microsoft ?.
IBM completed the purchase of Red Hat in 2019
Yes, it's the fashion
to write commentless code these days,
On 5/3/2024 8:03 AM, chrisq wrote:
Yes, it's the fashion
to write commentless code these days,
Since when?
If so, then things are worse than I could imagine.
On 5/3/2024 10:16 AM, Dave Froble wrote:
On 5/3/2024 8:03 AM, chrisq wrote:
Yes, it's the fashion
to write commentless code these days,
Since when?
If so, then things are worse than I could imagine.
I will claim that the "recommended approach"
today actually is to use comments.
Clean Code, Code Complete etc..
Note though that there is a strong focus on useful
comments vs useless comments.
Useless comments are comments that explain what
the code does, but if the reader knows the programming
language, then those are redundant because the code
already provides that information, and they are in fact
bad because they clutter up the code.
Useful comments are comments that explain why the code
does what it does.
Super simple example useless:
// add 1 to ix
ix = ix + 1
Super simple example useful:
// skip separating comma
ix = ix + 1
But "recommended approach" and "used everywhere" are
of course two different things.
In general the overall picture of software quality is
very mixed. Some good, a lot ok and some bad. Maybe
even a lot bad.
The number of software developers has increased x10 or more.
And no surprise, the average skill level of 30 million
software developers is lower than that of 3 million software
developers.
Current fashions in development methodologies do not
favor strict processes.
There may also be a generational thing of "I will do as
I am told" vs "I will do as I want to".
So in the real world some write useless comments because
they don't know better, some write no comments because
they can get away with it and some write no comments
because they feel very cool by proclaiming that
"code should be self-explanatory".
And some still write useful comments because they got it.
Arne
Oracle Linux is another RHEL clone, so Oracle obviously like Redhat.
Redhat is not so happy with the cloners. And if I were to guess
then they are more angry with Oracle a multi B$ company than with
Rocky a very small company.
:-)
The following is some snippets from what was basically a research program, and I
consider commenting such should be rather terse. For production programs I'd
want a lot more.
First, declare the purpose. Shouldn't this always be done?
!********************************************************************
!
! Program: TCP_PEEK.BAS
! Function: Test Using TCP/IP Sockets as a Listener
! Version: 1.00
! Created: 01-Dec-2011
! Author(s): DFE
!
! Purpose/description:
!
! This program will set up TCP/IP sockets to allow
! itself to listen for connection requests. When
! a connection request is received, this program
! will accept the connection, and then attempt to
! PEEK the message, ie; read it but leave it available
! to be re-read.
When using custom defined structures, it might be nice to know what they will be
used for.
!**************************************************
! Declare Variables of User Defined Structures
!**************************************************
DECLARE IOSB_STRUCT IOSB, ! I/O status blk &
ITEMLIST_2 SERVER.ITEMLST, ! Server item list &
ITEMLIST_2 SOCKOPT.ITEMLST, ! Socket options list &
ITEMLIST_2 REUSEADR.ITEMLST, ! Reuse adr list &
ITEMLIST_3 CLIENT.ITEMLST, ! Client item list &
SOCKET_OPTIONS LISTEN.OPTN, ! Socket options &
SOCK_ADDR CLIENT.ADR, ! Client IP adr/port &
SOCK_ADDR SERVER.ADR, ! Server IP adr/port &
BUFF CLIENT.NAME, ! Client name buffer &
BUFF SERVER.NAME, ! Server name buffer &
IP_ADR IP, ! Ip address &
BUFF MSG ! Message buffer
I consider the following rather terse. Either the programmer knows how to use system services, or perhaps remedial training is called for.
!**************************************************
! Assign channels to 'TCPIP$DEVICE:'
!**************************************************
Dev$ = "TCPIP$DEVICE:"
Stat% = SYS$ASSIGN( Dev$ , ListenCh% , , )
If ( Stat% And SS$_NORMAL ) = 0%
Then E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to assign listener channel - "; E$
GoTo 4900
End If
Print #KB%, "Internal VMS channel for listener socket:"; ListenCh%
Stat% = SYS$ASSIGN( Dev$ , ClientCh% , , )
If ( Stat% And SS$_NORMAL ) = 0%
Then E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to assign client channel - "; E$
GoTo 4900
End If
However, when details might be helpful, there is usually never too much.
!**************************************************
! Create Listener socket
! Bind server's IP address and port # to listener
! socket, set socket as a passive socket
! Note: we used to do this in 2 calls, but can be combined
!**************************************************
LISTEN.OPTN::PROTOCOL% = TCPIP$C_TCP ! Listener socket optn
LISTEN.OPTN::TYP$ = Chr$(TCPIP$C_STREAM)
LISTEN.OPTN::AF$ = Chr$(TCPIP$C_AF_INET)
SOCKOPT.ITEMLST::LEN% = 8% ! Socket options buffer
SOCKOPT.ITEMLST::TYP% = TCPIP$C_SOCKOPT
SOCKOPT.ITEMLST::ADR% = Loc(REUSEADR.ITEMLST::Len%)
REUSEADR.ITEMLST::LEN% = 4% ! Reuse adr (port #)
REUSEADR.ITEMLST::TYP% = TCPIP$C_REUSEADDR
REUSEADR.ITEMLST::ADR% = Loc(ReuseAdrVal%)
ReuseAdrVal% = 1% ! Set to 'True'
SERVER.ITEMLST::LEN% = 16% ! Server item list
SERVER.ITEMLST::TYP% = TCPIP$C_SOCK_NAME
SERVER.ITEMLST::ADR% = Loc(SERVER.ADR::Fam%)
SERVER.ADR::Fam% = TCPIP$C_AF_INET ! Server Ip adr/port
SERVER.ADR::PORT% = SWAP%(ServerPort%)
SERVER.ADR::IP.ADR% = TCPIP$C_INADDR_ANY
SERVER.ADR::ZERO1% = 0%
SERVER.ADR::ZERO2% = 0%
BACKLOG% = 1%
Stat% = SYS$QIOW( , ! Event flag &
ListenCh% By Value, ! VMS channel &
IO$_SETCHAR By Value, ! Operation &
IOSB::Stat%, ! I/O status block &
, ! AST routine &
, ! AST parameter &
LISTEN.OPTN::Protocol%, ! P1 &
, ! P2 &
SERVER.ITEMLST::Len%, ! P3 - local socket name &
BACKLOG% By Value, ! P4 - connection backlog &
SOCKOPT.ITEMLST::Len%, ! P5 - socket options &
) ! P6
If ( Stat% And SS$_NORMAL ) = 0%
Then E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to queue create and bind listener socket - "; E$
GoTo 4900
End If
If ( IOSB::Stat% And SS$_NORMAL ) = 0%
Then Stat% = IOSB::Stat%
E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to create and bind listener socket - "; E$
GoTo 4900
End If
My opinion is, the above is essential; without it, there would be much studying
of code, wondering what is being referenced, and such. I always use one line
for each argument in a QIO and such, which makes it very clear what is
happening. Without that, even the best will still have some "fun" reading the
code to figure out what is happening.
On 5/3/2024 10:16 AM, Dave Froble wrote:
On 5/3/2024 8:03 AM, chrisq wrote:
Yes, it's the fashion
to write commentless code these days,
Since when?
If so, then things are worse than I could imagine.
I will claim that the "recommended approach"
today actually is to use comments.
Clean Code,
Code Complete etc..
Note though that there is a strong focus on useful
comments vs useless comments.
Useless comments are comments that explain what
the code does, but if the reader knows the programming
language, then those are redundant because the code
already provides that information, and they are in fact
bad because they clutter up the code.
Useful comments are comments that explain why the code
does what it does.
See above.
[snip]
// skip separating comma
ix = ix + 1
On 4/30/24 23:56, Lawrence D'Oliveiro wrote:
On Sun, 28 Apr 2024 12:22:03 +0100, chrisq wrote:
The way that so much of the unix infrastructure has been replaced,
coercing the system to suit the needs of systemd ...
For example?
Normally run FreeBSD for development, but needed to run a current
version of Linux, to test against an open source project, that it would
build and run without issue. Installed latest Xubuntu and Debian, both
of which operate under systemd, with no opt-out at install time.
All looks good at desktop level, but the networking config didn't stick
without a reboot, and things like ifconfig, ntp, inetd and other stuff
were missing.
It's also not clear how to remove the systemd stuff ...
In article <v130js$k7o6$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 5/3/2024 10:16 AM, Dave Froble wrote:
On 5/3/2024 8:03 AM, chrisq wrote:
Yes, it's the fashion
to write commentless code these days,
Since when?
If so, then things are worse than I could imagine.
I will claim that the "recommended approach"
today actually is to use comments.
Clean Code,
Clean Code specifically suggests that comments should be
avoided. I believe he wrote that he considers comments, "a
failure."
Of course, that's Robert Martin, who knows very
little and is a mediocre programmer, so take anything that he
says with a very large dose of salt.
Note though that there is a strong focus on useful
comments vs useless comments.
Useless comments are comments that explains what
the code does, but if the reader knows the programming
language, then those are redundant because the code
already provide that information, and they are in fact
bad because they clutter up the code.
As with most generalities, this is true most of the time, but
not all of the time.
When an implementation is not obvious, has very subtle
side-effects, or uses a very complex algorithm, it can be very
useful to explain _what_ the code does. But that's very
different than obviously parroting the code as in the, "add one
to x" example that always pops up when this subject comes up.
Useful comments are comments that explain why the code
does what it does.
See above.
[snip]
// skip separating comma
ix = ix + 1
Perhaps. But it may be possible to write this in a way that is
much more obvious, without the comment. Here, given context we
may assume that, `ix` is some kind of index into a buffer that
contains string data. In this case, we may be able to write
something like this:
size_t
advance_if_char_matches(const char *buffer, size_t index, char ch)
{
if (buffer[index] == ch)
index++;
return index;
}
// ...
ix = advance_if_char_matches(str, ix, ',');
As Linux becomes ever more absorbed by commercial interests, expect far
less transparency and more control...
On Tue, 30 Apr 2024 23:04:02 -0000 (UTC), Lawrence D'Oliveiro wrote:
On Tue, 30 Apr 2024 18:50:55 +0100, chrisq wrote:
Really, just for networking ?
You seem to have a very simplistic idea of what “networking” is all
about. Maybe, given the group we’re in, your experience dates from, I
don’t know, DECnet days? Netware, maybe?
Here’s the kind of stuff we have to deal with in a modern network stack: <https://www.freedesktop.org/software/systemd/man/latest/systemd.netdev.html>
[blah blah blah]
Real time embedded background here ...
[blah blah blah]
Oracle gets money that would otherwise likely go to Red Hat ...
Useful comments are comments that explain why the code
does what it does.
On 5/3/2024 8:38 PM, Dan Cross wrote:
In article <v130js$k7o6$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 5/3/2024 10:16 AM, Dave Froble wrote:
On 5/3/2024 8:03 AM, chrisq wrote:
Yes, it's the fashion
to write commentless code these days,
Since when?
If so, then things are worse than I could imagine.
I will claim that the "recommended approach"
today actually is to use comments.
Clean Code,
Clean Code specifically suggests that comments should be
avoided. I believe he wrote that he considers comments, "a
failure."
That is not an accurate description of what Clean Code
says.
He does say that:
"The proper use of comments is to compensate for our failure
to express ourself in code. Note that I use the word failure."
But on the next page he opens up with:
"Some comments are necessary or beneficial."
And move on with some examples where he does see value in
comments.
So Clean Code does not suggest that comments should be
avoided. It just says that if well-written code can make
a comment unnecessary, then that is better.
Of course, that's Robert Martin, who knows very
little and is a mediocre programmer, so take anything that he
says with a very large dose of salt.
Ever wondered what Robert Martin thinks about you?
:-)
Note though that there is a strong focus on useful
comments vs useless comments.
Useless comments are comments that explain what
the code does, but if the reader knows the programming
language, then those are redundant because the code
already provides that information, and they are in fact
bad because they clutter up the code.
As with most generalities, this is true most of the time, but
not all of the time.
When an implementation is not obvious, has very subtle
side-effects, or uses a very complex algorithm, it can be very
useful to explain _what_ the code does. But that's very
different than obviously parroting the code as in the, "add one
to x" example that always pops up when this subject comes up.
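As a concrete sketch of that distinction (a hypothetical example, not one from the thread; the function name is mine):

```c
/* A non-obvious expression benefits from a comment that explains it;
 * a comment that merely parrots the operators adds nothing. */
unsigned clear_lowest_set_bit(unsigned x)
{
    /* Useless: "AND x with x minus one". */
    /* Useful: x & (x - 1) clears the lowest set bit, because
     * subtracting 1 flips that bit and every bit below it. */
    return x & (x - 1);
}
```

The second comment survives even if the reader has never seen the bit trick; the first is redundant the moment you can read C.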
Useful comments are comments that explain why the code
does what it does.
See above.
An algorithm description should be more a "why" than a "what".
[snip]
// skip separating comma
ix = ix + 1
Perhaps. But it may be possible to write this in a way that is
much more obvious, without the comment. Here, given context we
may assume that, `ix` is some kind of index into a buffer that
contains string data. In this case, we may be able to write
something like this:
size_t
advance_if_char_matches(const char *buffer, size_t index, char ch)
{
    if (buffer[index] == ch)
        index++;
    return index;
}
// ...
ix = advance_if_char_matches(str, ix, ',');
If the conditional aspect increases robustness and the function
will be used more than one place, then it is all good.
But it does not change much for the comment.
// skip separating comma if present
ix = advance_if_char_matches(str, ix, ',');
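Filled out into a compilable form (note the parameter type needs to be spelled `const char *buffer`; the checks shown in the test are mine, added for illustration):

```c
#include <stddef.h>

/* Advance index past buffer[index] only when it matches ch. */
size_t
advance_if_char_matches(const char *buffer, size_t index, char ch)
{
    if (buffer[index] == ch)
        index++;
    return index;
}
```

Called as in the thread, `ix = advance_if_char_matches(str, ix, ',')` leaves `ix` unchanged when no comma is present, which is exactly the robustness point made above.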
So Clean Code does not suggest that comments should be
avoided. It just says that if well-written code can make
a comment unnecessary, then that is better.
That's literally the definition of what "should be avoided"
means in this context. Clean Code is a poorly written book full
of bad advice; it's really best avoided.
In article <v1366u$lf87$1@dont-email.me>,
Dave Froble <davef@tsoft-inc.com> wrote:
:-)
The following is some snippets from what was basically a research program, and I
consider commenting such should be rather terse. For production programs I'd
want a lot more.
I hope you won't mind some feedback?
Overall, there's a concept of, "too much of a good thing."
First, declare the purpose. Shouldn't this always be done?
Yes, but see below.
!********************************************************************
!
What does the line of asterisks buy you?
! Program: TCP_PEEK.BAS
Why do you need this? Is it not in a file that is already
named, "TCP_PEEK.BAS"? What do you get by repeating it in the
file itself?
! Function: Test Using TCP/IP Sockets as a Listener
! Version: 1.00
! Created: 01-Dec-2011
! Author(s): DFE
Why the elaborate formatting of this front matter? What does
some of it even mean? How did you decide that this was version
1.00, for example? (Something like semver is much more useful
than an arbitrary major.minor here.) Is the creation date
useful versus, say, the last modification date?
Moreover, where do you record the history of the module?
Knowing how code has evolved over time can be very useful.
Frankly, all of this is metadata, which is better captured in a
revision control system than in comments at the top of a file,
which can easily get out of date over time.
!
! Purpose/description:
Well, which is it?
!
! This program will set up TCP/IP sockets to allow
! itself to listen for connection requests. When
! a connection request is received, this program
! will accept the connection, and then attempt to
! PEEK the message, ie; read it but leave it available
! to be re-read.
This is good, but honestly, kind of all you need at the top of
the file.
When using custom defined structures, it might be nice to know what they will be
used for.
!**************************************************
! Declare Variables of User Defined Structures
!**************************************************
DECLARE IOSB_STRUCT IOSB, ! I/O status blk &
ITEMLIST_2 SERVER.ITEMLST, ! Server item list &
ITEMLIST_2 SOCKOPT.ITEMLST, ! Socket options list &
ITEMLIST_2 REUSEADR.ITEMLST, ! Reuse adr list &
ITEMLIST_3 CLIENT.ITEMLST, ! Client item list &
SOCKET_OPTIONS LISTEN.OPTN, ! Socket options &
SOCK_ADDR CLIENT.ADR, ! Client IP adr/port &
SOCK_ADDR SERVER.ADR, ! Server IP adr/port &
BUFF CLIENT.NAME, ! Client name buffer &
BUFF SERVER.NAME, ! Server name buffer &
IP_ADR IP, ! Ip address &
BUFF MSG ! Message buffer
I don't think that these comments are useful at all, with the
possible exception of the one on IOSB. These comments just
parrot the code. If I saw, "SOCKOPT.ITEMLIST" how is it not
obvious that that is a, "Socket options list"? Note also that
the comment omits the fact that it's an item list, which is a
specific thing, rather than just a generic list. Half of the
item lists are annotated as item lists in the comments, but
the other half are not.
I consider the following rather terse. Either the programmer knows how to use
system services, or perhaps remedial training is called for.
!**************************************************
! Assign channels to 'TCPIP$DEVICE:'
!**************************************************
Dev$ = "TCPIP$DEVICE:"
Stat% = SYS$ASSIGN( Dev$ , ListenCh% , , )
If ( Stat% And SS$_NORMAL ) = 0%
Then E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to assign listener channel - "; E$
GoTo 4900
End If
Print #KB%, "Internal VMS channel for listener socket:"; ListenCh%
Stat% = SYS$ASSIGN( Dev$ , ClientCh% , , )
If ( Stat% And SS$_NORMAL ) = 0%
Then E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to assign client channel - "; E$
GoTo 4900
End If
It's also rather repetitive. It'd be better to wrap assignment
in some kind of helper function, IMHO. Without knowing more
about VMS BASIC, however, it's difficult to tell whether one
could do much better.
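In C, the wrapping Dan suggests might look something like the sketch below (hypothetical; `check_status` is an invented name, and the low-bit test mirrors the VMS convention where odd status values indicate success):

```c
#include <stdbool.h>
#include <stdio.h>

/* One place owns the "test status, report, bail" logic that the
 * BASIC code repeats after every SYS$ASSIGN call. */
static bool check_status(int stat, const char *what)
{
    if ((stat & 1) == 0) {          /* low bit clear: VMS failure status */
        fprintf(stderr, "Unable to %s - status %d\n", what, stat);
        return false;
    }
    return true;
}
```

Each call site then shrinks to something like `if (!check_status(stat, "assign listener channel")) goto cleanup;`.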
However, when details might be helpful, there is usually never too much.
!**************************************************
! Create Listener socket
! Bind server's IP address and port # to listener
! socket, set socket as a passive socket
I'm not sure this comment is accurate: it appears that most of
what this code is doing until the $QIOW is setting up data for
the $QIOW. Something like this may be more useful:
! Initialize IOSB data for listener socket creation,
! then create the socket and bind it to the server's
! IP address with the given port number.
! Note: we used to do this in 2 calls, but can be combined
Grammar: "but they were combined." Again, we see how missing
the history can lead to questions: why were they combined, and
when? Is that better somehow, other than the general principle
of doing less work?
!**************************************************
LISTEN.OPTN::PROTOCOL% = TCPIP$C_TCP ! Listener socket optn
LISTEN.OPTN::TYP$ = Chr$(TCPIP$C_STREAM)
LISTEN.OPTN::AF$ = Chr$(TCPIP$C_AF_INET)
SOCKOPT.ITEMLST::LEN% = 8% ! Socket options buffer
SOCKOPT.ITEMLST::TYP% = TCPIP$C_SOCKOPT
SOCKOPT.ITEMLST::ADR% = Loc(REUSEADR.ITEMLST::Len%)
REUSEADR.ITEMLST::LEN% = 4% ! Reuse adr (port #)
REUSEADR.ITEMLST::TYP% = TCPIP$C_REUSEADDR
REUSEADR.ITEMLST::ADR% = Loc(ReuseAdrVal%)
ReuseAdrVal% = 1% ! Set to 'True'
SERVER.ITEMLST::LEN% = 16% ! Server item list
SERVER.ITEMLST::TYP% = TCPIP$C_SOCK_NAME
SERVER.ITEMLST::ADR% = Loc(SERVER.ADR::Fam%)
SERVER.ADR::Fam% = TCPIP$C_AF_INET ! Server Ip adr/port
SERVER.ADR::PORT% = SWAP%(ServerPort%)
SERVER.ADR::IP.ADR% = TCPIP$C_INADDR_ANY
SERVER.ADR::ZERO1% = 0%
SERVER.ADR::ZERO2% = 0%
BACKLOG% = 1%
Stat% = SYS$QIOW( , ! Event flag &
ListenCh% By Value, ! VMS channel &
IO$_SETCHAR By Value, ! Operation &
IOSB::Stat%, ! I/O status block &
, ! AST routine &
, ! AST parameter &
LISTEN.OPTN::Protocol%, ! P1 &
, ! P2 &
SERVER.ITEMLST::Len%, ! P3 - local socket name &
BACKLOG% By Value, ! P4 - connection backlog &
SOCKOPT.ITEMLST::Len%, ! P5 - socket options &
) ! P6
If ( Stat% And SS$_NORMAL ) = 0%
Then E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to queue create and bind listener socket - "; E$
GoTo 4900
End If
If ( IOSB::Stat% And SS$_NORMAL ) = 0%
Then Stat% = IOSB::Stat%
E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to create and bind listener socket - "; E$
GoTo 4900
End If
My opinion is that the above is essential; without it, there would be much
studying of code, wondering what is being referenced, and such. I always use
one line for each argument in a QIO and such, which makes it very clear what
is happening. Without that, even the best will still have some "fun" reading
the code to figure out what is happening.
I agree that well-commented code is useful.
- Dan C.
"The proper use of comments is to compensate for our failure to express ourself in code. Note that I use the word failure."
Why do you need this? Is it not in a file that is already
named, "TCP_PEEK.BAS"? What do you get by repeating it in the
file itself?
Doesn't hurt.
Oracle gets money that would otherwise likely go to Red Hat ...
Interesting that they won’t offer their own ZFS next-generation
filesystem product with it, preferring to bundle btrfs instead. Vote
of confidence in your own product over rivals, much?
On Fri, 3 May 2024 11:41:47 -0400, Arne Vajhøj wrote:
Useful comments are comments that explain why the code
does what it does.
Absolutely agree. The key word is “why” the code does what it does, not
“what” it does (which is trivially obvious from the code itself).
In article <v0kbcs$q4lf$4@dont-email.me>, ldo@nz.invalid says...
You were the one who claimed it was “still supported”, not me. It is up
to you to prove that point, if you can.
Microsoft says it's still supported. I don't see any reason to claim or believe otherwise without proof.
There are APIs to make something else work - the same APIs the Win32 Environment Subsystem uses. The main roadblock is almost certainly the
fact the GUI that ships with the system is good enough and modifying it
is far easier than starting from scratch.
Mount points as an alternative to drive letters--only work with NTFS. I
think also system booting only works with NTFS.
This is primarily because NTFS is the only filesystem that ships with
windows that has the required features.
Mounted folders are implemented using reparse points - the same feature
used for hard links, symbolic links, junctions and other such things. I
think these are stored as extended attributes or similar which are not supported by FAT.
Those are all for network security, not local security. I'm talking
about things like SELinux and AppArmor. And containers.
As far as I can see most of what SELinux adds has been in Windows NT
since version 3.1.
How wonderful. So they (partially) reinvented GUI login display
managers that *nix systems have had since the 1990s. Have they figured
out how to add that little menu that offers you a choice of GUI
environment to run, as well?
They could put that menu there.
It also goes beyond what binfmt does.
Does it indeed? Weren't you making apologies about its limitations, due
to its being 30 years old, elsewhere?
No, I was not. And I think I've described NTs architecture more than
enough at this point for you to know that binfmt is not the same concept
as NTs Environment Subsystems.
I ported C-Kermit for Windows in about a day IIRC.
Not sure why that's relevant.
You were implying the transition from 32bit to 64bit is not easy. From
my experience this is not the case. I suspect you are just making
assumptions here.
All those ports are gone. All the non-x86 ones, except the ARM one,
which continues to struggle.
Yes, because all of those platforms are gone.
The fact that [Windows NT] has been ported to so many architectures
clearly demonstrates that it is fairly portable.
Drive letters are a feature of the Win32 environment subsystem - the
Win32 namespace. This is implemented on top of the NT namespace which
is provided by the Object Manager.
Which is somehow specifically tied into NTFS. Try this: create and
mount a FAT volume, mount it somewhere other than a drive letter,
create a directory within it, and mount some other non-NTFS volume on
that directory.
It would seem it isn't necessarily tied to NTFS but rather requires
certain filesystem features FAT doesn't have.
“Compatibility”. Go on, say it: WSL1 could not offer good enough
compatibility with Linux.
Not good enough for what?
My experience with btrfs was awful. It's fortunate I only tested it
and took backups, and it kept losing data. Turned to ZFS and it has been
rock solid. Ten years.
On Fri, 3 May 2024 18:42 +0100 (BST), John Dallman wrote:
Oracle gets money that would otherwise likely go to Red Hat ...
Interesting that they won't offer their own ZFS next-generation
filesystem product with it, preferring to bundle btrfs instead.
Vote of confidence in your own product over rivals, much?
My experience with btrfs was awful. It's fortunate I only tested it
and took backups, and it kept losing data. Turned to ZFS and it has been
rock solid. Ten years.
I had significant trouble with the version in RHEL7/CentOS7, where it
was the default root filesystem. It's still the default on SUSE
Enterprise. My local Linux expert tells me that it's been fixed now,
although another friend who spent some years working for SUSE
disagrees.
I had Android devices connected to a CentOS 7.9 machine, which did
Android builds, and staged data to be pushed onto the devices. When
the data set to be pushed was large (more than a few GB), btrfs would
often decide that it had run out of space, with more than 70% of a
1TB volume still free according to df. An fsck would fix the problem,
but it would reoccur a few days or weeks later.
My conclusion was that since I was moving to a new Rocky 8.9 machine
before CentOS 7.9 ran out of support, I'd have my staging area as
ext4, please, and that has been entirely satisfactory. I was quite
happy that it was removed from RHEL/Rocky/Alma 8.x; even if it is
fixed now, it has a bad reputation.
I had significant trouble with the [btrfs] version in RHEL7/CentOS7,
where it was the default root filesystem.
In article <fd016548559cb3c8fca2ab4f538153127cfe42c9.camel@munted.eu>,
alex.buell@munted.eu (Single Stage to Orbit) wrote:
My experience with btrfs was awful. It's fortunate I only tested it
and took backups, and it kept losing data. Turned to ZFS and it has been
rock solid. Ten years.
I had significant trouble with the version in RHEL7/CentOS7, where it was
the default root filesystem. It's still the default on SUSE Enterprise.
My local Linux expert tells me that it's been fixed now, although another
friend who spent some years working for SUSE disagrees.
I had Android devices connected to a CentOS 7.9 machine, which did
Android builds, and staged data to be pushed onto the devices. When the
data set to be pushed was large (more than a few GB), btrfs would often
decide that it had run out of space, with more than 70% of a 1TB volume
still free according to df. An fsck would fix the problem, but it would
reoccur a few days or weeks later.
My conclusion was that since I was moving to a new Rocky 8.9 machine
before CentOS 7.9 ran out of support, I'd have my staging area as ext4,
please, and that has been entirely satisfactory. I was quite happy that
it was removed from RHEL/Rocky/Alma 8.x; even if it is fixed now, it has
a bad reputation.
On 5/3/2024 9:04 PM, Dan Cross wrote:
In article <v1366u$lf87$1@dont-email.me>,
Dave Froble <davef@tsoft-inc.com> wrote:
:-)
The following is some snippets from what was basically a research program, and I
consider commenting such should be rather terse. For production programs I'd
want a lot more.
Do consider the above ...
I hope you won't mind some feedback?
No problem. But some background. I came from an environment where strict
programming practices were in effect. Certain things, such as the header of a
program file, had a strict format to follow, and much of the text came from an
IDE code generator. That allowed any support person to know where to look for
information, and what to look for.
[snip]
!********************************************************************
!
What does the line of asterisks buy you?
Strict formatting in every program. Important information was always highlighted.
! Program: TCP_PEEK.BAS
Why do you need this? Is it not in a file that is already
named, "TCP_PEEK.BAS"? What do you get by repeating it in the
file itself?
Doesn't hurt.
! Function: Test Using TCP/IP Sockets as a Listener
! Version: 1.00
! Created: 01-Dec-2011
! Author(s): DFE
Why the elaborate formatting of this front matter? What does
some of it even mean? How did you decide that this was version
1.00, for example? (Something like semver is much more useful
than an arbitrary major.minor here.) Is the creation date
useful versus, say, the last modification date?
For some, the first iteration of a program is version 1. I guess other
standards could be used.
Both dates can be useful.
Moreover, where do you record the history of the module?
Knowing how code has evolved over time can be very useful.
I didn't show that. As I wrote, some snippets from a program, not the entire
program.
Frankly, all of this is metadata, which is better captured in a
revision control system than in comments at the top of a file,
which can easily get out of date over time.
!
! Purpose/description:
Well, which is it?
Now, that's a bit too "picky" ...
!
! This program will set up TCP/IP sockets to allow
! itself to listen for connection requests. When
! a connection request is received, this program
! will accept the connection, and then attempt to
! PEEK the message, ie; read it but leave it available
! to be re-read.
When using custom defined structures, it might be nice to know what they will be
used for.
!**************************************************
! Declare Variables of User Defined Structures
!**************************************************
DECLARE IOSB_STRUCT IOSB, ! I/O status blk &
ITEMLIST_2 SERVER.ITEMLST, ! Server item list &
ITEMLIST_2 SOCKOPT.ITEMLST, ! Socket options list &
ITEMLIST_2 REUSEADR.ITEMLST, ! Reuse adr list &
ITEMLIST_3 CLIENT.ITEMLST, ! Client item list &
SOCKET_OPTIONS LISTEN.OPTN, ! Socket options &
SOCK_ADDR CLIENT.ADR, ! Client IP adr/port &
SOCK_ADDR SERVER.ADR, ! Server IP adr/port &
BUFF CLIENT.NAME, ! Client name buffer &
BUFF SERVER.NAME, ! Server name buffer &
IP_ADR IP, ! Ip address &
BUFF MSG ! Message buffer
I consider the following rather terse. Either the programmer knows how to use
system services, or perhaps remedial training is called for.
!**************************************************
! Assign channels to 'TCPIP$DEVICE:'
!**************************************************
Dev$ = "TCPIP$DEVICE:"
Stat% = SYS$ASSIGN( Dev$ , ListenCh% , , )
If ( Stat% And SS$_NORMAL ) = 0%
Then E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to assign listener channel - "; E$
GoTo 4900
End If
Print #KB%, "Internal VMS channel for listener socket:"; ListenCh%
Stat% = SYS$ASSIGN( Dev$ , ClientCh% , , )
If ( Stat% And SS$_NORMAL ) = 0%
Then E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to assign client channel - "; E$
GoTo 4900
End If
However, when details might be helpful, there is usually never too much.
!**************************************************
! Create Listener socket
! Bind server's IP address and port # to listener
! socket, set socket as a passive socket
! Note: we used to do this in 2 calls, but can be combined
!**************************************************
LISTEN.OPTN::PROTOCOL% = TCPIP$C_TCP ! Listener socket optn
LISTEN.OPTN::TYP$ = Chr$(TCPIP$C_STREAM)
LISTEN.OPTN::AF$ = Chr$(TCPIP$C_AF_INET)
SOCKOPT.ITEMLST::LEN% = 8% ! Socket options buffer
SOCKOPT.ITEMLST::TYP% = TCPIP$C_SOCKOPT
SOCKOPT.ITEMLST::ADR% = Loc(REUSEADR.ITEMLST::Len%)
REUSEADR.ITEMLST::LEN% = 4% ! Reuse adr (port #)
REUSEADR.ITEMLST::TYP% = TCPIP$C_REUSEADDR
REUSEADR.ITEMLST::ADR% = Loc(ReuseAdrVal%)
ReuseAdrVal% = 1% ! Set to 'True'
SERVER.ITEMLST::LEN% = 16% ! Server item list
SERVER.ITEMLST::TYP% = TCPIP$C_SOCK_NAME
SERVER.ITEMLST::ADR% = Loc(SERVER.ADR::Fam%)
SERVER.ADR::Fam% = TCPIP$C_AF_INET ! Server Ip adr/port
SERVER.ADR::PORT% = SWAP%(ServerPort%)
SERVER.ADR::IP.ADR% = TCPIP$C_INADDR_ANY
SERVER.ADR::ZERO1% = 0%
SERVER.ADR::ZERO2% = 0%
BACKLOG% = 1%
Stat% = SYS$QIOW( , ! Event flag &
ListenCh% By Value, ! VMS channel &
IO$_SETCHAR By Value, ! Operation &
IOSB::Stat%, ! I/O status block &
, ! AST routine &
, ! AST parameter &
LISTEN.OPTN::Protocol%, ! P1 &
, ! P2 &
SERVER.ITEMLST::Len%, ! P3 - local socket name &
BACKLOG% By Value, ! P4 - connection backlog &
SOCKOPT.ITEMLST::Len%, ! P5 - socket options &
) ! P6
If ( Stat% And SS$_NORMAL ) = 0%
Then E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to queue create and bind listener socket - "; E$
GoTo 4900
End If
If ( IOSB::Stat% And SS$_NORMAL ) = 0%
Then Stat% = IOSB::Stat%
E$ = FnVMSerr$( Stat% )
Print #KB%, "Unable to create and bind listener socket - "; E$
GoTo 4900
End If
My opinion is that the above is essential; without it, there would be much
studying of code, wondering what is being referenced, and such. I always use
one line for each argument in a QIO and such, which makes it very clear what
is happening. Without that, even the best will still have some "fun" reading
the code to figure out what is happening.
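For readers who know the BSD sockets API rather than $QIO, the same listener-and-PEEK flow looks roughly like this in C. This is a sketch under my own names (`make_listener`, `peek_message`), not a translation of the BASIC program:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a TCP listener bound to INADDR_ANY:port (0 = ephemeral).
 * Returns the listening fd, or -1 on error. */
int make_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int reuse = 1;                        /* cf. REUSEADR.ITEMLST above */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof reuse);

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;              /* cf. TCPIP$C_AF_INET */
    sa.sin_port = htons(port);            /* cf. SWAP%(ServerPort%) */
    sa.sin_addr.s_addr = htonl(INADDR_ANY);

    if (bind(fd, (struct sockaddr *)&sa, sizeof sa) < 0 ||
        listen(fd, 1) < 0) {              /* cf. BACKLOG% = 1% */
        close(fd);
        return -1;
    }
    return fd;
}

/* Read a message but leave it available to be re-read: MSG_PEEK
 * is the sockets-level equivalent of the program's PEEK. */
ssize_t peek_message(int fd, char *buf, size_t len)
{
    return recv(fd, buf, len, MSG_PEEK);
}
```

Accepting a connection and calling `peek_message` followed by a plain `recv` returns the same bytes twice, which is the behaviour the BASIC program is exercising.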
On Sun, 28 Apr 2024 17:03:19 +1200, David Goodwin wrote:
In article <v0kbcs$q4lf$4@dont-email.me>, ldo@nz.invalid says...
You were the one who claimed it was “still supported”, not me. It is up
to you to prove that point, if you can.
Microsoft says it's still supported. I don't see any reason to claim or believe otherwise without proof.
Nope, all you've done is continue to insist that Microsoft's PR weasel
words still mean something, contrary to past experience.
There are APIs to make something else work - the same APIs the Win32 Environment Subsystem uses. The main roadblock is almost certainly the
fact the GUI that ships with the system is good enough and modifying it
is far easier than starting from scratch.
That is absolutely laughable to claim it is “good enough”, given the long-standing complaints about the inflexibility of the Windows GUI.
Mount points as an alternative to drive letters--only work with NTFS. I
think also system booting only works with NTFS.
This is primarily because NTFS is the only filesystem that ships with windows that has the required features.
Like I said, in Linux these are not “features” specific to the filesystem, they are implemented in the VFS layer. So they are able to work across different filesystems--even ones from the Windows world, where Windows
itself is unable to support those features.
Mounted folders are implemented using reparse points - the same feature used for hard links, symbolic links, junctions and other such things. I think these are stored as extended attributes or similar which are not supported by FAT.
Why do you need on-disk attributes to store information about mount
points?
Those are all for network security, not local security. I'm talking
about things like SELinux and AppArmor. And containers.
As far as I can see most of what SELinux adds has been in Windows NT
since version 3.1.
You realize SELinux was created by the NSA? It offers military-strength role-based mandatory access control.
Maybe Windows has something equivalent to one of the simpler LSMs, like
maybe AppArmor. Not SELinux.
How wonderful. So they (partially) reinvented GUI login display
managers that *nix systems have had since the 1990s. Have they figured
out how to add that little menu that offers you a choice of GUI
environment to run, as well?
They could put that menu there.
Nobody could, apart from Microsoft. The GUI is not easily replaceable, remember.
It also goes beyond what binfmt does.
Does it indeed? Weren't you making apologies about its limitations, due
to its being 30 years old, elsewhere?
No, I was not. And I think I've described NTs architecture more than
enough at this point for you to know that binfmt is not the same concept
as NTs Environment Subsystems.
Ah, first you were claiming these “go beyond” binfmt, now you are trying
to backpedal by saying they are somehow not comparable at all?
Want to change your story yet again?
I ported C-Kermit for Windows in about a day IIRC.
Not sure why that's relevant.
You were implying the transition from 32bit to 64bit is not easy. From
my experience this is not the case. I suspect you are just making assumptions here.
No, I was looking for some relevance to the 32-bit-to-64-bit transition,
and your story had nothing about that.
All those ports are gone. All the non-x86 ones, except the ARM one,
which continues to struggle.
Yes, because all of those platforms are gone.
No, ARM and POWER and MIPS are all still very much here and continuing to
be made and sold. And like I said, even with the massive popularity of
ARM, Microsoft still can't get Windows running properly on it.
The fact that [Windows NT] has been ported to so many architectures
clearly demonstrates that it is fairly portable.
The fact that every single one of those ports ran into trouble clearly demonstrates that that “portability” was more of a PR claim than
practical.
Drive letters are a feature of the Win32 environment subsystem - the
Win32 namespace. This is implemented on top of the NT namespace which
is provided by the Object Manager.
Which is somehow specifically tied into NTFS. Try this: create and
mount a FAT volume, mount it somewhere other than a drive letter,
create a directory within it, and mount some other non-NTFS volume on
that directory.
It would seem it isn't necessarily tied to NTFS but rather requires
certain filesystem features FAT doesn't have.
Like I said, on Linux this has nothing to do with filesystem-specific features. It is all handled within the VFS layer.
“Compatibility”. Go on, say it: WSL1 could not offer good enough
compatibility with Linux.
Not good enough for what?
Not good enough to do the things Linux users/developers expect from their systems as a matter of course.
Not good enough to pass for Linux.
On 06/05/2024 06:24, Lawrence D'Oliveiro wrote:
On Sun, 5 May 2024 18:04 +0100 (BST), John Dallman wrote:
ZFS has been perfectly fine ...
I had significant trouble with the [btrfs] version in RHEL7/CentOS7,
where it was the default root filesystem.
I wonder if Oracle customers have trouble with it too?
On Sun, 5 May 2024 18:04 +0100 (BST), John Dallman wrote:
I had significant trouble with the [btrfs] version in RHEL7/CentOS7,
where it was the default root filesystem.
I wonder if Oracle customers have trouble with it too?
In article <v16nsc$1gbho$2@dont-email.me>, ldo@nz.invalid says...
[snip]
Why does Unix need a text file to store information about mount points?
[snip]
I'm still not entirely sure what the point of this discussion is. Is
there some point you're trying to make here or are we just trying to
find the differences between two operating systems?
In article <v16nsc$1gbho$2@dont-email.me>, ldo@nz.invalid says...
No, ARM and POWER and MIPS are all still very much here and
continuing to be made and sold. And like I said, even with the
massive popularity of ARM, Microsoft still can't get Windows
running properly on it.
Been a long time since I've seen PowerPC or MIPS PCs on store
shelves...
The PowerPC port ended when IBM stopped including ARC-compatible
firmware on new machines. The MIPS port ended when you could no
longer buy MIPS workstations with ARC firmware. Compatible hardware
was discontinued so the ports were discontinued.
Microsoft could have taken on supporting these platforms with
whatever random firmware they have like Linux does. But Microsoft
is selling a product here - if the number of sales to people who
want to run Windows rather than AIX on their brand new RS/6000
doesn't cover the costs, it's not worth doing.
That doesn't mean it can't be done or that Windows NT isn't
portable. It just means it doesn't make business sense to do it.
Same goes for ARM - Windows runs on ARM devices built to run
Windows. For business reasons Microsoft doesn't spend money porting
Windows to any random ARM device that's designed and sold for some
other purpose.
The fact that every single one of those ports ran into trouble
clearly demonstrates that that “portability” was more of a PR
claim than practical.
None of them ran into technical problems. The ports exist and they
work. I have PowerPC and Alpha hardware here running Windows NT and
it works just fine. Only reason I don't have a MIPS is because
they're extremely rare. The operating system itself is
indistinguishable from the regular x86 version and all the included
utilities work just the same. The operating system is portable and
it's a bit absurd to try and claim otherwise.
Would be interested to see Linux using fat32 as the root
filesystem. Last I checked it wasn't possible due to missing
features in that filesystem.
Why does Unix need a text file to store information about mount points?
Been a long time since I've seen PowerPC or MIPS PCs on store shelves...
The PowerPC port ended when IBM stopped including ARC-compatible
firmware on new machines.
The MIPS port ended when you could no longer
buy MIPS workstations with ARC firmware.
Microsoft could have taken on supporting these platforms with whatever
random firmware they have like Linux does. But Microsoft is selling a
product here ...
None of them ran into technical problems.
Would be interested to see Linux using fat32 as the root filesystem.
Last I checked it wasn't possible due to missing features in that
filesystem.
On Mon, 6 May 2024 15:37:57 +1000, Gary R. Schmidt wrote:
On 06/05/2024 06:24, Lawrence D'Oliveiro wrote:
On Sun, 5 May 2024 18:04 +0100 (BST), John Dallman wrote:
ZFS has been perfectly fine ...
I had significant trouble with the [btrfs] version in RHEL7/CentOS7,
where it was the default root filesystem.
I wonder if Oracle customers have trouble with it too?
But Oracle doesn’t offer that with its Linux. It offers btrfs instead.
Why does Unix need a text file to store information about mount
points?
Linux does not need a text file to store information about mount
points.
On Mon, 6 May 2024 15:37:57 +1000, Gary R. Schmidt wrote:
On 06/05/2024 06:24, Lawrence D'Oliveiro wrote:
On Sun, 5 May 2024 18:04 +0100 (BST), John Dallman wrote:
ZFS has been perfectly fine ...
I had significant trouble with the [btrfs] version in RHEL7/CentOS7,
where it was the default root filesystem.
I wonder if Oracle customers have trouble with it too?
But Oracle doesn’t offer that with its Linux. It offers btrfs instead.
On Sat, 2024-05-04 at 02:11 +0000, Lawrence D'Oliveiro wrote:
Oracle gets money that would otherwise likely go to Red Hat ...
Interesting that they won’t offer their own ZFS next-generation
filesystem product with it, preferring to bundle btrfs instead. Vote
of confidence in your own product over rivals, much?
My experience with btrfs was awful. It's fortunate I only tested it and
took backups, and it kept losing data. Turned to ZFS and it has been
rock solid. Ten years.
Back on topic, who remembers the DEC AdvFS for Tru64? Never
actually used it, but what were its advantages / USP?...
On 5/7/2024 8:45 AM, chrisq wrote:
Back on topic, who remembers the dec advfs for Tru64 ?. Never
actually used it, but what were it's advantages / usp ?...
SpiraLog may be even more on topic ...
:-)
Arne
On Fri, 3 May 2024 13:05:18 +0100, chrisq wrote:
On 4/30/24 23:56, Lawrence D'Oliveiro wrote:
On Sun, 28 Apr 2024 12:22:03 +0100, chrisq wrote:
The way that so much of the unix infrastructure has been replaced,
coercing the system to suit the needs of systemd ...
For example?
Normally run FreeBSD for development, but needed to run a current
version of Linux to test that an open source project would
build and run without issue. Installed latest Xubuntu and Debian, both
of which operate under systemd, with no opt-out at install time.
So why didn’t you try a distro that didn’t have systemd?
All looks good at desktop level, but the networking config didn't stick
without a reboot, and things like ifconfig, ntp, inetd and other stuff
were missing.
ifconfig was superseded by the iproute2 suite years ago, nothing to do
with systemd. But of course systemd builds on that work--why reinvent the wheel?
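For anyone used to net-tools, the iproute2 replacements map fairly directly. A minimal cheat sheet (guarded so it degrades gracefully on a box without iproute2; the mappings are rough equivalents, not exact output matches):

```shell
# Rough net-tools -> iproute2 equivalents
if command -v ip >/dev/null 2>&1; then
    ip addr show     # roughly: ifconfig -a
    ip route show    # roughly: route -n
    ip neigh show    # roughly: arp -n
else
    echo "iproute2 not installed"
fi
```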
And also inetd is one of the many pieces of legacy baggage superseded by systemd. systemd offers a much more modular way of managing individual services--either get used to it, or go use something else. The choice, as always, is up to you.
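As a sketch of what replaces a classic inetd entry: a socket unit plus a templated per-connection service. The unit names, the port, and the echo-via-cat trick here are illustrative examples, not taken from any particular distro:

```ini
# /etc/systemd/system/echo.socket  (hypothetical example)
[Unit]
Description=inetd-style echo socket

[Socket]
ListenStream=7777
Accept=yes

[Install]
WantedBy=sockets.target

# /etc/systemd/system/echo@.service  (hypothetical example)
[Unit]
Description=Per-connection echo service

[Service]
# With Accept=yes, systemd spawns one instance per connection and hands
# it the accepted socket on stdin/stdout, just as inetd did.
ExecStart=/usr/bin/cat
StandardInput=socket
```

`systemctl enable --now echo.socket` would then have systemd listen and spawn instances on demand, which is the modular per-service management being described.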
It's also not clear how to remove the systemd stuff ...
It’s called “building your own distro”. Go learn from the experts before
attempting this sort of activity yourself.
On 5/7/24 13:50, Arne Vajhøj wrote:
On 5/7/2024 8:45 AM, chrisq wrote:
Back on topic, who remembers the DEC AdvFS for Tru64? Never
actually used it, but what were its advantages / USP?...
SpiraLog may be even more on topic ...
:-)
Had never even heard of it, needed to look it up :-). Will have a
look later, but there's a good report in the Digital Technical Journal,
volume 8, section 2, 1996. Loads of other interesting reports there
as well.
It's quite amazing, looking back, just how much effort Digital and
others put into basic research. Stuff some take for granted now,
all the great books have been written etc, but just how many
don't even notice and continue to make the same mistakes over and
over again?...
On Tue, 2024-05-07 at 07:07 +0000, Lawrence D'Oliveiro wrote:
Why does Unix need a text file to store information about mount
points?
Linux does not need a text file to store information about mount
points.
It's a text file, actually. /etc/mtab.
Linux does not need a text file to store information about mount
points.
It's a text file, actually. /etc/mtab.
Well, here it is not! It is a symbolic link to /proc/self/mounts.
Well, here it is not! It is a symbolic link to /proc/self/mounts.
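A quick way to check how a given box exposes its mount information (these paths are Linux-specific, and the exact symlink target can vary by distro):

```shell
# Where does /etc/mtab really live on this system?
if [ -L /etc/mtab ]; then
    # On most modern distros it is a symlink into procfs
    readlink /etc/mtab
else
    echo "/etc/mtab is a regular file (or absent) here"
fi
# The kernel's own, authoritative mount table: no text file required
head -n 3 /proc/self/mounts
```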
Oracle Linux is a RHEL clone, so it offers what Red Hat wants to put in
RHEL.
On Mon, 6 May 2024 16:28:12 +1200, David Goodwin wrote:
Why does Unix need a text file to store information about mount points?
Linux does not need a text file to store information about mount points.
Been a long time since I've seen PowerPC or MIPS PCs on store shelves...
They are used in computers; the fact that the stores you frequent don't
carry them is merely a reflection on the kinds of stores you frequent.
The PowerPC port ended when IBM stopped including ARC-compatible
firmware on new machines.
It didn't stop Linux from continuing to support POWER, though.
The MIPS port ended when you could no longer
buy MIPS workstations with ARC firmware.
So Windows needed some special handholding to run on non-x86
architectures, where Linux was able to operate without such training
wheels.
Microsoft could have taken on supporting these platforms with whatever random firmware they have like Linux does. But Microsoft is selling a product here ...
Funny, isn't it. The Linux kernel project has maybe 1000 regular contributors. Microsoft has not one, but close to two orders of magnitude greater developer talent on its payroll. Yet those Linux developers are managing to support about *two dozen* major processor architectures, while Microsoft struggles to get beyond one.
None of them ran into technical problems.
I didn't say they did. But they were just too expensive and difficult to maintain. Windows simply wasn't designed to make this sort of thing easy.
Would be interested to see Linux using fat32 as the root filesystem.
Last I checked it wasn't possible due to missing features in that filesystem.
Linux will boot off any filesystem that GRUB will read. <https://askubuntu.com/questions/938076/install-boot-on-fat32-partition>
On Tue, 7 May 2024 07:41:32 -0400, Arne Vajhøj wrote:
Oracle Linux is a RHEL clone, so it offers what Red Hat wants to put in
RHEL.
Don’t you think it wants to give customers a reason to choose its product over Red Hat?
SpiraLog may be even more on topic ...
... generally, I want to use an OS for work, and expect it to just
work and be easily configurable out of the box, just like any similar
unix system.
Yes, ZFS here since the very early versions of Solaris 10. Absolutely
rock solid and has never lost any data here, nor had a situation that
was not recoverable. Why use anything else ?.
On 5/4/24 07:57, Single Stage to Orbit wrote:
On Sat, 2024-05-04 at 02:11 +0000, Lawrence D'Oliveiro wrote:
Oracle gets money that would otherwise likely go to Red Hat ...
Interesting that they won't offer their own ZFS next-generation
filesystem product with it, preferring to bundle btrfs instead. Vote
of confidence in your own product over rivals, much?
My experience with btrfs was awful: it kept losing data, so it's fortunate
I was only testing it and took backups. Turned to ZFS and it has been
rock solid. Ten years.
Yes, ZFS here since the very early versions of Solaris 10. Absolutely
rock solid and has never lost any data here, nor had a situation
that was not recoverable. Why use anything else ?.
FreeBSD was the only alternative os to offer their own clean room
zfs many years ago, but they moved to OpenZFS. Again, rock solid
and would not choose any other fs, other than for quick hacks, or
testing.
Back on topic, who remembers the DEC AdvFS for Tru64? Never
actually used it, but what were its advantages / USP?...
On Tue, 7 May 2024 18:04:39 +0100, chrisq wrote:
... generally, I want to use an OS for work, and expect it to just
work and be easily configurable out of the box, just like any similar
unix system.
So why didn’t you try a distro that didn’t have systemd?
On Tue, 7 May 2024 13:45:25 +0100, chrisq wrote:
Yes, ZFS here since the very early versions of Solaris 10. Absolutely
rock solid and has never lost any data here, nor had a situation that
was not recoverable. Why use anything else ?.
And yet, Oracle won’t offer it, preferring to give their customers btrfs instead.
In article <v1d7p6$38li0$1@dont-email.me>, devzero@nospam.com says...
On 5/4/24 07:57, Single Stage to Orbit wrote:
On Sat, 2024-05-04 at 02:11 +0000, Lawrence D'Oliveiro wrote:
Oracle gets money that would otherwise likely go to Red Hat ...
Interesting that they won't offer their own ZFS next-generation
filesystem product with it, preferring to bundle btrfs instead. Vote
of confidence in your own product over rivals, much?
My experience with btrfs was awful: it kept losing data, so it's fortunate
I was only testing it and took backups. Turned to ZFS and it has been
rock solid. Ten years.
Yes, ZFS here since the very early versions of Solaris 10. Absolutely
rock solid and has never lost any data here, nor had a situation
that was not recoverable. Why use anything else ?.
FreeBSD was the only alternative os to offer their own clean room
zfs many years ago, but they moved to OpenZFS. Again, rock solid
and would not choose any other fs, other than for quick hacks, or
testing.
Back on topic, who remembers the DEC AdvFS for Tru64? Never
actually used it, but what were its advantages / USP?...
Late last year I had a go at building GCC 4.7.4 for Tru64 5.1B on my
trusty AlphaServer 800 (notes here for anyone interested in doing it: https://www.zx.net.nz/vc/dunix/gcc.shtml).
During this process I ran out of disk and ended up having to slot
another drive in the machine, which led me to interacting with the AdvFS management tools. And it turns out it's a pretty impressive filesystem
for its age. It's got the whole storage-pools thing that ZFS does, which
is pretty nice.
No COW or checksumming that I can see, though, but despite that it seems
to be a more capable filesystem than what's normally been used on Linux
for the past decade or two. It's a shame HP never released their Linux
port of it.
On 5/9/24 22:56, Lawrence D'Oliveiro wrote:
On Tue, 7 May 2024 13:45:25 +0100, chrisq wrote:
Yes, ZFS here since the very early versions of Solaris 10. Absolutely
rock solid and has never lost any data here, nor had a situation that
was not recoverable. Why use anything else ?.
And yet, Oracle won't offer it, preferring to give their customers btrfs instead.
Still very much in Solaris, but I suspect conflicting licensing between
Linux and ZFS is the main reason it's not offered. It's been in
FreeBSD for years, but still not in mainstream Linux afaik, unless
they are now using OpenZFS...
Chris
On 5/9/24 23:25, David Goodwin wrote:
In article <v1d7p6$38li0$1@dont-email.me>, devzero@nospam.com says...
On 5/4/24 07:57, Single Stage to Orbit wrote:
On Sat, 2024-05-04 at 02:11 +0000, Lawrence D'Oliveiro wrote:
Oracle gets money that would otherwise likely go to Red Hat ...
Interesting that they won't offer their own ZFS next-generation
filesystem product with it, preferring to bundle btrfs instead. Vote of confidence in your own product over rivals, much?
My experience with btrfs was awful: it kept losing data, so it's fortunate
I was only testing it and took backups. Turned to ZFS and it has been
rock solid. Ten years.
Yes, ZFS here since the very early versions of Solaris 10. Absolutely
rock solid and has never lost any data here, nor had a situation
that was not recoverable. Why use anything else ?.
FreeBSD was the only alternative os to offer their own clean room
zfs many years ago, but they moved to OpenZFS. Again, rock solid
and would not choose any other fs, other than for quick hacks, or
testing.
Back on topic, who remembers the DEC AdvFS for Tru64? Never
actually used it, but what were its advantages / USP?...
Late last year I had a go at building GCC 4.7.4 for Tru64 5.1B on my
trusty AlphaServer 800 (notes here for anyone interested in doing it: https://www.zx.net.nz/vc/dunix/gcc.shtml).
During this process I ran out of disk and ended up having to slot
another drive in the machine, which led me to interacting with the AdvFS management tools. And it turns out it's a pretty impressive filesystem
for its age. It's got the whole storage-pools thing that ZFS does, which
is pretty nice.
No COW or checksumming that I can see, though, but despite that it seems
to be a more capable filesystem than what's normally been used on Linux
for the past decade or two. It's a shame HP never released their Linux
port of it.
Older versions of gcc, 2.7.2 for example, were not too difficult to
build, and I have built gcc cross, even on a Sun 3. Modern
versions are more difficult, needing obscure math libraries resident
and a whole raft of GNU infrastructure in place. GNU is still a great
set of tools though and, more than anything else, sounded the death
knell of expensive and locked-down proprietary tools...
On Tue, 7 May 2024 13:45:25 +0100, chrisq wrote:
Yes, ZFS here since the very early versions of Solaris 10. Absolutely
rock solid and has never lost any data here, nor had a situation that
was not recoverable. Why use anything else ?.
And yet, Oracle won’t offer it, preferring to give their customers btrfs instead.
Still very much in Solaris, but I suspect conflicting licensing between
Linux and ZFS is the main reason it's not offered. It's been in
FreeBSD for years, but still not in mainstream Linux afaik, unless
they are now using OpenZFS...
On 5/9/24 23:00, Lawrence D'Oliveiro wrote:
On Tue, 7 May 2024 18:04:39 +0100, chrisq wrote:
... generally, I want to use an OS for work, and expect it to just work
and be easily configurable out of the box, just like any similar unix
system.
So why didn’t you try a distro that didn’t have systemd?
I do; SuSE 11.4 on an old laptop and another machine, because of some specific capabilities. Also looked at Devuan, but Linux is not the be
all and end all of operating systems. FreeBSD is an excellent
alternative, and very professional in its development process. So is
OpenIndiana Hipster, a spin-off from the original OpenSolaris project.
That gets ever better, and with a fraction of the development effort and funding that goes into Linux. Loads of OS choice these days, depending
on project need and application.
On 5/9/24 22:56, Lawrence D'Oliveiro wrote:
On Tue, 7 May 2024 13:45:25 +0100, chrisq wrote:
Yes, ZFS here since the very early versions of Solaris 10. Absolutely
rock solid and has never lost any data here, nor had a situation that
was not recoverable. Why use anything else ?.
And yet, Oracle won’t offer it, preferring to give their customers
btrfs instead.
Still very much in Solaris, but I suspect conflicting licensing between
Linux and ZFS is the main reason it's not offered.
In article <v1cjv2$343as$1@dont-email.me>, ldo@nz.invalid says...
So Windows needed some special handholding to run on non-x86
architectures, where Linux was able to operate without such training
wheels.
Please don't be absurd. Special hand-holding? I'm really not sure how
you think Windows NT has more special hand-holding than Linux here.
Delete all the "special handholding" OpenFirmware support code from the
linux kernel and see how well it boots on a SPARCstation.
This is a reasonable choice when you're a company that is after some
kind of return on investment.
You don't make money by developing a
product no one will buy no matter how cheap or easy that product is to develop.
I asked if [Linux] could run with a FAT32 root filesystem (/).
Remember, you can boot up a Linux kernel and specify whatever command you want to run for its “init” (PID 1) process. That can be as simple as a shell. And shells on Linux can cope with whatever filesystems Linux itself can cope with.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Remember, you can boot up a Linux kernel and specify whatever command
you want to run for its “init” (PID 1) process. That can be as simple
as a shell. And shells on Linux can cope with whatever filesystems
Linux itself can cope with.
And "init=/bin/bash" used to be the standard fix for "Dang, I forgot the
root password again". ;-)
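For anyone who never had to do it: the usual sequence after booting with init=/bin/bash looks something like the sketch below (it needs root console access and obviously cannot run unprivileged; the remount is required because the root filesystem typically comes up read-only):

```shell
# From the emergency shell started as PID 1 (sketch; privileged console only)
mount -o remount,rw /     # root fs is mounted read-only at this point
passwd root               # set a new root password
sync                      # flush the change to disk
mount -o remount,ro /     # optionally back to read-only before reset
# then reboot the machine (there is no init to hand off to, so a hard
# reset or a sysrq reboot is the usual exit)
```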
On Sun, 12 May 2024 14:29:11 +0200, Alexander Schreiber wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Remember, you can boot up a Linux kernel and specify whatever command
you want to run for its “init” (PID 1) process. That can be as simple >>> as a shell. And shells on Linux can cope with whatever filesystems
Linux itself can cope with.
And "init=/bin/bash" used to be the standard fix for "Dang, I forgot the
root password again". ;-)
I remember:
SET/STARTUP OPA0:
!!
On Fri, 10 May 2024 01:50:42 +0100, chrisq wrote:
On 5/9/24 22:56, Lawrence D'Oliveiro wrote:
On Tue, 7 May 2024 13:45:25 +0100, chrisq wrote:
And yet, Oracle won’t offer it, preferring to give their customers
btrfs instead.
Still very much in Solaris, but I suspect conflicting licensing between
Linux and ZFS is the main reason it's not offered.
Yes, but Oracle *owns* the licence to ZFS. So they can offer it on
whatever terms they like.
Kludge writes:
People today just seem to think this is the normal way of doing business. Surely we can do better.
Because it's the law of large numbers, not any particular fault
that can be reasonably addressed by an individual, organization,
etc.
If a marginal solder joint mechanically weakened by a bumpy ride
in a truck causes something to short, and that current draw on a
bus spikes over some threshold, pulling a voltage regulator out
of spec and causing voltage to sag by some nominal amount that
pulls another component on a different server below its marginal
threshold for a logic value and a bit flips, what are you, the
software engineer, supposed to do to tolerate that, let alone
recover from it? It's not a software bug, it's the confluence
of a large number of factors that only emerge when you run at a
scale with tens or hundreds of thousands of systems.
Can we do better? Maybe. There were some lessons learned in
that failure; in part, making sure that the battery room doesn't
flood if the generator catches on fire (another part of the
story). But the reliability of hyperscaler operations is
already ridiculously high. They do it by using redundancy and
designing in an _expectation_ of failure: multiple layers of
redundant load balancers, sharding traffic across multiple
backends, redundant storage in multiple geolocations, etc. But
a single computer failing and rebooting? That's expected. The
enterprise is, of course, much further behind, but I'd argue on
balance even they do all right, all things considered.
OpenZFS, the only version of ZFS that runs on Linux, would remain incompatible with the GPL.
On Tue, 14 May 2024 17:01:52 -0000 (UTC), Matthew R. Wilson wrote:
OpenZFS, the only version of ZFS that runs on Linux, would remain
incompatible with the GPL.
So you are of the opinion that the CDDL is incompatible with the GPL?
Because this seems to be a matter of contention, and Ubuntu for one
doesn’t seem to agree.
If Oracle were to bundle OpenZFS with Linux, that would settle the issue
once and for all, wouldn’t it?
As far as we know, Oracle's lawyers are telling them the CDDL is,
indeed, incompatible and if Oracle bundles OpenZFS with their Linux, the OpenZFS contributors could take action against them! lol
But, my speculation is obviously of little value. The real fact we can
see is that Oracle has had _plenty_ of time and opportunity to move in
that direction, and they haven't, so while we don't know their reasons,
we do know their decision.
Sure. We are aware of the legal manoeuvring behind the scenes. But
the immediate optics of it, intended or not, are that they are
reluctant to give a vote of confidence in their own technology,
instead favouring a different open-source rival.
That has got to be a source of corporate embarrassment.
If customers start dropping Oracle Linux and telling Oracle it's because
of the lack of OpenZFS, that might have some effect. But Oracle Linux
customers are mostly seeking an all-Oracle stack AFAIK, so they're
unlikely to drop Oracle Linux.
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
Kludge writes:
People today just seem to think this is the normal way of doing business. Surely we can do better.
Because it's the law of large numbers, not any particular fault
that can be reasonably addressed by an individual, organization,
etc.
Yes, this is why the key is to simplify things. I want to read my email,
which requires I log in from my desktop computer through a VPN that requires
an external server for downloading some scripts to my machine. Once I am
on the VPN, I can log into the mail server, which is dependent on an Active
Directory server for authentication and three disk servers to host the mail.
Since it's running a Microsoft product that isn't safe to connect to the
outside world, there is also an additional isolation server between the mail
server and the outside world. All of these things need to be working for me
to be able to read my mail.
Given the complexity of this system, it's not surprising that sometimes
it isn't working. This seems ludicrous to me.
If a marginal solder joint mechanically weakened by a bumpy ride
in a truck causes something to short, and that current draw on a
bus spikes over some threshold, pulling a voltage regulator out
of spec and causing voltage to sag by some nominal amount that
pulls another component on a different server below its marginal
threshold for a logic value and a bit flips, what are you, the
software engineer, supposed to do to tolerate that, let alone
recover from it? It's not a software bug, it's the confluence
of a large number of factors that only emerge when you run at a
scale with tens or hundreds of thousands of systems.
Yes, precisely. I don't want to be dependent on systems running at
that scale.
Can we do better? Maybe. There were some lessons learned in
that failure; in part, making sure that the battery room doesn't
flood if the generator catches on fire (another part of the
story). But the reliability of hyperscaler operations is
already ridiculously high. They do it by using redundancy and
designing in an _expectation_ of failure: multiple layers of
redundant load balancers, sharding traffic across multiple
backends, redundant storage in multiple geolocations, etc. But
a single computer failing and rebooting? That's expected. The
enterprise is, of course, much further behind, but I'd argue on
balance even they do all right, all things considered.
Redundancy helps a lot, but part of the key is to look at the
opposite, at the number of single points of failure.
In article <v21td4$pa2n$2@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
Sure. We are aware of the legal manoeuvring behind the scenes. But the
immediate optics of it, intended or not, are that they are reluctant to
give a vote of confidence in their own technology, instead favouring a
different open-source rival.
That has got to be a source of corporate embarrassment.
I think you underrate Oracle's ability to justify their actions to themselves, and overrate the extent to which they care about what
pundits say.
If customers start dropping Oracle Linux and telling Oracle it's because
of the lack of OpenZFS, that might have some effect.
John Dallman <jgd@cix.co.uk> wrote:
If customers start dropping Oracle Linux and telling Oracle it's because
of the lack of OpenZFS, that might have some effect. But Oracle Linux
customers are mostly seeking an all-Oracle stack AFAIK, so they're
unlikely to drop Oracle Linux.
People are using Oracle Linux because they want Red Hat compatibility
without paying Red Hat fees.
They used to use Scientific Linux or CentOS, but those aren't the same
any longer. So they go to Oracle Linux or to Rocky. If they need a
Common Criteria certification for government work, they go to Oracle Linux because Rocky hasn't been tested.