Darya Zelenina speaks 9 languages and
looks like she is about 35.
And she does not have a past in DEC/CPQ/HP.
On Wed, 3 Jan 2024 15:52:16 -0500, Arne Vajhøj wrote:
And she does not have a past in DEC/CPQ/HP.
Does she have a background in finance?
If yes, then ...
... is the company being prepared for a selloff?
Darya Zelenina speaks 9 languages, looks like she is about 35.
Darya brings extensive expertise in OpenVMS and the OpenVMS ecosystem, coupled with deep commitment to shaping the platform's long-term trajectory.
Practically all of the OpenVMS users seem to be 65+ years old!
She is soon to be the CEO!
https://www.linkedin.com/in/darya-zelenina-8a3b3272/
Darya will assume the role of CEO in June 2024. She joined VMS Software as a technical writer and OpenVMS instructor in 2017 and has since held key leadership positions in software and web development, documentation, the Community Program and Marketing.
On 2024-01-03, Slo <slovuj@gmail.com> wrote:
Darya will assume the role of CEO in June 2024. She joined VMS
Software as a technical writer and OpenVMS instructor in 2017 and
has since held key leadership positions in software and web
development, documentation, the Community Program and Marketing.
Darya brings extensive expertise in OpenVMS and the OpenVMS
ecosystem, coupled with deep commitment to shaping the platform's
long-term trajectory.
This move does not give me a good feeling.
She does not seem like a good fit as CEO of a company providing
the types of mission-critical services that companies running VMS
rely on.
Even ignoring all the touchy-feely stuff in her bio, someone who
has "successfully managed teams in documentation, marketing, web
development, and DevOps" as her main achievement does not seem to
be a good match for the needs of VMS users.
Where were all the other candidates for the job, and why was she
considered to be the best one for it? Would no-one else
look at taking the job for some reason?
Also, is her Russian background going to be a problem for the
US government? I'm not saying it is an issue in real life, I am just
asking how some people might react. For example, look at all the crap
the Sailfish OS people have to deal with in this area...
BTW, what the hell is "Intercultural Communication"?
Le 04/01/2024 à 16:02, Arne Vajhøj a écrit :
On 1/4/2024 9:00 AM, Simon Clubley wrote:
BTW, what the hell is "Intercultural Communication"?
Probably something about the need to communicate differently
with people from different cultural backgrounds. Do you start directly
with the point, or with some polite chit-chat? Does the
boss order or suggest to the team? Etc., etc.
Useful skill.
Perhaps intercultural communication is necessary to talk with computers, computer scientists, and business people, from Europe to the US and the US to
Europe... :)
I remember having met her during the first bootcamps of the new age. Impressive in her cleverness, and really curious about VMS culture.
I am hoping she will be the one who gets a "vision"... which is the
first function of a good CEO.
On 1/4/2024 9:00 AM, Simon Clubley wrote:
BTW, what the hell is "Intercultural Communication"?
Probably something about the need to communicate differently
with people from different cultural backgrounds. Do you start directly
with the point, or with some polite chit-chat? Does the
boss order or suggest to the team? Etc., etc.
Useful skill.
Arne
She seems more focused on new ways (CI/CD, web etc.) than how DEC did
things 40 years ago.
On Thu, 4 Jan 2024 09:56:31 -0500, Arne Vajhøj wrote:
She seems more focused on new ways (CI/CD, web etc.) than how DEC did
things 40 years ago.
If she is less invested in how DEC used to do things, maybe she’s the one to put in place the program I suggested sometime back: get rid of most of
VMS itself, leaving only the parts that users care about--namely their userland programs and DCL command procedures. All that could run on an emulation layer on Linux.
On 1/4/2024 2:25 PM, Lawrence D'Oliveiro wrote:
On Thu, 4 Jan 2024 09:56:31 -0500, Arne Vajhøj wrote:
She seems more focused on new ways (CI/CD, web etc.) than how DEC did
things 40 years ago.
If she is less invested in how DEC used to do things, maybe she’s the
one to put in place the program I suggested sometime back: get rid of
most of VMS itself, leaving only the parts that users care
about--namely their userland programs and DCL command procedures. All
that could run on an emulation layer on Linux.
Lots of work to implement.
Not much interest from customers.
Sector 7 has offered such products for decades. Without taking away the
VMS customer base.
On Thu, 4 Jan 2024 15:42:57 -0500, Arne Vajhøj wrote:
On 1/4/2024 2:25 PM, Lawrence D'Oliveiro wrote:
On Thu, 4 Jan 2024 09:56:31 -0500, Arne Vajhøj wrote:
She seems more focused on new ways (CI/CD, web etc.) than how DEC did
things 40 years ago.
If she is less invested in how DEC used to do things, maybe she’s the
one to put in place the program I suggested sometime back: get rid of
most of VMS itself, leaving only the parts that users care
about--namely their userland programs and DCL command procedures. All
that could run on an emulation layer on Linux.
Lots of work to implement.
Much less than the 7 years it took to reimplement VMS on top of AMD64.
Remember, it took less time (and resources) than that to move Linux from 32-bit x86 to 64-bit Alpha.
Not much interest from customers.
Just think: there would have been more customers left if they’d got it working sooner.
Sector 7 has offered such products for decades. Without taking away the
VMS customer base.
Maybe they have.
On 1/4/2024 5:20 PM, Lawrence D'Oliveiro wrote:
On Thu, 4 Jan 2024 15:42:57 -0500, Arne Vajhøj wrote:
On 1/4/2024 2:25 PM, Lawrence D'Oliveiro wrote:
... put in place the program I suggested sometime back: get rid of
most of VMS itself, leaving only the parts that users care
about--namely their userland programs and DCL command procedures. All
that could run on an emulation layer on Linux.
Lots of work to implement.
Much less than the 7 years it took to reimplement VMS on top of AMD64.
I doubt that.
Mapping from one OS to another OS is not easy.
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
Just think: there would have been more customers left if they’d got it
working sooner.
Sector 7 has been around for many years. So the lack of interest in
their product is not likely to be due to timing.
Sector 7 has offered such products for decades. Without taking away
the VMS customer base.
Maybe they have.
That is something we would know about.
They have customers, but not nearly as many as those migrating
natively to other platforms.
On Thu, 4 Jan 2024 20:26:33 -0500, Arne Vajhøj wrote:
On 1/4/2024 5:20 PM, Lawrence D'Oliveiro wrote:
On Thu, 4 Jan 2024 15:42:57 -0500, Arne Vajhøj wrote:
On 1/4/2024 2:25 PM, Lawrence D'Oliveiro wrote:
... put in place the program I suggested sometime back: get rid of
most of VMS itself, leaving only the parts that users care
about--namely their userland programs and DCL command procedures. All
that could run on an emulation layer on Linux.
Lots of work to implement.
Much less than the 7 years it took to reimplement VMS on top of AMD64.
I doubt that.
Mapping from one OS to another OS is not easy.
Linux is a more versatile kernel than VMS. For example, the WINE project
has been able to substantially implement the Windows APIs on top of Linux, while Microsoft's attempt at the reverse, implementing the Linux APIs on top of the Windows kernel with WSL1, has been abandoned as a failure.
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new architecture.
the VMS customer base.
Maybe they have.
That is something we would know about.
You mean “would not know about”?
They have customers, but not nearly as many as those migrating
natively to other platforms.
I think we’ve discussed their product before. Reading between the lines of their case studies, seems their product lacks some of the niceties that it should be possible to implement on top of the Linux kernel. DECnet, I
think, was one thing they seemed to be missing.
Have you noticed how the world has moved from Windows to Linux with
Wine?
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new
architecture.
If you call both a CPU and an underlying foreign OS kernel "a new architecture", then yes.
But the reality is that it is very different.
On Thu, 4 Jan 2024 21:11:49 -0500, Arne Vajhøj wrote:
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new architecture.
If you call both a CPU and an underlying foreign OS kernel "a new
architecture", then yes.
But the reality is that it is very different.
New CPU -- check
“underlying foreign OS kernel” -- this was about porting the same kernel onto a different CPU. In both cases.
So tell me again: “very different” how?
On Thu, 4 Jan 2024 21:11:49 -0500, Arne Vajhøj wrote:
Have you noticed how the world has moved from Windows to Linux with
Wine?
Yes. Look at the (Linux-based) Steam Deck, which has been making some
inroads into the very core of Windows dominance, namely the PC gaming
market. Enough to get Microsoft to take notice.
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
On the part of Windows, not on the part of Linux.
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new architecture.
If you call both a CPU and an underlying foreign OS kernel "a new
architecture", then yes.
But the reality is that it is very different.
New CPU -- check
“underlying foreign OS kernel” -- this was about porting the same kernel onto a different CPU. In both cases.
So tell me again: “very different” how?
In article <un7ren$3s7nl$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 4 Jan 2024 21:11:49 -0500, Arne Vajhøj wrote:
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
On the part of Windows, not on the part of Linux.
2024 will be the year of the Linux desktop. I can feel it!
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new architecture.
If you call both a CPU and an underlying foreign OS kernel "a new
architecture", then yes.
But the reality is that it is very different.
New CPU -- check
“underlying foreign OS kernel” -- this was about porting the same kernel
onto a different CPU. In both cases.
So tell me again: “very different” how?
I think, again, you are talking at cross-purposes: my suspicion
is that Arne is referring to a VMS compatibility layer built on
top of Linux, not the effort of porting VMS to x86_64.
That said, VMS was not originally written for portability and
wasn't ported to anything other than successive versions of the
VAX for the first 10 or so years it existed; Linux was ported
to the Alpha pretty early on (sponsored by DEC; thanks Mad Dog).
So Linux filed off a lot of portability sharp edges for the
machines at the time pretty early on, when it was still pretty
small; VMS not so much.
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
That said, VMS was not originally written for portability and wasn't
ported to anything other than successive versions of the VAX for the
first 10 or so years it existed ...
Linux was ported to the Alpha pretty early on (sponsored by DEC; thanks
Mad Dog). So Linux filed off a lot of portability sharp edges for the machines at the time pretty early on, when it was still pretty small;
VMS not so much.
I believe one of the VSI people has said that one of the issues in the
x86-64 port is probing memory. The VAX had PROBEx instructions;
Alpha had CALL_PAL PROBER and PROBEW.
I was not comparing "Linux port to Alpha" with "VSI actual port of VMS
to x86-64" but with "hypothetical port of VMS to run on top of the Linux
kernel".
She did not join the second largest IT company in the world (DEC
in the 80's) with one of the world's major OSes (VMS in the 80's), see
it decline over several decades, and now want to "resurrect" it.
Nothing wrong with being old (I am old!). But experience leaves an
impact on one's thinking.
She could be the right person to move VSI and VMS into the 2030's.
On 1/4/2024 10:09 PM, Dan Cross wrote:
In article <un7ren$3s7nl$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 4 Jan 2024 21:11:49 -0500, Arne Vajhøj wrote:
Have you noticed how the world has moved from Windows to Linux with
Wine?
Yes. Look at the (Linux-based) Steam Deck, which has been making some
inroads into the very core of Windows dominance, namely the PC gaming
market. Enough to get Microsoft to take notice.
That's not Linux with Wine. You can install Wine on the Steam
Deck, but their success has much more to do with their native
architecture.
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
On the part of Windows, not on the part of Linux.
2024 will be the year of the Linux desktop. I can feel it!
That's a weird thing to say. I have been running Linux Desktops for
over 20 years.
On Fri, 5 Jan 2024 03:09:37 -0000 (UTC), Dan Cross wrote:
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
That said, VMS was not originally written for portability and wasn't
ported to anything other than successive versions of the VAX for the
first 10 or so years it existed ...
And being typical of proprietary software, think of the layers of cruft
the code will have accumulated, first in the move to Alpha, then Itanium,
and now AMD64. All without ever really becoming a fully 64-bit OS.
Linux was ported to the Alpha pretty early on (sponsored by DEC; thanks
Mad Dog). So Linux filed off a lot of portability sharp edges for the
machines at the time pretty early on, when it was still pretty small;
VMS not so much.
Which is reinforcing my point, is it not? That Linux stands a good chance
of being able to take on enough of a VMS layer to make VMS itself
unnecessary.
On 1/4/24 1:10 PM, Arne Vajhøj wrote:
She did not join the second largest IT company in the world (DEC
in the 80's) with one of the world's major OSes (VMS in the 80's), see
it decline over several decades, and now want to "resurrect" it.
Nothing wrong with being old (I am old!). But experience leaves an
impact on one's thinking.
The previous CEO (Kevin Shaw) was 44 when he was killed by a car while crossing the street, so it's not news that the next generation of
leadership will be too young to have been ex-DECCies.
On 1/4/2024 10:09 PM, Dan Cross wrote:
In article <un7ren$3s7nl$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 4 Jan 2024 21:11:49 -0500, Arne Vajhøj wrote:
Have you noticed how the world has moved from Windows to Linux with
Wine?
Yes. Look at the (Linux-based) Steam Deck, which has been making some
inroads into the very core of Windows dominance, namely the PC gaming
market. Enough to get Microsoft to take notice.
That's not Linux with Wine. You can install Wine on the Steam
Deck, but their success has much more to do with their native
architecture.
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
On the part of Windows, not on the part of Linux.
2024 will be the year of the Linux desktop. I can feel it!
That's a weird thing to say. I have been running Linux Desktops for
over 20 years.
On Fri, 5 Jan 2024 03:09:37 -0000 (UTC), Dan Cross wrote:
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about porting across userland executables and DCL command procedures--just the parts of VMS that users care about, nothing more.
That said, VMS was not originally written for portability and wasn't
ported to anything other than successive versions of the VAX for the
first 10 or so years it existed ...
And being typical of proprietary software, think of the layers of cruft
the code will have accumulated, first in the move to Alpha, then Itanium,
and now AMD64. All without ever really becoming a fully 64-bit OS.
On 1/4/2024 11:44 PM, Lawrence D'Oliveiro wrote:
On Fri, 5 Jan 2024 03:09:37 -0000 (UTC), Dan Cross wrote:
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
If the goal is 90% compatibility, then it is reasonably easy and
low cost. But no customer demand.
If the goal is 100% compatibility, then it becomes tricky and expensive.
There will be both some hard problems and a gazillion trivial problems
to deal with.
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are
they going to return when asked for an item that does not exist
on Linux?
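To make the dispatch problem concrete, here is a toy sketch (not any real emulation layer): a $GETJPI-style lookup that maps some item codes onto Linux concepts and has to answer something for the rest. The names echo VMS ($GETJPI item codes, SS$_ condition values), but all the numeric values and mappings below are invented placeholders.

```python
import getpass
import os

# Symbolic stand-ins for VMS condition values and $GETJPI item codes.
# The numeric values are placeholders, not the real VMS definitions.
SS_NORMAL = 1
SS_BADPARAM = 20
JPI_PID = 0x0100        # placeholder code: process ID
JPI_USERNAME = 0x0101   # placeholder code: username
JPI_GRP = 0x0102        # placeholder code: UIC group -- no mapping here

# Item codes we can plausibly map onto Linux concepts.
_HANDLERS = {
    JPI_PID: lambda: os.getpid(),
    JPI_USERNAME: lambda: getpass.getuser(),
}

def lib_getjpi(item_code):
    """Toy LIB$GETJPI lookalike returning (status, value).

    Mapped items return (SS_NORMAL, value); anything unmapped gets
    SS_BADPARAM -- which is exactly the choice that can break VMS
    code that used that item successfully before.
    """
    handler = _HANDLERS.get(item_code)
    if handler is None:
        return SS_BADPARAM, None
    return SS_NORMAL, handler()

status, pid = lib_getjpi(JPI_PID)    # mapped item: succeeds
status, _ = lib_getjpi(JPI_GRP)      # unmapped item: SS_BADPARAM
```

The shim itself is trivial; the hard part is the policy table, one decision per item code, times every JPI$_* code that exists.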
On 1/5/2024 9:08 AM, Arne Vajhøj wrote:
On 1/4/2024 11:44 PM, Lawrence D'Oliveiro wrote:
On Fri, 5 Jan 2024 03:09:37 -0000 (UTC), Dan Cross wrote:
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
If the goal is 90% compatibility, then it is reasonably easy and
low cost. But no customer demand.
If the goal is 100% compatibility, then it becomes tricky and expensive.
There will be both some hard problems and a gazillion trivial problems
to deal with.
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are
they going to return when asked for an item that does not exist
on Linux?
What would they return if asked for an item that does not exist on VMS?
On 1/5/2024 1:17 PM, Arne Vajhøj wrote:
On 1/5/2024 1:01 PM, bill wrote:
On 1/5/2024 9:08 AM, Arne Vajhøj wrote:
On 1/4/2024 11:44 PM, Lawrence D'Oliveiro wrote:
On Fri, 5 Jan 2024 03:09:37 -0000 (UTC), Dan Cross wrote:
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
If the goal is 90% compatibility, then it is reasonably easy and
low cost. But no customer demand.
If the goal is 100% compatibility, then it becomes tricky and
expensive.
There will be both some hard problems and a gazillion trivial problems
to deal with.
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are
they going to return when asked for an item that does not exist
on Linux?
What would they return if asked for an item that does not exist on VMS?
SS$_BADPARAM, I believe.
But returning that for item codes working on VMS could
easily break code.
Times change. Sometimes code needs to as well. :-)
But on the subject of Linux vs. MS on the desktop.
Who would you say was the largest single user of MS
Windows products? Why do you think they continue to
use MS instead of Linux?
On 1/5/2024 1:27 PM, Arne Vajhøj wrote:
On the consumer side I expect drivers like:
- they know Windows
At the user level very low learning curve to change.
- they use Windows at work
I would expect that most of the people who develop Linux OS and
apps in their spare time use Windows at work. One does not preclude
the other.
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work. I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
- their PC came with Windows
And, if the (Illegal?) pressure from MS was removed they could
just as easily come with Linux. And it could make them cheaper.
The hassle of changing to Linux is not worth it given
how cheap Windows is for consumers.
Something not guaranteed to continue.
Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
On 1/5/2024 2:26 PM, bill wrote:
On 1/5/2024 2:05 PM, Arne Vajhøj wrote:
On 1/5/2024 1:43 PM, bill wrote:
On 1/5/2024 1:27 PM, Arne Vajhøj wrote:
On the consumer side I expect drivers like:
- they know Windows
At the user level very low learning curve to change.
It may still be more effort than the average Joe wants to put in.
They all accepted it (like I had to) when MS killed XP and Vista.
The replacement was very different. There are still things I used
to do that I can not figure out on Windows 10.
Apparently the average Joe's of the world think differently.
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work. I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Any ordinary application built for NT or 2000 should still work.
Not hardly. I have boxes of programs from versions of Windows much
newer than NT and 2000 that will not run on my Windows 10 system.
And many more that required that I get a newer version (sometimes
not a free upgrade).
There is no reason why it should not work.
API's are maintained.
- their PC came with Windows
And, if the (Illegal?) pressure from MS was removed they could
just as easily come with Linux. And it could make them cheaper.
It has been tried. Not much sale.
Because the seller was still required to pay the (illegal?) MS tax.
MS dropped that practice in 1994.
30 years ago.
Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Some are comfortable with the subscription model. A lot use
it for various services used by their smartphone.
But there is also a large number of home PC's with Windows but
without MS Office.
On 1/5/2024 2:05 PM, Arne Vajhøj wrote:
On 1/5/2024 1:43 PM, bill wrote:
On 1/5/2024 1:27 PM, Arne Vajhøj wrote:
On the consumer side I expect drivers like:
- they know Windows
At the user level very low learning curve to change.
It may still be more effort than the average Joe wants to put in.
They all accepted it (like I had to) when MS killed XP and Vista.
The replacement was very different. There are still things I used
to do that I can not figure out on Windows 10.
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work. I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Any ordinary application built for NT or 2000 should still work.
Not hardly. I have boxes of programs from versions of Windows much
newer than NT and 2000 that will not run on my Windows 10 system.
And many more that required that I get a newer version (sometimes
not a free upgrade).
- their PC came with Windows
And, if the (Illegal?) pressure from MS was removed they could
just as easily come with Linux. And it could make them cheaper.
It has been tried. Not much sale.
Because the seller was still required to pay the (illegal?) MS tax.
Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Some are comfortable with the subscription model. A lot use
it for various services used by their smartphone.
But there is also a large number of home PC's with Windows but
without MS Office.
That's true. I have a laptop running Windows 10 that only performs the
task people have accused the PC of all along. It launches Minecraft and then runs as a game console. :-)
In article <un81en$l6e$2@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
That would necessarily entail dragging in much of the rest of the
operating system.
Consider that for both the VAX _and_ Alpha, DEC was able to shape the
design of the hardware _and_ of VMS simultaneously to match one another.
It's the perennial joke about Linux replacing Windows as the industry
desktop of choice for end users.
And yet they were never able to make VMS a fully 64-bit OS, even on their
own fully 64-bit hardware.
On Fri, 5 Jan 2024 13:28:29 -0000 (UTC), Dan Cross wrote:
It's the perennial joke about Linux replacing Windows as the industry
desktop of choice for end users.
Except Linux was never a “desktop” OS, it is (and always has been) a “workstation” OS.
On Fri, 5 Jan 2024 13:27:14 -0000 (UTC), Dan Cross wrote:
In article <un81en$l6e$2@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
That would necessarily entail dragging in much of the rest of the
operating system.
No it wouldn't.
Any more than WINE entails implementing the whole of
Windows on top of Linux. We don't need any actual supervisor-mode DCL, or
kernel-mode drivers, or any actual ACPs/XQPs, only a layer that emulates
their behaviour, for example. No need for EVL or MPW or the whole queue
system, because Linux already provides plenty of existing facilities for
that kind of thing. No VMScluster rigmarole.
Consider that for both the VAX _and_ Alpha, DEC was able to shape the
design of the hardware _and_ of VMS simultaneously to match one another.
And yet they were never able to make VMS a fully 64-bit OS, even on their
own fully 64-bit hardware.
On Fri, 5 Jan 2024 13:28:29 -0000 (UTC), Dan Cross wrote:
It's the perennial joke about Linux replacing Windows as the industry
desktop of choice for end users.
Except Linux was never a "desktop" OS, it is (and always has been) a
"workstation" OS.
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are they going to return when asked for an item that does not exist on Linux?
Another example:
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with LNM$_TABLE as "LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well documented.
But the code does not really make any sense on Linux. So what to do?
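A toy model of the semantics in question, assuming nothing about any real implementation: SYS$SETPRV grants SYSNAM, after which SYS$CRELNM may write into LNM$SYSTEM_TABLE. The class and condition values below are illustrative stand-ins; the open question (what a shared system table even means when every process is just a Linux process) is exactly what this model does not answer.

```python
# Toy model of SYS$SETPRV + SYS$CRELNM semantics. Numeric condition
# values are placeholders, not the real VMS definitions.
SS_NORMAL = 1
SS_NOPRIV = 36  # placeholder condition value

class ToyProcess:
    def __init__(self):
        self.privs = set()
        self.tables = {"LNM$PROCESS_TABLE": {}, "LNM$SYSTEM_TABLE": {}}

    def setprv(self, enable, privs):
        # SYS$SETPRV-ish: turn privilege bits on or off.
        if enable:
            self.privs |= set(privs)
        else:
            self.privs -= set(privs)
        return SS_NORMAL

    def crelnm(self, table, name, value):
        # SYS$CRELNM-ish: writes to the system table require SYSNAM.
        if table == "LNM$SYSTEM_TABLE" and "SYSNAM" not in self.privs:
            return SS_NOPRIV
        self.tables[table][name] = value
        return SS_NORMAL

p = ToyProcess()
p.crelnm("LNM$SYSTEM_TABLE", "MY_APP_ROOT", "/srv/myapp")  # no SYSNAM yet
p.setprv(True, ["SYSNAM"])
p.crelnm("LNM$SYSTEM_TABLE", "MY_APP_ROOT", "/srv/myapp")  # now allowed
```

Documenting the per-process behaviour is easy; deciding what "system-wide" means on the host OS is the expensive part.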
On 1/5/2024 5:10 PM, Lawrence D'Oliveiro wrote:
And yet they were never able to make VMS a fully 64-bit OS, even on
their own fully 64-bit hardware.
That statement is literally not true.
The issue isn't that we are not capable of doing that; we don't want to
break decades of compatibility in order to do that.
The project of getting the native X86_64 C++ compiler to straddle the
32- and 64-bit world of VMS and play nice with open source that expects
full 64-bitness everywhere would be much easier if we could abandon the 32-bit aspects of VMS, but we cannot, if we expect the vast majority of
our customers to remain on VMS.
On 1/5/2024 5:11 PM, Lawrence D'Oliveiro wrote:
On Fri, 5 Jan 2024 13:28:29 -0000 (UTC), Dan Cross wrote:
It's the perennial joke about Linux replacing Windows as the industry
desktop of choice for end users.
Except Linux was never a “desktop” OS, it is (and always has been) a
“workstation” OS.
Not everybody agrees on that.
How about the US Government?
On Fri, 5 Jan 2024 09:08:42 -0500, Arne Vajhøj wrote:
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are they
going to return when asked for an item that does not exist on Linux?
Maybe, be more specific. Give some examples of info you think would not
make sense to return (or emulate) under Linux, and we can discuss them.
On Fri, 5 Jan 2024 09:23:36 -0500, Arne Vajhøj wrote:
Another example:
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well documented.
But the code does not really make any sense on Linux. So what to do?
We can emulate logical names on Linux beyond the per-process ones with a server process and communication via some IPC mechanism. D-Bus or Varlink might be good enough for this.
Just take a list of all the JPI$_* codes.
The most notorious case has to be the Munich city council, which moved to Linux years ago, then faced a massive pressure campaign from Microsoft
(aided and abetted by HP, I think it was, at one stage) to try to make it appear that they were worse off as a result. Which they were not.
Some like to blame MS for what happened. But the project execution does
not seem an attractive example to follow.
On 1/5/2024 9:33 PM, Lawrence D'Oliveiro wrote:
On Fri, 5 Jan 2024 09:23:36 -0500, Arne Vajhøj wrote:
Another example:
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with LNM$_TABLE
as "LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well documented.
But the code does not really make any sense on Linux. So what to do?
We can emulate logical names on Linux beyond the per-process ones with
a server process and communication via some IPC mechanism. D-Bus or
Varlink might be good enough for this.
And another service for the privileges.
A lot is possible if one is willing to put enough effort into it.
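The "server process plus IPC" shape being proposed can be sketched in miniature with Python's multiprocessing.Manager standing in for the name server (D-Bus or Varlink would be the grown-up transport): the shared table lives in a separate server process, and clients translate names through proxies. Everything here is an illustrative toy, not any shipped product's design.

```python
# Tiny sketch of shared logical names held by a server process.
# multiprocessing.Manager runs the dict in its own process; clients
# talk to it over IPC via proxy objects.
from multiprocessing import Manager

def make_system_table(manager):
    # The shared table lives in the manager's server process.
    return manager.dict()

def define_logical(table, name, value):
    # SYS$CRELNM-ish: define a name visible to every client.
    table[name] = value

def translate_logical(table, name):
    # SYS$TRNLNM-ish: None stands in for SS$_NOLOGNAM.
    return table.get(name)

if __name__ == "__main__":
    with Manager() as manager:
        system_table = make_system_table(manager)
        define_logical(system_table, "SYS$SCRATCH", "/tmp")
        print(translate_logical(system_table, "SYS$SCRATCH"))
        print(translate_logical(system_table, "NO_SUCH_LOGICAL"))
```

The sketch shows the shape only; a real layer would also need access modes, table search order, supervisor/executive distinctions, and the privilege checks from the SYSNAM example.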
The project of getting the native X86_64 C++ compiler to straddle
the 32- and 64-bit world of VMS and play nice with open source that
expects full 64-bitness everywhere would be much easier if we could
abandon the 32-bit aspects of VMS, but we cannot, if we expect the
vast majority of our customers to remain on VMS.
Such a long-winded way of saying _we could not make VMS fully
64-bit, even on our own fully 64-bit hardware_.
On 1/5/2024 9:38 PM, Lawrence D'Oliveiro wrote:
For what I mean by “workstation”, look at the capabilities of the Unix
workstations in the 1980s/1990s: remember, they ran the same OS as their
respective companies’ server offerings, with all the same capabilities. It
was Microsoft that came along and offered a “Workstation” OS that had
cut-down capabilities compared to their “Server” offering, so they could
charge less for the former ... and more for the latter.
Not sure I agree with this at all. It's been a long time and my
memory may not be what it once was but I distinctly remember the
only difference between NT Server and NT Workstation was Registry
Settings.
On 1/5/2024 3:32 PM, Arne Vajhøj wrote:
On 1/5/2024 2:26 PM, bill wrote:
On 1/5/2024 2:05 PM, Arne Vajhøj wrote:
On 1/5/2024 1:43 PM, bill wrote:
On 1/5/2024 1:27 PM, Arne Vajhøj wrote:
On the consumer side I expect drivers like:
- they know Windows
At the user level very low learning curve to change.
It may still be more effort than the average Joe wants to put in.
They all accepted it (like I had to) when MS killed XP and Vista.
The replacement was very different. There are still things I used
to do that I can not figure out on Windows 10.
Apparently the average Joe's of the world think differently.
Not claiming to be an "average Joe", but this is being posted from an XP system. I'm less than happy with the WEENDOZE 7 and later user
interface. Of course, SSL/TLS latest versions don't work here, and I'm limited on browser versions. Nor does my version of SmarTerm work on
WEENDOZE 7 and later.
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work. I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Any ordinary application built for NT or 2000 should still work.
Not hardly. I have boxes of programs from versions of Windows much
newer than NT and 2000 that will not run on my Windows 10 system.
And many more that required that I get a newer version (sometimes
not a free upgrade).
There is no reason why it should not work.
API's are maintained.
- their PC came with Windows
And, if the (Illegal?) pressure from MS was removed they could
just as easily come with Linux. And it could make them cheaper.
It has been tried. Not much sale.
Because the seller was still required to pay the (illegal?) MS tax.
MS dropped that practice in 1994.
30 years ago.
Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Some are comfortable with the subscription model. A lot use
it for various services used by their smartphone.
But there is also a large number of home PC's with Windows but
without MS Office.
Office 2000 works fine for me. Until someone sends me a docx file and
such.
On 1/5/2024 1:27 PM, Arne Vajhøj wrote:
On 1/5/2024 1:08 PM, bill wrote:
But on the subject of Linux vs. MS on the desktop.
Who would you say was the largest single user of MS
Windows products? Why do you think they continue to
use MS instead of Linux?
There are two big groups of Windows usage:
* business - in the office
* consumer - at home
I didn't say classes. I said largest single user. How about
the US Government. Who also happen to be the largest business
(if you really want to call them that) in the US. Definitely
the current largest employer which gives them a lot if users.
On the business side the drivers are probably mostly about
integration.
Windows PC's with Edge, Outlook, Office and Teams works
with Active Directory, SharePoint, phone system, mobile
phones etc..
Too expensive and too risky to try and migrate that
to a Linux-based solution.
Actually, the biggest reason is more likely to be political.
Or government financial (another system that would bankrupt
any real business!!)
On the consumer side I expect drivers like:
- they know Windows
At the user level very low learning curve to change.
- they use Windows at work
I would expect that most of the people who develop Linux OS and
apps in their spare time use Windows at work. One does not preclude
the other.
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work. I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
- their PC came with Windows
And, if the (Illegal?) pressure from MS was removed they could
just as easily come with Linux. And it could make them cheaper.
The hassle of changing to Linux is not worth it given
how cheap Windows is for consumers.
Something not guaranteed to continue. Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Only time will tell but I really think like so many other IT
Giants MS's time is running out. I only wish I was likely to
still be around to see it. :-)
bill
On Fri, 5 Jan 2024 23:25:38 -0500, Arne Vajhøj wrote:
Just take a list of all the JPI$_* codes.
OK, looking at the VMS 5.5 System Services manual from Bitsavers (they don’t seem to have anything more recent):
JPI$_ACCOUNT -- we can maintain that per-process
JPI$_APTCNT -- same as the resident working set
JPI$_ASTACT -- ASTs would have to be maintained as part of the emulation layer, this count would come from there
JPI$_ASTCNT -- ditto
JPI$_ASTEN -- ditto ditto
JPI$_ASTLM -- ditto ditto ditto
JPI$_AUTHPRI -- equivalent to the “nice” value
JPI$_AUTHPRIV -- either emulation layer, or just some dummy value
JPI$_BIOCNT -- just a count of I/O operations to block devices in progress
JPI$_BIOLM -- a limit that could be imposed by the emulation layer
JPI$_BUFIO -- same thing, but for buffered I/O this time
JPI$_BYTCNT -- ditto
JPI$_BYTLM -- ditto
JPI$_CHAIN -- hmm, new to me, but no problem
JPI$_CLINAME -- part of the emulation layer (CLI would run in a separate Linux process, of course, but there’s no reason the VMS code needs to be aware of that)
JPI$_CPU_ID -- straightforward extraction from /sys/devices/system/cpu
JPI$_CPULIM -- can be obtained from prlimit(2)/getrlimit(2)
JPI$_CPUTIM -- can be obtained from getrusage(2)
JPI$_CREPRC_FLAGS -- maintained by emulation layer
So that’s the second page done. I could keep going on, but do you want to shortcut the process by pointing out where you think the traps lie?
I also believe there are more Linux desktops out there than people give credit for.
On 1/5/2024 9:38 PM, Lawrence D'Oliveiro wrote:
For what I mean by “workstation”, look at the capabilities of the Unix
workstations in the 1980s/1990s: remember, they ran the same OS as
their respective companies’ server offerings, with all the same
capabilities. It was Microsoft that came along and offered a
“Workstation” OS that had cut-down capabilities compared to their
“Server” offering, so they could charge less for the former ... and
more for the latter.
Not sure I agree with this at all. It's been a long time and my memory
may not be what it once was but I distinctly remember the only
difference between NT Server and NT Workstation was Registry Settings.
There may be no reason, but I still have a lot of software that
ran fine on Vista and XP that does not run on Win 10.
I have old versions of Office (still have a bunch of OEM ones in the shrinkwrap) that work just fine. Don't need them as I moved to Open
Source Office a long time ago.
So you see, on the Unix side, the vendors never thought to charge any different for the “workstation” versus “server” software, because it was
the exact same software, with the exact same capabilities.
Today, the only OS in widespread use with this commonality of function
across disparate hardware configurations is Linux.
It can't be made fully 64-bit without breaking source-level
compatibility with customer code.
...
Obviously, DEC needed a 64-bit VMS. They also needed it /soon/, so they
added 64-bit versions of the APIs that most needed to deal with lots of memory. Quite a lot of APIs that took pointers to user memory carried on taking 32-bit pointers, and thus could only deal with data in the bottom
2GB of a process address space.
They probably intended to add 64-bit versions of all the other APIs, but
this never happened, for reasons that probably included some of:
* Lack of budget: DEC was never as successful in the 1990s as it
had been in the 1980s.
The official CEO blurb from VSI is on their LinkedIn page:
https://www.linkedin.com/company/vms-software-inc-/
On 1/6/2024 3:11 PM, Lawrence D'Oliveiro wrote:
So you see, on the Unix side, the vendors never thought to charge any
different for the “workstation” versus “server” software, because it
was the exact same software, with the exact same capabilities.
Commercial Unix was usually sold as systems - a bundle of HW and OS.
Making the distinction irrelevant.
Most commercial Linux distros have both a server version and a
desktop version. Including Redhat, SUSE and Ubuntu.
On Sat, 6 Jan 2024 15:30 +0000 (GMT Standard Time), John Dallman wrote:
It can't be made fully 64-bit without breaking source-level
compatibility with customer code.
Yes, but remember, at the same time, they were able to bring out their own Unix OS for the same hardware, and make it fully 64-bit from the get-go.
Look at how the Linux kernel does it, on platforms (e.g. x86) where 32-bit code still matters: it is able to be fully 64-bit internally, yet offer
both 32-bit and 64-bit APIs to userland.
By about 1996, there were 4 OSes that you might say were in common use on Alpha: DEC Unix, OpenVMS, Windows NT, and Linux. Two of them (Unix and
Linux) were fully 64-bit; one (OpenVMS) was a hybrid of 32- and 64-bit
code; and Windows NT was 32-bit only.
On Fri, 5 Jan 2024 09:23:36 -0500, Arne Vajhøj wrote:
Another example:
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with
LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well
documented.
But the code does not really make any sense on Linux. So what to
do?
We can emulate logical names on Linux beyond the per-process ones
with a server process and communication via some IPC mechanism. D-Bus
or Varlink might be good enough for this.
And yet they were never able to make VMS a fully 64-bit OS, even on
their own fully 64-bit hardware.
On 1/5/2024 11:52 PM, Lawrence D'Oliveiro wrote:
So that’s the second page done. I could keep going on, but do you want
to shortcut the process by pointing out where you think the traps lie?
It becomes complex to maintain that process state in a VMS process style
aka across image activations.
On Sat, 2024-01-06 at 02:33 +0000, Lawrence D'Oliveiro wrote:
We can emulate logical names on Linux beyond the per-process ones with
a server process and communication via some IPC mechanism. D-Bus or
Varlink might be good enough for this.
if setenv() and getenv() were thread-safe it'd be easier to use these.
No such concern was made for the Ultrix customers going to DEC OSF/1 aka DUNIX aka Tru64.
DEC made less money from Ultrix. Ultrix and OSF/1 was two different
Unixes so compatibility would have been difficult anyway. And porting C
code using a C API was easier anyway.
VMS has only 64 bit code but both 32 bit pointers and 64 bit pointers
(32 bit pointers getting extended to 64 bit addresses).
I remember pretty specifically maximum user limits on versions of
commercial Unix.
Today, the only OS in widespread use with this commonality of function
across disparate hardware configurations is Linux.
Or FreeBSD. Or OpenBSD.
On Sat, 6 Jan 2024 10:47:11 -0500, bill wrote:
On 1/5/2024 9:38 PM, Lawrence D'Oliveiro wrote:
For what I mean by “workstation”, look at the capabilities of the Unix
workstations in the 1980s/1990s: remember, they ran the same OS as
their respective companies’ server offerings, with all the same
capabilities. It was Microsoft that came along and offered a
“Workstation” OS that had cut-down capabilities compared to their
“Server” offering, so they could charge less for the former ... and
more for the latter.
Not sure I agree with this at all. It's been a long time and my memory
may not be what it once was but I distinctly remember the only
difference between NT Server and NT Workstation was Registry Settings.
You are remembering NT 3.51, I think it was, when somebody discovered
that, indeed, all it took was a single Registry setting change to enable
“Server” functionality on an NT “Workstation” installation.
Microsoft fixed that in the next version. Remember, it was not in their
interests to allow this sort of thing to continue, given the significant
difference in price between the two products.
So you see, on the Unix side, the vendors never thought to charge any
different for the "workstation" versus "server" software, because it was
the exact same software, with the exact same capabilities.
Today, the only OS in widespread use with this commonality of function
across disparate hardware configurations is Linux.
On Sat, 6 Jan 2024 15:59:58 -0500, Arne Vajhøj wrote:
No such concern was made for the Ultrix customers going to DEC OSF/1 aka
DUNIX aka Tru64.
DEC made less money from Ultrix. Ultrix and OSF/1 was two different
Unixes so compatibility would have been difficult anyway. And porting C
code using a C API was easier anyway.
You almost got the point, didn't you? That POSIX had defined standard
types like "time_t" and "size_t", and code that was written to adhere to
those types as appropriate was much easier to port between different
architectures. This applied to customer code, to third-party code ... to
all code.
And POSIX already existed when Dave Cutler commenced development on
Windows NT. Back when he was starting VMS, he could claim ignorance of
such techniques for avoiding obsolescence; what was his excuse this time?
VMS has only 64 bit code but both 32 bit pointers and 64 bit pointers
(32 bit pointers getting extended to 64 bit addresses).
Not sure how you can have 64-bit code without 64-bit addressing ...
that is practically the essence of 64-bit code.
On Sat, 6 Jan 2024 15:59:58 -0500, Arne Vajhøj wrote:
VMS has only 64 bit code but both 32 bit pointers and 64 bit pointers
(32 bit pointers getting extended to 64 bit addresses).
Not sure how you can have 64-bit code without 64-bit addressing ... that
is practically the essence of 64-bit code.
Does that “64-bit” code on VMS still call LIB$EMUL?
On Sat, 2024-01-06 at 02:33 +0000, Lawrence D'Oliveiro wrote:
On Fri, 5 Jan 2024 09:23:36 -0500, Arne Vajhøj wrote:
Another example:
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with
LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well
documented.
But the code does not really make any sense on Linux. So what to
do?
We can emulate logical names on Linux beyond the per-process ones
with a server process and communication via some IPC mechanism. D-Bus
or Varlink might be good enough for this.
if setenv() and getenv() were thread-safe it'd be easier to use these.
On Sat, 6 Jan 2024 23:42:26 -0000 (UTC), Dan Cross wrote:
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to some
extra-cost "layered product", not to the core OS.
Because consider that users are defined in /etc/passwd, which is just a
text file. How would you limit the number of lines in that?
And the kernel itself knows nothing of which user/group IDs are "valid"
or "invalid"; it will happily accept any numbers within the permissible
ranges, regardless of whether they appear in /etc/passwd or not. A
network service (like Telnet or SSH or file service) could limit the
number of concurrent connections, I suppose. But given there was
open-source code available for all of that anyway, it would be easy
enough to bypass the limits by replacing the vendor-provided code.
(Unless maybe you're talking about IBM's AIX. I am dimly aware that that
had its own proprietary ways of configuring things, that the traditional
*nix text-based configuration files were only a partial reflection of
that.)
Today, the only OS in widespread use with this commonality of function
across disparate hardware configurations is Linux.
Or FreeBSD. Or OpenBSD.
I did say "widespread". ;)
It took literally decades from the introduction of 64-bit Unix machines
until most software was 64-bit clean.
I was there; it was a painful
time, and Linux was actually behind the curve here compared to many of
the commercial vendors.
The mere existence of those types a) didn't help the piles of code that
was sloppy and made assumptions about primitive types ...
In article <unck70$p3mp$4@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 15:59:58 -0500, Arne Vajhøj wrote:
VMS has only 64 bit code but both 32 bit pointers and 64 bit pointers
(32 bit pointers getting extended to 64 bit addresses).
Not sure how you can have 64-bit code without 64-bit addressing ...
Of course you're not. "64-bit code" for something like x86
refers to details of the processor mode and e.g. the handling
of the REX prefix. On Alpha or Itanium, presumably that means
using the 64-bit ISA that uses e.g. 64-bit registers and so on.
But in either case, that's distinct from whether data pointers in
userspace are represented as 32-bit values, as only the low 2GiB of
the address space is used by VMS applications.
Much of this was before SSH was invented, and way before "open source"
was the force it is today.
Yes, but remember, at the same time, they were able to bring out
their own Unix OS for the same hardware, and make it fully 64-bit
from the get-go.
Look at how the Linux kernel does it, on platforms (e.g. x86) where
32-bit code still matters: it is able to be fully 64-bit
internally, yet offer both 32-bit and 64-bit APIs to userland.
On 1/6/2024 4:22 PM, Single Stage to Orbit wrote:
On Sat, 2024-01-06 at 02:33 +0000, Lawrence D'Oliveiro wrote:
On Fri, 5 Jan 2024 09:23:36 -0500, Arne Vajhøj wrote:
Another example:
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with
LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well
documented.
But the code does not really make any sense on Linux. So what to
do?
We can emulate logical names on Linux beyond the per-process ones
with a server process and communication via some IPC mechanism. D-Bus
or Varlink might be good enough for this.
if setenv() and getenv() were thread-safe it'd be easier to use these.
I see environment variables being closer to VMS symbols than to
VMS logicals.
But just closer not identical.
On Sat, 6 Jan 2024 15:09:25 -0500, Arne Vajhøj wrote:
On 1/5/2024 11:52 PM, Lawrence D'Oliveiro wrote:
So that’s the second page done. I could keep going on, but do you want
to shortcut the process by pointing out where you think the traps lie?
It becomes complex to maintain that process state in a VMS process style
aka across image activations.
Not sure how that’s relevant to the question about $GETJPI.
On Sun, 7 Jan 2024 00:27:15 -0000 (UTC), Dan Cross wrote:
It took literally decades from the introduction of 64-bit Unix machines
until most software was 64-bit clean.
I was doing Unix sysadmin work on DEC Alphas in the late 1990s until the >early 2000s, when the client saw the writing on the wall and moved to
Linux (and so did I).
They frequently asked me to download, build and install various items of >open-source software. I don't recall ever having a problem with 64-bitness >per se.
I was there; it was a painful
time, and Linux was actually behind the curve here compared to many of
the commercial vendors.
Jon "maddog" Hall shipped an Alpha to Linus Torvalds somewhere around
1995,
and Linux was running native 64-bit on DEC Alpha in releasable form
by about 1996.
That was only the second hardware platform that Linux
had been implemented on, at that stage. So it went portable at the same
time it went 64-bit.
The mere existence of those types a) didn't help the piles of code that
was sloppy and made assumptions about primitive types ...
Piles of proprietary code, certainly.
In article <uncrcr$q371$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 1/6/2024 4:22 PM, Single Stage to Orbit wrote:
On Sat, 2024-01-06 at 02:33 +0000, Lawrence D'Oliveiro wrote:
On Fri, 5 Jan 2024 09:23:36 -0500, Arne Vajhøj wrote:
Another example:
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with
LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well
documented.
But the code does not really make any sense on Linux. So what to
do?
We can emulate logical names on Linux beyond the per-process ones
with a server process and communication via some IPC mechanism. D-Bus
or Varlink might be good enough for this.
if setenv() and getenv() were thread-safe it'd be easier to use these.
I see environment variables being closer to VMS symbols than to
VMS logicals.
But just closer not identical.
Eh. Emulation of logical names would likely be done by
maintaining and consulting symbol tables in the DCL "shell" (or
whatever emulated DCL in this frankenstein "VMS on Linux"
monstrosity) for user mode logicals and a symbol table
maintained in a region of shared memory owned by some DSO-like
shared object for the others.
The idea of using some service that one communicates with via
dbus to emulate logical names is absurd.
On 1/6/2024 8:04 PM, Dan Cross wrote:
In article <uncrcr$q371$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 1/6/2024 4:22 PM, Single Stage to Orbit wrote:
On Sat, 2024-01-06 at 02:33 +0000, Lawrence D'Oliveiro wrote:
On Fri, 5 Jan 2024 09:23:36 -0500, Arne Vajhøj wrote:
Another example:
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with
LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well
documented.
But the code does not really make any sense on Linux. So what to
do?
We can emulate logical names on Linux beyond the per-process ones
with a server process and communication via some IPC mechanism. D-Bus
or Varlink might be good enough for this.
if setenv() and getenv() were thread-safe it'd be easier to use these.
I see environment variables being closer to VMS symbols than to
VMS logicals.
But just closer not identical.
Eh. Emulation of logical names would likely be done by
maintaining and consulting symbol tables in the DCL "shell" (or
whatever emulated DCL in this frankenstein "VMS on Linux"
monstrosity) for user mode logicals and a symbol table
maintained in a region of shared memory owned by some DSO-like
shared object for the others.
The idea of using some service that one communicates with via
dbus to emulate logical names is absurd.
Direct access to shared memory would be more efficient
than an IPC to a process. But it also increases the risk
of data corruption. That is not a problem in VMS because
the data structures can not be trashed from user mode code,
but in the frankenstein "VMS on Linux" I don't know.
Note that mode and table are (mostly) independent.
Logicals can be user, supervisor, exec or kernel mode.
Logicals reside in process, job, group, system, cluster,
decwindows or a custom logical table.
On Sat, 6 Jan 2024 00:10:26 -0500, Arne Vajhøj wrote:
Some like to blame MS for what happened. But the project execution does
not seem attractive to follow.
It saved money over all. That was one of the main points of the exercise.
On 1/6/2024 5:23 PM, Lawrence D'Oliveiro wrote:
On Sat, 6 Jan 2024 15:09:25 -0500, Arne Vajhøj wrote:
On 1/5/2024 11:52 PM, Lawrence D'Oliveiro wrote:
So that’s the second page done. I could keep going on, but do you
want to shortcut the process by pointing out where you think the
traps lie?
It becomes complex to maintain that process state in a VMS process
style aka across image activations.
Not sure how that’s relevant to the question about $GETJPI.
$GETJPI retrieves that info, so having the info be correct per VMS
semantics matters for $GETJPI, and VMS semantics are a bit tricky
because of the differences between VMS and *nix.
On 1/6/2024 12:22 AM, Lawrence D'Oliveiro wrote:
On Sat, 6 Jan 2024 00:10:26 -0500, Arne Vajhøj wrote:
Some like to blame MS for what happened. But the project execution
does not seem attractive to follow.
It saved money over all. That was one of the main points of the
exercise.
The bottom line was that the chosen Linux & OOo/LO strategy had 11
million Euro lower cost than the Windows & MSO strategy.
But the assumption was that staff and end user training cost was the
same for doing the switch as for just upgrading the MS solution.
And the software creation including the approx. 65000 lines of code
(mostly Java) for WollMux is set to zero.
In article <uncu14$q8b0$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 1/6/2024 8:04 PM, Dan Cross wrote:
In article <uncrcr$q371$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 1/6/2024 4:22 PM, Single Stage to Orbit wrote:
On Sat, 2024-01-06 at 02:33 +0000, Lawrence D'Oliveiro wrote:
On Fri, 5 Jan 2024 09:23:36 -0500, Arne Vajhøj wrote:
Another example:
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with
LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well
documented.
But the code does not really make any sense on Linux. So what to
do?
We can emulate logical names on Linux beyond the per-process ones
with a server process and communication via some IPC mechanism. D-Bus
or Varlink might be good enough for this.
if setenv() and getenv() were thread-safe it'd be easier to use these.
I see environment variables being closer to VMS symbols than to
VMS logicals.
But just closer not identical.
Eh. Emulation of logical names would likely be done by
maintaining and consulting symbol tables in the DCL "shell" (or
whatever emulated DCL in this frankenstein "VMS on Linux"
monstrosity) for user mode logicals and a symbol table
maintained in a region of shared memory owned by some DSO-like
shared object for the others.
The idea of using some service that one communicates with via
dbus to emulate logical names is absurd.
Direct access to shared memory would be more efficient
than an IPC to a process. But it also increases the risk
of data corruption. That is not a problem in VMS because
the data structures can not be trashed from user mode code,
but in the frankenstein "VMS on Linux" I don't know.
This could actually be handled relatively straight-forwardly.
If the data were provided in the form of a DSO, then the kernel
could manage the mapping of this region so that it was
read-only; since the kernel is providing the data, it would
catch the page fault on a write, fetch the faulting operation
from the (user) program, and emulate it with appropriate
interlocks to avoid corruption. I don't know, but I imagine
VSI already does something similar in VMS on x86.
Note that mode and table are (mostly) independent.
Logicals can be user, supervisor, exec or kernel mode.
Logicals reside in process, job, group, system, cluster,
decwindows or a custom logical table.
Indeed. This whole nonsense about "VMS on Linux" makes the
extent of the problem even more glaring.
- Dan C.
On Sat, 6 Jan 2024 20:31:08 -0500, Arne Vajhøj wrote:
On 1/6/2024 12:22 AM, Lawrence D'Oliveiro wrote:
On Sat, 6 Jan 2024 00:10:26 -0500, Arne Vajhøj wrote:
Some like to blame MS for what happened. But the project execution
does not seem attractive to follow.
It saved money over all. That was one of the main points of the
exercise.
The bottom line was that the chosen Linux & OOo/LO strategy had 11
million Euro lower cost than the Windows & MSO strategy.
That is what "saving money" means, does it not?
But the assumption was that staff and end user training cost was the
same for doing the switch as for just upgrading the MS solution.
Given the major, disruptive changes that tend to happen between versions
of Microsoft's software, that kind of thing sounds entirely reasonable.
Particularly since you have more control over UI changes on the Linux
side. They created their own "LiMux" distro, as I recall, as part of the
implementation.
And the software creation including the approx. 65000 lines of code
(mostly Java) for WollMux is set to zero.
Again, presumably just equivalent to similar software development that
would have had to be done on Windows anyway. And with inferior Windows
tools, to boot.
The development of WollMux itself was probably around
a million euros.
In article <uncc5u$ns66$2@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 10:47:11 -0500, bill wrote:
On 1/5/2024 9:38 PM, Lawrence D'Oliveiro wrote:
For what I mean by “workstation”, look at the capabilities of the Unix
workstations in the 1980s/1990s: remember, they ran the same OS as
their respective companies’ server offerings, with all the same
capabilities. It was Microsoft that came along and offered a
“Workstation” OS that had cut-down capabilities compared to their
“Server” offering, so they could charge less for the former ... and
more for the latter.
Not sure I agree with this at all. It's been a long time and my memory
may not be what it once was but I distinctly remember the only
difference between NT Server and NT Workstation was Registry Settings.
You are remembering NT 3.51, I think it was, when somebody discovered
that, indeed, all it took was a single Registry setting change to enable
“Server” functionality on an NT “Workstation” installation.
Microsoft fixed that in the next version. Remember, it was not in their
interests to allow this sort of thing to continue, given the significant
difference in price between the two products.
So you see, on the Unix side, the vendors never thought to charge any
different for the "workstation" versus "server" software, because it was
the exact same software, with the exact same capabilities.
I remember pretty specifically maximum user limits on versions
of commercial Unix. Most of the time it didn't matter for a
workstation, where only one user at a time (generally) was
logged into the machine. For servers and timesharing hosts?
It was a big deal.
Today, the only OS in widespread use with this commonality of function
across disparate hardware configurations is Linux.
Or FreeBSD. Or OpenBSD.
- Dan C.
In article <uncqas$pust$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 23:42:26 -0000 (UTC), Dan Cross wrote:
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to some
extra-cost "layered product", not to the core OS.
No, they applied to the OS as a whole.
Don't remember that at all. Not on SGI, Sun or HPUX, nor Ultrix, fwir.
Examples ?...
In article <uncqas$pust$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 23:42:26 -0000 (UTC), Dan Cross wrote:
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to some
extra-cost "layered product", not to the core OS.
No, they applied to the OS as a whole.
[snip]
Or FreeBSD. Or OpenBSD.
Been running FreeBSD for years now. Works out of the box on various
architectures and a base install takes around 20 minutes. Ditched
Linux as it became more bloated and especially, the systemd trainwreck,
which I saw as a power grab by RedHat. Gross amount of complexity added
for no good reason. Having said that, have Suse and xubuntu installed
on a couple of machines, for software compatibility testing reasons.
Always liked Suse Linux in the past, but again systemd, the disease
that has infected so many Linux distros.
As for licensing, and having been around many vendor's unix offerings
for decades, the only onerous licensing was associated with third
party apps, where a license manager needed to be installed to run
the app. Embedded C cross compilers, real time os, and tools,for
example.
With Sun, the os came with the machine and you could do more or
less what you wanted to do with it. A full set of tools and basic C
compiler out of the box. If you had the hardware, the os revision
for that hardware release was perpetually licensed. Compared to a
greedy DEC, some still wonder why Sun became so successful...
On Sun, 7 Jan 2024 03:17:11 -0000 (UTC), Dan Cross wrote:
The development of WollMux itself was probably around
a million euros.
Speculation? Developing a whole entire Linux distro can actually be done
on a shoestring.
if setenv() and getenv() were thread-safe it'd be easier to use
these.
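A minimal sketch of working around that complaint (in Python here for brevity; the C problem is the same): serialize all environment access through a single lock, since the underlying environment is shared mutable state. The function names are invented:

```python
import os
import threading

# One lock guards all environment access, so concurrent readers and
# writers cannot interleave.  A sketch only; names are illustrative.
_env_lock = threading.Lock()

def safe_getenv(name, default=None):
    """Read an environment variable under the lock, returning a copy."""
    with _env_lock:
        return os.environ.get(name, default)

def safe_setenv(name, value):
    """Write an environment variable under the same lock."""
    with _env_lock:
        os.environ[name] = value
```

As long as every thread goes through the wrappers, reads never observe a half-updated environment.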
I see environment variables being closer to VMS symbols than to
VMS logicals.
But just closer not identical.
I see environment variables being closer to VMS symbols than to
VMS logicals.
But just closer not identical.
Eh. Emulation of logical names would likely be done by
maintaining and consulting symbol tables in the DCL "shell" (or
whatever emulated DCL in this frankenstein "VMS on Linux"
monstrosity) for user mode logicals and a symbol table
maintained in a region of shared memory owned by some DSO-like
shared object for the others.
The idea of using some service that one communicates with via
dbus to emulate logical names is absurd.
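The scheme described above (a per-process table in the DCL emulator plus a shared table for the rest) can be sketched as a toy model; the resolution order and all names here are assumptions, not how any real port works:

```python
# Toy model of VMS-style logical name resolution: a per-process table
# (user-mode logicals, owned by the DCL-emulating shell) is consulted
# before a shared system-wide table (which a real port might keep in
# shared memory).  Everything here is illustrative.

process_table = {}
system_table = {}

def define_logical(name, value, table="process"):
    """DEFINE-style: record a translation in the chosen table."""
    target = process_table if table == "process" else system_table
    target[name.upper()] = value

def translate_logical(name):
    """Translate iteratively, the way VMS chases logical name chains."""
    name = name.upper()
    seen = set()
    while name not in seen:            # guard against circular chains
        seen.add(name)
        nxt = process_table.get(name) or system_table.get(name)
        if nxt is None:
            return name                # no further translation
        name = nxt.upper()
    return name
```

No dbus required: two dictionary lookups in the right order capture the whole idea.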
Even without VMS under linux, I have always wanted a dclsh. Remember
that wonderful, very limited PCDCL? I wrote scripts with that to do
things I couldn't do in a PC batch file. In a production environment.
Not that I need it now - I always write scripts in ksh, despite
normally using bash under linux
In article <une90c$1345e$1@dont-email.me>, chrisq <devzero@nospam.com> wrote:
On 1/7/24 00:19, Dan Cross wrote:
In article <uncqas$pust$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 23:42:26 -0000 (UTC), Dan Cross wrote:
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to some
extra-cost "layered product", not to the core OS.
No, they applied to the OS as a whole.
Don't remember that at all. Not on SGI, Sun or HPUX, nor Ultrix, fwir.
Examples ?...
SCO, IIRC. https://www.tech-insider.org/unix/research/1997/0407.html
- Dan C.
On 1/7/24 14:04, Dan Cross wrote:
I remember seeing the writing on the wall when a friend of mine
was showing me a Pentium PC: "It's about half the speed of a
SPARCstation-5, but a quarter of the cost." Then they ditched
their core business to concentrate on Java standards. That's
when it was obvious Sun was going to fail: it was just a matter
of time.
Perhaps Sun did lose their way a bit, but it was the early 90's
recession, the dot com boom crash, that caused the most damage.
Dozens of companies went bust and in some ways, that culture
of innovation and progress has never recovered since. It's been
an interesting journey though :-)...
In article <une6iq$12vd9$1@dont-email.me>, chrisq <devzero@nospam.com> wrote:
On 1/6/24 23:42, Dan Cross wrote:
[snip]
Or FreeBSD. Or OpenBSD.
Been running FreeBSD for years now. Works out of the box on various
architectures and a base install takes around 20 minutes. Ditched
Linux as it became more bloated and especially, the systemd trainwreck,
which I saw as a power grab by RedHat. Gross amount of complexity added
for no good reason. Having said that, have Suse and xubuntu installed
on a couple of machines, for software compatibility testing reasons.
Always liked Suse Linux in the past, but again systemd, the disease
that has infected so many Linux distros.
As for licensing, and having been around many vendor's unix offerings
for decades, the only onerous licensing was associated with third
party apps, where a license manager needed to be installed to run
the app. Embedded C cross compilers, real time os, and tools, for
example.
AIX licensing was a pain.
With Sun, the os came with the machine and you could do more or
less what you wanted to do with it. A full set of tools and basic C
compiler out of the box. If you had the hardware, the os revision
for that hardware release was perpetually licensed. Compared to a
greedy DEC, some still wonder why Sun became so successful...
Ah SunOS. In so many ways, the Unix par excellence. It was sad
when they unbundled the C compiler and ditched the BSD kernel
with the switch to SVR4. SunPro was not cheap.
I remember seeing the writing on the wall when a friend of mine
was showing me a Pentium PC: "It's about half the speed of a
SPARCstation-5, but a quarter of the cost." Then they ditched
their core business to concentrate on Java standards. That's
when it was obvious Sun was going to fail: it was just a matter
of time.
- Dan C.
Speculation? Developing a whole entire Linux distro can actually be
done on a shoestring.
Supported over 10 years? Tell me you've never supported a
custom linux distro without telling me.
AIX licensing was a pain.
Don't remember that at all. Not on SGI, Sun or HPUX, nor Ultrix, fwir.
Examples ?...
SCO, IIRC. https://www.tech-insider.org/unix/research/1997/0407.html
Lol. I would not touch anything SCO. Fwir, they exist purely to
litigate against others :-)...
The whole command-line concept on VMS is fundamentally flawed. Notice that
on *nix, the command line is not a single string, it is an array of
strings. This makes it easy to pass special characters that might mean
something to the shell, simply by bypassing the shell.
And this is another case where Cutler seemed unable to learn from his
mistakes: he put the same brain damage into Windows NT.
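The argv-array point is easy to demonstrate (Python used here for brevity; execve() in C behaves the same way): because no shell ever parses the vector, metacharacters reach the child untouched, with no quoting gymnastics:

```python
import subprocess
import sys

def run_without_shell(argv):
    """Run a program from an explicit argument vector; no shell parses it."""
    return subprocess.run(argv, capture_output=True, text=True).stdout

# Shell metacharacters arrive in the child verbatim, because the
# kernel passes the argv array through unchanged -- no quoting needed.
tricky = "semi;colons && $VARS `backticks`"
echoed = run_without_shell(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", tricky]
)
```

With a single command string, by contrast, every one of those characters would need escaping for whichever shell (or DCL) reparses the line.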
On Sat, 6 Jan 2024 00:10:26 -0500, Arne Vajhøj wrote:
Some like to blame MS for what happened. But the project execution does
not seem attractive to follow.
It saved money over all. That was one of the main points of the exercise.
In article <une90c$1345e$1@dont-email.me>, chrisq <devzero@nospam.com> wrote:
On 1/7/24 00:19, Dan Cross wrote:
In article <uncqas$pust$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 23:42:26 -0000 (UTC), Dan Cross wrote:
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to some
extra-cost "layered product", not to the core OS.
No, they applied to the OS as a whole.
Don't remember that at all. Not on SGI, Sun or HPUX, nor Ultrix, fwir.
Examples ?...
SCO, IIRC. https://www.tech-insider.org/unix/research/1997/0407.html
Lol. I would not touch anything SCO. Fwir, they exist purely to
litigate against others :-)...
On 1/6/2024 12:22 AM, Lawrence D'Oliveiro wrote:
On Sat, 6 Jan 2024 00:10:26 -0500, Arne Vajhøj wrote:
Some like to blame MS for what happened. But the project execution does
not seem attractive to follow.
It saved money over all. That was one of the main points of the exercise.
I am not sure a CIO would see it that way.
The official story is that it saved 11 million Euro.
That number came from a report provided to the city council 10 years
after project start, where they compared what they chose with
alternative strategies.
The bottom line was that the chosen Linux & OOo/LO strategy
had 11 million Euro lower cost than the Windows & MSO strategy.
Basically: MSO licenses 4.2 million, Windows licenses 2.6 million
and HW upgrades required by Windows 5 million compared to 0.3
million for Limux.
I am sure the numbers are correct. Or as close as practically
possible.
But the assumption was that staff and end user training
cost was the same for doing the switch as for just upgrading
the MS solution.
And the software creation including the approx. 65000 lines
of code (mostly Java) for WollMux is set to zero.
In article <une6iq$12vd9$1@dont-email.me>, chrisq <devzero@nospam.com> wrote:
On 1/6/24 23:42, Dan Cross wrote:
[snip]
Or FreeBSD. Or OpenBSD.
Been running FreeBSD for years now. Works out of the box on various
architectures and a base install takes around 20 minutes. Ditched
Linux as it became more bloated and especially, the systemd trainwreck,
which I saw as a power grab by RedHat. Gross amount of complexity added
for no good reason. Having said that, have Suse and xubuntu installed
on a couple of machines, for software compatibility testing reasons.
Always liked Suse Linux in the past, but again systemd, the disease
that has infected so many Linux distros.
As for licensing, and having been around many vendor's unix offerings
for decades, the only onerous licensing was associated with third
party apps, where a license manager needed to be installed to run
the app. Embedded C cross compilers, real time os, and tools, for
example.
AIX licensing was a pain.
A single example :-).
Have an RS6000 machine here, AIX 6 from memory,
and was able to download a whole set of updates from the IBM site
without a single question about licensing. Filled in a form, then
got an email when the update set was ready. Seems some don't like
aix, but just another unix under the hood. The built in system
management and diagnostic tools are some of the best I've seen
anywhere. Probably expensive formally, but no worse than DEC in
the old days, or Sun since the Oracle takeover.
With Sun, the os came with the machine and you could do more or
less what you wanted to do with it. A full set of tools and basic C
compiler out of the box. If you had the hardware, the os revision
for that hardware release was perpetually licensed. Compared to a
greedy DEC, some still wonder why Sun became so successful...
Ah SunOS. In so many ways, the Unix par excellence. It was sad
when they unbundled the C compiler and ditched the BSD kernel
with the switch to SVR4. SunPro was not cheap.
Yes, it was. Remember one company around 1990 that bought one of
the early Sun 3/60 workstations. Pushed the boat out for the full
colour 19" display, maxed out memory and storage, and we were
all blown away by the machine, capabilities and performance. It
was a few years later, doing comparisons between a uVax GPX, VMS
and a Sun 3/60, compiling Tex source and the Sun 3 was 4-5 times
faster.
Spent years working and programming DEC, but such hard work to get
anything done on VMS for s/w development, compared to the unix.
Everything an added cost, very little open source, when by then there was
a whole raft of open source from ftp sites for Sun machines. Then
Sunsites all over the world helping to spread the word.
Different business model and target market I guess, but never
looked back to DEC since.
Only switched off the last Sparc box here around a year ago. No
problem with the system, but the cost of energy now makes it
totally uneconomic to run some of the older hardware 24x7.
I remember seeing the writing on the wall when a friend of mine
was showing me a Pentium PC: "It's about half the speed of a
SPARCstation-5, but a quarter of the cost." Then they ditched
their core business to concentrate on Java standards. That's
when it was obvious Sun was going to fail: it was just a matter
of time.
Perhaps Sun did lose their way a bit, but it was the early 90's
recession, the dot com boom crash, that caused the most damage.
Dozens of companies went bust and in some ways, that culture
of innovation and progress has never recovered since. It's been
an interesting journey though :-)...
On Sun, 2024-01-07 at 14:05 +0000, Dan Cross wrote:
Speculation? Developing a whole entire Linux distro can actually be
done on a shoestring.
Supported over 10 years? Tell me you've never supported a
custom linux distro without telling me.
Haha :-D
Slackware since 1997 and then Gentoo after 2002. Fab times. One of the
oldest files on my system that's come with me when I moved to newer
machines is the emerge log and that was started in 2005.
I've gone through 32bit x86, 32bit SPARC, 32bit PPC, 64bit SPARC, 64bit
x86 over the years. Hopefully Gentoo will still be with me when I boldly
go where no man has ever returned from.
I remember hating it. Coming from a more "traditional" Unix
background, it was ... weird. Printing, storage management,
man pages, the security infrastructure, all felt gratuitously
different for no real reason. You were almost forced to use
their menu-driven management tools, but as the USENIX button at
the time said, "SMIT happens." It all felt very big-M
"Mainframe" inspired. The compilers were very good, and the
machines were fast, but the developer tools weren't bundled and
I remembered fighting a lot of third-party software to get it to
compile and run properly.
That was all weird because, on the 6150 ("RT") machines they had
offered a very nice version of 4.3BSD Tahoe plus NFS to the
academic community; clearly, people at IBM knew how to "do" Unix
right.
Weirdest for me was the lack of a real console. There was a
3-digit 7-segment LED display that would cycle through various
numbers as the system booted up; things that would have been
emitted to a serial port on a VAX (or even a Sun) were instead
represented by random collections of digits, and there was a
book you had to look at to see what was going on if something
hung. Something like "371" was "fsck failed on /usr." (I don't
recall if that was the exact code). Then there was the damned
key, where the system wouldn't boot if it was in the "locked"
position. Which sucked if the machine crashed for some random
reason. I walked into a lab one day and the entire network was
down because all the machines had crashed over some network
hiccup and the damned sysadmin had turned everything to "Locked"
for some obscure reason ("it's more secure.") I guess he was
right: it's certainly more "secure" if no one can use the
computers. :-/
In article <unegfi$1497f$2@dont-email.me>, chrisq <devzero@nospam.com> wrote:
On 1/7/24 13:57, Dan Cross wrote:
Don't remember that at all. Not on SGI, Sun or HPUX, nor Ultrix, fwir.
Examples ?...
SCO, IIRC. https://www.tech-insider.org/unix/research/1997/0407.html
Lol. I would not touch anything SCO. Fwir, they exist purely to
litigate against others :-)...
This was not initially the case. SCO was once an engineering-driven company that was considered a seriously good place for programmers to work. They
had a huge programming and design staff, a hot tub available for technical staff, and some highly respectable products.
By 1997 this had started to change and SCO was starting to get taken over
by lawyers. Within a few years there was nothing left but lawyers and they had turned into a patent holding company.
But this was not originally the case and they are sorely missed.
On 07.01.2024 15:04, Dan Cross wrote:
AIX licensing was a pain.
AIX base OS doesn't need license keys or PAKs or such.
Third party software might, but that's not AIX specific.
On Sat, 6 Jan 2024 20:31:08 -0500, Arne Vajhøj wrote:
On 1/6/2024 12:22 AM, Lawrence D'Oliveiro wrote:
On Sat, 6 Jan 2024 00:10:26 -0500, Arne Vajhøj wrote:
Some like to blame MS for what happened. But the project execution
does not seem attractive to follow.
It saved money over all. That was one of the main points of the
exercise.
The bottom line was that the chosen Linux & OOo/LO strategy had 11
million Euro lower cost than the Windows & MSO strategy.
That is what “saving money” means, does it not.
But the assumption was that staff and end user training cost was the
same for doing the switch as for just upgrading the MS solution.
Given the major, disruptive changes that tend to happen between versions
of Microsoft’s software, that kind of thing sounds entirely reasonable. Particularly since you have more control over UI changes on the Linux
side. They created their own “LiMux” distro, as I recall, as part of the implementation.
And the software creation including the approx. 65000 lines of code
(mostly Java) for WollMux is set to zero.
Again, presumably just equivalent to similar software development that
would have had to be done on Windows anyway.
On Sat, 6 Jan 2024 20:00:50 -0500, Arne Vajhøj wrote:
On 1/6/2024 5:23 PM, Lawrence D'Oliveiro wrote:
On Sat, 6 Jan 2024 15:09:25 -0500, Arne Vajhøj wrote:
On 1/5/2024 11:52 PM, Lawrence D'Oliveiro wrote:
So that’s the second page done. I could keep going on, but do you
want to shortcut the process by pointing out where you think the
traps lie?
It becomes complex to maintain that process state in a VMS process
style aka across image activations.
Not sure how that’s relevant to the question about $GETJPI.
GETJPI retrieves that info, so that the info is correct per VMS semantics
is important for GETJPI, and VMS semantics are a bit tricky because of
the differences between VMS and *nix.
So which info do you think will cause trouble? If it wasn’t in the part of the list I had already addressed, then point out which list items will
cause the trouble.
In article <unccdr$ns66$4@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
There seems to be this persistent myth that Microsoft never breaks
old software in new Windows versions. In fact, Windows has become
so complex that it is impossible for them to avoid such breakage.
Their record remains pretty good for well-behaved software. I've been producing the same stuff for Windows since NT 3.51 days. The only time
I've had compatibility trouble was at Vista, where a bug we'd had for
years that did not provoke problems on XP or older started showing up.
And that was our own fault.
On Sun, 7 Jan 2024 02:59:17 +0000, Chris Townley wrote:
Even without VMS under linux, I have always wanted a dclsh.
With that clunky PIPE command every time you want to feed the output of
one process into the input of another?
The whole command-line concept on VMS is fundamentally flawed. Notice that
on *nix, the command line is not a single string, it is an array of
strings.
Sun had a problem:
- Solaris/SPARC servers were more expensive than Linux/x86-64
servers
- the applications running on Solaris/SPARC were typically not that
difficult to port to Linux
Asking for a premium without sufficient vendor lock-in is a bad
business case.
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
I remember hating it. Coming from a more "traditional" Unix
background, it was ... weird. Printing, storage management,
man pages, the security infrastructure, all felt gratuitously
different for no real reason. You were almost forced to use
their menu-driven management tools, but as the USENIX button at
the time said, "SMIT happens." It all felt very big-M
"Mainframe" inspired. The compilers were very good, and the
machines were fast, but the developer tools weren't bundled and
I remembered fighting a lot of third-party software to get it to
compile and run properly.
Having started on OS/360, it seemed very much a throwback to that kind of
environment to me. Many of the AIX things were weird but were very welcome
in a mainframe world, like real batch queue management. The automated
management was awful for somebody running a single server but I think it
might have been a good thing for somebody running hundreds of them because
it did give a sort of central management years before puppet or ansible.
That was all weird because, on the 6150 ("RT") machines they had
offered a very nice version of 4.3BSD Tahoe plus NFS to the
academic community; clearly, people at IBM knew how to "do" Unix
right.
AIX wasn't designed with Unix users in mind. I think AIX was designed to
make things easier for people who were coming from the AS/400 or System/34
world.
Weirdest for me was the lack of a real console. There was a
3-digit 7-segment LED display that would cycle through various
numbers as the system booted up; things that would have been
emitted to a serial port on a VAX (or even a Sun) were instead
represented by random collections of digits, and there was a
book you had to look at to see what was going on if something
hung. Something like "371" was "fsck failed on /usr." (I don't
recall if that was the exact code). Then there was the damned
key, where the system wouldn't boot if it was in the "locked"
position. Which sucked if the machine crashed for some random
reason. I walked into a lab one day and the entire network was
down because all the machines had crashed over some network
hiccup and the damned sysadmin had turned everything to "Locked"
for some obscure reason ("it's more secure.") I guess he was
right: it's certainly more "secure" if no one can use the
computers. :-/
Again, I think this was because the intention was to run a million
workstations with a single admin. If something goes wrong to prevent
booting, you just swap the machine out with a new one and have your
on-site IBM FE fix it.
In article <unehno$14f82$1@dont-email.me>, arne@vajhoej.dk (Arne Vajhøj) wrote:
Sun had a problem:
- Solaris/SPARC servers were more expensive than Linux/x86-64
servers
- the applications running on Solaris/SPARC were typically not that
difficult to port to Linux
Asking for a premium without sufficient vendor lock-in is a bad
business case.
Their responses were also not that great:
They open-sourced their OS in the belief that this would "reduce
development costs" as Linux people switched to working on Solaris. This
didn't happen to any noticeable extent. The open-sourcing part created
lots of work for expensive lawyers and slowed software development.
Cut back their hardware development, since it was expensive, making their
systems even less competitive.
They ended up selling themselves to Oracle, of course. Oracle's plan was
vertical integration: tuning up SPARC and Solaris hardware for Oracle
database so they had a price-performance advantage on their own hardware.
A great plan, except that the tuning had already been done and there was
no unrealised performance available.
In article <uneg09$1497f$1@dont-email.me>, chrisq <devzero@nospam.com> wrote:
On 1/7/24 14:04, Dan Cross wrote:
In article <une6iq$12vd9$1@dont-email.me>, chrisq <devzero@nospam.com> wrote:
On 1/6/24 23:42, Dan Cross wrote:
[snip]
Or FreeBSD. Or OpenBSD.
Been running FreeBSD for years now. Works out of the box on various
architectures and a base install takes around 20 minutes. Ditched
Linux as it became more bloated and especially, the systemd trainwreck,
which I saw as a power grab by RedHat. Gross amount of complexity added
for no good reason. Having said that, have Suse and xubuntu installed
on a couple of machines, for software compatibility testing reasons.
Always liked Suse Linux in the past, but again systemd, the disease
that has infected so many Linux distros.
As for licensing, and having been around many vendor's unix offerings
for decades, the only onerous licensing was associated with third
party apps, where a license manager needed to be installed to run
the app. Embedded C cross compilers, real time os, and tools, for
example.
AIX licensing was a pain.
A single example :-).
Well, yes, but also DG, HP, etc. SGI and Sun seemed to do it
right, but then I was on the technical side and didn't have to
worry too much about the business side of folks who were keeping
track of licenses, etc.
I remember hating it. Coming from a more "traditional" Unix
background, it was ... weird. Printing, storage management,
man pages, the security infrastructure, all felt gratuitously
different for no real reason. You were almost forced to use
their menu-driven management tools, but as the USENIX button at
the time said, "SMIT happens." It all felt very big-M
"Mainframe" inspired. The compilers were very good, and the
machines were fast, but the developer tools weren't bundled and
I remembered fighting a lot of third-party software to get it to
compile and run properly.
That was all weird because, on the 6150 ("RT") machines they had
offered a very nice version of 4.3BSD Tahoe plus NFS to the
academic community; clearly, people at IBM knew how to "do" Unix
right.
Weirdest for me was the lack of a real console. There was a
3-digit 7-segment LED display that would cycle through various
numbers as the system booted up; things that would have been
emitted to a serial port on a VAX (or even a Sun) were instead
represented by random collections of digits, and there was a
book you had to look at to see what was going on if something
hung. Something like "371" was "fsck failed on /usr." (I don't
recall if that was the exact code). Then there was the damned
key, where the system wouldn't boot if it was in the "locked"
position. Which sucked if the machine crashed for some random
reason. I walked into a lab one day and the entire network was
down because all the machines had crashed over some network
hiccup and the damned sysadmin had turned everything to "Locked"
for some obscure reason ("it's more secure.") I guess he was
right: it's certainly more "secure" if no one can use the
computers. :-/
Ha, yeah, my SPARC hardware down in the basement hasn't been
turned on in years: it's too expensive to run.
The trend had already started before that, I'm afraid. A lot of
former Sun people I know acknowledged that they tried to stick
with SPARC as a differentiator way longer than they should have,
and that they should have embraced x86 much earlier than they
did. They had a head-start with the Roadrunner, but they gave
up. Had they stayed with it, perhaps life would have been
different.
On Sun, 2024-01-07 at 02:59 +0000, Chris Townley wrote:
Even without VMS under linux, I have always wanted a dclsh. Remember
that wonderful, very limited PCDCL? I wrote scripts with that to do
things I couldn't do in a PC batch file. In a production environment.
Not that I need it now - I always write scripts in ksh, despite
normally using bash under linux
I believe someone has tried to do that but my memory might be flawed.
The command line is not a single string on VMS.
All the items in the list were, you said, maintained in the compatibility
layer. The VMS process model and the Linux process model are different. And
that difference impacts tracking this info.
I'd sqlite3 it.
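"I'd sqlite3 it" could look something like this: park the emulated per-process state in an SQLite table keyed by pid, so it can outlive exec() (image activations) and be read by a DCL-emulating parent. The path, schema, and item names are all invented for the sketch; a real layer would use an on-disk file rather than the in-memory default shown here:

```python
import os
import sqlite3

# Sketch of the "I'd sqlite3 it" suggestion: park emulated VMS process
# state in SQLite, keyed by pid, so it survives image activations and
# can be shared between cooperating processes.  Schema is invented.

def open_state_db(path=":memory:"):
    """Open (or create) the state database; a real layer would use a file."""
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS vms_state "
        "(pid INTEGER, item TEXT, value TEXT, PRIMARY KEY (pid, item))"
    )
    return db

def set_item(db, item, value):
    """Record one state item for the current process."""
    db.execute(
        "INSERT OR REPLACE INTO vms_state VALUES (?, ?, ?)",
        (os.getpid(), item, value),
    )
    db.commit()

def get_item(db, item):
    """Fetch one state item for the current process, or None."""
    row = db.execute(
        "SELECT value FROM vms_state WHERE pid = ? AND item = ?",
        (os.getpid(), item),
    ).fetchone()
    return row[0] if row else None
```

SQLite's own locking then stands in for the shared-memory synchronization a hand-rolled table would need.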
SCO, IIRC. https://www.tech-insider.org/unix/research/1997/0407.html
AT&T as a corporate entity never quite got how to do Unix ...
People often
forget that at one point, Microsoft was one of the biggest Unix
vendors on the planet, with Xenix.
I think this bears on VMS a bit today: VMS actually has some
really interesting technology in it ...
In article <uneqvj$15poq$1@dont-email.me>, arne@vajhoej.dk (Arne Vajhøj) wrote:
Sometimes one needs to download and install something.
Like Microsoft Visual C++ Redistributable version XXXX for x86/x64.
Yes, you usually need some of those. Any competently packaged software
should install them for you.
On Sun, 7 Jan 2024 13:22:37 -0500, Arne Vajhøj wrote:
All the items in the list were, you said, maintained in the compatibility
layer. The VMS process model and the Linux process model are different. And
that difference impacts tracking this info.
You asked “But what are they going to return when asked for an item that does not exist on Linux?” I think I showed a convincing answer to that question.
At least admit that I have answered that question, before trying to jump
onto an entirely different one.
Been running FreeBSD for years now. Works out of the box on various architectures and a base install takes around 20 minutes.
Ditched Linux
as it became more bloated and especially, the systemd trainwreck,
which I saw as a power grab by Red[H]at. Gross amount of complexity
added for no good reason.
With Sun, the os came with the machine and you could do more or less
what you wanted to do with it.
On 1/6/2024 12:22 AM, Lawrence D'Oliveiro wrote:
On Sat, 6 Jan 2024 00:10:26 -0500, Arne Vajhøj wrote:
I don't know the details. Don't want to either. However:
Some like to blame MS for what happened. But the project execution
does not seem attractive to follow.
It saved money over all. That was one of the main points of the
exercise.
One can be penny wise and dollar foolish ...
Saving in one location, but paying for it in another, ...
Unless hard evidence to the contrary ...
In article <memo.20240107190811.16260s@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
Cut back their hardware development, since it was expensive, making their
systems even less competitive.
Yes. They really missed the boat on x86.
In article <undkh0$10jff$2@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Speculation? Developing a whole entire Linux distro can actually be done
on a shoestring.
Supported over 10 years?
Ah SunOS. In so many ways, the Unix par excellence.
But this was not originally the case and they are sorely missed.
On Sun, 7 Jan 2024 13:21:05 -0500, Arne Vajhøj wrote:
Unless hard evidence to the contrary ...
The report from the city council itself, which found a net gain in price/benefits.
On Sun, 7 Jan 2024 12:48:25 +0000, chrisq wrote:
Been running FreeBSD for years now, Works out of the box on various
architectures and a base install takes around 20 minutes.
The BSDs are a good illustration that the health of an open-source project doesn’t depend on how many users it has, but on the strength of contributions from the community.
Having said that, I am mystified and disappointed by the amount of fragmentation in the BSD world. There are maybe half a dozen BSD variants still in active use, and maybe 50 times that number of Linux distros. Yet
it is easier to move between Linux distros than it is to move between BSD variants.
So a couple of guys set themselves the job of recompiling the whole
of Debian from source, optimized for the Raspberry Pi. As I recall,
the bulk of the job took them 6 weeks. In their own spare time.
They called the result _Raspbian_. You may have heard of it. Though
I think the Foundation has now taken it over and called it
_Raspberry Pi OS_.
On 1/7/2024 2:47 PM, Dan Cross wrote:
In article <memo.20240107190811.16260s@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
Cut back their hardware development, since it was expensive, making their
systems even less competitive.
Yes. They really missed the boat on x86.
Solaris has been available for x86 since 1993.
But I don't think there was that much customer
interest in the x86 and later x86-64 version of Solaris.
And there was not much money in it for Sun, because
the system manufacturer (IBM/HP/Dell/whoever) would get most
of the money.
So few customers for a product that the vendor preferred
customers did not pick.
In article <unf485$174pb$5@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
So a couple of guys set themselves the job of recompiling the whole
of Debian from source, optimized for the Raspberry Pi. As I recall,
the bulk of the job took them 6 weeks. In their own spare time.
They called the result _Raspbian_. You may have heard of it. Though
I think the Foundation has now taken it over and called it
_Raspberry Pi OS_.
The two original guys released their first version in July 2012. The
Foundation released their version in September 2013, 14 months later. The
Foundation isn't a large and rich company, but it is more than a
shoestring operation.
<https://en.wikipedia.org/wiki/Raspberry_Pi_OS#History>
On 1/7/2024 3:13 PM, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 13:22:37 -0500, Arne Vajhøj wrote:
All the items in the list were, you said, maintained in the compatibility
layer. The VMS process model and the Linux process model are different. And
that difference impacts tracking this info.
You asked “But what are they going to return when asked for an item
that does not exist on Linux?” I think I showed a convincing answer to
that question.
At least admit that I have answered that question, before trying to
jump onto an entirely different one.
It is still the same question.
That info is for a single process on VMS but would be for multiple
processes on Linux.
Which makes the "just put the info in the compatibility layer" into a
complex problem.
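To make the difficulty concrete, a compatibility layer's $GETJPI-style lookup might look like the sketch below: some items map directly onto Linux, while items with no Linux counterpart (an image activation count, for instance, since exec() replaces the process image) need state the layer itself maintains. The mapping, defaults, and exact item-code spellings are assumptions, not any real port's behavior:

```python
import os
import getpass

def emulated_getjpi(item):
    """Hypothetical $GETJPI-style lookup in a Linux compatibility layer."""
    if item == "JPI$_PID":
        return os.getpid()                 # direct Linux equivalent
    if item == "JPI$_USERNAME":
        return getpass.getuser().upper()   # close enough for a sketch
    if item == "JPI$_IMAGECOUNT":
        # No Linux counterpart: exec() replaces the process image, so
        # the layer would have to track activations in state that
        # survives exec (shared memory, a file, a database).
        # Synthetic default used here purely for illustration.
        return 1
    raise KeyError(f"item {item!r} not emulated")
```

The first two branches are the easy part of the problem; the thread's argument is precisely that the third kind of item is where the complexity lives.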
On 1/7/2024 4:09 PM, Lawrence D'Oliveiro wrote:
... I am mystified and disappointed by the amount of
fragmentation in the BSD world. There are maybe half a dozen BSD
variants still in active use, and maybe 50 times that number of Linux
distros. Yet it is easier to move between Linux distros than it is to
move between BSD variants.
FreeBSD, NetBSD, OpenBSD and their derivatives share some origin and
share some code, but are more like different OS.
In article <memo.20240107213108.16260u@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <unf485$174pb$5@dont-email.me>, ldo@nz.invalid (Lawrence
D'Oliveiro) wrote:
So a couple of guys set themselves the job of recompiling the whole
of Debian from source, optimized for the Raspberry Pi. As I recall,
the bulk of the job took them 6 weeks. In their own spare time.
They called the result _Raspbian_. You may have heard of it. Though
I think the Foundation has now taken it over and called it
_Raspberry Pi OS_.
The two original guys released their first version in July 2012. The
Foundation released their version in September 2013, 14 months later. The
Foundation isn't a large and rich company, but it is more than a
shoestring operation.
<https://en.wikipedia.org/wiki/Raspberry_Pi_OS#History>
It's also much larger than what the Munich city council can
bring to bear.
- Dan C.
[snip]
I remember hating it. Coming from a more "traditional" Unix
background, it was ... weird. Printing, storage management,
man pages, the security infrastructure, all felt gratuitously
different for no real reason. You were almost forced to use
their menu-driven management tools, but as the USENIX button at
the time said, "SMIT happens." It all felt very big-M
"Mainframe" inspired. The compilers were very good, and the
machines were fast, but the developer tools weren't bundled and
I remember fighting a lot of third-party software to get it to
compile and run properly.
That was all weird because, on the 6150 ("RT") machines they had
offered a very nice version of 4.3BSD Tahoe plus NFS to the
academic community; clearly, people at IBM knew how to "do" Unix
right.
Weirdest for me was the lack of a real console. There was a
3-digit 7-segment LED display that would cycle through various
numbers as the system booted up; things that would have been
emitted to a serial port on a VAX (or even a Sun) were instead
represented by random collections of digits, and there was a
book you had to look at to see what was going on if something
hung. Something like "371" was "fsck failed on /usr." (I don't
recall if that was the exact code). Then there was the damned
key, where the system wouldn't boot if it was in the "locked"
position. Which sucked if the machine crashed for some random
reason. I walked into a lab one day and the entire network was
down because all the machines had crashed over some network
hiccup and the damned sysadmin had turned everything to "Locked"
for some obscure reason ("it's more secure.") I guess he was
right: it's certainly more "secure" if no one can use the
computers. :-/
The RS/6000 here (7043/150) has a BIOS console, updated from
ibmfiles.com. Functionality as one would expect, including
extensive diags.
Yes, there is a seven segment display showing
post and boot progress, but running headless, that could be a
real advantage.
Can't be sure about C compiler, but think there
is one. Package management seems good, so just a few minutes task
to install Gnu tools. Also, the file system layout is more or
less as expected. Perhaps the early machines were as you describe,
but not the one here.
You don't have to use the automated tools,
smit etc either, but they do have their uses. Pretty cool, fully
sorted system, in fact. Slightly different in some ways, but easy
to get to grips with and find way around.
Ha, yeah, my SPARC hardware down in the basement hasn't been
turned on in years: it's too expensive to run.
Yes, archive server only now, powered up as needed, but at 600+
watts with the drive arrays, before even pressing a key, totally
unworkable :-). Still have SS20 and more to play with though.
The trend had already started before that, I'm afraid. A lot of
former Sun people I know acknowledged that they tried to stick
with SPARC as a differentiator way longer than they should have,
and that they should have embraced x86 much earlier than they
did. They had a head-start with the Roadrunner, but they gave
up. Had they stayed with it, perhaps life would have been
different.
Sparc is still quite competitive technically, if you look at
the specs.
It's just that Oracle have given up on it. Solaris
always was a very secure and robust OS and
there are real
advantages from running a non X86 architecture, from a
security pov.
I remember seeing one of the early Sun X86 boxes, a 386i,
1988 ish. An awful machine, slow, expensive and underwhelming,
even compared to Sun 3.
Everyone hated it, but what they did produce were the X86 PC
on a card products, to plug in to sbus and pci Sparc machines.
Would run almost as a standalone pc, with all the sockets on
the card cage bracket, including vga video, keyboard, mouse,
soundcard etc, but were also highly integrated into the file
system and desktop at the Sunos / Solaris side. Quite
reasonable performance for the time and remember running
Lotus 123 and a raft of pc apps via one of the cards.
Perhaps a bit hard on DEC, as one of the things I most liked
about DEC was the integrity and attention to detail of the h/w
and s/w. Ran an Alpha 500/400 machine for many years. Tru64 unix,
a very solid OS, but bit by bit it became an orphan with little
open source support and no real future pathway. In many ways,
it's open source software that has made many platforms what they
are today, and their success, or not. If I'm still annoyed at
DEC, it's because they had some of the best product and minds in
the business from a technical pov, but squandered the lot on a
greedy and inflexible business model. Hubris is its own reward
etc.
The really oddball unix ime, was HP-UX, where nothing is where
one would expect to find it, and a whole shedload of oddly named
commands, like learning a new language.
Have a good new year anyway. Still some progress, with arm and
RISC-V likely to further upset the established order :-)...
On Sun, 7 Jan 2024 16:05:43 -0500, Arne Vajhøj wrote:
On 1/7/2024 3:13 PM, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 13:22:37 -0500, Arne Vajhøj wrote:
All the items in the list were ones you said are maintained in the
compatibility layer. The VMS process model and the Linux process model
are different. And that difference impacts tracking this info.
You asked “But what are they going to return when asked for an item
that does not exist on Linux?” I think I showed a convincing answer to
that question.
At least admit that I have answered that question, before trying to
jump onto an entirely different one.
It is still the same question.
That info is for a single process on VMS but would be for multiple
processes on Linux.
Which makes "just put the info in the compatibility layer" a
complex problem.
So you tried to suggest that the problem couldn’t be solved, then when I
offered up a solution, you don’t like the solution.
On 1/7/2024 4:11 PM, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 13:21:05 -0500, Arne Vajhøj wrote:
Unless hard evidence to the contrary ...
The report from the city council itself, that there was a net gain in
price/benefits.
Counting what they counted.
It is not necessarily a complete picture.
You didn't offer a solution.
It's also much larger than what the Munich city council can bring to
bear.
On Sun, 7 Jan 2024 16:22:45 -0500, Arne Vajhøj wrote:
On 1/7/2024 4:09 PM, Lawrence D'Oliveiro wrote:
... I am mystified and disappointed by the amount of
fragmentation in the BSD world. There are maybe half a dozen BSD
variants still in active use, and maybe 50 times that number of Linux
distros. Yet it is easier to move between Linux distros than it is to
move between BSD variants.
FreeBSD, NetBSD, OpenBSD and their derivatives share some origin and
share some code, but are more like different OS.
You are just repeating my point, without explaining *why* that is so. Why
are the Linux distros better able to keep it together?
My sense with the AIX tools was that they were trying to
insulate the system manager (or low-paid operators) from the
underlying system. If your use-case is a factory floor or a
business data processing shop, that may make some sense.
Again, I think this was because the intention was to run a million
workstations with a single admin. If something goes wrong to prevent
booting, you just swap the machine out with a new one and have your
on-site IBM FE fix it.
Yeah, but when a room full of them are crashed and won't boot,
one wonders whether the cure isn't worse than the disease. :-)
In article <memo.20240107190811.16260s@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <unehno$14f82$1@dont-email.me>, arne@vajhoej.dk (Arne Vajhøj)
wrote:
Sun had a problem:
- Solaris/SPARC servers were more expensive than Linux/x86-64
servers
- the applications running on Solaris/SPARC were typically not that
difficult to port to Linux
Asking for a premium without sufficient vendor lock-in is a bad
business case.
Their responses were also not that great:
They open-sourced their OS in the belief that this would "reduce
development costs" as Linux people switched to working on Solaris. This
didn't happen to any noticeable extent. The open-sourcing part created
lots of work for expensive lawyers and slowed software development.
I know many of the players involved with the creation of
OpenSolaris and I think they would dispute parts of this.
It is true that the hoped-for shift of open source people
working on Linux and BSD moving to working on OpenSolaris did
not materialize. But why?
I believe that, within OpenSolaris, the feeling was that Solaris
was "so obviously better" that people would just naturally
gravitate to the technically superior offering. However, by
this time, Linux was "good enough" and improving rapidly;
certainly at a pace greater than Solaris was improving. So
while there were parts of Solaris that were (and arguably still
are) technically superior to Linux, the feeling was that Linux
would overtake Sun in these areas soon anyway, so why switch?
Secondly, a lot of people were put off by the CDDL; Linux seemed
safer and more "free." Moreover, some parts of the operating
system remained closed, and you pretty much had to use SunPro
(at last at the beginning) to build things, and that was still
proprietary.
That said, while the initial open-sourcing was expensive, it is
not clear to me that the ongoing cost was particularly high.
Certainly, I have _never_ heard anyone who worked on it complain
about the ongoing cost. Re-closing the source code was highly
contentious.
I think the reason OpenSolaris failed was that it was just too
little, too late. There wasn't a good reason for people to
switch.
Cut back their hardware development, since it was expensive, making their
systems even less competitive.
Yes. They really missed the boat on x86.
They ended up selling themselves to Oracle, of course. Oracle's plan was
vertical integration: tuning up SPARC and Solaris hardware for Oracle
database so they had a price-performance advantage on their own hardware.
A great plan, except that the tuning had already been done and there was
no unrealised performance available.
Well, when the main reason your systems are sold is to run one
program specifically....
It's a shame.
- Dan C.
On Sun, 7 Jan 2024 16:22:45 -0500, Arne Vajhøj wrote:
On 1/7/2024 4:09 PM, Lawrence D'Oliveiro wrote:
... I am mystified and disappointed by the amount of
fragmentation in the BSD world. There are maybe half a dozen BSD
variants still in active use, and maybe 50 times that number of Linux
distros. Yet it is easier to move between Linux distros than it is to
move between BSD variants.
FreeBSD, NetBSD, OpenBSD and their derivatives share some origin and
share some code, but are more like different OS.
You are just repeating my point, without explaining *why* that is so. Why
are the Linux distros better able to keep it together?
On Sun, 7 Jan 2024 16:24:13 -0500, Arne Vajhøj wrote:
On 1/7/2024 4:11 PM, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 13:21:05 -0500, Arne Vajhøj wrote:
Unless hard evidence to the contrary ...
The report from the city council itself, that there was a net gain in
price/benefits.
Counting what they counted.
It is not necessarily a complete picture.
You sound like HP, trying to second-guess what conclusion the council
itself came to, just to come up with something favourable to Microsoft.
On Sun, 7 Jan 2024 16:05:43 -0500, Arne Vajhøj wrote:
On 1/7/2024 3:13 PM, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 13:22:37 -0500, Arne Vajhøj wrote:
All the items in the list were ones you said are maintained in the
compatibility layer. The VMS process model and the Linux process model
are different. And that difference impacts tracking this info.
You asked “But what are they going to return when asked for an item
that does not exist on Linux?” I think I showed a convincing answer to
that question.
At least admit that I have answered that question, before trying to
jump onto an entirely different one.
It is still the same question.
That info is for a single process on VMS but would be for multiple
processes on Linux.
Which makes "just put the info in the compatibility layer" a
complex problem.
So you tried to suggest that the problem couldn’t be solved, then when I offered up a solution, you don’t like the solution.
On 1/7/2024 7:10 PM, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 16:24:13 -0500, Arne Vajhøj wrote:
On 1/7/2024 4:11 PM, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 13:21:05 -0500, Arne Vajhøj wrote:
Unless hard evidence to the contrary ...
The report from the city council itself, that there was a net gain in
price/benefits.
Counting what they counted.
It is not necessarily a complete picture.
You sound like HP, trying to second-guess what conclusion the council
itself came to, just to come up with something favourable to Microsoft.
Not a guess.
The report clearly specified what they counted and what they did
not count.
And I noted that, due to what they did not count, no competent
CIO would use that report as evidence of anything.
On Sun, 7 Jan 2024 19:47:55 -0000 (UTC), Dan Cross wrote:
I believe that, within OpenSolaris, the feeling was that Solaris was "so
obviously better" that people would just naturally gravitate to the
technically superior offering.
I wonder if the open-sourcing happened before or after benchmarks showing
Linux outperforming Solaris on Sun’s own hardware ...
On Sun, 7 Jan 2024 23:50:32 -0000 (UTC), Dan Cross wrote:
You didn't offer a solution.
I listed solutions for a whole bunch of cases, then asked if I needed to
continue with even more detail.
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
My sense with the AIX tools was that they were trying to
insulate the system manager (or low-paid operators) from the
underlying system. If your use-case is a factory floor or a
business data processing shop, that may make some sense.
This is the IBM WAY. The system manager does this, the operations staff
does that, nobody is able to do anything else outside of what they are
supposed to do, and if you want something else done you call IBM and they
do it.
IBM is a services company. They sell hardware only so that they can sell
services for them. Their goal is to optimize your need for IBM services.
Again, I think this was because the intention was to run a million
workstations with a single admin. If something goes wrong to prevent
booting, you just swap the machine out with a new one and have your
on-site IBM FE fix it.
Yeah, but when a room full of them are crashed and won't boot,
one wonders whether the cure isn't worse than the disease. :-)
That depends whether you are an IBM shareholder or not.
In article <unfeg3$18ir4$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 7 Jan 2024 23:50:32 -0000 (UTC), Dan Cross wrote:
You didn't offer a solution.
I listed solutions for a whole bunch of cases, then asked if I needed to
continue with even more detail.
No, you _think_ that you did. It's quite common for people who are
ignorant of the actual technical issues to provide some simplistic
"solution" to a problem like that and then feel like they have addressed
the issue.
On 1/7/2024 6:45 PM, Lawrence D'Oliveiro wrote:
So you tried to suggest that the problem couldn’t be solved, then when
I offered up a solution, you don’t like the solution.
Because the proposed solution did not specify how it would solve the difficult part of the problem.
In article <memo.20240107190811.16260s@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
[snip]
They ended up selling themselves to Oracle, of course. Oracle's plan was
vertical integration: tuning up SPARC and Solaris hardware for Oracle
database so they had a price-performance advantage on their own hardware.
A great plan, except that the tuning had already been done and there was
no unrealised performance available.
Well, when the main reason your systems are sold is to run one
program specifically....
It's a shame.
Back in the day, Vax etc, software was optimised and fine tuned
to match the hw is ran on, so perhaps the Oracle "engineered
systems" idea was just updating that concept.
If you look at
the last Sparc release docs, theres's a lot of database and
high speed comms related included in hw. Far too expensive,
of course, and perhaps the last gasp of proprietary hardware and
os's, which can never hope to match the resources available to
the open source movement.
To be clear, the Solaris sold by Oracle is not the same as
OpenSolaris, which was independently developed from the original Sun
source release.
OpenIndiana is in constant development
and a free alternative to the Oracle offering. Also used as the
core of Joyent SmartOS and other systems.
Solaris 10 was a major milestone, with the introduction of the
ZFS filesystem, and lightweight virtualisation via Zones, or
containers, whatever they are called now. This was a decade or
more ago.
The FreeBSD clean room ZFS implementation eventually
became OpenZFS.
Finally settled on FreeBSD partly because that too had ZFS, a
similar lightweight virtualisation implementation, a very
disciplined development process and more. No systemd either.
All in all, a worthy successor to Solaris...
The Sun acquisition was based on the observation that many of Sun's
customers were buying Sun machines running Solaris primarily to run
Oracle's DBMS.
On Mon, 8 Jan 2024 01:58:29 -0000 (UTC), Dan Cross wrote:
In article <unfeg3$18ir4$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 7 Jan 2024 23:50:32 -0000 (UTC), Dan Cross wrote:
You didn't offer a solution.
I listed solutions for a whole bunch of cases, then asked if I needed to
continue with even more detail.
No, you _think_ that you did. It's quite common for people who are
ignorant of the actual technical issues to provide some simplistic
"solution" to a problem like that and then feel like they have addressed
the issue.
Feel free to point out where you _think_ the deficiencies in my outline of
a solution lie.
No systemd either.
All the Linux distros are based on the same kernel project,
same GNU core utils etc..
On Mon, 8 Jan 2024 02:18:22 -0000 (UTC), Dan Cross wrote:
The Sun acquisition was based on the observation that many of
Sun's customers were buying Sun machines running Solaris
primarily to run Oracle's DBMS.
Oracle bought Sun to get control of Java. Nothing more, nothing
less. After the acquisition, they essentially threw away everything
else.
On Sun, 7 Jan 2024 20:01:58 -0500, Arne Vajhøj wrote:
All the Linux distros are based on the same kernel project,
same GNU core utils etc..
Again: *why* are the BSDs not able to manage this?
In article <unfn5d$19hl0$4@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
On Mon, 8 Jan 2024 02:18:22 -0000 (UTC), Dan Cross wrote:
The Sun acquisition was based on the observation that many of
Sun's customers were buying Sun machines running Solaris
primarily to run Oracle's DBMS.
Oracle bought Sun to get control of Java. Nothing more, nothing
less. After the acquisition, they essentially threw away everything
else.
If that was the case, why did they leave Java as open source, rather than close it, as they did with Solaris?
On 1/7/2024 9:38 PM, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 20:01:58 -0500, Arne Vajhøj wrote:
All the Linux distros are based on the same kernel project,
same GNU core utils etc..
Again: *why* are the BSDs not able to manage this?
3 main BSD's. 1 Linux.
That report was paid for by Microsoft ...
Nah. You wouldn't understand it.
On 1/7/24 00:19, Dan Cross wrote:
In article <uncqas$pust$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 23:42:26 -0000 (UTC), Dan Cross wrote:
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to some
extra-cost "layered product", not to the core OS.
No, they applied to the OS as a whole.
Don't remember that at all. Not on SGI, Sun or HPUX, nor Ultrix, fwir.
Examples ?...
I've gone through 32bit x86, 32bit SPARC, 32bit PPC, 64bit SPARC,
64bit x86 over the years. Hopefully Gentoo will stick with me when I
boldly go where no man has ever returned from.
I think there's a pretty big difference between what you run on
your personal machine and a distro supported over thousands of
seats. I'm pretty sure you're not a maintainer of either Gentoo
or Slackware.
I'd sqlite3 it.
No need to save it to persistent storage.
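For what it's worth, sqlite3 supports exactly that: a ":memory:" database lives entirely in RAM and never touches persistent storage. A minimal Python sketch (the table and its contents are made up for illustration):

```python
import sqlite3

# ":memory:" keeps the whole database in RAM; nothing is written to
# disk, and it vanishes when the connection is closed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE procs (pid INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO procs VALUES (?, ?)",
                 [(1, "init"), (42, "editor")])
rows = conn.execute("SELECT name FROM procs ORDER BY pid").fetchall()
print(rows)   # [('init',), ('editor',)]
conn.close()
```

So "sqlite3 it" and "don't persist it" are not in tension; the same SQL works either way, and only the connect string changes.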
On Mon, 8 Jan 2024 00:31:33 +0000, chrisq wrote:
No systemd either.
Shame. With all their smarts, they never thought to implement something as clever as that.
On Sun, 2024-01-07 at 17:25 +0000, Dan Cross wrote:
I've gone through 32bit x86, 32bit SPARC, 32bit PPC, 64bit SPARC,
64bit x86 over the years. Hopefully Gentoo will stick with me when I
boldly go where no man has ever returned from.
I think there's a pretty big difference between what you run on
your personal machine and a distro supported over thousands of
seats. I'm pretty sure you're not a maintainer of either Gentoo
or Slackware.
You'd be surprised. I once was a maintainer on Gentoo.
On Sun, 7 Jan 2024 20:01:58 -0500, Arne Vajhøj wrote:
All the Linux distros are based on the same kernel project,
same GNU core utils etc..
Again: *why* are the BSDs not able to manage this?
On Mon, 8 Jan 2024 02:43:43 -0000 (UTC), Dan Cross wrote:
Nah. You wouldn't understand it.
There is a well-known saying among those of us who work in advanced
technical fields, that if you cannot explain something, that is a good
sign you don't understand it yourself.
"the year of the Linux desktop" is a known phrase (meme)
alluding to the permanent expectation from some Linux users
that Linux will take over the desktop market.
The official CEO blurb from VSI is on their LinkedIn page:
https://www.linkedin.com/company/vms-software-inc-/
On 2024-01-05, Arne Vajhøj <arne@vajhoej.dk> wrote:
"the year of the Linux desktop" is a known phrase (meme)
alluding to the permanent expectation from some Linux users
that Linux will take over the desktop market.
It bypassed the desktop and went directly to the handheld market,
where in fact it _did_ take over the market. :-)
Simon.
PS: ~200 non-spam messages over the weekend ? :-) Did you lot have nothing
to do this weekend ? :-)
On Sun, 2024-01-07 at 20:12 +0000, Lawrence D'Oliveiro wrote:
I'd sqlite3 it.
No need to save it to persistent storage.
Now I'm convinced you're a troll.
In article <ung50a$1eqq3$3@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Mon, 8 Jan 2024 02:43:43 -0000 (UTC), Dan Cross wrote:
Nah. You wouldn't understand it.
There is a well-known saying among those of us who work in advanced
technical fields, that if you cannot explain something, that is a good
sign you don't understand it yourself.
The irony is deafening.
Usually, when you make a technical suggestion, the onus is on
you to support it. "I'd do it in a compatibility layer" is not
supporting your idea; it's handwaving away all the details.
When pressed, if your response is to just double down and assert
that you already said how you would do it because you said you'd
do it in a compatibility layer without acknowledging any of the
complexities that would involve, then that means that _you_
don't understand what you are suggesting, or the problem that
was pointed out to you. This has been evident for some time.
I think we're done here. *Plonk*
- Dan C.
On 1/8/24 02:38, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 20:01:58 -0500, Arne Vajhøj wrote:
All the Linux distros are based on the same kernel project,
same GNU core utils etc..
Again: *why* are the BSDs not able to manage this?
...which begs the question: why is that so important?
Diversity improves the breed and enables better fit to
a problem, based on requirements. Rather than a one
size fits all, as promulgated by Microsoft. Our way
or the highway, is always a compromise...
Chris
On 08/01/2024 00:29, chrisq wrote:
On 1/7/24 00:19, Dan Cross wrote:
In article <uncqas$pust$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 23:42:26 -0000 (UTC), Dan Cross wrote:
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to some
extra-cost "layered product", not to the core OS.
No, they applied to the OS as a whole.
Don't remember that at all. Not on SGI, Sun or HPUX, nor Ultrix, fwir.
Examples ?...
HP-UX came with a two-login license by default, one for the console and
one to allow remote administration. :-)
Well, 7, 8, 9, and 10 did, I don't recall ever installing 11.x from
scratch.
You had to order more if you wanted them, and it would occasionally
throw the monkeys at HP, at least here in OZ: "Why do you need more user
logins to run Oracle??" "Because we don't just run Oracle, you luser."
Cheers,
Gary B-)
On 08/01/2024 00:29, chrisq wrote:
On 1/7/24 00:19, Dan Cross wrote:
In article <uncqas$pust$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 23:42:26 -0000 (UTC), Dan Cross wrote:
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to some
extra-cost "layered product", not to the core OS.
No, they applied to the OS as a whole.
Don't remember that at all. Not on SGI, Sun or HPUX, nor Ultrix, fwir.
Examples ?...
HP-UX came with a two-login license by default, one for the console and
one to allow remote administration. :-)
Well, 7, 8, 9, and 10 did, I don't recall ever installing 11.x from
scratch.
You had to order more if you wanted them, and it would occasionally
throw the monkeys at HP, at least here in OZ: "Why do you need more user
logins to run Oracle??" "Because we don't just run Oracle, you luser."
Cheers,
Gary B-)
LOL Always thought HP-UX was a bit weird, but it's the result of buying
another unix ws vendor, whose name I forget. Like some other unix ws
vendors in the early days, the overall structure and shell commands
hadn't settled down to a common core...
On Sun, 7 Jan 2024 17:07:57 -0000 (UTC), Dan Cross wrote:
AT&T as a corporate entity never quite got how to do Unix ...
Example: the 7th Edition added a prohibition on using the source code for >teaching purposes. Which is why the Lions Book had to be officially
withdrawn (though it continued to circulate via samizdat).
I think this just shows, Unix got popular in spite of its corporate
owners, rather than because of them.
On 1/4/2024 9:00 AM, Simon Clubley wrote:
On 2024-01-03, Slo <slovuj@gmail.com> wrote:
Darya will assume the role of CEO in June 2024. She joined VMS
Software as a technical writer and OpenVMS instructor in 2017 and
has since held key leadership positions in software and web
development, documentation, the Community Program and Marketing.
Darya brings extensive expertise in OpenVMS and the OpenVMS
ecosystem, coupled with deep commitment to shaping the platform's
long-term trajectory.
This move does not give me a good feeling.
She does not seem like a good fit for a CEO of a company providing
the types of mission-critical services that companies running VMS
rely on.
Even ignoring all the touchy-feeling stuff in her bio, someone who
has "successfully managed teams in documentation, marketing, web
development, and DevOps" as her main achievement does not seem to
be a good match for the needs of VMS users.
A CEO has to have managerial experience for obvious reasons. People
do not move directly from individual contributor to CEO.
She does not have an engineering background. But CEOs of tech
companies not having an engineering background is not unusual.
She has experience with the development process and the engineering
teams from her devops work.
She has experience with customers from marketing and sales work.
She seems more focused on new ways (CI/CD, web etc.) than on
how DEC did things 40 years ago.
She was working on the CL program, which I think turned out
very well for VSI - I suspect a lot of the bug reports come
from CL users.
Based on the VSI web page and LinkedIn profile, I think it looks
like a good choice.
On Sun, 7 Jan 2024 23:08:56 -0500, Arne Vajhøj wrote:
On 1/7/2024 9:38 PM, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 20:01:58 -0500, Arne Vajhøj wrote:
All the Linux distros are based on the same kernel project,
same GNU core utils etc..
Again: *why* are the BSDs not able to manage this?
3 main BSD's. 1 Linux.
More like 350 Linuxes (Linuces?). Why is that?
LOL Always thought HP-UX was a bit weird, but it's the result of buying
another unix ws vendor, whose name I forget. Like some other unix ws
vendors in the early days, the overall structure and shell commands
hadn't settled down to a common core...
I believe that you're thinking of Apollo, which had an operating
system called "Domain/OS" (originally "AEGIS") which was not
really Unix, though had a Unix "environment". It was done from
scratch and more closely resembled Multics in internal
structure.
Over time, HP ditched the underlying OS and went with their
System V derivative instead.
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
LOL Always thought HP-UX was a bit weird, but it's the result of buying
another unix ws vendor, whose name I forget. Like some other unix ws
vendors in the early days, the overall structure and shell commands
hadn't settled down to a common core...
I believe that you're thinking of Apollo, which had an operating
system called "Domain/OS" (originally "AEGIS") which was not
really Unix, though had a Unix "environment". It was done from
scratch and more closely resembled Multics in internal
structure.
It wasn't even a little bit Unix, but it was very cool, and it offered a
real distributed environment on workstations years before anyone else even thought about it.
On 1/8/2024 12:25 PM, Dan Cross wrote:
In article <unh8er$1jkbg$1@dont-email.me>, chrisq <devzero@nospam.com> wrote:
On 1/8/24 08:03, Gary R. Schmidt wrote:
On 08/01/2024 00:29, chrisq wrote:
On 1/7/24 00:19, Dan Cross wrote:HP-UX came with a two-login license by default, one for the console and >>>> one to allow remote administration. :-)
In article <uncqas$pust$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 23:42:26 -0000 (UTC), Dan Cross wrote:
I remember pretty specifically maximum user limits on versions of >>>>>>>> commercial Unix.
How would such limits be enforced? Presumably they only applied to some >>>>>>> extra-cost "layered product", not to the core OS.
No, they applied to the OS as a while.
Don't remember that at all. Not on SGI, Sun or HPUX, nor Ultrix, fwir. >>>>>
Examples ?...
Well, 7, 8, 9, and 10 did, I don't recall ever installing 11.x from
scratch.
You had to order more if you wanted them, and it would occasionally
throw the monkeys at HP, at least here in OZ: "Why do you need more user logins to run Oracle??" "Because we don't just run Oracle, you luser."
Cheers,
Gary B-)
LOL Always thought HP-UX was a bit weird, but it's the result of buying
another unix ws vendor, whose name I forget. Like some other unix ws
vendors in the early days, the overall structure and shell commands
hadn't settled down to a common core...
I believe that you're thinking of Apollo, which had an operating
system called "Domain/OS" (originally "AEGIS") which was not
really Unix, though had a Unix "environment". It was done from
scratch and more closely resembled Multics in internal
structure.
Ahhhh, Apollo. I used to have one at the house. Now that winter
is here I really miss it. Best room heater I ever had.
It wasn't so much over time, it was kind of abrupt. There were a few years where you could buy a 700 with either HP-UX or Domain,
On 2024-01-03 3:16 p.m., Slo wrote:
She joined VMS Software as a technical writer and OpenVMS instructor
in 2017 and has since held key leadership positions in software and
web development, documentation, the Community Program and Marketing.
I wonder if she was the narrator on this? https://www.youtube.com/watch?v=Rf53T7i8RGs
I believe that you're thinking of Apollo, which had an operating
system called "Domain/OS" (originally "AEGIS") which was not
really Unix, though had a Unix "environment". It was done from
scratch and more closely resembled Multics in internal
structure.
Over time, HP ditched the underlying OS and went with their
System V derivative instead.
An interesting tie-in to DEC was, when HP acquired Compaq, and
thus the DEC IP rights, whether they would wind down HP-UX and
go with Tru64 as their Unix offering (or the other way around).
Too bad that HP-UX is the one still standing. :-(
I believe that you're thinking of Apollo, which had an operating
system called "Domain/OS" (originally "AEGIS") which was not
really Unix, though had a Unix "environment". It was done from
scratch and more closely resembled Multics in internal
structure.
Yes, that was it.
Over time, HP ditched the underlying OS and went with their
System V derivative instead.
I think it was HP-Ux 10 or so, and even that seemed quite weird
in terms of the command set. That was on one of the HP tech
computers, 68030 cpu, from memory, with all the peripherals
linked by hpib cables. Very capable little machines though, for
instrument control work.
An interesting tie-in to DEC was, when HP acquired Compaq, and
thus the DEC IP rights, whether they would wind down HP-UX and
go with Tru64 as their Unix offering (or the other way around).
Too bad that HP-UX is the one still standing. :-(
Shame they backed the wrong horse. Tru64 may have been a bit
unpolished round the edges, but a far more straightforward
os to work with than Hp-Ux. Just needed a bit more work to
finish the job...
In article <unhuvu$1n15b$1@dont-email.me>, chrisq <devzero@nospam.com> wrote:
On 1/8/24 17:25, Dan Cross wrote:
I believe that you're thinking of Apollo, which had an operating
system called "Domain/OS" (originally "AEGIS") which was not
really Unix, though had a Unix "environment". It was done from
scratch and more closely resembled Multics in internal
structure.
Yes, that was it.
Over time, HP ditched the underlying OS and went with their
System V derivative instead.
I think it was HP-Ux 10 or so, and even that seemed quite weird
in terms of the command set. That was on one of the HP tech
computers, 68030 cpu, from memory, with all the peripherals
linked by hpib cables. Very capable little machines though, for
instrument control work.
HP-UX 10.0 was around 1995; they merged the server and
workstation versions of the OS. They'd been running on System V
for a long while by that point, though; I didn't realize it, but
apparently the first version in 1984 was based on System III.
Poor suckers. They switched to System V (I'm guessing SVR3) in
1985.
An interesting tie-in to DEC was, when HP acquired Compaq, and
thus the DEC IP rights, whether they would wind down HP-UX and
go with Tru64 as their Unix offering (or the other way around).
Too bad that HP-UX is the one still standing. :-(
Shame they backed the wrong horse. Tru64 may have been a bit
unpolished round the edges, but a far more straightforward
os to work with than Hp-Ux. Just needed a bit more work to
finish the job...
Yes. Tru64 was one of the best of the commercial Unixes. I
wish it had had a better run.
- Dan C.
On 1/8/24 02:36, Lawrence D'Oliveiro wrote:
On Mon, 8 Jan 2024 00:31:33 +0000, chrisq wrote:
No systemd either.
Shame. With all their smarts, they never thought to implement something
as clever as that.
Clever? Debatable. More like an impenetrable trainwreck.
A whole shedload of complexity, for no good reason, and goes against all principles of unix.
... via xml scripts, in the Sun case.
On 1/8/24 02:38, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 20:01:58 -0500, Arne Vajhøj wrote:
...which begs the question, why is that so important?
All the Linux distros are based on the same kernel project,
same GNU core utils etc..
Again: *why* are the BSDs not able to manage this?
Diversity improves the breed and enables better fit to a problem, based
on requirements.
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
I believe that you're thinking of Apollo, which had an operating system
called "Domain/OS" (originally "AEGIS") which was not really Unix,
though had a Unix "environment". It was done from scratch and more
closely resembled Multics in internal structure.
It wasn't even a little bit Unix, but it was very cool, and it offered a
real distributed environment on workstations years before anyone else
even thought about it.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 7 Jan 2024 23:08:56 -0500, Arne Vajhøj wrote:
3 main BSD's. 1 Linux.
More like 350 Linuxes (Linuces?). Why is that?
No. Maybe 350 different distributions, but only one kernel.
This is both the key to Linux's success but also a real problem for
people wanting to use it for embedded for multimedia applications.
On Sun, 2024-01-07 at 20:12 +0000, Lawrence D'Oliveiro wrote:
I'd sqlite3 it.
No need to save it to persistent storage.
Now I'm convinced you're a troll.
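For what it's worth, sqlite3 does support purely in-memory databases, so "no persistent storage" is at least expressible; a minimal Python sketch (the table and column names here are made up for illustration):

```python
import sqlite3

# ":memory:" gives a database that lives only for the life of the
# connection; nothing is ever written to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, msg TEXT)")
conn.execute("INSERT INTO events (msg) VALUES (?)", ("hello",))
rows = conn.execute("SELECT msg FROM events").fetchall()
print(rows)  # [('hello',)]
conn.close()
```

Whether that is a sensible design for the problem being argued about is a separate question, of course.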
In article <ung50a$1eqq3$3@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Mon, 8 Jan 2024 02:43:43 -0000 (UTC), Dan Cross wrote:
Nah. You wouldn't understand it.
There is a well-known saying among those of us who work in advanced technical fields, that if you cannot explain something, that is a good
sign you don't understand it yourself.
The irony is deafening.
Usually, when you make a technical suggestion, the onus is on you to
support it. "I'd do it in a compatibility layer" is not supporting your idea; it's handwaving away all the details.
I think we're done here. *Plonk*
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
LOL Always thought HP-UX was a bit weird, but it's the result of buying
another unix ws vendor, whose name I forget. Like some other unix ws
vendors in the early days, the overall structure and shell commands
hadn't settled down to a common core...
I believe that you're thinking of Apollo, which had an operating
system called "Domain/OS" (originally "AEGIS") which was not
really Unix, though had a Unix "environment". It was done from
scratch and more closely resembled Multics in internal
structure.
It wasn't even a little bit Unix, but it was very cool, and it offered a
real distributed environment on workstations years before anyone else even thought about it.
Over time, HP ditched the underlying OS and went with their
System V derivative instead.
It wasn't so much over time, it was kind of abrupt. There were a few years where you could buy a 700 with either HP-UX or Domain, but they really started
pushing HP-UX from the beginning. I don't think anyone at HP had any clue what Domain really was or how to sell it.
--scott
On 1/8/24 08:03, Gary R. Schmidt wrote:
On 08/01/2024 00:29, chrisq wrote:
On 1/7/24 00:19, Dan Cross wrote:
HP-UX came with a two-login license by default, one for the console
In article <uncqas$pust$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 6 Jan 2024 23:42:26 -0000 (UTC), Dan Cross wrote:
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to
some
extra-cost "layered product", not to the core OS.
No, they applied to the OS as a whole.
Don't remember that at all. Not on SGI, Sun or HPUX, nor Ultrix, fwir.
Examples ?...
and one to allow remote administration. :-)
Well, 7, 8, 9, and 10 did, I don't recall ever installing 11.x from
scratch.
You had to order more if you wanted them, and it would occasionally
throw the monkeys at HP, at least here in OZ: "Why do you need more
user logins to run Oracle??" "Because we don't just run Oracle, you
luser."
Cheers,
Gary B-)
LOL Always thought HP-UX was a bit weird, but it's the result of buying another unix ws vendor, whose name I forget. Like some other unix ws
vendors in the early days, the overall structure and shell commands
hadn't settled down to a common core...
Did they have any clue about what VMS was, or how to sell it?
I think there's a pretty big difference between what you run on
your personal machine and a distro supported over thousands of
seats. I'm pretty sure you're not a maintainer of either Gentoo
or Slackware.
You'd be surprised. I once was a maintainer on Gentoo.
Isn't anyone who submits a patch considered a Gentoo maintainer?
Kind of like being a "Debian Developer"?
No idea how accurate it is - I have never seen an Apollo computer.
An interesting tie-in to DEC was, when HP acquired Compaq, and
thus the DEC IP rights, whether they would wind down HP-UX and
go with Tru64 as their Unix offering (or the other way around).
Too bad that HP-UX is the one still standing. :-(
On Mon, 8 Jan 2024 12:13:37 +0000, chrisq wrote:
On 1/8/24 02:36, Lawrence D'Oliveiro wrote:
On Mon, 8 Jan 2024 00:31:33 +0000, chrisq wrote:
No systemd either.
Shame. With all their smarts, they never thought to implement something
as clever as that.
Clever? Debatable. More like an impenetrable trainwreck.
Remember, it became popular across a whole bunch of distros entirely
through its own merits, not because there was some dominant, faceless
MegaCorp pushing it on everybody.
On 1/8/2024 3:16 PM, Scott Dorsey wrote:
Dan Cross <cross@spitfire.i.gajendra.net> wrote:
LOL Always thought HP-UX was a bit weird, but it's the result of buying another unix ws vendor, whose name I forget. Like some other unix ws
vendors in the early days, the overall structure and shell commands
hadn't settled down to a common core...
I believe that you're thinking of Apollo, which had an operating
system called "Domain/OS" (originally "AEGIS") which was not
really Unix, though had a Unix "environment". It was done from
scratch and more closely resembled Multics in internal
structure.
It wasn't even a little bit Unix, but it was very cool, and it offered a
real distributed environment on workstations years before anyone else even thought about it.
Over time, HP ditched the underlying OS and went with their
System V derivative instead.
It wasn't so much over time, it was kind of abrupt. There were a few years where you could buy a 700 with either HP-UX or Domain, but they really started
pushing HP-UX from the beginning. I don't think anyone at HP had any clue what Domain really was or how to sell it.
Did they have any clue about what VMS was, or how to sell it?
In article <unia4a$1o881$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Remember, [systemd] became popular across a whole bunch of distros
entirely through its own merits, not because there was some dominant,
faceless MegaCorp pushing it on everybody.
Are you sure about that? Are you absolutely sure it wasn't a matter of
"We have to do this because Red Hat is doing it and we don't want our
distro to be too different?"
On Sun, 7 Jan 2024 13:16:19 -0500, Arne Vajhøj wrote:
The command line is not a single string on VMS.
Yes it is. It is passed to the program being activated as a single string buffer somewhere in P1 space. LIB$GET_FOREIGN returns a copy of this
string, and the CLD functions do their parsing on this string as well.
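To illustrate the contrast being drawn here: on Unix the shell tokenizes the command line before the program starts, whereas a VMS image can retrieve the whole command line as one string (e.g. via LIB$GET_FOREIGN). Python's `shlex.split()` mimics shell-style tokenization of such a single string (the example command below is made up):

```python
import shlex

# One command line as a single string, the way a VMS program would see it
# whole; shlex.split() performs the kind of tokenization a Unix shell does
# before exec(), including honoring quoted arguments.
line = 'copy "my file.txt" backup.txt'
args = shlex.split(line)
print(args)  # ['copy', 'my file.txt', 'backup.txt']
```

The point of the single-string design is that the parsing rules belong to the program (or to CLD definitions), not to the shell.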
No one I know uses MS Office anymore. Have a look at LibreOffice,
for a better experience. Free, and works on all the usual OSes as
well...
In article <uneqvj$15poq$1@dont-email.me>, arne@vajhoej.dk (Arne Vajhøj) wrote:
Sometimes one needs to download and install something.
Like Microsoft Visual C++ Redistributable version XXXX for x86/x64.
Yes, you usually need some of those. Any competently packaged software
should install them for you.
On 1/8/24 02:38, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 20:01:58 -0500, Arne Vajhøj wrote:
...which begs the question, why is that so important?
All the Linux distros are based on the same kernel project,
same GNU core utils etc..
Again: *why* are the BSDs not able to manage this?
Diversity improves the breed and enables better fit to a problem, based
on requirements.
On 2024-01-06, Neil Rieck <n.rieck@bell.net> wrote:
The official CEO blurb from VSI is on their LinkedIn page:
https://www.linkedin.com/company/vms-software-inc-/
Interesting. When I visited that page for a second time, it was now
forcing me to create an account and log in to continue. Yeah, right...
Yes, you usually need some of those. Any competently packaged
software should install them for you.
And no doubt if you have multiple things installed on the same
Windows system that need that library, you will end up with
multiple copies of it.
On Mon, 8 Jan 2024 12:22:22 +0000, chrisq wrote:
On 1/8/24 02:38, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 20:01:58 -0500, Arne Vajhøj wrote:
...which begs the question, why is that so important?
All the Linux distros are based on the same kernel project,
same GNU core utils etc..
Again: *why* are the BSDs not able to manage this?
Diversity improves the breed and enables better fit to a problem, based
on requirements.
Linux is able to manage diversity without fragmentation. Why not the BSDs?
In article <uo2h17$qsie$3@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Linux is able to manage diversity without fragmentation. Why not the
BSDs?
Because the BSD community is fine with fragmentation.
No, actually. These days the version management and library
compatibility actually work.
On 15 Jan 2024 16:39:52 -0000, Scott Dorsey wrote:
In article <uo2h17$qsie$3@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Linux is able to manage diversity without fragmentation. Why not the BSDs?
Because the BSD community is fine with fragmentation.
Think of it this way: there are maybe half a dozen BSD variants still in active development. There are something like 50× that number of Linux distros similarly under active development. Yet it is easier to move
between Linux distros than it is to move between BSD variants.
That’s what I mean by “diversity” versus “fragmentation”. Do you see “fragmentation” as an asset, not a liability?
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On 15 Jan 2024 16:39:52 -0000, Scott Dorsey wrote:
In article <uo2h17$qsie$3@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Linux is able to manage diversity without fragmentation. Why not the BSDs?
Because the BSD community is fine with fragmentation.
Think of it this way: there are maybe half a dozen BSD variants still in active development. There are something like 50× that number of Linux distros similarly under active development. Yet it is easier to move between Linux distros than it is to move between BSD variants.
Yes. This is fine. The Linux distros all have the same kernel. The BSD variants do not.
80% or maybe even 90% of what is in the Linux kernel is stuff that I have
no need for. Why should I have it on my machine? I want an OS that is intended for what I do, not for what everyone does. Linux drives me up
the wall because every time I turn around they are adding more stuff to it. Maybe it's stuff I want. Maybe it's stuff I have no need for. The kernel just keeps getting bigger and bigger and the more stuff running in kernel space the less secure the system is going to be in the long term.
As for the kernel, Linux, unlike VMS, has a flexible and functioning
kernel modules system. Hopefully, at least some of that extra
functionality will be in kernel modules that are never installed if
you don't select those packages during installation.
There are plenty of good engineers at Raspberry
On Tue, 2024-01-16 at 13:13 +0000, Simon Clubley wrote:
As for the kernel, Linux, unlike VMS, has a flexible and functioning
kernel modules system. Hopefully, at least some of that extra
functionality will be in kernel modules that are never installed if
you don't select those packages during installation.
I would love to see loadable drivers/modules in VMS. It sure would make
it a lot easier to bring up new devices or new functionality.
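On the Linux side, the set of currently loaded modules is visible in /proc/modules; a minimal Python sketch of inspecting it (the file may be empty or absent inside containers or on non-Linux systems, hence the guard):

```python
from pathlib import Path

def loaded_modules(proc_path="/proc/modules"):
    # Each line of /proc/modules starts with the module name;
    # return just the names, or an empty list if the file is absent.
    p = Path(proc_path)
    if not p.exists():
        return []
    return [line.split()[0] for line in p.read_text().splitlines() if line.strip()]

print(loaded_modules()[:5])
```

Loading and unloading is then done at runtime with modprobe/rmmod, which is exactly the kind of flexibility being wished for on VMS.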
On Sun, 7 Jan 2024 23:37:34 +0000, Chris Townley wrote:
There are plenty of good engineers at Raspberry
And yet it was none of them that created Raspbian, to begin with.
As for the kernel, Linux, unlike VMS, has a flexible and functioning kernel modules system. Hopefully, at least some of that extra functionality will
be in kernel modules that are never installed if you don't select those packages during installation.
On 16/01/2024 21:20, Lawrence D'Oliveiro wrote:
On Sun, 7 Jan 2024 23:37:34 +0000, Chris Townley wrote:
There are plenty of good engineers at Raspberry
And yet it was none of them that created Raspbian, to begin with.
They don't use Raspbian any more. They use Debian, but add quite a few
extra bits and patches for the Raspberry Pi architecture.