On 3/23/2024 6:42 AM, scbs29 wrote:
Hello all
I seem to remember that there used to be a facility in Windows to run a program in what I think was called its 'own
memory space' so that if the program crashed or hung it would not bring down the rest of the machine.
I cannot find a reference to this in Windows 10 and was wondering if it was still available.
Can anyone help?
TIA
Let's start by asking how computers have worked over the years.
In the beginning, a computer did one thing at a time.
Like, my ZX81. You broke it, you bought it. Life was simple then.
Then multitasking was introduced. The first form was called "cooperative
multitasking", because each program would contact the scheduler and say
"OK, I've done a slice of work, give the CPU to another task". It was up
to the program designer to decide how long that slice should be. Maybe an
"Earth animation" program would run for 20 milliseconds and then give up
control, while a word processor would use 100-millisecond slices, because...
it did a lot of computing.
In terms of reliability, this scheme only worked if every program was
very well behaved. If just one program had a bug, it could bring down
the system, and all progress/work in the rest of the processes would
be lost.
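A toy sketch of the cooperative scheme, using Python generators as tasks (the task names and slice counts are invented for illustration). Each task must yield voluntarily; a task stuck in a loop that never yields would freeze every other task, which is exactly the fragility described above.

```python
from collections import deque

def task(name, slices, log):
    """A well-behaved cooperative task: one slice of work, then yield."""
    for i in range(slices):
        log.append(f"{name}:{i}")   # pretend this is 20-100 ms of work
        yield                       # voluntarily hand the CPU back

def run(tasks):
    """Round-robin scheduler: runs each task until it next yields."""
    ready = deque(tasks)
    while ready:
        t = ready.popleft()
        try:
            next(t)                 # give this task its slice
        except StopIteration:
            continue                # task finished; drop it
        ready.append(t)             # back of the queue

log = []
run([task("animation", 2, log), task("wordproc", 2, log)])
print(log)   # ['animation:0', 'wordproc:0', 'animation:1', 'wordproc:1']
```

Note that a task written as `while True: pass` (no yield) would never return control, and nothing in this scheme could take it away.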
Right around this time is when automatic checkpoint saves were
invented for word processors :-) I used to average one or two crashes
a day on cooperative multitasking systems. My UNIX box had no such
issue (because it was preemptive).
The next change was "Preemptive Multitasking". The scheduler breaks in,
like a barkeep, and says "OK, you've had enough, time for the next
process to run". This prevents any one program from monopolizing the
CPU and starving the others. It's up to the scheduler to allocate
slices fairly; it could tell the word processor to piss off after
only 20 milliseconds.
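The "barkeep" can be sketched with a Unix timer signal: the task below never yields, yet control is yanked away when its 20 ms slice expires. This is a user-space illustration only (Unix-specific, and the real mechanism lives in the kernel, driven by timer interrupts).

```python
import signal

class Preempted(Exception):
    """Raised by our stand-in 'scheduler' when a time slice expires."""

def on_tick(signum, frame):
    raise Preempted

# Arrange for SIGALRM after 20 ms -- the time slice.
signal.signal(signal.SIGALRM, on_tick)
signal.setitimer(signal.ITIMER_REAL, 0.02)

iterations = 0
try:
    while True:            # a misbehaving task that never yields
        iterations += 1
except Preempted:
    pass                   # control was taken away regardless

print(f"preempted after {iterations} iterations")
```

Under cooperative multitasking, that same loop would have hung the machine; here it merely loses its slice.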
There can still be issues related to resource allocation. One program
can do an excessive amount of disk I/O, which hurts the productivity
of a second process; disk I/O is a scarce resource.
The other resource is memory. A 32-bit process has a 4GB address
space, with a 2G:2G split between kernel addresses and user addresses.
To call a kernel routine, you need to use its kernel address. In
practical terms, that meant Photoshop on WinXP x86 could use about
1.8GB of RAM at most, as that's the userland address-space-limited
max allocation.
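The arithmetic behind that figure, as a quick check (the 1.8GB itself is an empirical number, not a hard limit):

```python
GiB = 1024 ** 3

address_space = 2 ** 32             # what 32-bit pointers can reach: 4 GiB
kernel_half = address_space // 2    # the 2G:2G split reserves half...
user_half = address_space - kernel_half  # ...leaving 2 GiB for userland

print(user_half // GiB)             # 2 GiB nominally available
# Loaded DLLs, thread stacks, and heap fragmentation eat into that
# nominal 2 GiB, leaving roughly 1.8 GiB of practically usable
# allocations -- hence the Photoshop figure above.
```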
When programs run on a 64-bit OS, a single program can use all of the
memory. The notion of "quotas" limits how much RAM any one program
gets. For example, early on in preemptive multitasking, we used to set
the per-program quota to 50% of main memory, and that was a good
compromise value. It allowed a "big" program to get, most of the time,
the big memory it needed, and it left a bit of memory for guests
logged into the machine to do things. We could set the quota once
and just leave it.
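On a Unix-like system, a per-process quota of this sort can be sketched with `setrlimit` (Windows uses a different mechanism, Job Objects; the 8 GiB machine size below is a made-up example):

```python
import resource

def quota_bytes(fraction, total_ram):
    """The classic rule: cap each program at a fraction of physical RAM."""
    return int(total_ram * fraction)

def apply_quota(fraction, total_ram):
    """Limit this process's virtual address space accordingly (Unix only)."""
    cap = quota_bytes(fraction, total_ram)
    resource.setrlimit(resource.RLIMIT_AS, (cap, cap))
    return cap

# The 50% compromise on a hypothetical 8 GiB machine:
print(quota_bytes(0.5, 8 * 1024 ** 3))   # 4294967296, i.e. a 4 GiB cap
# apply_quota(0.5, 8 * 1024 ** 3)        # would enforce it for real
```

Once the limit is in place, allocations beyond the cap simply fail inside that one process, instead of dragging the rest of the system down.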
*******
The end conclusion of all this chatter is that programs are pretty
well insulated from one another, but not completely. In some cases it
can be hard to find where the quota control is, or how to set it;
some OSes make this really easy.
64-bit Firefox at one time could crash the OS. It could cause a kernel
panic by leaving no memory for system usage. Later there seemed to be
some sort of 3GB limitation, which is not a "natural number" from
computer science; in a runaway situation it just seemed to stop there.
Even though Microsoft Notepad is available as a 64-bit program, the
address space for loading documents seems to still have a 32-bit-style
limitation. This operates as a natural quota and prevents Notepad from
using the entire memory. You can edit perhaps 900 million 16-bit-wide
characters in there (1.8GB divided by 2 bytes per character).
*******
There is the concept of the Sandbox, which Win10 and Win11 have as a
built-in feature. Part of the "excessive" Win11 minimum memory
requirement is a side effect of Sandbox.
https://learn.microsoft.com/en-us/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-overview
*******
There is the concept of the Virtual Machine. These can have "hardening
features", where the hosting software injects material into the Guest
with the purpose of preventing certain kinds of runaway behavior.
A program like VirtualBox (virtualbox.org) can be run on Win10 Home
or Win10 Pro. Win10 and Win11 Guests will run without activation, at
least until whatever time Microsoft decides they shouldn't. It's not
recommended to buy a license for a Guest OS for home usage, as
Microsoft Support will not help you if there is an activation/license
related issue: VMs do not have tight enough controls with regard to
licensing. Maybe the VM cannot see the NIC MAC, or the motherboard
serial number, or similar quantities.
*******
Summary: As others have indicated, a virtual machine environment like
VirtualBox is not patronizing and allows some degree of user control.
If you set the Guest memory limit to 4GB, for example, then the
software running in the environment cannot "use the entire computer".
You can see I'm a big fan.
[Picture]
https://i.postimg.cc/GpYFmzFH/Virtualbox-OS-hosting-for-Guests.gif
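Capping a guest's RAM, as in the summary above, is a single VBoxManage command; the sketch below just builds the command line ("TestVM" is a hypothetical VM name, and `--memory` takes megabytes):

```python
def vbox_memory_cap(vm_name, mem_mb):
    """Build the VBoxManage call that caps a guest's RAM at mem_mb MB."""
    return ["VBoxManage", "modifyvm", vm_name, "--memory", str(mem_mb)]

cmd = vbox_memory_cap("TestVM", 4096)   # the 4GB limit from the summary
print(" ".join(cmd))   # VBoxManage modifyvm TestVM --memory 4096
# To apply it for real (VM must be powered off):
#   import subprocess; subprocess.run(cmd, check=True)
```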
While Hyper-V is available on Pro (instead of VirtualBox), it's as
annoying as some other attempts (Gnome Boxes, perhaps). The very best
usability was Connectix Virtual PC (which Microsoft bought); VirtualBox
comes in second, and the others are lower down the list and a nuisance.
With VMware Workstation Player, I nearly lost control of a Win11
installation because of harebrained (really stupid) design decisions.
There are certainly lots of candidates, and each has its own "smell".
Paul
--- SoupGate-Win32 v1.05
* Origin: fsxNet Usenet Gateway (21:1/5)