Hi all,
I've been working on a series of posts documenting my experiences learning about, and programming for, OS 2200 using OS 2200 Express. I wasn't sure if anyone here would find it interesting or not, but I thought I'd post it just in case.
https://arcanesciences.com/os2200/
Feel free to yell at me if I'm misunderstanding or misstating anything - I'm not an expert in this system at all, but I'm getting deeper into it and liking what I'm seeing.
On 9/22/2022 2:35 PM, Kira Ash wrote:
Hi all,
I've been working on a series of posts documenting my experiences learning about, and programming for, OS 2200 using OS 2200 Express. I wasn't sure if anyone here would find it interesting or not, but I thought I'd post it just in case.
https://arcanesciences.com/os2200/
Feel free to yell at me if I'm misunderstanding or misstating anything - I'm not an expert in this system at all, but I'm getting deeper into it and liking what I'm seeing.
First, welcome to the 2200 community. We are a small, but generally
friendly group. If you have questions, someone here will almost
certainly help to answer them.
Second, I enjoyed your web site. It was interesting to see someone not familiar with 2200 stuff figuring it out, and comparing it to other systems.
Third, a few comments/pointers, etc.
A) You seem to be under the impression that all files are program
files. This is not true. While all program files are files to the file system, not all files are program files. Many (and on some systems
most) files are simply files, with no internal structure other than what
the application gives them. These are typically what one deals with
when a program does I/O. A program file is simply a file with a
particular well defined structure within it. As you say, it is sort of
like a directory, but not exactly. It is definitely not an ISAM file.
B) When you talk about processor calls, you seem to be unaware of the defaults, especially for the output file. So for example if you have
@UC File1.Element1, File2.Element2
it will behave as you indicate. But if you leave off File2 and just have .Element2, it will default to the same file as specified in spec1.
The same rules apply to element names; that is, File2. without specifying an element name will default to Element1. But best of all, if you leave out spec2 entirely, i.e. @UC File1.Element1, spec2 will default to the same name as spec1. You might think that the output would overwrite the input file, but the program file allows you to have a source (symbolic) and an object (binary) element with the same name. If you do this and then do an @PRT,t of the file, it will show both elements. (There are further options to @PRT,t to show only one or the other if you desire.)
Overall, this can reduce your cognitive load and reduce typing.
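The defaulting rules above can be sketched in a few lines; this is a Python illustration with an invented helper name, not real ECL, and it only models the spec1/spec2 behavior described in this post:

```python
def resolve_spec2(spec1, spec2=None):
    """Sketch of the ECL processor-call defaulting rules described above.

    spec1 is always "File.Element"; spec2 may omit the file, omit the
    element, or be left out entirely, in which case the missing parts
    default to the corresponding parts of spec1.
    """
    file1, elem1 = spec1.split(".")
    if spec2 is None:
        return spec1                  # @UC File1.Element1 -> output is File1.Element1
    if spec2.startswith("."):
        return file1 + spec2          # .Element2 -> File1.Element2
    if spec2.endswith("."):
        return spec2 + elem1          # File2. -> File2.Element1
    return spec2                      # fully specified, used as-is

print(resolve_spec2("File1.Element1"))               # File1.Element1
print(resolve_spec2("File1.Element1", ".Element2"))  # File1.Element2
print(resolve_spec2("File1.Element1", "File2."))     # File2.Element1
```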
C) You say an unadorned @RUN can start a demand run. This is true, but assuming you have set up the account, it can just as easily start a batch run.
D) I believe that the Hitachi, Fujitsu, and old Siemens mainframes were essentially IBM 360/370 clones (though with their own OS), whereas the
Bull and Unisys systems weren't. I think this is worth mentioning.
Anyway, thanks for doing this, and I look forward to reading future
sections of your web site.
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
On 9/22/2022 2:35 PM, Kira Ash wrote:
Hi all,
I've been working on a series of posts documenting my experiences learning about, and programming for, OS 2200 using OS 2200 Express. I wasn't sure if anyone here would find it interesting or not, but I thought I'd post it just in case.
https://arcanesciences.com/os2200/
Feel free to yell at me if I'm misunderstanding or misstating anything - I'm not an expert in this system at all, but I'm getting deeper into it and liking what I'm seeing.
D) I believe that the Hitachi, Fujitsu, and old Siemens mainframes were
essentially IBM 360/370 clones (though with their own OS), whereas the
Bull and Unisys systems weren't. I think this is worth mentioning.
Sperry had a line of 360 clone systems alongside the 1100/2200
line. https://en.wikipedia.org/wiki/UNIVAC_Series_90
On Thursday, September 22, 2022 at 9:19:51 PM UTC-7, Stephen Fuld wrote:
On 9/22/2022 2:35 PM, Kira Ash wrote:
Hi all,
I've been working on a series of posts documenting my experiences learning about, and programming for, OS 2200 using OS 2200 Express. I wasn't sure if anyone here would find it interesting or not, but I thought I'd post it just in case.
https://arcanesciences.com/os2200/
Feel free to yell at me if I'm misunderstanding or misstating anything - I'm not an expert in this system at all, but I'm getting deeper into it and liking what I'm seeing.
First, welcome to the 2200 community. We are a small, but generally friendly group. If you have questions, someone here will almost certainly help to answer them.
Second, I enjoyed your web site. It was interesting to see someone not familiar with 2200 stuff figuring it out, and comparing it to other systems.
I'm glad you enjoyed it! Thank you very much for your reply - as is obvious, I'm new to the system; I have a lot of experience with UNIX and a fair amount with Stratus VOS, which obviously affects how I learn things, but OS 2200 is certainly different from my usual area of expertise, and there's a lot to learn.
Third, a few comments/pointers, etc.
A) You seem to be under the impression that all files are program
files. This is not true. While all program files are files to the file
system, not all files are program files. Many (and on some systems
most) files are simply files, with no internal structure other than what
the application gives them. These are typically what one deals with
when a program does I/O. A program file is simply a file with a
particular well defined structure within it. As you say, it is sort of
like a directory, but not exactly. It is definitely not an ISAM file.
A non-program-file file has no trailing dot and no internal named elements, right?
If so, I actually started primarily working with those first - they just cluttered up my @prt,p output quickly, so I started keeping things contained in program files.
(On a related note, can I make @PRT show me a list of files with a certain qualifier? I didn't see that option in the ECL/FURPUR manual.)
The ISAM comparison was probably a bad one, as I see now. I was mostly just thinking "file with internal records identified by name", but it seems like that's not quite right for the elements of a program file.
B) When you talk about processor calls, you seem to be unaware of the
defaults, especially for the output file. So for example if you have
@UC File1.Element1, File2.Element2
It will behave as you indicate. But if you leave off File2, and just
have .element2, it will default to the same file as specified in spec1.
The same rules apply to element names, that is file2. without specifying
an element name will default to Element1. But best of all, if you leave
out spec2 entirely, i.e. @UC file1.element1, spec2 will default to the
same name as spec1. You might think that the output would overwrite the
input file, but the program file allows you to have a source (symbolic)
and an object (binary) element with the same name. If you do this, then
do an @PRT,t of the file, it will show both elements. (There are further
options to @PRT,t to show only one or the other if you desire.)
Overall, this can reduce your cognitive load, and reduce typing.
Oh, interesting! So would that mean that when running other commands, like @DELETE for instance, I'd provide a subtype specifier to specify which of the same-named elements I'm referring to?
Thanks again for your reply! I deeply appreciate feedback from someone who has spent quality time with this system. Hopefully I'll embarrass myself less next time around.
On Thu, 22 Sep 2022 14:35:59 -0700 (PDT), Kira Ash
<hpeint...@gmail.com> wrote:
Hi all,
I've been working on a series of posts documenting my experiences learning about, and programming for, OS 2200 using OS 2200 Express. I wasn't sure if anyone here would find it interesting or not, but I thought I'd post it just in case.
https://arcanesciences.com/os2200/
Feel free to yell at me if I'm misunderstanding or misstating anything - I'm not an expert in this system at all, but I'm getting deeper into it and liking what I'm seeing.
Kira
I will echo Mr. Fuld's welcome to you, and offer semi-random comments
on your fascinating posts.
1) I believe that there is a CSHELL program for the 2200 that provides
a decent emulation of a *nix shell. While I knew the authors when
they worked at Unisys, I don't know where the program can be found.
2) The 2200 File System is a flat file system. According to Ron Smith (co-author with George Gray of what I regard as the definitive history
of Unisys systems), there was a time when Univac considered replacing
the flat file system with a hierarchical file system. The customers
they discussed this with were not supportive of the idea, so it was
dropped.
3) Based upon your comment regarding read and write keys on files, I
infer that you are running on what is called a Fundamental Security
system. I tend to regard SECOPT1 (or higher) systems somewhat more
secure, and on those systems read and write keys are not meaningful
for most files.
4) The ECL (Exec Control Language) internally works with Fieldata,
which is a six bit character set. So ECL is not case-sensitive.
5) @PRT (with no options) is not something I typically use, as it
displays the contents of the entire Master File Directory. Not a big
deal on systems with small filesystems; a very big deal at some sites
I have supported with very large file systems.
6) PLUS is a descendant of JOVIAL (Jules' Own Version of the
International Algebraic Language), or so I've been told. It
suffers, from my perspective, from trying to be too many things for
too many people (for a while it ran on 2200s, Series 30, and Series 90 systems). While there are UCS flavors of COBOL and Fortran, I suspect
that the customer you cite probably uses FTN (ASCII Fortran). I could
be mistaken.
7) From a user program perspective, there are 48 general registers (partitioned into index registers, accumulators, and R-registers),
and 16 base registers (critical for addressing and security). While
negative zero is a thing, arithmetic operations never return a
negative zero. The existence of negative zero does cause some quirks
that most people encounter very rarely. Instructions can access sixth
words, quarter words, third words, half words, words, and double
words. I will observe that the complex memory security features seem
to be effective.
8) AT&T, before it was broken up, was a heavy user of SX 1100. There
are some oddball Exec ERs that were put in specifically to support
AT&T's needs.
I'm now dying to see Part 4: The Periodic File of Elements.
On Friday, September 23, 2022 at 10:07:00 PM UTC-7, David W Schroth wrote:
On Thu, 22 Sep 2022 14:35:59 -0700 (PDT), Kira Ash
<hpeint...@gmail.com> wrote:
Hi all,
I've been working on a series of posts documenting my experiences learning about, and programming for, OS 2200 using OS 2200 Express. I wasn't sure if anyone here would find it interesting or not, but I thought I'd post it just in case.
https://arcanesciences.com/os2200/
Feel free to yell at me if I'm misunderstanding or misstating anything - I'm not an expert in this system at all, but I'm getting deeper into it and liking what I'm seeing.
Kira
I will echo Mr. Fuld's welcome to you, and offer semi-random comments on your fascinating posts.
1) I believe that there is a CSHELL program for the 2200 that provides
a decent emulation of a *nix shell. While I knew the authors when
they worked at Unisys, I don't know where the program can be found.
2) The 2200 File System is a flat file system. According to Ron Smith
(co-author with George Gray of what I regard as the definitive history
of Unisys systems), there was a time when Univac considered replacing
the flat file system with a hierarchical file system. The customers
they discussed this with were not supportive of the idea, so it was
dropped.
3) Based upon your comment regarding read and write keys on files, I
infer that you are running on what is called a Fundamental Security
system. I tend to regard SECOPT1 (or higher) systems somewhat more
secure, and on those systems read and write keys are not meaningful
for most files.
4) The ECL (Exec Control Language) internally works with Fieldata,
which is a six bit character set. So ECL is not case-sensitive.
5) @PRT (with no options) is not something I typically use, as it
displays the contents of the entire Master File Directory. Not a big
deal on systems with small filesystems; a very big deal at some sites
I have supported with very large file systems.
6) PLUS is a descendant of JOVIAL (Jules' Own Version of the
International Algebraic Language), or so I've been told. It
suffers, from my perspective, from trying to be too many things for
too many people (for a while it ran on 2200s, Series 30, and Series 90
systems). While there are UCS flavors of COBOL and Fortran, I suspect
that the customer you cite probably uses FTN (ASCII Fortran). I could
be mistaken.
7) From a user program perspective, there are 48 general registers
(partitioned into index registers, accumulators, and R-registers),
and 16 base registers (critical for addressing and security). While
negative zero is a thing, arithmetic operations never return a
negative zero. The existence of negative zero does cause some quirks
that most people encounter very rarely. Instructions can access sixth
words, quarter words, third words, half words, words, and double
words. I will observe that the complex memory security features seem
to be effective.
8) AT&T, before it was broken up, was a heavy user of SX 1100. There
are some oddball Exec ERs that were put in specifically to support
AT&T's needs.
I'm now dying to see Part 4: The Periodic File of Elements.
Thank you very much for your thoughtful post, and I hope Part 4 is interesting to you when it's posted in a couple of weeks! I've been hitting the manuals on OS 2200 storage and the APIs for dealing with it - lot of interesting stuff in there.
For the Ron Smith and George Gray history, do you mean "Unisys Computers: An Introductory History" or "Sperry Rand's Third Generation Computers" in IEEE Annals?
CSHELL sounds like an interesting beast. Please let me know if you come across a copy somewhere, as I'd be very curious - though the truth is, I've become increasingly accustomed to doing things the 2200 way. :-)
Thanks again!
As Mr. Fuld and Mr. Schroth said, welcome.
Some piddly points of no real importance as the Really Important Stuff you probably need to know is coming from Mr. Fuld and Mr. Schroth (and probably Mr. Gunshannon Real Soon Now).
1. You mentioned the fact that Once Upon a Time, there used to be a Unix variant, SX-1100, that ran on top of OS1100 at Bell Labs (BELLCORE).
You also mentioned that it had a "poor reputation for performance".
I would like to point out that at the time (pre-1100/90 IIRC), I think that 1100 hardware was still running at only a few MIPS at best.
To paraphrase Samuel Johnson, "If you see a dog walking on his hind legs, it may not do so very well, but what you should be surprised at is that it is done at all."
2. As Mr. Schroth noted, the OS1100/OS2200 file system was (is?) a flat file system with program files just happening to have an internal structure that makes them look like they are a subdirectory.
But just in case you haven't run into it yet, there were (are?) also file cycles (F-cycles) so that there can be multiple instances of a file with the "same" name.
If you want to learn more about the joys of basic OS1100/OS2200 speak, I would recommend that you take a look at Volume 2 of one of the old Exec Programmer Reference Manuals like this one for Exec 36R2:
< http://bitsavers.org/pdf/univac/1100/exec/UP-4144.23_R36r2_Executive_System_Vol_2_Jan80.pdf >
Yes, I know it's older than dirt, but AFAIK, it's still applicable to today's OS2200 thanks to the joys of Backward Compatibility.
3. You mention file access being controlled via read/write keys.
Well, things may have changed, but that isn't (or wasn't) the only way file access could be controlled.
Once Upon a Time, a branch of OS1100 was hardened to be compliant with the B-1 Security Standard (the Orange Book?).
AFAIK, this is just about as good as it gets in terms of security.
(I think Microsoft at one time came up with something that was C-2 compliant.)
So at least at one time, access could be controlled by ACRs as well as read/write keys.
4. From your previous questions, I suspected that you were probably writing some kind of article, especially based on your question about what the most common programming languages are that programs running on OS2200 are written in.
To be honest, I wasn't quite sure what you were getting at, and to be quite honest, I'm not sure I do yet.
To try to be clear on my part, every customer in sight since close to the beginning of time has wanted computers that supported high-level languages so that they (the customers) could supposedly write machine-independent code ... at least before they started using the machine-specific language features that tied them in to one family of machines.
No doubt in an attempt to keep its customers "happy", Univac/Sperry Univac/Sperry/Unisys provides and supports, and will continue to provide and support high level languages like COBOL, Fortran, and C or whatever other languages become the Language DuJour so long as the Company exists.
But the fact of the matter is that a lot of code from the Good Old Days was and still is 1100 assembly language code, and is still presumably very much supported (even if its use is not really encouraged).
If that wasn't the case, then the Company could have gotten rid of what amounts to an entirely different machine (a "Basic Mode" machine) that's tacked onto a newer machine with a similar architecture (an "Extended Mode" machine), which the OS switches between by way of a mode bit in the Unisys version of a PSW (DB16 in the Designator Register AKA DR).
Also, in addition to old Dusty Deck code, I suspect that a lot of Transaction Processing development is still actively being done in assembly language.
Of course, if Mr. Fuld or Mr. Schroth say that I'm full of shit, I will defer to them and say that you should believe them, but my point is that I suspect that assembler is still very much alive and well as far as OS2200 is concerned and should be in your list.
Also, in addition to assembly, COBOL, Fortran, and C, there's also the matter of what amounts to another Univac/Sperry Univac/Sperry/Unisys language known as Mapper (which I see you reference in the second part of your series).
Mapper was renamed long, long ago to BL- or BS- something-something-something and was even ported to non-Univac/Sperry Univac/Sperry/Unisys machines.
Considering that the Company once sort of tailored one of its 1100 systems to primarily run Mapper (i.e. a variant of the 1100/50 was referred to as Mapper10), I suspect that Mapper should also be added to your list.
As for other languages that were customer written and not supported by the Company, there used to be a Pascal, a PL/1 variant (PLUM), and a LISP (written in Pascal IIRC).
While they aren't Company processors, presumably they stand a decent chance of still running on a Dorado.
5. I think you are confusing the Unisys 2200 emulator software that the Company uses on its Intel boxes with PS/2200.
Although it's been a few years now since I used PS/2200, I can definitely say that at the time I used it, it was *NOT* an emulator.
It couldn't be.
The host OS (Windows) prevented timekeeping at the 1-usec granularity that the original M-Series dayclock hardware could deliver, and so if you just let the simulated system run for a while, its sense of wall time quickly went out of sync with actual wall time, making PS/2200 at best a simulator rather than an emulator.
It's my understanding that the Company emulator basically runs on top of a Linux based hypervisor which presumably is in close communication with the guest OS.
Again, I will defer to Mr. Fuld and Mr. Schroth if they say something different.
6. I think you haven't quite gotten a handle on the finer points of the basic 2200 Series IP (AKA CPU) architecture.
So here are a few things that I'll throw at the wall for your consideration.
* No, not all registers are visible in the (virtual) address space.
The General Register Set (GRS) is divided into a User set and an Exec set, the visibility of which is determined by a mode bit (DB17).
Even so, some instructions specify a register using a GRS offset (by combining the J- and A-fields of an instruction, as in the case of the JGD instruction) and so are *NOT* affected by the mode bit, except to say that if you're a User and you try to access an Exec register, you won't get there from here but instead will take an interrupt.
Furthermore, in the Good Old Days, there were some registers that were specifically for Exec use only, and so you couldn't access them either, no matter how tricky you might get.
(They've largely/entirely disappeared.)
And I think indirect addressing sometimes forces a storage reference rather than GRS no matter what, but I could be wrong.
* There are basically three (3) kinds of registers in each of the two register sets in GRS: index registers (X-registers), accumulators (A-registers), and repeat counters (R-registers).
To me, that's not all that "complex".
I mean, the MC68000 used to have two (2) kinds of registers, index registers (A-registers) and accumulators (D-registers), and yet I don't recall the 68K architecture as being "complex".
* When you compute the effective address of a "storage operand" whose value is less than 0200 (that's octal 0200), the processor *MAY* refer to a GRS register depending on the type of instruction being executed.
Some instructions will *ALWAYS* reference storage no matter what, while some will reference a GRS register (assuming that you have the appropriate processor privilege to get there from here) if you're running in Basic Mode, or if you're running in Extended Mode and the B-register specified in the instruction is zero (AKA B0).
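As a rough model of that rule (the names, the dict-based storage, and the boolean flag are all mine, purely for illustration):

```python
GRS_SIZE = 0o200   # effective addresses below octal 0200 can land in GRS

def fetch_operand(ea, grs, storage, may_reference_grs):
    """Rough model of the rule above: some instructions treat small
    effective addresses as GRS locations, while others always go to
    storage. (Privilege checks for the Exec half of GRS are omitted.)"""
    if may_reference_grs and ea < GRS_SIZE:
        return grs[ea]
    return storage[ea]

grs = {0o005: "A-register contents"}
storage = {0o005: "storage word", 0o1000: "another storage word"}
print(fetch_operand(0o005, grs, storage, may_reference_grs=True))   # GRS hit
print(fetch_operand(0o005, grs, storage, may_reference_grs=False))  # storage
```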
* I would quibble about how you describe the address of a storage operand. Assuming that we're not talking about a literal/immediate value, it's specified by an offset (u) and an index value from an index (X-) register in Basic Mode, or by an offset (u), an index value from an index (X-) register, and a base value (from a Base [B-] register) in Extended Mode.
The size of the offset varies between Basic Mode and Extended Mode, and there's a mode bit (DB11) that controls the size of the index value if you're the Exec (DB14 || DB15 < 2).
In the Good Old Days, the effective address U was turned directly into an actual storage address (called an absolute address) via the addition of the base value from a base register, but since the introduction of M-Series (the 2200/900 and 2200/500) the absolute address specifies a location in a paged absolute address space and not an actual storage address.
Paging hardware translates the absolute address into an actual storage address (called a real address), although on the emulated hardware, it's the host hardware that does the address translation, which is kinda stupid.
IOW, in the Good Old Days, 1100/2200 hardware used two layers of address translation, while the newer hardware uses three layers of address translation.
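A toy version of the two-step path (effective -> absolute -> real) might look like this; the page size, the table layout, and the function names are all invented for illustration:

```python
PAGE = 4096  # invented page size, purely for illustration

def absolute_address(u, x_value, base):
    """Extended Mode sketch: offset + index value + base value gives a
    paged absolute address (an address in the absolute address space)."""
    return u + x_value + base

def real_address(absolute, page_table):
    """Sketch of the paging step: absolute address -> real storage address."""
    frame = page_table[absolute // PAGE]
    return frame * PAGE + absolute % PAGE

page_table = {0: 7, 1: 2}             # absolute page -> real frame
abs_addr = absolute_address(u=0o100, x_value=0, base=0)
print(real_address(abs_addr, page_table))   # lands in frame 7
```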
* As Mr. Schroth has noted, OS1100/2200 basically uses Fieldata (6-bit characters) internally.
The architecture just happens to support reading and writing sixth words which is to say 6-bits which is "nice".
But the architecture also supports reading and writing either third words (12-bits) or quarter words (9-bits) based on the setting of a mode bit (DB32).
I mention this because, to the extent that the hardware supports a character set other than Fieldata, that character is 9 bits wide, not the 7 bits of ASCII or the 8 bits of ANSI or one of the other 8-bit encodings.
This is something you need to be careful about if you're coming from an 8-bit byte-addressable machine and if you happen to be conditioned to think that your characters are basically either 7 or 8 bits wide in an 8-bit cell.
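Carving a 36-bit word into its partial words is just masking and shifting; here's a sketch where the field widths come from the architecture as described above, but the helper itself is mine:

```python
WORD_BITS = 36

def partial_words(word, width):
    """Split a 36-bit word into equal fields: width 6 gives sixth words
    (Fieldata-sized), 9 gives quarter words, 12 gives third words, and
    18 gives half words. Fields are returned most significant first."""
    assert WORD_BITS % width == 0
    mask = (1 << width) - 1
    count = WORD_BITS // width
    return [(word >> (WORD_BITS - width * (i + 1))) & mask
            for i in range(count)]

w = 0o123456701234                   # a 36-bit word, written in octal
print([oct(q) for q in partial_words(w, 9)])   # four 9-bit quarter words
print([oct(s) for s in partial_words(w, 6)])   # six 6-bit sixth words
```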
* Mr. Schroth said that arithmetic operations can't return a negative zero. Normally, you should always trust what Mr. Schroth says instead of what I say, but in this case, Mr. Schroth is mistaken, as any of the IP PRMs indicate in their discussion of negative zero.
In particular, (-0) + (-0) = (-0) and (-0) - (+0) = (-0).
But the only way for -0 to show up in such an operation is if a programmer does something to explicitly put it there in a register, such as (I think) doing a load negative immediate of a positive zero before the arithmetic operation (e.g. "LN,U A0,0").
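Since ones' complement hardware is rare these days, here's a toy model of the two zeros. The 36-bit word size is the 2200's; the helpers are mine, and this naive end-around-carry adder only illustrates the (-0) + (-0) identity above; real 2200 adders are built so that ordinary operand combinations don't produce -0:

```python
BITS = 36
MASK = (1 << BITS) - 1                # 0o777777777777

def oc_neg(x):
    """Ones' complement negation: flip all 36 bits."""
    return x ^ MASK

def oc_add(a, b):
    """Naive 36-bit ones' complement add with end-around carry."""
    s = a + b
    if s > MASK:                      # carry out of bit 35 wraps around
        s = (s & MASK) + 1
    return s & MASK

pos_zero = 0
neg_zero = oc_neg(0)                  # all ones: the -0 bit pattern
print(oct(neg_zero))                              # 0o777777777777
print(oc_add(neg_zero, neg_zero) == neg_zero)     # (-0) + (-0) = (-0): True
print(oc_add(pos_zero, pos_zero) == pos_zero)     # (+0) + (+0) = (+0): True
```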
7. Maybe I'm taking this all wrong, but I get the sense in the third part of your series that you're slamming OS2200 because you're having trouble coding up your Robot Finds Kitten app in C.
I think this is more than a little unfair/biased, as pretty much by definition there's no such thing as "standard I/O" that's part of the C language itself.
Yes, there are I/O libraries, but in effect you seem to be assuming that all systems that support C must also have equivalent I/O devices, which is not the case.
I doubt that you could get your Robot Finds Kitten app running on an Atmel 8-bit AVR based system even though I'm sure that there's a C compiler for it.
IOW, you seem to be trying to construct a toy application that might be trivial on an x86-64 box, but doing so on an OS2200 system doesn't pass the "So What?" test for me, except to the extent that you're deliberately trying to suggest that 1100/2200 Series systems are not just weird, but also severely lacking in capability.
My apologies in advance if this was not your intent.
As Mr. Fuld and Mr. Schroth said, welcome.started using the machine specific language features that tied them in to one family of machines.
Some piddly points of no real importance as the Really Important Stuff you probably need to know is coming from Mr. Fuld and Mr. Schroth (and probably Mr. Gunshannon Real Soon Now).
1. You mentioned the fact that Once Upon a Time, there used to be a Unix variant, SX-1100, that ran on top of OS1100 at Bell Labs (BELLCORE).
You also mentioned that it had a "poor reputation for performance".
I would like to point out that at the time (pre-1100/90 IIRC), I think that 1100 hardware was still running at only a few MIPS at best.
To paraphrase Samuel Johnson, "If you see a dog's walking on his hind legs, it may not do so very well, but what you should be surprised at is that it is done at all."
2. As Mr. Schroth noted, the OS1100/OS2200 file system was (is?) a flat file system with program files just happening to have an internal structure that makes them look like they are a subdirectory.
But just in case you haven't run into it yet, there were (are?) also file cycles (F-cycles) so that there can be multiple instances of a file with the "same" name.
If you want to learn more about the joys of basic OS1100/OS2200 speak, I would recommend that you take a look at Volume 2 of one of the old Exec Programmer Reference Manuals like this one for Exec 36R2:
< http://bitsavers.org/pdf/univac/1100/exec/UP-4144.23_R36r2_Executive_System_Vol_2_Jan80.pdf >
Yes, I know it's older than dirt, but AFAIK, it's still applicable to today's OS2200 thanks to the joys of Backward Compatibility.
3. You mention file access being controlled via read/write keys.
Well, things may have changed, but that isn't (or wasn't) the only way file access could be controlled.
Once Upon a Time, a branch of OS1100 was hardened to be compliant with the B-1 Security Standard (the Orange Book?).
AFAIK, this is just about as good as it gets in terms of security.
(I think Microsoft at one time came up with some that was C-2 compliant.)
So at least at one time, access could be controlled by ACRs as well as read/write keys.
4. From your previous questions, I suspected that you were probably writing some kind of article, especially based on your question about what were the most common programming language that programs running on OS2200 are written in.
To be honest, I wasn't quite sure what you were getting at and to be quite honest, I not sure I do yet.
To try to be clear on my part, every customer in sight since close to the beginning of time has wanted computers that supported high level languages so that they (the customers) could supposedly write machine independent code ... at least before they
No doubt in an attempt to keep its customers "happy", Univac/Sperry Univac/Sperry/Unisys provides and supports, and will continue to provide and support high level languages like COBOL, Fortran, and C or whatever other languages become the Language DuJour so long as the Company exists.
But the fact of the matter is that a lot of code from the Good Old Days which was and still is 1100 assembly language code, and is still presumably very much supported (even if its use is not really encouraged).between by way of a mode bit in the Unisys version of a PSW (DB16 in the Designator Register AKA DR).
If that wasn't the case, then the Company could have gotten rid of what amounts to an entirely different machine (a "Basic Mode" machine) that's tacked onto a newer machine with a similar architecture (an "Extended Mode" machine) which the OS switch
Also in addition to old Dusty Deck code, I suspect that a lot of Transaction Processing development is still actively being done in assembly language.your list.
Of course, if Mr. Fuld or Mr. Schroth say that I'm full of shit, I will defer to them and say that you should believe them, but my point is that I suspect that assembler is still very much alive and well as far as OS2200 is concerned and should be in
Also, in addition to assembly, COBOL, Fortran, and C, there's the matter of what amounts to another Univac/Sperry Univac/Sperry/Unisys language known as Mapper (which I see you reference in the second part of your series).
Mapper was renamed long, long ago to BL- or BS- something-something-something and was even ported to non-Univac/Sperry Univac/Sperry/Unisys machines.
Considering that the Company once sort of tailored one of its 1100 systems to primarily run Mapper (i.e. a variant of the 1100/50 was referred to as Mapper10), I suspect that Mapper should also be added to your list.
As for other languages that were customer written and not supported by the Company, there used to be a Pascal, a PL/1 variant (PLUM), and a LISP (written in Pascal IIRC).
While they aren't Company processors, presumably they stand a decent chance of still running on a Dorado.
5. I think you are confusing the Unisys 2200 emulator software that the Company uses on its Intel boxes with PS/2200.
Although it's been a few years now since I used PS/2200, I can definitely say that at the time I used it, it was *NOT* an emulator.
It couldn't be.
The host OS (Windows) prevented time keeping at the 1-usec granularity that the original M-Series dayclock hardware could deliver, and so if you just let the simulated system run for a while, its sense of wall time quickly went out of sync with actual wall time, making PS/2200 at best a simulator rather than an emulator.
It's my understanding that the Company emulator basically runs on top of a Linux based hypervisor, which presumably is in close communication with the guest OS.
Again, I will defer to Mr. Fuld and Mr. Schroth if they say something different.
6. I think you haven't quite gotten a handle on the finer points of the basic 2200 Series IP (AKA CPU) architecture.
So here are a few things that I'll throw at the wall for your consideration.
* No, not all registers are visible in the (virtual) address space.
The General Register Set (GRS) is divided into a User set and an Exec set, the visibility of which is determined by a mode bit (DB17).
Even so, some instructions specify a register using a GRS offset (by combining the J- and A-fields of an instruction, as in the case of the JGD instruction) and so are *NOT* affected by the mode bit, except to say that if you're a User and you try to access an Exec register, you won't get there from here but instead will take an interrupt.
Furthermore, in the Good Old Days, there were some registers that were specifically for Exec use only, and so you couldn't access them either, no matter how tricky you might get.
(They've largely/entirely disappeared.)
And I think the rules for indirect addressing always reference storage rather than GRS no matter what, but I could be wrong.
* There are basically three (3) kinds of registers in each of the two register sets in GRS: index registers (X-registers), accumulators (A-registers), and repeat counters (R-registers).
To me, that's not all that "complex".
I mean, the MC68000 used to have two (2) kinds of registers, address registers (A-registers) and data registers (D-registers), and yet I don't recall the 68K architecture as being "complex".
* When you compute the effective address of a "storage operand" whose value is less than 0200 (that's octal 0200), the processor *MAY* refer to a GRS register depending on the type of instruction being executed.
Some instructions will *ALWAYS* reference storage no matter what, while some will reference a GRS register (assuming that you have the appropriate processor privilege to get there from here) if you're running in Basic Mode, or if you're running in Extended Mode and the B-register specified in the instruction is zero (AKA B0).
* I would quibble about how you describe the address of a storage operand. Assuming that we're not talking about a literal/immediate value, it's specified by an offset (u) and an index value (from an index [X-] register) in Basic Mode, or an offset (u), an index value (from an index [X-] register), and a base value (from a base [B-] register) in Extended Mode.
The size of the offset varies between Basic Mode and Extended Mode, and there's a mode bit (DB11) that controls the size of the index value if you're the Exec (DB14 || DB15 < 2).
In the Good Old Days, the effective address U was turned directly into an actual storage address (called an absolute address) via the addition of the base value from a base register, but since the introduction of the M-Series (the 2200/900 and 2200/500), the absolute address specifies a location in a paged absolute address space and not an actual storage address.
Paging hardware translates the absolute address into an actual storage address (called a real address), although on the emulated hardware, it's the host hardware that does the address translation.
IOW, in the Good Old Days, 1100/2200 hardware used two layers of address translation, while the newer hardware uses three layers of address translation, which is kinda stupid.
* As Mr. Schroth has noted, OS1100/2200 basically uses Fieldata (6-bit characters) internally.
The architecture just happens to support reading and writing sixth words, which is to say 6-bit fields, which is "nice".
But the architecture also supports reading and writing either third words (12-bits) or quarter words (9-bits) based on the setting of a mode bit (DB32).
I mention this because to the extent that the hardware supports a character other than Fieldata, that character is 9-bits wide, not 7-bits (which is of course ASCII) or 8-bits (which is of course ANSI or one of the other 8-bit encodings).
This is something you need to be careful about if you're coming from an 8-bit byte-addressable machine and you happen to be conditioned to think that your characters are basically either 7-bits or 8-bits wide in an 8-bit cell.
* Mr. Schroth said that arithmetic operations can't return a negative zero. Normally, you should always trust what Mr. Schroth says instead of what I say, but in this case, Mr. Schroth is mistaken as any of the IP PRMs indicate in their discussion of negative zero.
In particular (-0) + (-0) = (-0) and (-0) - (+0) = (-0).
But the only way for -0 to show up in such an operation is if a programmer does something to explicitly put it there in a register, such as (I think) doing a load negative immediate of a positive zero before the arithmetic operation (e.g. "LN,U A0,0").
7. Maybe I'm taking this all wrong, but I get the sense in the third part of your series that you're slamming OS2200 because you're having trouble coding up your Robot Finds Kitten app in C.
I think this is more than a little unfair/biased, as pretty much by definition there's no such thing as "standard I/O" that's part of the C language itself.
Yes, there are I/O libraries, but in effect you seem to be assuming that all systems that support C must also have equivalent I/O devices, which is not the case.
I doubt that you could get your Robot Finds Kitten app running on an Atmel 8-bit AVR based system even though I'm sure that there's a C compiler for it.
IOW, you seem to be trying to construct a toy application that might be trivial on an x86-64 box, but doing so on an OS2200 system doesn't pass the "So What?" test for me, except to the extent that you're deliberately trying to suggest that 1100/2200 Series systems are not just weird, but also severely lacking in capability.
My apologies in advance if this was not your intent.
I'm terribly sorry if it somehow came across
like I was slamming OS 2200 in part 3. On the
contrary, I'm very much enjoying OS 2200; I
find it to be a well-thought-out platform,
with good interfaces to the user and the
programmer, and I don't know what I wrote
that gave the impression that I was attacking
the 2200 system.
On Monday, September 26, 2022 at 6:46:40 AM UTC-7, Kira Ash wrote:
I'm terribly sorry if it somehow came across
like I was slamming OS 2200 in part 3. On the
contrary, I'm very much enjoying OS 2200; I
find it to be a well-thought-out platform,
with good interfaces to the user and the
programmer, and I don't know what I wrote
that gave the impression that I was attacking
the 2200 system.
I think that the way you've written your articles makes it easy for people who want to look down on mainframes in general and OS2200 systems in particular to do so.
For example, I ran across your series by way of a bit in the episode 260 YouTube video on the Retrocomputing Roundtable channel
(< https://www.youtube.com/watch?v=JeK3E0JA8N4 > starting around timestamp [48:23]).
After all four of the Roundtable board members basically expressed complete cluelessness about mainframes and OS2200, one member -- a PROGRAMMER who I happen to follow for her machinist videos on her Blondi Hacks channel -- said this (starting around timestamp [52:13]):
"Well when people talked about that, like we've all heard the stories about the culture clash between, you know, mainframes and the smaller mini-computer systems, and how difficult that transition was for companies because, you know, the mainframe people had all the lab coats and all the prestige and so on.
And they cultivated this, you know, mystery of wizards in a tower running these systems."
"But a blog like this really, ah, really, brings that home, like, what a completely different world it was.
One of the earlier posts she said about how the, ah, system that she's running was like intentionally baroque or something ... how did she put it? ... like it reveled in how weird it was and how difficult it was to understand.
And yeah, this -- this line here I like: 'They are unapologetically strange systems -- integers are represented as ones' complement, and the machine word is 36-bits; the operating system is proudly baroque and more than a little intimidating.'"
She skips the following line where you said, "They're also way more fun than I expected" and instead went on to say:
"That's great writing, but I also think that it sells how strange these systems were.
And I almost won ... this also makes me wonder if there was a little bit of intentionality at some point here because engineers always joke about how hard, complicated, hard-to-understand systems create job security -- which is not really true but we like to joke about being true -- and that almost feels like that's what's happening here. I mean if you create a system that's so hard to understand, then you have to go back to the same set of people to operate it."
As it turns out, I appear to be blocked from making comments on this video (presumably because I pointed out that the claim she made in episode 0 that the presence of positive and negative zero in ones complement makes "basic arithmetic crazy complicated" was untrue).
Now I understand that you can't control the way people interpret what you've written, but with all due respect, I can't see how you weren't intending to slam OS2200 when you wrote the first paragraph of the third section of your series.
And later on, whatever "fun" you may have been having seemed to me to be hidden behind the fact that you were trying to overcome the obstacles of OS2200 software that you could entirely avoid by using something like ncurses or some other library known to Linux kind.
FWIW, as I read what you were doing, I kept thinking to myself, "If you're trying to send something to a Uniscope terminal as a test (which I assume is the output device you're using, either simulated or real), why is she messing around with C and DPS at all?"
I mean, if I wanted to send something to a screen just as a test, I'd probably use @FLIT.
In any event, like others, I look forward to your future installments.
On 9/26/2022 11:20 AM, Lewis Cole wrote:
< snip >
I haven't watched the video, but from what you said, I think the problem
is with the lady in the video, not with Kira. She doesn't bolster her
point when she clips out the part that exactly contradicts her position. :-)
As for apologetically strange, I think that is actually right. They certainly have nothing to apologize for, especially as OS 1100 predates Linux by decades. And yes, compared to virtually every other CPU
generally available, 1s complement arithmetic and 36 bit words are at
best "unusual" and I don't blink at someone calling them strange. Of
course, it might appear less strange if you understood the reasoning
behind those decisions, and the upward compatibility requirements for keeping them. (Reasons supplied on request.)
As for the meat of the issue, I can certainly see that someone steeped
in Unix would have a hard time "getting" the 2200 way of doing things.
I applaud Kira for taking on the challenge, and from what I can see,
doing remarkably well.
BTW, Kira, if you are comfortable saying, why did you embark on this journey? Was it an assignment of some sort, or just for fun? If for
fun, why did you pick the 2200 as opposed to some other system?
On Tuesday, September 27, 2022 at 3:24:11 PM UTC-7, Kira Ash wrote:
On Tuesday, September 27, 2022 at 2:34:59 PM UTC-7, Stephen Fuld wrote:
As for the meat of the issue, I can certainly see that someone steeped
in Unix would have a hard time "getting" the 2200 way of doing things.
I applaud Kira for taking on the challenge, and from what I can see,
doing remarkably well.
BTW, Kira, if you are comfortable saying, why did you embark on this
journey? Was it an assignment of some sort, or just for fun? If for
fun, why did you pick the 2200 as opposed to some other system?
Thank you for your kind reply. I was very worried that I had made some grievous error and was flirting with taking LPOS2200 down - I didn't
mean to upset anyone at all. [...]
[...] I have enjoyed my time with OS 2200 more
than I've expected, and certainly found it to be a likable system. And
while Lewis seems to think I'd rather be using ncurses, once I
figured out the mechanisms of DPS 2200, I'm impressed by how powerful
it is - mostly-transparent form validation, in particular, is a very
handy feature.
If I've done something wrong in all of that, I really am sorry.
I did not want to ruffle any feathers.
< snip >
I left the following comment on the video, and I hope the
Retrocomputing Roundtable hosts will read it:
< snip >
Again, I apologize if I did or said something wrong. This
is all new to me, and I freely admit I'm out of my depth,
but I'm trying to do right by the system.
On Tuesday, September 27, 2022 at 2:34:59 PM UTC-7, Stephen Fuld wrote:
< snip >
On 9/26/2022 11:20 AM, Lewis Cole wrote:
< snip >
FWIW, as I read what you were doing, I kept thinking to myself,
"If you're trying to send something to a Uniscope terminal as a
test (which I assume is the output device you're using either
simulated or real), why is she messing around with C and DPS at
all?"
I mean, if I wanted to send something to a screen just as a test,
I'd probably use @FLIT.
Talk about strange! While you could do as you say, Flit is even more of a departure from "typical" systems and has a learning curve all its own.
On 9/24/22 17:14, Kira Ash wrote:
On Friday, September 23, 2022 at 10:07:00 PM UTC-7, David W Schroth wrote:
On Thu, 22 Sep 2022 14:35:59 -0700 (PDT), Kira Ash
<hpeint...@gmail.com> wrote:
Hi all,
I've been working on a series of posts documenting my experiences learning about, and programming for, OS 2200 using OS 2200 Express. I wasn't sure if anyone here would find it interesting or not, but I thought I'd post it just in case.
https://arcanesciences.com/os2200/
Feel free to yell at me if I'm misunderstanding or misstating anything - I'm not an expert in this system at all, but I'm getting deeper into it and liking what I'm seeing.
Kira
I will echo Mr. Fuld's welcome to you, and offer semi-random comments on your fascinating posts.
1) I believe that there is a CSHELL program for the 2200 that provides
a decent emulation of a *nix shell. While I knew the authors when
they worked at Unisys, I don't know where the program can be found.
2) The 2200 File System is a flat file system. According to Ron Smith
(co-author with George Gray of what I regard as the definitive history
of Unisys systems), there was a time when Univac considered replacing
the flat file system with a hierarchical file system. The customers
they discussed this with were not supportive of the idea, so it was
dropped.
3) Based upon your comment regarding read and write keys on files, I
infer that you are running on what is called a Fundamental Security
system. I tend to regard SECOPT1 (or higher) systems somewhat more
secure, and on those systems read and write keys are not meaningful
for most files.
4) The ECL (Exec Control Language) internally works with Fieldata,
which is a six bit character set. So ECL is not case-sensitive.
5) @PRT (with no options) is not something I typically use, as it
displays the contents of the entire Master File Directory. Not a big
deal on systems with small filesystems; a very big deal at some sites
I have supported with very large file systems.
6) PLUS is a descendant of JOVIAL (Jules Own Version of the
International Algorithmic Language), or so I've been told. It
suffers, from my perspective, from trying to be too many things for
too many people (for a while it ran on 2200s, Series 30, and Series 90
systems). While there are UCS flavors of COBOL and Fortran, I suspect
that the customer you cite probably uses FTN (ASCII Fortran). I could
be mistaken.
7) From a user program perspective, there are 48 general registers
(partitioned into index registers, accumulators, and R-registers),
and 16 base registers (critical for addressing and security). While
negative zero is a thing, arithmetic operations never return a
negative zero. The existence of negative zero does cause some quirks
that most people encounter very rarely. Instructions can access sixth
words, quarter words, third words, half words, words, and double
words. I will observe that the complex memory security features seem
to be effective.
8) AT&T, before it was broken up, was a heavy user of SX 1100. There
are some oddball Exec ERs that were put in specifically to support
AT&T's needs.
I'm now dying to see Part 4: The Periodic File of Elements.
Thank you very much for your thoughtful post, and I hope Part 4 is interesting to you when it's posted in a couple of weeks! I've been hitting the manuals on OS 2200 storage and the APIs for dealing with it - lot of interesting stuff in there.
For the Ron Smith and George Gray history, do you mean "Unisys Computers: An Introductory History" or "Sperry Rand's Third Generation Computers" in IEEE Annals?
CSHELL sounds like an interesting beast. Please let me know if you come across a copy somewhere, as I'd be very curious - though the truth is, I've become increasingly accustomed to doing things the 2200 way. :-)
Thanks again!
Speaking of SHELLs. Do copies of the Software Tools Virtual Operating
System written for EXEC-8 on the 1100 still exist? Has it been ported
to the 2200?
bill
Lewis Cole wrote:
<snip>
As Mr. Fuld and Mr. Schroth said, welcome.
There are four levels of security,
0 - File access controlled by keys and public/private is by account or project-id (configurable in the Exec).
1 - ACRs, Clearance Levels and Owned files are introduced.
Public/private is by file ownership unless the file is unowned (the old
rules then apply), ACRs can be used to say who has which access to a
file, Clearance levels can be used to block all access to those with a
lower CL. A Security Officer can block/permit access to many ERs and privileges, either on a userid basis or to make the default generally (un)available.
I did not find Clearance levels helpful but the rest was heavily used.
2 - SECOPT1 + some bits which controlled file access via some control
bits. We had it but eventually decided it was of no use to us and we
dropped back to SECOPT1, something which required a JK13 with a local
modification to FAS along with a "script" to set up the ACRs for the
system files. Those control bits had a name but we dumped SECOPT2 25
years ago.
3 - SECOPT2 + some way of blocking unwelcome access to Common Banks,
there were options to scrub memory and to scrub files when they were
being reduced in size or deleted (obviously only the parts being
released were scrubbed). I can't remember much about this and we never
used it, SECOPT3 was the one with what will have been C-2 compliancy.
On Sat, 24 Sep 2022 18:28:20 -0400, Bill Gunshannon <bill.gunshannon@gmail.com> wrote:
< snip >
Speaking of SHELLs. Do copies of the Software Tools Virtual Operating
System written for EXEC-8 on the 1100 still exist? Has it been ported
to the 2200?
bill
Much to my surprise, you reference something 1100/2200 related that
I've never heard of. Please tell us more.
IMO, the major difference (from a user code standpoint) between an
1100 and a 2200 is that marketing thought 2200 sounded twice as sexy
as 1100.
Mileage almost certainly varies.
On Tuesday, September 27, 2022 at 9:49:41 PM UTC-7, Stephen Fuld wrote:
On 9/27/2022 7:30 PM, David W Schroth wrote:
IMO, the major difference (from a user code standpoint) between an
1100 and a 2200 is that marketing thought 2200 sounded twice as sexy
as 1100.
That, plus they essentially ran out of numbers, as they already had the 1100/10, /20, /40, /50, /60, /80, and /90. So they had to change something.
On 9/27/2022 3:24 PM, Kira Ash wrote:
On Tuesday, September 27, 2022 at 2:34:59 PM UTC-7, Stephen Fuld wrote:
< big snip >
As for apologetically
Typo. Of course I meant *un*apologetically
strange, I think that is actually right. They
certainly have nothing to apologize for, especially as OS 1100 predates Linux by decades. And yes, compared to virtually every other CPU
generally available, 1s complement arithmetic and 36 bit words are at
best "unusual" and I don't blink at someone calling them strange. Of
course, it might appear less strange if you understood the reasoning
behind those decisions, and the upward compatibility requirements for
keeping them. (Reasons supplied on request.)
As for the meat of the issue, I can certainly see that someone steeped
in Unix would have a hard time "getting" the 2200 way of doing things.
I applaud Kira for taking on the challenge, and from what I can see,
doing remarkably well.
BTW, Kira, if you are comfortable saying, why did you embark on this
journey? Was it an assignment of some sort, or just for fun? If for
fun, why did you pick the 2200 as opposed to some other system?
Thank you for your kind reply. I was very worried that I had made some grievous error and was flirting with taking LPOS2200 down - I didn't mean to upset anyone at all. I have enjoyed my time with OS 2200 more than I've expected, and certainly found
I've actually been interested in non-IBM mainframes for a while - I've been sporadically researching GCOS since I was in high school, over a decade ago, for instance - and I had read MCP manuals but never OS 2200. I had some concern that OS 2200 Express would be going away, since MCP Express already had, and figured I should get a license and learn it while I still had the opportunity. I had been trying to think up a good project to learn the APIs and tools for a while, and nothing really came to mind - until I remembered that the Robotfindskitten website included, on its list of ports, "Did you port rfk to Univac? Click here!" and I decided to take it more literally than the Robotfindskitten developers.
You are a bit strange - delightfully so! :-)
I never initially planned to write articles about it at all, but friends of mine - perhaps growing unhappy with walls of text on chat programs about "whoa, look at what I figured out how to do in OS 2200!" - suggested I publish what I was learning, and the process by which I was learning it.
I left the following: "Hiya! I wrote Let's Play OS 2200 and I'm glad you enjoyed it! The emulator used is PS/2200, provided as a component of Unisys's OS 2200 Express. It runs a modern version of OS 2200 (iirc one version behind what's shipping today) and is free, but requires you to request a license. As for OS 2200 itself - I really don't think it's complex for the purpose of job security or complexity in and of itself. It's complex because it's supporting complex workloads that have to do a lot of different things -"
If I've done something wrong in all of that, I really am sorry. I did not want to ruffle any feathers. As to the video, I take the assertion that OS 2200 is designed the way it is because of some kind of job-security-related plot to be completely ridiculous. As with any operating system, I know it evolved to be the way it is because of the combination of application requirements and technical and practical limitations, and I actually quite like what I've seen of it so far.
Well said!
As for 36-bit systems, I know they used to be common - GCOS 8 and OS 2200 are the only survivors, to the best of my knowledge, but there used to be the IBM 7xxx, PDP-10, and various others - but I've been somewhat unclear as to why; if I had to guess, I would assume that it's a natural word length for storing 6-bit character data, and that opting for word-addressing instead of byte-addressing increases the effective address space for a given word size.
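For what it's worth, the six-characters-per-word arithmetic is easy to sketch in a few lines of Python. This is just an illustration of the packing, with made-up 6-bit codes rather than actual Fieldata values:

```python
# Back-of-the-envelope sketch: a 36-bit word holds exactly six 6-bit
# characters. The codes below are invented for illustration; only the
# packing arithmetic matters.

BITS = 6
PER_WORD = 36 // BITS  # = 6 characters per word

def pack(codes):
    """Pack six 6-bit codes into one 36-bit word, first code in the high bits."""
    assert len(codes) == PER_WORD and all(0 <= c < 64 for c in codes)
    word = 0
    for c in codes:
        word = (word << BITS) | c
    return word

codes = [1, 2, 3, 4, 5, 6]
w = pack(codes)
assert w < (1 << 36)  # fits a 36-bit word exactly
# Unpacking high-to-low recovers the original six codes:
assert [(w >> (BITS * i)) & 0x3F for i in reversed(range(PER_WORD))] == codes
```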
While both of those are true, I believe the reason you give next is the
most convincing.
Someone also once told me it was because it provided ten signed integer digits.
The 1108 was a successor machine (not compatible, but designed by the
same group) to the 1103, which was an unclassified version of the Atlas
II, which was designed for the military. I think I was told that the 10 digits, hence 36 bits, came from a military requirement for the Atlas II.
I am less clear as to the reasoning for ones' complement; wasn't it a
little unusual even at the time?
Perhaps. I was told that the 1108 used ones' complement as it required slightly less logic and was slightly faster in the technology of the
time than two's complement. The example I remember is that in ones' complement, you negate a value just by flipping all the bits, whereas
for two's complement, more logic is required. Eliminating the extra
logic reduced costs and made operations such as subtraction faster, i.e. invert all the bits of the subtrahend and perform an add.
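That invert-to-negate property is easy to demonstrate. Here's a toy Python model of 36-bit ones' complement arithmetic - a sketch for illustration, not the 1108's actual adder - including the end-around carry that makes invert-and-add work for subtraction:

```python
# Toy model of 36-bit ones' complement arithmetic (illustration only).

WORD = 36
MASK = (1 << WORD) - 1  # 36 one bits

def ones_neg(x):
    """Negate by flipping every bit - no extra logic needed."""
    return x ^ MASK

def ones_add(a, b):
    """Add two ones'-complement words with end-around carry."""
    s = a + b
    if s > MASK:                # carry out of bit 35...
        s = (s & MASK) + 1      # ...wraps around into bit 0
    return s

def ones_sub(a, b):
    """a - b: invert the subtrahend and perform an add."""
    return ones_add(a, ones_neg(b))

def to_int(x):
    """Interpret a 36-bit ones'-complement word as a signed integer."""
    return x if x <= MASK >> 1 else -(x ^ MASK)

assert to_int(ones_neg(5)) == -5   # negation is a pure bit-flip
assert to_int(ones_sub(12, 5)) == 7
```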
Again, I apologize if I did or said something wrong. This is all new to me, and I freely admit I'm out of my depth, but I'm trying to do right by the system.
No problem.
Rereading parts of your web site, a few minor nits.
1. You say the 2200 has full decimal floating point. This is not
correct. It has full binary floating point (36 and 72 bits), and
limited decimal fixed point support.
2. When you talk about file names you missed a subtlety. Note that the dollar sign, $, is a legal character in file names. With a flat file
system, you have to ensure that the combination of Qualifier and
Filename is unique across the whole system. But the OS has various
system files, some visible to the user (e.g. system libraries), some not, e.g. the swap file. So in your file naming, you have to ensure you
don't mistakenly use a name that the system is already using. So early
on, Univac decided that all Exec files would contain a dollar sign in
the name. So to ensure no conflicts, all a user has to do is not use a
dollar sign in the name of any of his files.
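The convention is simple enough to express as a one-line check. This is just an illustration of the rule described above; the example names are invented, not real system files:

```python
# Illustrative sketch of the naming convention: Exec files contain a '$',
# so a user name containing no '$' cannot collide with them. The example
# Qualifier*Filename strings below are made up.

def is_safe_user_name(qual_file):
    """True if a Qualifier*Filename contains no '$' and so cannot
    collide with the Exec's own ($-bearing) file names."""
    return "$" not in qual_file

assert not is_safe_user_name("SYS$*LIB$")   # hypothetical system-style name
assert is_safe_user_name("MYQUAL*MYFILE")   # hypothetical safe user name
```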
3. There has been some discussion here about security levels, etc.
While this is important in a multi-user site, for a single user system
such as PS/2200, you really don't have to care about it at all, and
minimal security is fine.
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
Much to my surprise, you reference something 1100/2200 related that
I've never heard of. Please tell us more.
On Tuesday, September 27, 2022 at 2:34:59 PM UTC-7, Stephen Fuld wrote:
On 9/26/2022 11:20 AM, Lewis Cole wrote:
< snip >
I haven't watched the video, but from what you said, I think the problem
is with the lady in the video, not with Kira. She doesn't bolster her
point when she clips out the part that exactly contradicts her position. :-)
Agreed.
Be that as it may, Quinn is no dummy and she expressed an opinion that the other members didn't take issue with ... which I took to mean they either agreed with her or didn't think they knew enough to get into an argument with her that would be captured for posterity.
< snip >
As for the meat of the issue, I can certainly see that someone steeped
in Unix would have a hard time "getting" the 2200 way of doing things.
I applaud Kira for taking on the challenge, and from what I can see,
doing remarkably well.
Agreed.
In an E-mail from one of the members of the board (the details of which I won't get into), he basically said that the board members' experience is more with small scale systems (micros really) rather than mainframes.
So while I might (and did) have problems getting used to Windows coming from an OS1100/2200 environment, and have done just about everything I can to avoid dealing with Linux for the same sort of reason,
the fact of the matter is that if I were to make comments about my experience, no one would likely take such comments as a reflection on PCs or Windows or Linux.
(Well, maybe the Windows users would bitch about Linux and the Linux users would bitch about Windows, but I think most people would bitch about me being a newbie.)
FWIW, as I read what you were doing, I kept thinking to myself,
"If you're trying to send something to a Uniscope terminal as a
test (which I assume is the output device you're using either
simulated or real), why is she messing around with C and DPS at
all?"
I mean, if I wanted to send something to a screen just as a test,
I'd probably use @FLIT.
< snip >
Talk about strange! While you could do as you say, Flit is even more of a
departure from "typical" systems and has a learning curve all its own.
@FLIT is of course 1100/2200 specific, but IMHO is essential if one has any interest in developing software on OS1100/2200, just as much as using GDB is on Linux.
What I've found is that I can do more than just debug programs using @FLIT and in particular when I wanted to see what a terminal/console would do in response to control sequences, @FLIT beat writing a @MASM program to get the job done.
On 9/28/2022 6:48 AM, Kira Ash wrote:
On Tuesday, September 27, 2022 at 10:27:09 PM UTC-7, Stephen Fuld wrote:
Rereading parts of your web site, a few minor nits.
Your section on other mainframe systems got me to do some research.
1. You mention that GCOS was emulated on Itanium hardware, but with its
demise, you didn't know what happened. According to
https://en.wikipedia.org/wiki/General_Comprehensive_Operating_System#Legacy
it is now emulated on various Xeon based systems.
2. Your mention of the Siemens BS2000 systems brought up a memory that
they were another of the "almost IBM S/360 compatible" systems, but I
wasn't sure. Again from Wikipedia
https://en.wikipedia.org/wiki/BS2000
it was based on the RCA Spectra/70 series, which is the same series
that, when RCA decided to exit the computer business, they sold to
Univac, which became the high end of the Univac Series 90!
Probably OT for this group, but all of this got me to wondering: why did
several companies that chose to build their own computer lines and their own
OSes design their hardware to be "almost" IBM compatible? Or, to put
it another way, what was so compelling about the ISA that several
companies decided to adopt it, even when the software wouldn't be
compatible?
On 9/30/2022 9:12 AM, Scott Lurndal wrote:
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
On 9/28/2022 6:48 AM, Kira Ash wrote:
On Tuesday, September 27, 2022 at 10:27:09 PM UTC-7, Stephen Fuld wrote:
Rereading parts of your web site, a few minor nits.
Your section on other mainframe systems got me to do some research.
1. You mention that GCOS was emulated on Itanium hardware, but with its
demise, you didn't know what happened. According to
https://en.wikipedia.org/wiki/General_Comprehensive_Operating_System#Legacy
it is now emulated on various Xeon based systems.
2. Your mention of the Siemens BS2000 systems brought up a memory that
they were another of the "almost IBM S/360 compatible" systems, but I
wasn't sure. Again from Wikipedia
https://en.wikipedia.org/wiki/BS2000
it was based on the RCA Spectra/70 series, which is the same series
that, when RCA decided to exit the computer business, they sold to
Univac, which became the high end of the Univac Series 90!
Probably OT for this group, but all of this got me to wondering: why did
several companies that chose to build their own computer lines and their own
OSes design their hardware to be "almost" IBM compatible? Or, to put
it another way, what was so compelling about the ISA that several
companies decided to adopt it, even when the software wouldn't be
compatible?
Burroughs chose EBCDIC for compatibility with IBM peripherals.
I presume you meant data compatibility of the media (primarily tape),
not the peripherals, as things like tape drives don't care at all what
data you write (i.e. "bits is bits").
Caveat: I just don't know if "paper" peripherals such as printers, card
readers, or the specialized banking peripherals used with many Burroughs
systems know about character sets.
BTW, what character code did the Burroughs 5000 series use, as I think
they were designed before IBM announced EBCDIC?
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
On 9/28/2022 6:48 AM, Kira Ash wrote:
On Tuesday, September 27, 2022 at 10:27:09 PM UTC-7, Stephen Fuld wrote:
Rereading parts of your web site, a few minor nits.
Your section on other mainframe systems got me to do some research.
1. You mention that GCOS was emulated on Itanium hardware, but with its
demise, you didn't know what happened. According to
https://en.wikipedia.org/wiki/General_Comprehensive_Operating_System#Legacy
it is now emulated on various Xeon based systems.
2. Your mention of the Siemens BS2000, systems brought up a memory that
they were another of the "almost IBM S/360 compatible" systems, but I
wasn't sure. Again from Wikipedia
https://en.wikipedia.org/wiki/BS2000
it was based on the RCA Spectra/70 series, which is the same series
that, when RCA decided to exit the computer business, they sold to
Univac which became the high end of the Univac Series 90!
Probably OT for this group, but all of this got me to wondering why,
several companies chose to build their own computer lines, and their own
OS, designed their hardware to be "almost" IBM compatible? Or, to put
it another way, What was so compelling about the ISA that several
companies decided to adopt it, even when the software wouldn't be
compatible?
Burroughs chose EBCDIC for compatibility with IBM peripherals.
Why design your own ISA if you can use one with an existence proof;
likely made it easier to hire programmers away from IBM sites.
BTW, what character code did the Burroughs 5000 series use, as I think
they were designed before IBM announced EBCDIC?
Originally, they used the same character set as the B300 (6-bit BCL).
Note that both the B5000 and the B2500 were developed in the old
electrodata plant in Pasadena by the same folks that did the E220
and the B300 series.
On 10/4/2022 9:43 AM, Scott Lurndal wrote:
BTW, what character code did the Burroughs 5000 series use, as I think
they were designed before IBM announced EBCDIC?
Originally, they used the same character set as the B300 (6-bit BCL).
Note that both the B5000 and the B2500 were developed in the old
electrodata plant in Pasadena by the same folks that did the E220
and the B300 series.
Actually, the B5000/5500/5700 used three character encodings. BCL was
similar to the IBM 1401 code, although the glyphs for a number of the
special characters were different. It was the standard used to
communicate from the various I/O interfaces to most peripheral devices.
When writing to magnetic tape in "alpha" (even-parity) mode, BCL is what
was written.
I assume by "E220" Scott meant the Burroughs 220, which was a
vacuum-tube, decimal, core-memory system, ca. 1958. The Pasadena plant
was also the origin of the ElectroData Datatron 203/204/205 systems,
which were also vacuum-tube, decimal systems, but with a drum memory.
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
On 9/30/2022 9:12 AM, Scott Lurndal wrote:
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
On 9/28/2022 6:48 AM, Kira Ash wrote:
On Tuesday, September 27, 2022 at 10:27:09 PM UTC-7, Stephen Fuld wrote:
Rereading parts of your web site, a few minor nits.
Your section on other mainframe systems got me to do some research.
1. You mention that GCOS was emulated on Itanium hardware, but with its
demise, you didn't know what happened. According to
https://en.wikipedia.org/wiki/General_Comprehensive_Operating_System#Legacy
it is now emulated on various Xeon based systems.
2. Your mention of the Siemens BS2000 systems brought up a memory that
they were another of the "almost IBM S/360 compatible" systems, but I
wasn't sure. Again from Wikipedia
https://en.wikipedia.org/wiki/BS2000
it was based on the RCA Spectra/70 series, which is the same series
that, when RCA decided to exit the computer business, they sold to
Univac, which became the high end of the Univac Series 90!
Probably OT for this group, but all of this got me to wondering: why did
several companies that chose to build their own computer lines and their own
OSes design their hardware to be "almost" IBM compatible? Or, to put
it another way, what was so compelling about the ISA that several
companies decided to adopt it, even when the software wouldn't be
compatible?
Burroughs chose EBCDIC for compatibility with IBM peripherals.
I presume you meant data compatibility of the media (primarily tape),
not the peripherals, as things like tape drives don't care at all what
data you write (i.e. "bits is bits").
Actually, IBM (compatible, e.g. Memorex) peripherals themselves
were often rebadged and used with Burroughs systems.
'Burroughs intended target was the low-end 360/30 and 360/40
which were marketed to SMBs. Thus, the machines used IBM's EBCDIC
coding for data and emulated IBM's file structures [ed. on tape]'.
https://books.google.com/books?id=Mk9-EAAAQBAJ&pg=PA120&lpg=PA120&dq=dave+dahm+ebcdic+burroughs&source=bl&ots=Noc6PbRnKC&sig=ACfU3U1gnPP99Wsx3UufLwMVBcTqaAUH7Q&hl=en&sa=X&ved=2ahUKEwjIhZiogcf6AhX9LEQIHZHqDwsQ6AF6BAgkEAM#v=onepage&q=dave%20dahm%20ebcdic%20burroughs&f=false
On 10/4/2022 9:43 AM, Scott Lurndal wrote:
'Burroughs intended target was the low-end 360/30 and 360/40
which were marketed to SMBs. Thus, the machines used IBM's EBCDIC
coding for data and emulated IBM's file structures [ed. on tape]'.
https://books.google.com/books?id=Mk9-EAAAQBAJ&pg=PA120&lpg=PA120&dq=dave+dahm+ebcdic+burroughs&source=bl&ots=Noc6PbRnKC&sig=ACfU3U1gnPP99Wsx3UufLwMVBcTqaAUH7Q&hl=en&sa=X&ved=2ahUKEwjIhZiogcf6AhX9LEQIHZHqDwsQ6AF6BAgkEAM#v=onepage&q=dave%20dahm%20ebcdic%20burroughs&f=false
Makes sense. The excerpt at that link is quite interesting. It gives a
lot of detail on internals, ISA etc. Thanks.
On Wednesday, September 28, 2022 at 6:48:32 AM UTC-7, Kira Ash wrote:
On Tuesday, September 27, 2022 at 10:27:09 PM UTC-7, Stephen Fuld wrote:
Rereading parts of your web site, a few minor nits.
< snip >
2. When you talk about file names you missed a subtlety. Note that the
dollar sign, $, is a legal character in file names. With a flat file
system, you have to ensure that the combination of Qualifier and
Filename is unique across the whole system. But the OS has various
system files, some visible to the user (e.g. system libraries), some not,
e.g. the swap file. So in your file naming, you have to ensure you
don't mistakenly use a name that the system is already using. So early
on, Univac decided that all Exec files would contain a dollar sign in
the name. So to ensure no conflicts, all a user has to do is not use a
dollar sign in the name of any of his files.
I've revised the section on filenames to mention that system files all contain a $ - I had actually noticed that and was curious about the reason, so thanks for explaining it! Also corrected the mention of decimal floating point.
Thanks again for your feedback - I want to get this right!
You probably don't want to hear from me again, but it occurred to me that the OS2200/OS1100/Exec8 file system might be a good place to showcase the fact that it might not be as backward of a system as those coming from a micro/mini environment might imagine, but rather simply different because of the different evolutionary path involved.
I assume that you are probably aware of this, but just in case, what I'm specifically referring to is the fact that due to their evolutionary history, many/most? folks used to micros/minis oftentimes assume that there's a one-to-one relationship between disks and file systems.
That is, each disk must have on it one and only one file system, and each file system must exist on one and only one disk.
Given that in the Good Old Days folks were lucky if there was even one disk on early micro/mini systems and the media on such a disk might be physically removable, it makes sense to assume that any given disk must contain a complete file system so that moving the media doesn't cause the data on it to become inaccessible.
Mainframe systems, however, usually had quite a few attached disks, some of which might have physically removable disk packs.
But just because a disk pack could or couldn't be physically moved without taking apart the drive it was attached to did not/does not mean that the OS has to treat it, or rather the disk drive that pack was attached/mounted on, that way.
An OS can treat a disk drive with its mounted disk pack as being "logically removable" even though its pack couldn't be removed (AKA physically "fixed"), or treat a disk drive with its mounted pack as "fixed" even though its pack could be physically removed (AKA physically "removable").
In the case of OS2200/OS1100/Exec8, a disk pack that is prepped ("prepared"/initialized) as "removable" did contain enough stuff on it so that it could be moved around in the disk farm attached to an 1100/2200 system.
But if the pack was prepped "fixed", then the OS regarded it as being permanently a part of the system, and so the OS doesn't do anything to try to restrict the files so that they reside entirely on one and only one disk pack.
IOW, parts of files can be spread out over just about ANY ("fixed") disk in the system.
This is of course something that folks familiar with Unix/Linux think of as a modern invention of Sun in the form of the ZFS file system and its ilk since 2000-something ... except that it's been present in OS2200/OS1100/Exec8 for something like 40+ years before.
I recently saw a question elsewhere in comp.sys.unisys where a person asked if there was a utility for finding out what files were on a particular pack.
The people who answered appeared to be fluent in Burroughs systems, which makes sense because that question is usually effectively meaningless for 1100/2200 systems.
Of course, one of the nice things about ZFS is its interest in detecting and/or preventing bit rot, and this is something that (AFAIK) wasn't really a concern for OS2200/OS1100/Exec8 systems.
Perhaps bit rot was somewhat ameliorated by rolling out mass storage files to tape and rolling them in from tape to mass storage, but that's something I never really thought about.
Perhaps Mr. Fuld and/or Mr. Schroth can wave their arms at this.
In any case, I don't think that this distribution of file parts across disks is something that most folks, even those who come from an 1100/2200 environment, would notice or care about and therefore bother mentioning when talking to someone who comes from a non-1100/2200 environment.
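If it helps anyone picture the difference, here's a toy Python sketch: allocation records that carry a device name can scatter one file's extents across several fixed packs, while a record that holds only a block number is implicitly tied to one disk. The Extent structure and pack names below are hypothetical, not the real Exec DAD layout:

```python
# Hypothetical sketch (not the real Exec data structures): the key point
# is whether a file's allocation records can name a device. A DAD-style
# record holds (device, track) pairs, so one file may span many disks.

from dataclasses import dataclass

@dataclass
class Extent:
    device: str   # which pack/drive holds this run of tracks (made-up names)
    start: int    # starting track on that device
    length: int   # number of tracks

# One file spread across three different "fixed" packs:
file_dads = [
    Extent("FIX001", 120, 8),
    Extent("FIX007", 40, 16),
    Extent("FIX003", 900, 4),
]

devices_used = {e.device for e in file_dads}
assert len(devices_used) == 3  # parts of one file live on three disks
```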
On 10/4/22 21:58, Lewis Cole wrote:
I assume that you are probably aware of this,
but just in case, what I'm specifically referring
to is the fact that due to their evolutionary history,
many/most? folks used to micros/minis often
times assume that there's a one-to-one
relationship between disks and file systems.
Why on earth would you think that?
That is each disk must have on it one and only
one file system, and for each file system must
exist on one and only one disk.
Given that in the Good Old Days folks were lucky
if there was even one disk on early micro/mini
systems and the media on such a disk might be
physically removeable, it makes sense to assume
that any given disk must contain a complete file
system so that moving the media doesn't cause
the data on it to become inaccessible.
Sorry, but that isn't at all real. Unix, which began on
the PDP-11 minicomputer, not only supported multiple
file systems on a single disk (even on disks as small
as the 10 Meg RL02) but in some cases required it
for a properly working system.
And then, PC's from the very early days supported
multiple (and often very different) file systems on
single disks which gave us the concept of dual (or
more) booting systems.
Even on DOS only systems
it was not unusual to partition the disk and separate
the DOS and USER file systems.
bill
On Wednesday, October 5, 2022 at 7:50:08 AM UTC-7, Bill Gunshannon wrote:
On 10/4/22 21:58, Lewis Cole wrote:
I assume that you are probably aware of this,
but just in case, what I'm specifically referring
to is the fact that due to their evolutionary history,
many/most? folks used to micros/minis often
times assume that there's a one-to-one
relationship between disks and file systems.
Why on earth would you think that?
You mean, aside from the fact that what I said is true?
That is each disk must have on it one and only
one file system, and for each file system must
exist on one and only one disk.
Given that in the Good Old Days folks were lucky
if there as even one disk on early micro/mini
systems and the media on such a disk might be
physically removeable, it makes sense to assume
that any given disk must contain a complete file
system so that moving the media doesn't cause
the data on it to become inaccessible.
Sorry, but that isn't at all real. Unix, which began on
the PDP-11 minicomputer, not only supported multiple
file systems on a single disk (even on disks as small
as the 10 Meg RL02) but in some cases required it
for a properly working system.
Sorry, but no.
Unix began on a PDP-7, not a PDP-11, and in the beginning there were *NO* disks on the PDP-7 ... zero, zip, nada, none. Instead it had tapes.
Although I'm certainly no expert on Unix development history, the story I heard is that Unix was first being developed on the PDP-7 with *ONE*, count 'em, *ONE* disk, which had a capacity of a few megabytes at most, and it was not partitioned.
Unix was ported to the PDP-11, but that's not how it began.
I assume that you are confused by the fact that when Unix was first announced to the World, it was on a PDP-11.
In any event, it is my understanding that while the Unix file system has changed a bit since the early days, back in the time of the PDP-11 it still involved what amounts to a file descriptor (an "inode") and a list of granules allocated to a particular file.
Each entry in the granule list basically contained what amounts to a disk address without any further identification as to location, unlike an 1100/2200 DAD (Device Address Descriptor) pointed at by an 1100/2200 file item.
IOW, the list could *NOT* indicate that a particular granule was located on a different disk.
So while you could argue that by partitioning a real physical disk into multiple virtual disks, you could have more than one file system per (physical) disk (with one file system per virtual disk), that's only from the perspective of the human looking at the system and not the OS that actually has to deal with the "disks".
In effect, it was basically impossible for a file in a file system to span more than one "disk" the way files on "fixed" mass storage do on the 1100/2200.
If you think I am mistaken, then I await with bated breath for you to present evidence to show that this was the case.
And then, PC's from the very early days supported
multiple (and often very different) file systems on
single disks which gave us the concept of dual (or
more) booting systems.
Sorry but again no.
In the beginning, which is to say even before the IBM PC hit the fan, disk drives of any sort were very much a rarity.
And if they existed, they were 8-inch floppies, not 5.25-inch floppies or hard drives, and each floppy or hard disk contained one and only one "file system", if you can call it that.
In the case of CP/M, for example, its file structure basically described a list of allocated granules for a file where each granule did *NOT* allow for the specification of a different disk.
In fact, if you ever changed a floppy on CP/M, you had to effectively reboot the "OS" each time so that it could recognize the fact that a disk had changed.
When (IBM compatible) PCs came along, 8-inch floppies gave way to 5.25-inch floppies and eventually hard disks.
In each case, however, there was a one-to-one relationship between disks and file systems, in no small part because they used FAT12.
In FAT12, there's a table (well, actually two tables, but since one is a copy of the other, let's call it one) called the FAT (File Allocation Table) that contains 12-bit pointers to the allocated clusters of a file, with NO other information at all that would allow bits of files on one physical disk to be linked to bits of files on a different disk.
And this continued on with FAT16 and FAT32, where the pointers simply grew in size from 12 bits to 16 bits and then 32 bits.
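The point can be illustrated with a rough sketch (a simulated FAT held in a dictionary, not real disk parsing): following a file's cluster chain yields only cluster numbers, never a drive identifier.

```python
# Toy FAT12-style cluster chain: each FAT entry holds only the number
# of the file's next cluster (0xFFF = end of chain).  Note there is
# nowhere in an entry to say "the next cluster is on drive B:".
END = 0xFFF

def cluster_chain(fat, start):
    """Follow a file's chain through the FAT, returning its clusters."""
    chain = []
    c = start
    while c != END:
        chain.append(c)
        c = fat[c]
    return chain

# A tiny simulated FAT: a file occupying clusters 2 -> 3 -> 6.
fat = {2: 3, 3: 6, 6: END}
print(cluster_chain(fat, 2))  # [2, 3, 6]
```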
"Dual booting" did *NOT* change this in the slightest.
Instead, a trick was cooked up where a table was set up in the Master Boot Record (MBR) that allowed one to treat the disk as if it were actually up to four separate (virtual) "disks" ("primary" partitions, one of which could later be an "extended" partition holding "logical" drives).
And with a little extra code and this partition table, you could have a boot loader that could boot from each of the virtual "disks" on the physical disk, but once again, since the file system installed in any of these virtual disks was some version of FAT, it was *NOT* possible for a file to span "disks".
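For what it's worth, the partition-table trick might be sketched like this (a simplified toy parser, not a full MBR reader; the fabricated entry is made up for illustration): each of the four 16-byte entries describes one virtual "disk" by type, starting sector, and length.

```python
# Toy parse of one 16-byte MBR partition-table entry (the four-entry
# table lives at offset 446 in the MBR; this sketch ignores the CHS
# fields and extended partitions).
import struct

def parse_entry(entry: bytes):
    # byte 0: boot flag, byte 4: type, bytes 8-11: starting LBA,
    # bytes 12-15: sector count; CHS fields are skipped with "3x".
    boot, ptype, lba_start, n_sectors = struct.unpack("<B3xB3xII", entry)
    return {"bootable": boot == 0x80, "type": ptype,
            "lba_start": lba_start, "sectors": n_sectors}

# A fabricated entry: bootable, type 0x06 (FAT16), starting at
# LBA 2048, 65536 sectors long.
entry = struct.pack("<B3xB3xII", 0x80, 0x06, 2048, 65536)
p = parse_entry(entry)
print(p)  # {'bootable': True, 'type': 6, 'lba_start': 2048, 'sectors': 65536}
```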
Even when HPFS came along, IIRC, it was still using something that amounts to a list of granules without any drive identifier that would allow a file in a file system to span multiple "disks", no matter how you define a "disk", although pointers to lots of other crud were added to the file descriptors.
And OBTW, I did this with OS/2 and (I think) MS-DOS (although it could have been an early version of Windows), so I can assure you that what I've said is correct from personal experience.
Even on DOS only systems
it was not unusual to partition the disk and separate
the DOS and USER file systems.
Again, we're talking about something that eventually developed, but wasn't the case originally.
And due to the limitations of the structures that described files, it wasn't possible (unless you didn't really care about file integrity at least) for files to span "drives" no matter how they were defined.
And what I said still holds true today.
Just the other day, I poked around with a Linux Mint "live" USB drive and looked at the hardware on an old laptop that I'm thinking about installing Linux Mint on.
What I see still clearly indicates that there's an MBR with a partition table that can make the hard drive "look" like up to four hard drives, where each "drive" contains one and only one file system.
In fact, if this weren't the case, then the old trick of putting one's "usr" tree on its own "drive", so that everything remains on that one "drive" despite installing newer versions of the OS, wouldn't work.
If your goal was to point out that one can partition physical disk drives so that they can be treated as multiple virtual disks, which allows one to have more than one file system per physical disk, then I stand corrected and you were/are right.
However, as this correction doesn't affect whether or not the file systems of yore could support files that span more than one "disk" -- you know, the first sentence after you asked, "Why on earth would you think that?" -- or where/when things happened historically, you'll pardon me if I don't particularly care about it.
On 11/5/2022 8:32 PM, Lewis Cole wrote:
When (IBM compatible) PCs came along, 8-inch floppies gave way to 5.25-inch floppies and eventually hard disks.
No, 5.25-inch floppies came along quite a bit earlier (1976) than the
IBM PC (1981). I bought an Apple ][ Plus in 1979 to host Apple Pascal
that had dual 5.25 single-sided disks, storing a whopping 140KB each.
Each disk could have only a single file system, and of course there was
no spanning of file systems across disks.
See https://en.wikipedia.org/wiki/Floppy_disk.
Paul
On Wednesday, October 5, 2022 at 7:50:08 AM UTC-7, Bill Gunshannon wrote:
On 10/4/22 21:58, Lewis Cole wrote:
I assume that you are probably aware of this,
but just in case, what I'm specifically referring
to is the fact that due to their evolutionary history,
many/most? folks used to micros/minis often
times assume that there's a one-to-one
relationship between disks and file systems.
Why on earth would you think that?
You mean, aside from the fact that what I said is true?
Given that mainframe systems had no such constraints
in the periods before, during and after the mini
era, and most people using minicomputers in those
days were familiar with one or more mainframe
families, I'm not sure why you think that is true.
For PC and hobby users, on the other hand, multivolume
floppies or cassettes weren't uncommon.
In any event, it is my understanding that while the Unix file system has changed a bit since the early days, back in the time of the PDP-11 it still involved what amounts to a file descriptor (an "inode") and a list of granules allocated to a particular file.
That fundamentally describes most file systems, with variations in terminology.
Multivolume filesystems showed up in Unix in the late 80's/early 90's with the
Tolerant filesystem volume manager (later Veritas VxFS/VxVM).
In the beginning, which is to say even before the IBM PC hit the fan, disk drives of any sort were very much a rarity.
For PCs, perhaps. There were of course wide ranges of disk subsystems
from the 1960's forward on mini and mainframe computers.
If your goal was to point out that one can partition physical
disk drives so that they can be treated as multiple virtual
disks, which allows one to have more than one file system per
physical disk, then I stand corrected and you were/are
right.
However, as this correction doesn't affect whether or not a
file systems of yore could support files that span more than
one "disk" -- you know, the first sentence after you asked,
"Why on earth would you think that?" -- or where/when things
happened historically, you'll pardon me if I don't particularly
care about it.
I'm just baffled at how this went from multiple file systems on
one disk to multiple disks containing one file system. [...]
[...] Both of which
various Unixes have supported for several decades. [...]
I ran small disks
combined by software into large disks back in the days we were still
running SPARC at the University. Needed it for large user file
space to support over a hundred students and to support very large
storage area for our USENET News Server.
On Wednesday, October 5, 2022 at 7:50:08 AM UTC-7, Bill Gunshannon wrote:
On 10/4/22 21:58, Lewis Cole wrote:
I assume that you are probably aware of this,
but just in case, what I'm specifically referring
to is the fact that due to their evolutionary history,
many/most? folks used to micros/minis often
times assume that there's a one-to-one
relationship between disks and file systems.
Why on earth would you think that?
You mean, aside from the fact that what I said is true?
Given that mainframe systems had no such constraints
in the periods before, during and after the mini
era, and most people using minicomputers in those
days were familiar with one or more mainframe
families, I'm not sure why you think that is true.
I think it's true because the "givens" you think applied at the time, didn't.
Eventually, disk prices did drop and you could have more than one hard
drive in your PC, but as I indicated before, the very structure of FAT
rules out the possibility of files that span more than one disk.
Yes, many OSs use a list of allocated granules to keep track of the
parts of a file, but for a single disk system, you can also use bit
maps to keep track of what records are allocated and to who[m].
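The bitmap alternative mentioned above might be sketched like so (a toy free-block bitmap with made-up names): one bit per block says allocated or free, which works fine precisely because there is only one disk to describe.

```python
# Toy single-disk block bitmap: bit i says whether block i is in use.
# This identifies *which* blocks are taken, but it only makes sense
# when there's exactly one disk whose blocks the bits describe.
class BlockBitmap:
    def __init__(self, n_blocks):
        self.bits = [False] * n_blocks

    def allocate(self):
        """Grab the first free block, or raise if the disk is full."""
        for i, used in enumerate(self.bits):
            if not used:
                self.bits[i] = True
                return i
        raise OSError("disk full")

    def free(self, i):
        self.bits[i] = False

bm = BlockBitmap(4)
print([bm.allocate() for _ in range(3)])  # [0, 1, 2]
bm.free(1)
print(bm.allocate())  # 1 (first free block is reused)
```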
Lewis Cole <l_c...@juno.com> writes:
< snip >
On Wednesday, October 5, 2022 at 7:50:08 AM UTC-7, Bill Gunshannon wrote:
On 10/4/22 21:58, Lewis Cole wrote:
Why on earth would you think that?
You mean, aside from the fact that what I said is true?
Given that mainframe systems had no such constraints
in the periods before, during and after the mini
era, and most people using minicomputers in those
days were familiar with one or more mainframe
families, I'm not sure why you think that is true.
I think it's true because the "givens" you think applied at the time, didn't.
Well, I was "messing" with minis (an HP2k and a PDP-8, in particular) in the middle of the 1970s. They had disks, and both hosts were basically obsolete at that time.
I was also messing with HP-3000's during the same timeframe, which had many disks,
and I spent 14 years at Burroughs writing mainframe operating systems (which supported multivolume filesystems since the 1960s).
Eventually, disk prices did drop and you could have more than one hard
drive in your PC, but as I indicated before, the very structure of FAT
rules out the possibility of files that span more than one disk.
DOS was never the only operating system, nor was FAT the only filesystem
on disks for personal computers even in the early days.
Yes, many OSs use a list of allocated granules to keep track of the
parts of a file, but for a single disk system, you can also use bit
maps to keep track of what records are allocated and to who[m].
In general, you can divide filesystems into two fundamental types:
- extent-based allocators
- fixed-block-size allocators
Both have their advantages and both have their disadvantages, based on workload.
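The two families might be contrasted with a toy example (illustrative only, invented layout): the same contiguous 100-block file needs 100 entries in a fixed-block scheme but collapses to a single (start, length) entry in an extent scheme.

```python
# Toy contrast of the two allocator families for the same file laid
# out contiguously at disk blocks 1000..1099.

# Fixed-block-size allocator: one entry per block.
block_list = list(range(1000, 1100))        # 100 entries

# Extent-based allocator: a list of (start, length) runs.
extents = [(1000, 100)]                     # 1 entry

def extent_lookup(extents, file_block):
    """Map a file-relative block to a disk block via the extent list."""
    for start, length in extents:
        if file_block < length:
            return start + file_block
        file_block -= length
    raise IndexError("past end of file")

# Both schemes resolve file block 42 to the same disk block.
print(block_list[42], extent_lookup(extents, 42))  # 1042 1042
```

Extents win on contiguous workloads (tiny metadata, good sequential I/O); fixed-block lists win when files fragment, since every block is individually addressable.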
In Unix filesystems, the 'inode' encapsulated filesystem metadata, including the addresses of the sectors (or groups of sectors) containing file data. This should not be confused with a 'directory entry', because it was not one.
This was no different than the Burroughs filesystems other than the terminology
changed; there the 'inode' was called a 'file header'. Both contained the same metadata, and in both cases, the 'directory' was just a file of tuples <filename, metadata-pointer>.
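That "directory is just a file of tuples" idea can be sketched in a few lines (a toy model with invented names): two directory entries pointing at the same metadata record are exactly what Unix calls hard links.

```python
# Toy model of "a directory is just a file of <filename,
# metadata-pointer> tuples".  The metadata table plays the role of
# inodes (Unix) or file headers (Burroughs); the directory itself
# holds only names and pointers.
metadata = {7: {"size": 11, "blocks": [40]}}   # inode/file-header table

directory = [
    ("readme.txt", 7),
    ("readme.bak", 7),   # a second name for the same metadata: a hard link
]

def lookup(directory, name):
    """Resolve a name to its metadata record via the directory tuples."""
    for entry_name, meta_ptr in directory:
        if entry_name == name:
            return metadata[meta_ptr]
    raise FileNotFoundError(name)

# Both names resolve to the very same metadata record.
print(lookup(directory, "readme.txt") is lookup(directory, "readme.bak"))  # True
```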
The Sperry/Univac systems were similar to the Burroughs systems in that the device addresses in the file metadata included a device identifier and a device relative sector address.
The Univac filesystem structure and access methods however, are quite different
from the Burroughs, Unix or microsoft OS methods.
IMO, the major difference (from a user code standpoint) between an
1100 and a 2200 is that marketing thought 2200 sounded twice as sexy
as 1100. Mileage almost certainly varies.
I know it's been longer than I intended, but I have my OS 2200 Express license renewed (though unfortunately Unisys did let me know that the program is over for new users) and will be working on continuing the series in the coming weeks.
On Mon, 4 Dec 2023 14:18:35 -0800 (PST), Kira Ash
<hpeintegrity@gmail.com> wrote:
I know it's been longer than I intended, but I have my OS 2200 Express license renewed (though unfortunately Unisys did let me know that the program is over for new users) and will be working on continuing the series in the coming weeks.
I, for one, am still looking forward to seeing what you write in the future...
Regards,
David W Schroth