• Re: FPUs (was: Fujitsu will discontinue SPARC in 2034)

    From MitchAlsup@21:1/5 to Anton Ertl on Sat Oct 28 12:09:15 2023
    On Saturday, October 28, 2023 at 10:45:30 AM UTC-5, Anton Ertl wrote:
    Thomas Koenig <tko...@netcologne.de> writes:
    MitchAlsup <Mitch...@aol.com> schrieb:
    [68881]
    Where "worked" meant that::
    FABS was 35 cycles !!

    That is weird, that is just setting a bit.

    FADD was 51 cycles !!
    FMUL was 71 cycles !!
    heck even FNEG was 35 cycles !!!

    Likewise, just an NEG...

It (and the x87 originals) was so bad at IEEE 754 that this opened up the door to RISCs.

I think the main source of the slowness was that both the 8087 and 68881/68882 used CORDIC.
    That cannot be the reason for FABS FADD FNEG.
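[For readers unfamiliar with it: CORDIC evaluates elementary functions using only shifts, adds, and a small arctangent table, gaining roughly one bit of precision per iteration, which made it attractive for transistor-starved FPUs but slow in cycles. A minimal rotation-mode sketch in Python, illustrative only, not the actual 8087/68881 microcode:]

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Rotation-mode CORDIC: computes (sin(theta), cos(theta)) for
    |theta| < ~1.74 rad using only the kinds of operations a small
    FPU can afford: table lookups, adds, and shifts (modelled here
    with floating point for clarity)."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # aggregate gain of all the micro-rotations, folded in up front
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y, x  # (sin, cos)
```

[Each iteration is one conditional add/subtract pair plus shifts, so extended precision needs on the order of 64 such steps plus argument reduction; that helps explain hundreds of cycles for transcendentals, though, as noted, it cannot explain FABS/FADD/FNEG.]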
It is an interesting question what sort of performance would have
been possible with the transistor budget of the 8087, around 45000
transistors, or 155000 for the 68881, if Wikipedia is to be believed.
The 88100 contains 165,000 transistors, a pipelined integer
CPU, and a pipelined FPU. It only does 32-bit and 64-bit FP (not
    <
There were 4 pipelined units: integer+logical+shift, multiply {int, FP,
int DIV, FP DIV+SQRT}, FADD {ADD, SUB, CMP, certain others},
and memory {LDs and STs}
    <
    80-bit), and the latency for fadd.ddd is 6 cycles, and for fmul.ddd is
    9 cycles. The FUs are pipelined, but register read and write is 32
    bits at a time, so you can start an fadd.ddd only every second cycle.
    If I understand the FP1 Extra stuff in Table 7-6 correctly, it can
start an fmul.ddd only every 4 cycles.
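[As a sketch of what those numbers imply, assuming, per the figures above, a pipelined unit with a fixed result latency and a fixed issue interval for independent back-to-back operations:]

```python
def pipeline_cycles(n_ops, latency, interval):
    """Cycles to complete n_ops independent operations on a pipelined
    unit: the first result appears after `latency` cycles, and a new
    operation can start every `interval` cycles thereafter."""
    if n_ops == 0:
        return 0
    return latency + (n_ops - 1) * interval

# e.g. 88100-style fadd.ddd: latency 6, one start every 2 cycles
```

[So 10 back-to-back fadd.ddd would take about 6 + 9*2 = 24 cycles under these assumptions, versus roughly 510 on a 68881 at 51 cycles each.]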

    FP absolute and FP negation are implemented with integer instructions
    and take 1 cycle (I think).
    <
Correct, but getting the 1<<31 constant was harder than the FABS or FNEG
itself, and such things are among the minor reasons My 66000 has universal
constant support.
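[To illustrate the point being made here: FABS and FNEG on IEEE 754 single precision are just integer mask/XOR operations on the sign bit, and the only awkward part is materializing the 1<<31 constant. A Python sketch operating on the bit pattern of a float:]

```python
import struct

SIGN_BIT = 1 << 31  # the constant in question

def f32_bits(x):
    """IEEE 754 single-precision bit pattern of x, as an unsigned int."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits_f32(b):
    """Inverse: reinterpret a 32-bit pattern as a float."""
    return struct.unpack('<f', struct.pack('<I', b))[0]

def fabs_via_int(x):
    return bits_f32(f32_bits(x) & (SIGN_BIT - 1))  # clear bit 31

def fneg_via_int(x):
    return bits_f32(f32_bits(x) ^ SIGN_BIT)        # flip bit 31
```

[On a machine that can form the mask cheaply these are single-cycle integer ops, which is the contrast with the 68881's 35 cycles.]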

    According to <https://archive.org/details/ieee_micro_v8n3_june_88/page/n54/mode/1up>,
    the MIPS R3010 contains 75,000 transistors; it runs at 25MHz and takes
    2 cycles for an FP DP add, and 5 cycles for an FP DP multiply
    (compared to 26 and 46 for the 68882). According to Figure 1 and
    Table 3, the R3010 was 16-27 times faster than a VAX11/780, and more
    than 3 times faster than a VAX 8700. The Weitek 1164/1165 in the Sun
    4/260 roughly matched the VAX 8700, and both were 3-7 times faster
    than the VAX 11/780. The 68881 (in a Sun 3/260) roughly matched the
    VAX 11/780.

I find the title of the next article funny: "Intel's 80960: An
    Architecture optimized for Embedded Control". I only realized a few
    years ago that it was anything but.
    <
    Anything but 'optimized' or anything but 'Embedded' ??

    Going to the next article, it is about the Weitek 3164 (probably a
successor to the 1164 mentioned above). I didn't find a transistor
    count for the WTL3164, though.
    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7...@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Schultz@21:1/5 to Opus on Sat Oct 28 18:14:05 2023
    On 10/27/23 3:54 PM, Opus wrote:
    I don't remember if it was at all possible to use a 68881 with a 68000.
    But maybe by "68000" you meant the 68020 and later.

    You could use it as a peripheral with the 68000. You could still write
    the code to use the 68881 instructions, they were just dealt with by an exception handler rather than the coprocessor interface.
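[The mechanism being described: on a plain 68000, any opcode whose top four bits are 1111 ("F-line") takes an unimplemented-instruction trap, so an exception handler can decode the FP opcode and drive a 68881 wired up as a memory-mapped peripheral. A toy sketch of just the dispatch decision; the handler and the peripheral protocol are omitted:]

```python
def dispatch(opcode, fline_handler):
    """Route a 68000 instruction word: F-line opcodes (top nibble 0xF)
    go to the trap handler, which would talk to the 68881; everything
    else executes natively."""
    if (opcode >> 12) == 0xF:
        return fline_handler(opcode)
    return "native"
```

[Every FP instruction thus pays full exception entry/exit on top of the coprocessor work, which is why this path was much slower than the 68020's dedicated coprocessor interface.]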

    --
    http://davesrocketworks.com
    David Schultz

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup@21:1/5 to Branimir Maksimovic on Sat Oct 28 17:01:17 2023
    On Saturday, October 28, 2023 at 5:24:44 PM UTC-5, Branimir Maksimovic wrote:
    On 2023-10-27, Opus <ifo...@youknew.org> wrote:
    On 27/10/2023 18:41, Bernd Linsel wrote:
    On 27.10.2023 15:01, Thomas Koenig wrote:
    Scott Lurndal <sc...@slp53.sl.home> schrieb:

    When the Sun-1 came out in the 80's, the bulk of
    technical computing was done on minicomputers (e.g. VAX)
    using graphical output devices like the 4014 and VK100 GIGI.

    Without a numerical co-processor, the 68000 was not really
competitive for floating point. This probably let minicomputer
    vendors sleep at night, for a time.

    There were the 68881 and 68882.
    https://en.wikipedia.org/wiki/Motorola_68881

    As I remember and they seem to state as well on this wikipedia page, the 68881/2 were designed for use with a 68020 (and later 68030) and not for the 68000 itself.

    I don't remember if it was at all possible to use a 68881 with a 68000. But maybe by "68000" you meant the 68020 and later.
    <
    68000 was 16 bit, 68020 I think 32 bit, correct me if I am wrong...
    <
    68000 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and 16-bit data bus.
    68008 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and a 8-bit data bus.
    68010 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and a 16-bit data bus.
    68020 was a 32-bit architecture on a 32-bit µarchitecture with a 24-bit address bus and a 32-bit data bus.

    --

    7-77-777, Evil Sinner! https://www.linkedin.com/in/branimir-maksimovic-6762bbaa/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to MitchAlsup on Sun Oct 29 02:50:04 2023
    On Sunday, October 29, 2023 at 2:01:20 AM UTC+2, MitchAlsup wrote:
    On Saturday, October 28, 2023 at 5:24:44 PM UTC-5, Branimir Maksimovic wrote:
    On 2023-10-27, Opus <ifo...@youknew.org> wrote:
    On 27/10/2023 18:41, Bernd Linsel wrote:
    On 27.10.2023 15:01, Thomas Koenig wrote:
    Scott Lurndal <sc...@slp53.sl.home> schrieb:

    When the Sun-1 came out in the 80's, the bulk of
    technical computing was done on minicomputers (e.g. VAX)
    using graphical output devices like the 4014 and VK100 GIGI.

    Without a numerical co-processor, the 68000 was not really
competitive for floating point. This probably let minicomputer
    vendors sleep at night, for a time.

    There were the 68881 and 68882.
    https://en.wikipedia.org/wiki/Motorola_68881

    As I remember and they seem to state as well on this wikipedia page, the 68881/2 were designed for use with a 68020 (and later 68030) and not for the 68000 itself.

    I don't remember if it was at all possible to use a 68881 with a 68000. But maybe by "68000" you meant the 68020 and later.
    <
    68000 was 16 bit, 68020 I think 32 bit, correct me if I am wrong...
    <
    68000 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and 16-bit data bus.
    68008 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and a 8-bit data bus.
    68010 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and a 16-bit data bus.
    68020 was a 32-bit architecture on a 32-bit µarchitecture with a 24-bit address bus and a 32-bit data bus.

    --

    7-77-777, Evil Sinner! https://www.linkedin.com/in/branimir-maksimovic-6762bbaa/

    Mitch,
Users that access this group through the Eternal September server, which by
now appears to be approximately half of the regular posters, don't see your
last post, because the admin of Eternal September turned off Google Groups
feeds for the majority of yesterday (Sat). If you want your post read,
please repost later today (Sun) through Google Groups, or whenever you wish
through another provider.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Timothy McCaffrey@21:1/5 to MitchAlsup on Sun Oct 29 17:32:17 2023
    On Saturday, October 28, 2023 at 8:01:20 PM UTC-4, MitchAlsup wrote:
    On Saturday, October 28, 2023 at 5:24:44 PM UTC-5, Branimir Maksimovic wrote:
    On 2023-10-27, Opus <ifo...@youknew.org> wrote:
    On 27/10/2023 18:41, Bernd Linsel wrote:
    On 27.10.2023 15:01, Thomas Koenig wrote:
    Scott Lurndal <sc...@slp53.sl.home> schrieb:

    When the Sun-1 came out in the 80's, the bulk of
    technical computing was done on minicomputers (e.g. VAX)
    using graphical output devices like the 4014 and VK100 GIGI.

    Without a numerical co-processor, the 68000 was not really
competitive for floating point. This probably let minicomputer
    vendors sleep at night, for a time.

    There were the 68881 and 68882.
    https://en.wikipedia.org/wiki/Motorola_68881

    As I remember and they seem to state as well on this wikipedia page, the 68881/2 were designed for use with a 68020 (and later 68030) and not for the 68000 itself.

    I don't remember if it was at all possible to use a 68881 with a 68000. But maybe by "68000" you meant the 68020 and later.
    <
    68000 was 16 bit, 68020 I think 32 bit, correct me if I am wrong...
    <
    68000 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and 16-bit data bus.
    68008 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and a 8-bit data bus.
    68010 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and a 16-bit data bus.
    68020 was a 32-bit architecture on a 32-bit µarchitecture with a 24-bit address bus and a 32-bit data bus.

    --

    7-77-777, Evil Sinner! https://www.linkedin.com/in/branimir-maksimovic-6762bbaa/

I think the 68008 had only a 20-bit address bus.
The 68020 had a 32-bit address bus.

    - Tim

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to MitchAlsup@aol.com on Mon Oct 30 00:06:05 2023
    On Fri, 27 Oct 2023 17:43:05 -0700 (PDT), MitchAlsup
    <MitchAlsup@aol.com> wrote:

On Friday, October 27, 2023 at 5:07:00 PM UTC-5, Bernd Linsel wrote:
    On 27.10.2023 22:54, Opus wrote:
    :
    I don't remember if it was at all possible to use a 68881 with a 68000.
    But maybe by "68000" you meant the 68020 and later.
    Au contraire, the -881 worked with all 68k family processors, at least
    up to the 040.
    <
    Where "worked" meant that::
    FABS was 35 cycles !!
    FADD was 51 cycles !!
    FMUL was 71 cycles !!
    heck even FNEG was 35 cycles !!!

    Which still was much faster than doing it in software on the CPU.


    Mid 90's to 2000 I was doing a lot of medical imaging. At that time
    MRI 'pixels' were 32-bit floating point and we were starting to see
    64-bit pixels on the latest units.
    [Technically the pixels really were a fractional deviation from a
    common base, however both base and deviation(s) were expressed in 754
    single format. For the most part, it was easiest to handle it all as
    FP data.]

    25 years later I don't have numbers to give, but I worked with
    68030/68882 and 80386/80387 both running at 33MHz. At least with our
    FP imaging codes, the 68030/68882 combo was noticeably faster.

Same with 68040: at 40MHz it trounced 100MHz 80486DX4 and (original)
60MHz Pentium. Versus 75MHz Pentium, it was a toss-up ... some codes
    were faster on Motorola, some on Intel.

    Had different results with integer imaging codes - actually very
    different results depending on the algorithm - but we are talking
    about FPU performance here. 8-)

    Unfortunately 40MHz 68040 was the last Motorola I used. When the
    90MHz Pentium arrived, Intel definitively won the performance/price
    contest, and we never looked back.


    YMMV.


It (and the x87 originals) was so bad at IEEE 754 that this opened up the door to RISCs.

    See e.g.
    http://www.bitsavers.org/components/motorola/_appNotes/AN-0947_MC68881_Floating-Point_Coprocessor_as_a_Peripheral_in_a_M68000_System_%5BMotorola_1987_37p%5D.pdf


    --
    Bernd Linsel

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From robfi680@gmail.com@21:1/5 to Timothy McCaffrey on Sun Oct 29 22:24:01 2023
    On Sunday, October 29, 2023 at 8:32:19 PM UTC-4, Timothy McCaffrey wrote:
    On Saturday, October 28, 2023 at 8:01:20 PM UTC-4, MitchAlsup wrote:
    On Saturday, October 28, 2023 at 5:24:44 PM UTC-5, Branimir Maksimovic wrote:
    On 2023-10-27, Opus <ifo...@youknew.org> wrote:
    On 27/10/2023 18:41, Bernd Linsel wrote:
    On 27.10.2023 15:01, Thomas Koenig wrote:
    Scott Lurndal <sc...@slp53.sl.home> schrieb:

    When the Sun-1 came out in the 80's, the bulk of
    technical computing was done on minicomputers (e.g. VAX)
    using graphical output devices like the 4014 and VK100 GIGI.

    Without a numerical co-processor, the 68000 was not really
competitive for floating point. This probably let minicomputer
    vendors sleep at night, for a time.

    There were the 68881 and 68882.
    https://en.wikipedia.org/wiki/Motorola_68881

    As I remember and they seem to state as well on this wikipedia page, the
    68881/2 were designed for use with a 68020 (and later 68030) and not for
    the 68000 itself.

    I don't remember if it was at all possible to use a 68881 with a 68000.
    But maybe by "68000" you meant the 68020 and later.
    <
    68000 was 16 bit, 68020 I think 32 bit, correct me if I am wrong...
    <
    68000 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and 16-bit data bus.
    68008 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and a 8-bit data bus.
    68010 was a 32-bit architecture on a 16-bit µarchitecture with a 24-bit address bus and a 16-bit data bus.
    68020 was a 32-bit architecture on a 32-bit µarchitecture with a 24-bit address bus and a 32-bit data bus.

    --

    7-77-777, Evil Sinner! https://www.linkedin.com/in/branimir-maksimovic-6762bbaa/
I think the 68008 had only a 20-bit address bus.

    The 68008 48 pin DIP was 20 address bits, but the 52-pin QFP version brought out a couple more address bits (22 total) and another interrupt line.

The 68020 had a 32-bit address bus.

    - Tim

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Thomas Koenig on Mon Oct 30 12:29:47 2023
    On 28/10/2023 16:02, Thomas Koenig wrote:
    MitchAlsup <MitchAlsup@aol.com> schrieb:
    On Friday, October 27, 2023 at 5:07:00 PM UTC-5, Bernd Linsel wrote:
    On 27.10.2023 22:54, Opus wrote:
    On 27/10/2023 18:41, Bernd Linsel wrote:
    On 27.10.2023 15:01, Thomas Koenig wrote:
    Scott Lurndal <sc...@slp53.sl.home> schrieb:

    When the Sun-1 came out in the 80's, the bulk of
    technical computing was done on minicomputers (e.g. VAX)
    using graphical output devices like the 4014 and VK100 GIGI.

    Without a numerical co-processor, the 68000 was not really
competitive for floating point. This probably let minicomputer
    vendors sleep at night, for a time.

    There were the 68881 and 68882.
    https://en.wikipedia.org/wiki/Motorola_68881

As I remember and they seem to state as well on this wikipedia page, the
68881/2 were designed for use with a 68020 (and later 68030) and not for
the 68000 itself.

I don't remember if it was at all possible to use a 68881 with a 68000.
But maybe by "68000" you meant the 68020 and later.
    Au contraire, the -881 worked with all 68k family processors, at least
    up to the 040.
    <
    Where "worked" meant that::
    FABS was 35 cycles !!

    That is weird, that is just setting a bit.

    FADD was 51 cycles !!
    FMUL was 71 cycles !!
    heck even FNEG was 35 cycles !!!

    Likewise, just an NEG...

It (and the x87 originals) was so bad at IEEE 754 that this opened up the door to RISCs.

I think the main source of the slowness was that both the 8087 and
    68881/68882 used CORDIC.


    Another issue for much of it was the slow coprocessor communication.
    IIRC the 68000 did not have a separate coprocessor interface, requiring
    many bus cycles (at 16-bit width) to pass instructions and data across.

    A /long/ time ago, we made a board with a 68332 (a microcontroller with
    a CPU mostly like a 68020, but with a 16-bit external databus) and a
    68881 floating point coprocessor. The 68332 was great, and we used it
    on many systems, but working with the 68881 was /slow/.

    It is an interesting question what sort of performance would have
    been possible with the transistor budget of the 8087, around 45000 transistors, or 155000 for the 68881, if Wikipedia is to be believed.

Some of the hardware implementations of that time were extremely slow.
I think it was on the 68020 that they realised that the hardware
division instructions were slower than software division routines, and
so they were dropped for the 68030 and later.
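[For comparison, the kind of software routine that could compete with a slow microcoded divide is plain shift-and-subtract (restoring) division, one quotient bit per step. A Python sketch of the 32-bit unsigned case:]

```python
def divu32(dividend, divisor):
    """32-bit unsigned restoring division: one compare/subtract/shift
    step per quotient bit, the shape of a typical software divide
    routine. Returns (quotient, remainder)."""
    if divisor == 0:
        raise ZeroDivisionError("division by zero")
    q, r = 0, 0
    for i in range(31, -1, -1):
        r = (r << 1) | ((dividend >> i) & 1)  # bring down next bit
        if r >= divisor:
            r -= divisor
            q |= 1 << i
    return q, r
```

[32 simple compare/subtract/shift steps, often with early exit on small operands, is roughly the trade-off against microcoded DIVU/DIVS being described here.]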

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Mon Oct 30 16:11:32 2023
    On 30/10/2023 12:37, David Brown wrote:

    Sorry about the multiple posts - I had a bit of a hang-up with my news
    client.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Joe Pfeiffer on Mon Oct 30 16:11:52 2023
    On 29/10/2023 02:15, Joe Pfeiffer wrote:
    Branimir Maksimovic <branimir.maksimovic@icloud.com> writes:

    On 2023-10-27, Opus <ifonly@youknew.org> wrote:
    On 27/10/2023 18:41, Bernd Linsel wrote:
    On 27.10.2023 15:01, Thomas Koenig wrote:
    Scott Lurndal <scott@slp53.sl.home> schrieb:

    When the Sun-1 came out in the  80's, the bulk of
    technical computing was done on minicomputers (e.g. VAX)
    using graphical output devices like the 4014 and VK100 GIGI.

    Without a numerical co-processor, the 68000 was not really
competitive for floating point.  This probably let minicomputer
    vendors sleep at night, for a time.

    There were the 68881 and 68882.
    https://en.wikipedia.org/wiki/Motorola_68881

As I remember and they seem to state as well on this wikipedia page, the
68881/2 were designed for use with a 68020 (and later 68030) and not for
the 68000 itself.

    I don't remember if it was at all possible to use a 68881 with a 68000.
    But maybe by "68000" you meant the 68020 and later.
    68000 was 16 bit, 68020 I think 32 bit, correct me if I am wrong...

    68000 was 32 bit architecture on a 16 bit bus.

The 68000, IIRC, had a 16-bit ALU (as well as the 16-bit databus). The
register set was 32 bits wide, and the instructions all supported 32-bit
sizes (along with 8-bit and 16-bit). But a full 32-bit ALU would have
been too big and expensive. The idea was that the architecture would be
competitive with existing 16-bit devices while being "forward
compatible" with planned fully 32-bit versions.
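[The trick described above can be sketched concretely: a 32-bit add on a 16-bit ALU is two 16-bit adds chained by a carry, which is why .l operations cost extra internal cycles on the 68000:]

```python
MASK16 = 0xFFFF

def add32_on_16bit_alu(a, b):
    """32-bit add performed as two 16-bit adds linked by the carry,
    the way a 16-bit datapath handles a 32-bit (.l) operation."""
    lo = (a & MASK16) + (b & MASK16)
    carry = lo >> 16
    hi = ((a >> 16) & MASK16) + ((b >> 16) & MASK16) + carry
    return ((hi & MASK16) << 16) | (lo & MASK16)
```

[The result wraps modulo 2^32 exactly as the hardware does; the programmer sees a 32-bit machine while the datapath stays 16 bits wide.]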

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From EricP@21:1/5 to David Brown on Mon Oct 30 13:27:13 2023
    David Brown wrote:
    On 29/10/2023 02:15, Joe Pfeiffer wrote:
    Branimir Maksimovic <branimir.maksimovic@icloud.com> writes:

    On 2023-10-27, Opus <ifonly@youknew.org> wrote:
    On 27/10/2023 18:41, Bernd Linsel wrote:
    On 27.10.2023 15:01, Thomas Koenig wrote:
    Scott Lurndal <scott@slp53.sl.home> schrieb:

    When the Sun-1 came out in the 80's, the bulk of
    technical computing was done on minicomputers (e.g. VAX)
    using graphical output devices like the 4014 and VK100 GIGI.

    Without a numerical co-processor, the 68000 was not really
competitive for floating point. This probably let minicomputer
    vendors sleep at night, for a time.

    There were the 68881 and 68882.
    https://en.wikipedia.org/wiki/Motorola_68881

As I remember and they seem to state as well on this wikipedia page, the
68881/2 were designed for use with a 68020 (and later 68030) and not for
the 68000 itself.

I don't remember if it was at all possible to use a 68881 with a 68000.
But maybe by "68000" you meant the 68020 and later.
    68000 was 16 bit, 68020 I think 32 bit, correct me if I am wrong...

    68000 was 32 bit architecture on a 16 bit bus.

The 68000, IIRC, had a 16-bit ALU (as well as the 16-bit databus). The
register set was 32 bits wide, and the instructions all supported 32-bit
sizes (along with 8-bit and 16-bit). But a full 32-bit ALU would have
been too big and expensive. The idea was that the architecture would be
competitive with existing 16-bit devices while being "forward
compatible" with planned fully 32-bit versions.


(It is hard to pick messages out of the clutter as the
denial of service attack msg flood from GG continues)

It had 3 banks of 16-bit registers, each with 16-bit ALUs,
for data low, address low, and address & data high,
plus two segmented buses for address and data.

    see Fig.2

    Patent US4296469A, 1978
Execution unit for data processor using segmented bus structure
https://patents.google.com/patent/US4296469A/

    Harry Tredennick also did the IBM Micro-370 from a modified 68000.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to EricP on Mon Oct 30 22:31:51 2023
    On Mon, 30 Oct 2023 13:27:13 -0400
    EricP <ThatWouldBeTelling@thevillage.com> wrote:

(It is hard to pick messages out of the clutter as the
denial of service attack msg flood from GG continues)


    [O.T.]
You can read a very well-filtered variant of comp.arch here: https://www.novabbs.com/devel/thread.php?group=comp.arch

That includes all non-spam posts made via Google Groups in the last
couple of days, i.e. during the period when Eternal September was
temporarily disconnected from GG feeds.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris M. Thomasson@21:1/5 to George Neuner on Mon Oct 30 14:16:40 2023
    On 10/29/2023 9:06 PM, George Neuner wrote:
    On Fri, 27 Oct 2023 17:43:05 -0700 (PDT), MitchAlsup
    <MitchAlsup@aol.com> wrote:

On Friday, October 27, 2023 at 5:07:00 PM UTC-5, Bernd Linsel wrote:
    On 27.10.2023 22:54, Opus wrote:
    :
I don't remember if it was at all possible to use a 68881 with a 68000.
But maybe by "68000" you meant the 68020 and later.
    Au contraire, the -881 worked with all 68k family processors, at least
    up to the 040.
    <
    Where "worked" meant that::
    FABS was 35 cycles !!
    FADD was 51 cycles !!
    FMUL was 71 cycles !!
    heck even FNEG was 35 cycles !!!

    Which still was much faster than doing it in software on the CPU.


    Mid 90's to 2000 I was doing a lot of medical imaging. At that time
    MRI 'pixels' were 32-bit floating point and we were starting to see
    64-bit pixels on the latest units.
    [...]

    DICOM volumetric images?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to chris.m.thomasson.1@gmail.com on Tue Oct 31 22:19:36 2023
    On Mon, 30 Oct 2023 14:16:40 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote:

    On 10/29/2023 9:06 PM, George Neuner wrote:
    On Fri, 27 Oct 2023 17:43:05 -0700 (PDT), MitchAlsup
    <MitchAlsup@aol.com> wrote:

On Friday, October 27, 2023 at 5:07:00 PM UTC-5, Bernd Linsel wrote:
On 27.10.2023 22:54, Opus wrote:
:
I don't remember if it was at all possible to use a 68881 with a 68000.
But maybe by "68000" you meant the 68020 and later.
Au contraire, the -881 worked with all 68k family processors, at least
up to the 040.
    <
    Where "worked" meant that::
    FABS was 35 cycles !!
    FADD was 51 cycles !!
    FMUL was 71 cycles !!
    heck even FNEG was 35 cycles !!!

    Which still was much faster than doing it in software on the CPU.


    Mid 90's to 2000 I was doing a lot of medical imaging. At that time
    MRI 'pixels' were 32-bit floating point and we were starting to see
    64-bit pixels on the latest units.
    [...]

    DICOM volumetric images?

    Yes. We were post-processing for holographic film rendering.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris M. Thomasson@21:1/5 to George Neuner on Tue Oct 31 20:40:59 2023
    On 10/31/2023 7:19 PM, George Neuner wrote:
    On Mon, 30 Oct 2023 14:16:40 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote:

    On 10/29/2023 9:06 PM, George Neuner wrote:
    On Fri, 27 Oct 2023 17:43:05 -0700 (PDT), MitchAlsup
    <MitchAlsup@aol.com> wrote:

On Friday, October 27, 2023 at 5:07:00 PM UTC-5, Bernd Linsel wrote:
On 27.10.2023 22:54, Opus wrote:
:
I don't remember if it was at all possible to use a 68881 with a 68000.
But maybe by "68000" you meant the 68020 and later.
Au contraire, the -881 worked with all 68k family processors, at least
up to the 040.
    <
    Where "worked" meant that::
    FABS was 35 cycles !!
    FADD was 51 cycles !!
    FMUL was 71 cycles !!
    heck even FNEG was 35 cycles !!!

    Which still was much faster than doing it in software on the CPU.


    Mid 90's to 2000 I was doing a lot of medical imaging. At that time
    MRI 'pixels' were 32-bit floating point and we were starting to see
    64-bit pixels on the latest units.
    [...]

    DICOM volumetric images?

    Yes. We were post-processing for holographic film rendering.

Nice. I am wondering if you can remember the general resolutions you
    were working with at the time? 2048^3 resolution? Fwiw, I had to do a
    lot of work in volumetrics to help me visualize some of my n-ary vector
    fields. Here is some of my work for an experimental mandelbulb of mine:

    https://nocache-nocookies.digitalgott.com/gallery/17/11687_15_03_15_8_18_09.jpeg

    https://www.fractalforums.com/index.php?action=gallery%3Bsa%3Dview%3Bid%3D17187

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to chris.m.thomasson.1@gmail.com on Wed Nov 1 19:46:20 2023
    On Tue, 31 Oct 2023 20:40:59 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote:

    On 10/31/2023 7:19 PM, George Neuner wrote:
    On Mon, 30 Oct 2023 14:16:40 -0700, "Chris M. Thomasson"
    <chris.m.thomasson.1@gmail.com> wrote:

    On 10/29/2023 9:06 PM, George Neuner wrote:
    On Fri, 27 Oct 2023 17:43:05 -0700 (PDT), MitchAlsup
    <MitchAlsup@aol.com> wrote:

On Friday, October 27, 2023 at 5:07:00 PM UTC-5, Bernd Linsel wrote:
On 27.10.2023 22:54, Opus wrote:
:
I don't remember if it was at all possible to use a 68881 with a 68000.
But maybe by "68000" you meant the 68020 and later.
Au contraire, the -881 worked with all 68k family processors, at least
up to the 040.
    <
    Where "worked" meant that::
    FABS was 35 cycles !!
    FADD was 51 cycles !!
    FMUL was 71 cycles !!
    heck even FNEG was 35 cycles !!!

    Which still was much faster than doing it in software on the CPU.


    Mid 90's to 2000 I was doing a lot of medical imaging. At that time
    MRI 'pixels' were 32-bit floating point and we were starting to see
    64-bit pixels on the latest units.
    [...]

    DICOM volumetric images?

    Yes. We were post-processing for holographic film rendering.

Nice. I am wondering if you can remember the general resolutions you
    were working with at the time? 2048^3 resolution? Fwiw, I had to do a
lot of work in volumetrics to help me visualize some of my n-ary vector
fields. Here is some of my work for an experimental mandelbulb of mine:

    https://nocache-nocookies.digitalgott.com/gallery/17/11687_15_03_15_8_18_09.jpeg

    https://www.fractalforums.com/index.php?action=gallery%3Bsa%3Dview%3Bid%3D17187


    Sorry, I don't remember what resolutions we were working with ... we
    worked with MR, CT and radiograph imagery.

    What I worked on did not need to know the source resolution because by
    the time my code got the imagery, it already had been cropped to fit
    the projector resolution: at most 1024x768. Depth was limited to, at
    most, 80-100 slices: recall that ALL the spatial content of a hologram
    gets compressed into the single interference image, so too much
    content can cause the hologram to become light saturated and to lose
    fidelity.


    Our system ultimately included front-end "workstations" running either
    Linux or NetBSD (depending), and back-end "compute servers" running
    VxWorks co-located with the control units in each film printer.


    The workstations needed to be (reasonably) interactive - allowed no
    more than a couple of seconds to update the display. They did not
    even attempt to do 3D rendering, but rather provided a quick-n-dirty approximation of the hologram that would result from rendering the
    selected volume in the selected orientation. The operator could
    rotate and crop the volume and step through the approximated
    "hologram" image, to get an idea of what would ultimately be rendered
    on film.

    The original idea was for the workstations to do exposure calculations directly, and just have storage and a queue manager in the film
    printer. We desired to use x86 chips as much as possible because they
    were cheap relative to others even in industrial SBC.

    But then testing with various CPUs showed just how poor existing x86
    were at doing floating point. 80386/80387 was unacceptably slow even
    just doing the display approximations. Fast 80486dx could manage
    display for our initial target resolutions - but we expected
resolutions to increase, and even the fastest 80486 was not capable of
doing exposure calculations in any reasonable amount of time. In the
    end, we went with Pentium [still expensive at the time] and dropped
    the whole notion of workstations doing the exposure calculations.





When we realized just how ridiculously long even average exposure
calculations would take on a PC, we knew that this solution was
    not viable: computing in the background while letting the operator
    continue working would delay the time to start printing unacceptably
    in an emergency situation where the operator might want to quickly
    produce multiple films from different orientations/croppings of the
    same volume. Conversely, computing in the foreground would delay the
    operator for unacceptable periods which would interfere with "normal"
    operation where the operator was expected to take care of several
    patients and then sit down to do reviews and make films.

    We could have provided both options, but switching between them in an
    emergency situation could have been complicated, so we decided it
    would be better to go with a dedicated compute server on the back end.
    Once that decision was made, we explored how best to do it (with
    reasonable cost), and after some experimenting we realized that the
    68030/68882 was so much faster that it would be able to serve multiple
    80386 workstations in the normal case, and not delay prints beyond
    availability of the projector unit in the emergency case.


Final rotated/cropped volumes would be sent to the film printer. The
"lens" [actually a clear LCD] could be moved in 1mm increments to
expose the film, so ray tracing was performed to determine the desired
contents of each pixel of each 1mm-deep slice of the final hologram.

From there, calculations were done to determine what points actually
needed to be exposed to produce the desired interference image on film,
and how bright each point needed to be. Finally, the point information
was mapped onto movements of the LCD and shutter times for the laser.

    That produced a hologram "negative" which, once developed, could be
    used repeatedly to make "positive" holograms. The holograms were, by
    design, dimensionally accurate: 1cm in the source volume was 1cm in
    the hologram. In practice it was limited to, roughly, a 20cm cube
    centered on the plane of the film [yes, the film could be reversed on
the viewer to see what was "behind" it].

  • From George Neuner@21:1/5 to gneuner2@comcast.net on Wed Nov 1 20:31:27 2023
    On Wed, 01 Nov 2023 20:26:55 -0400, George Neuner
    <gneuner2@comcast.net> wrote:

    :
    Ultimately the compute servers ended up based on 68040, but the
    software was developed initially on 68030 using first 68881 and then
    68882. The first units shipped with 68030/68882. We switched to
    68040 when they became available
    ^^^
    at more reasonable prices.

  • From George Neuner@21:1/5 to chris.m.thomasson.1@gmail.com on Wed Nov 1 20:26:55 2023
    On Tue, 31 Oct 2023 20:40:59 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote:

    On 10/31/2023 7:19 PM, George Neuner wrote:
    On Mon, 30 Oct 2023 14:16:40 -0700, "Chris M. Thomasson"
    <chris.m.thomasson.1@gmail.com> wrote:

    On 10/29/2023 9:06 PM, George Neuner wrote:
    On Fri, 27 Oct 2023 17:43:05 -0700 (PDT), MitchAlsup
    <MitchAlsup@aol.com> wrote:

On Friday, October 27, 2023 at 5:07:00 PM UTC-5, Bernd Linsel wrote:
On 27.10.2023 22:54, Opus wrote:
:
I don't remember if it was at all possible to use a 68881 with a 68000.
But maybe by "68000" you meant the 68020 and later.
Au contraire, the -881 worked with all 68k family processors, at least
up to the 040.
    <
    Where "worked" meant that::
    FABS was 35 cycles !!
    FADD was 51 cycles !!
    FMUL was 71 cycles !!
    heck even FNEG was 35 cycles !!!

    Which still was much faster than doing it in software on the CPU.


    Mid 90's to 2000 I was doing a lot of medical imaging. At that time
    MRI 'pixels' were 32-bit floating point and we were starting to see
    64-bit pixels on the latest units.
    [...]

    DICOM volumetric images?

    Yes. We were post-processing for holographic film rendering.

Nice. I am wondering if you can remember the general resolutions you
    were working with at the time? 2048^3 resolution? Fwiw, I had to do a
lot of work in volumetrics to help me visualize some of my n-ary vector
fields. Here is some of my work for an experimental mandelbulb of mine:

    https://nocache-nocookies.digitalgott.com/gallery/17/11687_15_03_15_8_18_09.jpeg

    https://www.fractalforums.com/index.php?action=gallery%3Bsa%3Dview%3Bid%3D17187


    Sorry, I don't remember what resolutions we were working with ... we
    worked variously with MR, CT and radiograph imagery.

    What I worked on did not need to know the source resolution because by
the time my code got the imagery, it already had been cropped to fit
the projector resolution, which was at most 1024x768. Depth was
    limited to, at most, 80-100 slices: recall that ALL the spatial
    content of a hologram gets compressed into the single interference
    image, so too much content can cause the hologram to become light
    saturated and to lose fidelity.


    Our system ultimately included front-end PC "workstations" running
    either Linux or NetBSD (depending), and back-end VME "compute server"
    chassis running VxWorks co-located with the control units in each film
    printer.


    The workstations needed to be (reasonably) interactive - allowed no
    more than a couple of seconds to update the display. They did not
    even attempt to do 3D rendering, but rather provided a quick-n-dirty approximation of the hologram that would result from rendering the
    selected volume in the selected orientation. The operator could
    rotate and crop the volume and step through the approximated
    "hologram" image, to get an idea of what would ultimately be rendered
    on film.

    The original idea was for the workstations to do exposure calculations directly, and just have storage and a queue manager in the film
    printer. We desired to use x86 chips as much as possible because they
    were cheap relative to others even in industrial SBC.

    But then testing with various CPUs showed just how poor existing x86
    were at doing floating point. 80386/80387 was unacceptably slow even
    just doing the display approximations. Fast 80486dx could manage
    display for our initial target resolutions - but we expected
resolutions to increase, and even the fastest 80486 was not capable of
doing exposure calculations in any reasonable amount of time. In the
    end, we went with Pentium [still expensive at the time] and dropped
    the whole notion of workstations doing the exposure calculations.

    When we realized just how ridiculously long even an average exposure calculation would take on x86, we knew that doing it on the
    workstations was not viable: computing in the background while letting
    the operator continue working would delay start of printing
    unacceptably in an emergency situation where the operator might want
    to quickly produce multiple films from different orientations or
    croppings of the same volume. Conversely, computing in the foreground
    would delay the operator unacceptably which would interfere with [what
    we thought of as] "normal" usage where the operator might be expected
    to collect and save multiple image sets, possibly from multiple
    patients, and then sit down to review and make films.

    We maybe could have provided both options, but switching between them
    in an emergency situation could have been complicated, so we decided
    it would be better to go with a dedicated compute server on the back
    end. Once that decision was made, we explored how best to do it (with reasonable cost), and after some experimenting we realized that the
    68030/68882 combination was so much faster [on our codes] that it
    would be able to serve multiple workstations acceptably fast in the
    normal case, and would not delay emergency prints beyond availability
    of the projector unit.


    Final rotated/cropped volumes would be sent to the compute server and ultimately to the projector and film printer. The "lens" [actually a
    clear LCD] could be moved in 1mm increments to expose the film, so ray
    tracing was performed to determine the desired contents of each pixel
on each 1mm-deep slice of the final hologram, and from there to
    determine what points actually needed to be exposed to produce the
    desired interference image.
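For intuition only (this is a sketch of the kind of arithmetic involved, NOT the actual recipe George's team followed, which was someone else's proprietary method; the wavelength, geometry, and names below are all assumptions), the per-point core of an exposure calculation is the interference of an object-point wave with a plane reference wave sampled across the film:

```python
import math

# Illustrative sketch only: interference of one point-source object wave
# with a plane reference wave, sampled along a 1D strip of film.
WAVELENGTH = 633e-9           # HeNe laser line, metres (assumed)
K = 2 * math.pi / WAVELENGTH  # wavenumber

def exposure(x_film, point_x, point_z, ref_angle=0.0):
    """Relative exposure at film position x_film (metres) from one object
    point at (point_x, point_z) plus a tilted plane reference wave."""
    r_obj = math.hypot(x_film - point_x, point_z)  # object-point path length
    obj_phase = K * r_obj
    ref_phase = K * x_film * math.sin(ref_angle)   # plane-wave phase ramp
    # |e^{i*obj} + e^{i*ref}|^2 = 2 + 2*cos(obj - ref), so the range is [0, 4]
    return 2.0 + 2.0 * math.cos(obj_phase - ref_phase)

# Sample a 1 mm strip of film at 1 micron spacing, object point 2 cm away
strip = [exposure(i * 1e-6, point_x=0.0, point_z=0.02) for i in range(1000)]
print(f"fringe min {min(strip):.3f}, max {max(strip):.3f}")
```

Summing this over every object point, for every film position, is why the workload swamped an 80387: it is a few transcendental operations per point-pixel pair, times millions of pairs.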

    Multiple exposures produced an interference "negative" which, once
    developed, could be used repeatedly to create final "positive"
    holograms. The holograms were, by design, dimensionally accurate: 1cm
    in the source volume was 1cm in its hologram - so, e.g., a surgeon
    could in theory measure some feature in the image and that measurement
    would be - within 1mm - the same as in the body.

    In practice the volume we could render was limited to, roughly, a 20cm
    cube centered at the plane of the film [and yes, the film could be
reversed on the viewer to see things that were "behind" it].


    Ultimately the compute servers ended up based on 68040, but the
    software was developed initially on 68030 using first 68881 and then
    68882. The first units shipped with 68030/68882. We switched to
    68040 when they became available

  • From George Neuner@21:1/5 to All on Wed Nov 1 20:29:24 2023
    Sorry, I must have accidentally sent a draft. Ignore the 1st one.

  • From Chris M. Thomasson@21:1/5 to George Neuner on Sun Nov 5 12:53:28 2023
    On 11/1/2023 4:46 PM, George Neuner wrote:
    On Tue, 31 Oct 2023 20:40:59 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote:

    On 10/31/2023 7:19 PM, George Neuner wrote:
    On Mon, 30 Oct 2023 14:16:40 -0700, "Chris M. Thomasson"
    <chris.m.thomasson.1@gmail.com> wrote:

    On 10/29/2023 9:06 PM, George Neuner wrote:
    On Fri, 27 Oct 2023 17:43:05 -0700 (PDT), MitchAlsup
    <MitchAlsup@aol.com> wrote:

On Friday, October 27, 2023 at 5:07:00 PM UTC-5, Bernd Linsel wrote:
On 27.10.2023 22:54, Opus wrote:
:
I don't remember if it was at all possible to use a 68881 with a 68000.
But maybe by "68000" you meant the 68020 and later.
Au contraire, the -881 worked with all 68k family processors, at least
up to the 040.
    <
    Where "worked" meant that::
    FABS was 35 cycles !!
    FADD was 51 cycles !!
    FMUL was 71 cycles !!
    heck even FNEG was 35 cycles !!!

    Which still was much faster than doing it in software on the CPU.


    Mid 90's to 2000 I was doing a lot of medical imaging. At that time
    MRI 'pixels' were 32-bit floating point and we were starting to see
    64-bit pixels on the latest units.
    [...]

    DICOM volumetric images?

    Yes. We were post-processing for holographic film rendering.

Nice. I am wondering if you can remember the general resolutions you
    were working with at the time? 2048^3 resolution? Fwiw, I had to do a
    lot of work in volumetrics to help me visualize some of my n-ary vector
    fields. Here is some of my work for an experimental mandelbulb of mine:

    https://nocache-nocookies.digitalgott.com/gallery/17/11687_15_03_15_8_18_09.jpeg

    https://www.fractalforums.com/index.php?action=gallery%3Bsa%3Dview%3Bid%3D17187


    Sorry, I don't remember what resolutions we were working with ... we
    worked with MR, CT and radiograph imagery.

    What I worked on did not need to know the source resolution because by
    the time my code got the imagery, it already had been cropped to fit
    the projector resolution: at most 1024x768. Depth was limited to, at
    most, 80-100 slices: recall that ALL the spatial content of a hologram
    gets compressed into the single interference image, so too much
    content can cause the hologram to become light saturated and to lose fidelity.

Humm... Good point. I remember a 1024^3 volumetric image would
    start to crunch my computer. 1024 images at 1024x1024 resolution. I need
    to study your response. Will get back to you. Actually, you might be
    able to help me...
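The crunch is easy to quantify (my arithmetic, not from the thread): a cubic volume's raw footprint grows as n^3, so at 32-bit float voxels a 1024^3 volume is already 4 GiB before any working copies:

```python
# Back-of-the-envelope volume sizes; purely illustrative.
def volume_bytes(n, bytes_per_voxel=4):
    """Raw footprint of an n x n x n volume (default: 32-bit float voxels)."""
    return n ** 3 * bytes_per_voxel

for n in (256, 512, 1024, 2048):
    print(f"{n}^3 x 4-byte voxels: {volume_bytes(n) / 2**30:g} GiB")
# Each doubling of n multiplies the footprint by 8.
```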




    Our system ultimately included front-end "workstations" running either
    Linux or NetBSD (depending), and back-end "compute servers" running
    VxWorks co-located with the control units in each film printer.


    The workstations needed to be (reasonably) interactive - allowed no
    more than a couple of seconds to update the display. They did not
    even attempt to do 3D rendering, but rather provided a quick-n-dirty approximation of the hologram that would result from rendering the
    selected volume in the selected orientation. The operator could
    rotate and crop the volume and step through the approximated
    "hologram" image, to get an idea of what would ultimately be rendered
    on film.

    The original idea was for the workstations to do exposure calculations directly, and just have storage and a queue manager in the film
    printer. We desired to use x86 chips as much as possible because they
    were cheap relative to others even in industrial SBC.

    But then testing with various CPUs showed just how poor existing x86
    were at doing floating point. 80386/80387 was unacceptably slow even
    just doing the display approximations. Fast 80486dx could manage
    display for our initial target resolutions - but we expected
resolutions to increase, and even the fastest 80486 was not capable of
doing exposure calculations in any reasonable amount of time. In the
    end, we went with Pentium [still expensive at the time] and dropped
    the whole notion of workstations doing the exposure calculations.





When we realized just how ridiculously long even average exposure calculations would take on a PC, we knew that this solution was
    not viable: computing in the background while letting the operator
    continue working would delay the time to start printing unacceptably
    in an emergency situation where the operator might want to quickly
    produce multiple films from different orientations/croppings of the
    same volume. Conversely, computing in the foreground would delay the operator for unacceptable periods which would interfere with "normal" operation where the operator was expected to take care of several
    patients and then sit down to do reviews and make films.

    We could have provided both options, but switching between them in an emergency situation could have been complicated, so we decided it
    would be better to go with a dedicated compute server on the back end.
    Once that decision was made, we explored how best to do it (with
    reasonable cost), and after some experimenting we realized that the 68030/68882 was so much faster that it would be able to serve multiple
    80386 workstations in the normal case, and not delay prints beyond availability of the projector unit in the emergency case.


Final rotated/cropped volumes would be sent to the film printer. The
"lens" [actually a clear LCD] could be moved in 1mm increments to
expose the film, so ray tracing was performed to determine the desired
contents of each pixel of each 1mm-deep slice of the final hologram.

From there, calculations were done to determine what points actually
needed to be exposed to produce the desired interference image on film,
and how bright each point needed to be. Finally, the point information
was mapped onto movements of the LCD and shutter times for the laser.

    That produced a hologram "negative" which, once developed, could be
    used repeatedly to make "positive" holograms. The holograms were, by
    design, dimensionally accurate: 1cm in the source volume was 1cm in
    the hologram. In practice it was limited to, roughly, a 20cm cube
    centered on the plane of the film [yes, the film could be reversed on
the viewer to see what was "behind" it].




  • From Chris M. Thomasson@21:1/5 to All on Wed Nov 8 23:26:01 2023
    On 11/1/2023 4:46 PM, George Neuner wrote:
    [...]

Excellent, thanks for the info! Btw, do you mind if I ask you some technical questions from time to time? Can any volumetric image be
    converted into a hologram? I think so...

    Some more of my work:

    https://youtu.be/iF75LSbzIVM

    https://youtu.be/yZbO0314gRo

    https://youtu.be/HwIkk9zENcg

  • From Chris M. Thomasson@21:1/5 to George Neuner on Wed Nov 8 22:50:30 2023
    On 11/1/2023 4:46 PM, George Neuner wrote:
    On Tue, 31 Oct 2023 20:40:59 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote:

    On 10/31/2023 7:19 PM, George Neuner wrote:
    On Mon, 30 Oct 2023 14:16:40 -0700, "Chris M. Thomasson"
    <chris.m.thomasson.1@gmail.com> wrote:

    On 10/29/2023 9:06 PM, George Neuner wrote:
    On Fri, 27 Oct 2023 17:43:05 -0700 (PDT), MitchAlsup
    <MitchAlsup@aol.com> wrote:

On Friday, October 27, 2023 at 5:07:00 PM UTC-5, Bernd Linsel wrote:
On 27.10.2023 22:54, Opus wrote:
:
I don't remember if it was at all possible to use a 68881 with a 68000.
But maybe by "68000" you meant the 68020 and later.
Au contraire, the -881 worked with all 68k family processors, at least
up to the 040.
    <
    Where "worked" meant that::
    FABS was 35 cycles !!
    FADD was 51 cycles !!
    FMUL was 71 cycles !!
    heck even FNEG was 35 cycles !!!

    Which still was much faster than doing it in software on the CPU.


    Mid 90's to 2000 I was doing a lot of medical imaging. At that time
    MRI 'pixels' were 32-bit floating point and we were starting to see
    64-bit pixels on the latest units.
    [...]

    DICOM volumetric images?

    Yes. We were post-processing for holographic film rendering.

Nice. I am wondering if you can remember the general resolutions you
    were working with at the time? 2048^3 resolution? Fwiw, I had to do a
    lot of work in volumetrics to help me visualize some of my n-ary vector
    fields. Here is some of my work for an experimental mandelbulb of mine:

    https://nocache-nocookies.digitalgott.com/gallery/17/11687_15_03_15_8_18_09.jpeg

    https://www.fractalforums.com/index.php?action=gallery%3Bsa%3Dview%3Bid%3D17187


    Sorry, I don't remember what resolutions we were working with ... we
    worked with MR, CT and radiograph imagery.

    What I worked on did not need to know the source resolution because by
    the time my code got the imagery, it already had been cropped to fit
    the projector resolution: at most 1024x768. Depth was limited to, at
    most, 80-100 slices: recall that ALL the spatial content of a hologram
    gets compressed into the single interference image, so too much
    content can cause the hologram to become light saturated and to lose fidelity.


    Our system ultimately included front-end "workstations" running either
    Linux or NetBSD (depending), and back-end "compute servers" running
    VxWorks co-located with the control units in each film printer.


    The workstations needed to be (reasonably) interactive - allowed no
    more than a couple of seconds to update the display. They did not
    even attempt to do 3D rendering, but rather provided a quick-n-dirty approximation of the hologram that would result from rendering the
    selected volume in the selected orientation. The operator could
    rotate and crop the volume and step through the approximated
    "hologram" image, to get an idea of what would ultimately be rendered
    on film.

    The original idea was for the workstations to do exposure calculations directly, and just have storage and a queue manager in the film
    printer. We desired to use x86 chips as much as possible because they
    were cheap relative to others even in industrial SBC.

    But then testing with various CPUs showed just how poor existing x86
    were at doing floating point. 80386/80387 was unacceptably slow even
    just doing the display approximations. Fast 80486dx could manage
    display for our initial target resolutions - but we expected
resolutions to increase, and even the fastest 80486 was not capable of
doing exposure calculations in any reasonable amount of time. In the
    end, we went with Pentium [still expensive at the time] and dropped
    the whole notion of workstations doing the exposure calculations.





When we realized just how ridiculously long even average exposure calculations would take on a PC, we knew that this solution was
    not viable: computing in the background while letting the operator
    continue working would delay the time to start printing unacceptably
    in an emergency situation where the operator might want to quickly
    produce multiple films from different orientations/croppings of the
    same volume. Conversely, computing in the foreground would delay the operator for unacceptable periods which would interfere with "normal" operation where the operator was expected to take care of several
    patients and then sit down to do reviews and make films.

    We could have provided both options, but switching between them in an emergency situation could have been complicated, so we decided it
    would be better to go with a dedicated compute server on the back end.
    Once that decision was made, we explored how best to do it (with
    reasonable cost), and after some experimenting we realized that the 68030/68882 was so much faster that it would be able to serve multiple
    80386 workstations in the normal case, and not delay prints beyond availability of the projector unit in the emergency case.


Final rotated/cropped volumes would be sent to the film printer. The
"lens" [actually a clear LCD] could be moved in 1mm increments to
expose the film, so ray tracing was performed to determine the desired
contents of each pixel of each 1mm-deep slice of the final hologram.

From there, calculations were done to determine what points actually
needed to be exposed to produce the desired interference image on film,
and how bright each point needed to be. Finally, the point information
was mapped onto movements of the LCD and shutter times for the laser.

    That produced a hologram "negative" which, once developed, could be
    used repeatedly to make "positive" holograms. The holograms were, by
    design, dimensionally accurate: 1cm in the source volume was 1cm in
    the hologram. In practice it was limited to, roughly, a 20cm cube
    centered on the plane of the film [yes, the film could be reversed on
the viewer to see what was "behind" it].




    Fwiw, here is some of my work wrt volumes broken out into geometric
    forms, real time, opengl and GLSL shaders:

    https://youtu.be/KRkKZj9s3wk

    https://youtu.be/oVCjAaY1pOY

    Btw, I made the music as well, via MIDI. ;^)

  • From MitchAlsup@21:1/5 to Thomas Koenig on Fri Nov 10 01:45:52 2023
    Thomas Koenig wrote:

    Michael S <already5chosen@yahoo.com> schrieb:
    On Friday, October 13, 2023 at 10:13:25 PM UTC+3, John Dallman wrote:

    Gone: MIPS, PA-RISC, Alpha, 68000.

    68K still sold today in form of ColdFire. But those are microcontrollers
    rather than general-purpose computers.

    ColdFire does not implement the whole 68000 instruction set
    (at least according to Wikipedia), some instructions and some
    addressing modes are not implemented.
    <
    It implements enough of the ISA to smell like a 68K to the compilers
    and assembly language programmers.

  • From George Neuner@21:1/5 to chris.m.thomasson.1@gmail.com on Fri Nov 10 13:35:24 2023
    On Wed, 8 Nov 2023 23:26:01 -0800, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote:

    On 11/1/2023 4:46 PM, George Neuner wrote:
    [...]

Excellent, thanks for the info! Btw, do you mind if I ask you some technical questions from time to time? Can any volumetric image be
    converted into a hologram? I think so...

    I believe you are correct that any volumetric image can be rendered as
    a hologram, but I can't say for certain ... I haven't really studied
    it enough.

    A company I was working for at that time was contracted to implement
    the software and UI side of the hologram project. Our main business
    was in machine vision - mainly industrial QA/QC - but from that we had
    both image processing experience and also industry connections to
    printing technology.

    The NDAs are long expired, so I can talk about [the parts I know of]
    what we did, but wrt the image processing, apart from some system
    specific implementation tweaks, we were just following someone else's
    recipes.

    George

  • From Chris M. Thomasson@21:1/5 to George Neuner on Sat Nov 11 11:47:12 2023
    On 11/10/2023 10:35 AM, George Neuner wrote:
    On Wed, 8 Nov 2023 23:26:01 -0800, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote:

    On 11/1/2023 4:46 PM, George Neuner wrote:
    [...]

Excellent, thanks for the info! Btw, do you mind if I ask you some
    technical questions from time to time? Can any volumetric image be
    converted into a hologram? I think so...

    I believe you are correct that any volumetric image can be rendered as
    a hologram, but I can't say for certain ... I haven't really studied
    it enough.

    A company I was working for at that time was contracted to implement
    the software and UI side of the hologram project. Our main business
    was in machine vision - mainly industrial QA/QC - but from that we had
    both image processing experience and also industry connections to
    printing technology.

    The NDAs are long expired, so I can talk about [the parts I know of]
    what we did, but wrt the image processing, apart from some system
    specific implementation tweaks, we were just following someone else's recipes.

    Excellent! Thank you so much. I am tight on time right now, will get
    back to you. Fwiw, check this out:

    https://youtu.be/XpbPzrSXOgk

    Imagine it as a full blown hologram! That would be interesting, well,
    for me at least. :^) Thanks again.

  • From MitchAlsup@21:1/5 to Thomas Koenig on Sun Nov 12 19:19:00 2023
    Thomas Koenig wrote:

    MitchAlsup <MitchAlsup@aol.com> schrieb:
    On Saturday, October 14, 2023 at 1:17:44 PM UTC-5, BGB wrote:
    On 10/14/2023 11:33 AM, Michael S wrote:
    On Friday, October 13, 2023 at 9:13:04 PM UTC+3, John Levine wrote:
    According to John Dallman <j...@cix.co.uk>:
Overall, Oracle's vertical integration plan for making their database
work better on their own hardware was not a success. It turned out that
Oracle DB and SPARC Solaris were already pretty well-tuned for each other,
and there were no easy gains there.
What could they do that would make it better than an ARM or RISC V chip
running at the same speed? Transactional memory?


It seems, by the time of the acquisition (late 2009), it was already known internally
that Rock (the SPARC processor with TM support) was doomed, although it was not
announced publicly until the next year.

I also personally have difficulty imagining what exactly one could do in
a CPU that would give it much advantage for database tasks that would
    not otherwise make sense with a general purpose CPU.
    <
Bigger caches and a slower clock rate help databases but do not help general
purpose. More, slower CPUs and a thinner cache hierarchy help. Less prediction
helps too.
    <
    So, instead of 64KB L1s and 1MB L2s and 8MB L3s:: do a 256KB L1 and
    8MB L2 with no L3s.
    <
    It does not matter how fast the clock rate is if you are waiting for
    memory to respond.

    Hm, but 256k L1 would increase the L1 latency, correct?

    Maybe this is a reason why SMT is popular in certain processors:
    If a core is waiting for memory, it might as well switch to another
    thread for which the outstanding request has already arrived.
    <
    This reasoning works well enough as long as the register state of the
    other thread adds no cycles to the pipeline nor access time to the
    Decode stage. GPUs switch threads every cycle, as do barrel processors;
    x86s, not so much.
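    The latency-hiding argument above can be sketched with a toy utilization
    model (a rough back-of-envelope sketch; the compute/stall cycle counts and
    thread counts below are illustrative, not taken from any real part):

```python
# Toy model of latency hiding via fine-grained multithreading.
# Each thread computes for `compute` cycles, then waits `mem_stall`
# cycles on memory. With zero-cost round-robin switching (the
# GPU / barrel-processor assumption), extra threads fill the stall
# cycles until the pipeline saturates.

def utilization(threads, compute=10, mem_stall=90):
    """Fraction of cycles the core retires work, assuming ideal
    thread switching with no switch overhead."""
    period = compute + mem_stall           # one thread's request cycle
    busy = min(threads * compute, period)  # work available per period
    return busy / period

# With these numbers, (10 + 90) / 10 = 10 threads reach 100%:
for n in (1, 2, 5, 10, 16):
    print(n, round(utilization(n), 2))
```

    An SMT x86 with only 2 hardware threads sits near the top of this table's
    left edge, which is consistent with the "a few percent" gains mentioned below.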
    <
    Another reason, I'm told, is that SMT can be a reaction to
    software licensing - some licenses in the commercial field are
    paid per physical core, and if SMT can squeeze out a few
    percent of performance, even a more expensive processor might be
    more cost-effective.
    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.
    <

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup@21:1/5 to John Dallman on Sun Nov 12 19:24:12 2023
    John Dallman wrote:

    In article <c2399332-9696-4749-adc4-8d9b071c060dn@googlegroups.com>, already5chosen@yahoo.com (Michael S) wrote:

    Growing, but not yet fully-established: RISC-V.
    Is RISC-V really present in general-purpose computing?

    Not yet, but it seems to have an adequate design and enough momentum to
    get there. /Staying/ there is a different question.
    <
    RISC-V seems to have enough momentum to avoid being written off in the
    large general computing realm. Whether it gets there, whether the
    Chinese money behind it helps or harms long term, is all playing out
    as we watch.
    <
    Current RISC-V products are not good enough to land design-wins right now--although that might change soon.
    <
    In established niches, but not growing out of them: POWER, IBM Z.
    IBM POWER - not sure about it. POWER holds on for as long as IBM is
    able to make POWER chips that are competitive with (or even exceeding)
    x86-64 and ARM64 in absolute performance, while at the same time not
    being ridiculously behind in price/performance. It looks to me like
    POWER is in the same rat race that effectively killed SPARC and IPF.
    They just manage to run it a little better.
    <
    Power lives only so long as IBM continues to fund design teams. For the
    people willing to pay for its performance(s) it has a developed niche.
    <
    It's also used to run IBM i, which is a pretty big niche that's quite
    easy to forget. It could be replaced, since the concept of the system is
    that the hardware is replaceable, but IBM would try hard to avoid the
    costs of doing that.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brian G. Lucas@21:1/5 to MitchAlsup on Sun Nov 12 19:50:08 2023
    On 11/12/23 13:19, MitchAlsup wrote:
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.
    And talk to farmers who bought a John Deere tractor and need to fix it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup@21:1/5 to Brian G. Lucas on Mon Nov 13 20:05:34 2023
    Brian G. Lucas wrote:

    On 11/12/23 13:19, MitchAlsup wrote:
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.
    And talk to farmers who bought a John Deere tractor and need to fix it.
    <
    Had they bought one from the 1980's they would have no problem fixing it.
    <
    And John Deere is doing their brand no favors with this newish policy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BGB@21:1/5 to MitchAlsup on Mon Nov 13 14:32:19 2023
    On 11/13/2023 2:05 PM, MitchAlsup wrote:
    Brian G. Lucas wrote:

    On 11/12/23 13:19, MitchAlsup wrote:
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.
    And talk to farmers who bought a John Deere tractor and need to fix it.
    <
    Had they bought one from the 1980's they would have no problem fixing it.
    <
    And John Deere is doing their brand no favors with this newish policy.

    Probably some executive somewhere: "But, a tractor is premium, of course
    they will take it to the dealership for oil changes and servicing...".

    Next up, they start using micropayments for each "extra perk" the user
    tries to use (like apparently in some newer cars):
    Want air conditioner? Want windshield wipers? ...
    You are going to need to pay to use those...

    Farmer then pays by the minute every time the "thresher" component is
    active, and the thresher can't be used at all if their farm lacks a
    nearby 5G cell-tower or similar, etc.


    Similar to like, say, if one has a car that micro-payments the AC, and
    then finds that the AC shuts off whenever they drive out on rural roads
    or similar because of interruptions to cell-coverage (and thus inability
    for the car to make the micropayments).


    Or, they do like some other companies, and maybe "auto brick" the
    tractor if it gets too far outside the EOL period, and then hits an "End
    of Service" situation.

    ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to MitchAlsup on Mon Nov 13 14:00:45 2023
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences. First of all, for many sw licenses, you
    *can* use the software "forever" (i.e. lots of people are still running
    Windows XP), although if it is licensed to a particular hardware system,
    that system's life may limit your use, and you may not get vendor support.

    But I think the problem you are referring to is the limit on the number
    of copies you can make, or simultaneous users you can have. Of course,
    this is due to the obvious difference that, as opposed to a lathe, it is trivial to make an arbitrary number of copies, or have multiple
    different users use it simultaneously. This difference accounts for the different licensing terms. It makes things better for the software
    vendor, which, at least in theory, allows for more software to be created.

    Of course, YMMV



    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris M. Thomasson@21:1/5 to Stephen Fuld on Mon Nov 13 14:22:51 2023
    On 11/13/2023 2:00 PM, Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses, you
    *can* use the software "forever" (i.e. lots of people are still running Windows XP), although if it is licensed to a particular hardware system,
    that system's life may limit your use, and you may not get vendor support.

    But I think the problem you are referring to is the limit on the number
    of copies you can make, or simultaneous users you can have.

    Side note, and related... IIRC, IOCP on Windows client versions only
    allowed 2 TransmitFile operations to be in flight at the same
    time. The server version allowed many more, before non-paged memory ran
    out...

    https://learn.microsoft.com/en-us/windows/win32/api/mswsock/nf-mswsock-transmitfile

    wow, this was back in nt 4 days.


    Of course,
    this is due to the obvious difference that, as opposed to a lathe, it is trivial to make an arbitrary number of copies, or have multiple
    different users use it simultaneously.  This difference accounts for the different licensing terms.  It makes things better for the software
    vendor, which, at least in theory, allows for more software to be created.

    Of course, YMMV




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris M. Thomasson@21:1/5 to Stefan Monnier on Mon Nov 13 14:57:29 2023
    On 11/13/2023 2:44 PM, Stefan Monnier wrote:
    https://learn.microsoft.com/en-us/windows/win32/api/mswsock/nf-mswsock-transmitfile

    I love this part:

    Note This function is a Microsoft-specific extension to the Windows
    Sockets specification.

    Comic relief, indeed!

    Fwiw, I love this part as well:
    _______________________
    Server versions of Windows optimize the TransmitFile function for high performance. On server versions, there are no default limits placed on
    the number of concurrent TransmitFile operations allowed on the system.
    Expect better performance results when using TransmitFile on server
    versions of Windows.
    _______________________

    ;^D

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to All on Mon Nov 13 17:44:57 2023
    https://learn.microsoft.com/en-us/windows/win32/api/mswsock/nf-mswsock-transmitfile

    I love this part:

    Note This function is a Microsoft-specific extension to the Windows
    Sockets specification.


    -- Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris M. Thomasson@21:1/5 to Chris M. Thomasson on Mon Nov 13 14:58:18 2023
    On 11/13/2023 2:57 PM, Chris M. Thomasson wrote:
    On 11/13/2023 2:44 PM, Stefan Monnier wrote:
    https://learn.microsoft.com/en-us/windows/win32/api/mswsock/nf-mswsock-transmitfile

    I love this part:

         Note  This function is a Microsoft-specific extension to the Windows
         Sockets specification.

    Comic relief, indeed!

    Fwiw, I love this part as well:
    _______________________
    Server versions of Windows optimize the TransmitFile function for high performance. On server versions, there are no default limits placed on
    the number of concurrent TransmitFile operations allowed on the system. Expect better performance results when using TransmitFile on server
    versions of Windows.
    _______________________

    ;^D


    Buy the server version or this music will play forevermore in Windows 12...

    https://youtu.be/tQ13aroz7eE

    ;^D

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to All on Mon Nov 13 19:44:09 2023
    I guess, it "would be" a similar situation with the newer PlayStation
    and XBox consoles, except that I guess Sony and MS have been keeping
    the servers alive (so PS3 and XBox360 still work), but sucks if
    a person is running a Wii or WiiU or similar (apparently, these are
    all scheduled to go offline in 2024).

    I suspect the Wii shouldn't be affected because AFAIK mine's never been connected to the Internet (I blacklisted its MAC address) yet it's
    worked fine for the games we bought.
    Maybe some of the games require some remote service, but definitely not
    all of them.

    But, hardware depending on online servers to work, will not last
    forever, in any case.

    Proprietary software is a curse, indeed.


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BGB@21:1/5 to Stephen Fuld on Mon Nov 13 18:29:18 2023
    On 11/13/2023 4:00 PM, Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses, you
    *can* use the software "forever" (i.e. lots of people are still running Windows XP), although if it is licensed to a particular hardware system,
    that system's life may limit your use, and you may not get vendor support.


    Well, unless it is Apple or Nintendo...


    I guess that, ironically, a lot of older Nintendo platforms are
    outliving some of the newer ones because the older ones had physical
    media and didn't depend on an always-online internet connection to work,
    but the newer consoles do, so when Nintendo pulls the plug on the
    servers, the consoles are effectively bricks from then on.


    I guess it "would be" a similar situation with the newer PlayStation
    and XBox consoles, except that I guess Sony and MS have been keeping the
    servers alive (so PS3 and XBox360 still work), but it sucks if a person is
    running a Wii or WiiU or similar (apparently, these are all scheduled to
    go offline in 2024).

    I guess apparently 3DS devices will still partially work, in that they
    can still use previously installed games, and don't require an active
    internet connection or signing in to play games.

    But, hardware depending on online servers to work, will not last
    forever, in any case.


    On the other side of things, people are still running the original NES
    and NES clones, but apparently the original cartridges are becoming rare
    and expensive; being gradually replaced by mock cartridges that
    internally load ROMs from an SDcard (well, among other things; excluding
    the occasional people that do something hacky and run Doom or Quake on
    the thing via cartridge magic).

    But, I guess people have to be careful what they do, and keep it
    low-key, to avoid getting angry C&D letters from Nintendo's lawyers...

    Theoretically though, in cases where it is both a clone console and not
    using any of their original IP (say, in the case of running Doom or
    Quake via an FPGA and ARM core or similar), the lawyers should have no
    say in the matter so long as the person doesn't invoke any of their
    trademarks.


    But I think the problem you are referring to is the limit on the number
    of copies you can make, or simultaneous users you can have.  Of course,
    this is due to the obvious difference that, as opposed to a lathe, it is trivial to make an arbitrary number of copies, or have multiple
    different users use it simultaneously.  This difference accounts for the different licensing terms.  It makes things better for the software
    vendor, which, at least in theory, allows for more software to be created.


    Yeah, this is a fundamental difference...

    To copy a lathe would require the time and expense both to buy all of
    the materials and machine all the parts.

    Then one may find that it ends up costing more than it would have cost
    just to buy another one.

    And, despite the lathes and mills typically costing $k, one is
    hard-pressed to make something of comparable quality for cheaper.



    For stability, one needs things like a lot of weight, and the OEM has
    the advantage that they can cost-effectively sand-cast big chunks of
    cast iron (where cheaply making big cast iron parts is a technology
    mostly out of reach of home shops).

    Other alternatives are things like:
    Abrasive sand (such as garnet sand or slag) mixed with epoxy:
    Not cheap;
    Concrete: Not very good;
    Needs to be encased in metal or epoxy to not suck;
    One may still need garnet sand or slag for the needed density;
    Silica sand is cheaper, but not really dense enough.

    If the machine is too light, then it would rattle or vibrate excessively
    during cuts (AKA: chatter) which would ruin the quality of the machined
    parts.


    Though, I guess if a person had access to a waterjet, they could cut the
    parts out of a bunch of steel plate as layers, and then braze all the
    layers together. Labor would likely still cost more than being able to
    pour cast iron though.


    For other parts, having machines purpose built to make one specific part
    will make those parts cheaper than using more general purpose machines
    to make all the parts.

    ...


    So, say, I don't really fault Grizzly or Tormach for the cost of their
    machines; it is unlikely they could make them for all that much cheaper
    and still make a profit.


    Though, now having one of the Tormach CNC machines, limits can still be
    noted:
    Tries to use a 1/2 inch drill on some stainless steel:
    Say, to enlarge a hole from 3/8 to 1/2 inch.
    Starts to drill, eats into steel a little, spindle: "NOPE!",
    Spindle stalls and machine goes into a faulted state.
    So, one needs to go: 1/4" (OK), 3/8" (Also OK), to 7/16".
    And then mill the hole the rest of the way via an endmill.

    Say, because 1.5 kW at 10k RPM (in high range) doesn't mean it can
    drill a 1/2" hole at 700 RPM. Granted, this works out to around 1 lb-ft
    or 192 oz-in of torque (so, not very much torque even by cordless drill
    standards).

    Well, or one can put it into low range by manually moving a belt, but
    this kinda sucks as well, as then one has the torque to run the drill,
    but not really enough RPM range for the endmill, ... (too bad, say, they couldn't have put a CVT or similar in the thing).
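    The quick torque figure above can be checked from torque = power / angular
    velocity (a sketch using the 1.5 kW and 10k RPM numbers from the post; the
    function name is just for illustration):

```python
import math

# Spindle torque available at rated power:  torque = P / omega.

def torque_lbft(power_w, rpm):
    """Torque in lb-ft for a given power (W) at a given spindle RPM."""
    omega = 2 * math.pi * rpm / 60.0   # rad/s
    torque_nm = power_w / omega        # N*m
    return torque_nm / 1.3558179       # N*m -> lb-ft

t = torque_lbft(1500, 10_000)
print(round(t, 2), "lb-ft")       # ~1.06 lb-ft at 10k RPM
print(round(t * 192), "oz-in")    # ~203 oz-in (1 lb-ft = 192 oz-in exactly)
```

    Rounding the ~1.06 lb-ft down to 1 lb-ft gives the "around 1 lb-ft or
    192 oz-in" figure quoted above.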


    So, say, vs a CNC converted Bridgeport:
    Tormach:
    + Tighter tolerances
    Holds +/- 0.005 easily
    Smaller is still hard (+/- 0.002 is still pushing it)
    + Has flood coolant;
    + Has tool changer;
    + Can dynamically change RPM.
    - Less travel.
    + Though, still more than my G0704 (at least in Y and Z).
    + Faster
    Can make quick work of aluminum and similar, ...
    - Not so much torque.
    Works fine if mostly using 1/8, 3/16, and 1/4 inch endmills.
    7/16 and 5/8 (*2), don't get too aggressive here with cuts.
    3/4: Only if you are milling plastic...
    Bridgeport:
    - Maybe gets +/- 0.005 if you are feeling lucky
    +/- 0.015 mostly OK.
    + More powerful spindle;
    Not much issue drilling holes in steel.
    + More X/Y/Z travel;
    - No coolant;
    - RPM is controlled by moving V belts.
    - Using R8 collets and drawbar sucks.
    - The software is buggy and likes to crash frequently, ...

    *2: Machine came with some ER20 tool holders (in BT30), but these can't
    handle larger tools. Also got some ER32 / BT30 tool holders, which can
    handle 5/8 and 3/4" tools, but these can't really be used for general
    purpose milling (bigger tools being more for if one needs a lot more
    flute length).

    I guess, one can use a shell-mill/face-mill, but one is limited to
    fairly small face mills and fairly light cuts (mostly defeating the
    advantage vs going back and forth using a 7/16 or 5/8 mill or similar).


    Contrast, the G0704 will try to spin a bigger mill, but the machine will
    just sorta rattle and hop all over the place while doing so.

    Bridgeport: "Yeah, fine, whatever." (the Bridgeport will also cut with a
    7/8" or 1" endmill without complaint).

    ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BGB@21:1/5 to Stefan Monnier on Mon Nov 13 19:57:27 2023
    On 11/13/2023 6:44 PM, Stefan Monnier wrote:
    I guess, it "would be" a similar situation with the newer PlayStation
    and XBox consoles, except that I guess Sony and MS have been keeping
    the servers alive (so PS3 and XBox360 still work), but sucks if
    a person is running a Wii or WiiU or similar (apparently, these are
    all scheduled to go offline in 2024).

    I suspect the Wii shouldn't be affected because AFAIK mine's never been connected to the Internet (I blacklisted its MAC address) yet it's
    worked fine for the games we bought.
    Maybe some of the games require some remote service, but definitely not
    all of them.


    OK.

    I don't have one, but this is what I heard from videos that were talking
    about it online. It sounded like it was an "always online" thing that
    required people to be signed in to use the console.

    I guess, good to know if it is not.


    But, hardware depending on online servers to work, will not last
    forever, in any case.

    Proprietary software is a curse, indeed.


    Yeah.

    Better when everything (hardware and software) is open, and not
    dependent on whether some corporation continues to find it profitable
    (or continues to exist, for that matter).

    Oftentimes, the "artifacts" continue to be relevant to people
    long after the companies that created them have ceased to exist (like
    all the "retro guys" who are still obsessing over Commodore computers and
    similar, decades after the company ceased to exist, *...).


    *: Though, I guess the trademarks have still been passed around and used
    occasionally (like, I guess a few years back some company did some
    "Commodore 64" branded smartphones or similar, along with using the
    various Commodore logos and such pasted all over what was otherwise
    a generic Android smartphone, as a thing apparently to try to keep the
    trademark alive; but then there was an apparent dispute between several
    parties as to who actually owns the various logos and trademarks, ...).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup@21:1/5 to BGB on Tue Nov 14 02:06:21 2023
    BGB wrote:

    On 11/13/2023 4:00 PM, Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses, you
    *can* use the software "forever" (i.e. lots of people are still running
    Windows XP), although if it is licensed to a particular hardware system,
    that system's life may limit your use, and you may not get vendor support.

    But I think the problem you are referring to is the limit on the number
    of copies you can make, or simultaneous users you can have.  Of course,
    this is due to the obvious difference that, as opposed to a lathe, it is
    trivial to make an arbitrary number of copies, or have multiple
    different users use it simultaneously.  This difference accounts for the
    different licensing terms.  It makes things better for the software
    vendor, which, at least in theory, allows for more software to be created.

    Yeah, this is a fundamental difference...

    To copy a lathe would require the time and expense both to buy all of
    the materials and machine all the parts.
    Yes
    Then one may find that it ends up costing more than it would have cost
    just to buy another one.
    Invariably
    And, despite the lathes and mills typically costing $k, one is
    hard-pressed to make something of comparable quality for cheaper.
    Yes


    For stability, one needs things like a lot of weight, and the OEM has
    the advantage that they can cost-effectively sand-cast big chunks of
    cast iron (where cheaply making big cast iron parts is a technology
    mostly out of reach of home shops).

    Other alternatives are things like:
    Abrasive sand (such as garnet sand or slag) mixed with epoxy:
    Not cheap;
    Concrete: Not very good;
    Needs to be encased in metal or epoxy to not suck;
    One may still need garnet sand or slag for the needed density;
    Silica sand is cheaper, but not really dense enough.

    If the machine is too light, then it would rattle or vibrate excessively during cuts (AKA: chatter) which would ruin the quality of the machined parts.
    <
    It is not weight per se, it is stiffness between the piece holding the part and the spindle applying forces to remove chips from the part.
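    The stiffness-vs-weight point can be illustrated with a single spring-mass
    sketch of the tool/part loop (a deliberately crude model; the stiffness and
    mass numbers below are made up for illustration, not measured from any machine):

```python
import math

# Treat the cutter-to-workpiece structural loop as one spring.
# Deflection under cutting force depends only on stiffness;
# added mass alone just lowers the natural frequency.

def deflection_um(force_n, stiffness_n_per_um):
    """Static deflection (um) of the loop under a cutting force (N)."""
    return force_n / stiffness_n_per_um

def natural_freq_hz(stiffness_n_per_m, mass_kg):
    """First natural frequency of the equivalent spring-mass system."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# Same 500 N cutting force, hobby-mill vs. knee-mill loop stiffness:
print(deflection_um(500, 10))    # 50.0 um at 10 N/um
print(deflection_um(500, 100))   # 5.0 um at 100 N/um

# Same stiffness, 10x the mass: deflection is unchanged, but the
# resonance drops from ~50 Hz to ~16 Hz (closer to tooth-passing
# frequencies, i.e. more chatter-prone, not less):
print(round(natural_freq_hz(1e7, 100), 1))    # 50.3
print(round(natural_freq_hz(1e7, 1000), 1))   # 15.9
```

    Which is why cast iron helps: it buys stiffness and damping, not just mass.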
    <
    Though, I guess if a person had access to a waterjet, they could cut the parts out of a bunch of steel plate as layers, and then braze all the
    layers together. Labor would likely still cost more than being able to
    pour cast iron though.


    For other parts, having machines purpose built to make one specific part
    will make those parts cheaper than using more general purpose machines
    to make all the parts.

    ....


    So, say, I don't really fault Grizzly or Tormach for the cost of their machines, it is unlikely they could make them for all that much cheaper
    and still make a profit.


    Though, now having one of the Tormach CNC machines, limits can still be noted:
    Tries to use a 1/2 inch drill on some stainless steel:
    Say, to enlarge a hole from 3/8 to 1/2 inch.
    Starts to drill, eats into steel a little, spindle: "NOPE!",
    Spindle stalls and machine goes into a faulted state.
    So, one needs to go: 1/4" (OK), 3/8" (Also OK), to 7/16".
    And then mill the hole the rest of the way via an endmill.

    Say, because 1.5 kW at 10k RPM (in high range), doesn't mean it can
    drill a 1/2" hole at 700 RPM. Granted, this works out to around 1 lb-ft
    or 192 oz-in of torque (so, not very much torque even by cordless drill standards).

    Well, or one can put it into low range by manually moving a belt, but
    this kinda sucks as well, as then one has the torque to run the drill,
    but not really enough RPM range for the endmill, ... (too bad, say, they couldn't have put a CVT or similar in the thing).


    So, say, vs a CNC converted Bridgeport:
    Tormach:
    + Tighter tolerances
    Holds +/- 0.005 easily
    Smaller is still hard (+/- 0.002 is still pushing it)
    + Has flood coolant;
    + Has tool changer;
    + Can dynamically change RPM.
    - Less travel.
    + Though, still more than my G0704 (at least in Y and Z).
    + Faster
    Can make quick work of aluminum and similar, ...
    - Not so much torque.
    Works fine if mostly using 1/8, 3/16, and 1/4 inch endmills.
    7/16 and 5/8 (*2), don't get too aggressive here with cuts.
    3/4: Only if you are milling plastic...
    Bridgeport:
    - Maybe gets +/- 0.005 if you are feeling lucky
    +/- 0.015 mostly OK.
    + More powerful spindle;
    Not much issue drilling holes in steel.
    + More X/Y/Z travel;
    - No coolant;
    - RPM is controlled by moving V belts.
    - Using R8 collets and drawbar sucks.
    - The software is buggy and likes to crash frequently, ...
    <
    The difference between 0.005 and 0.001 on a Bridgeport is metrology and
    care. If you can't measure something you cannot compensate for it--this
    goes double during setup where you sweep the clamped part to determine
    that it is held flat in the vise and normal to the milling direction.
    <
    *2: Machine came with some ER20 tool holders (in BT30), but these can't handle larger tools. Also got some ER32 / BT30 tool holders, which can
    handle 5/8 and 3/4" tools, but these can't really be used for general
    purpose milling (bigger tools being more for if one needs a lot more
    flute length).
    <
    I have a whole set of ER-40 collets, an R8 spindle adapter, and a 5C adapter,
    so the collets can be used in a mill or a lathe.
    <
    I guess, one can use a shell-mill/face-mill, but one is limited to
    fairly small face mills and fairly light cuts (mostly defeating the
    advantage vs going back and forth using a 7/16 or 5/8 mill or similar).
    <
    Granted a Bridgeport is limited, but so are 100,000 pound 36" lathes with 75-foot beds run by 50 HP motors.
    <
    Contrast, the G0704 will try to spin a bigger mill, but the machine will
    just sorta rattle and hop all over the place while doing so.
    <
    So, you mill in smaller chunks. I have a fly cutter that can cut 4140 at
    5" width on my G0704--albeit far from the speeds and feeds appropriate
    for a Cincinnati doing the same job.....


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BGB@21:1/5 to MitchAlsup on Mon Nov 13 22:52:03 2023
    On 11/13/2023 8:06 PM, MitchAlsup wrote:
    BGB wrote:

    On 11/13/2023 4:00 PM, Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses, you
    *can* use the software "forever" (i.e. lots of people are still
    running Windows XP), although if it is licensed to a particular
    hardware system, that system's life may limit your use, and you may
    not get vendor support.


    But I think the problem you are referring to is the limit on the
    number of copies you can make, or simultaneous users you can have.
    Of course, this is due to the obvious difference that, as opposed to
    a lathe, it is trivial to make an arbitrary number of copies, or have
    multiple different users use it simultaneously.  This difference
    accounts for the different licensing terms.  It makes things better
    for the software vendor, which, at least in theory, allows for more
    software to be created.


    Yeah, this is a fundamental difference...

    To copy a lathe would require the time and expense both to buy all of
    the materials and machine all the parts.
    Yes
    Then one may find that it ends up costing more than it would have cost
    just to buy another one.
    Invariably
    And, despite the lathes and mills typically costing $k, one is
    hard-pressed to make something of comparable quality for cheaper.
    Yes


    For stability, one needs things like a lot of weight, and the OEM has
    the advantage that they can cost-effectively sand-cast big chunks of
    cast iron (where cheaply making big cast iron parts is a technology
    mostly out of reach of home shops).

    Other alternatives are things like:
       Abrasive sand (such as garnet sand or slag) mixed with epoxy:
         Not cheap;
       Concrete: Not very good;
         Needs to be encased in metal or epoxy to not suck;
         One may still need garnet sand or slag for the needed density;
         Silica sand is cheaper, but not really dense enough.

    If the machine is too light, then it would rattle or vibrate
    excessively during cuts (AKA: chatter) which would ruin the quality of
    the machined parts.
    <
    It is not weight per se, it is stiffness between the piece holding the
    part and the spindle applying forces to remove chips from the part.
    <

    Or, some combination of machine stiffness and enough weight/inertia to
    keep the thing firmly anchored in place on the floor.


    Like, when I tried at one point to mill steel with a 1" ballnose endmill
    on a CNC converted G0704, I quickly stopped this as, as soon as the
    endmill started cutting the steel, it seemed like the whole thing was
    going to rattle itself apart...


    Though, I guess if a person had access to a waterjet, they could cut
    the parts out of a bunch of steel plate as layers, and then braze all
    the layers together. Labor would likely still cost more than being
    able to pour cast iron though.


    For other parts, having machines purpose built to make one specific
    part will make those parts cheaper than using more general purpose
    machines to make all the parts.

    ....


    So, say, I don't really fault Grizzly or Tormach for the cost of their
    machines, it is unlikely they could make them for all that much
    cheaper and still make a profit.


    Though, now having one of the Tormach CNC machines, limits can still
    be noted:
       Tries to use a 1/2 inch drill on some stainless steel:
         Say, to enlarge a hole from 3/8 to 1/2 inch.
       Starts to drill, eats into steel a little, spindle: "NOPE!",
         Spindle stalls and machine goes into a faulted state.
       So, one needs to go: 1/4" (OK), 3/8" (Also OK), to 7/16".
         And then mill the hole the rest of the way via an endmill.

    Say, because 1.5 kW at 10k RPM (in high range), doesn't mean it can
    drill a 1/2" hole at 700 RPM. Granted, this works out to around 1
    lb-ft or 192 oz-in of torque (so, not very much torque even by
    cordless drill standards).

    Well, or one can put it into low range by manually moving a belt, but
    this kinda sucks as well, as then one has the torque to run the drill,
    but not really enough RPM range for the endmill, ... (too bad, say,
    they couldn't have put a CVT or similar in the thing).


    So, say, vs a CNC converted Bridgeport:
        Tormach:
          + Tighter tolerances
            Holds +/- 0.005 easily
            Smaller is still hard (+/- 0.002 is still pushing it)
          + Has flood coolant;
          + Has tool changer;
          + Can dynamically change RPM.
          - Less travel.
            + Though, still more than my G0704 (at least in Y and Z).
          + Faster
              Can make quick work of aluminum and similar, ...
          - Not so much torque.
              Works fine if mostly using 1/8, 3/16, and 1/4 inch endmills.
              7/16 and 5/8 (*2), don't get too aggressive here with cuts.
              3/4: Only if you are milling plastic...
        Bridgeport:
          - Maybe gets +/- 0.005 if you are feeling lucky
             +/- 0.015 mostly OK.
          + More powerful spindle;
              Not much issue drilling holes in steel.
          + More X/Y/Z travel;
          - No coolant;
          - RPM is controlled by moving V belts.
          - Using R8 collets and drawbar sucks.
          - The software is buggy and likes to crash frequently, ...
    <
    The difference between 0.005 and 0.001 on a Bridgeport is metrology and
    care. If you can't measure something you cannot compensate for it--this
    goes double during setup where you sweep the clamped part to determine
    that it is held flat in the vise and normal to the milling direction.
    <


    The Bridgeport in this case was modified to use ballscrews, with
    NEMA34 steppers (IIRC, 1740 oz-in) connected to the ballscrews via
    timing belts and pulleys (similar setup on my G0704, just using
    470 oz-in NEMA23 motors).

    Both machines can get "in the area" of 0.005", but don't seem to get
    this with much repeatability (say, one cuts the same hole, and one time
    it might be 0.004 over, or 0.004 under, or, ...).

    Things like rotating a feature, changing the feedrate, etc., may also
    affect what size it cuts.


    This seems to be less of an issue on the Tormach machine (1100MX),
    which usually holds +/- 0.001, though parts that actually call for
    this tolerance remain more of a problem.


    Though, have noted that feedrate does affect accuracy (it seems to cut
    a little oversize if using faster feedrates or deeper cuts).

    Some settings I had found that seem to work fairly well (on the Tormach):
      Aluminum with 3/16 endmill:
        RPM: 6500
        Feed: 17.0 inch/minute
        Depth: 0.015
      Aluminum with 1/8 endmill:
        RPM: 8500
        Feed: 19.0 inch/minute
        Depth: 0.010
      ...

    For the Bridgeport, was generally using (for 1/8 and 3/16):
      RPM: ~ 3000
      Feed: 6.5 inch/minute
      Depth: 0.010

    The G0704 is hard-pressed to go much over around 1500 RPM, so ~ 3.0 inch/minute.

    It is possible to go a little faster, but doing so has drawbacks:
      Less accurate cuts;
      Worse burrs;
      Higher risk of breaking the endmills.


    For steel, I usually reduce the depth of cut to around 0.005, but can
    get away with 0.010 or 0.015 for bigger endmills. RPM and feedrate need
    to be reduced by around half.

    For stainless steel, need to go down lower (to around 1/3).

    For brass, around 2/3 or 3/4 the speeds used for aluminum.
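    Those material scalings track the usual surface-speed rule of thumb,
    RPM = (SFM x 12) / (pi x D). A rough sketch with illustrative SFM
    numbers (my ballpark guesses, not values from the post):

```python
import math

# Illustrative surface speeds in SFM (surface feet per minute);
# ballpark guesses for small carbide endmills, not figures from the post.
SFM = {"aluminum": 320, "brass": 230, "steel": 160, "stainless": 105}

def rpm_for(material: str, tool_dia_in: float) -> float:
    """Spindle RPM for a target surface speed: RPM = (SFM * 12) / (pi * D)."""
    return SFM[material] * 12.0 / (math.pi * tool_dia_in)

# A 3/16" endmill in aluminum comes out near the 6500 RPM quoted above:
print(round(rpm_for("aluminum", 3.0 / 16.0)))
```

    The steel and stainless entries then fall at roughly half and a third
    of the aluminum speed, matching the scaling described above.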


    *2: Machine came with some ER20 tool holders (in BT30), but these
    can't handle larger tools. Also got some ER32 / BT30 tool holders,
    which can handle 5/8 and 3/4" tools, but these can't really be used
    for general purpose milling (bigger tools being more for if one needs
    a lot more flute length).
    <
    I have a whole set of ER-40 collets, an R8 spindle adapter, and a 5C
    adapter so the collets can be used in mill or lathe.
    <

    OK.

    Don't have any ER40, mostly ER20 and ER32.

    Ended up getting several sets of ER collets, mostly so that one can set
    up multiple tools of the same size.



    I guess, one can use a shell-mill/face-mill, but one is limited to
    fairly small face mills and fairly light cuts (mostly defeating the
    advantage vs going back and forth using a 7/16 or 5/8 mill or similar).
    <
    Granted a Bridgeport is limited, but so are 100,000 pound 36" lathes with 75-foot beds run by 50 HP motors.
    <

    I was talking about the Tormach here.

    The Bridgeport is at an advantage in terms of "raw power", since while
    it has the same theoretical motor power as the Tormach, it is geared
    down with belts (so, more effective/usable power). Putting high load on
    the spindle on the Bridgeport also doesn't cause the machine to go "Oh
    Crap!" and immediately fault out (for better or for worse).


    So, say:
    Bridgeport: 2 HP 3-Phase, driven via a VFD, with belts.
    Generally, was keeping the VFD in the 50-70 Hz range,
    using the belts for coarse adjustment.
    Travel: 30 x 14 x 16

    Tormach: 1.5 kW Servo (~ 2 HP), powered off 230 VAC single phase.
    However, spindle seems to be "Requested speed or die immediately".
    So, the spindle doesn't slow down, it stops and the machine faults.
    Its belt only has two positions:
    200 .. 10000 RPM, with weak torque;
    40 .. 2000 RPM, slower but more torque.
    Travel: 17 x 10 x 16

    G0704: 1 HP PMDC motor.
    High Range: ~ 0 .. 1500 RPM (depends on temperature)
    Low Range: ~ 0 .. 500 RPM (IIRC).
    Travel: 18 x 6 x 11
    Though, the ballscrew conversion negatively affected Y travel,
    so mine is only ~ 4 inches.


    Despite having a theoretically weaker motor, the G0704 is still a bit
    more capable in terms of "put a hole in this piece of steel using this
    drill".

    Like, it can in-fact drill a 1/2 inch hole in a piece of steel...

    Vs, say, needing to circular interpolate the hole using a 7/16 endmill
    (a lot slower, but needs a lot less torque).
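    Circular (helical) interpolation of a hole is straightforward to
    generate; a minimal sketch that spirals a 7/16" endmill down a 1/2"
    hole (pitch, feed, and coordinates are hypothetical, and real code
    would add ramp-in/out and a proper finishing strategy):

```python
def helical_hole_gcode(hole_dia, tool_dia, depth, pitch, feed,
                       cx=0.0, cy=0.0):
    """Emit G-code that helically mills a hole: the tool center orbits a
    circle of radius (hole_r - tool_r), dropping `pitch` per revolution."""
    r = (hole_dia - tool_dia) / 2.0  # toolpath (center-orbit) radius
    lines = [f"G0 X{cx + r:.4f} Y{cy:.4f} Z0.0500"]  # above the start point
    z = 0.0
    while z > -depth:
        z = max(z - pitch, -depth)
        # One full CCW circle back to the start point, feeding down to z.
        lines.append(f"G3 X{cx + r:.4f} Y{cy:.4f} I{-r:.4f} J0.0000 "
                     f"Z{z:.4f} F{feed:.1f}")
    # Final flat circle to clean up the bottom of the hole.
    lines.append(f"G3 X{cx + r:.4f} Y{cy:.4f} I{-r:.4f} J0.0000")
    return lines

for line in helical_hole_gcode(0.5000, 0.4375, depth=0.250,
                               pitch=0.050, feed=5.0):
    print(line)
```

    The trade noted above shows up directly: cutting load per revolution is
    tiny (the orbit radius here is only ~0.031"), at the cost of many
    revolutions per hole.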



    Granted, for its weaknesses in terms of torque and similar, the Tormach
    does win in terms of having flood coolant and an automatic tool changer
    and similar (and a lot cheaper than any of the Haas machines).


    Granted, it is preferable to have the machine be like "Oh Crap!" and
    fault, than try to continue on and break the tool (it is still plenty
    well capable of snapping 1/8 inch and 3/16 mills and similar, or
    snapping off a 7/64 bit if it gets stuck, ...).

    But, it seems like there is a fair bit of a power differential between, say:
    Snapping off a 7/64;
    Stalling on a 3/8 bit;
    Actually drilling a hole with 1/2.



    It does at least still beat the "put holes in stuff" power of a
    cordless Drill Master drill I have (with an epic NiCd battery pack).
    Like, this drill doesn't do well for anything much beyond wood or
    plastic (and even plastic is asking a lot of it; or any wood much
    harder than yellow pine...).


    Granted, less "scary" than other cordless drills:
    Other cordless drills, if they get stuck, will try to yank themselves
    out of one's hand;
    The Drill Master will just sort of meekly push against one's hand and
    give up (with roughly the force that my cat uses when ramming me with
    his head).

    Torque is also low enough that one can free-hand hold whatever they are drilling without it breaking loose (and possibly causing injury),
    including relatively small items (if the drill can actually drill it,
    that is).

    Though, would be nicer if its battery life wasn't also crap.
    Though, as noted, can technically also get a lot more "drilling power"
    via a hand-drill, or by turning the chuck by hand (may be needed if it
    gets stuck). Technically, can also use the drill to power-tighten and power-release bits by holding the chuck (since its stall torque is less
    than my grip power...).


    Contrast, the G0704 will try to spin a bigger mill, but the machine
    will just sorta rattle and hop all over the place while doing so.
    <
    So, you mill in smaller chunks. I have a fly cutter that can cut 4140 at
    5" width on my G0704--albeit far from the speeds and feeds appropriate
    for a Cincinnati doing the same job.....


    OK.

    I was trying at one point to use a 1" ball nose mill (mostly trying to
    mill something with a Z radius), but was not happy at the results...

    The Bridgeport had no complaints though...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to Stephen Fuld on Tue Nov 14 08:02:42 2023
    Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses, you
    *can* use the software "forever" (i.e. lots of people are still running Windows XP), although if it is licensed to a particular hardware system, that system's life may limit your use, and you may not get vendor support.

    But I think the problem you are referring to is the limit on the number
    of copies you can make, or simultaneous users you can have.  Of course, this is due to the obvious difference that, as opposed to a lathe, it is trivial to make an arbitrary number of copies, or have multiple
    different users use it simultaneously.  This difference accounts for the different licensing terms.  It makes things better for the software
    vendor, which, at least in theory, allows for more software to be created.

    I am still using my licensed version of Photoshop CS2, which is almost
    20 years old. It was the last version with a permanent license but in
    order to transfer the SW to a new PC I had to contact Adobe's license
    servers.

    After 5-10 years they terminated the last remaining license server but
    instead allowed owners to download a personal copy that does not require licensing. I am guessing it could be watermarked with my personal
    details so it could be traced?

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BGB@21:1/5 to Terje Mathisen on Tue Nov 14 03:24:23 2023
    On 11/14/2023 1:02 AM, Terje Mathisen wrote:
    Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses, you
    *can* use the software "forever" (i.e. lots of people are still
    running Windows XP), although if it is licensed to a particular
    hardware system, that system's life may limit your use, and you may
    not get vendor support.

    But I think the problem you are referring to is the limit on the
    number of copies you can make, or simultaneous users you can have.  Of
    course, this is due to the obvious difference that, as opposed to a
    lathe, it is trivial to make an arbitrary number of copies, or have
    multiple different users use it simultaneously.  This difference
    accounts for the different licensing terms.  It makes things better
    for the software vendor, which, at least in theory, allows for more
    software to be created.

    I am still using my licensed version of Photoshop CS2, which is almost
    20 years old. It was the last version with a permanent license but in
    order to transfer the SW to a new PC I had to contact Adobe's license servers.

    After 5-10 years they terminated the last remaining license server but instead allowed owners to download a personal copy that does not require licensing. I am guessing it could be watermarked with my personal
    details so it could be traced?


    For a while, I had been using "Cool Edit Pro" for audio editing, but
    what eventually wrecked this was it no longer working natively after
    Windows moved to 64 bits, and the lack of any really good/convenient
    ways of emulating older versions of Windows which:
    Continued to still work;
    Allowed some way to share files between the host machine and VM.

    For whatever reason, I have had very little success getting hardware virtualization to work, and all the "mainstream" VMs went over to
    requiring it, eventually leaving me with little real option other than
    QEMU and DOSBox (running Windows 3.11).

    Then ended up mostly going over to Audacity (pros/cons apparently).
    Then, in more recent years, someone had released something (as a sort of Windows plug-in) to make Win16 software work again on 64-bit Windows.

    But, haven't felt much need to go back to Cool Edit, but the old MS
    BitEdit and PalEdit tools lack any good modern equivalent (basically,
    sort of like Paint, but built specifically for 16-color and 256-color
    bitmap images, with direct control over the color palette and operating directly in terms of said palette).


    Newer programs like Paint.NET can be used in a vaguely similar way,
    but lack the ability to work explicitly with a 16 or 256 color palette
    (or any real usable support for indexed-color graphics).

    Similarly "GraphX2" seems to be in a vaguely similar direction, but its
    UI is very weird (apparently inspired by some programs on the Amiga).

    Could almost write a tool for this, but it would be "pretty niche" in
    any case (and if it were a popular use-case, presumably someone else
    would have done so?...).

    Sometimes, there are cases where one wants to work with low-res graphics
    in terms of individual pixels and a fixed color palette (rather than
    high-res true-color graphics). Needing to use another image specifically
    as a color palette in an otherwise true-color oriented program is
    annoying and counter-productive.

    Maybe also useful would be an option to display pixel data in binary
    or hexadecimal, or load/dump images in a hexadecimal notation (bonus
    points if in a C based notation), ...
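    The hex-dump idea is simple enough to sketch; a hypothetical layout
    (a palette of 0xRRGGBB words, then one index byte per pixel, emitted
    as C source):

```python
def dump_indexed_c(name, width, height, palette, pixels):
    """Emit an indexed-color image as C source: a palette of 0xRRGGBB
    entries followed by one palette index per pixel, row by row."""
    out = [f"static const unsigned int {name}_pal[{len(palette)}] = {{"]
    out.append("    " + ", ".join(f"0x{c:06X}" for c in palette))
    out.append("};")
    out.append(f"static const unsigned char {name}_pix[{width * height}] = {{")
    for y in range(height):
        row = pixels[y * width:(y + 1) * width]
        out.append("    " + ", ".join(f"0x{p:02X}" for p in row) + ",")
    out.append("};")
    return "\n".join(out)

# A 4x2 two-color test image (checker-ish pattern):
print(dump_indexed_c("img", 4, 2, [0x000000, 0xFFFFFF],
                     [0, 1, 1, 0,  1, 0, 0, 1]))
```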

    ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris M. Thomasson@21:1/5 to BGB on Tue Nov 14 01:31:58 2023
    On 11/14/2023 1:24 AM, BGB wrote:
    On 11/14/2023 1:02 AM, Terje Mathisen wrote:
    Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses, you
    *can* use the software "forever" (i.e. lots of people are still
    running Windows XP), although if it is licensed to a particular
    hardware system, that system's life may limit your use, and you may
    not get vendor support.

    But I think the problem you are referring to is the limit on the
    number of copies you can make, or simultaneous users you can have.
    Of course, this is due to the obvious difference that, as opposed to
    a lathe, it is trivial to make an arbitrary number of copies, or have
    multiple different users use it simultaneously.  This difference
    accounts for the different licensing terms.  It makes things better
    for the software vendor, which, at least in theory, allows for more
    software to be created.

    I am still using my licensed version of Photoshop CS2, which is almost
    20 years old. It was the last version with a permanent license but in
    order to transfer the SW to a new PC I had to contact Adobe's license
    servers.

    After 5-10 years they terminated the last remaining license server but
    instead allowed owners to download a personal copy that does not
    require licensing. I am guessing it could be watermarked with my
    personal details so it could be traced?


    For a while, I had been using "Cool Edit Pro" for audio editing, but
    what eventually wrecked this was it no longer working natively after
    Windows moved to 64 bits, and the lack of any really good/convenient
    ways of emulating older versions of Windows which:
      Continued to still work;
      Allowed some way to share files between the host machine and VM.

    For whatever reason, I have had very little success getting hardware virtualization to work, and all the "mainstream" VMs went over to
    requiring it, eventually leaving me with little real option other than
    QEMU and DOSBox (running Windows 3.11).

    Then ended up mostly going over to Audacity (pros/cons apparently).
    Then, in more recent years, someone had released something (as a sort of Windows plug-in) to make Win16 software work again on 64-bit Windows.

    But, haven't felt much need to go back to Cool Edit, but the old MS
    BitEdit and PalEdit tools lack any good modern equivalent (basically,
    sort of like Paint, but built specifically for 16-color and 256-color
    bitmap images, with direct control over the color palette and operating directly in terms of said palette).


    Newer programs like Paint.NET can be used in a vaguely similar way, but
    lacks the ability to work explicitly with a 16 or 256 color palette (nor
    any real usable support for indexed-color graphics).

    Similarly "GraphX2" seems to be in a vaguely similar direction, but its
    UI is very weird (apparently inspired by some programs on the Amiga).

    Could almost write a tool for this, but it would be "pretty niche" in
    any case (and if it were a popular use-case, presumably someone else
    would have done so?...).

    Sometimes, there are cases where one wants to work with low-res graphics
    in terms of individual pixels and a fixed color palette (rather than
    high-res true-color graphics). Needing to use another image specifically
    as a color palette in an otherwise true-color oriented program is
    annoying and counter-productive.

    Maybe also useful would be an option for it to also be able to display
    pixel data in binary or hexadecimal, or load/dump images in a
    hexadecimal notation (bonus points if in a C based notation), ...

    ...


    Bust out some Protracker... ;^)

    https://youtu.be/kEBW8A3bw-Q

    Amiga?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robert Finch@21:1/5 to BGB on Tue Nov 14 10:33:45 2023
    On 2023-11-14 4:24 a.m., BGB wrote:
    On 11/14/2023 1:02 AM, Terje Mathisen wrote:
    Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses, you
    *can* use the software "forever" (i.e. lots of people are still
    running Windows XP), although if it is licensed to a particular
    hardware system, that system's life may limit your use, and you may
    not get vendor support.

    But I think the problem you are referring to is the limit on the
    number of copies you can make, or simultaneous users you can have.
    Of course, this is due to the obvious difference that, as opposed to
    a lathe, it is trivial to make an arbitrary number of copies, or have
    multiple different users use it simultaneously.  This difference
    accounts for the different licensing terms.  It makes things better
    for the software vendor, which, at least in theory, allows for more
    software to be created.

    I am still using my licensed version of Photoshop CS2, which is almost
    20 years old. It was the last version with a permanent license but in
    order to transfer the SW to a new PC I had to contact Adobe's license
    servers.

    After 5-10 years they terminated the last remaining license server but
    instead allowed owners to download a personal copy that does not
    require licensing. I am guessing it could be watermarked with my
    personal details so it could be traced?


    For a while, I had been using "Cool Edit Pro" for audio editing, but
    what eventually wrecked this was it no longer working natively after
    Windows moved to 64 bits, and the lack of any really good/convenient
    ways of emulating older versions of Windows which:
      Continued to still work;
      Allowed some way to share files between the host machine and VM.

    For whatever reason, I have had very little success getting hardware virtualization to work, and all the "mainstream" VMs went over to
    requiring it, eventually leaving me with little real option other than
    QEMU and DOSBox (running Windows 3.11).

    Then ended up mostly going over to Audacity (pros/cons apparently).
    Then, in more recent years, someone had released something (as a sort of Windows plug-in) to make Win16 software work again on 64-bit Windows.

    But, haven't felt much need to go back to Cool Edit, but the old MS
    BitEdit and PalEdit tools lack any good modern equivalent (basically,
    sort of like Paint, but built specifically for 16-color and 256-color
    bitmap images, with direct control over the color palette and operating directly in terms of said palette).


    Newer programs like Paint.NET can be used in a vaguely similar way, but
    lacks the ability to work explicitly with a 16 or 256 color palette (nor
    any real usable support for indexed-color graphics).

    Similarly "GraphX2" seems to be in a vaguely similar direction, but its
    UI is very weird (apparently inspired by some programs on the Amiga).

    Could almost write a tool for this, but it would be "pretty niche" in
    any case (and if it were a popular use-case, presumably someone else
    would have done so?...).

    Sometimes, there are cases where one wants to work with low-res graphics
    in terms of individual pixels and a fixed color palette (rather than
    high-res true-color graphics). Needing to use another image specifically
    as a color palette in an otherwise true-color oriented program is
    annoying and counter-productive.

    Maybe also useful would be an option for it to also be able to display
    pixel data in binary or hexadecimal, or load/dump images in a
    hexadecimal notation (bonus points if in a C based notation), ...

    ...

    After searching around on the web for a font editor that could output
    in formats I needed, I decided to roll my own. It is pretty basic but
    can be used to create bitmap images for sprites in addition to fonts.
    It outputs raw data, memory files, and Verilog code. It is called
    GlyphEdit. It supports 8bpp, 16bpp, and 32bpp for sprite images.
    IIRC it is a VB program, and might make a starting point to support
    other graphics.
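    A font-to-memory-file step like that can be sketched as emitting
    $readmemh-style data, one hex byte per glyph row (a guess at a typical
    layout, not GlyphEdit's actual format):

```python
def font_to_memh(glyphs):
    """Pack 8x8 1bpp glyphs into $readmemh lines: one 2-digit hex byte
    per glyph row, MSB = leftmost pixel."""
    lines = []
    for rows in glyphs:              # each glyph: 8 strings of '.'/'#'
        for row in rows:
            byte = 0
            for i, ch in enumerate(row):
                if ch == '#':
                    byte |= 0x80 >> i
            lines.append(f"{byte:02x}")
    return lines

# A single 8x8 'T' glyph drawn as ASCII art:
T = ["########",
     "...##...",
     "...##...",
     "...##...",
     "...##...",
     "...##...",
     "...##...",
     "........"]
print("\n".join(font_to_memh([T])))
```

    The resulting file can then be loaded into a Verilog ROM with
    $readmemh, which keeps the editor and the HDL decoupled.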

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup@21:1/5 to BGB on Tue Nov 14 16:41:43 2023
    BGB wrote:

    On 11/13/2023 8:06 PM, MitchAlsup wrote:
    BGB wrote:

    On 11/13/2023 4:00 PM, Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses, you
    *can* use the software "forever" (i.e. lots of people are still
    running Windows XP), although if it is licensed to a particular
    hardware system, that system's life may limit your use, and you may
    not get vendor support.


    But I think the problem you are referring to is the limit on the
    number of copies you can make, or simultaneous users you can have.
    Of course, this is due to the obvious difference that, as opposed to
    a lathe, it is trivial to make an arbitrary number of copies, or have
    multiple different users use it simultaneously.  This difference
    accounts for the different licensing terms.  It makes things better
    for the software vendor, which, at least in theory, allows for more
    software to be created.


    Yeah, this is a fundamental difference...

    To copy a lathe would require the time and expense both to buy all of
    the materials and machine all the parts.
    Yes
    Then one may find that it ends up costing more than it would have cost
    just to buy another one.
    Invariably
    And, despite the lathes and mills typically costing $k, one is
    hard-pressed to make something of comparable quality for cheaper.
    Yes


    For stability, one needs things like a lot of weight, and the OEM has
    the advantage that they can cost-effectively sand-cast big chunks of
    cast iron (where cheaply making big cast iron parts is a technology
    mostly out of reach of home shops).

    Other alternatives are things like:
       Abrasive sand (such as garnet sand or slag) mixed with epoxy:
         Not cheap;
       Concrete: Not very good;
         Needs to be encased in metal or epoxy to not suck;
         One may still need garnet sand or slag for the needed density;
         Silica sand is cheaper, but not really dense enough.

    If the machine is too light, then it would rattle or vibrate
    excessively during cuts (AKA: chatter) which would ruin the quality of
    the machined parts.
    <
    It is not weight per se, it is stiffness between the piece holding the
    part and the spindle applying forces to remove chips from the part.
    <

    Or, some combination of machine stiffness, inertia, and having enough weight+inertia to keep the thing firmly anchored in place on the floor.


    Like, when I tried at one point to mill steel with a 1" ballnose endmill
    on a CNC converted G0704, I quickly stopped this as, as soon as the
    endmill started cutting the steel, it seemed like the whole thing was
    going to rattle itself apart...


    Though, I guess if a person had access to a waterjet, they could cut
    the parts out of a bunch of steel plate as layers, and then braze all
    the layers together. Labor would likely still cost more than being
    able to pour cast iron though.


    For other parts, having machines purpose built to make one specific
    part will make those parts cheaper than using more general purpose
    machines to make all the parts.

    ....


    So, say, I don't really fault Grizzly or Tormach for the cost of their
    machines, it is unlikely they could make them for all that much
    cheaper and still make a profit.


    Though, now having one of the Tormach CNC machines, limits can still
    be noted:
       Tries to use a 1/2 inch drill on some stainless steel:
         Say, to enlarge a hole from 3/8 to 1/2 inch.
       Starts to drill, eats into steel a little, spindle: "NOPE!",
         Spindle stalls and machine goes into a faulted state.
       So, one needs to go: 1/4" (OK), 3/8" (Also OK), to 7/16".
         And then mill the hole the rest of the way via an endmill.

    Say, because 1.5 kW at 10k RPM (in high range), doesn't mean it can
    drill a 1/2" hole at 700 RPM. Granted, this works out to around 1
    lb-ft or 192 oz-in of torque (so, not very much torque even by
    cordless drill standards).

    Well, or one can put it into low range by manually moving a belt, but
    this kinda sucks as well, as then one has the torque to run the drill,
    but not really enough RPM range for the endmill, ... (too bad, say,
    they couldn't have put a CVT or similar in the thing).


    So, say, vs a CNC converted Bridgeport:
        Tormach:
          + Tighter tolerances
            Holds +/- 0.005 easily
            Smaller is still hard (+/- 0.002 is still pushing it)
          + Has flood coolant;
          + Has tool changer;
          + Can dynamically change RPM.
          - Less travel.
            + Though, still more than my G0704 (at least in Y and Z).
          + Faster
              Can make quick work of aluminum and similar, ...
          - Not so much torque.
              Works fine if mostly using 1/8, 3/16, and 1/4 inch endmills.
              7/16 and 5/8 (*2), don't get too aggressive here with cuts.
              3/4: Only if you are milling plastic...
        Bridgeport:
          - Maybe gets +/- 0.005 if you are feeling lucky
             +/- 0.015 mostly OK.
          + More powerful spindle;
              Not much issue drilling holes in steel.
          + More X/Y/Z travel;
          - No coolant;
          - RPM is controlled by moving V belts.
          - Using R8 collets and drawbar sucks.
          - The software is buggy and likes to crash frequently, ...
    <
    The difference between 0.005 and 0.001 on a Bridgeport is metrology and
    care. If you can't measure something you cannot compensate for it--this
    goes double during setup where you sweep the clamped part to determine
    that it is held flat in the vise and normal to the milling direction.
    <


    The Bridgeport in this case was modified to use ballscrews, with
    NEMA34 steppers (IIRC, 1740 oz-in) connected to the ballscrews via
    timing belts and pulleys (similar setup on my G0704, just using
    470 oz-in NEMA23 motors).

    Both machines can get "in the area" of 0.005", but don't seem to get
    this with much repeatability (say, one cuts the same hole, and one time
    it might be 0.004 over, or 0.004 under, or, ...).

    Things like rotating a feature, changing the feedrate, etc., may also
    affect what size it cuts.


    This seems to be less of an issue on the Tormach machine (1100MX), which seems to usually get +/- 0.001, but for parts asking for this, this is
    more of a problem.


    Though, have noted that feedrate does affect accuracy (it seems to cut
    a little oversize if using faster feedrates or deeper cuts).

    Some settings I had found that seem to work fairly well (on the Tormach):
      Aluminum with 3/16 endmill:
        RPM: 6500
        Feed: 17.0 inch/minute
        Depth: 0.015
      Aluminum with 1/8 endmill:
        RPM: 8500
        Feed: 19.0 inch/minute
        Depth: 0.010
      ...

    For the Bridgeport, was generally using (for 1/8 and 3/16):
      RPM: ~ 3000
      Feed: 6.5 inch/minute
      Depth: 0.010

    The G0704 is hard-pressed to go much over around 1500 RPM, so ~ 3.0 inch/minute.
    <
    I rarely use my C07300 with a spindle speed above 500 RPM, and mostly
    use 270 RPM for milling.

    It is possible to go a little faster, but doing so has drawbacks:
      Less accurate cuts;
      Worse burrs;
      Higher risk of breaking the endmills.


    For steel, I usually reduce the depth of cut to around 0.005, but can
    get away with 0.010 or 0.015 for bigger endmills. RPM and feedrate need
    to be reduced by around half.

    For stainless steel, need to go down lower (to around 1/3).

    For brass, around 2/3 or 3/4 the speeds used for aluminum.


    *2: Machine came with some ER20 tool holders (in BT30), but these
    can't handle larger tools. Also got some ER32 / BT30 tool holders,
    which can handle 5/8 and 3/4" tools, but these can't really be used
    for general purpose milling (bigger tools being more for if one needs
    a lot more flute length).
    <
    I have a whole set of ER-40 collets, an R8 spindle adapter, and a 5C
    adapter so the collets can be used in mill or lathe.
    <

    OK.

    Don't have any ER40, mostly ER20 and ER32.

    Ended up getting several sets of ER collets, mostly so that one can set
    up multiple tools of the same size.



    I guess, one can use a shell-mill/face-mill, but one is limited to
    fairly small face mills and fairly light cuts (mostly defeating the
    advantage vs going back and forth using a 7/16 or 5/8 mill or similar).
    <
    Granted a Bridgeport is limited, but so are 100,000 pound 36" lathes with
    75-foot beds run by 50 HP motors.
    <

    I was talking about the Tormach here.

    The Bridgeport is at an advantage in terms of "raw power", since while
    it has the same theoretical motor power as the Tormach, it is geared
    down with belts (so, more effective/usable power). Putting high load on
    the spindle on the Bridgeport also doesn't cause the machine to go "Oh
    Crap!" and immediately fault out (for better or for worse).


    So, say:
    Bridgeport: 2 HP 3-Phase, driven via a VFD, with belts.
    Generally, was keeping the VFD in the 50-70 Hz range,
    using the belts for coarse adjustment.
    Travel: 30 x 14 x 16

    Tormach: 1.5 kW Servo (~ 2 HP), powered off 230 VAC single phase.
    However, spindle seems to be "Requested speed or die immediately".
    So, the spindle doesn't slow down, it stops and the machine faults.
    Its belt only has two positions:
    200 .. 10000 RPM, with weak torque;
    40 .. 2000 RPM, slower but more torque.
    Travel: 17 x 10 x 16

    G0704: 1 HP PMDC motor.
    High Range: ~ 0 .. 1500 RPM (depends on temperature)
    Low Range: ~ 0 .. 500 RPM (IIRC).
    Travel: 18 x 6 x 11
    Though, the ballscrew conversion negatively affected Y travel,
    so mine is only ~ 4 inch.


    Despite having a theoretically weaker motor, the G0704 is still a bit
    more capable in terms of "put a hole in this piece of steel using this drill".

    Like, it can in-fact drill a 1/2 inch hole in a piece of steel...

    Vs, say, needing to circular interpolate the hole using a 7/16 endmill
    (a lot slower, but needs a lot less torque).



    Granted, for its weaknesses in terms of torque and similar, the Tormach
    does win in terms of having flood coolant and an automatic tool changer
    and similar (and a lot cheaper than any of the Haas machines).


    Granted, it is preferable to have the machine be like "Oh Crap!" and
    fault, than try to continue on and break the tool (it is still plenty
    well capable of snapping 1/8 inch and 3/16 mills and similar, or
    snapping off a 7/64 bit if it gets stuck, ...).

    But, it seems like there is a fair bit of a power differential between, say:
    Snapping off a 7/64;
    Stalling on a 3/8 bit;
    Actually drilling a hole with 1/2.
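    For reference, the implied spindle torque can be sanity-checked from power and speed (tau = P / omega); a minimal sketch, using the Tormach's nominal 1.5 kW and 10k RPM top speed:

```c
/* Shaft torque implied by mechanical power at a given speed:
   tau = P / omega, where omega = 2*pi*rpm/60 (rad/s). */
static double torque_nm(double watts, double rpm)
{
    return watts / (2.0 * 3.14159265358979 * rpm / 60.0);
}

/* 1500 W at 10000 RPM: torque_nm(1500.0, 10000.0) ~= 1.43 N*m,
   or about 1.06 lb-ft (~203 oz-in), so close to 1 lb-ft -- not much
   torque by drill-press standards, consistent with the stalls above. */
```

    A servo is assumed here to hold roughly constant torque below its top speed, so running it at 700 RPM leaves the same ~1 lb-ft, not the full rated power.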



    It does at least still beat the "put holes in stuff" power
    of a cordless Drill Master drill I have (with an epic NiCd battery
    pack). Like, this drill doesn't do well for anything much beyond wood or plastic (and even plastic is asking a lot of it; or any wood much harder
    than yellow pine...).


    Granted, less "scary" than other cordless drills:
    Other cordless drills, if they get stuck, will try to yank themselves
    out of one's hand;
    The Drill Master will just sort of meekly push against one's hand and
    give up (with roughly the force that my cat uses when ramming me with
    his head).

    Torque is also low enough that one can free-hand hold whatever they are drilling without it breaking loose (and possibly causing injury),
    including relatively small items (if the drill can actually drill it,
    that is).

    Though, would be nicer if its battery life wasn't also crap.
    Though, as noted, can technically also get a lot more "drilling power"
    via a hand-drill, or by turning the chuck by hand (may be needed if it
    gets stuck). Technically, can also use the drill to power-tighten and power-release bits by holding the chuck (since its stall torque is less
    than my grip power...).


    Contrast, the G0704 will try to spin a bigger mill, but the machine
    will just sorta rattle and hop all over the place while doing so.
    <
    So, you mill in smaller chunks. I have a fly cutter that can cut 4140 at
    5" width on my G0704--albeit far from the speeds and feeds appropriate
    for a Cincinnati doing the same job.....


    OK.

    I was trying at one point to use a 1" ball nose mill (mostly trying to
    mill something with a Z radius), but was not happy at the results...

    The Bridgeport had no complaints though...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BGB@21:1/5 to MitchAlsup on Tue Nov 14 13:42:56 2023
    On 11/14/2023 10:41 AM, MitchAlsup wrote:
    BGB wrote:

    On 11/13/2023 8:06 PM, MitchAlsup wrote:
    BGB wrote:

    On 11/13/2023 4:00 PM, Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses,
    you *can* use the software "forever" (i.e. lots of people are still
    running Windows XP), although if it is licensed to a particular
    hardware system, that system's life may limit your use, and you may
    not get vendor support.


    But I think the problem you are referring to is the limit on the
    number of copies you can make, or simultaneous users you can have.
    Of course, this is due to the obvious difference that, as opposed
    to a lathe, it is trivial to make an arbitrary number of copies, or
    have multiple different users use it simultaneously.  This
    difference accounts for the different licensing terms.  It makes
    things better for the software vendor, which, at least in theory,
    allows for more software to be created.


    Yeah, this is a fundamental difference...

    To copy a lathe would require the time and expense both to buy all
    of the materials and machine all the parts.
    Yes
    Then one may find that it ends up costing more than it would have
    cost just to buy another one.
    Invariably
    And, despite the lathes and mills typically costing $k, one is
    hard-pressed to make something of comparable quality for cheaper.
    Yes


    For stability, one needs things like a lot of weight, and the OEM
    has the advantage that they can cost-effectively sand-cast big
    chunks of cast iron (where cheaply making big cast iron parts is a
    technology mostly out of reach of home shops).

    Other alternatives are things like:
       Abrasive sand (such as garnet sand or slag) mixed with epoxy:
         Not cheap;
       Concrete: Not very good;
         Needs to be encased in metal or epoxy to not suck;
          One may still need garnet sand or slag for the needed density;
          Silica sand is cheaper, but not really dense enough.

    If the machine is too light, then it would rattle or vibrate
    excessively during cuts (AKA: chatter) which would ruin the quality
    of the machined parts.
    <
    It is not weight per se, it is stiffness between the piece holding
    the part
    and the spindle applying forces to remove chips from the part.
    <

    Or, some combination of machine stiffness, inertia, and having enough
    weight+inertia to keep the thing firmly anchored in place on the floor.


    Like, when I tried at one point to mill steel with a 1" ballnose
    endmill on a CNC converted G0704, I quickly stopped this as, as soon
    as the endmill started cutting the steel, it seemed like the whole
    thing was going to rattle itself apart...


    Though, I guess if a person had access to a waterjet, they could cut
    the parts out of a bunch of steel plate as layers, and then braze
    all the layers together. Labor would likely still cost more than
    being able to pour cast iron though.


    For other parts, having machines purpose built to make one specific
    part will make those parts cheaper than using more general purpose
    machines to make all the parts.

    ....


    So, say, I don't really fault Grizzly or Tormach for the cost of
    their machines, it is unlikely they could make them for all that
    much cheaper and still make a profit.


    Though, now having one of the Tormach CNC machines, limits can still
    be noted:
       Tries to use a 1/2 inch drill on some stainless steel:
         Say, to enlarge a hole from 3/8 to 1/2 inch.
       Starts to drill, eats into steel a little, spindle: "NOPE!",
         Spindle stalls and machine goes into a faulted state.
       So, one needs to go: 1/4" (OK), 3/8" (Also OK), to 7/16".
         And then mill the hole the rest of the way via an endmill.

    Say, because 1.5 kW at 10k RPM (in high range), doesn't mean it can
    drill a 1/2" hole at 700 RPM. Granted, this works out to around 1
    lb-ft or 192 oz-in of torque (so, not very much torque even by
    cordless drill standards).

    Well, or one can put it into low range by manually moving a belt,
    but this kinda sucks as well, as then one has the torque to run the
    drill, but not really enough RPM range for the endmill, ... (too
    bad, say, they couldn't have put a CVT or similar in the thing).


    So, say, vs a CNC converted Bridgeport:
        Tormach:
          + Tighter tolerances
            Holds +/- 0.005 easily
            Smaller is still hard (+/- 0.002 is still pushing it)
          + Has flood coolant;
          + Has tool changer;
          + Can dynamically change RPM.
          - Less travel.
            + Though, still more than my G0704 (at least in Y and Z).
          + Faster
              Can make quick work of aluminum and similar, ...
          - Not so much torque.
              Works fine if mostly using 1/8, 3/16, and 1/4 inch endmills.
              7/16 and 5/8 (*2), don't get too aggressive here with cuts.
              3/4: Only if you are milling plastic...
        Bridgeport:
          - Maybe gets +/- 0.005 if you are feeling lucky
             +/- 0.015 mostly OK.
          + More powerful spindle;
              Not much issue drilling holes in steel.
          + More X/Y/Z travel;
          - No coolant;
          - RPM is controlled by moving V belts.
          - Using R8 collets and drawbar sucks.
          - The software is buggy and likes to crash frequently, ...
    <
    The difference between 0.005 and 0.001 on a Bridgeport is metrology and
    care. If you can't measure something you cannot compensate for
    it--this goes double during setup where you sweep the clamped part to
    determine
    that it is held flat in the vise and normal to the milling direction.
    <


    The Bridgeport in this case was modified to use ballscrews, with
    NEMA34 steppers (IIRC, 1740 oz-in) connected to the leadscrews via
    timing belts and pulleys (similar setup on my G0704; just using 470
    oz-in NEMA23 motors).

    Both machines can get "in the area" of 0.005", but don't seem to get
    this with much repeatability (say, one cuts the same hole, and one
    time it might be 0.004 over, or 0.004 under, or, ...).

    Things like rotating a feature, changing the feedrate, etc, may also
    affect what size it cuts.


    This seems to be less of an issue on the Tormach machine (1100MX),
    which seems to usually get +/- 0.001, but for parts asking for this,
    this is more of a problem.


    Though, have noted that feedrate does affect accuracy (it seems to cut
    a little oversize if using faster feedrates or deeper cuts).

    Some settings I had found that seem to work fairly well (on the Tormach):
       Aluminum with 3/16 endmill:
         RPM: 6500
         Feed: 17.0  inch/minute
         Depth: 0.015
       Aluminum with 1/8 endmill:
         RPM: 8500
         Feed: 19.0
         Depth: 0.010
       ...

    For the Bridgeport, was generally using (for 1/8 and 3/16):
       RPM: ~ 3000
       Feed: 6.5 inch/minute
       Depth: 0.010

    The G0704 is hard-pressed to go much over around 1500 RPM, so ~ 3.0
    inch/minute.
    <
    I rarely use my C07300 with a spindle speed above 500 RPM, and mostly
    use 270 RPMs for milling.


    Well, mostly it is "more RPM, more cutting", at least up to a certain
    point (if RPM is too fast, the mill no longer cuts effectively).

    So, if RPM is slower, feedrate needs to be slower, which means it takes
    longer to make the part, ...
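    That scaling can be written down directly: at constant chip load, feedrate is proportional to RPM. A minimal sketch (the 2-flute and 0.0013 in/tooth numbers are illustrative assumptions, not measured values):

```c
/* Feedrate scales linearly with RPM at a constant chip load:
   feed (inch/min) = rpm * flutes * chip_load (inch/tooth). */
static double feed_ipm(double rpm, int flutes, double chip_in)
{
    return rpm * flutes * chip_in;
}

/* feed_ipm(6500.0, 2, 0.0013) = 16.9 in/min (close to the 17.0 above);
   feed_ipm(3000.0, 2, 0.0013) =  7.8 in/min (near the Bridgeport's 6.5). */
```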


    Well, and material, say, steel needs to be cut slower than aluminum or
    brass, and stainless (or hardened steel) slower than normal steel.

    Probably a majority of the parts being made were aluminum (mostly 6061, sometimes 7075, or various other alloys).

    Sometimes steel, brass, stainless, ...

    Sometimes plastic, phenolic (fiberglass+resin, similar to circuit board material, but much thicker), fiberglass or carbon-fiber composite, or
    other weirder materials (often composites of various sorts).

    Then, there are materials like graphite, that will crumble if one looks
    at them funny, or ones that will melt into goo if they get wet (like PVA
    or PVA based composites, *), ...


    *: Say, when someone decides it is a good idea to make gears out of a
    composite of compressed cotton fiber and PVA. But, then one has to avoid
    any contact with water or coolant, as the material will melt (and then
    the "chips" left in the machine later absorb coolant and turn into a
    sort of sticky goo that is stuck on everything, or form a layer of slime
    in the chip-tray).



    Usually, "whatever shows up" and needs to be milled (someone else is
    mostly dealing with sourcing the jobs and materials).

    Mostly, from what I can gather, it is a lot of the stuff that either the
    other shops in the area refuse to do, or would charge considerably more
    to do so.

    Like, say, other shops don't really want to try to hold 0.005 on
    PVA+Cotton either, and then one is left to wonder how asking 0.005 even
    makes sense on the material (since it is slightly compressible and will shrink/expand based on humidity), ...



    But, yeah, for a 7/16 endmill at aluminum, this would be around 4500 RPM
    or so. But, depends on the machine.
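    The RPM-per-diameter rule comes from holding surface speed roughly constant. A sketch (the ~500 SFM target for aluminum is an assumption; it roughly reproduces the 7/16-at-4500 figure):

```c
/* RPM needed for a target surface speed (SFM, feet/min) at a given
   tool diameter (inches): rpm = 12 * sfm / (pi * dia). */
static double rpm_for_sfm(double sfm, double dia_in)
{
    return 12.0 * sfm / (3.14159265358979 * dia_in);
}

/* rpm_for_sfm(500.0, 0.4375) ~= 4365 RPM for a 7/16 endmill; smaller
   tools want proportionally more RPM, which is where low machine RPM
   caps (1500 or 3000) start to hurt. */
```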

    So, as noted, my G0704 doesn't go much over 1500, and the Bridgeport
    doesn't go much over 3000. Which, in turn, is fairly limiting for
    aluminum, but mostly OK for steel.

    RPM limits do somewhat limit feedrate for smaller endmills though, like
    1/8 and 3/16, which are used a lot (but, it gets kind of annoying if
    someone wants a feature smaller than what can be done with a 1/8
    endmill; or, say, a pocket with a corner radius less than 0.063, ...).


    But, say, smaller endmills, like 5/64, or 3/64, are not used as often as
    they have sort of a habit of breaking if one looks at them funny. Well,
    and for related reasons, a perfectly square-edged pocket isn't going to
    happen (well, at least short of using a file or similar, and then
    probably wrecking the tolerance and surface finish in the process, ...).

    ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BGB@21:1/5 to Robert Finch on Tue Nov 14 16:03:26 2023
    On 11/14/2023 9:33 AM, Robert Finch wrote:
    On 2023-11-14 4:24 a.m., BGB wrote:
    On 11/14/2023 1:02 AM, Terje Mathisen wrote:
    Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses,
    you *can* use the software "forever" (i.e. lots of people are still
    running Windows XP), although if it is licensed to a particular
    hardware system, that system's life may limit your use, and you may
    not get vendor support.

    But I think the problem you are referring to is the limit on the
    number of copies you can make, or simultaneous users you can have.
    Of course, this is due to the obvious difference that, as opposed to
    a lathe, it is trivial to make an arbitrary number of copies, or
    have multiple different users use it simultaneously.  This
    difference accounts for the different licensing terms.  It makes
    things better for the software vendor, which, at least in theory,
    allows for more software to be created.

    I am still using my licensed version of Photoshop CS2, which is
    almost 20 years old. It was the last version with a permanent license
    but in order to transfer the SW to a new PC I had to contact Adobe's
    license servers.

    After 5-10 years they terminated the last remaining license server
    but instead allowed owners to download a personal copy that does not
    require licensing. I am guessing it could be watermarked with my
    personal details so it could be traced?


    For a while, I had been using "Cool Edit Pro" for audio editing, but
    what eventually wrecked this was it no longer working natively after
    Windows moved to 64 bits, and the lack of any really good/convenient
    ways of emulating older versions of Windows which:
       Continued to still work;
       Allowed some way to share files between the host machine and VM.

    For whatever reason, I have had very little success getting hardware
    virtualization to work, and all the "mainstream" VMs went over to
    requiring it, eventually leaving me with little real option other than
    QEMU and DOSBox (running Windows 3.11).

    Then ended up mostly going over to Audacity (pros/cons apparently).
    Then, in more recent years, someone had released something (as a sort
    of Windows plug-in) to make Win16 software work again on 64-bit Windows.

    But, haven't felt much need to go back to Cool Edit, but the old MS
    BitEdit and PalEdit tools lack any good modern equivalent (basically,
    sort of like Paint, but built specifically for 16-color and 256-color
    bitmap images, with direct control over the color palette and
    operating directly in terms of said palette).


    Newer programs like Paint.NET can be used in a vaguely similar way,
    but lack the ability to work explicitly with a 16 or 256 color
    palette (nor any real usable support for indexed-color graphics).

    Similarly "GrafX2" seems to be in a vaguely similar direction, but
    its UI is very weird (apparently inspired by some programs on the Amiga).

    Could almost write a tool for this, but it would be "pretty niche" in
    any case (and if it were a popular use-case, presumably someone else
    would have done so?...).

    Sometimes, there are cases where one wants to work with low-res
    graphics in terms of individual pixels and a fixed color palette
    (rather than high-res true-color graphics). Needing to use another
    image specifically as a color palette in an otherwise true-color
    oriented program is annoying and counter-productive.

    Maybe also useful would be an option for it to also be able to display
    pixel data in binary or hexadecimal, or load/dump images in a
    hexadecimal notation (bonus points if in a C based notation), ...

    ...

    After searching around on the web for a font editor that could output
    in formats I needed, I decided to roll my own. It is pretty basic but
    can be used to create bitmap images for sprites in addition to fonts.
    It outputs raw, memory file, and verilog code as output data. It is
    called GlyphEdit. It support 8bpp, 16bpp, and 32bpp for sprite images.
    IIRC it is a VB program, and might make a starting point to support
    other graphics.


    I had mostly done my font artwork by first drawing stuff in Paint.NET,
    and then writing a quick/dirty tool to process the glyph cells (from a
    TGA image) and emit them in a hexadecimal form.
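    Such a tool mostly reduces to bit-packing each glyph cell. A minimal sketch (the 8x8 cell size and bit order, MSB = leftmost pixel with row 0 in the low byte, are assumptions, not necessarily the exact convention used):

```c
#include <stdint.h>

/* Pack one 8x8 monochrome glyph cell into a 64-bit constant, one byte
   per row, suitable for emitting as a C-style hex literal. */
static uint64_t pack_glyph8x8(const uint8_t *pix, int stride)
{
    uint64_t v = 0;
    for (int y = 0; y < 8; y++) {
        uint8_t row = 0;
        for (int x = 0; x < 8; x++)
            if (pix[y * stride + x])          /* nonzero pixel -> set bit */
                row |= (uint8_t)(0x80 >> x);
        v |= (uint64_t)row << (y * 8);
    }
    return v;
}
```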

    Where, I was typically using TGA as a working format for true-color
    images, and BMP for indexed-color images (but, then get annoyed that
    tools like Paint.NET don't support emitting BMP's with a fixed
    user-specified palette, rather than a dynamically generated optimized
    palette).


    Say, for 16 color, one might want a standard VGA RGBI palette, not some dynamically-generated one. Or, for 256 color, also a specified palette
    (such as my 16 shades of 16 colors, or 13 shades of 19 colors, palettes).

    So, in effect, in some cases I may need to first save the image to a
    TGA, and then use a tool to convert them to a BMP of the needed
    bit-depth and palette.
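    The conversion step itself is mostly a nearest-color search against the fixed palette; a minimal sketch (squared-RGB distance, no dithering):

```c
#include <limits.h>
#include <stdint.h>

/* Map a true-color pixel to the nearest entry of a fixed palette
   (e.g. the 16-entry VGA RGBI palette) by squared RGB distance. */
static int nearest_pal(const uint8_t pal[][3], int n,
                       uint8_t r, uint8_t g, uint8_t b)
{
    int best = 0;
    long bestd = LONG_MAX;
    for (int i = 0; i < n; i++) {
        long dr = (long)pal[i][0] - r;
        long dg = (long)pal[i][1] - g;
        long db = (long)pal[i][2] - b;
        long d = dr * dr + dg * dg + db * db;
        if (d < bestd) { bestd = d; best = i; }
    }
    return best;
}
```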

    Or, in other cases, I might want to use RGB555/RGB565 hi-color BMP
    images, which programs like Paint.NET can't export (only indexed color,
    and 24/32 bit). Where, within TestKern and associated programs, most of
    the graphics stuff thus far works internally in terms of RGB555.

    Standard BMP uses biBitCount==16, and RGB565, but TestKern uses a variant:
    biBitCount==15, and RGB555 (storage is still 16 bits).
    The "more standard" variant being:
    biBitCount==16, biCompression==BI_BITFIELDS
    With 4 16-bit numbers glued onto the end of the BITMAPINFOHEADER.
    These then define RGBA555 as a bit-mask.
    WinCE had used this format.
    But "easier" to allow biBitCount==15,
    Which is then understood to mean native RGB555.
    Granted, tools like Paint.NET won't load this...
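    For reference, the RGB555 packing in question is a straightforward bitfield arrangement (bit 15 unused, 5 bits per channel); a sketch of the pack/unpack, replicating high bits on expand:

```c
#include <stdint.h>

/* Pack 8-bit RGB into RGB555: bit 15 unused, bits 14..10 = R,
   bits 9..5 = G, bits 4..0 = B. */
static uint16_t rgb555_pack(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3));
}

/* Expand back to 8 bits per channel, replicating the high bits into
   the low bits so that full-scale maps back to 255. */
static void rgb555_unpack(uint16_t c, uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint8_t r5 = (c >> 10) & 31, g5 = (c >> 5) & 31, b5 = c & 31;
    *r = (uint8_t)((r5 << 3) | (r5 >> 2));
    *g = (uint8_t)((g5 << 3) | (g5 >> 2));
    *b = (uint8_t)((b5 << 3) | (b5 >> 2));
}
```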



    But, kinda funny that some old/forgotten Win 3.x era tools manage to be
    closer to what I would want in these areas, than most of the modern equivalents.



    Or, sometimes I might want an image in the form of a blob of C code that
    I can add to a program (well, or use the "resource section", but there
    are still pros/cons here).

    At one point, partly by accident, my tools had created a BMP variant
    which was almost the same as the normal BMP format, except that all the
    fields ended up being natively aligned (rather than standard BMP where everything ends up misaligned).

    I could almost be tempted to revive this format, though probably
    changing the magic from "BM" to " BM" or " BMP" or similar (as there is
    some possible merit to "BMP, but with all the fields correctly aligned";
    even if none of the tools that read BMP files will understand such a
    format).

    At the time, it didn't matter (and didn't get noticed immediately)
    because I was also using them to store non-standard image data which
    normal BMP tools wouldn't recognize anyways.


    This would make sense for "resource section" images, which may be
    accessed in RAM.

    Generally, the resource section in BGBCC is effectively a WAD variant (replacing the original Windows format), IIRC with the lumps being
    exposed to the program as symbols with names like "__rsrc_lumpname".

    So, say, there is an import line like:
    mainicon=mainicon.bmp
    With a symbol like:
    __rsrc_mainicon
    Set to the start of the lump data for this image.

    Where, say, "mainicon.bmp" might be 32x32 in 4bpp RGBI or similar.


    Well, all this is also in the "limbo" of TestKern ever really getting a
    proper GUI; beyond mostly small-scale experimentation thus far, and
    annoyance at the seeming inability to get window-stack
    redraw and screen refresh "sufficiently fast" (even with a relatively
    small number of windows).


    Well, and the inability to do resolutions like 800x600 hi-color within
    the available RAM bandwidth.

    Like, "Meh, will need to settle with 640x480 256-color or similar...",
    then be annoyed about the limitations of what is possible with a
    256-color palette (and/or use color-cell modes).

    And, also annoyances that one can't display an 80x25 text window in
    640x480 (with 8x8 glyphs) without it basically taking up the whole
    screen horizontally.

    Though, likely options are smaller glyphs:
    6x7, could reuse the Boot-ROM font
    Which uses 5x6 cells with the assumption of padding pixels.
    5x6, native, could try to pack glyphs a little tighter,
    reworking font to not assume pad pixels.
    4x6, could be possible, effectively drawing glyphs like in 7-segment.
    Unclear if all of ASCII can be represented effectively though.
    Could fit 80x25 into 320x150 pixels.

    Well, and/or pull the trick of using smaller glyphs, but then using
    2-bit grayscale to try to compensate for the missing pixels.



    Could almost try to "reverse engineer" whatever Win 3.11 did (via
    DOSBox), except seemingly I can't get a windowed command-prompt in Win
    3.11 without also corrupting the display mode.

    Then trying to change the resolution in "Windows Setup" breaks the
    windows install (complaining about being unable to load files). Grr...
    Then the pain of trying to get Win3.11 reinstalled and hopefully working
    again (though, it seemed possibly some of the Win3.x installer files got corrupted somehow; had to re-copy the Win3.11 installation files into
    DOSBox).

    Some hassle later, I am back to having a mostly working Win3.11 install,
    but still can't use the MS-DOS box without corrupting the display mode,
    Grr. Not sure if an emulation issue, or mostly a "this stuff is old and
    crufty" issue.

    Oddly, despite its age, Win3.11 seems to be using comparatively large
    ~10x16 font glyphs...

    Why, exactly, in an era when screens were low-res and every pixel was expensive, would they be using 10x16 fonts rather than 8x8 or 6x8 or something?... (But, seems like in any case, Win3.11 isn't offering much
    in terms of possible pixel-saving trickery).


    Seems between then and now, the title-bar and similar got 40% bigger,
    but the comically large title bar and icons are not exactly something I
    was fond of in Win10 (IMHO, "Peak Windows", UI-wise, was sometime around
    2K or XP).

    But, like, I have walls of windows on a 4K monitor, so probably can't
    complain too much about the title bar being roughly 8 pixels bigger...

    ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BGB@21:1/5 to Chris M. Thomasson on Tue Nov 14 17:37:11 2023
    On 11/14/2023 3:31 AM, Chris M. Thomasson wrote:
    On 11/14/2023 1:24 AM, BGB wrote:
    On 11/14/2023 1:02 AM, Terje Mathisen wrote:
    Stephen Fuld wrote:
    On 11/12/2023 11:19 AM, MitchAlsup wrote:

    <
    If I buy a Lathe, I can use it forever.
    If I buy a SW license, I cannot use it forever.
    See the problem here.

    Well, there are differences.  First of all, for many sw licenses,
    you *can* use the software "forever" (i.e. lots of people are still
    running Windows XP), although if it is licensed to a particular
    hardware system, that system's life may limit your use, and you may
    not get vendor support.

    But I think the problem you are referring to is the limit on the
    number of copies you can make, or simultaneous users you can have.
    Of course, this is due to the obvious difference that, as opposed to
    a lathe, it is trivial to make an arbitrary number of copies, or
    have multiple different users use it simultaneously.  This
    difference accounts for the different licensing terms.  It makes
    things better for the software vendor, which, at least in theory,
    allows for more software to be created.

    I am still using my licensed version of Photoshop CS2, which is
    almost 20 years old. It was the last version with a permanent license
    but in order to transfer the SW to a new PC I had to contact Adobe's
    license servers.

    After 5-10 years they terminated the last remaining license server
    but instead allowed owners to download a personal copy that does not
    require licensing. I am guessing it could be watermarked with my
    personal details so it could be traced?


    For a while, I had been using "Cool Edit Pro" for audio editing, but
    what eventually wrecked this was it no longer working natively after
    Windows moved to 64 bits, and the lack of any really good/convenient
    ways of emulating older versions of Windows which:
       Continued to still work;
       Allowed some way to share files between the host machine and VM.

    For whatever reason, I have had very little success getting hardware
    virtualization to work, and all the "mainstream" VMs went over to
    requiring it, eventually leaving me with little real option other than
    QEMU and DOSBox (running Windows 3.11).

    Then ended up mostly going over to Audacity (pros/cons apparently).
    Then, in more recent years, someone had released something (as a sort
    of Windows plug-in) to make Win16 software work again on 64-bit Windows.

    But, haven't felt much need to go back to Cool Edit, but the old MS
    BitEdit and PalEdit tools lack any good modern equivalent (basically,
    sort of like Paint, but built specifically for 16-color and 256-color
    bitmap images, with direct control over the color palette and
    operating directly in terms of said palette).


    Newer programs like Paint.NET can be used in a vaguely similar way,
    but lack the ability to work explicitly with a 16 or 256 color
    palette (nor any real usable support for indexed-color graphics).

    Similarly "GrafX2" seems to be in a vaguely similar direction, but
    its UI is very weird (apparently inspired by some programs on the Amiga).

    Could almost write a tool for this, but it would be "pretty niche" in
    any case (and if it were a popular use-case, presumably someone else
    would have done so?...).

    Sometimes, there are cases where one wants to work with low-res
    graphics in terms of individual pixels and a fixed color palette
    (rather than high-res true-color graphics). Needing to use another
    image specifically as a color palette in an otherwise true-color
    oriented program is annoying and counter-productive.

    Maybe also useful would be an option for it to also be able to display
    pixel data in binary or hexadecimal, or load/dump images in a
    hexadecimal notation (bonus points if in a C based notation), ...

    ...


    Bust out some Protracker... ;^)

    https://youtu.be/kEBW8A3bw-Q

    Amiga?

    Tracker music is pretty cool sometimes, and generally much preferable to
    the "garage band butt metal" that largely displaced it in game
    soundtracks in the 2000s and onward...

    But, yeah, I had never used an Amiga personally, nor ever saw one IRL.


    It was rare to even see a Mac; most of my life has been almost entirely dominated by IBM PC clones running Windows. But I have seen
    Windows steadily change over the decades; sometimes better, sometimes
    not. I get annoyed at times with MS's seemingly repeated attempts to
    make Windows look like a baby toy (like, seemingly every other version,
    they try to f* the UI design in one way or another, and don't leave the
    options to entirely undo the damage).

    Then again, it is possible that many other people wouldn't agree with my
    UI design sentiments either (and TestKern is still pretty far from
    having a GUI as well; so I can't claim to have come up with anything
    better).

    Though, I can claim, with some confidence, that it is not "whatever the
    hell was going on" with TempleOS...

    Sort of imagining something sort of like Win2K mixed with Motif or
    similar (then again, looking at it, Motif has a curious level of
    similarity to the Win 3.x design in some ways as well).



    Technically, this is part of why I had worked on a sort of PCM /
    Wavetable mixing hardware for BJX2 (as an add-on to the existing FM
    Synth module), but hadn't yet come up with a good API interface to allow programs to submit patches, or define the association between these
    patches and the currently playing MIDI channels.

    Had also made a sort of quick/dirty patch-set for MIDI playback, but it
    isn't particularly good (basically, was quickly looking for usable clips
    in free sound-effects collections to fill out the general-MIDI
    instrument set). Technically, samples based on the original GUS sample
    set are better, but more legally questionable.

    But, theoretically, could be used for hardware-accelerated S3M playback
    (as-is, mixing S3M in software on the BJX2 core is relatively expensive).



    Basically, it handles this playback mostly by reinterpreting the note
    phase accumulator as a sample position, with a loop-start and loop-end
    value, and some phase-control flags.
    Say:
    NoteOn:
    Starts at sample 0, plays forward.

    If Loop is Set, if position hits LoopEnd, it is updated to LoopStart.
    Else (Loop is Clear), it advances to SampleEnd, and then loops on the
    last sample (until the channel is cleared or overwritten).
    If Reverse is set, the counter runs backwards, looping from LoopStart to LoopEnd.
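    The rules above can be sketched in C (a minimal software model; the
    field and function names here are made up for illustration, and the
    real BJX2 register layout surely differs):

```c
#include <stdint.h>

/* Hypothetical per-channel state for the wavetable playback unit. */
typedef struct {
    uint32_t pos;        /* phase accumulator, reused as sample position */
    uint32_t step;       /* per-output-sample advance, from note pitch   */
    uint32_t loop_start;
    uint32_t loop_end;
    uint32_t sample_end;
    int      loop;       /* Loop flag    */
    int      reverse;    /* Reverse flag */
} WtChannel;

/* Advance the position by one output sample, applying the loop rules. */
static uint32_t wt_advance(WtChannel *ch)
{
    if (ch->reverse) {
        /* counter runs backwards; on passing LoopStart, jump to LoopEnd */
        ch->pos -= ch->step;
        if (ch->pos <= ch->loop_start || ch->pos > ch->loop_end)
            ch->pos = ch->loop_end;
    } else {
        ch->pos += ch->step;
        if (ch->loop) {
            if (ch->pos >= ch->loop_end)
                ch->pos = ch->loop_start;     /* LoopEnd -> LoopStart */
        } else if (ch->pos >= ch->sample_end) {
            ch->pos = ch->sample_end - 1;     /* hold on last sample  */
        }
    }
    return ch->pos;
}
```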

    Generally, IIRC, the patch samples were understood as 8-bit A-Law (also
    used by the PCM hardware; mostly as 16-bit linear PCM uses twice as much memory; and 8-bit PCM sounds kinda like crap).
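    For reference, standard G.711 A-Law decode looks like this in C
    (assuming the hardware uses the classic G.711 variant, which may not
    match BJX2 exactly):

```c
#include <stdint.h>

/* Expand one G.711 A-Law byte to a linear PCM sample. */
static int16_t alaw_decode(uint8_t a_val)
{
    int t, seg;

    a_val ^= 0x55;                     /* even bits are stored inverted   */
    t   = (a_val & 0x0F) << 4;         /* 4-bit mantissa                  */
    seg = (a_val & 0x70) >> 4;         /* 3-bit segment (exponent)        */

    if (seg == 0)
        t += 8;                        /* first segment is linear         */
    else
        t = (t + 0x108) << (seg - 1);  /* add hidden bit, scale by segment */

    return (a_val & 0x80) ? t : -t;    /* MSB set means positive in A-Law */
}
```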

    Typically, I was also using 16kHz as a default sample rate, as it seems
    like a "good compromise" of quality and space/cost.
    8kHz and 11kHz have poor audio quality.
    32kHz and 44.1kHz need too much space (for not much more quality).
    22kHz is also a reasonable balance, but 16kHz is cheaper.


    One option is to make the software-defined patch numbers "global", but
    this is questionable (what if, say, two programs are trying to use this
    API). Though, one other option is to tie them to a given program
    instance (and internally remapped in the backend), but, yeah...

    Or, maybe, the program submits a patch and the backend gives it a handle
    (more like "open()" or similar), the program then remaps its internal
    patch numbers to the patch-handles, which can then be issued in
    "ProgramChange" commands or similar.

    Though, couldn't really use a standard WAVEFORMATEX header, as this
    lacks the parameters needed for patches (would need a header that also
    defines loop-control parameters and similar; though could likely define
    a PATCHWAVEFORMATEX or similar, as a WAVEFORMATEX with MOD/S3M style loop-control parameters glued on the end).
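    As a sketch of that idea (the WAVEFORMATEX layout matches the Windows
    mmreg.h headers; everything after it is invented for illustration):

```c
#include <stdint.h>

/* Standard WAVEFORMATEX layout (as in mmreg.h), spelled out here
 * so the sketch is self-contained. */
typedef struct {
    uint16_t wFormatTag;
    uint16_t nChannels;
    uint32_t nSamplesPerSec;
    uint32_t nAvgBytesPerSec;
    uint16_t nBlockAlign;
    uint16_t wBitsPerSample;
    uint16_t cbSize;            /* bytes of extra data following header */
} WAVEFORMATEX;

/* Hypothetical PATCHWAVEFORMATEX: WAVEFORMATEX with MOD/S3M-style
 * loop-control fields glued on the end. Field names are illustrative. */
typedef struct {
    WAVEFORMATEX wfx;           /* cbSize = size of the extra fields    */
    uint32_t dwLoopStart;       /* loop start, in samples               */
    uint32_t dwLoopEnd;         /* loop end, in samples                 */
    uint32_t dwSampleEnd;       /* total sample count                   */
    uint16_t wLoopFlags;        /* e.g. bit0=Loop, bit1=Reverse         */
    uint16_t wRootKey;          /* MIDI note played at the native rate  */
} PATCHWAVEFORMATEX;

/* Fill in the fixed header for an 8-bit mono A-Law patch. */
static void patch_init(PATCHWAVEFORMATEX *p, uint32_t rate)
{
    p->wfx.wFormatTag      = 6;        /* WAVE_FORMAT_ALAW */
    p->wfx.nChannels       = 1;
    p->wfx.nSamplesPerSec  = rate;
    p->wfx.wBitsPerSample  = 8;
    p->wfx.nBlockAlign     = 1;        /* nChannels * wBitsPerSample / 8 */
    p->wfx.nAvgBytesPerSec = rate;
    p->wfx.cbSize = (uint16_t)(sizeof(*p) - sizeof(p->wfx));
}
```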

    ...


    As for formats, I had mostly gone with MOD and S3M.
    I had less interest in the IT or XM formats as they seemed to add a bit
    too much needless complexity over something like S3M.

    Like, we don't need things like MP3-compressed patch samples or
    similar. Who even thought this was a good idea? ...


    Granted, if I were designing a tracker format, it would probably
    end up as something like a WAD variant with the patches as WAD lumps,
    probably using A-Law or ADPCM, with the music data stored as a tweaked
    version of the General MIDI stream format or similar (and/or the Doom
    MUS format).

    ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Marco Moock@21:1/5 to All on Thu Nov 16 13:40:57 2023
    On 15.10.2023, Michael S <already5chosen@yahoo.com> wrote:

    IBM Z has established niche.
    IBM POWER - not sure about it. POWER holds on for as long as IBM is
    able to make POWER chips that are competitive with (or better, exceeding)
    x86-64 and ARM64 in absolute performance and at the same time not
    ridiculously behind in price/performance. It looks to me like POWER
    is in the same rat race that effectively killed SPARC and IPF. They
    just manage to run a little better.

    Is IBM's Power the same that Apple used in older Macs?
    Then it wasn't a niche, IBM made it one.

    IBM also sold POWER workstations, but sadly discontinued that.

  • From Anton Ertl@21:1/5 to Marco Moock on Thu Nov 16 13:15:19 2023
    Marco Moock <mm+solani@dorfdsl.de> writes:
    Is IBM's Power the same that Apple used in older Macs?

    Yes, at that time it was called PowerPC.

    Then it wasn't a niche, IBM made it one.

    It seems to me that Apple made it more niche than it was at the time.

    IBM also sold POWER workstations, but sadly discontinued that.

    Would you buy one? At what cost? How many others would buy that?

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

  • From Marco Moock@21:1/5 to All on Thu Nov 16 15:06:07 2023
    On 16.11.2023, anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:

    Marco Moock <mm+solani@dorfdsl.de> writes:

    IBM also sold POWER workstations, but sadly discontinued that.

    Would you buy one?

    Maybe yes.

    At what cost?

    Depends on the performance, power consumption, spare part availability
    etc.

    How many others would buy that?

    At our university many of them existed, but they were replaced by
    Windows machines before I started to work there; I know that from old
    documentation.

  • From Anton Ertl@21:1/5 to Marco Moock on Thu Nov 16 15:19:22 2023
    Marco Moock <mm+solani@dorfdsl.de> writes:
    On 16.11.2023, anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:

    Marco Moock <mm+solani@dorfdsl.de> writes:

    IBM also sold POWER workstations, but sadly discontinued that.

    Would you buy one?

    Maybe yes.

    At what cost?

    Depends on the performance, power consumption, spare part availability
    etc.

    You can buy a "Talos II Secure Workstation" starting at $9898.24 from
    Raptor Computing Systems <https://www.raptorcs.com/TALOSII/>.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

  • From John Dallman@21:1/5 to All on Thu Nov 16 18:47:00 2023
    In article <uj52kq$1j12t$1@solani.org>, mm+solani@dorfdsl.de (Marco Moock) wrote:

    IBM also sold POWER workstations, but sadly discontinued that.

    They were solid machines, but too expensive. In the late 1990s, I was
    trying to upgrade the memory in one, and found the IBM price for standard
    DIMMs was four times what they cost on the PC market.

    Once 64-bit Windows and Linux were established, nobody was willing to buy
    POWER workstations because their price-performance was seriously inferior,
    and they didn't have compensating advantages that were worth the price.
    The Talos workstations look pretty good, but seem to sell mainly to
    people too paranoid to run x86-64 (either because of malware, or because
    of hidden management processors).

    I could do with a big-endian machine for software testing, but since
    POWER is a bit carefree about trapping on misaligned accesses, it is
    not the answer.

    John

  • From Stephen Fuld@21:1/5 to John Dallman on Thu Nov 16 14:05:11 2023
    On 10/15/2023 8:21 AM, John Dallman wrote:
    In article <c2399332-9696-4749-adc4-8d9b071c060dn@googlegroups.com>, already5chosen@yahoo.com (Michael S) wrote:

    Growing, but not yet fully-established: RISC-V.
    Is RISC-V really present in general-purpose computing?

    Not yet, but it seems to have an adequate design and enough momentum to
    get there. /Staying/ there is a different question.

    In established niches, but not growing out of them: POWER, IBM Z.
    IBM POWER - not sure about it. POWER holds on for as long as IBM is
    able to make POWER chips that are competitive with (or better, exceeding)
    x86-64 and ARM64 in absolute performance and at the same time not
    ridiculously behind in price/performance. It looks to me like POWER
    is in the same rat race that effectively killed SPARC and IPF. They
    just manage to run a little better.

    It's also used to run IBM i, which is a pretty big niche that's quite
    easy to forget. It could be replaced, since the concept of the system is
    that the hardware is replaceable, but IBM would try hard to avoid the
    costs of doing that.

    Yup. And I vaguely recall that Power, probably with a bunch of
    application specific peripherals, was a major player in the automotive
    engine control space. If that is/was true, then it represents huge volumes.


    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

  • From Thomas Koenig@21:1/5 to Anton Ertl on Thu Nov 16 21:16:16 2023
    Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
    Marco Moock <mm+solani@dorfdsl.de> writes:
    Is IBM's Power the same that Apple used in older Macs?

    Yes, at that time it was called PowerPC.

    Then it wasn't a niche, IBM made it one.

    It seems to me that Apple made it more niche than it was at the time.

    IBM also sold POWER workstations, but sadly discontinued that.

    Would you buy one? At what cost? How many others would buy that?

    A few years ago, I was involved in the purchase of two Talos
    II machines, based on POWER9, one of them for private use, the
    other for business, as a CFD workstation. At the time, the memory
    performance of POWER exceeded everything that we had on offer for
    Intel and AMD, and the price was comparable.

    One disadvantage was the poor availability of commercial software;
    you have to compile it yourself. In this case, it did not matter;
    the CFD machine was meant to run OpenFOAM, which it did.

    Now, this is no longer attractive: For a variety of reasons,
    RaptorCS did not make a Power10-based workstation, and today's
    Intel and AMD chips have better memory performance than POWER9.

    I have no idea what an IBM server would have cost, but I suspect
    it would have been much more expensive.

  • From John Dallman@21:1/5 to Stephen Fuld on Thu Nov 16 23:19:00 2023
    In article <uj63mo$2ejff$1@dont-email.me>, sfuld@alumni.cmu.edu.invalid (Stephen Fuld) wrote:

    Yup. And I vaguely recall that Power, probably with a bunch of
    application specific peripherals, was a major player in the
    automotive engine control space. If that is/was true, then it
    represents huge volumes.

    Don't know about that, but it is used in quite a few digital cameras,
    last I heard.

    John

  • From Anton Ertl@21:1/5 to Stephen Fuld on Fri Nov 17 06:41:35 2023
    Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
    And I vaguely recall that Power, probably with a bunch of
    application specific peripherals, was a major player in the automotive
    engine control space. If that is/was true, then it represents huge volumes.

    That was Motorola's business, and it was then spun off to Freescale,
    which was bought by NXP. NXP still sells Power Architecture
    processors (and they also switched the name from PowerPC to Power,
    like IBM): <https://www.nxp.com/products/processors-and-microcontrollers/power-architecture:POWER-ARCHITECTURE>

    However, an expert in the microcontroller market told me in IIRC 2016
    that NXP is deemphasizing other architectures in favor of ARM.

    And indeed, when I look at the web page above, I see "S32 Automotive
    Platform" right above "Power Architecture", and when I click on "S32
    Automotive Platform", it says: "Scalability across products based on
    Arm Cortex-A, Cortex-R and Cortex-M cores with ASIL D capabilities".

    OTOH, when I click on "Power Architecture", it says: "Automotive Power Architecture Microcontrollers" and "Power Architecture" is on the same
    level on the website as "Arm Microcontrollers", and "Arm Processors",
    not among "Additional MPU/MCUs Architectures" or "Legacy MPU/MCUs"
    (like Coldfire).

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

  • From Chris M. Thomasson@21:1/5 to George Neuner on Thu Nov 16 23:30:45 2023
    On 11/10/2023 10:35 AM, George Neuner wrote:
    On Wed, 8 Nov 2023 23:26:01 -0800, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote:

    On 11/1/2023 4:46 PM, George Neuner wrote:
    [...]

    Excellent, thanks for the info! Btw, do you mind if I ask you some
    technical questions from time to time? Can any volumetric image be
    converted into a hologram? I think so...

    I believe you are correct that any volumetric image can be rendered as
    a hologram, but I can't say for certain ... I haven't really studied
    it enough.

    A company I was working for at that time was contracted to implement
    the software and UI side of the hologram project. Our main business
    was in machine vision - mainly industrial QA/QC - but from that we had
    both image processing experience and also industry connections to
    printing technology.

    The NDAs are long expired, so I can talk about [the parts I know of]
    what we did, but wrt the image processing, apart from some system
    specific implementation tweaks, we were just following someone else's recipes.

    I am getting ready to ask you some questions. Fwiw, here is one of my
    recent tests of a volumetric fractal of mine. Low res test at 256^3:

    https://i.ibb.co/8XfnmrR/image.png

  • From John Levine@21:1/5 to All on Fri Nov 17 18:37:06 2023
    According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
    However, an expert in the microcontroller market told me in IIRC 2016
    that NXP is deemphasizing other architectures in favor of ARM.

    I wonder how much of that is the relative technical merits of the two architectures and how much is the toolchain.

    Arm provides development tools for embedded systems, including linkers
    that do link time optimization and layout control. For POWER, IBM has
    fine tools if you want to run AIX, linux, or i, but for embedded, I'm
    guessing not so much.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From MitchAlsup@21:1/5 to John Levine on Fri Nov 17 20:52:36 2023
    John Levine wrote:

    According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
    However, an expert in the microcontroller market told me in IIRC 2016
    that NXP is deemphasizing other architectures in favor of ARM.

    I wonder how much of that is the relative technical merits of the two architectures and how much is the toolchain.

    A recent paper shows ARM is essentially equivalent to RISC-V in instruction counts. https://dl.acm.org/doi/pdf/10.1145/3624062.3624233

    Arm provides development tools for embedded systems, including linkers
    that do link time optimization and layout control. For POWER, IBM has
    fine tools if you want to run AIX, linux, or i, but for embedded, I'm guessing not so much.

  • From Scott Lurndal@21:1/5 to MitchAlsup on Sat Nov 18 00:09:07 2023
    mitchalsup@aol.com (MitchAlsup) writes:
    John Levine wrote:

    According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
    However, an expert in the microcontroller market told me in IIRC 2016 that NXP is deemphasizing other architectures in favor of ARM.

    I wonder how much of that is the relative technical merits of the two
    architectures and how much is the toolchain.

    A recent paper shows ARM is essentially equivalent to RISC-V in instruction counts. https://dl.acm.org/doi/pdf/10.1145/3624062.3624233

    What is the point of comparing instruction counts? It seems to be a
    rather useless metric.

    The paper compares against some nebulous "ARMv8-A" architecture, for
    which there are nine distinct versions, each with a different mix of
    instructions.

    Even then, the dominating factors in performance are going to be
    the implementation of the architecture and the quality of implementation
    of the architecture in the compilers.

    The number of instructions generated to solve a particular problem isn't necessarily representative of workload performance.

  • From Anton Ertl@21:1/5 to Scott Lurndal on Sat Nov 18 07:39:21 2023
    scott@slp53.sl.home (Scott Lurndal) writes:
    mitchalsup@aol.com (MitchAlsup) writes:
    A recent paper shows ARM is essentially equivalent to RISC-V in instruction counts. https://dl.acm.org/doi/pdf/10.1145/3624062.3624233

    What is the point of comparing instruction counts? It seems to be a
    rather useless metric.

    Yes, especially given the rather different philosophies behind RISC-V
    (simple instructions with compressed versions for space and fuse
    instructions on more sophisticated implementations) and ARM A64 (make
    good use of the fixed 32-bit width to put in functionality that is
    expected to help performance).

    Still, one interesting result is that RISC-V's compare-and-branch
    instructions provide a significant reduction in the number of executed instructions, while A64 uses separate compare and branch instructions.
    This is also something that I have found in my work on adding carry
    and overflow flags to all GPRs in RISC-V: One won't be using them for
    most branches.
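    For illustration of the compare-and-branch point (a sketch; actual
    compiler output varies with version and flags), an equality test and
    branch is one instruction in RISC-V but two in A64:

```
    # RISC-V: fused compare-and-branch, one instruction
    beq   a0, a1, target

    // ARM A64: separate compare, then conditional branch on the flags
    cmp   x0, x1
    b.eq  target
```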

    The paper compares against some nebulous "ARMv8-A" architecture, for
    which there are nine distinct versions, each with a different mix of
    instructions.

    It's not nebulous, the paper explicitly specifies that it's the code
    you get from the two gcc versions used with -march=armv8-a+nosimd. I
    assume that this is the A64 instruction set in the original ARMv8-A,
    not in 8.1 or later.

    Even then, the dominating factors in performance are going to be
    the implementation of the architecture

    Definitely, in particular for a benchmark like STREAM which is
    designed to measure the memory subsystem of the CPU and not at all
    anything related to the instruction set.

    and the quality of implementation
    of the architecture in the compilers.

    The paper notes that this plays a role, and that's why they used two
    different compiler versions. It also notes that it is harder for the
    compiler to generate optimal code for A64 than for RISC-V, because
    there are many more options.

    They compare simulated versions of "sifive-7-series" and Cortex-A55,
    because they are both 2-wide in-order microarchitectures.

    I have results (for a different set of benchmarks) of hardware Sifive
    U74, Cortex-A55, Cortex-A53, and Bonnell (all of which are two-issue
    in-order microarchitectures). I have not processed them for comparing
    the executed instructions or the cycles (the work focussed on
    comparing the effect of certain optimizations and compared those), but
    if there is demand, I can do that.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

  • From Anton Ertl@21:1/5 to John Levine on Sat Nov 18 08:06:53 2023
    John Levine <johnl@taugh.com> writes:
    According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
    However, an expert in the microcontroller market told me in IIRC 2016
    that NXP is deemphasizing other architectures in favor of ARM.

    I wonder how much of that is the relative technical merits of the two architectures and how much is the toolchain.

    Arm provides development tools for embedded systems, including linkers
    that do link time optimization and layout control. For POWER, IBM has
    fine tools if you want to run AIX, linux, or i, but for embedded, I'm guessing not so much.

    I am sure that Motorola/Freescale/NXP have their own toolchain for
    Power, and there is also the GNU toolchain, but you may be right. NXP
    probably does not want to maintain n toolchains for n architectures indefinitely, so they want to consolidate on as few architectures as
    possible.

    And maybe they have decided (at least in 2016) that paying the ARM tax
    is cheaper than forever maintaining the Power toolchain and especially developing new Power designs that would have been the result of
    consolidating on Power. Maybe the fact that TI and STM are also
    consolidating on ARM also played a role: NXP's customers probably
    don't want to become too dependent on NXP, and asked for ARM. Now,
    with the recent moves by ARM to raise its licensing fees, maybe they
    are regretting this decision.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

  • From Michael S@21:1/5 to John Levine on Sat Nov 18 20:41:20 2023
    On Fri, 17 Nov 2023 18:37:06 -0000 (UTC)
    John Levine <johnl@taugh.com> wrote:

    According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
    However, an expert in the microcontroller market told me in IIRC 2016
    that NXP is deemphasizing other architectures in favor of ARM.

    I wonder how much of that is the relative technical merits of the two architectures and how much is the toolchain.

    Arm provides development tools for embedded systems, including linkers
    that do link time optimization and layout control. For POWER, IBM has
    fine tools if you want to run AIX, linux, or i, but for embedded, I'm guessing not so much.



    Arm stopped independent tools development at least 5 years ago,
    probably even earlier. Today all "Arm" tools are either re-branded LLVM
    or plain LLVM.

    IBM POWER tools are irrelevant for Moto->Freescale->NXP automotive
    PPC microcontrollers. These MCUs are based on various variants of the
    e200 core, which has VLE encoding as its distinguishing feature. The
    smallest and probably the most attractive member of the family can't
    run the classic fixed-width PPC32 ISA at all.
    And that's before we consider that IBM probably hasn't updated the
    32-bit back-end for their compiler since the year 2000.

    https://en.wikipedia.org/wiki/PowerPC_e200
