https://research.swtch.com/mm
Russ Cox recently published two essays on threads and synchronization,
a/k/a memory models, one at the hardware level and another for
programming languages. A 3rd is in the works on the implications for
Go.
I've said many times that threads are a terrible model for concurrent programming.
On 2021-07-09, James K. Lowden <jklowden@speakeasy.net> wrote:
Threads are just something that most hackers get crazy about in their programming puberty.
This is because, due to their intellectual limitations, they hit a wall as to the kinds of programming problems they are able to solve. So, for excitement and redemption, they turn their attention to slicing up and scrambling the execution order of solutions they understand.
Threads do have use cases, e.g. controlling GUIs with many widgets that may have to do asynchronous tasks such as flashing cursors, but in general they're way overused in situations where they're not required or where there's a better solution. Windows also screws things up on the network side: the sockets API doesn't present through file descriptors, so it's not possible, AFAIK, to single-task multiplex on them in select or poll.
Does WSAPoll achieve what you want?
There are other, less select/poll-like models for IO multiplexing in Windows too. For instance, in one project we used overlapped IO with the completion routines dispatched from WaitForMultipleObjectsEx.
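For contrast, the single-task multiplexing being discussed looks roughly like this on a POSIX system (a minimal sketch using a pipe to stand in for a socket; `read_when_ready` is an invented name, and WSAPoll on Windows has broadly the same shape):

```c
#include <assert.h>
#include <poll.h>
#include <string.h>
#include <unistd.h>

/* Block until fd is readable (or timeout_ms elapses), then read it.
 * One process watches descriptors with poll() and services whichever
 * becomes ready -- multiplexing without threads. */
ssize_t read_when_ready(int fd, char *buf, size_t len, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int n = poll(&pfd, 1, timeout_ms);
    if (n <= 0 || !(pfd.revents & POLLIN))
        return -1;  /* timeout or error */
    return read(fd, buf, len);
}
```

In a real server the pollfd array would hold every client socket, with one loop dispatching on whichever entries report POLLIN.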
On Fri, 9 Jul 2021 21:42:52 -0000 (UTC)
Kaz Kylheku <563-365-8930@kylheku.com> wrote:
I've said many times that threads are a terrible model for
concurrent programming.
Threads are just something that most hackers get crazy about in their
programming puberty.
This is because due to their intellectual limitations they hit a
wall as to the kinds of programming problems they are able to solve.
Not only does no popular programming language model multithreaded
execution, neither does it have much of a model in formal logic. I
suspect those two voids are related, and not related to my puberty.
Communicating sequential processes, conversely, has a strong
mathematical foundation.
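The CSP style has a very direct Unix rendering: sequential processes that share nothing and interact only through messages. A minimal sketch (this is the idea, not Hoare's formal notation; `csp_square` is an invented name):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* One sequential process (the child) computes a value and sends it
 * over a pipe; the other (the parent) blocks receiving it. The two
 * share no mutable state -- all interaction is the message itself. */
int csp_square(int x)
{
    int chan[2];
    if (pipe(chan) != 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {                  /* child: an independent process */
        int y = x * x;
        write(chan[1], &y, sizeof y);
        _exit(0);
    }

    int result = -1;
    read(chan[0], &result, sizeof result);  /* receive on the channel */
    waitpid(pid, NULL, 0);
    close(chan[0]);
    close(chan[1]);
    return result;
}
```

Because neither process can touch the other's memory, the whole class of data-race questions that memory models exist to answer simply does not arise.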
On Sat, 10 Jul 2021 09:01:17 +0000 (UTC), MrSpud_jTiTwi1o@qvyj9am89pyiehl99j.eu wrote:
Though threads do have use cases , eg controlling GUIs with many
widgets that may have to do asynchronous tasks such as flashing
cursors
The Plan9 GUI programming model AIUI sets up 3 queues for CSP-style programming: mouse and keyboard for input, and Window for output.
https://www.usenix.org/publications/compsystems/1989/spr_pike.pdf
It's not clear that GUIs justify threads on any basis. To the best of my knowledge, Jim Gettys's observation still stands that no multithreaded implementation of the X11 server has ever outperformed the original single-threaded model.
"James K. Lowden" <jklowden@speakeasy.net> writes:
On Fri, 9 Jul 2021 21:42:52 -0000 (UTC)
Kaz Kylheku <563-365-8930@kylheku.com> wrote:
Not only does no popular programming language model multithreaded execution, neither does it have much of a model in formal logic. I
suspect those two voids are related, and not related to my puberty.
It starts with the hardware, for example:
https://developer.arm.com/architectures/cpu-architecture/a-profile/memory-model-tool
I would also argue that C++, for example, has a well-defined
memory model.
Communicating sequential processes, conversely, has a strong
mathematical foundation.
Yes, I studied CSP in 1981 - was useful for creating formal
proofs. Wasn't a useful language.
The fact that some programmers can't be bothered to learn how
to use them properly, nor bother to learn about the underlying
hardware constraints and memory models doesn't invalidate the
concept of threaded code in any way, shape or form.
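C++11 pinned its memory model down precisely, and C11 adopted essentially the same one. The acquire/release pairing below is the kind of guarantee those models make exact (a minimal single-producer sketch; the names are mine):

```c
#include <stdatomic.h>

int payload;        /* ordinary data, written before publication */
atomic_int ready;   /* publication flag with release/acquire ordering */

/* Publisher: the release store guarantees that the payload write is
 * visible to any thread whose acquire load observes ready == 1. */
void publish(void)
{
    payload = 42;
    atomic_store_explicit(&ready, 1, memory_order_release);
}

/* Consumer: spins until the flag is set, then may safely read
 * payload -- the acquire load "synchronizes with" the release
 * store, in the standard's terms. */
int consume(void)
{
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;
    return payload;
}
```

Without the explicit orderings, nothing in either language would stop the hardware or the compiler from making the flag visible before the payload.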
On Mon, 12 Jul 2021 17:18:53 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
Most popular programming languages are simply a higher-level version of assembler - i.e. arithmetic, if/then, looping, jumps, calls, etc.
Most popular programming languages are simply a higher level version of
assembler - ie arithmetic, if/then, looping, jumps, calls etc
"Assembler" (the name of a certain kind of program, namely, one which translated machine code in mnemonic notation into the corresponding numbers/bit patterns) doesn't have control structures like if/then/else or loops. That's what differentiates high-level programming languages from it.
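The distinction can be seen in how a compiler lowers a C control structure into compare-and-branch form (the assembly in the comment is illustrative, not any particular compiler's output):

```c
/* A high-level control structure. A compiler lowers it to compares
 * and conditional branches; roughly:
 *
 *         cmp   x, 0
 *         jge   .not_negative    ; conditional branch, not an "if"
 *         mov   eax, -1
 *         ret
 *   .not_negative:
 *         cmp   x, 0
 *         jle   .is_zero
 *         ...
 *
 * The if/else exists only in the source; the machine sees branches. */
int sign(int x)
{
    if (x < 0)
        return -1;
    else if (x > 0)
        return 1;
    else
        return 0;
}
```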
On Tue, 13 Jul 2021 15:24:26 +0100
Rainer Weikusat <rweikusat@talktalk.net> wrote:
What do you think operations such as jump-if-equals, jump-if-carry-set, etc. are, if not a type of if construct? Or are you just being pedantic?
"James K. Lowden" <jklowden@speakeasy.net> writes:
https://research.swtch.com/mm
Russ Cox recently published two essays on threads and synchronization,
a/k/a memory models, one at the hardware level and another for
programming languages. A 3rd is in the works on the implications for
Go.
I've tried to read through this, but at some point the pointless, contrived examples and rampant misuse of borrowed[*] terminology became too annoying. Who is this guy, and why does he believe independent memory accesses by one CPU should become visible to another CPU in any particular order just because some other guy said so almost 50 years ago?
[*] A "litmus test" is used to determine the pH value of some liquid, which is decidedly not "binary".
On Tue, 13 Jul 2021 15:50:10 +0100
Rainer Weikusat <rweikusat@talktalk.net> wrote:
Because they aren't. They're conditional branches. Compilers for high-level programming languages employ them to implement control structures (like if/then/else). That's the difference between "high-level languages" and "machine code".
Assembler doesn't have for-next or while either, so what's your point? There is still a direct link between most high-level imperative language statements and assembler instructions, unlike languages such as SQL or Prolog. But at least we know you're a pedant now.
The only thing "we" know is that you have no clue about programming language development and actively refuse to learn anything about it.
Rainer Weikusat <rweikusat@talktalk.net> writes:
"Assembler" (the name of a certain kind of program, namely, one which
translated machine code in mnemonic notation into the corresponding
numbers/ bit patterns) doesn't have control structures like if/then/else
or loops. That's what differentiates high-level programming languages
from it.
In my opinion, the critical feature that differentiates assembly languages from higher-level languages (including C) is that an assembly language program specifies a sequence of CPU instructions, while a program in a higher-level language specifies run-time behavior.
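A small illustration of that behavioral view: the C standard specifies only the value this function returns, so an optimizer is free to emit no loop at all (gcc and clang at -O2 typically reduce it to the closed form; that observation is about common practice, not a guarantee):

```c
/* Sum 1..n the long way. Only the returned value is specified by the
 * language; the instruction sequence is the compiler's business, and
 * it may well emit just n*(n+1)/2. An assembly program, by contrast,
 * pins down the exact instructions executed. */
unsigned sum_to(unsigned n)
{
    unsigned s = 0;
    for (unsigned i = 1; i <= n; i++)
        s += i;
    return s;
}
```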
On Tue, 13 Jul 2021 14:01:50 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
In my opinion, the critical feature that differentiates assembly languages from higher-level languages (including C) is that an assembly language program specifies a sequence of CPU instructions, while a program in a higher-level language specifies run-time behavior.
Bear in mind, of course, that a lot of CPU instructions these days are essentially "high level" in that they have to be broken down into further actions by the CPU itself, e.g. x86 trigonometric opcodes, so it's turtles all the way down.
On Wed, 14 Jul 2021 00:58:40 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
True, but not particularly relevant to my point. Each target CPU
exposes an instruction set, regardless of how it's implemented. One
chip in a family might implement a given instruction directly, another
in microcode. Software has no access to the underlying turtles.
Do modern CPUs still use microcode?
The point is that assembly language specifies those instructions; higher-level languages do not (ignoring inline assembly).
Sure, but my point was that the logical constructs in a procedural language are similar to those in assembler and generally map directly as 1 -> N. The same cannot be said for languages such as SQL, Prolog, etc., where there is no direct mapping between a lot of their constructs and assembler.
MrSpud__r5a_biAU@waq_91fv.org writes:
Sure, but my point was that the logical constructs in a procedural language are similar to those in assembler and generally map directly as 1 -> N.
And my point is that no, they don't.
If I write
printf("Hello, world\n");
in a C program, nothing in the C language says anything about what CPU instructions will be generated -- and of course the instructions
Do modern CPUs still use microcode?
Not in the sense of the olden days.
There is a blob of loadable "stuff" that most Intel CPUs and older
AMD cpus would load from the BIOS that handled various things where
the possibility of bugs was higher than average, allowing such
bugs to be repaired without metal fixes. But it's not microcode
in the classic sense - most instructions are decoded and executed
in logic.
Keith Thompson , dans le message
<87v95d2uof.fsf@nosuchdomain.example.com>, a écrit :
True, but not particularly relevant to my point. Each target CPU
exposes an instruction set, regardless of how it's implemented. One
chip in a family might implement a given instruction directly, another
in microcode. Software has no access to the underlying turtles.
I always wonder if it would make sense to give software access to the "underlying turtles", as you put it. Not for generic code, of course, but for code that needs to be extremely fast, the bits that are currently written as CPU-specific, and even CPU-generation-specific, assembly code, like the FFTs in a codec.
On Wed, 14 Jul 2021 15:05:53 GMT, scott@slp53.sl.home (Scott Lurndal) wrote:
No wonder the transistor count of modern CPUs is so phenomenally high in that case :) I wouldn't want to even think about how you'd build a circuit in logic gates to solve trig functions, for example. Just doing division is hard enough even in assembler if the instruction isn't provided by the CPU, going by a video I watched.
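Division without a divide instruction is a nice concrete case: binary long division reduces entirely to shifts, compares, and subtracts, which any CPU has. A sketch in C of the shift-and-subtract (restoring) algorithm (`div_shift_sub` is my name for it; trig functions are typically handled differently, e.g. by CORDIC or polynomial approximation):

```c
#include <stdint.h>

/* Unsigned binary long division: bring down one bit of the dividend
 * at a time, MSB first, exactly like decimal long division by hand.
 * This is the fallback where no divide instruction exists.
 * The divisor must be nonzero. */
uint32_t div_shift_sub(uint32_t dividend, uint32_t divisor)
{
    uint32_t quotient = 0, remainder = 0;
    for (int bit = 31; bit >= 0; bit--) {
        remainder = (remainder << 1) | ((dividend >> bit) & 1);
        if (remainder >= divisor) {      /* compare + subtract */
            remainder -= divisor;
            quotient |= UINT32_C(1) << bit;
        }
    }
    return quotient;
}
```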
On 14 Jul 2021 15:33:17 GMT
Nicolas George <nicolas$george@salle-s.org> wrote:
I always wonder if it would make sense to give software access to the "underlying turtles", as you put it. Not for generic code, of course, but for code that needs to be extremely fast, the bits that are currently written as CPU-specific, and even CPU-generation-specific, assembly code, like the FFTs in a codec.
Probably far too complex to implement, as you'd need a whole host of extra assembler instructions to support it. If you need that kind of literally to-the-metal logic then you're probably better off putting an FPGA into your circuit.
Kaz Kylheku <563-365-8930@kylheku.com> writes:
On 2021-07-09, James K. Lowden <jklowden@speakeasy.net> wrote:
Threads are just something that most hackers get crazy about in their programming puberty.
I disagree.
Threads are a highly useful paradigm for utilizing the resources
available on modern processors.
Consider, for example, that the most successful multithreaded application on any hardware is the operating system itself.
The fact that some programmers can't be bothered to learn how
to use them properly, nor bother to learn about the underlying
hardware constraints and memory models doesn't invalidate the
concept of threaded code in any way, shape or form.
Most popular programming languages are simply a higher level
version of assembler - ie arithmetic, if/then, looping, jumps,
calls etc
"Assembler" (the name of a certain kind of program, namely, one which translated machine code in mnemonic notation into the corresponding numbers/bit patterns) doesn't have control structures like if/then/else or loops. That's what differentiates high-level programming languages from it.
What do you think operations such as jump-if-equals,
jump-if-carry-set etc are if not a type of if construct? Or are you
just being pedantic?
Processors don't have loop or "else" opcodes. They don't have "call", either, afaik. His statement that such constructs are what distinguish "high level" languages from assembler isn't pedantry; it's the textbook definition. It's why Fortran and Algol and C and Cobol were invented.
Not according to me; according to those who did the inventing.
I think the OP, in referencing "assembler", really meant a
class of imperative programming langauges that sought, as a design
criterion, to be convertible to machine code. That would distinguish
them not just from logical and functional languages (Prolog, ML) but
also from Lisp.
I'm sure I read Dennis Ritchie say that C is an idealized assembler for
an idealized machine, but I've never been able to track it down. I
think it's quite accurate, for some value of "idealized".
On 2021-07-12, Scott Lurndal <scott@slp53.sl.home> wrote:
Kaz Kylheku <563-365-8930@kylheku.com> writes:
I disagree.
Threads are a highly useful paradigm for utilizing the resources
available on modern processors.
Consider, for example, the most sucessful multithreaded application
on any hardware is the Operating System itself.
Well, yes and no. Operating systems have concurrency, but it's not necessarily the same as "threading".
If we look back at early Unix,
SMP support can be introduced into this paradigm with great care. E.g. Linux first introduced non-preemptive SMP, and then evolved the ability to opt into preemption at compile time (which is an incredibly bad idea to enable).
In the user space, the Unix fathers were careful to avoid introducing anything like threads;
In any case, the concurrent programming at the kernel level is
substantially more sane than the haphazard user space threading models
bolted onto processes. It's almost a different beast.
The fact that some programmers can't be bothered to learn how
to use them properly, nor bother to learn about the underlying
hardware constraints and memory models doesn't invalidate the
concept of threaded code in any way, shape or form.
The problem is that some of the people who have designed threading interfaces for operating systems may actually be in this camp;
On Tue, 13 Jul 2021 14:34:28 +0000 (UTC)
MrSpud_oc@vu9ga1.gov.uk wrote:
Rainer's not being pedantic. He's being precise.
Processors don't have loop or "else" opcodes.
On Tue, 13 Jul 2021 16:05:59 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
He claims Plan 9 as the first true multiprocessor x86 OS without a global lock in 1997 -
I'm not sure that's important to his point.
yet Dynix/PTX had been shipping for almost a decade at that point,
and SVR4.2 ES/MP (highly scalable) and Chorus/Mix development started
in the late 1980s. Unisys was shipping Opus by 1997 with 32
processors running SVR4.2 ES/MP on top of the Chorus microkernel with
fine grained locking.
Are any of those freely available as open source? (Trying to avoid loaded language.) I think Cox was restricting his claim to systems whose global lock, or lack thereof, could be independently verified.
No wonder the transistor count of modern CPUs is so phenomenally high in that case :) I wouldn't want to even think about how you'd build a circuit in logic gates to solve trig functions, for example. Just doing division is hard enough even in assembler if the instruction isn't provided by the CPU, going by a video I watched.
Well, nowadays, most of that is handled by the VHDL tools using libraries
of existing functionality.
// Add two BCD digits plus the carry_in, producing sum and carry_out
//
module bcd_adder(a,b,carry_in,sum,carry_out);
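In software terms, the contract implied by that port list can be sketched in C as follows (`bcd_add_digit` is my naming; the behavior assumed is the conventional decimal-carry rule for one BCD digit):

```c
/* One-digit BCD add, mirroring the quoted module's ports: a and b
 * are digits 0..9, carry_in is 0 or 1. The binary sum is corrected
 * by 10 when it overflows the decimal digit -- the correction the
 * hardware adder implements in gates. */
unsigned bcd_add_digit(unsigned a, unsigned b, unsigned carry_in,
                       unsigned *carry_out)
{
    unsigned sum = a + b + carry_in;   /* binary sum, 0..19 */
    if (sum > 9) {
        *carry_out = 1;
        return sum - 10;
    }
    *carry_out = 0;
    return sum;
}
```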
The chip I'm currently working on has two dozen 2.5 GHz ARMv9 cores,
over a dozen high-end DSP (Digital Signal Processors), hardware blocks
to manage ethernet packets (ingress, egress, classification,
routing, deep packet inspection, TLS initiation/termination,
and packet header manipulation)
and hardware blocks for machine learning and various proprietary
signal processing blocks. And a virtualizable hardware mechanism
to divide the hardware resources amongst virtual machines in a
secure, high-performance manner.
On Wed, 14 Jul 2021 17:37:10 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
Sounds interesting. Can you tell us what it's for, or is that classified?
On Wed, 14 Jul 2021 17:30:40 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
Well, nowadays, most of that is handled by the VHDL tools using libraries
of existing functionality.
Fair enough. But if there's a bug in the chip, does any human have a chance
of figuring out where it is or how to fix it?
// Add two BCD digits plus the carry_in, producing sum and carry_out
//
module bcd_adder(a,b,carry_in,sum,carry_out);
Looks like a mix of a declarative and procedural language. Is that fair?
MrSpud_bBq7@q7hlm8jyc5498ve028.biz writes:
On Wed, 14 Jul 2021 17:37:10 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
The chip I'm currently working on has two dozen 2.5Ghz ARMv9 cores,
over a dozen high-end DSP (Digital Signal Processors), hardware blocks
to manage ethernet packets (ingress, egress, classification,
routing, deep packet inspection, TLS initiation/termination,
and packet header manipulation)
and hardware blocks for machine learning and various proprietary
signal processing blocks. And a virtualizable hardware mechanism
to divide the hardware resources amongst virtual machines in a
secure, high-performance manner.
Sounds interesting. Can you tell us what it's for, or is that classified?
5G cellular base stations.
On Thu, 15 Jul 2021 13:51:17 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
MrSpud_bBq7@q7hlm8jyc5498ve028.biz writes:
On Wed, 14 Jul 2021 17:37:10 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
The chip I'm currently working on has two dozen 2.5Ghz ARMv9 cores,
over a dozen high-end DSP (Digital Signal Processors), hardware blocks
to manage ethernet packets (ingress, egress, classification,
routing, deep packet inspection, TLS initiation/termination,
and packet header manipulation)
and hardware blocks for machine learning and various proprietary
signal processing blocks. And a virtualizable hardware mechanism
to divide the hardware resources amongst virtual machines in a
secure, high-performance manner.
Sounds interesting. Can you tell us what it's for, or is that classified?
5G cellular base stations.
Why does a base station need machine learning? It's simply a multiplexer.
I think Cox was restricting his claim to systems
whose global lock, or lack thereof, could be independently verified.
How about published papers? There are many that have been published
between ACM, IEEE, DIGITAL, usenix and others that describe the
operating systems above.
On Wed, 14 Jul 2021 22:30:15 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
I think Cox was restricting his claim to systems
whose global lock, or lack thereof, could be independently verified.
How about published papers? There are many that have been published
between ACM, IEEE, DIGITAL, usenix and others that describe the
operating systems above.
That's a fair point, Scott. I'll concede Bell Labs frequently turned a
blind eye toward whatever IBM in particular was doing.
Rainer Weikusat <rweikusat@talktalk.net> wrote:
Who is this guy
I think if you read more about Russ Cox, you'll find he's no fool.
why does he believe independent memory accesses by one CPU should
become visible to another CPU in any particular order just because
some other guy said so almost 50 years ago?
I don't know who the "some other guy" is?
He's exploring what the programmer in a given language can expect from
the same source code running on different machines. Every programmer
has some model of memory in their heads -- correct or not -- but in
some cases what's "correct" holds only sometimes.
"James K. Lowden" <jklowden@speakeasy.net> writes:
On Tue, 13 Jul 2021 14:34:28 +0000 (UTC)
MrSpud_oc@vu9ga1.gov.uk wrote:
Most popular programming languages are simply a higher level
version of assembler - ie arithmetic, if/then, looping, jumps,
calls etc
"Assembler" (the name of a certain kind of program, namely, one which
translated machine code in mnemonic notation into the corresponding
numbers/ bit patterns) doesn't have control structures like
if/then/else or loops. That's what differentiates high-level
programming languages from it.
What do you think operations such as jump-if-equals,
jump-if-carry-set etc are if not a type of if construct? Or are you
just being pedantic?
Rainer's not being pedantic. He's being precise.
Processors don't have loop or "else" opcodes.
SOB on the PDP-11 qualifies as a loop opcode (subtract one and branch).
and of course the "LOOP" and "LOOPcc" instructions on the Intel x86 processors.
Burroughs medium systems didn't offer an assembler at all, but had a
higher level language called BPL (Burroughs Programming Language) that
had constructs sufficient to write efficient low level code.
why does he believe independent memory accesses by one CPU should
become visible to another CPU in any particular order just because
some other guy said so almost 50 years ago?
I don't know who the "some other guy" is?
He's referring to Leslie Lamport for the definition of "sequentially
consistent machine", which basically means all CPUs will see all memory
accesses by any CPU in program order.
Every programmer
has some model of memory in their heads -- correct or not -- but in
some cases what's "correct" holds only sometimes.
In the real world,
there are no ordering constraints on independent memory accesses
MrSpud_87ivx0b0f@h4qtzo.eu writes:
5G cellular base stations.
Why does a base station need machine learning? Its simply a multiplexer.
Handling 5G requires far more than a multiplexer. Consider the
signal processing required for a radio head with a MIMO antenna
array. Once you've teased the data out of the hundreds of streams
active on the radio side, you need to process it, error correct it,
accommodate reflections from nearby obstructions, and produce a
data packet. That goes from the radio head to the base station
where it is converted from CPRI/eCPRI to IP packets (at multiples of
100Gbits/sec) and through a gateway to the internet.
On Thu, 15 Jul 2021 16:05:58 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
MrSpud_87ivx0b0f@h4qtzo.eu writes:
5G cellular base stations.
Why does a base station need machine learning? Its simply a multiplexer.
Handling 5G requires far more than a multiplexer. Consider the
signal processing required for a radio head with a MIMO antenna
array. Once you've teased the data out of the hundreds of streams
active on the radio side, you need to process it, error correct it,
accommodate reflections from nearby obstructions, and produce a
data packet. That goes from the radio head to the base station
where it is converted from CPRI/eCPRI to IP packets (at multiples of
100Gbits/sec) and through a gateway to the internet.
I still don't see why any of that needs machine learning, it's just
bog-standard signal processing.
Rainer Weikusat <rweikusat@talktalk.net> wrote:
why does he believe independent memory accesses by one CPU should
become visible to another CPU in any particular order just because
some other guy said so almost 50 years ago?
I don't know who the "some other guy" is?
He's referring to Leslie Lamport for the definition of "sequentially
consistent machine", which basically means all CPUs will see all memory
accesses by any CPU in program order.
Thank you. I would say anyone dismissing Leslie Lamport either doesn't
know what he's doing, or had better be very sure he does.
Every programmer
has some model of memory in their heads -- correct or not -- but in
some cases what's "correct" holds only sometimes.
In the real world,
I'm very often introduced to the real world, as if I live elsewhere. I
know that wasn't your intent, though.
there are no ordering constraints on independent memory accesses
That is part of Cox's point. There's hardly any more real-world
encounter with the behavior of memory caches on hardware than while
writing an OS and trying to determine what minimal guarantees the
hardware provides.
Let me make up my own litmus test (sorry, I just work here):
Processor A:
x = 1
y = x + 1
Processor B:
z = x + y
If B runs after A, z = 3. If before A, z = 0. If during A (between
the two assignments), z = 1. But in no event, under the Intel TSO
model, can z be 2 (provided B reads y before x): B seeing y = 2
implies x's store is already visible, so x must read as 1.
As I understand Cox's paper, your assertion that there's "no [reliable]
ordering constraint" isn't quite true. The hardware may offer some
guarantees. Intel does (nowadays) and ARM does not. Your assertion is
*safe*, in the sense that by adhering to that rule as a programmer you
won't get caught relying on guarantees that aren't there.
But it's not
optimal, because you'll sometimes introduce synchronization overhead
where guarantees are present.
MrSpud_e3u@8stt_1g7x3hci.tv writes:
On Thu, 15 Jul 2021 16:05:58 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
MrSpud_87ivx0b0f@h4qtzo.eu writes:
5G cellular base stations.
Why does a base station need machine learning? It's simply a multiplexer.
Handling 5G requires far more than a multiplexer. Consider the
signal processing required for a radio head with a MIMO antenna
array. Once you've teased the data out of the hundreds of streams
active on the radio side, you need to process it, error correct it,
accommodate reflections from nearby obstructions, and produce a
data packet. That goes from the radio head to the base station
where it is converted from CPRI/eCPRI to IP packets (at multiples of
100Gbits/sec) and through a gateway to the internet.

I still don't see why any of that needs machine learning, it's just
bog-standard signal processing.
Actually, that's not the case. 5G is _quite_ different from 3G/4G/LTE;
we make chips for both.
And the use case for the ML is currently proprietary.