The J1 is an amazing chip. Hard to improve on it, but it does have two weaknesses, both of which could benefit from a sharing economy. One is the shift register, and the other is the lack of math functions.
With the J1, you get a choice between a small one-bit shift register, or a large barrel shift register which more than doubles the size of the CPU, from 70 to 150 LUTs.
I would think that 8 J1 Forth cores could share a barrel shift register. That would increase the overall chip size, not by 100%, but by about 12.5 percent, maybe twice that if you add in control logic and networking.
Of course two cores may want the barrel shift register at the same time, so you would need a pause option and some control logic.
Here is the line of code which updates the state on the J1 CPU. The J1 runs at 80 MHz with barely any math functions. The MicroCore has all the functions, and runs at 20 MHz. So running at 80 MHz and occasionally waiting four cycles is not that bad: an 80/20 MHz barrel CPU. Of course reality will be a bit worse than that.
{ pc, dsp, st0, rsp } <= { pcN, dspN, st0N, rspN };
The following code change would solve the pause problem.
case (pause)
  1'b0: // No pause: update the state
    { pc, dsp, st0, rsp } <= { pcN, dspN, st0N, rspN };
  1'b1: // Pause: keep the current state
    { pc, dsp, st0, rsp } <= { pc, dsp, st0, rsp };
endcase
The simplest way to do the control logic is with a circulating one-hot token. On every clock cycle one of the cores gets access to the barrel shift register. On average a core would have to wait 3.5 clock cycles for access.
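A minimal Verilog sketch of that circulating one-hot grant (the module and signal names here are my own, purely illustrative):

```verilog
// Hypothetical sketch: rotating one-hot grant for 8 cores sharing
// one barrel shifter. Core i may use the shifter only while grant[i]
// is high; otherwise it asserts its pause signal and holds state.
module rotating_grant (
    input  wire       clk,
    input  wire       rst,
    output reg  [7:0] grant   // one-hot: which core owns the shifter
);
  always @(posedge clk)
    if (rst)
      grant <= 8'b0000_0001;            // start with core 0
    else
      grant <= {grant[6:0], grant[7]};  // rotate left, one core per cycle
endmodule
```

Each core's pause input would then be something like `pause_i = wants_shift_i & ~grant[i]`, feeding the case statement above.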
And of course the J1's other problem is that it has almost no math functions. Worse yet, there is no space for additional opcodes. In contrast the MicroCore, focused on real-time control, has some 82 instructions. https://github.com/microCore-VHDL/microCore/blob/master/documents/uCore_instructions.pdf
So I could imagine a many-core J1 barrel processor, with two extra instruction bits allowing for a bunch of shared math functions.
Then on every clock cycle, every core gets exclusive access to about 8 math functions. With 8 such groupings there are 64 additional math functions. On every clock cycle, each core would also have access to its own 16 dedicated ALU functions. The larger functions could be shared. The more frequently used instructions would be in the dedicated ALUs. Popular shared functions, such as multiply, could even be available twice. Here is the study of instruction frequency:
https://users.ece.cmu.edu/~koopman/stack_computers/sec6_3.html
The slower math functions could be given two clock cycles to execute, or even three. The multi-cycle instructions could be pipelined.
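As a sketch of how the two extra instruction bits could steer an ALU-class instruction either to the local ALU or to a shared function group (the field layout and all names are my own assumptions, not the J1's actual encoding):

```verilog
// Hypothetical decode sketch: insn[17:16] are the two proposed extra bits.
// 2'b00 selects the core's own 16-function ALU; the other three values
// select one of the shared math-function groups.
module unit_select (
    input  wire [17:0] insn,
    input  wire        grant,             // from the shared-unit arbiter
    input  wire [15:0] local_alu_result,
    input  wire [15:0] shared_result,
    output wire        shared_req,        // request the shared unit
    output wire        pause,             // stall until granted
    output wire [15:0] st0N               // next top-of-stack
);
  wire use_shared = (insn[17:16] != 2'b00);
  assign shared_req = use_shared;
  assign pause      = use_shared & ~grant;          // wait for our turn
  assign st0N       = use_shared ? shared_result
                                 : local_alu_result;
endmodule
```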
It is interesting to note that the Parallax Propeller does something similar. Every 8th cycle, each of the 8 cores gets access to the shared hub memory and the shared CORDIC functions.
I am also mindful that some math functions are faster, and some are slower. Or maybe the math functions are all located far from some of the cores, so by default they should get an extra clock cycle for the signal to propagate. With the pause option, those variable latencies are easy to accommodate.
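A sketch of how the pause line could be stretched for the slower or more distant functions (the per-function latency input and all names are hypothetical; in practice the latency might come from a small per-opcode lookup table):

```verilog
// Hypothetical sketch: hold pause for a per-function number of extra
// cycles, so slow or physically distant shared functions can take
// two or three cycles instead of one.
module shared_wait (
    input  wire       clk,
    input  wire       rst,
    input  wire       start,     // this core was granted access this cycle
    input  wire [1:0] latency,   // extra cycles needed by this function
    output wire       pause      // keep the core's state frozen
);
  reg [1:0] count;
  always @(posedge clk)
    if (rst)             count <= 2'd0;
    else if (start)      count <= latency;       // load the wait counter
    else if (count != 0) count <= count - 2'd1;  // count down to zero
  // Pause while extra cycles remain; zero-latency functions never pause.
  assign pause = start ? (latency != 0) : (count != 0);
endmodule
```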
So what do you all think? Is this a good idea?
Did I learn anything this semester? Would this be a good master’s project?
Does anyone have a need for a cpu with 8 forth cores?
Over on the AI and robotics discussion group, one person is controlling 14 motors, so he could use a 16-core CPU. Then there would be less FPGA development, and more software development, which is presumably faster. And another person is also controlling a lot of things. Details omitted.
As for me, I finished the classes and exams for my first semester of graduate school in Digital Circuit Design. As a long-time software developer, when they first taught us Verilog synthesis, my reaction was that it was completely unintuitive. By the end of the semester, my Verilog term project, a frequency and duty-cycle meter, got a perfect score. Progress.
My master’s thesis to build a forth CPU was approved, but there is no way I could design something better than the J1.
I remember that I used to think that it was a nutty CPU: why was he mixing jumps and addresses in one instruction? Now I have the skills to understand and appreciate it. So there is no point building a single Forth CPU, but a many-core J1 barrel processor would be a most reasonable project. I could even reuse some of the 60+ VHDL math functions from the MicroCore.
Comments? I am still new to all of this stuff. I have not yet built a chip as complex as this proposed one, so your expertise would be most appreciated.
It is very hard to find people who have any idea what I am talking about.
It is better to be a wolf that hunts mice, than a coyote that eats carrion.
The world is full of maintenance programmers who have realized that they
are too incompetent to ever write a program of their own, so their new plan is to find source-code written by a real programmer and make some modification to it, then claim that they are smarter than the original programmer.
I don't have any respect for maintenance programmers.
AFAIK, nobody in the world has respect for maintenance programmers.
Thank you everyone for your replies.
I am most interested in learning more about your cpu designs.
I think that every cpu I look at has some good ideas.
I am actually doing a talk about Forth CPUs at FPGA World in Stockholm in September.
I will probably repeat it a few other places, as it evolves.
I will cover the J1 family, Mecrisp-Ice, MicroCore, EP16/24/32, HowerJ's forth-cpu, and any others you want to suggest. There was one other also mentioned on here recently, I forget the name. He had a very nice video. Or I can stay quiet about your design if you prefer.
I still think that there is opportunity in working with large numbers of Forth cores. I think a grid of Forth cores for image processing would be most interesting. Images are the one computational problem I know of which is really in a plane, so a systolic array is most appropriate. Plus it makes for a good demo. I am also busy learning about CORDIC, quaternions, and image processing in general. I find all of this stuff quite fascinating.
Plus I am still finishing up a ton of homework. The semester ended, but a bunch of us are still doing the assignments. So sorry for the slow reply.
On Thursday, July 13, 2023 at 1:35:02 PM UTC-4, Christopher Lozinski wrote:
I am most interested in learning more about your cpu designs.

For some historical overview (i.e. this is not brand new stuff) you might want to mention the RTX2000/2010
by Harris. It was (is?) used in spacecraft because it is a rad-hardened CPU. Due to the way it was designed, with the 8 MHz clock it performed 10M Forth instructions per second.
One amazing spec to me is: 1 cycle sub-routine call and return!
https://en.wikipedia.org/wiki/RTX2010
Looks like it is re-marketed by Intersil (year 2000): https://www.mouser.com/catalog/specsheets/intersil_fn3961.pdf
The RTX is no more a Forth processor than a Pentium or an ARM would be.
A subroutine requires just two things,
save the next instruction address on the return stack and jump to the address specified,
easy to do in one clock cycle.
A return is even easier, just jump to the address on the return stack and pop the return stack.
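That single-cycle call and return can be sketched roughly as follows (the names, widths, and stack depth here are illustrative, not the J1's actual source):

```verilog
// Illustrative sketch: single-cycle call and return for a stack CPU
// with a dedicated hardware return stack.
module call_return (
    input  wire        clk,
    input  wire        do_call,
    input  wire        do_ret,
    input  wire [12:0] target      // call address from the instruction
);
  reg [12:0] pc;
  reg [12:0] rstack [0:31];        // return stack; rsp points at the top
  reg [4:0]  rsp;
  always @(posedge clk)
    if (do_call) begin
      rstack[rsp + 5'd1] <= pc + 13'd1;  // save the return address...
      rsp <= rsp + 5'd1;
      pc  <= target;                     // ...and jump, all in one cycle
    end else if (do_ret) begin
      pc  <= rstack[rsp];                // jump to the saved address
      rsp <= rsp - 5'd1;                 // and pop the return stack
    end else
      pc <= pc + 13'd1;
endmodule
```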
On Thursday, July 13, 2023 at 1:32:10 PM UTC-7, Lorem Ipsum wrote:
The RTX is no more a Forth processor than a Pentium or an ARM would be.

Rick Collins doesn't know what he is talking about!
Of course the RTX-2000 is a Forth processor. The Pentium or ARM are not.
A subroutine requires just two things,
save the next instruction address on the return stack and jump to the address specified,
easy to do in one clock cycle.

In my design, the subroutine call is two instructions because it does two things.

A return is even easier, just jump to the address on the return stack and pop the return stack.

The subroutine return is one instruction because it does one thing.
As a general rule of thumb, the people who claim that everything is easy
are those who have not done anything.
I think Rick Collins is a total fake.
Christopher Lozinski admitted that he has taken one class on the subject
and is not an expert. Most likely, Rick Collins has also taken one class
on the subject, got a 'C', and now spends all of his time on comp.lang.forth pretending to be the world's expert on the subject, lording over the admitted novices such as Christopher Lozinski. Rick Collins is a total fake!
On 14/07/2023 6:09 am, Brian Fox wrote:
One amazing spec to me is: 1 cycle sub-routine call and return!

If it sounds too good to be true, it usually is. '1 cycle' instructions sound great compared to CPUs of the 1970's. But it would be comparing apples and oranges. My intro into 1 cycle CPUs was AVR 8-bit. It wasn't at all positive. I enjoy saving bytes and gaining speed but when every instruction is a minimum of 16 bits and the speed is the same, it's a lost cause.
On Friday, July 14, 2023 at 2:05:41 AM UTC-4, dxforth wrote:
I enjoy saving bytes and gaining speed but when every instruction is a minimum of 16 bits and the speed is the same, it's a lost cause.
Are you saying the AVR has 16 bit instructions? I've never worked with them at that level.
On Friday, July 14, 2023 at 2:05:41 AM UTC-4, dxforth wrote:
If it sounds too good to be true, it usually is.

Perhaps a difference worth considering with the RTX is that in some cases multiple instructions can be fetched in one 16-bit read and then operate in parallel in the processor. The room to do that was there because there are no bits required for register selection and the instruction set is very small. This did require a smarter compiler, however.
Generally in my experience call/return on a register machine
uses many more clocks than one, because you have to push/pop
registers most of the time.
But as with all things in engineering there is no free lunch.
"Fast, Good, Cheap. Pick two"
"Fast, Good, Cheap. Pick two"

Not sure how that trope applies here.
On Friday, July 14, 2023 at 3:27:38 PM UTC-4, Lorem Ipsum wrote:
Not sure how that trope applies here.
I was considering stack machine versus register machine
advantages/tradeoffs as I wrote it.
I am sure there is a better trope. I just don't have one.
On 15/07/2023 2:04 pm, Brian Fox wrote:
I am sure there is a better trope. I just don't have one.

In the case of AVR8 there is some sort of trade-off at play. These
devices have quite small flash yet instruction sizes are relatively
large. Were there a better option Atmel (or a competitor) would have
used it.
On Saturday, July 15, 2023 at 1:24:15 AM UTC-4, dxforth wrote:
Were there a better option Atmel (or a competitor) would have used it.
Better in what way? Often decisions are made so the product does not appear "goofy". For example, users are biased to hate instruction words that are not powers of 2. I suppose that's because of the typical mixing of data and instructions. I think some of the PICs have 12 bit instructions and no mixing. I suppose that would be a Harvard architecture. I can't think of any others.
...
On 15/07/2023 5:34 pm, Lorem Ipsum wrote:
I think some of the PICs have 12 bit instructions and no mixing. I suppose that would be a Harvard architecture. I can't think of any others.
The 8085 had few registers, variable-length instructions as short as one byte, 16-bit push/pops (also one byte). To my mind the latter would have been a better model for the flash sizes typically found in AVR devices.
What use is a 1 cycle instruction CPU if the program doesn't fit.
On Sunday, July 16, 2023 at 6:45:51 AM UTC-4, dxforth wrote:
What use is a 1 cycle instruction CPU if the program doesn't fit.
Sorry, I don't follow what you are trying to say.
On 16/07/2023 8:48 pm, Lorem Ipsum wrote:
Sorry, I don't follow what you are trying to say.

Current 8-bit CPU's are not memory efficient.
On Sunday, July 16, 2023 at 11:21:09 PM UTC-4, dxforth wrote:
Current 8-bit CPU's are not memory efficient.

And this is relevant to the conversation in what way? I'm just not following the flow of thought. I mentioned that instruction sizes vary, but are mostly powers of two, and gave an exception. You made a comment that doesn't seem to flow from that. I don't get the connection.
On 17/07/2023 3:38 pm, Lorem Ipsum wrote:
I don't get the connection.
It's news to me customers are biased towards instructions being a power (multiple?)
of two. They buy what's available which at the moment is 1 cycle CPU's. Even if
customers are aware it may not be best fit, what are they going to do - design their
own CPU? Manufacturers have customers by the short and curlies.
On 18/07/2023 8:47 am, Lorem Ipsum wrote:
There are times you make absolutely no sense. Customers have choice. The vast majority of CPUs have instruction sizes of 8, 16, 32, or 64 bits. Some, when addresses are combined with the opcode, will result in 48 bit instructions, but otherwise, the non-binary power sizes are very unusual. The only one I can even think of is the 12 bit instruction PIC.
So, what are you trying to say?

I've already said it. Current 8-bit CPU's are not memory efficient. Users didn't decide that.
In article <a262ff14-cd28-4713...@googlegroups.com>,
Lorem Ipsum <gnuarm.del...@gmail.com> wrote:
There are times you make absolutely no sense. Customers have choice. The vast majority of CPUs have instruction sizes of 8, 16, 32, or 64 bits. Some, when addresses are combined with the opcode, will result in 48 bit instructions, but otherwise, the non-binary power sizes are very unusual. The only one I can even think of is the 12 bit instruction PIC.

This makes sense because of memories. Memories in multiples of bytes (octets) are practical and have gained the upper hand not only in hardware but also in software. It is unimaginable that the billion euros' investment needed for 10-bit memories would be duplicated. This more or less dictates a decision that busses are a multiple of 8, with consequences for the CPU architectures.

Groetjes Albert
On 18/07/2023 3:27 pm, albert wrote:
<SNIP>
Only issue would be data stored in program memory. FlashForth handles
it through de-blocking and address munging. To a user, data in program
memory appears byte-addressed and word-aligned when in reality it's a
different beast altogether.
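A rough sketch of what such address munging can look like: the user presents a byte address, and internally the low bit selects a byte within a 16-bit cell while the remaining bits form the word address. This is an illustrative little-endian scheme, not FlashForth's actual layout; `flash`, `store_cell`, and `fetch_byte` are made-up names.

```python
# Word-addressed 16-bit program memory with a byte-addressed view on top.
# Hypothetical scheme for illustration -- FlashForth's real layout differs.
flash = {}  # word address -> 16-bit cell

def store_cell(waddr, value):
    """Store a 16-bit cell at a word address."""
    flash[waddr] = value & 0xFFFF

def fetch_byte(baddr):
    """Byte-addressed read: word address = baddr >> 1, low bit picks the byte."""
    cell = flash.get(baddr >> 1, 0)
    return (cell >> 8) if (baddr & 1) else (cell & 0xFF)
```

The user-visible byte address space is twice the size of the word address space, and word-aligned data simply means the low bit is zero.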
In article <a262ff14-cd28-4713-875e-73c604b9faa5n@googlegroups.com>,
Lorem Ipsum <gnuarm.deletethisbit@gmail.com> wrote:
<SNIP>
No rocket science there. Harvard architectures have been used in many
devices, some with mismatched data sizes.
I don't see any constraints, other than that users like to see lots of
symmetry. Meanwhile, the very symmetric 68000 and Power PC
architectures have faded away to be replaced by the Intel lines.
Rick C.
In article <f3836786-0a36-4d32...@googlegroups.com>,
Lorem Ipsum <gnuarm.del...@gmail.com> wrote:
<SNIP>
Harvard architectures with the possibility to write program memory
are in fact botched von Neumann architectures.
The newest development is that the CISCy Intel lines fade
away quickly, to be replaced by the all too symmetric RISC-V.
Users (like me) like that.
I realize that you like to pull people's legs and often live in a fantasy world. Why do you want to pretend like Risc-V is a significant CPU competing with the Intel lines?
Haha, wait for it! RISC-V is advancing quickly. Have you seen the
announcement that the Debian team plans official riscv64 architecture
support in the upcoming version 13 "Trixie"?
I have one of the original rPis. It sucks as a desktop machine, even
running just one tab in a browser. Maybe it would be a bit better with
an rPi4, but it's not competition for mainstream processors.
What is different about the Risc-V compared to the 68000 family or the
Power-PC family? Why would it compete when the others did not? Didn't
the Power-PC have the full backing of Sun and IBM, yet it still could
not keep up? Even Apple switched to Intel processors.
I don't know how Risc-V will do in the mobile market. From what I've
read, the reason it is getting traction is simply because there are no
royalties to pay. So it appeals to the Chinese market.
<SNIP>
Have they ever produced anything that was significant in the mainstream
markets?
Rick C.
Don't be ridiculous. All but the most technically advanced stuff