I have a Z80 project in mind and would like to build a compiler for a
Z80. I was wondering whether modern backend techniques, e.g. SSA, can
be applied successfully to these old CPUs.
I know GCC has backends for some older architectures, but these do
weird gymnastics such as implementing a virtual CPU in RTL and then
lowering further.
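[For readers who have not met SSA: here is a minimal illustration (my
own toy example, not tied to any particular compiler) of what SSA form
looks like for a small C function. In SSA every variable is assigned
exactly once, and control-flow joins introduce phi functions choosing
among the incoming definitions; the comments show one plausible SSA
rendering.]

    #include <stdio.h>

    int f(int x) {
        int y = x + 1;      /* SSA: y1 = x0 + 1                 */
        if (y > 10)
            y = y - 10;     /* SSA: y2 = y1 - 10                */
                            /* SSA join here: y3 = phi(y2, y1)  */
        return y * 2;       /* SSA: return y3 * 2               */
    }

    int main(void) {
        printf("%d\n", f(12));  /* (12+1-10)*2 = 6 */
        return 0;
    }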
For the Z80, however, there are a number of compiler options. ...
Your best bet is probably SDCC. This is a multi-target open source
compiler that is in regular development and aimed specifically at
small CISC microcontroller cores. The Z80 is one of its targets.
Modern (as in post-1985) techniques require lots of memory, so they
are of no use if you plan to host your compiler on a Z80 itself.
Otherwise, your next problem is mapping techniques that are designed
to work well with orthogonal register architectures onto the Z80
(which is certainly not one).
[I believe the plan is to cross-compile with Z80 as the target. -John]
Luke A. Guest <laguest@archeia.com> wrote:
> I have a Z80 project in mind and would like to build a compiler for a
> Z80. I was wondering whether modern backend techniques, e.g. SSA, can
> be applied successfully to these old CPUs.
> I know GCC has backends for some older architectures, but these do
> weird gymnastics such as implementing a virtual CPU in RTL and then
> lowering further.
There is, it seems, an LLVM backend for Z80: https://github.com/jacobly0/llvm-project
(see the 'z80' branch)
It appears TI calculators are the main use case.
I don't know the current status/functionality, but it would be fun to see what the various LLVM passes do to the generated code.
It seems like there's been some work done on Rust for Z80 (and 6502): https://github.com/jacobly0/llvm-project/issues/15
So eventually computer architects introduced machines with
general-purpose registers like the PDP-11, the VAX, and the RISCs; and compiler writers developed techniques like graph colouring to make
good use of these architectures.
Maybe with the increased memory and processing power available now,
one could do better, but given that special-purpose registers are
mostly a thing of the past, there has not been much research into
that, as far as I am aware.
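[To make the graph-colouring idea above concrete, here is a toy
Chaitin-style simplify/select allocator in C. The interference graph
and K are hard-coded, and spilling and coalescing are omitted, so
treat it as a sketch of the technique rather than a real allocator.]

    #include <stdio.h>
    #include <stdbool.h>

    #define NVARS 5   /* virtual registers v0..v4    */
    #define K     3   /* machine registers available */

    /* adj[i][j]: vi and vj are live at the same time, so they
       must not share a machine register */
    static bool adj[NVARS][NVARS] = {
        {0,1,1,0,0},
        {1,0,1,1,0},
        {1,1,0,1,0},
        {0,1,1,0,1},
        {0,0,0,1,0},
    };

    int main(void) {
        int stack[NVARS], sp = 0;
        bool removed[NVARS] = {false};

        /* simplify: repeatedly push some node with < K remaining
           neighbours; if none exists, a real allocator would spill */
        while (sp < NVARS) {
            int found = -1;
            for (int i = 0; i < NVARS && found < 0; i++) {
                if (removed[i]) continue;
                int degree = 0;
                for (int j = 0; j < NVARS; j++)
                    if (!removed[j] && adj[i][j]) degree++;
                if (degree < K) found = i;
            }
            if (found < 0) { puts("stuck: would have to spill"); return 1; }
            removed[found] = true;
            stack[sp++] = found;
        }

        /* select: pop nodes, giving each the lowest colour (register)
           not used by an already-coloured neighbour */
        int colour[NVARS];
        for (int i = 0; i < NVARS; i++) colour[i] = -1;
        while (sp > 0) {
            int v = stack[--sp];
            bool used[K] = {false};
            for (int j = 0; j < NVARS; j++)
                if (adj[v][j] && colour[j] >= 0) used[colour[j]] = true;
            for (int c = 0; c < K; c++)
                if (!used[c]) { colour[v] = c; break; }
        }

        for (int i = 0; i < NVARS; i++)
            printf("v%d -> r%d\n", i, colour[i]);
        return 0;
    }

[A real allocator builds the interference graph from liveness
analysis; and it is exactly the irregular case, where not every
register can hold every value, that this classic formulation handles
poorly.]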
See "Optimal Register Allocation in Polynomial Time". A graph-coloring >approach that can handle irregularities well (as long as there are not
too many registers). SDCC uses such a register allocator for some
backends, including z80.
Philipp Klaus Krause <pkk@spth.de> writes:
I am wondering about one thing in the empirical results in your paper:
Why is the code size not monotonically falling with increased numbers
of assignments? Are these independent runs with different
(pseudo-random) assignments?
Which raises the question: in your empirical work you stopped at 10^8 assignments (in some cases, fewer). How did you get provably optimal assignments on the Z80 with its 9 registers?
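[Back-of-the-envelope, my arithmetic rather than anything from the
paper: a naive enumeration of assignments of n variables to 9
registers visits 9^n candidates, and 9^9 is already about 3.9*10^8,
so a budget of 10^8 naively covers fewer than nine simultaneously
live variables; hence the question of how optimality can be proved.]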
Castañeda Lozano
I had some questions which were mostly answered by the paper, but
maybe you can offer additional insights:
* Am I right that earlier register allocators were bad for irregular
register sets, and that's why general-purpose registers won once
compilers became dominant? Why did general-purpose registers become
dominant?
* What are the key points why your work can deal with irregular
register sets, and earlier approaches are pretty bad at that?
It seems to me that you use the CPU power available now to try out
many different assignments, while earlier work has balked at that.
* Do you have any idea why no good approach for dealing with irregular
register sets was found in, say, the 1970s and 1980s, when irregular
register sets were more common (e.g. on the Z80 and the 8086)?
Your approach is an (ideally exhaustive) search that uses more CPU
power (and memory?) than was available then. At the time, one would
have resorted to heuristics, but apparently no general effective
heuristics have been found.
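[As a crude illustration of the "search over many assignments, keep
the cheapest legal one" idea, here is a brute-force stand-in, not the
polynomial-time algorithm from the paper: a toy interference graph
with an invented cost model that makes register 0 cheap for
arithmetic, imitating an accumulator.]

    #include <stdio.h>
    #include <stdbool.h>

    #define NVARS 4
    #define NREGS 3   /* pretend registers r0..r2, r0 an "accumulator" */

    static bool interfere[NVARS][NVARS] = {
        {0,1,1,0},
        {1,0,1,0},
        {1,1,0,1},
        {0,0,1,0},
    };

    /* invented cost model: v0 feeds arithmetic, which is cheap only
       when v0 lives in the accumulator r0 */
    static int cost(const int asg[NVARS]) {
        return asg[0] == 0 ? 1 : 5;
    }

    int main(void) {
        int best[NVARS];
        int best_cost = -1;
        long total = 1;
        for (int i = 0; i < NVARS; i++) total *= NREGS; /* NREGS^NVARS */

        for (long code = 0; code < total; code++) {
            /* decode 'code' as a base-NREGS register assignment */
            int asg[NVARS];
            long c = code;
            for (int i = 0; i < NVARS; i++) { asg[i] = (int)(c % NREGS); c /= NREGS; }

            /* legality: interfering variables need distinct registers */
            bool ok = true;
            for (int i = 0; i < NVARS && ok; i++)
                for (int j = i + 1; j < NVARS && ok; j++)
                    if (interfere[i][j] && asg[i] == asg[j]) ok = false;
            if (!ok) continue;

            int k = cost(asg);
            if (best_cost < 0 || k < best_cost) {
                best_cost = k;
                for (int i = 0; i < NVARS; i++) best[i] = asg[i];
            }
        }

        printf("best cost %d:", best_cost);
        for (int i = 0; i < NVARS; i++) printf(" v%d->r%d", i, best[i]);
        printf("\n");
        return 0;
    }

[The point of the paper, as I understand it, is that this exponential
enumeration can be avoided: exploiting the bounded structure of real
control-flow graphs makes optimality reachable in polynomial time.]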
On the one hand, we have the theoretical bound on the number of assignments, which is useful for proving that we can be optimal in polynomial time.
On the other hand, getting a provably optimal result when compiling an individual function is easier to achieve, since the theoretical bound is a worst case.
I suspect that there is no interest in bringing FREQUENCY back to Fortran,
or any other language, though.
[Legend says that in at least one compiler, FREQUENCY was implemented backward and nobody noticed. -John]
As for the back-end, it seems to me that the major problem with the
Z80 is that it does not have general-purpose registers; instead, many instructions deal with specific registers. Many early architectures
were like that, and assembly programmers could puzzle out good
register assignments, but compilers were not particularly good at it.
So eventually computer architects introduced machines with
general-purpose registers like the PDP-11, the VAX, and the RISCs; and
compiler writers developed techniques like graph colouring to make
good use of these architectures.