I'm reading Tracy Kidder's "The Soul of a New Machine", in which a team at Data General designs a new machine (codenamed "Eagle", later named the MV/8000). It is a 32-bit extension of a previous architecture (the 16-bit Eclipse). One of the recurring themes is that they did not want to create a machine with a mode bit, and that they succeeded in this.

However, the book leaves out how this was technically achieved, and it also doesn't go into why it was so attractive to build a machine without a mode bit. It is not a technical book, so the details may have been distorted somewhat. Still, you get the feeling from reading it that a "mode bit" solution was common (and hence feasible) at the time, but was deemed unattractive by the engineers, perhaps for aesthetic reasons. The book also makes it sound like an immensely difficult task to create a design without a mode bit, one which this particular team somehow overcame.

I found this description of how it was achieved:

http://people.cs.clemson.edu/~mark/330/kidder/no_mode_bit.txt

It seems basically to be about using a previously unused portion of the opcode space for the new instructions. I must admit I was a bit disappointed that it was "just that".
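To check my own understanding of that link, here is a toy sketch (in C, with an opcode split I made up; the values bear no relation to the real Eclipse encodings) of how one decoder can serve both instruction sets when the new instructions occupy previously unused opcode space:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy decoder: pretend first words below 0xC000 were already defined by
     * the 16-bit Eclipse and everything from 0xC000 up was unassigned.  The
     * new 32-bit instructions are placed in the unassigned region, so old and
     * new binaries are decoded by the same front end with no mode bit.  The
     * split point is invented purely for illustration. */

    enum insn_class { ECLIPSE_16, EAGLE_32 };

    static enum insn_class classify(uint16_t first_word)
    {
        return (first_word < 0xC000) ? ECLIPSE_16 : EAGLE_32;
    }

    int main(void)
    {
        uint16_t stream[] = { 0x1234, 0xC001, 0xBFFF };
        for (unsigned i = 0; i < sizeof stream / sizeof stream[0]; i++)
            printf("%04x -> %s\n", (unsigned)stream[i],
                   classify(stream[i]) == ECLIPSE_16 ? "old 16-bit op"
                                                     : "new 32-bit op");
        return 0;
    }

Still, I think this leaves some questions unanswered: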

Firstly, how did the 16-bit processes live in the 32-bit address space? I think that is the key challenge in making a 32-bit extension "without a mode bit"; extending the instruction set, on the other hand, is a relatively common undertaking. Since there's no description of how it happened, one could assume that the 16-bit code simply accesses memory as it always did, perhaps seeing some kind of virtualized/banked view of memory (with new CPU registers controlling where its first address lands), or something like that. But I don't know if there's more to it than that. In that case one could argue it sort of was a "mode bit" solution: the 16-bit processes could run alongside the other processes only by virtue of special features added to the CPU. Whether this implies a new bit in a control register, or just some newly added control register, is unclear.
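To make that speculation concrete, here is a toy model of the "banked view" idea. This is purely my guess at how it could work in principle; the per-process base register below is hypothetical, not something documented by DG:

    #include <stdint.h>

    /* Hypothetical model: each 16-bit process gets a 2^16-address window
     * somewhere in the 32-bit logical address space, positioned by a
     * per-process base register that the OS loads on a context switch.
     * 16-bit code never sees wide addresses; 32-bit code supplies them
     * directly.  Illustration of my guess only, not the MV/8000's
     * documented mechanism. */

    struct context16 {
        uint32_t base;               /* start of this process's 16-bit window */
    };

    uint32_t logical_from_16bit(const struct context16 *ctx, uint16_t addr16)
    {
        return ctx->base + addr16;   /* old code stays inside its window */
    }

    uint32_t logical_from_32bit(uint32_t addr32)
    {
        return addr32;               /* new code uses the full 32 bits */
    }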

Secondly, why was it so appealing to create a machine without a mode bit? Much of the benefit touted in the book is that customers wanted to run their old software. But that doesn't seem to speak against a mode bit, since the whole purpose of a mode bit is backwards compatibility. When AMD extended x86 to 64 bits, at least according to my understanding of the term "mode bit", that is exactly what they added: a bit that puts the CPU in 64-bit mode, and another bit that makes a process execute in a "sub-mode" of 64-bit mode (to enable compatibility with 32-bit applications). The essence of the sub-mode is that the CPU interprets the instruction stream as the old 32-bit instructions, but the 32-bit memory accesses are resolved through the new page-table format (set up by the 64-bit-aware operating system) and eventually mapped into the full physical address space. Also, the 32-bit code can be preempted by 64-bit code. Like the Data General solution, this allows 32-bit programs to run alongside 64-bit programs (16-bit vs. 32-bit in the DG case). So from a customer's point of view there appears to be no difference at all. Hence the only benefit could have been in the implementation, i.e. simplifying the design, but the book doesn't make it sound like that was the concern, since the mode bit seemed to be common even at that time (and later architectures have also employed it, as the x86-64 case shows).
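For reference, my understanding of the x86-64 arrangement boils down to something like the following (heavily simplified: the "bits" in real hardware are EFER.LMA plus the L flag of the current code-segment descriptor, and I'm ignoring real mode and 16-bit protected mode entirely):

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified model of the x86-64 mode/sub-mode scheme described above: a
     * global long-mode-active flag plus a per-code-segment L bit decide how
     * the instruction stream is decoded.  Real mode, virtual-8086 and 16-bit
     * segments are left out; this only illustrates the mode-bit idea. */

    enum exec_mode { LEGACY_32, COMPATIBILITY_32, LONG_64 };

    static enum exec_mode current_mode(bool long_mode_active, bool cs_l_bit)
    {
        if (!long_mode_active)
            return LEGACY_32;        /* pre-x86-64 behaviour            */
        if (cs_l_bit)
            return LONG_64;          /* 64-bit code under a 64-bit OS   */
        return COMPATIBILITY_32;     /* 32-bit app under a 64-bit OS    */
    }

    int main(void)
    {
        printf("%d %d %d\n",
               current_mode(false, false),  /* legacy mode   */
               current_mode(true,  false),  /* compatibility */
               current_mode(true,  true));  /* 64-bit mode   */
        return 0;
    }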

I'm sure there's something I missed, so it would be great if someone could discuss the technical details and virtues of this "no mode bit" design in more depth.

  • In those days - the days of moving the common word size from 16 bits to 32 bits - most of the new 32-bit architectures had entirely different instruction sets from the same mfr's 16-bit line, even if they could also execute the 16-bit instruction set with a "mode bit". This led to marketing uncertainty: as people upgraded projects to new 32-bit machines they saw no reason to stay with the same mfr; as long as it was a new architecture, why not choose the best of the new machines from whatever mfr? Lack of a "mode bit" suggested an easier "incremental" transition: therefore, stay with DG.
    – davidbak
    Commented Jun 15, 2016 at 20:18
  • This may be of interest: "The Micro-Architecture of the Eclipse MV/8000" dl.acm.org/doi/pdf/10.5555/800084.802715
    Commented Jun 12, 2022 at 19:42

3 Answers


The answer is that Ed deCastro, president of Data General, had established a team of engineers in North Carolina specifically to design the next-generation CPU. He assigned the task of support and incremental enhancements to us, the Massachusetts team. Three times we proposed a major new architecture, each time with a very sensible mode bit, and described it as a modest incremental enhancement. Each time, Ed saw through our disguise and rejected the proposal, expecting the North Carolina team to succeed. Ed believed that regardless of how we attempted to disguise our proposals, he would know it was a new-generation architecture if it had a mode bit. So we had to propose a new-generation architecture with no mode bit, even if that made it less efficient. That's how we got it past Ed deCastro. See The Soul of a New Machine, by Tracy Kidder.

  • Hi Carl, thanks for the info. Yes, it is also my impression (from reading the book) that the mode-bit discussion was very much about political implications. Good to have insider info - it sounds like the MV/8000 was a very exciting project to be part of.
    – Morty
    Commented Sep 28, 2015 at 20:47

In theory "no mode bit" would allow you to use the old, 16-bit, operating system with absolutely no modifications, and that OS would be able to launch 32-bit apps, although the new 32-bit apps would be stuck with 16-bits of virtual address, so would be able to use the 32-bit registers and new instructions, but would misbehave if they accessed virtual addresses larger than $2^{16}$.

With a mode bit, the old 16-bit OS would have had to be modified to figure out whether a program was 16-bit or 32-bit and then set the mode bit appropriately before launching it.

In practice, it seems the MV/8000 actually had a mode bit. Elsewhere on Mark Smotherman's web page at Clemson he has posted the Data General ECLIPSE MV/8000 Principles of Operation (1980). If you look in Appendix E (starting on page 369) you will see that the MV/8000 had two completely different page-table mechanisms. The specific machine the MV/8000 was backwards compatible with was the C/350, and the C/350 had its own 16-bit Memory Allocation and Protection Unit, with specific ways of controlling that unit. For 32-bit logical-to-physical operation you would instead turn on the Address Translation Unit (described in Chapter 3, starting at page 31).

In practice, what that means is that when you execute a 16-bit instruction in 32-bit mode, it is specified that the high 16 bits of the logical address are set to 0. There must also be some specification of what happens to the high 16 bits of an address when you execute a 32-bit instruction in 16-bit mode, but I wasn't able to find it during my brief perusal of the manual.
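Concretely, the address formation seems to amount to something like the following. This is my own simplified paraphrase of the Appendix E behaviour, not the manual's wording:

    #include <stdint.h>

    /* Simplified paraphrase: when a 16-bit (Eclipse-style) instruction runs
     * with the 32-bit Address Translation Unit enabled, the high 16 bits of
     * the logical address are forced to zero, confining old code to the
     * bottom 2^16 addresses of the logical space; a 32-bit (MV-style)
     * instruction supplies the full logical address.  Illustration only. */

    uint32_t logical_addr_from_16bit_insn(uint16_t addr16)
    {
        return (uint32_t)addr16;     /* high 16 bits forced to 0 */
    }

    uint32_t logical_addr_from_32bit_insn(uint32_t addr32)
    {
        return addr32;               /* full 32-bit logical address */
    }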

So it's less a question of whether a mode bit is a good or a bad thing. It's more that there was no particularly good reason to use a mode bit to differentiate between 16-bit and 32-bit instructions. The 16-bit instructions use 16 bits of logical address (with the high 16 bits set to 0) and 16-bit registers, and the 32-bit instructions use 32 bits of logical address and 32-bit registers. The old OS "just works" on the new machine, but you can also try out the new instructions by running a new program under the old OS.

  • Hi, OK, this makes the objective of "no mode bit" clearer - so the objective was to be able to boot the original 16-bit o/s, but still launch a 32-bit program from there. However, as you say, it would be impossible to use the 32-bit logical address space from within 32-bit programs running in that mode. In a way it is similar to what Intel did with the 16-bit to 32-bit transition. There it was also possible to execute 32-bit instructions (accessing the full 32-bit registers) from within a 16-bit program (running, for instance, under MS-DOS). However, at the same time they had...
    – Morty
    Commented Aug 1, 2015 at 9:45
  • ... mode bits for entering the "true" 32-bit protected mode (which also allowed paging). A difference, though, is that in 32-bit mode the encoding of 32-bit instructions differs from their encoding in 16-bit mode (since the "default" is different), but the capability is the same. On the other hand, with the x86 transition to 64-bit the instruction encoding was completely changed, so a 32-bit program (or a 16-bit program) can't use the 64-bit registers etc. It requires an updated o/s launching the process in 64-bit mode ("long mode").
    – Morty
    Commented Aug 1, 2015 at 9:48
  • However, one could still question the merits of stressing this "no mode bit" design, since firstly there is a mode bit, and secondly it seems to be very much a corner case ("running the old o/s with a new application") where it offers any benefit - wouldn't most customers want to run the new o/s, which can take full advantage of the hardware? The important feature here is that the new o/s can run the old apps! But as the link I posted mentions, this isn't even possible: the programs require re-linking and even re-compilation (due to changes in the o/s), rendering the CPU compatibility aspect moot!
    – Morty
    Commented Aug 1, 2015 at 9:54

I think part of the reason was the DEC VAX 780. DEC, DG's main competitor, had beaten DG to the creation of a 32-bit minicomputer. However, DEC handled backwards compatibility with an inelegant kludge: the early VAX contained an embedded PDP-11 (the VAX's 16-bit ancestor), and switching between PDP-11 mode and VAX 32-bit mode was effectively switching between two machines. The MV/8000, by contrast, allowed 16-bit and 32-bit programs to co-exist concurrently under the same OS, much as the Intel x86 CPUs, POWER and SPARC later allowed 32-bit and 64-bit programs to co-exist.
