06 May 2024

75 years old TODAY – the modern computer

Written by

Richard Pawson

75 years ago today – on 6th May 1949 – Maurice Wilkes and his team at the Cambridge Mathematical Laboratory switched on the EDSAC and ran its first program. Computer historians are, rightly, reluctant to define ‘the first computer’ – unless with very specific qualification, such as ‘the first [fill-in-the-blank] computer’ or ‘the first computer to …’. There had been automated computing devices before the EDSAC, so why my claim that this was the first ‘modern’ computer?

The EDSAC was distinguished by two things:

  • It was designed specifically to be applied to real work, rather than as proof-of-concept for automated computing. That was also true of the Harvard Mark I (which went live in 1944) and the ENIAC (1945), but …
  • The EDSAC was the first machine in the world to follow a design that would later be named the ‘Von Neumann Architecture’ (VNA). Before I receive howls of protest from the North: yes, the Manchester Small Scale Experimental Machine (SSEM), also known as the ‘Baby’, was the first working ‘stored program computer’, but, as I shall explain below, the ‘stored program’ concept is just one aspect of the VNA – the two terms are not synonymous. Also, as its name suggests, the SSEM was a proof-of-concept machine, focussed primarily on proving the viability of the Williams-Kilburn Tube, an innovative technology for working storage. (Having proven the concept, the Manchester team went on to design and implement a full-blown machine, the Mark I – but that didn’t go live until later in 1949.)

The VNA is the basis of all modern computers. If you’ve bought the line – peddled by some industry vendors and by an alarming number of school and college textbooks – that the VNA has been superseded by the so-called ‘Harvard architecture’, then you need to read my paper, The Myth of the Harvard Architecture (https://metalup.org/harvardarchitecture/Description.html) – the only paper about the so-called Harvard architecture published in any peer-reviewed journal.

So what exactly is the ‘Von Neumann Architecture’? John (‘Johnny’ to his friends and colleagues) Von Neumann was the most revered and sought-after applied mathematician in the United States throughout WWII and up until his death in 1957 at the tragically young age of 53. Every major military research project wanted to get a small slice of his prodigious mathematical ability, including the Manhattan Project. (I loved the film Oppenheimer, so much so that I saw it three times in the opening three weeks. But it is a shame that it featured the whole pantheon of American physicists – including a fictional involvement of Albert Einstein – yet made no mention of Von Neumann, even though he made significant contributions, was physically present at the Trinity test, and is mentioned multiple times in the book, American Prometheus, on which the film was based.) Norman Macrae, former editor of The Economist and Von Neumann’s biographer, described him as ‘one of the greatest minds of the 20th century’.

In 1945, some months before the ENIAC – the first electronic computer to be put to useful work – would go live, its inventors, John Mauchly and Presper Eckert, were already proposing a successor machine, to be known as the EDVAC. Von Neumann was invited to participate in a preliminary discussion of ideas. After the meeting – reputedly on the long train-ride home – Von Neumann hand-wrote a 101-page set of technical notes, entitled ‘First Draft of a Report on the EDVAC’. He didn’t put his name to it – that was added later by his boss, after the (never completed) report was typed and distributed. Eckert and Mauchly were annoyed, to put it mildly, that only Von Neumann’s name was attached to the report, since they felt that the notes reflected the work of the whole committee. Nonetheless, most historians concur that many of the ideas in the report bear the hallmarks of Von Neumann’s own thinking, not to mention his fecundity. Although not all those ideas were taken forward, the report did set out the core ideas for what would eventually become the EDVAC.

Maurice Wilkes, based at Cambridge University in the UK, already had a keen interest in the field of automated computing. He learned about the ideas proposed for the EDVAC and initiated a project to build a machine along similar lines. That machine was given the name EDSAC, and it would go live before the EDVAC itself.

You can, today, see a recreation of the EDSAC at The National Museum of Computing (TNMOC), which is located in the grounds of Bletchley Park. (TNMOC is completely independent from the Bletchley Park Museum – and is, in my opinion, by far the better of the two museums.) The recreated EDSAC is nearly complete and the project director, Andrew Herbert, tells me that the team hopes to have it running programs ‘within the 75th anniversary year of the original’.

After the ‘first draft’ report, Von Neumann himself didn’t contribute significantly to the EDVAC. He went on to design and build his own machine at Princeton, which adopted most of the EDVAC’s core design principles, but added the further radical innovation – now standard practice – that the computer should process all the bits of a ‘word’ in parallel, rather than serially as they were in both the EDVAC and EDSAC.

So what exactly were those core design principles? From the many specific ideas in the report it is possible to distil some broader principles; the following is my own analysis, which yields seven:

  1. Large addressable read/write memory
  2. Separate ‘organs’ for storage, arithmetic, and control
  3. Binary number representation
  4. Programming by writing sequential atomic instructions
  5. Program executed from fast random-access memory
  6. Instructions operating on variable addresses
  7. Interchangeable memory

While these principles weren’t all conceived by Von Neumann, it seems likely that several of them were. Some now seem so obvious that they hardly feel like principles worth stating at all – but they were radical at the time. As I explain and expand on each principle below, I have tried to show how it contrasted with the thinking and practice that had gone before.

1. Large addressable read/write memory

This now seems like a no-brainer, but it wasn’t in 1945. The two most important prior computing machines were both built around ‘accumulators’, each of which stored, and could perform addition/subtraction on, a single number – 20 of them on the all-electronic ENIAC, and 60 on the electro-mechanical Harvard Mark I. The EDVAC design called for thousands of numbers to be held in fast, addressable (what we would today call ‘random-access’), read-write memory. Partly this was prompted by Eckert’s proposal to apply the ‘mercury delay lines’ used in wartime radar to make a form of fast data storage. But it was also prompted by Von Neumann’s vision of new kinds of application. Prior applications were concerned with generating large amounts of tabular data output from a small number of working ‘variables’ (similar to the vision of Charles Babbage, more than a century earlier) – in particular, with the numerical solution of non-linear differential equations. Von Neumann, however, foresaw applications that would involve accessing and updating very large volumes of data. He was, amongst many other things, one of the pioneers of modern weather forecasting using a ‘finite-difference’ numerical simulation, rather than a high-level and somewhat subjective human interpretation of visual weather charts.

This first principle was, possibly, the hardest to realise in practice. For the next two decades at least, the story of computer evolution would be dominated by the search for effective technologies for ‘fast’ memory: mercury delay lines and the Williams-Kilburn Tube (both mentioned above), rotating magnetic drums, and then solid-state ‘core’ memory – comprising tiny rings of ferrite literally woven into looms of fine wires. Only after the transition to semiconductor memory, and the rapid scaling characterised by Moore’s Law, would memory cease to be the dominant constraint on the application of automated computing.

2. Separate ‘organs’ for storage, arithmetic, and control

Although the proposed separation of ‘organs’ (devices) responsible for storage, arithmetic, and control offered some advantages, it was primarily motivated by the previous principle. Moving up from tens of memory units to thousands meant that it wasn’t practical to continue the previous pattern, wherein each storage unit (each ‘accumulator’) had its own processing and control logic. Whence was born the idea of the ‘central processing unit’, which, 25 years later, would be realised as a single solid-state semiconductor device – the microprocessor. Inside the processor there would be further specialisation of function in the form of a handful of super-fast storage units, known as registers, each with a dedicated purpose – such as the Accumulator, Instruction Register, and Program Counter – hardwired to different arithmetic and/or control circuits.

3. Binary number representation

Modern programmers might be astonished to learn that most automated computers before the EDVAC design stored and processed numbers in decimal notation. Howard Aiken – the driving force behind the Harvard Mark I and its three successors – was strongly opposed to the idea of using binary: he felt it was unnatural, that it would be plagued with rounding issues, and that any saving in the ‘cost’ of performing arithmetic operations would be outweighed by the cost of transforming data between binary and decimal notations for both input and output. However, binary representation would quickly become the dominant paradigm.
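The conversion overhead that worried Aiken can be sketched in a few lines of modern Python (an illustration only – the actual machines did this in hardware or microcode, not software). Decimal digits from the input medium must be encoded into binary before any arithmetic, and the binary result decoded back into decimal for output; the arithmetic itself, once in binary, is cheap:

```python
def decimal_to_binary(digits: str) -> int:
    """Encode a decimal numeral (e.g. read from a punched card) into binary."""
    value = 0
    for d in digits:
        value = value * 10 + (ord(d) - ord("0"))  # repeated multiply-and-add
    return value

def binary_to_decimal(value: int) -> str:
    """Decode a binary word back into decimal digits for printing."""
    if value == 0:
        return "0"
    digits = []
    while value > 0:
        value, remainder = divmod(value, 10)  # repeated division by ten
        digits.append(chr(remainder + ord("0")))
    return "".join(reversed(digits))

# Only the input/output boundary pays the conversion cost.
a = decimal_to_binary("1949")
b = decimal_to_binary("75")
print(binary_to_decimal(a + b))  # → 2024
```

History's verdict was that this boundary cost is paid once per value, whereas the saving in arithmetic circuitry is enjoyed on every operation.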

4. Programming by writing sequential atomic instructions

Programming the ENIAC had involved physically configuring the operation of each accumulator and their interconnection, through plugboards and switches. Thought had been given to allowing this physical configuration to be specified in software (to use a modern term), but by the time of the EDVAC draft, the concept of a program (as we now call it) had changed – to mean the specification of separate, atomic, instructions to be processed in a sequence. (This was probably the only design principle that Aiken had previously established with the Harvard Mark I that is still the mainstream practice.)

5. Program executed from fast random-access memory

Program instructions should be held in numbered memory locations, randomly accessible at high speed. This principle contrasted with the design of the Harvard Mark I, where each instruction was read and executed from paper tape; where fully automatic repetition was possible only by gluing the ends of the tape together to form, literally, a ‘loop’; and where any form of branching required manual intervention by the operator to skip ahead or change the tape. By executing the program from fast, random-access memory, the three ‘procedural’ programming capabilities that we now call sequence, selection, and iteration were established. Executing the program from fast memory also implied the capability to load the program, before execution, from a non-volatile external medium – such as paper or magnetic tape. This principle has become known as the ‘stored program computer’. Note that it is only one of seven in this list.
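The point can be made concrete with a toy fetch-execute loop, sketched in Python. (The instruction set here is invented for illustration – it is not EDVAC’s, or any real machine’s, order code.) Because instructions are fetched by address, a branch is nothing more than resetting the program counter, which is what makes sequence, selection, and iteration all achievable in software rather than by cutting tape:

```python
def run(memory):
    """Execute the toy program held in `memory` until it halts."""
    pc = 0   # program counter
    acc = 0  # accumulator
    while True:
        op, operand = memory[pc]  # fetch the instruction at address pc
        pc += 1                   # default: proceed in sequence
        if op == "LOAD":          # acc = memory[addr]
            acc = memory[operand]
        elif op == "ADD":         # acc += memory[addr]
            acc += memory[operand]
        elif op == "STORE":       # memory[addr] = acc
            memory[operand] = acc
        elif op == "BRPOS":       # selection: branch if acc > 0
            if acc > 0:
                pc = operand
        elif op == "HALT":
            return memory

# Sum the integers n, n-1, ..., 1 by iterating via a backward branch.
memory = {
    0: ("LOAD", 20),   # acc = n
    1: ("ADD", 21),    # acc = n + total
    2: ("STORE", 21),  # total = acc
    3: ("LOAD", 20),
    4: ("ADD", 22),    # acc = n - 1 (location 22 holds the constant -1)
    5: ("STORE", 20),
    6: ("BRPOS", 1),   # loop back while n > 0
    7: ("HALT", 0),
    20: 5,             # n
    21: 0,             # running total
    22: -1,            # constant -1
}
print(run(memory)[21])  # → 15
```

On the Harvard Mark I, the backward branch at location 6 would have required a physical loop of tape; here it is one conditional assignment to the program counter.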

6. Instructions operating on variable addresses

The EDVAC draft design indicated that it should be possible to vary the address part of an instruction. One need for that was to apply the same code to different data elements successively. Later, this capability would be needed to support subroutines, which upon completion needed to return execution to the instruction after the one that had called the subroutine. These requirements can be realised in different ways: by modifying the address portion of an instruction stored in the program memory; by modifying a copy of the instruction held in a register; or by defining an instruction that reads its address from one or more specialised registers. The third option – which includes what is now known as ‘indexed’ or ‘indirect’ addressing – was implemented by others even before the first machine to adopt the EDVAC blueprint, for example in the Manchester Mark I. It is misleading, however, to equate this principle with the idea of ‘self-modifying code’ – an idea that upset Howard Aiken even more than binary arithmetic – not only because of the alternative ways in which it could be implemented, but because even the initial idea did not permit instructions to overwrite other instructions – only to alter the address portion of those instructions.
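The third option – reading the effective address from a register – can be sketched as follows. (The `add_indexed` helper and the instruction format are invented for illustration; real machines implement this in the addressing hardware.) The same stored instruction serves every element of a table; only the contents of the index register vary between iterations, so no instruction is ever rewritten:

```python
def add_indexed(memory, base, index_register):
    """Indexed addressing: effective address = base + index register."""
    return memory[base + index_register]

data_base = 100
memory = {100: 3, 101: 5, 102: 7}

# The *same* instruction is applied to successive addresses:
# only the index register changes on each pass through the loop.
total = 0
for index_register in range(3):
    total += add_indexed(memory, data_base, index_register)
print(total)  # → 15
```

Contrast this with the first option in the paragraph above, where the address field of the instruction stored at its memory location would itself be incremented between iterations – the behaviour that so upset Aiken.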

7. Interchangeable memory

The final principle (in this analysis, that is) is that the same memory could be used for storing data and instructions, with the ratio dynamically allocated as needed. This is often given higher billing in descriptions of the Von Neumann Architecture, yet Von Neumann’s own wording of this principle in the report is surprisingly tentative (especially compared to his usual authoritative style), stating only that it is:

 ‘…tempting to treat the entire memory as one organ, and to have its parts even as interchangeable as possible.’

Proponents of the so-called Harvard architecture have described the decision to store data and instructions in the same memory, accessed via the same data and address buses, as ‘the Von Neumann bottleneck’ – but this is also wrong. The term ‘Von Neumann bottleneck’ was coined by John Backus (perhaps now best known for Backus-Naur Form, but who was responsible for so much more) in his 1977 Turing Lecture – and it referred not to the sharing of the same buses for instruction and data access, but to the fact that every item of data that needs to be changed, or even just moved, must pass through the central processor.

However, despite its tentative statement, this last principle of interchangeable memory was ultimately perhaps the most prophetic. This interchangeability, derided and despised by Howard Aiken and his followers, would prove essential to the emergence of operating systems, which in turn enabled batch processing, time-sharing, concurrent processing, asynchronous networking, rich user interfaces, and much more.
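What interchangeability means in practice can be sketched with one flat array holding both program and data, the boundary between them decided by a loader at run time rather than wired into the hardware. (The ‘loader’, instruction format, and memory layout here are all invented for illustration.)

```python
memory = [0] * 16  # a single, undifferentiated store

program = [("LOAD", 8), ("ADD", 9), ("STORE", 10), ("HALT", 0)]
data = [40, 2]

memory[0:len(program)] = program  # the loader decides: instructions here...
memory[8:8 + len(data)] = data    # ...and data wherever there is room

# A minimal fetch-execute cycle over the shared store.
pc, acc = 0, 0
while True:
    op, addr = memory[pc]
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break
print(memory[10])  # → 42
```

Nothing in the store itself distinguishes instruction from datum – which is precisely what lets an operating system load an arbitrary program into whatever memory happens to be free.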

-----------------------

Hopefully, you are not a sad case like me, working on a Bank Holiday – and so you won’t be reading this until after the 75th anniversary has passed. But perhaps it might still have given you some pause for thought. The rapid scaling and rate of innovation in computing are unmatched by any other technology in history. We should honour those prophets – John Von Neumann being one of them – who not only had a clear and realistic vision of the future possibilities, but who laid down key design principles that would make the realisation of that vision possible.

Discussion


Chris Sharples
09/05/2024 21:26

My howl too, as I spent some time with my year 10s a couple of months ago, showing pictures of the Manchester baby and discussing the stored program concept.

Pete Dring
06/05/2024 11:31

This is a really interesting read - thank you Richard. Well done for pre-empting my “howl of protest from the North”!