How many MIPS does it take to get to the moon? Let me tell you about the Apollo computer designated as 'Mod 3C':
"Who among us would risk our lives on our desktop computers, with all their speed, accuracy, and memory, and rely on their working flawlessly for two straight weeks? The space shuttle flies with five redundant computers. Any fully digital airliner has a minimum of three. Apollo had only one. And it never failed in flight."
Hmm. Ok. I'd bet my life on a computer if it meant a ride to the moon. Oh wait - it's 1961 and ICs are still considered unproven black boxes. Hmm.
"In today's terms it resembled a 'microcontroller', the microprocessor embedded in everything from cell phones to automobiles, more than it did a scientific computer. It occupied a cubic foot rather than a tiny chip, but it shared with microcontrollers a modest calculation capacity melded with numerous, sophisticated channels for quickly getting data into and out of the machine. Like a microcontroller, the Apollo computer included an interruptible processor so events could command the computer's attention in real-time. It had a 'night-watchman' circuit to keep the computer from crashing and locking up (a feature still included in many embedded computers today as a 'watchdog'). "
Hah! It had a watchdog circuit! I had no idea that concept went back so far.
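(For anyone who hasn't met one: a watchdog is a timer that forcibly resets the chip unless the software keeps checking in. A minimal sketch of the idea on an AVR using avr-libc's wdt.h - the control-loop function is a made-up placeholder:)

```c
#include <avr/wdt.h>

static void do_control_loop(void) {
    /* hypothetical application work goes here */
}

int main(void) {
    wdt_enable(WDTO_2S);   /* force a hardware reset if we stall for ~2 s */

    for (;;) {
        do_control_loop();
        wdt_reset();       /* "pet" the watchdog: we're still alive */
        /* If do_control_loop() ever hangs, wdt_reset() stops running and
           the watchdog reboots the chip - the same job as the AGC's
           "night-watchman" circuit. */
    }
}
```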
"The original Apollo machine used 16-bit words for its data and instructions, adequate for the local control functions, but the navigation calculations required 'double precision,' more bits with more numerical precision, a feature implemented in software."
Cheaters.
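(In fairness, "double precision in software" just means chaining narrow words together and propagating the carry yourself. The AGC's actual routines were messier - its words were 15-bit and one's-complement - but the general trick on a 16-bit machine looks something like this sketch:)

```c
#include <stdint.h>

/* A 32-bit "double precision" value held as two 16-bit words,
   added with an explicit software carry - what a 16-bit machine
   with no wider ALU has to do. */
typedef struct { uint16_t hi, lo; } dp_t;

static dp_t dp_add(dp_t a, dp_t b) {
    dp_t r;
    r.lo = (uint16_t)(a.lo + b.lo);          /* low halves may wrap  */
    uint16_t carry = (r.lo < a.lo);          /* wrap means carry out */
    r.hi = (uint16_t)(a.hi + b.hi + carry);  /* fold carry into high */
    return r;
}
```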
"The computer had two types of memory, 'erasable' and 'fixed', what today are called RAM and ROM... The computer comprised 1,700 transistors, each in their own metal can, and nearly 20,000 metal and ferrite cores for memory... It had 12,288 words of permanent storage, 1,024 words of erasable memory, and a 16-bit architecture with 8 instructions. It included a variety of counters, pulse outputs, interrupts, and input-output registers for interfacing to the rest of the vehicle. "
I've heard about the massive labor that went into weaving the ferrite core memory. But 16-bit architecture in 1962? Wow! Really pretty impressive. I'm still playing with 8-bit AVRs. 12k of program memory and 1k of RAM. Not too shabby. Not that I could program a re-entry procedure in 12k, but my autonomous rover is currently using 5614 bytes.
The 'Mod 3C' was followed by the AGC (Apollo Guidance Computer) which was built out of this new thing called a MicroLogic device (1961):
"One type of circuit, a two-input NOR gate, would repeat through the entire computer, rather than thirty-four different modules, as would be the case with core-transistor logic."
I do enjoy a good Karnaugh Map reduction.
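(The reason one gate type suffices: NOR is functionally complete, so every other gate falls out of it. The truth-table identities, sketched as C macros over 0/1 values:)

```c
#include <stdio.h>

#define NOR(a, b)  (!((a) | (b)))
#define NOT(a)     NOR((a), (a))        /* NOR of a signal with itself */
#define OR(a, b)   NOT(NOR((a), (b)))   /* invert the NOR              */
#define AND(a, b)  NOR(NOT(a), NOT(b))  /* De Morgan                   */

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d: AND=%d OR=%d NOT(a)=%d\n",
                   a, b, AND(a, b), OR(a, b), NOT(a));
    return 0;
}
```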
By 1965, things were really screaming:
"By February 1965... the IL received a $15 million-dollar contract to build the Block II... The new computer would be sealed from moisture in the cabin, with no provision for repair. To accommodate the digital autopilot, the Block II computer's program memory was increased from 24k words to 36k and the erasable memory from 1k to 2k. Other aspects of the computations speeded up as well, with new machine instructions built into the architecture (where Block I had eleven instructions, Block II had thirty-four) which allowed for more efficient code, and the clock now ran at 1,024kHz... The number of logic gates went up. The flat packs contained two logic gates instead of one, doubling the packing density, which halved the circuits' size and brought the AGC's weight down from eighty-seven to seventy pounds, and reduced the power from eighty-five to fifty-five watts."
| | Apollo Guidance Computer | ATmega168 |
|---|---|---|
| Cost | $15M | $2 |
| Power | 55W | 0.055W |
| Speed | ~1 MIPS? | 20 MIPS |
| Weight | 70 lbs. | 0.0022 lbs. |
I know, comparisons have been done before (and will continue). It's apples to oranges. What really struck me was that in 1962, there was nothing called a CS degree. You had perhaps 30 people in the world that could manage to program a series of NOR's in a logical way. And with 70lbs of them, they managed to point the boomstick at the moon and get there. Simply amazing.
I agree with joepierson. Nowadays, everything needs faster and faster processors. What for? I believe they do that to avoid having to optimizate the code. I think that if everyone programmed in x86 Assembly, or even C or C++, instead of in high level interpreted languages, applications would be a loooot faster and clearly they wouldn't need a lot of memory.
I might have to borrow "optimizate".
This is an old, old argument, of course. Presumably at least as old as assembly languages, or anything much more abstract than moving wires or toggling switches by hand. Folks posting here generally work in the embedded space, so the priorities and parameters are quite a bit different from the ones I deal with on a day-to-day basis, but I think I can summarize the counterpoint as "have fun writing that web application in x86 assembly."
Cycles are cheap. Programmers aren't.
(Of course, most desktop applications are still written in C, C++, or a related compiled language, and the line between "compiled" and "interpreted" is kind of fuzzy anyway - are Perl and Python "interpreted"? How about Java or .NET?)
"the line between 'compiled' and 'interpreted' is kind of fuzzy anyway"
An 'interpreted' program is translated as it executes, while a compiled program has been converted to machine language before it runs.
If you are dealing with low-speed systems (microcontrollers) there is a big difference in performance, whereas on high-speed systems (GHz PCs) a human can't perceive the difference without some sort of measurement for comparison.
Lately I've been using Linux and playing around with the many open-source distributions, and I must admit Linux has MS whipped when it comes to performance. Not only that, but a majority of compilers (whose code is portable) are free.
"An 'interpreted' program is translated as it executes, while a compiled program has been converted to machine language before it runs."
This is where the fuzzy part comes in. The way I understand it, modern languages that we're used to thinking of as "interpreted" tend to be compiled before execution, albeit to something like bytecode or a parse tree for a virtual machine / interpreter. This bit by Tom Christiansen is old, but it's a useful example.
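To make the fuzziness concrete: in the toy below, the "program" has already been compiled into a made-up bytecode, and the loop that walks it is the interpreter/VM. Real systems like the JVM or Perl's internals are this pattern scaled up. (The opcodes and program here are invented for illustration.)

```c
#include <stdio.h>

/* Toy stack machine: the array is the "compiled" bytecode,
   the switch loop is the "interpreter". Computes (2 + 3) * 4. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

int main(void) {
    int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                   OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    int stack[16], sp = 0, pc = 0;

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];         break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);    break;
        case OP_HALT:  return 0;   /* prints 20 */
        }
    }
}
```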
The startup costs for dynamic languages on modern PC hardware can still be noticeable, even without doing metrics. This is why stuff like mod_perl and FastCGI is a big win in the web world - you wind up doing the startup & compilation phase once to serve a bunch of page requests.
just shows you how little processing power you actually need to GET THINGS DONE
sometimes I think my 3GHz PC is slower than my 16MHz 80386 from the '90s at getting jobs done - my word processor was faster, and my DOS-based layout CAD software was infinitely faster than what I use today
Although "it never failed in flight" is right in the sense that it didn't have a hardware failure, it did suffer a software related failure during the first moon landing. There was a series of memory overflow errors and the computer had to be restarted even in the late stages of the landing, which it coped with well. There's lots of information about this event on the net - for example: http://www.abc.net.au/science/moon/computer.htm.
The image links are broken...
Yeah, fun book, but realize that the Titan IV was still using less than 1 MIPS in the 1990s.
Also note that the Surveyor landers did the same job before Apollo with no computer, just a few timers! (and similar radar, gyros, photocells, rockets, and communications)
The Surveyor return missions and precision landings were cancelled once Apollo was flying.
-Gar.
I've found it interesting that NASA permits selected off-the-shelf laptops on the ISS (Space Station) for non-mission-critical uses. Probably without the battery, though - a safety issue.
You also have to remember that was $15 million back in 1965! That was worth way more than $15 million today! I remember my dad telling me that he could buy a hot dog and a Coke for less than 25 cents. If that same snack runs about a dollar now, that's roughly four times as much - which would put the contract around $60M in today's money.
I think we should all be a little humbled and embarrassed. Forty years and countless transistor architectures later, we have yet to return to the moon; those who accomplished greater things before us would have given their right arm for the cheapest PIC, ARM, or MSP that many of us can sample for free. As engineers and hobbyists we must dream big! The pride of our species depends on it.
Yeah, haha, that's true AndyL - anyone else dare to reduce the Karnaugh maps for the Orion module??
I wonder how this compares to whatever computers are planned for Orion 15.
I seriously doubt they used a Karnaugh map to reduce a logic design of that magnitude; I bet they used the Quine-McCluskey method.
For those who have 2 hours, here is a FREE MIT lecture on the shuttle guidance, navigation and control: http://www.youtube.com/watch?v=OksC02Xqe7Q&feature=PlayList&p=35721A60B7B57386&index=15
Not that we're not thrilled with another writeup on NASA's Apollo computer, but what is the current Chinese moon program using?
I'm SO getting this book - thanks spark fun!
On a related note, I remember being kind of fascinated by this Fast Company piece about writing code for the shuttle, along with some of the programming reddit discussion.
Very good blog!!! It's amazing... you went to the moon and returned with only a few MIPS... that is a real adventure, the biggest one!! I admire you!!!
It's even more amazing that we need 6 cores @ 2.4GHz to play a GAME.
if you think that's amazing, how about building your own flight computer...in your basement
http://www.galaxiki.org/web/main/_blog/all/build-your-own-nasa-apollo-landing-computer-no-kidding.shtml