Keith Shannon
Keith Shannon, C# coder, Software analyst, and professional techie

As many answers have said: a lot of ways, none of which would be palatable to today's C# and Java coders, who have all the memory they need (or at least a heckuvalot more than coders had 25 years ago) and so focus on packaging code in a logical, easy-to-decipher manner that lets the app (be it a game, an online banking app, or what have you) be updated for compatibility with future hardware. Programmers of games for systems like the Atari 2600, Sega Master System, NES, SNES, and even later-generation consoles like the PSX and N64 had none of these concerns. They had very little interest in letting people outside the team understand the code or its memory usage; in fact, the more obscure the code was, the better the defense against the relatively few knowledgeable "hackers" out there (like the guys working for Game Genie; remember those?). The platform for the game was static, and the game had direct, almost complete control over the console's capabilities, so there was no need for the multilayered architecture that is ubiquitous in modern software to provide hardware/software independence.

The only problem was that those consoles' capabilities were sharply limited.

As my example, I'll use The Legend of Zelda for the NES. For its day, this was a massive game, comprising a large overworld and two underworlds (for first and second quest), both twice the size of the overworld, and yet the game's ROM is fairly small, as it had to be for one of Nintendo's first releases for the console.

Overworld:

First Quest Underworld:

Second Quest Underworld:

You'll notice the first trick used right away; they crammed as much as they possibly could into a standard grid size. The entire overworld and half of each underworld is 16 screens by 8 screens, or 128 total screens. The current screen Link was on in either the overworld or the underworld could thus be contained in one byte (two nibbles, one for X-coordinate and one for Y, with one bit of the Y coordinate not used in the overworld). The second quest underworld has an interesting hack; the "L" dungeon couldn't be made to fit, so they put in an underground passage leading to the two blue rooms at the far left, nestled into the left side of the Z, then they simply fudged the dungeon map.
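That one-byte screen coordinate is easy to sketch. Here's a minimal illustration in Python (the function names are mine for illustration, not anything from the actual ROM):

```python
def pack_screen(x, y):
    """Pack a 16x8 map position (x in 0..15, y in 0..7) into one byte:
    X in the low nibble, Y in the high nibble."""
    assert 0 <= x <= 15 and 0 <= y <= 7
    return (y << 4) | x

def unpack_screen(b):
    """Recover (x, y) from the packed byte."""
    return b & 0x0F, (b >> 4) & 0x0F
```

With only 8 rows, the top bit of the Y nibble is the unused bit mentioned above.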

Each screen, if you zoom in, has a similar matrix arrangement to the whole. Overworld and underworld screens were handled a little differently. Each overworld screen is simply a grid of 16x11 "tiles", each one of which could be ground, sand, rock, bush, water, waterfall, gravestone, staircase, bridge/dock, "trapdoor" or wall opening, plus a few options for special ornamentation like large trees and the entrances to dungeons which are rendered in a special way. There are never more than 3 tile types besides bare ground, so 2 bytes, broken up into 4-bit "nibbles" (half a byte is a nibble - get it?), can be used to identify which terrain types are used for a screen, and then each tile of the screen can be identified with two bits of data, so the terrain map of each screen only requires 44 bytes of data. 44 bytes. The map data for the entire overworld can be stored in just five and a half kilobytes.
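The 2-bits-per-tile scheme can be sketched like this — a hypothetical decoder in Python, assuming a per-screen palette of up to 4 terrain types as described above (names are mine):

```python
def decode_screen(packed, palette):
    """Expand 44 packed bytes (4 tiles per byte, 2 bits each) into a
    16x11 grid of tile types drawn from a 4-entry per-screen palette."""
    assert len(packed) == 44 and len(palette) == 4
    tiles = []
    for b in packed:
        for shift in (0, 2, 4, 6):            # low bits first
            tiles.append(palette[(b >> shift) & 0b11])
    # 44 bytes * 4 tiles/byte = 176 tiles = 16 columns x 11 rows
    return [tiles[r * 16:(r + 1) * 16] for r in range(11)]
```

So one screen is 44 bytes of tile indices plus a byte or two naming the terrain palette; 128 screens of 44 bytes is the 5.5 KiB figure above.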

Color schemes in the overworld are divided into "outside" (the outer two rows/columns) and "inside" (everything else). Most screens are all one color scheme, either green, brown, or white, with a few screens being brown on green, green on brown, or white on brown. That's 6 options, storable in 3 bits (or another nibble as odd numbers of bits are not kosher).

Each underworld "room" uses a 12x7 grid for the actual contents of the room, and they further reduce space in a very ingenious way: except for a few very special rooms, like dungeon entrances (sand and statues, virtually all the same layout, just different colors), Triforce rooms (stones and statues, again all the same layout), Ganon's room (blackness and statues, and unique), the final room where Princess Zelda is kept (stones, statues, and blackness), and some underground passages that have both stairs and walls, no two different obstacle types appear in the same room. This means that for the majority of rooms, all the information needed to render the floorplan can be held in one byte per column, or 12 bytes total, plus the information about what's on each wall (nothing, open door, shuttered door, locked door, bomb point, bombed opening) — six options per wall. Here's where it gets even cleverer; remember that there are only 7 rows in each room, and a byte is 8 bits? The wall information is stored as the last bit of the first 5 columns of the room. The spare bits of the remaining seven columns are divided in two: 3 bits hold 8 options for the primary obstacle type in the room — stone, statue, water/lava, sand, or blackness — and the last four bits determine the color scheme. There's basically a unique combination of main and alternate colors for each dungeon, and there are 16 schemes in the game, so that nibble identifies which dungeon scheme the room uses, and each scheme can be described in a byte (16 possible colors each for room and water/lava). So each room's map data can be stored in just 12 bytes. The underworlds of both first and second quest fit in just 3 KiB, so the entire game's map is about 8.5 KiB.
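As a rough sketch of the byte layout the answer describes (this is an illustration of the packing idea, not the actual Zelda ROM format — all names and exact bit positions are assumptions):

```python
def decode_room(columns):
    """Unpack a 12-byte room record: bits 0-6 of each byte give the 7 rows
    of that column (obstacle present or not); bit 7 of columns 0-4 carries
    wall state bits, and bit 7 of columns 5-11 packs 3 bits of obstacle
    type plus 4 bits of color scheme."""
    assert len(columns) == 12
    floor = [[(columns[c] >> r) & 1 for c in range(12)] for r in range(7)]
    spare = [(b >> 7) & 1 for b in columns]
    walls = spare[:5]                 # wall info bits, per the answer
    extra = 0
    for bit in spare[5:]:             # remaining 7 spare bits
        extra = (extra << 1) | bit
    obstacle_type = extra >> 4        # 3 bits: stone, statue, water/lava, ...
    color_scheme = extra & 0x0F       # 4 bits: which dungeon palette
    return floor, walls, obstacle_type, color_scheme
```

The point of the sketch is just that a 12x7 floorplan, wall states, obstacle type, and palette can all ride in the same 12 bytes.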

They used other pretty obvious tricks as well. The maximum number of bombs you can carry is 16 (5 bits); the other 3 bits of that byte are used to store the number of keys you can have (up to 7). Arrows and Rupees (money) use the same one-byte value store, so shooting an arrow costs you one Rupee and you can only have 255. One byte is used to store the Triforce pieces you have collected; it is possible to obtain them out of order, so instead of using 3 bits to indicate how many you have, they used a full byte as a bitmask. There are only 8 usable treasures, so that's another byte; however, one of them (medicine/potion) is available in three levels (red, blue, and the "letter" that allows you to buy more), while two more are available in two levels (blue and red candles, and normal or silver arrows). There are 6 more "passive" treasures that grant you additional abilities, like being able to cross gaps in water and push heavy stones, one of which (the ring) comes in two flavors. Finally, you can have no sword, or the bronze, white, or magical swords. So, there are 22 possible things to have in your inventory, which can be stored in 3 bytes with 2 bits spare. You can have up to 16 heart containers (4 bits) with a varying number of them full (4 bits), and the game can count half-hearts using one of the leftover bits from your inventory. You can have a regular shield or a magic shield (protects against fireballs), so that's one more bit, and finally, you could have completed the first quest or not, which is the last available bit in 7 bytes of storage. Finally, there are 26 letters, 10 digits, and a couple of special characters in the game's alphabet, which can be stored in 6 bits; 8 characters for your name can be stored in 6 bytes (although, as the Japanese alphabet is much larger, that version uses a full 8-bit character set, limiting your name to 6 characters).
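The bombs-and-keys byte and the Triforce bitmask are simple to illustrate. A Python sketch (the exact bit positions are my own guesses; only the bit counts come from the answer):

```python
def pack_bombs_keys(bombs, keys):
    """One byte: bomb count (0-16) in the low 5 bits, keys (0-7) in the top 3."""
    assert 0 <= bombs <= 16 and 0 <= keys <= 7
    return (keys << 5) | bombs

def unpack_bombs_keys(b):
    return b & 0x1F, (b >> 5) & 0x07

def add_triforce(mask, piece):
    """Record Triforce piece `piece` (0-7) in a one-byte bitmask,
    so pieces can be collected in any order."""
    return mask | (1 << piece)

def triforce_count(mask):
    """How many pieces the bitmask says you hold."""
    return bin(mask & 0xFF).count("1")
```

The bitmask costs 5 more bits than a simple counter, but it preserves *which* pieces you have, not just how many.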
One byte each for your coordinates in the overworld and underworld, plus one byte for anything I'm forgetting (probably something like whether you've defeated each of the 8 dungeon monsters), and everything the game needs to know about your character can be stored in just 16 bytes. 16 bytes for a savegame is unheard of in modern games, though modern game programmers do still do quite a bit of byte-packing and other memory-saving tricks to get a savegame to fit into a certain unit of storage space (often referred to as a "page" in user manuals and UI; more accurately a "cluster" or a "leaf" of the file allocation system that is within a few orders of magnitude of the minimum addressable amount of memory the file system permits).

Joe Zbiciak
Joe Zbiciak, Been programming since grade school

First, it might surprise you to learn that folks are still programming for some of these old systems. The techniques aren’t necessarily lost so much as no longer entirely relevant to modern systems.

The system I currently focus on is the Intellivision. This system came out around 1980. Some specs to set the stage:

  • 16-bit CP-1610 CPU with an 895kHz instruction cycle rate
  • 240 bytes of 8-bit scratch RAM
  • 352 words of 16-bit system RAM (including 240 words of character buffer)
  • 512 bytes of 8-bit Graphics RAM
  • 4K x 10-bit EXEC ROM

For its time, it was a high end system, with significantly more RAM and ROM available to it than its main rival, the Atari 2600. Its processor was about half the speed, but its graphics controller did more of the work drawing the display than the Atari’s. (This was both good and bad: The CPU didn’t need to “chase the beam” as much, but you had much less flexibility in other aspects of how the display is made.)

As many people have already noted, games were generally coded in assembly language. I’ll be the first to say that assembly language isn’t magic. But, when you have a very limited amount of memory (both RAM and ROM) and processor cycles available, it’s much easier to know where your bytes and cycles are going.

The Intellivision’s display controller uses tile based graphics along with tile based sprites. This was a common technique among many early systems, including the Colecovision, the Sega Master System, Sega Genesis, NES, and so on. The basic elements of the scheme include:

  1. A tile memory that describes the bitmap associated with each tile. On the earliest systems, the bitmaps were 1 bit per pixel, and 8 pixels by 8 pixels. Later systems used larger tiles and/or more bits per pixel.
  2. A character buffer provides a fixed grid to place tiles onto. Each element of the grid says what tile to display in the location, and what color(s) to assign that tile.
  3. Sprites that are also built from tiles, but not confined to the character grid. These provide movable objects that can navigate a play field, or augment other graphics in the grid.

This system has multiple benefits. For one, the tiles themselves take very little memory. An 8x8 tile at 1bpp only takes 8 bytes of storage. A 64 tile set then only takes 512 bytes of memory.

The tiles themselves don’t have color information. Rather, the character buffer indicates what colors to assign. So, the same tile can be reused multiple times in different colors, making it easy to construct colorful, complex screens from a small number of well-chosen tiles.

And finally, sprites are hardware managed. You can put a sprite anywhere on the playfield, and change its color or picture by just changing a few numbers in a sprite descriptor. On the Intellivision, the sprite descriptors are stored directly in the display controller. On other systems, they reside in RAM. Either way, the descriptors are very compact, and very easy to modify.

So, a big part of game development on these systems revolved around mapping the graphics to the display controller architecture, aligning the playfield onto the grid in a way that leveraged the tile-oriented architecture. That’s rather different from systems that use bitmapped graphics and could place anything anywhere. If you don’t do a good job fitting onto the grid, you have problems.

But once you fit it onto the grid structure, updating the display becomes fairly efficient, and the graphics themselves fit more snugly into ROM.

Another big aspect of programming was finding efficient ways of expressing the computations needed for the games. Consider, for example, jumping in a parabolic arc. The vertical position of your character vs. time is described by an equation of the form [math]y(t) = at^2 + bt + c[/math], where [math]y[/math] is your y coordinate, and [math]t[/math] is time. Now most processors of that era don’t actually have multiply instructions. Implementing that equation straight from a math text will be sloowwww.

Turns out, though, if you apply a detail you learned in physics class, you can break this up into small pieces that are very cheap to compute. Remember that gravity provides constant acceleration. That’s the first hint. Next, remember that for each unit of time [math]t[/math], when you’re moving at velocity [math]v[/math], the distance you move during that unit of time is [math]tv[/math]. Since most games of the era are computed frame-by-frame, you’re computing updates at discrete points in time that are equally spaced. You can break up that parabolic arc into two simple addition statements that execute once per frame:

yvel = yvel + yaccel;
ypos = ypos + yvel;

That’s it. That’s much cheaper than directly computing [math]y(t) = at^2 + bt + c[/math]. No multiplication, no squaring. Just a couple cheap additions. You can define a surprising range of motions by decomposing them to a velocity and position update like this, and then just modulating the acceleration as appropriate.
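You can check that the incremental version reproduces the closed form exactly. A Python sketch: with yaccel = 2a and an initial yvel of b − a, the two per-frame additions land on [math]y(t) = at^2 + bt + c[/math] at every integer frame:

```python
def closed_form(a, b, c, t):
    """The textbook parabola y(t) = a*t^2 + b*t + c."""
    return a * t * t + b * t + c

def simulate(a, b, c, frames):
    """Per-frame incremental version: only two additions per frame,
    no multiplies, yet it matches the closed form at every step."""
    ypos, yvel, yaccel = c, b - a, 2 * a
    out = [ypos]
    for _ in range(frames):
        yvel += yaccel      # constant acceleration (gravity)
        ypos += yvel        # move by the current velocity
        out.append(ypos)
    return out
```

The initial-velocity offset (b − a rather than b) is just the discrete-time correction for updating velocity before position.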

Tricks like that show up all over the place.

Another common technique was to use fixed point arithmetic. The play field itself isn’t very high resolution. The Intellivision’s display is 160x96 pixels, and the Colecovision was 256x192. Coordinates therefore fit in 8 or so bits. If you move by 1 pixel every frame, you’re moving at a decent clip. And the difference between 1 pixel per frame vs. 2 is fairly large. As a result, games often represented positions with some number of integer bits, and some number of fraction bits.

This allows you to now specify velocity and acceleration in fractions of a pixel per frame. You don’t need many fraction bits to make this work well, although on the Intellivision specifically, 8 integer bits plus 8 fraction bits ends up working really well.
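An 8.8 fixed-point position update might look like this in Python (8 integer + 8 fraction bits per the answer; the helper names are mine):

```python
FRAC = 8                 # 8 fraction bits: positions in 1/256ths of a pixel
ONE = 1 << FRAC

def to_fixed(pixels):
    """Convert a pixel amount (possibly fractional) to 8.8 fixed point."""
    return int(pixels * ONE)

def pixel_of(fixed):
    """The integer pixel a fixed-point position lands on."""
    return fixed >> FRAC

def advance(pos, vel, frames):
    """Move `frames` steps at `vel` (fixed-point pixels/frame) using
    integer adds only -- no floating point in the game loop."""
    for _ in range(frames):
        pos += vel
    return pos
```

A velocity of `to_fixed(1.5)` moves 1.5 pixels per frame exactly, something plain integer coordinates can't express.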

For music, a common technique was to decompose it into multiple pieces:

  • Software “instruments” defined how notes sound when played (attack, sustain, vibrato, etc)
  • Music patterns describe a sequence of notes to play
  • A top-level tracker says what order and combination of patterns to play to build the music

That way, you can define, say, a rhythm track once and have it repeat, while playing a more complex melody over it. If the melody itself has repeats, you can reuse patterns accordingly. The “instrument” definitions expand that sequence of notes into the actual series of register values that get programmed into the sound hardware. Thus, the music itself can be stored rather compactly as well.
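A toy version of the pattern/tracker split, in Python (all names and note values are invented for illustration):

```python
# Patterns are stored once each, however often they recur.
PATTERNS = {
    "bass": ["C2", "C2", "G2", "C2"],
    "hook": ["E4", "G4", "C5", "G4"],
}

# The top-level tracker: which patterns to play, in what order.
ORDER = ["bass", "hook", "bass"]

def expand(order, patterns):
    """Flatten the order list into the full note sequence the
    'instrument' layer would then turn into register writes."""
    notes = []
    for name in order:
        notes.extend(patterns[name])
    return notes
```

Here 12 played notes are stored as only 8 pattern entries plus a 3-entry order list; real songs repeat far more, so the savings compound.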

Sound effects are often defined algorithmically, rather than stored as samples. This makes their representation fairly dense as well. (And, as I learned when I went to write my own games: Many tones used in sound effects are derived from notes in the same key as the game’s music, so you can lean on the note pitch table to get your pitch values.)

Other standard optimization tricks show up as well: Loop unrolling, clever shift-and-masking, bit-fiddling tricks, and so on. Also, lots of lookup tables in ROM to collapse complex computations down to fast, simple lookups.

If you’d like to see the source code for one of my games (Space Patrol), have a look here: Space Patrol source code. For reference, the full Space Patrol game is 32K bytes (16K x 16-bit.)

And if you’d like to see some of the games that ran on that system, check out Kevtris’ video. Space Patrol makes an appearance around 8:22.

Abe Pralle
Abe Pralle, Gamer since 1980; Game Developer since 1990.

I think it’s important to emphasize that most 8-bit and 16-bit computers and game systems basically had fixed-feature tile-based 2D game engines built into the hardware.

For a straightforward game you didn’t need any image loading or rendering routines and only minimal drawing logic. For instance, on the Commodore 64, whatever 8 bytes you put at a certain memory location became the bitmap definition for a certain tile. Then whatever tile indices you put into a certain area of memory caused the corresponding tiles to be drawn on-screen. You could often just use the tile index map as your game data structure as well; you didn’t need to copy over indices each frame or anything like that.

You didn’t need any collision detection because that could be done in hardware too (sprite vs sprite, sprite vs tile). You didn’t need any event handling - you had full imperative control of CPU execution and you just polled the state of the joystick and other inputs directly.

So really a basic game becomes quite simple:

  1. On launch, copy sprite, tile, and tile map bytes to correct memory locations to define a game’s graphics.
  2. Adjust sprite positions based on joystick input bits and/or simple AI offsets, using collision status bits to see when a bullet has struck the player or an enemy.

Games requiring more complex logic would have to work around the memory and speed constraints as others have mentioned. And as the OP mentioned, systems had a fair number of helpful built-in OS routines that could be called by loading up certain registers with certain values and then jumping to certain addresses.

Programming Anecdotes

  1. The C64 had hardware tile scrolling where you could specify the X and Y offsets of the tile map between 0 and 7 pixels. Games would scroll to the max offset and then quickly copy all the tile indices over one row or one column and then copy in one new row or column. To hide what was going on, games would set a hardware flag that expanded the solid color display border to make it one tile thicker on each side, covering up the 0–7 pixel gaps at the edges.
  2. The Amiga had a fast hardware blitter (2D bitmap stamper), but (depending on scene complexity) it still wasn’t fast enough for full-screen, silky-smooth scrolling. However, you could have a window twice as wide as the screen and use hardware offsets to scroll smoothly within it. So what programmers would do is start by showing the left half of the window on the screen. As the screen scrolled to the right, you would blit one new column of tiles to the right edge of the left half of the window AND place another copy on the right edge of the right half of the window (wrapping back to the far left of the window). When the player scrolled past the half-way point in the window, the screen position would be reset to the far left again. The effect was a continuously scrolling BG with very smooth motion that required only 0 or 2 columns to be drawn at a time.

    If that’s a little confusing still, here’s a symbolic representation of a screen that’s 3 tiles wide showing a subset of a window that’s 6 tiles wide. Capital “ABC” etc. are visible tiles while lowercase “abc” etc. are duplicate tiles that are not visible:

    Initial screen, ABC is visible
    ABCabc

    Scroll right by 1 tile, BCD is visible, 2 columns change
    dBCDbc

    Scroll right by 1 tile, CDE is visible, 2 columns change:
    deCDEc

    Scroll right by 1 tile, etc.:
    defDEF

    Scroll right by 1 tile, etc.:
    gEFGef
  3. On the Game Boy, the CPU had no built-in multiplication or division. For multiplication I would do additions and left bit shifts; for division I would do subtractions and right bit shifts.
  4. Also on GB, as there was no random number generator, my RNG was a LUT (Look-Up Table) of all 256 byte values in a preset random order (as generated by a simple program on my PC). Whenever I needed a random number I would just grab the next value the cursor was pointing to, advance the cursor, and then use a bitmask etc. to clamp the value to my desired range. As many different elements of gameplay would end up pulling from the RNG LUT, it appeared sufficiently random and was not a predictable pattern.
  5. Color cycling was a very useful trick on classic systems. Similar to a crosswalk sign where both the stop hand and the walking person are always “there” but only one is illuminated, you could have an image with several frames of animation drawn in at once, each frame using a different 4-color palette out of, say, 8 possible palettes; at any one moment, 7 of the palettes are set to all-black/neutral and 1 palette is set to visible colors.
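Items 3 and 4 above are easy to sketch. Here's a Python illustration of shift-and-add multiplication, shift-and-subtract division, and a LUT-based RNG (the table here is generated with Python's random module standing in for the PC-side tool; all names are mine):

```python
import random

def mul_shift_add(a, b):
    """Multiply unsigned ints on a CPU with no MUL instruction:
    add shifted copies of `a` for each set bit of `b`."""
    result = 0
    while b:
        if b & 1:
            result += a
        a <<= 1          # a * 2
        b >>= 1          # examine the next bit
    return result

def div_shift_sub(n, d):
    """Unsigned division by shift-and-subtract; returns (quotient, remainder)."""
    assert d != 0
    q, shift = 0, 0
    while (d << (shift + 1)) <= n:       # align divisor under dividend
        shift += 1
    for s in range(shift, -1, -1):
        if (d << s) <= n:
            n -= d << s
            q |= 1 << s
    return q, n

# LUT-based RNG: all 256 byte values in a fixed shuffled order.
random.seed(1)                            # table pre-generated once, offline
RNG_LUT = list(range(256))
random.shuffle(RNG_LUT)
_cursor = 0

def next_rand(mask=0xFF):
    """Grab the next table entry, mask it down to the desired
    (power-of-two) range, and advance the wrapping cursor."""
    global _cursor
    value = RNG_LUT[_cursor] & mask
    _cursor = (_cursor + 1) & 0xFF
    return value
```

Because many different gameplay systems pull from the same cursor at different rates, the fixed table never reads as a repeating pattern in practice.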
Dave Baggett
Dave Baggett, Naughty Dog (employee #1), ITA Software (co-founder), inky.com (founder)

Here's a related anecdote from the late 1990s. I was one of the two programmers (along with Andy Gavin) who wrote Crash Bandicoot for the PlayStation 1.

RAM was still a major issue even then. The PS1 had 2MB of RAM, and we had to do crazy things to get the game to fit. We had levels with over 10MB of data in them, and this had to be paged in and out dynamically, without any "hitches"—loading lags where the frame rate would drop below 30 Hz.

It mainly worked because Andy wrote an incredible paging system that would swap in and out 64K data pages as Crash traversed the level. This was a "full stack" tour de force, in that it ran the gamut from high-level memory management to opcode-level DMA coding. Andy even controlled the physical layout of bytes on the CD-ROM disk so that—even at 300KB/sec—the PS1 could load the data for each piece of a given level by the time Crash ended up there.

I wrote the packer tool that took the resources—sounds, art, lisp control code for critters, etc.—and packed them into 64K pages for Andy's system. (Incidentally, this problem—producing the ideal packing into fixed-sized pages of a set of arbitrarily-sized objects—is NP-complete, and therefore likely impossible to solve optimally in polynomial—i.e., reasonable—time.)

Some levels barely fit, and my packer used a variety of algorithms (first-fit, best-fit, etc.) to try to find the best packing, including a stochastic search akin to simulated annealing. Basically, I had a whole bunch of different packing strategies, and would try them all and use the best result.
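A bare-bones version of one such strategy — first-fit-decreasing — sketched in Python (sizes in KB; this is just one of the many heuristics a real packer would race against each other):

```python
def first_fit(sizes, page_size):
    """First-fit-decreasing bin packing: sort objects largest-first,
    place each in the first page with room, open a new page if none fits."""
    pages = []
    for size in sorted(sizes, reverse=True):
        for page in pages:
            if sum(page) + size <= page_size:
                page.append(size)
                break
        else:                       # no existing page had room
            pages.append([size])
    return pages
```

Since optimal bin packing is NP-complete, the practical move is exactly what the answer describes: run several cheap heuristics (plus randomized search) and keep whichever result fits in the fewest pages.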

The problem with using a random guided search like that, though, is that you never know if you're going to get the same result again. Some Crash levels fit into the maximum allowed number of pages (I think it was 21) only by virtue of the stochastic packer "getting lucky". This meant that once you had the level packed, you might change the code for a turtle and never be able to find a 21-page packing again. There were times when one of the artists would want to change something, and it would blow out the page count, and we'd have to change other stuff semi-randomly until the packer again found a packing that worked. Try explaining this to a crabby artist at 3 in the morning. :)

By far the best part in retrospect—and the worst part at the time—was getting the core C/assembly code to fit. We were literally days away from the drop-dead date for the "gold master"—our last chance to make the holiday season before we lost the entire year—and we were randomly permuting C code into semantically identical but syntactically different manifestations to get the compiler to produce code that was 200, 125, 50, then 8 bytes smaller. Permuting as in, "for (i=0; i < x; i++)"—what happens if we rewrite that as a while loop using a variable we already used above for something else? This was after we'd already exhausted the usual tricks of, e.g., stuffing data into the lower two bits of pointers (which only works because all addresses on the R3000 were 4-byte aligned).
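The low-bits-of-pointers trick, simulated with integer addresses in Python (on the R3000 the two free bits come from 4-byte alignment, as noted above; function names are mine):

```python
TAG_MASK = 0x3   # two low bits are always zero in a 4-byte-aligned address

def tag_ptr(addr, tag_bits):
    """Stash 2 bits of data in the low bits of an aligned address."""
    assert addr % 4 == 0 and 0 <= tag_bits <= 3
    return addr | tag_bits

def untag_ptr(tagged):
    """Recover the original address and the stashed bits."""
    return tagged & ~TAG_MASK, tagged & TAG_MASK
```

The data rides along for free, but every dereference must remember to mask the tag back off — which is exactly why it's a last-resort trick.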

Ultimately Crash fit into the PS1's memory with 4 bytes to spare. Yes, 4 bytes out of 2097152. Good times.

Boris Chuprin
Boris Chuprin, Programmer, retrocomputing enthusiast from Belarus(ex-USSR)

Let's start with the size of executable code.
First of all, I must say that 8/16-bit game consoles typically provided no software libraries at all. Home computers did, but their ROM subroutines were mostly used just for accessing tape or disk storage.
Yes, most of the code was written in assembly, but, more importantly, there was no software bloat. Basically, modern software developers can afford to create programs that take tens of megabytes in order to save on development time. They include a 5 MB external library to use 5 KB worth of functionality from it; they use templates that generate tons of very similar code to avoid writing a few extra lines by hand. Modern compilers generate 2 KB of code where 200 bytes would suffice, simply to make it 5% faster or to ensure compatibility with several different processors. Since a few extra megabytes of RAM change nothing, memory gets sacrificed for trivial things. Just 2 examples: the C++ language, where a small, neat-looking program can explode into megabytes of code due to hidden calls to inlined overloaded operators, parent constructors, templates, etc.; and the modern web page that downloads huge 3rd-party libraries just to let you read a simple 1000-word article.

Graphics.
First reason: fewer colors. A modern JPEG picture typically uses 24 bits (3 bytes) per pixel, but old games typically used 1-4 bits per pixel. Second reason: a huge amount of graphics reuse. Old game consoles did not use pictures wider than 8 or 16 pixels; they displayed that data multiple times. Everybody knows that plain text files take much less space than colorful pictures — you need only a few KB of characters to fill the whole screen. Same with the NES and other game consoles. Each 8x8 tile is basically a custom text character that gets reused multiple times. Moving objects are also made from tiles, but you can assign a separate position to each.
Hardware will even mirror and recolor these tiles at no performance or memory cost, adding even more reuse opportunities. You only need a few tiles to generate a huge world!

Music.
In '80s games, music wasn't pre-recorded but generated with various synthesizer chips that could produce simple tones at a specific frequency and volume. Obviously, you need much less space to record a sequence of commands for an instrument than to record the sound produced by that instrument. Even today you can record a musician's performance into a MIDI file that is only 10-100 KB long and replay it on the same instrument to get exactly the same output.
Even with very efficient encoding there wasn't enough space for long, complex compositions, so music in very old games tends to be short and repetitive, if present at all.

Graham Luke
Graham Luke, artificial intelligence is coming

A lot of this stuff has not been lost. Books document software development under tight constraints. Even working in mobile today can test your ability to program for low-powered hardware.

There are probably thousands of tricks they used - way more than that, actually. You have to be so much more thorough with memory management. You are more careful with your data structures, and you avoid copying stuff around too much.

I remember a story I read, written in a good book, about the creators of the original Doom and Wolfenstein 3D. Do you know those guys? They were just starting out - hadn’t made a shooter yet - and they were trying to port Mario to a PC. Back then, PCs couldn’t handle smooth-scrolling platformers. They just weren’t fast enough.

So, John Carmack (one of the guys) had the enterprising idea of saving draw time by only drawing what moved on the screen. He noticed that most of Mario (the game) had little movement and a lot of the background stayed exactly the same over time. So, he wrote an algorithm that located the parts that stayed the same and didn’t redraw those sections every frame. This saved enough time to make Mario portable to PCs.
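The idea can be sketched as a dirty-tile comparison in Python — note this is a simplified illustration; Carmack's actual adaptive tile refresh was more involved:

```python
def dirty_tiles(prev, curr):
    """Compare last frame's tile grid to this frame's and return only
    the cells that changed -- the ones that actually need redrawing."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, tile in enumerate(row)
            if prev[r][c] != tile]
```

If only a handful of cells change per frame, the renderer touches a handful of cells instead of the whole screen — that was the saved draw time.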

The team actually ported a full level, told Nintendo, then were told to F-off by Nintendo. So, they built their own platformer instead, called Commander Keen. This earned them enough money to fund Wolfenstein 3D. id Software is still around today. They made all of the Quake games. Doom came out recently, as a remake, and isn’t too bad.

See how most of the screen is just blue? Only 30% of the screen changes. Carmack’s algorithm could make the game 3x as fast (at a guess).

Also, if you notice, the bushes and the clouds are the exact same shape.

That way, memory is saved for the image. Only the color changes :). Though this second idea was done by Nintendo, obviously, not id Software. The first idea was all Carmack.

Honestly, today, when we scale a system to go from 10 users to 10 million, we deal with many of the same problems. Our code runs in time X, but we want it to run in 0.1X. We make 1 million SQL queries a day; we would rather make 200,000. How is that different?

Remember Y2K, when some people said technology would explode and banks would crash? We used to store the year portion of dates in all software with just 2 digits, so 1996 would be 96; the 19 prefix was implied. Then 2000 rolled around - it was on the horizon - and suddenly we were worried about software crashing. The cost of adding an extra 2 digits to a date was so high, in memory, hard drives, network bandwidth (whatever), that we took a shortcut that bit us in the butt later.

Though this issue was unavoidable. The cost-saving was worth it.

I think we will always be optimizing and the nature of those optimizations will always be changing. However, the fundamentals stay the same. Find repetition and remove it with cleverness.

Ed Ahrenhoerster
Ed Ahrenhoerster, When I was a developer we carved 0s and 1s in stone

Occasionally they did crazy things. One example (not strictly video game development, but it gives the gist) was code I inherited for a CAD (computer-aided drafting) package. It had to run on a PC with 64K of RAM. Those guys did everything they could but were still a couple hundred bytes shy of being able to make it work.

So they did a careful analysis of the PC's operating system and figured out which parts of the OS weren’t necessary to run the CAD program. They then had their code overwrite those specific sections of memory. Of course this meant you had to reboot the PC when you finished with your CAD work, but since it was one of the first CAD programs, engineers happily put up with that annoyance.

But really that was the exception. The major answer is much more boring. Developers then were conscious of memory in every single thing they did. They focused on it constantly.

You mentioned JPGs, which is an excellent example. Today game developers spend all their time focusing on quality of images, how lifelike things are, etc. They give almost no consideration to how big they are. Why not? Because systems have so much memory it (almost) doesn’t matter. Games sell because of all kinds of reasons, none of which have anything to do with how much memory they need.

In the early days the opposite was true. Games didn’t run unless you made them fit within the system. People rarely buy games that don’t run. :) So every single image had tons of time spent on it making it as small as possible. You’d have meetings spent discussing various versions of images, balancing looks vs memory. You’d have developers spend days rewriting and rewriting code so that it compiled down to a smaller footprint. All kinds of things like that.

If developers today spent a third of their time worrying about nothing but how much memory things used, you’d see amazing reductions in memory usage. Of course you’d also have a much simpler game. It could still be a very fun game - the ridiculous number of hours I spent playing Ms. PacMan attests to that. But you’re not going to get World of Warcraft by focusing on memory.

Marco Alvarado
Marco Alvarado, 20 years in the software industry - Security Specialist
I don't have the same experience as all the wonderful professionals who wrote the answers already provided. They are really inspiring.

When I was in university, at the start of the '90s (although I had used computers for some time before then), it was common for us to create our own graphical user interfaces and to go low-level, inventing clever algorithms to use as few resources as possible. We weren't down to counting individual instructions, but all we had to work with was a Zenith PC with 256 KB of RAM and a single 5.25" floppy drive, no hard disk - and that had to hold DOS, the programming language, our programs, etc. (My first program drove the pins of an Epson printer directly to draw graphics "by hand".)

And it was enough. I don't remember ever running out of resources; it was just "better" to have more of them, because then we could have RAM disks and other sorts of tools.

Then, moving into the working world - with Windows 3.11 and Oracle CDE, or working directly with Notepad and SQL files against an Oracle DB on a Novell server (I had previously worked on an HP server) - I started having a mixed set of feelings: (1) how powerful computers had become, and (2) what a waste of resources.

Then I went through different stages, but usually working in the low-level area, with C, C++, assembly ... for commercial applications. I made my first transactional engine in 1995, for Windows NT 3.5, and ran into many of the problems others have described for games. In particular, a memory leak made the engine stop working in the production environment every 4 days; I was forced to trace it for many hours to "detect" the leak.

... going forward.

Around 3 years ago I started creating my own infrastructure in C++ on Linux, aimed at developing security applications, including an optimized transactional engine (because everything can be represented as a transaction). It is a POSIX system developed on 64-bit openSUSE Linux. Then, several months ago, I acquired some Raspberry Pi 2s, and I wondered whether it would be possible to "port" my engine to that small machine (HUGE by the parameters others have described here). And, without changing one source line, I now have it working on the RPi 2 with its "current" constrained resources.

The engine works there using a mere 34% of one of the 4 CPU cores and less than 2 megabytes of RAM, and it can perform around 150 transactions per second. The limitation is not the CPU, nor the RAM (it has 1 gigabyte of RAM), but the TCP/IP stack design, something I have been tuning to reach these numbers. As the machine's resources suggest, I could handle around 10 times as many transactions on such a small machine if I rewrote the TCP/IP stack ... so, as you can see, current machines working on current problems have the same kinds of problems described for games in the past.

There is a general lack of understanding about the need to use computing resources correctly. This is not about whether or not I can have a better and/or more powerful device to do the work; it is about work well done.

Even without further optimisation, what I have today will let me do many things that few can match with this type of modular transaction engine. And for a few bucks on good-quality hardware for my customers.