How Computers Work

If you’re anything like me, at some point on your journey toward becoming a programmer, as you’re finding your feet and gaining some real fluency in Ruby, it will strike you - you still have no idea why what you’re doing works. You’re writing words, somewhat English-like ones at that. You may have a vague sense that it all gets turned into 1s and 0s for the computer somewhere. The process is a total black box. I got curious and sought out a high-level, not terribly technical explanation of what was going on inside my computer when I hit run-program, which I will now pass on to you.

Higher-level languages - Ruby, and nearly any other language you will be called upon to use - are created by people, for people. They are meant to be easy to read (even if it doesn't always feel that way). In order for a computer to execute commands written in these languages, your programs must undergo several layers of translation into those 1s and 0s that we all know and love.

But I think it’s worth asking right off the bat, "Why 1s and 0s?" How does the computer parse even that? It turns out that there’s a fairly simple explanation.

You’ve probably been using Boolean values in your code for a while now, and you may remember learning about truth tables in your high school geometry class. For a quick refresher, Boolean algebra uses truth values instead of numbers to perform various operations. Instead of using operators like + and - and values like x or 3, as in regular algebra, Boolean algebra uses operators like AND and OR and values like True and False. So an example of a Boolean expression, similar to 1 + 2 = 3, would be True AND True is True.

Boolean algebra is useful in part because it allows for the evaluation of complex ideas, such as ‘I am a human’ and ‘I am purple’ - clearly one of these two statements is false and so, taken as a unit (‘I am a human and I am purple’) the entire sentence is false. The truth tables for AND and OR are as follows, and if you think about it, they’re just a formal version of the basic logic that you use in daily life.

a       b       a AND b
True    True    True
True    False   False
False   True    False
False   False   False

a       b       a OR b
True    True    True
True    False   True
False   True    True
False   False   False

Now how is this relevant to 1s and 0s? Well, 1s and 0s are a convenient way to represent a universe with only two values, True and False. Let’s say that 0 means False and 1 means True in our Boolean world. These are the exact same truth tables as above, represented differently:

a   b   a AND b
1   1   1
1   0   0
0   1   0
0   0   0

a   b   a OR b
1   1   1
1   0   1
0   1   1
0   0   0
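You can see both views of the same tables in Ruby itself: && and || operate on true/false, while the bitwise operators & and | compute the same thing on 1s and 0s. A quick sketch, using nothing but standard Ruby:

```ruby
# The AND and OR truth tables with true/false...
[[true, true], [true, false], [false, true], [false, false]].each do |a, b|
  puts "#{a} AND #{b} = #{a && b}   #{a} OR #{b} = #{a || b}"
end

# ...and the same tables with 1s and 0s, via Ruby's bitwise operators
[[1, 1], [1, 0], [0, 1], [0, 0]].each do |a, b|
  puts "#{a} AND #{b} = #{a & b}   #{a} OR #{b} = #{a | b}"
end
```

Run it and the two loops print matching tables, one in each notation.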

Okay, so we have our 1s and 0s. As it happens, it’s not too difficult to physically represent the AND and OR operations. You can actually build machines, or in the case of modern computers, circuits, that behave like AND or OR operators. An AND machine might have two buttons that, when pushed, will set a third button into the ‘pushed’ state - thus modeling the ‘True AND True is True’ statement of Boolean algebra. This is how, say, a crazy-dedicated Minecraft player is able to create an entire rudimentary in-game computer from the objects available in that world (true story!). These machines or circuits are called Logic Gates.
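We can model those button-pushing machines as Ruby lambdas that take bits in and put a bit out (1 for ‘pushed’, 0 for not). This is just an illustrative sketch, not how real circuits are specified, but composing the lambdas mirrors how gates are wired together:

```ruby
# Each gate takes bits (0 or 1) and returns a bit.
AND_GATE = ->(a, b) { a & b }
OR_GATE  = ->(a, b) { a | b }
NOT_GATE = ->(a)    { 1 - a }

# Gates compose like wired circuits: NOT(AND(a, b)) is a NAND gate,
# from which (famously) every other gate can be built.
NAND_GATE = ->(a, b) { NOT_GATE.call(AND_GATE.call(a, b)) }

puts AND_GATE.call(1, 1)   # the third button gets 'pushed': 1
puts NAND_GATE.call(1, 1)  # 0
```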

Bear with me a little longer; now we’re getting somewhere. Binary is particularly handy because, in addition to doing Boolean algebra with it, you can do basic math. You’ve probably heard that any number can be represented in binary, but you may not have had it explained how. The way we ordinarily count, using the numbers 0-9, is what's called a base-ten system. This means that we have ten unique characters that represent numbers and if we want to represent a number that’s any bigger than 9, we need to carry a digit to the next column (the tens-place, if we’re back in elementary school) and reset the first column (the units-place) to zero.

Binary works the same way, but instead of being base-ten, it’s base-two and thus only has two unique characters available to represent all numbers. So, where in base-ten, three can easily be represented as the unique character '3', in binary, we would already need to carry one over and represent it as 11. Below, for comparison, are tables explaining how the representation for the number 231 works in base-ten alongside the representation for the same number in binary.

Base-Ten

10^2   10^1   10^0
 2      3      1

So: (2 * 10^2) + (3 * 10^1) + (1 * 10^0) = 231

Base-Two (Binary)

2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0
 1     1     1     0     0     1     1     1

And: (1 * 2^7) + (1 * 2^6) + (1 * 2^5) + (0 * 2^4) + (0 * 2^3) + (1 * 2^2) + (1 * 2^1) + (1 * 2^0) = 231
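You can check both expansions in Ruby, which also has built-in base conversions (Integer#to_s and String#to_i both take a base argument):

```ruby
# The base-ten expansion of 231, written out by hand
puts (2 * 10**2) + (3 * 10**1) + (1 * 10**0)  # 231

# The binary expansion: digits from the 2^7 place down to the 2^0 place
bits = [1, 1, 1, 0, 0, 1, 1, 1]
value = bits.each_with_index.sum { |bit, i| bit * 2**(7 - i) }
puts value                                    # 231

# Ruby will do the conversions for you:
puts 231.to_s(2)                              # "11100111"
puts "11100111".to_i(2)                       # 231
```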

Binary numbers can be added and subtracted much as in regular addition, only in this case:

Binary Addition Rules
0 + 0 = 0
1 + 0 = 1
0 + 1 = 1
1 + 1 = 10 (write 0, carry the 1 to the next column)
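Those addition rules are exactly what a pair of logic gates computes: XOR gives the sum bit and AND gives the carry bit. Here’s a sketch of a so-called ripple-carry adder built from nothing but those rules, with numbers represented as arrays of bits, least-significant bit first (the method names are mine, but half adders and full adders are standard circuit-design terms):

```ruby
# sum bit = a XOR b, carry bit = a AND b -- the '1 + 1 = 10' rule
def half_adder(a, b)
  [a ^ b, a & b]
end

# Chain two half adders so a column can also accept a carry from the
# previous column.
def full_adder(a, b, carry_in)
  s1, c1 = half_adder(a, b)
  sum, c2 = half_adder(s1, carry_in)
  [sum, c1 | c2]
end

# Add two numbers given as arrays of bits, least-significant bit first,
# rippling the carry from column to column.
def add_bits(xs, ys)
  carry = 0
  result = []
  [xs.length, ys.length].max.times do |i|
    sum, carry = full_adder(xs[i] || 0, ys[i] || 0, carry)
    result << sum
  end
  result << carry if carry == 1
  result
end

p add_bits([1, 1], [1, 0])  # 3 + 1 => [0, 0, 1], i.e. 100 in binary
```

Chips in a real processor do this with physical gates rather than method calls, but the wiring follows the same shape.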

So at this point, we have established how Logic Gates can be used to do math and rudimentary logic. Computer processors work by chaining millions of these gates together to perform complex operations. The nitty-gritty of how all of this happens is fascinating, but more than we need to answer our main question here.

We've established at a basic level why every programming language must eventually be translated down into binary. Hopefully it’s also clear that programming in binary would be terrible. Just imagine a keyboard with only a one and a zero key!


You might have heard of Assembly Code, a programming language that uses mnemonic codes, each standing in for a single binary instruction. It’s what’s called a low-level language, meaning it provides minimal abstraction from the native structure of your computer’s processor. When it’s run, it’s converted by a program called an assembler into executable binary. It’s a decided improvement over writing programs in binary or hex code (a base-16 system sometimes used as another low-level abstraction over binary), but it’s not really a picnic to write in either. Here’s a classic ‘Hello World’ program written in Assembly for a Mac:

; Program can be run with the below commands in the terminal on a Mac. Each command
; is explained in the lines above it.
;
; nasm is a particular assembler that ships with most current Macs: http://www.nasm.us/
; The nasm command specifies that you want to use that assembler to assemble your
; code, the -f flag lets you choose the output file format, and macho is Mach-O,
; the object file format used by OS X.
;
; nasm -f macho helloworld.asm
;
; nasm is able to produce executable files directly, but that's not what we've done
; here. As such, you need to 'link' the object file you just output, i.e. turn it
; into proper machine code. The reason for this intermediary step is to allow
; you to use libraries or multiple files in assembly, and then have the linker turn
; it all into one executable file. ld does the actual linking, and the arguments
; just let it know what kind of system you're on, what file type and file to act
; on, and what the name of the output executable should be. The linker will pick
; a default OS X version if that argument is left out.
;
; ld -macosx_version_min 10.7.0 -o helloworld helloworld.o
;
; And finally, we run our executable!
; ./helloworld

global start                  ; Makes program available to linker

section .data                 ; Data section of program
  message db "Hullo, World!"  ; Setting message variable to the string we want to print

section   .text               ; Indicates code section of program is below
start:

; section that deals with printing our string

  push dword 13       ; Pushes the length of the string onto the stack.
                      ; When you execute a program, a chunk of memory that's all
                      ; physically contiguous is set aside to process the
                      ; program. This is called the stack.

  push dword message  ; Pushes the string onto the stack

  push dword 1        ; Pushes the file descriptor - in this case 1, which is
                      ; standard output - onto the stack.

  mov eax,0x4         ; Now that I've prepared the arguments, this loads the
                      ; number of the 'write' system call into eax. eax is a
                      ; register (a small chunk of storage space on your
                      ; computer's central processing unit) that is 'general
                      ; purpose' and thus can be used freely in your program.

  sub esp,0x4         ; sub is the command for integer subtraction. It subtracts
                      ; the value of the second operand (0x4, i.e. 4 bytes in this
                      ; case) from the first operand, and stores the result in the
                      ; first operand, sort of like 'variable -= 4' in Ruby. We've
                      ; been sneaky here, and esp is the stack pointer. There are
                      ; some complicated concepts rolled up in that, but broadly,
                      ; pointers are addresses in memory where data is stored,
                      ; and the stack pointer holds the address of the current
                      ; top of the stack (its lowest used address, since the
                      ; stack grows downward). Here, I've moved it down 4 more
                      ; bytes to make extra room on the stack. OS X needs this
                      ; extra space before a system call.

  int 0x80            ; This is a software interrupt that lets the operating
                      ; system know an event has occurred - here, that it should
                      ; perform the system call we just set up. All processes
                      ; start out in user mode, which has limited privileges,
                      ; and this grants access to the powers of the operating
                      ; system, such as, in our case, producing output.

  add esp,16          ; This time we're /adding/ 16 bytes to the stack pointer:
                      ; the three 4-byte arguments we pushed plus the 4 bytes of
                      ; extra room. This cleans up after our call, letting the
                      ; computer know that everything below this point is garbage
                      ; and the space can be reused.

; exiting

  mov eax,0x1         ; This is the system call number for exit.
  int 0x80            ; Letting the system know an event has occurred, thus
                      ; triggering the exit

Note that I specified this code example was written for a Mac - the physical layout of your computer's Logic Gates matters when it comes to writing Assembly code. In fact, the first computers were constructed for specific purposes and could perform only the task for which they had been designed and built. General-purpose computers were in and of themselves a technical achievement. The x86 processor architecture that Macs use is based on a central processing unit (CPU) that Intel put out in 1978. That CPU represented another important step toward computers as you know them today, as it allowed the processing power of 'business computers' of the day to coexist with features of 'home computers' such as color graphics and sound. It also allowed Assembly programs written for one processor in the x86 family to be used on another. All in all, these features, along with IBM's prominence as a company, helped x86 family processors dominate the personal computer market.

Compare that 13-line Assembly program, where I spent half my time managing the stack, to Ruby’s one-line puts "Hello, world!", and the appeal of higher-level languages becomes immediately obvious. If we can turn Assembly into binary via an assembler, why not make something that can do the same thing with much more intuitive, readable instructions? Hence, high-level programming languages, chock-full of abstraction, were born.

These high-level languages can be either compiled or interpreted. Ruby is interpreted, as are Python and JavaScript; C++, Go, and Haskell are compiled. The difference is that a compiled language is translated by a compiler into binary ahead of time, and that binary is then run to execute the program, while an interpreted language is translated into binary and executed line-by-line from your source code at program runtime. In practice, interpreted languages tend to be more flexible and are dynamically typed (meaning variable types are checked at runtime and you do not need to specify them in your code), while compiled languages determine variable types when they are compiled, run much faster, and can be more dependent on the platform that you are using. It is also becoming more common to implement some combination of the two, compiling a program into an intermediate form (sometimes called bytecode) and then running that code via an interpreter. Java and C# are both examples of languages that are compiled to this intermediary state.
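You can actually peek at an intermediate form in Ruby itself: the reference implementation (CRuby) compiles your source into YARV bytecode before its virtual machine runs it. This sketch assumes you’re on CRuby - the RubyVM constant isn’t available on alternative implementations like JRuby:

```ruby
# Compile a one-line program to YARV bytecode and print its instructions.
# (CRuby-specific: RubyVM does not exist on JRuby or TruffleRuby.)
iseq = RubyVM::InstructionSequence.compile('puts "Hello, world!"')
puts iseq.disasm
```

The output is a listing of low-level instructions (things like putself and putstring) - a level of abstraction sitting between your Ruby source and the machine.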

This is far from a comprehensive explanation of how computers or programming languages work, but I hope it serves as a satisfying overview for beginners and makes what actually happens when you execute a program seem a little less mystical.

This article is deeply indebted to Vikram Chandra’s Geek Sublime. It’s not strictly a technical book, but I’d highly recommend it for its easy-to-understand explanations of fundamental computer concepts and analysis of programming’s relationship to language.
