
Everyone uses computers nowadays. The ’90s, when computers were the realm of students from grade school through college (who mostly used them to play Oregon Trail) and scientists (who just kept them around so they could play chess with each other after World War III), are a thing of the past. Now anyone can see the explicit content you post online, from your baby brother to your 90-year-old grandmother. Computers, and, by extension, computer software, are here to stay.

Being a computer engineering student myself, I know a thing or two about computers. Like most things in science and engineering, they were conceived way before they existed. Intelligent people were so bored with society that they had nothing better to do than theorize about things no one could possibly build. Well…scientists and whoever decided to carve the Crazy Horse Memorial, but moving on… Ideas from Babbage and Lovelace described what a computer could do decades before any of the technology existed to let anyone build one. Even when the technology arrived, humanity blundered on as clumsily as we tend to do, producing cumbersome, slow, inefficient, and downright laughable “computers” (if you could even call them that) that were useless save for one application.

Times sure have changed. If all modern computers were like the old models we used in World War II, we’d probably need another Earth just to fit all of the functionality that’s on your iPhone…and that’s your iPhone alone, not counting all the iPhones on Earth. Something that’s stayed remarkably the same over the years, however (and not surprisingly), is computer languages. Oh sure, some are quite a bit different from older ones, but overall the principles are the same. Once you’ve mastered one, you automatically have a “leg up” on all the others.

So, with all these computers in the world, it’s understandable more people might want to know about them for a hobby or even a profession. On that note, I present to you a brief introduction to the cumbersome, tedious, boring, reading-intensive…I mean, fun, exciting, innovative world of computer programming.

But, before we can begin, we have to know a few major rules about computers. They’re things you have to know down pat from the start, or your life will be hellish. And right off the bat is the biggest one…

Myth: Computers are smart.

I mean, they have to be, right? Look at how James Cameron predicts them annihilating humanity when we’re dumb enough to let them use our nuclear weapons. Look at how HAL 9000 can interpret slang and read lips. Look at how the computer on the U.S.S. Enterprise can do anything you ask it.

…Let me drive home something to you that you should take away if you learn nothing else from me, because it’s 100% true.

Fact: Computers. Are. STUPID.

Programmers and computer engineers laugh at movies like “The Terminator”. Granted, some science fiction can be pretty “predictive”…but some can’t. Just as a physicist laughs at the idea of hyperspeed, a computer engineer laughs at the idea of computers being smart or ever having “real” AI.

Computers are the stupidest things on Planet Earth. The only things dumber are their users. A computer can (and, more importantly, will) only do what you tell it to do, and absolutely nothing more. Even a small child won’t care whether you tell them “Pick up your toys.” or “Pick up your things.” It’s the same thing to them. A computer, on the other hand, will respond if you tell it “Pick up your toys.”, but if you say “Pick up your things.”, it will crash and stare at you stupidly as if you just told it to make pancakes on Venus.

The reason computers are dumb is quite simple: computers have no brains. And contrary to what the Scarecrow from “The Wizard of Oz” might lead you to believe, you can’t do sh’t without a brain. Put an open flame in front of a fly and it will stop, or walk a short distance to one side to go around it before continuing on. A computer would walk right into the fire, stopping only to notify you that it’s burning before it dies. Maybe not even that.

Programmers would like computers to have artificial intelligence…if only enough to realize, after 10,000 lines of code, that a semicolon is missing from the one line where the programmer forgot it. But, of course, the computer doesn’t know that and will crash every time, while the programmer spends the next three weeks looking at every single line of code and changing a million things the wrong way trying to find out where the semicolon is missing.

That brings me to my second point.

ALL computers, and I don’t care what year, what era, whether it’s a cheap calculator or a desktop or a supercomputer, can do TWO things and ONLY TWO THINGS.

1. Recognize a high or a low signal.

2. Send a high or a low signal.

THAT’S ALL. Congratulations, you now know the secret to every computer on Earth. This is the “big trick”, the way it does everything from making a Microsoft Excel spreadsheet to taking the natural log of a number to letting you watch YouTube. This is literally the only thing the computer is doing and the only thing it realizes it’s doing. All chemistry comes down to electrons…all biology comes down to making proteins…all physics comes down to the four (or three) fundamental forces…and all computers come down to that: high signals and low signals.

Kind of a let-down, eh?

Given enough time, I could tell you how you go from low signals and high signals to adding two numbers…but that would take a while and gets more into hardware. For now, I’ll talk about programming.

As I just said, computers are stupid: they can only recognize high and low signals and send high and low signals. High…low…that’s it. Two values. So…it doesn’t really make sense to use a standard number sequence (0–9), does it? We only need two digits: 0 and 1. If we wanted to screw everyone up, we’d say 1 means a low signal and 0 means a high signal. But we’re nice, so we’ll say 0 is low and 1 is high. Binary numbers. Basic to all computers.

Time for another dirty little secret:

Myth: There are all sorts of different programming languages: C, C++, C#, Java, Python, etc.

Fact: There is only one programming language: machine.

Don’t believe me? Try it. Look at your computer right now and say: “Computer” (add Scotty’s accent if you want), “I want you to add 4 and 5 and tell me the answer right now.” The computer should just resume doing what it was doing before, namely running and scheduling all of its background processes. Now look at it and say: “Computer, void main, open bracket, int one equals four, semicolon, int two equals five, semicolon, int three, semicolon, int three equals int one plus int two, semicolon, printf, open parenthesis, quotation mark, percent d, end quotation mark, comma, int three, close parenthesis, semicolon, close bracket.” Assuming no one is around to give you a weird look, and regardless of whether your computer runs on the C language or not, it’s not going to do anything. You need to send these commands into it via an electric wire. And since you can’t say “words” on a wire, what will you do? Turn signals up and down to represent your words.

There you go. The computer only recognizes high and low signals AKA machine.

However, you can’t program in machine. Oh, you can try if you want, and you’ll only need two keys and nothing else for it. A sample would look something like this (bits made up purely for illustration, not a real program):

01001000 11000101 00000000 10110001 11100010 00000001 01001111 00000000

Having fun yet? I’m still trying to tell the computer just to “start main”. Now what happens if one of those “0”s is supposed to be a “1” and I mistyped it?

Yeah…nobody programs in machine unless they’re certifiably insane.

A more “useful” language is an abstraction of machine called “assembly language”. And it is, in fact, an abstraction…not something that can actually be run on a computer directly. Here’s a dirty little secret: when you run a computer program, you assume you’re seeing the end result of the code rather than its inner workings, which, indeed, you are. But even the most advanced programmer on Earth using the most rudimentary language is doing the exact same thing. They just see a bit more of the “overhead” and “details” than you do.

“Assembly language” has the lowest level of abstraction. Basically, the only way you can use assembly as a programming language is if you know exactly how the hardware of a computer is set up. Assembly doesn’t tell the computer what to do directly in terms of 0s and 1s, but you’re still telling the computer what to do in terms of hardware…just abstractly. Going back to our example, it would look something like this (forgive me for writing pseudocode rather than exact assembly syntax).

Load register s0 with the constant 4.

Load register s1 with the constant 5.

Add the contents of s0 and s1 in the accumulator.

Transfer the result from the accumulator to register s2.

Transfer the contents of register s2 to the output register.

Much simpler, right? But what the hell are registers s0, s1, and s2, or, for that matter, an accumulator? They’re all actual hardware components in your computer. This is what your computer actually does every time you tell it to add 4 and 5 and give the result. To the computer, it’s still just high signals and low signals, but this is what is happening, in essence.

Naturally, the hardware will be different for every computer, but the nice thing about assembly is that all computers are still doing the same basic set of instructions: load address, load constant, load accumulator, shift register, jump, add…and that’s all your computer does. It builds these instructions from the highs and lows, and it ends up with this set of instructions. These, in turn, get combined to do everything from adding two numbers to videotaping you breaking your face riding a skateboard on webcam.

But, again, programmers don’t like this…and, to a reasonable degree, they shouldn’t. Engineering and programming are only good if they’re as modular as possible and put as many things as possible “under the hood”. So…assembly language is still there. Part of what your computer does whenever it “compiles” a program is take all of the complex, abstract commands from your code and translate them into what they mean in assembly, which, in turn, gets turned into 0s and 1s so the computer knows what to do. All other computer languages are highly abstracted and eventually turned into assembly equivalents…which is easy. If you want me to, I can explain how multiplying two numbers in a computer is literally nothing more than adding and shifting. No magic, no new hardware…just that. That’s what most new operations and functions of a computer are: new things made out of assembly building blocks. Yet to keep things easier on programmers, they use things like C++ and Java.

And there you have yet another dirty little secret: programming languages themselves are “cheats” to make programming easy. No matter what language you use, it will eventually make roughly the same assembly. Now it’s just easier to read. C is a fairly “low-level” high-level language (it’s an oxymoron, I know, but look into it enough and you’ll realize I’m telling the truth), and this is how it adds two numbers:

#include <stdio.h>

int main(void) {
    int int1 = 4;
    int int2 = 5;
    int int3;
    int3 = int1 + int2;
    printf("%d", int3);
    return 0;
}
Ok, a few more lines than the assembly version (not really, since I gave you the “pseudocode” version of the assembly…and if you get more into programming you’ll see that term a LOT), but you can roughly tell what’s going on, can’t you? If I tell you I’m adding 4 and 5 and printing the result to the screen, you can “pick it out” from this, can’t you? And C is one of the “harder” languages compared to the modern ones. You can “see more of what is going on”.

That brings us to the final myth for today:

Myth: All higher abstracted languages are superior to lower abstracted languages.

Fact: Well…yes and no.

I’d have to go into more detail to show you precisely why programmers will continue to work directly with assembly and C as opposed to Java and Python. (Although, in the case of C, its survival is ensured for the time being because that’s what UNIX and Linux operating systems run on.) The short answer is that because a lower-level language lets you see more of the “nitty-gritty cogs and gears” of what’s going on, you can naturally do more, and do it more efficiently, than you can with higher-level languages. In some cases, where code size is at a premium, you must program in assembly or it’ll never work.

To illustrate with a better example: Say you are in a group of writers. Your employer comes up to you and divides you into two groups. Both groups are charged with writing a story. One group is allowed to write whatever they can think of provided they don’t make up new words. The other group is only allowed to cut out sentences from previously-written works and paste them together to make a story. The employer wants to see who does the best job.

Naturally, the first group will win. The second group has a tremendous number of options available to them from all of human history, and parts of their story might end up being wonderfully good, especially if they can find sentences that perfectly or almost perfectly say everything they had in mind. But the first group has the advantage of making up totally new sentences. Their restriction is not the sentences that have already been written but the words of the English language themselves. Both could make a good work…but the first group can always make a better one.

The same principle applies to programming. Java (or even C) programmers have a wonderful set of nearly-all-encompassing tools to make a program work. However, assembly programmers aren’t restricted to those and can always make up a slightly better “tool” to do what they want. And if it was humanly possible, a machine language programmer would run rings around an assembly programmer as they can pretty much make the computer do anything not restricted by hardware limitations.

So there you have it…programming in a nutshell. This is what it all boils down to. All other programming builds upon this, and every new language essentially takes something an older language did and abstracts it. (Good example: C lets the programmer manipulate one “character” (A–Z, a–z, 0–9, etc.) at a time, so in order to write the famous message “Hello World!”, it stores the values in a character array, where each character can be manipulated individually. By comparison, C++’s designers decided there was no need for all the overhead of making a character array and just made a new variable type, the string, which does the same thing…except now you don’t know how the characters are being stored and have a harder time manipulating individual ones within the string. See what I mean about lower-level languages having their advantages?)

From here on in, it all depends on where you go, which is why this is only an introduction. Although a lot of skills transfer, what you specifically have to do to make anything happen depends on which language you learn. But…if you found all of this fascinating rather than brain-frying…go play! Find a language you like, defend it until the day you die (and you will), and get to work writing your own game in the Zork universe!

And now I must go…

I need to find where I put a parenthesis instead of a bracket in my 500-line program…