I’ve got a little story to share, about computers and hackers, and some stuff that eventually relates back to who I am and how I see the world – as well as, I’m sure, how a bunch of other people out there see the world. If you’re not one of those people (and you probably aren’t, since there are comparatively few of us), then perhaps this will give you some insight, or at least amuse you for a while, and maybe even enlighten you a little bit.
One of the last courses I took in college was an electronics class where we built a computer. Now, to some people, “building” a computer means buying a few parts (maybe from a store, but more recently probably from some on-line retailer or something) and putting them together.
That’s not how we built the computer.
Our computer was a very simple 4-bit computer – not even the 8 bits of the early home computers that people love to remember. We built it entirely from scratch, on a “breadboard,” where you could plug in wires and resistors and so forth. The only things we didn’t build entirely from scratch were the arithmetic chip itself (the “CPU,” if you will) and the seven-segment LED display (like the one you see in digital clocks, which lets you display a single digit). Everything else was built by us.
We had to design the bus that would transfer bits (really just pulses of electricity) between components, the interface to the LED display, the interface to a once-writable PROM chip (the “BIOS” for our simple computer), and the timing circuit that would keep it all working.
A timing circuit was necessary because computers don’t think like we do – in streams of thought that just “flow.” Computers think in discrete steps, one thing at a time, much like, say, a mechanical clock. A mechanical clock doesn’t “know” that it is any one given time (say, 1:01 pm) – it only “knows” that gear #1 has tooth number 42 meshed with cog #4 at position 3, or something equally obscure. The positions of those gears and cogs are all the clock “knows” (if it can be said to “know” anything). We recognize those positions as a time strictly because we built the system to work that way: we abstracted the concept of “time,” the relationship of seconds, minutes, and hours, into a set of physical objects – the gears – arranged so that they would represent “time” as we understand it. To the clock, none of that matters. It just moves the gears the way it was designed to, over and over again. If the gears were designed improperly, the time would be wrong, but the clock doesn’t care (again, if a clock can even be said to “care” about anything – we use emotions as metaphors for machines because that’s how we work, and how our language works). It just continues to move the gears and cogs in the way that it was made to.
A computer works the same way. The electronic “clock” is really just a regular pulse of electricity – it pulses in a set pattern, say once every 1/10 of a second (our example was a very slow computer). And on every pulse of that clock, electricity would pulse down wires and through resistors and transistors and diodes and so forth, in a very precise and controlled manner. And those pulses would have effects on certain things – a pulse through a certain diode would have the mechanical, electrical effect of changing the path for electricity to flow through the system. This change would take effect on the next pulse, when electricity would move in a slightly different way, and so on, again and again – just like the gears and cogs in a mechanical clock.
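If it helps to see that in code rather than wires, here’s a tiny sketch in Python (nothing we actually wrote in the lab, just an illustration): a 4-bit register whose contents only change on a clock tick, with the next state computed from the current one – nothing at all happens between ticks, just like the gears between seconds.

```python
# Toy illustration of a clocked circuit: state changes only on a tick.
# A 4-bit counter: the "next state" logic is just "current + 1",
# and between ticks the register sits frozen, like gears between seconds.

def next_state(state):
    """Combinational logic: compute what the register will hold next."""
    return (state + 1) % 16  # wrap around at 4 bits

state = 0
for tick in range(5):          # five pulses of the clock
    state = next_state(state)  # the register latches on the pulse
    print(f"tick {tick}: state = {state:04b}")
```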
However, since electrical components are very small, we were effectively creating a very, very complex mechanical clock, with lots of gears and cogs inter-meshing in different ways. Some of these different ways we (as humans) interpreted as ones and zeros – the “binary” language on which all computers are built. (We use binary in computers because it’s easy; computers work with electricity, and it’s easy to design electrical components to work one way or another, which is to say, on or off, and that is, effectively, binary. Designing components to work 3 or more ways is really, really hard, and often imprecise, which is why we don’t do it.)
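As a quick illustration of that on/off-to-number convention (again, a sketch, not anything from the lab), reading a row of four wires as a binary number is purely a matter of interpretation:

```python
# A wire is either carrying voltage (on) or not (off); we read a row of
# four such wires as a 4-bit binary number, purely by human convention.
wires = [True, False, True, True]  # on, off, on, on (most significant first)

value = 0
for w in wires:
    value = value * 2 + (1 if w else 0)

print(value)  # the pattern on-off-on-on is 1011 in binary, i.e. 11
```

The wires don’t “contain” the number 11 any more than the clock’s gears contain 1:01 pm – we just agreed to read them that way.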
So, building on this foundation of electrical pulses, controlled by a clock signal (really just a regular, timed pulse of electricity), we built up the idea of binary code – ones and zeros. We had an adding chip that would interpret these pulses, 4 at a time, as representations of numbers. It would then – according to a fairly simple design internally, but more complex than we could build in our lab – produce an output of signals that were different from the input it received. This output was interpreted by us as numbers that had been added together (again, because we – that is to say, humans – built it that way, just like our mechanical clock).
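The inside of an adding chip like ours can be sketched in a few lines of Python, building the adder out of nothing but logic gates (AND, OR, XOR), much as the real chip builds it out of transistors. This is an illustration of the idea, not the actual chip’s design:

```python
# A 4-bit ripple-carry adder built only from logic gates (AND, OR, XOR).

def full_adder(a, b, carry_in):
    """Add two bits plus a carry bit; return (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add4(x, y):
    """Add two 4-bit numbers given as bit lists, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x, y):           # the carry "ripples" up the bits
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry             # the final carry is the overflow bit

# 0101 (5) + 0011 (3) = 1000 (8); bits listed least significant first
bits, overflow = add4([1, 0, 1, 0], [1, 1, 0, 0])
print(bits, overflow)
```

The chip never “adds” in any human sense; each gate just switches the way it was wired to, and the pattern that falls out is one we agreed to read as a sum.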
Once we had a circuit that could, metaphorically, add numbers together, we had the basic requirements for a computer. We looped circuits back onto one another, so that the output of some numbers added together would influence the next “operation,” and through some complex manipulation (mostly just building electrical paths in such a way that they followed the rules of logic as set out by us), we had a computer.
We programmed another chip – called a PROM, for “programmable read-only memory” – with some numbers that we had put together on paper. These numbers were (according to a code designed by humans) representations of “instructions,” abstract concepts that we used to simplify working with the computer. We wrote the instructions to do something, translated the instructions into their numerical equivalents (you’ll understand now why the first computers were built by governments to make and break codes), and then used electricity to “burn” those numbers into the PROM, so that when we were done, and electricity was applied to the PROM, our numbers would come out the other end (again, as just pulses of electricity). These would be interpreted by our adding unit, which would “execute” our instructions (computing is just full of abstractions like this – abstractions and metaphors build on top of one another) and send out signals representing the “results” of our “instructions,” which would be interpreted by another circuit, which would send the appropriate signals to our single LED block. If we had done our work right, the LED block would light up in a certain pattern which we would interpret as numbers or (if we had a better LED), letters.
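Our actual instruction code is long gone, but the idea can be sketched like this. The two opcodes here (1 for “load,” 2 for “add”) are made up for illustration; the point is that the “instructions” are just numbers until some circuit (or, here, a loop) gives them meaning:

```python
# Toy fetch-execute loop over a "PROM": a fixed list of numbers that,
# by a code we invented, mean either "load this value" or "add this value".
# The opcodes (1 = LOAD, 2 = ADD) are invented for this illustration.

prom = [1, 5, 2, 3, 2, 1]    # LOAD 5; ADD 3; ADD 1

accumulator = 0
pc = 0                       # program counter: where we are in the PROM
while pc < len(prom):
    opcode, operand = prom[pc], prom[pc + 1]
    if opcode == 1:          # LOAD: replace the accumulator
        accumulator = operand
    elif opcode == 2:        # ADD: run the operand through the adder
        accumulator = (accumulator + operand) % 16  # 4-bit wraparound
    pc += 2                  # fetch the next instruction

print(accumulator)           # the number we would send to the LED: 9
```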
This was really monumental, although you might not think so. The thing we built was butt-ugly, with wires popping out all over the place. It could only display one digit at a time on the LED, and to program it, you had to go through all those steps and make the PROM – which, once made, could never be changed. If you made a mistake, you had to throw out the PROM and make a new one. Compared to the computer on which I’m writing this (or to the one you are using to read this), our computer was about as sophisticated as smoke signals are to a Ferrari. (A bad analogy, I know, but somehow appropriate, when you think about it.)
What was important here was not the practical applications of this exercise – we certainly weren’t going to go out and work for Intel and design their next big chip or anything. Modern computers are so phenomenally more complex that it’s not even worth making an analogy. Just trust me on this one – they are way more complex. But the important thing is that they still operate in the same basic way, using the same basic rules. By building this 4-bit monstrosity, we now had a deep, fundamental understanding of how a “real” computer worked, on a very low level. It’s like a car, really (how I love car analogies) – anyone can learn to drive it, but a good driver, a really good driver, knows a bit about everything in the car. He may not be able to build one on his own, but he knows (generally) how the engine works, how the steering is designed, how the wheels interact with the road, and so on. By knowing these things, he can use the car more effectively. Likewise, by knowing these things about how a computer worked, we (as computer science students, mostly destined to be computer programmers in life, as I am) could use computers more effectively. It was no longer just a black box that did things – it was “real” to us. We understood it. There was meaning, logic, and sense there. I may not need to know (in fact, I don’t need to know) how my CPU works to write a program in VB or PHP or some other high-level language – but by knowing, generally, how it works, I can program it more effectively.
This is where the essence of a hacker comes into play.
An average person might be satisfied to know, very basically, how a computer works – maybe they know that there’s a CPU and it’s the “brain” of the computer, but that’s about it. They then quite happily use their computer to write documents, manage photos, listen to music, and do other stuff. But to a hacker, that’s not enough. A hacker wants to know how it works – and not just in some general, vague sense. A regular person might be bored with that class that I took – they might think, “yeah, this is all fine and dandy, but I’ll never use this, so why do I need to do it?” A hacker, on the other hand, would be excited by that class; he’d think, “yes, now I’ll finally understand how an ALU works and its relationship to the rest of the CPU architecture, as well as why assembly language works the way it does!”
Now, let me get to the point I wanted to make since the beginning of this story.
I am a hacker. I loved that class that I took back in college. I always want to know how things work. I may be a computer programmer, but I know more about how various things in my world work than most people because I have a hacker mentality. I know a bunch of stuff about how bodies work, about how oxygen is transported in blood by the binding of molecules; I know how many chemical steps are needed for blood to clot; I know the inner workings of my car’s engine, and how the feedback loops created therein make a working, modern engine that has power in a certain RPM range. I know how this article I’m typing right now will get transmitted through a vast network of computers to your screen. I know all sorts of things about how systems work – governments, engines, plumbing, biology – all these things. I’ll never use many of them, but I had to know – because I’m a hacker. This mentality permeates every aspect of my life, and affects how I interact with the world around me. It’s not enough for me to know at the checkout that I owe $16.26; I want to know why and how: the scanner that reads my merchandise, the systems that track prices and inventory for the store, the anti-theft security tags on expensive merchandise that beep annoyingly at you if a cashier forgets to take them off what you bought. It’s not just the background of life to me – it’s part of the world, an amazing and complex place that I simply must understand, even if it’s only a little bit in some places.
In my opinion, that sort of inquisitiveness, that sort of curiosity about the world, is what leads to a well-rounded individual. Not just taking a few classes about psychology and world history to fill a requirement, but actually wanting to know something about subjects that are new to me, even if they are completely unrelated to computers.
That’s what makes a hacker – whether they hack computers, sound equipment, music, sculpture, cars, wood, or whatever. They’re the people who want to know – not just automatons following instructions, but curious, intelligent people with a desire to know things so that they can understand them and use them more effectively. In my opinion, they’re the best people in the world – and there are far too few of them.
But maybe now that you’ve read this, there will be one more.
Peace out, yo.