I Miss My Stick Shift

I miss my stick shift, I really do. I miss being able to row through the gears on a twisty country road. You can approximate a stick shift with a manu-matic (as I have in my Outlander), but it’s not quite the same. In fact, there are some serious shortcomings with automatics of any type, but you can boil it all down to one thing: the ability to select a gear before you need it.

Let me explain. (First, though, a disclaimer – what I’m about to talk about applies largely to small-engine vehicles, i.e., 4-cylinder engines. With bigger engines, the power band is different, so some of the values I’m about to talk about will differ. Much of the same principle applies, though; it just applies at different speeds and engine RPMs.)

Let’s say you’re cruising around at about 35 MPH. In most cars, this is a gray area for the transmission – you could be in 3rd or 4th gear, depending on certain factors. If you’re cruising along a flat road, your automatic transmission will probably have you in 4th gear – the top gear in many cars – like mine. But let’s say you need to accelerate quickly, maybe to take a corner or zip past someone.

In an automatic, all you can do is put your foot down on the gas to get going. The car will detect the increased throttle and try to respond. Since you are in 4th gear, and probably running around 2,000 RPM, this is too low for the gear ratio, so the automatic transmission will have to downshift into 3rd, or maybe even into 2nd, depending on how hard you mash the throttle.
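Since we’re talking gear ratios and RPM, here’s a quick back-of-the-envelope sketch of the numbers above. All the ratios and the tire size are assumptions – ballpark figures for a small 4-cylinder car, not any particular model:

```python
# Rough sketch of why 35 MPH in 4th gear leaves the engine below its
# power band. Gear ratios, final drive, and tire size are assumed
# ballpark values for a small 4-cylinder car, not any specific model.

def engine_rpm(mph, gear_ratio, final_drive=4.1, tire_diameter_in=25.0):
    """Approximate engine RPM from road speed using the common
    rule-of-thumb formula: RPM = mph * gear * final * 336 / tire dia.
    (The 336 comes from converting miles/hour to wheel revs/minute:
    63,360 in/mile / 60 min / pi ~= 336.1.)"""
    return mph * gear_ratio * final_drive * 336 / tire_diameter_in

GEARS = {3: 1.35, 4: 0.97}  # assumed gear ratios

for gear, ratio in GEARS.items():
    print(f"35 MPH in gear {gear}: ~{engine_rpm(35, ratio):.0f} RPM")
```

With these (assumed) numbers, 4th gear at 35 MPH puts you around 1,900 RPM – below the power band – while 3rd puts you around 2,600 RPM, which is why the transmission has to drop a gear before anything exciting happens.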

Sounds all well and good, but what you have to keep in mind is that you’ve already put your foot down – you need to go now. But your car has to wait a moment while it realizes that it can’t go, and then all sorts of hydraulic (or, depending on your car, electronic) systems need to adjust their settings so that the car can shift gears. This takes time – and that is the problem. There’s always a lag with automatic transmissions between when you mash the gas down and when the car actually responds by shifting gears. Now you’ve wasted a second (or two, if your transmission is sluggish) just sitting there, hardly accelerating at all. This may be fine for around-town driving, but if you like spirited driving, it is no fun at all.

With a manual transmission, you know that you’re going to want to accelerate in a moment – because you are the one who’s going to do it. So you push in the clutch and shift gears, and then, when you mash on the throttle, the car is already in 3rd, and you zoom away – keeping the engine in its “power band,” with the RPMs tuned just right. Zoom-zoom, baby!

Now, some of the more astute readers out there might think “but you still have a time delay – you have to shift gears yourself, mash the clutch in, move the gear shift lever, and that takes time too!” And you’d be right. But the important thing to remember is that YOU moved the gears BEFORE you needed them (mechanically speaking). With an automatic, the gear change happens AFTER you need it. That difference is what makes an automatic feel sluggish, while the same car with a manual transmission (and a competent driver, of course) feels “sporty” and responsive.

So what about a manual-automatic hybrid, what some people call a “sport-tronic” or “manu-matic” transmission? Just pop the lever over to “manual” and down-shift, right? Well, not quite.

You see, unlike a manual transmission, an automatic transmission is always “in gear,” so to speak. I’ll spare you the technical details of planetary gear assemblies and so forth, but suffice it to say that in automatic transmissions, the engine is always connected to the drive shaft. In a manual transmission, by definition, when you push the clutch in, you are disconnecting the engine from the drive shaft. The engine is spinning freely, with no load on it. Because of this, you can use the accelerator to bring the engine up to the right speed (RPM) before you re-engage the drive shaft. Thus, when you let out the clutch, the engine is already at the speed (RPM) you need for the most power. An automatic has to struggle through a (very brief) period of going either too fast or too slow for the gear you are in, before things get back “in sync,” so to speak. (The more technical readers out there are going to take me to task over this simplification – bear with me here guys, I know the details and I know this isn’t exact, but I’m trying to make a point here.)

So there you have it – even with a “manu-matic” transmission, there will always be a delay in power delivery when shifting gears, while a manual gives you the ability to anticipate power needs and shift gears accordingly. When someone invents an automatic transmission that can read your mind, maybe this won’t be a problem anymore, but until then… a manual will always win.

(p.s. Let’s leave out of this discussion the “flappy-paddle” shifting cars that actually do have a clutch, but the car controls it, rather than the driver controlling it via a pedal. These sorts of systems are popular on high-performance – and expensive – sports cars, and they work surprisingly well, but the computer is still in control – not you – even though you can force gear changes with the paddles and get the same clutch-related benefits described above. Such systems don’t exist in the “average” car yet, and I don’t know if they ever will, due to their complexity. And even if they do work their way down to everyday cars, as I said, the computer is still controlling the clutch, and it will never be as “smart” as you – the driver – nor will it be able to anticipate your intentions the way you, with full manual control over the gear changes, can.)

The Hacker Mentality

I’ve got a little story to share, about computers and hackers, and some stuff that eventually relates back to who I am and how I see the world – as well as, I’m sure, how a bunch of other people out there see the world. If you’re not one of these people (as you probably aren’t, since there are very few of these people, comparatively) then perhaps this will give you some insight or at least amuse you for a while, and maybe even enlighten you a little bit.

One of the last courses I took in college was an electronics class where we built a computer. Now, to some people, “building” a computer means buying a few parts (maybe from a store, but more recently probably from some on-line retailer or something) and putting them together.

That’s not how we built the computer.

Our computer was a very simple 4-bit computer – not even the 8 bits of the first IBM-compatible computers that people love to remember. We built it entirely from scratch, on a “breadboard,” where you could plug in wires and resistors and so forth. The only things we didn’t build entirely from scratch were the arithmetic chip itself (the “CPU,” if you will) and the seven-segment LED display (like the ones you see in digital clocks, which display a single digit). Everything else was built by us.

We had to design the bus that would transfer bits (really just pulses of electricity) between components, the interface to the LED display, the interface to a write-once PROM chip (the “BIOS” for our simple computer), and the timing circuit that would keep it all working.

A timing circuit was necessary because computers don’t think like we do – in streams of thought that just “flow.” Computers think in discrete steps, one thing at a time, much like, say, a mechanical clock. A mechanical clock doesn’t “know” that it is any one given time (say, 1:01 pm) – it only “knows” that gear #1 has tooth number 42 meshed with cog #4 at position 3 or something equally obscure like that. The position of those gears and cogs is all the clock “knows” (if it can be said to “know” anything). The fact that we recognize those positions as a time is strictly because we built the system to work that way – the position of the gears and cogs has meaning to us because we designed it to be so – we abstracted the concept of “time” as the relationship of seconds, minutes, and hours, into a set of physical objects – the gears – in such a way that they would represent “time” as we understand it.

To the clock, none of that matters. It just moves the gears in the way it was designed, over and over again. If the gears were designed improperly, the time would be wrong, but the clock doesn’t care (again, if a clock can even be said to “care” about anything – we just use emotions as a metaphor for machines because that’s how we work, and how our language works) – it just continues to move the gears and cogs in the way that it was made.

A computer works the same way. The electronic “clock” is really just a pulse of electricity that is regular – it pulses in a set pattern, say once every 1/10 of a second (our example was a very slow computer). And on every pulse of that clock, electricity would pulse down wires and through resistors and transistors and diodes and so forth, in a very precise and controlled manner. And those pulses would have effects on certain things – a pulse through a certain diode would have the mechanical, electrical effect of changing the path for electricity to flow through the system. This change would take effect during the next pulse, when electricity would move in a slightly different way, and so on, again and again – just like the gears and cogs in a mechanical clock.
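To make the “discrete steps” idea concrete, here’s a toy sketch (in Python, which we obviously didn’t have on the breadboard) of a clocked component whose state only ever changes on a pulse. The counter itself is my invention for illustration – real hardware does this with flip-flops:

```python
# Toy illustration of clocked, discrete-step behavior: a 4-bit counter
# whose state changes only when the "clock" ticks. Between pulses,
# nothing happens, no matter what the inputs are doing.

class ClockedCounter:
    def __init__(self):
        self.state = 0  # a 4-bit register, so values 0..15

    def tick(self):
        # On each clock pulse the next state is latched in;
        # the & 0b1111 wraps around at 4 bits, just like hardware would.
        self.state = (self.state + 1) & 0b1111

clock = ClockedCounter()
for pulse in range(18):
    clock.tick()
print(f"state after 18 pulses: {clock.state:04b}")  # 18 mod 16 = 2 -> 0010
```

The point is the same one the mechanical clock makes: the machine doesn’t “count” – it just latches a new state on every pulse, and *we* interpret the pattern of bits as a number.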

However, since electrical components are very small, we were effectively creating a very, very complex mechanical clock, with lots of gears and cogs inter-meshing in different ways. Some of these different ways we (as humans) interpreted as ones and zeros – the “binary” language on which all computers are built. (We use binary in computers because it’s easy; computers work with electricity, and it’s easy to design electrical components to work one way or another, which is to say, on or off, and that is, effectively, binary. Designing components to work 3 or more ways is really, really hard, and often imprecise, which is why we don’t do it.)

So, building on this foundation of electrical pulses, controlled by a clock signal (really just a regular, timed pulse of electricity), we built up the idea of binary code – ones and zeros. We had an adding chip that would interpret these pulses, 4 at a time, into representations of numbers. It would then – according to a fairly simple design internally, but more complex than we could build in our lab – produce an output of signals that were different from the input it received. This output was interpreted by us as numbers that had been added together (again, because we – that is to say, humans – built it that way, again, just like our mechanical clock).
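If you’re curious, the kind of adding circuit I’m describing can be sketched in software using nothing but the logic operations the hardware itself provides. This is a generic 4-bit ripple-carry adder for illustration – not the actual chip from our lab:

```python
# A 4-bit ripple-carry adder built only from the logic primitives
# (AND/OR/XOR) that gates on a breadboard would give you.

def full_adder(a, b, cin):
    """One full adder: two input bits plus a carry-in,
    producing a sum bit and a carry-out."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add4(x, y):
    """Add two 4-bit numbers bit by bit, least significant bit first,
    like a chain of full adders; the final carry-out is the overflow."""
    carry, result = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry

print(add4(0b0101, 0b0110))  # 5 + 6 = 11 -> (0b1011, carry 0)
print(add4(0b1001, 0b1000))  # 9 + 8 = 17 -> wraps to 1, carry 1
```

Notice that nothing in there “knows” it is adding – it’s just signals flowing through fixed logic paths; the interpretation as addition is ours.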

Once we had a circuit that could, metaphorically, add numbers together, we had the basic requirements for a computer. We looped circuits back onto one another, so that the output of some numbers added together would influence the next “operation,” and through some complex manipulation (mostly just building electrical paths in such a way that they followed the rules of logic as set out by us), we had a computer.

We programmed another chip – called a PROM, for “programmable read-only memory” – with some numbers that we had put together on paper. These numbers were (according to a code designed by humans) representations of “instructions,” abstract concepts that we used to simplify working with the computer. We wrote the instructions to do something, translated the instructions into their numerical equivalents (you’ll understand now why the first computers were built by governments to make and break codes), and then used electricity to “burn” those numbers into the PROM, so that when we were done, and electricity was applied to the PROM, our numbers would come out the other end (again, as just pulses of electricity). These would be interpreted by our adding unit, which would “execute” our instructions (computing is just full of abstractions like this – abstractions and metaphors build on top of one another) and send out signals representing the “results” of our “instructions,” which would be interpreted by another circuit, which would send the appropriate signals to our single LED block. If we had done our work right, the LED block would light up in a certain pattern which we would interpret as numbers or (if we had a better LED) letters.
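The whole fetch-and-execute cycle I just described can be sketched as a loop over a list of numbers standing in for the PROM. The opcodes here are made up for illustration – they are not the actual encoding we used in class:

```python
# A minimal fetch-decode-execute loop over a "PROM" of plain numbers.
# The instruction set is invented for illustration: four made-up opcodes
# and a 4-bit accumulator, in the spirit of the lab machine.

LOAD, ADD, SHOW, HALT = 0, 1, 2, 3  # hypothetical opcodes

# Program: load 3, add 4, "display" the accumulator, halt.
prom = [LOAD, 3, ADD, 4, SHOW, HALT]

def run(prom):
    pc, acc, output = 0, 0, []
    while True:
        op = prom[pc]                 # fetch the next "pulse pattern"
        if op == LOAD:
            acc = prom[pc + 1] & 0b1111
            pc += 2
        elif op == ADD:
            acc = (acc + prom[pc + 1]) & 0b1111  # 4-bit wraparound
            pc += 2
        elif op == SHOW:
            output.append(acc)        # "light up the LED block"
            pc += 1
        elif op == HALT:
            return output

print(run(prom))  # [7]
```

Every layer here – opcodes, accumulator, even “program” – is an abstraction we humans stacked on top of what is really just numbers moving through a loop, which is the point of the whole exercise.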

This was really monumental, although you might not think so. The thing we built was butt-ugly, with wires popping out all over the place. It could only display one digit at a time on the LED, and to program it, you had to go through all those steps and make the PROM – which, once made, could never be changed. If you made a mistake, you had to throw out the PROM and make a new one. Compared to the computer on which I’m writing this (or to the one you are using to read this), our computer was about as sophisticated as smoke signals are to a Ferrari. (A bad analogy, I know, but somehow appropriate, when you think about it.)

What was important here was not the practical applications of this exercise – we certainly weren’t going to go out and work for Intel and design their next big chip or anything. Modern computers are so phenomenally more complex that it’s not even worth making an analogy. Just trust me on this one – they are way more complex. But the important thing is that they still operate in the same basic way, using the same basic rules. By building this 4-bit monstrosity, we now had a deep, fundamental understanding of how a “real” computer worked, on a very low level. It’s like a car, really (how I love car analogies) – anyone can learn to drive it, but a good driver, a really good driver, knows a bit about everything in the car. He may not be able to build one on his own, but he knows (generally) how the engine works, how the steering is designed, how the wheels interact with the road, and so on. By knowing these things, he can use the car more effectively. Likewise, by knowing these things about how a computer worked, we (as computer science students, mostly destined to be computer programmers in life, as I am) could use computers more effectively. It was no longer just a black box that did things – it was “real” to us. We understood it. There was meaning, logic, and sense there. I may not need to know (in fact, I don’t need to know) how my CPU works to write a program in VB or PHP or some other high-level language – but by knowing, generally, how it works, I can program it more effectively.

This is where the essence of a hacker comes into play.

An average person might be satisfied to know, very basically, how a computer works – maybe they know that there’s a CPU and it’s the “brain” of the computer, but that’s about it. They then quite happily use their computer to write documents, manage photos, listen to music, and do other stuff. But to a hacker, that’s not enough. A hacker wants to know how it works – and not just in some general, vague sense. A regular person might be bored with that class that I took – they might think, “yeah, this is all fine and dandy, but I’ll never use this, so why do I need to do it?” A hacker, on the other hand, would be excited by that class; he’d think, “yes, now I’ll finally understand how an ALU works and its relationship to the rest of the CPU architecture, as well as why assembly language works!”

Now, let me get to the point I wanted to make since the beginning of this story.

I am a hacker. I loved that class that I took back in college. I always want to know how things work. I may be a computer programmer, but I know more about how various things in my world work than most people because I have a hacker mentality. I know a bunch of stuff about how bodies work, about how oxygen is transported in blood by the binding of molecules; I know how many chemical steps are needed for blood to clot; I know the inner workings of my car’s engine, and how the feedback loops created therein make a working, modern engine that has power in a certain RPM range. I know how this article I’m typing right now will get transmitted through a vast network of computers to your screen. I know all sorts of things about how systems work – governments, engines, plumbing, biology – all these things. I’ll never use many of them, but I had to know – because I’m a hacker. This mentality permeates every aspect of my life, and affects how I interact with the world around me. It’s not enough for me to know at the checkout that I owe $16.26; I want to know why and how. The scanner that scans my merchandise, the systems that track prices and inventory for the store, the anti-theft security tags on expensive merchandise that beep annoyingly at you if a cashier forgets to take them off what you bought. It’s not just the background of life to me – it’s part of the world, an amazing and complex place that I simply must understand, even if it’s only a little bit in some places.

In my opinion, that sort of inquisitiveness, that sort of curiosity about the world, is what leads to a well-rounded individual. Not just taking a few classes about psychology and world history to fill a requirement, but actually wanting to know something about subjects that are new to me, even if they are completely unrelated to computers.

That’s what makes a hacker – whether they hack computers, sound equipment, music, sculpture, cars, wood, or whatever. They’re the people that want to know – not just automatons following instructions, but curious, intelligent people with a desire to know things so that they can understand them and use them more effectively. In my opinion, they’re the best people in the world – and there are far too few of them, if you ask me.

But maybe now that you’ve read this, there will be one more.

Peace out, yo.

I Hate Daylight Saving Time

Spring has always been my least favorite season, and Daylight Saving Time (DST) has been one of the reasons I hate it – I hate losing an hour of sleep!

Although it’s not technically spring yet, thanks to the arrogance of the US Congress, DST came early this year (as you’re probably already aware).

This has been a HUGE pain in my ass.

I maintain a lot of systems, both professionally and for other people (I’m a computer guy, people come to me for help, what can I say). This DST change is going to cost companies more than we are ever going to “save” from switching early – I can assure you of that. It’s already driving me into an early grave!

Updating systems, updating servers… all of it is a huge, massive, fucking pain. Just search Google for stories of people trying to update calendar systems, cell phones, BlackBerries, Palm Pilots, and so forth. They ALL need to be updated to reflect the new time zone data.

I can tell you one thing – this week, a lot of people are going to miss appointments, be late, and so on. All because Congress felt the need to fuck around with time itself.

Thanks a lot, guys.

Why I Won’t Upgrade to Vista

Three letters: DRM.

I’ve seen it in action. Vista spends more processor cycles doing shit-all than any other Windows version yet produced. After spending my hard-earned dollars on an expensive, fancy, dual-core processor, I don’t want those processor cycles wasted on checking whether I have the right to play a particular MP3 file (or video, or whatever).

I happen to like the way my computer runs now – we’ve got an older, (fairly) stable OS running on hardware that’s evolved way beyond it – which is GOOD! When the hardware outpaces the software, things run FAST. When the software outpaces the hardware, things run S…L…O…W…

Check out this article, and be sure to follow the links it includes. Here’s a snippet that really gets my blood boiling:

Here’s another blatant lie:

Will Windows Vista content protection features increase CPU resource consumption?

Yes. However, the use of additional CPU cycles is inevitable, as the PC provides consumers with additional functionality. Windows Vista’s content protection features were developed to carefully balance the need to provide robust protection from commercial content while still enabling great new experiences such as HD-DVD or Blu-Ray playback.

For those of you running Windows Vista, start Windows Media Player and play a random MP3 audio file. Go into Task Manager and look for a process called “mfpmp.exe” with description “Media Foundation Protected Pipeline EXE.” Notice how much CPU it uses. On my machine it fluctuates between 10% and 20% CPU time. Other users are seeing even larger consumption of CPU resources, just check out this comment.

And now the question for Microsoft: Why exactly is mfpmp.exe needed to play an MP3 file, when you say the content protection technology is there for HD-DVD and Blu-Ray?? What additional functionality am I getting, exactly, from mfpmp.exe when I play an MP3 file? As it is now, the content protection technology just uses more resources while providing no benefits at all to the user, just like Peter Gutmann wrote in his paper and we’ve all argued before. No wonder there are sometimes gaps in the audio on my PC, which by the way ran much faster on Windows XP. I thought Vista was about more robust video and audio playback?? Even high end systems have these issues. I find myself using VLC to play video files more often now because Media Player feels so slow and bloated. Even when playing MP3 files, VLC uses much less CPU resources compared to mfpmp.exe and wmplayer.exe combined!


Lessons from 1984

We all know about George Orwell’s 1984, right? Well, I was re-reading it the other day (for perhaps the 100th time), and thought I’d post some relevant bits from the “Afterword” section, written by Erich Fromm. It’s relevant because it talks about the idea of constant war or aggression against an enemy that you can’t destroy (can we say “terrorism?”). Although he mentions atomic weapons and an arms race, the same idea can be applied to today’s world. (Emphasis is mine.)

Orwell’s picture is so pertinent because it offers a telling argument against the popular idea that we can save freedom and democracy by continuing the arms race and finding a “stable” deterrent. This soothing picture ignores the fact that with increasing technical “progress” … the whole society will be forced to live underground, … that the military will become dominant (in fact, if not in law), that fright and hatred of a possible aggressor will destroy the basic attitudes of a democratic, humanistic society.

Another section touches on “doublethink,” something that we tend to think doesn’t exist in the mainstream, but in fact – it does.

If I work for a big corporation which claims that its product is better than that of all competitors, the question whether this claim is justified or not in terms of ascertainable reality becomes irrelevant. What matters is that as long as I serve this particular corporation, this claim becomes “my” truth, and I decline to examine whether it is an objectively valid truth. In fact, if I change my job and move over to the corporation which was until now “my” competitor, I shall accept the new truth, that its product is the best, and subjectively speaking, this new truth will be as true as the old one. It is one of the most characteristic and destructive developments of our own society that man, becoming more and more of an instrument, transforms reality more and more into something relative to his own interest and functions. Truth is proven by the consensus of millions; to the slogan “how can millions be wrong” is added “and how can a minority of one be right.”

In case it isn’t clear to you how this applies to today’s society, I have only to point to the rhetoric of our own political parties for proof. They (meaning the people who represent the party and its ideas/beliefs/platform/etc.) exhibit this exact form of “doublethink,” or accepting “truth” without objective facts. Just think of the people who think that global warming isn’t real, or (to use a slightly less pleasant example) the people who claim the Holocaust never happened, and you will see modern “doublethink” in action. It is a frightening trend to someone who thinks rationally and considers the facts objectively.

Let me continue with another quote, one that resonates with me in regards to all the “security” measures taken lately by our government – especially the “REAL ID” thing (emphasis mine):

Thus, for instance, if he has surrendered his independence and his integrity completely, if he experiences himself as a thing which belongs either to the state, the party or the corporation, then two plus two are five, or “Slavery is Freedom,” and he feels free because there is no longer any awareness of the discrepancy between truth and falsehood.

It goes without saying that I think these are all troubling signs, but what is to be done? The only thing I can think of is to subtly resist such change where possible (don’t give in to REAL ID; write your representatives in Congress; refuse to be afraid of unseen enemies) and to try to help others see things objectively as much as possible. I’m not talking about trying to spread an ideology here – the ideology is already spread; people are just giving it up for an easier, but less “free” one.

So do your duty. Talk with someone about these issues. Spend some time thinking about the implications of what you see & hear in the news, rather than just accepting the views given by those in power (both politically and in the mainstream media). You may be a minority of one, but as of yet, a minority of one can still be right.

Peace out, yo.