Some Completely Pointless Benchmarks

Just for fun, I decided to time how long it takes to boot my computers – both my main desktop computer and my little netbook (which can boot to either Windows 7 or Ubuntu).

This was done totally unscientifically, of course – I just used a stopwatch, starting as soon as the BIOS POST was over. I recorded two different times: the time for the desktop to appear and the time for the computer to actually be usable (that is, all startup programs have opened and there’s no hourglass cursor).
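(If I’d wanted to get slightly fancier than a stopwatch, I could have dropped a tiny script into each machine’s startup items so it logs its own “startup programs running” moment. Here’s a minimal sketch of the idea in Python – the log file name is just a made-up example, and this is not what I actually did:)

    # Hypothetical helper -- not what I actually used (I used a stopwatch).
    # Placed among the startup items, this records when the shell finally
    # gets around to launching startup programs.
    import time

    with open("boot_log.txt", "a") as log:   # example path only
        log.write("Startup items running at: " + time.ctime() + "\n")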

Want to know how things turned out? I thought you might, so here’s a handy chart summarizing the results:

                  Desktop Computer (Windows 7)   Netbook (Windows 7)   Netbook (Ubuntu)
Desktop appears:             1:03                       0:54                 0:36
Computer usable:             2:30                       1:30                 0:47

It’s probably also worth mentioning that my desktop computer loads a fair number of programs on startup – wallpaper changers, my online backup program, Dropbox, etc. – which accounts for its poor time to “computer usable.”

It’s also probably worth mentioning the specs of the two computers – the desktop is an Intel Core 2 Quad @ 2.66 GHz with 7200 RPM hard drives and 6 GB of RAM, while the netbook is an Intel Atom (single core) @ 1.6 GHz with a 5400 RPM hard drive and 1 GB of RAM.

When comparing Windows 7 startup times, the netbook edges out the desktop by just a few seconds, probably because it has fewer device drivers to initialize (the desktop has literally a dozen USB devices hanging off it, plus dual monitors, plus whatever services are configured to run at startup), but the times are otherwise pretty close.

As I said before, the time to “computer usable” for the desktop is pretty horrific due to all the stuff I load on startup (but then again, I rarely reboot the desktop).

It’s also nice to see such great times for Ubuntu – which is what I use by default on my netbook.

Part of the reason I ran these tests is I’ve been mulling over whether to add an SSD or hybrid drive to these computers.

Certainly, the desktop could use the performance boost of an SSD… but my primary boot drive is 500 GB, and I can’t afford an SSD at that capacity (if one even exists!), and I don’t feel like splitting my boot drive so it can fit on a smaller drive.

I’ve heard some decent things about so-called “hybrid” drives, which are affordable for 500 GB, and which would give me a performance boost, especially for boot-up. But at the same time, I have to consider that I don’t reboot often, and is saving, say, 30 seconds of a 2 minute 30 second boot time really worth the cost and effort? Probably not – at least, not yet, not until SSD or hybrid drive prices come down a bit further.

As for my netbook, it boots pretty darn fast as-is, but an SSD would really make a huge difference. I figure an SSD would let me boot into Ubuntu and be ready to go in probably, oh, 10-15 seconds. That would put my netbook nearly as fast for “ready to use” as a tablet computer (e.g., iPad) for a lot less cost, which would be nice. But again, the netbook has a 160 GB hard drive, and although SSD prices have come down a lot, a 160 GB SSD is still a bit too pricey for me at the moment.

Still, these numbers are interesting to have, and I think it’s clear that as SSD prices continue to fall, my netbook will probably get the first upgrade, followed by a hybrid drive for my desktop later on (or possibly an SSD, if prices fall far enough).

Confusion, Misunderstandings, and Net Neutrality

I’ve seen a lot of argument back and forth on the issue of “Net Neutrality,” and one thing that really jumps out at me is how much confusion and misunderstanding there is regarding what the phrase “Net Neutrality” really means. This is an attempt to clear up some of the confusion.

At the root of the problem is that the phrase “Net Neutrality” is not actually a very accurate or descriptive phrase for the underlying problem it’s supposed to describe.

This is a problem because once you label a complex issue with a simple name, people forget what the underlying issue is and simply take their meaning from the name – and if the name is misleading, then people will misunderstand the issue.

Don’t believe me? Just look at people arguing about “Global Warming.” Every time it snows somewhere that it doesn’t usually snow (or doesn’t usually snow very much), you get people screaming that “global warming” is a farce: “How can it be warming when it’s so cold out?!” Because the phrase “global warming” was used to describe a more complex technical concept (i.e., the average temperature of the Earth rising by a few degrees, and the climate changes that result), people forgot what the actual problem was and simply latched on to the name used to describe it. The same seems to be true for “Net Neutrality.”

People tend to see the word “neutrality” and think “that’s OK,” then they hear that Net Neutrality proponents want the government to step in to guarantee “net neutrality” and suddenly alarm bells start going off in their heads. “Wait a second, government is bad! If the network was ‘neutral,’ wouldn’t that mean NO government regulation, instead of more?”

So right off the bat here we’ve got misunderstandings caused simply by the name we use to refer to the problem.

The misunderstandings continue even as we try to clear up the confusion caused by the name itself. One common misunderstanding is that “Net Neutrality” would somehow forbid ISPs from enforcing QoS (quality-of-service) policies and similar traffic management. This again stems from a poor choice of words – usually someone trying to describe “Net Neutrality” as “treating all Internet traffic the same.”

A better way to describe it would be “not discriminating against Internet traffic based on where the traffic originated.”

That is to say, as an ISP it’s fine for you to throttle video services, or VoIP, or whatever you want (or need), so long as you’re not doing that throttling (or, in some cases, blocking) solely based on where those bits originally came from.
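If a toy example helps, here’s that distinction sketched in Python (the packet fields and the competitor’s domain are entirely made up – this illustrates the two policies, not how any real ISP’s equipment works):

    # Toy illustration only -- packet fields and names are invented.
    def qos_policy(packet):
        """Throttle by traffic *class* (what the traffic is): legitimate QoS."""
        if packet["protocol"] in ("voip", "video"):
            return "throttle"   # manage congestion for this class of traffic
        return "normal"

    def non_neutral_policy(packet):
        """Throttle by traffic *origin* (where it came from): the problem."""
        if packet["source"] == "competitor-video.example":   # hypothetical
            return "block"      # or demand extra payment to unblock
        return "normal"

    # The same video packet gets very different treatment under each policy:
    packet = {"protocol": "video", "source": "competitor-video.example"}
    print(qos_policy(packet))          # "throttle" -- fine
    print(non_neutral_policy(packet))  # "block"    -- not fine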

To understand why anyone would want to do this in the first place – and why it suddenly seems like ISPs do want to do it, and might even start very soon unless we do something (which is why people are in an uproar over “Net Neutrality” in the first place) – it helps to understand a little about the situation ISPs are in.

The biggest ISPs – at least consumer ISPs, and at least in America – are phone (DSL) and cable companies. These companies don’t just provide Internet access – they also provide another service along with it, and that other service is how they got their start (and may still be their biggest source of income).

The biggest concern to these companies is that you will use the Internet service they provide to get around the need for the other services that they provide (phone & cable TV), and eventually their other services will die out and they’ll be left as nothing but Internet providers. While they might do very well as Internet providers, they don’t want to give up the “sure thing” of their existing services – and they will fight hard to keep that from happening.

In the case of the phone companies, they don’t want you using VoIP or Skype or whatever, because then you won’t need a phone line anymore. With the cable TV companies, they don’t want you watching video online (especially things like Netflix streaming or Hulu or even some types of videos on YouTube) because then you won’t need their cable TV service anymore.

To put it more simply, ISPs want to be able to block (or force extra payment for) access to competing services, and Net Neutrality says that they shouldn’t be allowed to do this.

That phone and cable companies want to be able to block (or charge extra for) access to these competing services sort of makes sense, in a way. If you owned a coffee shop, you wouldn’t want lots of people sitting around in your shop drinking the coffee they bought from the competing shop across the street, taking up your space but not making you any money, right?

But this doesn’t work on the Internet any more than it does in real life. In most places you aren’t allowed to discriminate against your customers – you can’t kick someone out because they have a coffee cup from that chain across the street. (It is worth noting that you can kick them out if they’re causing a ruckus, which in Internet terms means you can enforce QoS and throttling to prevent abuse.) You also aren’t allowed to build a wall across the street so that people can’t walk past your store to your competitor’s store.

Looking at this from another angle, imagine if your Verizon phone couldn’t call AT&T phones unless you paid extra money for that ability, or perhaps such calls would be billed at a higher rate.

In many ways, this is a lot like the concept of “common carriers.” Phone companies are considered “common carriers,” which is why the situation I described above can’t happen (it’s prohibited specifically by law). But ISPs aren’t considered “common carriers,” and this is the crux of Net Neutrality. It’s really more about fairness than neutrality in that way.

Think about it like this: I pay for my Internet access, which gives me a certain amount of bandwidth – which I can use however I want. The sites I choose to visit also pay for bandwidth on their end (often at a MUCH higher rate than I do). So why would you want to allow ISPs to charge these sites AGAIN just because the traffic from their site (and they have no control over who requests it) happens to cross the ISP’s network (on its way to a customer who has already paid for that bandwidth, I might add)? This is what Net Neutrality advocates are worried will happen unless we adopt rules similar to those for common carriers.

This is especially troubling considering that many places (at least in the US) have very little choice in ISPs – for a very large portion of people, it’s either DSL from the phone company or cable Internet from the cable TV provider. So the usual answer to problems like this (“just vote with your wallet!”) doesn’t apply.

Other confusion regarding “Net Neutrality” of course comes from the fact that we’re trying to involve the government, and that’s always asking for trouble, no matter how noble the intentions. Suffice it to say, politicians do not understand the concept embodied in the phrase “Net Neutrality” very well. As a result, the legislative solutions they propose tend to fall short of addressing the real problem – or sometimes they go way too far and end up being more harmful than the problem they were meant to solve!

However, just because government tends to be incompetent doesn’t mean that the underlying issue doesn’t exist.

The concept of Net Neutrality, no matter how confusing its name, is an important issue. Government regulation may or may not be the ideal way to address it, but a lot of very smart people seem to think that it does at least need to be addressed somehow.

Hopefully this article has helped clear up some of the confusion about what “Net Neutrality” really means, so that at least we can all be on equal footing to debate its merits and potential solutions (or why no solution might be needed).

Why I Still Use My Canon PowerShot S3 IS Camera

Considering how fast the digital camera world moves forward (in terms of technology), you might find it surprising that I – a huge technology geek – am still using my 2006-vintage Canon PowerShot S3 IS camera, even though it has been replaced by more than a few new models from Canon (at least 5 new models, by my count – and quite possibly more).

Now you might be wondering why I’m sticking with an older camera like this – but I assure you, there is a very good reason. And that reason is, basically, that Canon has not come out with a newer, “better” camera that is comparable to the venerable S3 in terms of features, price, performance, and accessories.

For example, the direct successor to the S3 is the S5, which is basically the same camera, but with 8 megapixels instead of 6, a newer image processor, and a hot shoe for attaching an auxiliary flash.

Sounds great, right? Well, yes and no. While at first glance the S5 seems like it is “better,” there is one other change that’s really annoying – the memory card slot on the S5 is on the bottom of the camera, inside the battery compartment, instead of on the side like in the S3. This means that you can’t switch memory cards easily while on a tripod, since the battery compartment is usually blocked by your tripod mount. And while this seems like a minor nit-pick, you also have to consider that the other new features of the S5 just aren’t quite compelling enough to justify buying an entirely new camera. (Remember: these cameras aren’t cheap, and they don’t have the same resale value that a full DSLR would have.)

There are more examples as well. Moving up the Canon “S” series we come to the SX10 and SX20. Now, these are both very nice cameras, but again, they have some downsides that make them just not quite good enough to justify spending a whole bunch of money on a new camera.

One aspect of the newer cameras in the “S” series is that the lens speed (i.e., the largest aperture setting) has been slowly going down. My S3 has a max aperture of f/2.7 at the wide end and f/3.5 at full zoom – but the SX10 and SX20 have max apertures of f/2.8 at the wide end and f/5.7 at full zoom.

And things don’t get any better if you jump up to the next range of Canon cameras – the PowerShot G series. Oh, sure, the early G series cameras had decently fast lenses (f/2.0 at the wide end, which is impressive for what is technically still a “point and shoot” camera), but the later G series all got bumped up to f/2.8 at the wide end, which is… not as impressive.

(For those who are a little confused as to what I’m talking about with these crazy f-numbers and references to “fast” lenses, this article from Wikipedia offers a good explanation. Generally speaking, a smaller f-number means a larger aperture, which means more light can come into the camera in a given amount of time.)
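To put some rough numbers on that: the light a lens gathers scales with the inverse square of the f-number, so the differences are bigger than they look. A quick back-of-the-envelope calculation:

    # Light gathered scales as 1/N^2 for f-number N, so the ratio between
    # two lenses is (N_slow / N_fast) ** 2.
    def light_ratio(n_slow, n_fast):
        return (n_slow / n_fast) ** 2

    # S3 (f/3.5) vs. SX10/SX20 (f/5.7) at full zoom:
    print(round(light_ratio(5.7, 3.5), 2))  # ~2.65x more light for the S3

    # Early G series (f/2.0) vs. later G series (f/2.8) at the wide end:
    print(round(light_ratio(2.8, 2.0), 2))  # ~1.96x more light for the early G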

And let’s not forget that I’ve invested a fair bit of change into accessories for my camera. I’ve got filters and wide-angle lens adapters, which I would prefer not to have to re-buy with a new camera. The S5 would take the same accessories, but the SX10 and SX20 would not. And as for the G series, well, some of them support my accessories (mostly the earlier models) but some do not.

And I’m still not done – because some of the models above have the nice swivel-screen that is so handy to have, but others don’t. And some have the same electronic viewfinder, but others have a rather simple see-through preview hole, which does not actually show you what your picture will look like (instead, you have to use the full-sized screen).

I’m also rather particular about my camera using regular AA-size batteries, so that I can find replacements easily in the field if I need to. I can also carry spares easily and charge them all using standard battery chargers, instead of needing special manufacturer-specific chargers.

So, as you can see, while there are many newer cameras to choose from, none offers the same excellent mix of features and accessories as my venerable old S3:

  • Swivel screen
  • Side-accessible memory card slot (not in the battery compartment)
  • Uses standard AA batteries
  • Accessories via a 58 mm mount on an adapter tube
  • Viewfinder that shows a full view of what the sensor sees (it’s electronic, not optical, but it’s still handy)
  • Good optical zoom range (12x)
  • Decent lens speed (f/2.7 – f/3.5)

For sure, newer cameras offer some of the same features (along with other benefits from being newer & using better technology), but none of them offers the same blend of features. And none of the benefits of the new cameras is, as of yet, compelling enough to make me spend several hundred dollars on a new camera, when my old one does just fine, thank you, and has all these features that I like, and won’t require me to re-purchase all new accessories.

Maybe someday Canon will come out with a new camera that offers the same features as the PowerShot S3 but with upgraded technology (hint hint, Canon!); when that day comes, I’ll consider upgrading. But until then, I’m sticking with my trusty little S3.


Joining the Dual-Monitor Club

After many, many years of dragging my feet, I have finally joined the dual-monitor club:

(photo: my new dual-monitor setup)

My wife’s company was getting rid of some surplus equipment and I managed to grab the 2nd monitor for just $25 – you can’t say no at that price! So I decided to give this dual-monitor thing a try.

I’ve long been… well, let’s say ambivalent about the benefits of having dual monitors – despite the fact that most programmers swear by them (heck, dual monitors are item #1 on the Programmer’s Bill of Rights!).

My reluctance was partly due to the cost – especially back in the CRT days, when monitors (decently-sized ones, anyway) were not inexpensive. The other reason for my reluctance was that I’d tried the dual-monitor thing years ago and found it not very useful – the monitor I tried out was an old 15″ CRT, and the desk I was using at the time didn’t really fit a 2nd monitor very effectively. Also, back then there really wasn’t any such thing as a “dual-head” video card, so you had to add a 2nd video card (probably a slower PCI card, since your main video card was probably using up the sole AGP slot on your motherboard).

However, even when LCD monitors became relatively inexpensive and easy to get I still resisted getting a second monitor. The reason for this was that I just could not see how a second monitor would benefit me, given the type of work I do. Oh, I didn’t deny that it would be useful sometimes – but not necessarily enough to justify the cost/space/hassle/etc.

I just kept figuring that I really only “focus” on one thing at a time, so why bother having a second screen if I’m not going to be focusing on it? Plus, I worried about getting cramps in my neck & shoulders from turning to the side to stare at a second monitor for any length of time.

So I rationalized it to myself for a very long time, until this $25 monitor came along, and I just figured I’d give it a try (at worst I could decide I didn’t like it and give it away to a family member or friend who needs a new monitor).

So now that I’ve got it, how is it working out for me? Well, getting used to a second monitor actually takes some time and effort – when you have worked for so long with just one screen, it’s hard to “give up” a window and move it over to the second screen.

Of course, what stuff ends up on the 2nd screen is a tough choice to make. My “desktop” is now effectively twice as wide as it used to be, which means moving the mouse from the left side of the screen to the right side of the other screen takes a while – and again, I don’t like moving the mouse more than I have to (repetitive stress injuries are to programmers what black lung was to coal miners). So whatever went on the 2nd monitor would have to:

  • Only infrequently require mouse input
  • Be something I could glance at out of the corner of my eye, without needing to actually turn my head and stare at the 2nd screen for long periods of time
  • Not be distracting

Interestingly, not a lot falls into this category for me.

A lot of people using dual monitors will say how they love having their email open on the 2nd screen all the time. But I (mostly) follow the “Getting Things Done” philosophy, and as a programmer, interruptions are anathema to me – so having email always “in my face” is just not necessary. I check email when I’m ready to check email; my computer will let me know that mail has arrived, and I can then read it at my leisure.

Having IM or Twitter open on the second monitor might also seem useful, and after trying it out, I did actually decide to move my IM program to the 2nd monitor. It helps keep chats with co-workers “on the side” so I can keep working. Twitter would probably be a good candidate too, except I don’t use Twitter often enough for it to be that important to me. Plus, the Twitter client I use (Spaz) has growl-style notifications that let me know when new Tweets arrive from the (relatively) few people I follow, so that’s good enough for me.

Another candidate for the 2nd monitor is debugging – and that would be a good use, depending on the type of debugging you are doing. But I mostly do .NET WinForms development these days, and debugging that is pretty easy on a single monitor. Perhaps when I have some web development to do, or other kinds of development, the second monitor will really come through for me – but right now, it’s just not helpful for the debugging I do.

However, a very good use for the 2nd monitor is remote desktop sessions and virtual machines. Often I have to remote-control other people’s computers, and putting that session on the 2nd monitor effectively gives me their desktop right next to mine – very handy. Likewise for virtual machines – I’ll run the virtual machine on the 2nd monitor and keep an eye on it while working normally on my 1st monitor.

So that’s where I stand currently in regards to the dual-monitor club. I’m still a new convert, and I’m still getting my sea-legs, so to speak, as far as figuring out how best to use this 2nd screen I have. But I’m getting there.

Another Computer Conundrum: A Computer for Mom

Once again, I’m facing a computer conundrum. This time, however, it’s a bit trickier to find the “right” answer, because this computer isn’t for me: it’s for my mom.

My conundrum is this: I still have my old computer (Elysion) lying around, and since I love giving old technology a second life, I had planned to clean it up, install Windows 7 on it, and give it to my mom to replace her current computer – a very old Dell with a very slow early generation Pentium 4 CPU.

Now, you might be thinking: “What’s the conundrum, Keith? Just give your mom your old computer; it’s obviously better than what she has!” And you’d be right – my old computer is better than what she has currently.

But there’s another choice I hadn’t considered originally: getting my mom a nettop computer instead.

To put it into perspective, here’s a handy chart comparing my old computer vs. a new nettop (specifically, an Acer Aspire Revo AR3610-U9022 – gotta love Acer’s insane model numbering!):

                  My Old Computer (Elysion)                    Acer Aspire Revo AR3610-U9022
CPU:              Pentium 4 w/HT                               Intel Atom 330 w/HT
CPU Type:         32-bit                                       64-bit
CPU Architecture: “Prescott”                                   “Diamondville”
CPU Cores:        1 (2 logical)                                2 (4 logical)
L2 Cache:         1 MB                                         1 MB
Clock Speed:      3.2 GHz                                      1.6 GHz
Front-Side Bus:   800 MHz                                      533 MHz
Thermal Draw:     82 W                                         23 W
RAM:              1 GB DDR2 PC2-4300 + 2 GB DDR2 PC2-5300      2 GB DDR2 PC2-6400
Hard Drive:       160 GB + 500 GB (7200 RPM)                   160 GB (5400 RPM)
Video:            ATI Radeon X300                              NVIDIA ION integrated graphics
Other Drives:     1x CD/DVD writer, 1x CD/DVD player           SD/MMC/Memory Stick/xD card reader/writer
Cost:             Free (+ about $120 for a Windows 7 upgrade)  $330 (all inclusive)

The problem I have is that I’m not always very good at picking out technology for other people – especially for people who plan to use technology in a very different way than I would. While my recommendations are still very, very good (the reason why people keep asking for my recommendations in the first place), they are still a little bit… biased.

On the surface, it seems like the Acer nettop is the way to go – although it may be a bit slower in terms of raw clock and front-side bus speed, it is a true dual-core CPU, with all the benefits that go along with that. (Astute readers might also remember that when I upgraded from Elysion I actually took a drop in raw CPU clock speed from 3.2 GHz to 2.6 GHz, and yet my new computer is much faster than my old one.)

On the other hand, there are other aspects of the Acer nettop that would suggest that maybe sticking with a full-fledged desktop PC is the way to go. The nettop is, with a few exceptions, basically a desktop version of my Acer Aspire One netbook. The CPU in my netbook runs at the same clock speed (although it is not dual-core) and has the same size (and same RPM speed) hard drive. And although I love my netbook and think it is a great little computer, it is not exactly “zippy” in terms of performance.

However, again, there are differences between the netbook and the nettop. For one, the nettop has more RAM than my netbook – 2 GB instead of 1. And the nettop has that new ION graphics package – remember, this nettop is often marketed as a great Media Center PC rather than as a desktop computer, and as such it has the necessary graphics power to drive a big HD screen. And my netbook runs Ubuntu Linux for the most part (with the factory-installed Windows XP on a separate partition), not Windows 7, so there may be performance differences there that I’m not aware of. And there’s that whole dual-core vs. single-core thing, plus the fact that the nettop’s CPU is 64-bit vs. the netbook’s 32-bit CPU.

However, my old computer also has the advantage of being, well, free – since I already have it (I just have to pick up a Windows 7 upgrade CD). And in this case, cost is definitely a factor.

Making the decision even harder is that it’s very hard to find performance data comparing the old Pentium 4 (with Hyper-Threading!) against the very new Atom 330, especially since things like chipsets, graphics card performance, hard drive speed, and so forth can all significantly affect perceived (and measured) performance.

So I’m just not sure what to do in this case – I think I will have to mull this over for a bit more still before I come to a decision. (Though I invite readers with an opinion one way or the other to chime in on this debate in the comments!) When I do come to a decision, I will post about it here (and update this article), since I think that this sort of computer conundrum is bound to be a common one among techno-savvy people with not-quite-as-tech-savvy family members. But we shall see!