Bringing back the classic “netbook remix” interface in Ubuntu 11.04 “Natty Narwhal”

The other day I saw the news that the latest version of Ubuntu, 11.04 “Natty Narwhal”, had been released. So, like any self-respecting geek, I upgraded my netbook (which runs Ubuntu).

The upgrade was smooth and easy, but one thing I noticed right away after rebooting was that nothing looked the same.

The thing is, Ubuntu has committed to the new “Unity” interface, and they have also folded the netbook remix stuff into the main “Ubuntu” release. What this means is that, starting with 11.04:

  • Ubuntu uses “Unity” by default, even on netbooks
  • There is no longer a separate “netbook remix” for Ubuntu

Now, don’t get me wrong – I appreciate the “Unity” interface; the idea is good and the execution is pretty great… but I disagree that it’s the perfect interface for netbooks.

First off, the “Unity” interface is rather graphically intensive – it has some neat 3-D effects as you mouse over the bar – and this just really kind of bogs down a netbook. Now, maybe newer netbooks have more powerful graphics cards, but I always think of Linux as being great for older computers too, and the “Unity” interface just doesn’t cut it on older hardware.

Now, you can always switch back to the Ubuntu Classic UI (by using the Login Screen app, or by just choosing a session at the login screen itself), but even that is a bit of a compromise, especially for netbooks. The netbook UI was optimized for small screens, where every inch of screen space was valuable.

ubuntu classic desktop

The "Classic" desktop isn't that great for small netbook screens

So, I set about trying to find how to bring back that classic “netbook” look that previous versions of Ubuntu Netbook Remix (UNR) had. After some experimentation with a virtual machine (and, in the extreme, trying out some other Linux distros to see if they were more netbook-friendly) I found the way to do it.

Before you begin, I suggest switching to the classic UI – that way you won’t need to worry about fiddling with the “Unity” launcher bar thing.

There are 4 packages you need to have before you begin, so fire up the Synaptic package manager (or a terminal, if that’s your thing) and make sure these packages are installed:

  • netbook-launcher-efl
  • maximus
  • window-picker-applet
  • go-home-applet

These 4 packages are basically what made up the older “Netbook Remix” edition of Ubuntu.
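
If you’d rather use the terminal, a single apt-get line should pull in all four (assuming the standard Natty repositories are enabled):

    # Install the four packages that made up the old Netbook Remix look & feel
    sudo apt-get install netbook-launcher-efl maximus window-picker-applet go-home-applet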

Ubuntu package manager - showing "netbook-launcher-efl" package

Selecting the "netbook-launcher-efl" package for installation

The first thing to do is to open Startup Applications in Ubuntu and add netbook-launcher-efl and maximus to the list of startup programs.
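
If you prefer to script this step rather than click through the Startup Applications dialog, creating the autostart entries by hand should also work. This is just a sketch – it assumes the commands installed by those packages are named netbook-launcher-efl and maximus:

    # Create autostart entries (equivalent to adding them in Startup Applications)
    mkdir -p ~/.config/autostart
    printf '[Desktop Entry]\nType=Application\nName=Netbook Launcher\nExec=netbook-launcher-efl\n' > ~/.config/autostart/netbook-launcher-efl.desktop
    printf '[Desktop Entry]\nType=Application\nName=Maximus\nExec=maximus\n' > ~/.config/autostart/maximus.desktop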

Ubuntu preferences menu - startup applications

Selecting the "Startup Applications" app from the "Preferences" menu

ubuntu startup program - add netbook-launcher-efl

Adding the Netbook Launcher to startup

ubuntu startup program - add maximus

Adding Maximus to startup

Ubuntu Startup Applications Preferences

Startup Applications

Next, add the window-picker-applet and go-home-applet to the top panel in Ubuntu. You may also want to remove some of the other panel items that are up there currently, and then re-size and re-position the panels so they look like the old netbook remix. If you have a panel at the bottom of the screen, remove that as well.
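
If you’d rather script the panel tweaks than right-click your way through them, GNOME 2 keeps the panel layout in GConf. This is only a rough sketch – the panel IDs under /apps/panel/toplevels vary from machine to machine, so dump your own settings first and substitute your panel’s actual ID:

    # See what panels exist and how they're configured (the IDs vary per machine)
    gconftool-2 -R /apps/panel/toplevels
    # Example: shrink the top panel to 24 pixels (replace top_panel_screen0 with your panel's ID)
    gconftool-2 --set /apps/panel/toplevels/top_panel_screen0/size --type int 24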

ubuntu - add to panel

Adding an applet to the panel

ubuntu - delete this panel

Deleting the lower panel

Finally, reboot the system and voilà! The look of the old Ubuntu Netbook Remix is back!

ubuntu - netbook look and feel

Ubuntu Netbook Look and Feel Restored!

ubuntu - netbook look - with windows open

Maximus and the window picker app keep things organized on small screens

I really like the netbook interface – I think it’s the best fit for netbooks, especially not-very-powerful ones like mine. The maximus package keeps windows from having a title bar (it gets merged into the panel at the top, where the window-picker-applet takes care of showing you the app’s name and giving you a close button) and of course keeps windows maximized all the time (which is the only way you’d ever want them to be on a netbook’s small screen). Plus, the netbook launcher is just great for launching the few programs you use on a netbook. The icons are huge and easy to click when using a little touchpad, and the graphics are smooth but not overdone.

Screenshot of Ubuntu on a Netbook

Ubuntu with these netbook changes on my little netbook

It’s worth mentioning that during my experimentation, I tried out a few other options, including some different distributions that claimed to be good for netbooks. One distribution I found called “EasyPeasy” was based on Ubuntu and was basically the classic “Netbook Remix” that I remember. However, it seems to lag behind Ubuntu in terms of releases – it was still using Firefox 3 for example. Still, if you’re just getting a new netbook, you might want to try EasyPeasy from the start, as it comes “out of the box” with the netbook look & feel.

However, if you want to stick with the Ubuntu you know and love, these steps will bring back that classic Ubuntu Netbook Remix interface, just the way you remember it.

(Update: if you’re using Ubuntu 11.10 Oneiric Ocelot, I’ve got some extra steps for you here that should do the trick.)

Some Completely Pointless Benchmarks

Just for fun, I decided to time how long it takes to boot my computers – both my main desktop computer and my little netbook (which can boot to either Windows 7 or Ubuntu).

This was done totally unscientifically of course – I just used a stopwatch and started as soon as the BIOS POST test was over. I recorded 2 different times – the time for the desktop to appear and the time for the computer to actually be usable (that is, all startup programs have opened and no hourglass cursor).

Want to know how things turned out? I thought you might, so here’s a handy chart summarizing the results:

                    Desktop Computer (Windows 7)   Netbook (Windows 7)   Netbook (Ubuntu)
Desktop appears:    1:03                           0:54                  0:36
Computer usable:    2:30                           1:30                  0:47

It’s probably also worth mentioning that my desktop computer loads a fair number of programs on startup – wallpaper changers, my online backup program, Dropbox, etc., which accounts for its poor time to “computer usable.”

It’s also probably worth mentioning the specs of the two computers – the desktop is an Intel Core 2 Quad @ 2.66 GHz with 7200 RPM hard drives and 6 GB of RAM, while the netbook is an Intel Atom (single core) @ 1.6 GHz with a 5400 RPM hard drive and 1 GB of RAM.

When comparing Windows 7 startup times, the netbook edges out the desktop by just a few seconds, probably because it has fewer device drivers to initialize (the desktop has literally a dozen USB devices hanging off it, plus dual monitors, plus whatever services are configured to run at startup), but the times are otherwise pretty close.

As I said before, the time to “computer usable” for the desktop is pretty horrific due to all the stuff I load on startup (but then again, I rarely reboot the desktop).

It’s also nice to see such great times for Ubuntu – which is what I use by default on my netbook.

Part of the reason I ran these tests is I’ve been mulling over whether to add an SSD or hybrid drive to these computers.

Certainly, the desktop could use the performance boost of an SSD… but my primary boot drive is 500 GB, and I can’t afford an SSD at that capacity (if one even exists!), and I don’t feel like splitting my boot drive so it can fit on a smaller drive.

I’ve heard some decent things about so-called “hybrid” drives, which are affordable for 500 GB, and which would give me a performance boost, especially for boot-up. But at the same time, I have to consider that I don’t reboot often, and is saving, say, 30 seconds of a 2 minute 30 second boot time really worth the cost and effort? Probably not – at least, not yet, not until SSD or hybrid drive prices come down a bit further.

As for my netbook, it boots pretty darn fast as-is, but an SSD would really make a huge difference. I figure an SSD would let me boot into Ubuntu and be ready to go in probably, oh, 10-15 seconds. That would put my netbook nearly as fast for “ready to use” as a tablet computer (e.g., iPad) for a lot less cost, which would be nice. But again, the netbook has a 160 GB hard drive, and although SSD prices have come down a lot, a 160 GB SSD is still a bit too pricey for me at the moment.

Still, these numbers are interesting to have, and I think it’s clear that as SSD prices continue to fall, my netbook will probably get the first upgrade, followed by a hybrid drive for my desktop later on (or possibly an SSD, if prices fall far enough).

Confusion, Misunderstandings, and Net Neutrality

I’ve seen a lot of argument back and forth on the issue of “Net Neutrality,” and one thing that really jumps out at me is how much confusion and misunderstanding there is regarding what the phrase “Net Neutrality” really means. This is an attempt to clear up some of the confusion.

At the root of the problem is that the phrase “Net Neutrality” is not actually a very accurate or descriptive phrase for the underlying problem it’s supposed to describe.

This is a problem because once you label a complex issue with a simple name, people will forget what the underlying issue is and simply take the meaning from the descriptive name – and if the name is misleading, then people will misunderstand the issue.

Don’t believe me? Just look at people arguing about “Global Warming.” Every time it snows somewhere that it doesn’t usually snow (or doesn’t usually snow very much), you will get people screaming that this means “global warming” is a farce. “How can it be warming when it’s so cold out!” Because the phrase “global warming” was used to describe a more complex technical concept (i.e., the average temperature of the Earth rising by a few degrees, and the climate changes that result from this), people forgot what the actual problem was and simply latched on to the name used to describe it. The same seems to be true for “Net Neutrality.”

People tend to see the word “neutrality” and think “that’s OK,” then they hear that Net Neutrality proponents want the government to step in to guarantee “net neutrality” and suddenly alarm bells start going off in their heads. “Wait a second, government is bad! If the network was ‘neutral’ wouldn’t that mean NO government regulation, instead of more?”

So right off the bat here we’ve got misunderstandings caused simply by the name we use to refer to the problem.

The misunderstandings continue even as we try to clear up the confusion caused by having a confusing name. One common misunderstanding is that somehow the idea of “Net Neutrality” would forbid ISPs and such from enforcing QoS and other similar things. This again stems from a poor choice of words, usually someone trying to describe “Net Neutrality” as “treating all Internet traffic the same.”

A better way to describe it would be “not discriminating against Internet traffic based on where the traffic originated.”

That is to say, as an ISP it’s fine for you to throttle video services, or VoIP, or whatever you want (or need), so long as you’re not doing that throttling (or, in some cases, blocking) solely based on where those bits originally came from.

To understand why anyone would want to do this in the first place – and why it suddenly seems like ISPs really do want to do it, and might even start trying very soon unless we do something (which is why people are all in an uproar over “Net Neutrality” in the first place) – it helps to understand a little bit of the situation with ISPs.

The biggest ISPs – at least consumer ISPs, and at least in America – are phone (DSL) and cable companies. These are companies that don’t just provide Internet access – they also provide another service along with it, and that other service is how they got their start (and may even still be their biggest source of income).

The biggest concern to these companies is that you will use the Internet service they provide to get around the need for the other services that they provide (phone & cable TV), and eventually their other services will die out and they’ll be left as nothing but Internet providers. While they might do very well as Internet providers, they don’t want to give up the “sure thing” of their existing services – and they will fight hard to keep that from happening.

In the case of the phone companies, they don’t want you using VoIP or Skype or whatever, because then you won’t need a phone line anymore. With the cable TV companies, they don’t want you watching video online (especially things like Netflix streaming or Hulu or even some types of videos on YouTube) because then you won’t need their cable TV service anymore.

To put it more simply, ISPs want to be able to block (or force extra payment for) access to competing services, and Net Neutrality says that they shouldn’t be allowed to do this.

That phone and cable companies want to be able to block (or charge extra for) access to these competing services sort of makes sense, in a way. If you owned a coffee shop, you wouldn’t want lots of people sitting around in your shop drinking the coffee they bought from the competing shop across the street, taking up your space but not making you any money, right?

But this doesn’t work on the Internet any more than it does in real life. In most places you aren’t allowed to discriminate against your customers – you can’t kick someone out because they have a coffee cup from that chain across the street. (But it is worth noting that you can kick them out if they’re causing a ruckus, which in Internet terms means you can enforce QoS and throttling to prevent abuse.) You also aren’t allowed to build a wall across the street so that people can’t walk past your store to your competitor’s store.

Looking at this from another angle, imagine if your Verizon phone couldn’t call AT&T phones unless you paid extra money for that ability, or perhaps such calls would be billed at a higher rate.

In many ways, this is a lot like the concept of “common carriers.” Phone companies are considered “common carriers,” which is why the situation I described above can’t happen (it’s prohibited specifically by law). But ISPs aren’t considered “common carriers,” and this is the crux of Net Neutrality. It’s really more about fairness than neutrality in that way.

Think about it like this: I pay for my Internet access, which gives me a certain amount of bandwidth – which I can use however I want. The sites I choose to visit also pay for bandwidth on their end (often at a MUCH higher rate than I do). So why would you want to allow ISPs to charge these sites AGAIN just because the traffic from their site (and they have no control over who requests it) happens to go across the ISP’s network (on its way to a customer who has already paid for this bandwidth, I might add)? This is what Net Neutrality advocates are worried will happen unless we adopt rules similar to those for common carriers.

This is especially troubling considering that many places (at least in the US) have very little choice in ISPs – for a very large portion of people, it’s either DSL from the phone company or cable Internet from the cable TV provider. So the usual answer to problems like this (“just vote with your wallet!”) doesn’t apply.

Other confusion regarding “Net Neutrality” of course comes from the fact that we’re trying to involve the government with it, and that’s always asking for trouble, no matter how noble the intentions. Suffice to say, politicians do not understand the concept embodied in the phrase “Net Neutrality” very well. As a result the legislative solutions they propose tend to fall short of addressing the real problem, or sometimes they go way too far and end up being more harmful than the original problem they were meant to solve!

However, just because government tends to be incompetent doesn’t mean that the underlying issue doesn’t exist.

The concept of Net Neutrality, no matter how confusing its name, is an important issue. Government regulation may or may not be the ideal way to address it, but a lot of very smart people seem to think that it does at least need to be addressed somehow.

Hopefully this article has helped clear up some of the confusion about what “Net Neutrality” really means, so that at least we can all be on equal footing to debate its merits and potential solutions (or why no solution might be needed).

Why I Still Use My Canon PowerShot S3 IS Camera

Considering how fast the digital camera world moves forward (in terms of technology), you might find it surprising that I – a huge technology geek – am still using my 2006-vintage Canon PowerShot S3 IS camera, even though it has been replaced by more than a few new models from Canon (at least 5 new models, by my count – and quite possibly more).

Now you might be wondering why I’m sticking with an older camera like this – but I assure you, there is a very good reason. And that reason is, basically, that Canon has not come out with a newer, “better” camera that is comparable to the venerable S3 in terms of features, price, performance, and accessories.

For example, the direct successor to the S3 is the S5, which is basically the same camera, but with 8 megapixels instead of 6, a newer image processor, and a hot shoe for attaching an auxiliary flash.

Sounds great, right? Well, yes and no. While at first glance the S5 seems like it is “better,” there is one other change that’s really annoying – the memory card slot on the S5 is on the bottom of the camera, inside the battery compartment, instead of on the side like in the S3. This means that you can’t switch memory cards easily while on a tripod, since the battery compartment is usually blocked by your tripod mount. And while this seems like a minor nit-pick, you also have to consider that the other new features of the S5 just aren’t quite compelling enough to justify buying an entirely new camera. (Remember: these cameras aren’t cheap, and they don’t have the same resale value that a full DSLR would have.)

There are more examples as well. Moving up the Canon “S” series of cameras we come to the SX10 and SX20. Now, these are both very nice cameras, but again, they have some downsides that make them just not-quite-good-enough to justify spending a whole bunch of money on a new camera.

One aspect of the newer cameras in the “S” series is that the lens speed (i.e., the largest aperture setting) has been slowly getting worse. My S3 has a max aperture of f/2.7 at the wide end and f/3.5 at full zoom – but the SX10 and SX20 have max apertures of f/2.8 at the wide end and f/5.7 at full zoom.

And things don’t get any better if you jump up to the next range of Canon cameras – the PowerShot G series. Oh, sure, the early G series cameras had decently fast lenses (f/2.0 at the wide end, which is impressive for what is technically still a “point and shoot” camera), but the later G series all got bumped up to f/2.8 at the wide end, which is… not as impressive.

(For those who are a little confused as to what I’m talking about with these crazy f-numbers and references to “fast” lenses, this article from Wikipedia offers a good explanation. Generally speaking, a smaller f-number means a larger aperture, which means more light can come into the camera in a given amount of time.)
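
To put a rough number on why lens speed matters: the amount of light a lens gathers goes roughly as 1 / (f-number)². So at full zoom, the S3’s f/3.5 lens collects about (5.7 ÷ 3.5)² ≈ 2.7 times as much light as the f/5.7 lenses on the SX10 and SX20 – well over a full stop, which can easily be the difference between a sharp handheld shot and a blurry one in dim light.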

And let’s not forget that I’ve invested a fair bit of change into accessories for my camera. I’ve got filters and wide-angle lens adapters, which I would prefer not to have to re-buy with a new camera. Now, the S5 would take the same accessories, but the SX10 and SX20 would not. And as for the G series, well, some of them support my accessories (mostly the earlier models) but some do not.

And I’m still not done – because some of the models above have the nice swivel-screen that is so handy to have, but others don’t. And some have the same electronic viewfinder, but others have a rather simple see-through preview hole, which does not actually show you what your picture will look like (instead, you have to use the full-sized screen).

I’m also rather particular about my camera using regular AA-size batteries, so that I can find replacements easily in the field if I need to. It also means I can carry extra spares easily and charge them all using standard battery chargers, instead of needing special manufacturer-specific chargers.

So, as you can see, while there are many newer cameras to choose from, none offers the same excellent mix of features and accessories as my venerable old S3:

  • Swivel screen
  • Side-accessible memory card slot (not in the battery compartment)
  • Uses standard AA batteries
  • Accessories via a 58 mm mount on an adapter tube
  • Viewfinder that shows a full view of what the sensor sees (it’s electronic, not optical, but it’s still handy)
  • Good optical zoom range (12x)
  • Decent lens speed (f/2.7 – f/3.5)

For sure, newer cameras offer some of the same features (along with other benefits from being newer & using better technology), but none of them offers the same blend of features. And none of the benefits of the new cameras is, as of yet, compelling enough to make me spend several hundred dollars on a new camera, when my old one does just fine, thank you, and has all these features that I like, and won’t require me to re-purchase all new accessories.

Maybe someday Canon will come out with a new camera that offers the same features as the PowerShot S3, but with upgraded technology (hint hint, Canon!) – and then maybe I’ll consider upgrading. But until that day comes, I’m sticking with my trusty little S3.

Photos licensed under a Creative Commons Attribution-Share Alike 2.5 license. Photo credits: HendrixEesti, Yug and Rama. (Click on the photos themselves for further details.)

Joining the Dual-Monitor Club

After many, many years of dragging my feet, I have finally joined the dual-monitor club:

joining the dual monitor club

My wife’s company was getting rid of some surplus equipment and I managed to grab the 2nd monitor for just $25 – you can’t say no at that price! So I decided to give this dual-monitor thing a try.

I’ve long been… well let’s say ambivalent about the benefits of having dual monitors – despite the fact that most programmers swear by them (heck, dual monitors are item #1 on the Programmer’s Bill of Rights!).

My reluctance was partly due to the cost – especially back in the CRT days, when monitors (decently-sized ones, anyway) were not inexpensive. The other reason for my reluctance was that I’d tried the dual-monitor thing years ago and found it not very useful – the monitor I tried out was an old 15″ CRT, and the desk I was using at the time didn’t really fit a 2nd monitor very effectively. Also, back then there really wasn’t any such thing as a “dual-head” video card, so you had to add a 2nd video card (probably a slower PCI card, since your main video card was probably using up the sole AGP slot on your motherboard).

However, even when LCD monitors became relatively inexpensive and easy to get I still resisted getting a second monitor. The reason for this was that I just could not see how a second monitor would benefit me, given the type of work I do. Oh, I didn’t deny that it would be useful sometimes – but not necessarily enough to justify the cost/space/hassle/etc.

I just kept figuring that I really only “focus” on one thing at a time, so why bother having a second screen if I’m not going to be focusing on it? Plus, I worried about getting cramps in my neck & shoulders from turning to the side to stare at a second monitor for any length of time.

So I rationalized it to myself for a very long time, until this $25 monitor came along, and I just figured I’d give it a try (at worst I could decide I didn’t like it and give it away to a family member or friend who needs a new monitor).

So now that I’ve got it, how is it working out for me? Well, getting used to a second monitor actually takes some time and effort – when you have worked for so long with just one screen, it’s hard to “give up” a window and move it over to the second screen.

Of course, what stuff ends up on the 2nd screen is a tough choice to make. My “desktop” is now effectively twice as wide as it used to be, which means moving the mouse from the left side of one screen to the right side of the other takes a while – and again, I don’t like moving the mouse more than I have to (repetitive stress injuries are to programmers what black lung was to coal miners). So whatever went on the 2nd monitor would have to:

  • Only infrequently require mouse input
  • Be something I could glance at out of the corner of my eye, without needing to actually turn my head and stare at the 2nd screen for long periods of time
  • Not be distracting

Interestingly, not a lot falls into this category for me.

A lot of people using dual monitors will say how they love having their email open on the 2nd screen all the time. But I (mostly) follow the “Getting Things Done” philosophy, and I’m also a programmer, so interruptions are anathema to me – having email always “in my face” is just not necessary. I check email when I’m ready to check email; my computer will let me know that mail has arrived, and I can then read it at my leisure.

Having IM or Twitter open on the second monitor also seemed like it might be useful, and after trying it out, I did decide to move my IM program to the 2nd monitor. It helps keep chats with co-workers “on the side” so I can keep working. Twitter would probably be a good candidate too, except I don’t use Twitter often enough for it to be that important to me. Plus, the Twitter client I use (Spaz) has Growl-style notifications that let me know when new tweets arrive from the (relatively) few people I follow, so that’s good enough for me.

Another candidate for the 2nd monitor is debugging – and that can be a good use, depending on the type of debugging you are doing. But I mostly do .NET WinForms development these days, and debugging that is pretty easy on a single monitor. Perhaps when I have some web development to do, or other kinds of development, the second monitor will really come through for me – but right now, it’s just not that helpful for the debugging I do.

However, a very good candidate for the 2nd monitor is remote desktop sessions and virtual machines. Often I have to remote-control other people’s computers, and putting that on the 2nd monitor effectively puts their desktop right next to mine – it is very handy. Likewise for virtual machines – I run the virtual machine on the 2nd monitor and keep an eye on it while working normally on my 1st monitor.

So that’s where I stand currently in regards to the dual-monitor club. I’m still a new convert, and I’m still getting my sea-legs, so to speak, as far as figuring out how best to use this 2nd screen I have. But I’m getting there.