Long-term Away Messages

Recently, I’ve started using Instant Messaging software again after a long hiatus. I stopped using it (for a variety of reasons) shortly after I left college (back in 2001). Now that I’m back “on IM,” there are some things I’ve noticed – some of which I used to do myself, but which now just annoy me.

The main one is this: leaving your IM client on all the time – as in 24 hours a day, 7 days a week – with your status set to “away” for most of that time.

I was guilty of doing this back in my college days – leaving IM running all night (usually with some away/status message that I thought was very clever), and then again all day during my classes (again, with some not-really clever status message). Now, granted, being a CS (that’s computer science for the rest of you) major, I did spend a fair amount of time in front of my computer – but even still, something like 75+% of my time was spent “away.”

I never thought of it at the time, but really, in a world where sending an email is free, why in the world would you leave your IM client logged in all the time like that? If you’re not around, why get people’s hopes up by having your client logged in and broadcasting your status to the world? Isn’t it enough to say that if you’re not signed in, you’re not at your computer? I mean, really, what’s the point of putting up a message saying “I’m not here,” when just … not being there … would send the same sort of message? You might as well put a sign on your empty seat at your desk that says “I’m not sitting here right now.”

Again, I have to say – I was guilty of doing exactly this for many years during college. But now, I just don’t see the point. If you’re going to be away from your computer for a little while (such as for lunch, or just the classic “BRB” – be right back), fine, put up a message. But if you are going to be away from your computer for a long time – for example, you’re going to work and you won’t be back for several hours, or you are going to bed for the night – then just sign off!

Or, at least, that’s my opinion. And with that in mind, I’m signing off. Goodbye!

Don’t Steal the Focus

Jeff Atwood made a wonderful post the other day called Please Don’t Steal My Focus, and I have to say I wholeheartedly agree with him.

Of course, the question this raises is: why are programs still doing this? My pick for “worst offender” is, ironically, Microsoft Word. When you open Word, it forces itself to the front as the topmost window and steals your focus. Try opening a bunch of files (say, across a slow network connection), then switching to another window to get some work done. You won’t be able to – because as each document opens, Word pops up as the topmost window and grabs the focus all over again. Thanks, Microsoft, for ignoring your own guidelines:

“The strange thing is, there are provisions built into the operating system to protect us from badly written, focus stealing applications. The ForegroundLockTimeout registry setting is expressly designed to prevent applications from stealing focus from the user. The OS silently converts that inappropriate focus stealing behavior into friendlier, less invasive taskbar button flashing, which is the subject of the ForegroundFlashCount registry setting.”

Indeed. That’s why, when I need to notify the user of something, I use a notification balloon. It’s much less intrusive and (best of all) it doesn’t steal the focus. (Plus, it looks cool.) And when I must pop up a modal dialog box and steal the user’s focus (only acceptable in response to an action the user has made in my own program, of course), I make sure that there are NO default buttons and NO one-key shortcuts. That way, someone who’s typing won’t accidentally close, cancel, or delete something just by typing. They’d have to either use the tab/arrow keys or an ALT+Letter combo.
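To make that concrete, here’s a rough Win32/C++ sketch of the balloon approach – not code lifted from any real program of mine, just an illustration. It assumes the application has already put an icon in the notification area with Shell_NotifyIcon(NIM_ADD, …), and the helper’s name and parameters are made up for the example:

    #include <windows.h>
    #include <shellapi.h>   // link with shell32.lib

    // Illustrative helper: pop a balloon tip over an existing tray icon instead
    // of forcing a window to the foreground. Assumes the icon was already
    // registered with Shell_NotifyIconW(NIM_ADD, ...) using the same hWnd/uID.
    void ShowBalloonTip(HWND hWnd, UINT iconId,
                        const wchar_t* title, const wchar_t* text)
    {
        NOTIFYICONDATAW nid = {};
        nid.cbSize      = sizeof(nid);
        nid.hWnd        = hWnd;
        nid.uID         = iconId;      // must match the existing tray icon
        nid.uFlags      = NIF_INFO;    // we only want to update the balloon fields
        nid.dwInfoFlags = NIIF_INFO;   // standard "info" balloon icon

        lstrcpynW(nid.szInfoTitle, title, ARRAYSIZE(nid.szInfoTitle));
        lstrcpynW(nid.szInfo,      text,  ARRAYSIZE(nid.szInfo));

        // The balloon appears over the notification area; the user's focus and
        // foreground window are left completely alone.
        Shell_NotifyIconW(NIM_MODIFY, &nid);
    }

And if all you need is the user’s attention on an existing window, FlashWindowEx() is the other polite option – it’s exactly the taskbar-button flashing that the OS falls back to in the quote above.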

Perhaps Microsoft will fix this behavior in the next version of Word, but I’m not holding my breath. As for anyone else who abuses this power… shame on you!

UPDATE: Follow-up here.

The Right to Read

I stumbled across this the other day – it’s a sort of story about the future, or what it might be like, if we continue to allow both large corporations and the government to dictate what we do with the information we buy.

I came across it because I was reading about Amazon’s new e-book reader thing, the Kindle. At first glance, I love the idea. However, more than a few people have looked at the logical conclusion of things like this (and the atrocious licensing agreements that accompany them) and suffice to say, they aren’t happy.

The basic problem here is, as usual, DRM. (That supposedly stands for “digital rights management,” but a more accurate description would be “digital restrictions management.”)

Think about the problem like this: when you buy a book, you OWN it. You can read it, give it to others to read, and so forth. You can even sell it if you want to – or give it to a used book store to re-sell to others. Or donate it to a library and let them lend it to people. These are inherent rights that you have based on your ownership of a physical object.

However, with an e-book, you don’t have those rights. Or, more accurately, with an e-book protected by draconian DRM, you don’t have those rights. DRM is designed specifically to prevent you from sharing with others or re-selling to anyone else. And what’s worse is that if you should find a way around the DRM, you’re in violation of the DMCA – and the punishment for that is quite severe.

With DRM, you don’t own anything anymore. You’re effectively “leasing” or “renting” or “subscribing” to a service – the book – which can be revoked at any time based on the terms of the agreement. And just like renting, you can’t sub-let (sell to someone else) or let someone else use it instead of you (at least, not without the consent of the original owner – which, in case you missed it the first time, is not you).

This is not a good situation to be in as a consumer, and the story I linked to in the first paragraph illustrates one possible future, if you draw things out to their logical conclusion.

Now, I’m not saying that DRM isn’t necessary (in certain cases), or that leasing/renting digital media (be it music, videos, books, or even software) isn’t a valid option – but as usual, it’s all about context, and about striking a balance between the needs/desires of content owners/creators (control the means of production, prevent reselling, squeeze as much money from consumers as possible) and consumers (who basically want everything for free).

In this case, of course, the market has spoken quite loudly and clearly – we’re just waiting for the market to listen. So far, it hasn’t.

People (consumers) clearly want to be able to use digital media in the same way that they used physical media – books, CDs, tapes, movies, etc. – which is to say, they want to be able to occasionally lend them to a friend (without penalty), re-sell them at any time, and use/play them on any machine of theirs that they want (in the car, at the summer house, on a plane, etc.).

Most DRM at the moment does not allow you to do any of the above. You can’t lend a product with DRM to a friend (it’s tied to your account), you can’t re-sell it (again, tied to your account), and you can’t use/play it in any machine of yours that you want (you might be allowed to do so a few times, but after you exceed some arbitrary limit, it locks you out of your own content).

If you think about this for a moment, it seems very odd that a company would so willingly ignore what its customers want – and would be willing to pay for – just to slap DRM on everything in the hope of maximizing future profits. You’d think they’d realize that their customers just won’t put up with it – I mean, people know file sharing is wrong, and yet they do it all the time. Why? Because they want to do these things, but DRM doesn’t let them. So they find ways around it – and they are so adamant about these “rights” of theirs that they are willing to break the law to exercise them. So why do companies keep doing it? How, in a free market, can they survive while mistreating their customers so?

More astute readers might at this point be forming the word “monopoly” in their minds, and that’s… part of the issue. The other part is simply apathy on the part of consumers, and the fact that there is a lot of slick advertising out there making it seem like DRM is a feature that we (as consumers) should love so much that we demand it be included in everything we buy. It also doesn’t help that this whole arena of digital products (and the distinction between digital media and physical media, which many people don’t quite get) is rather new, and most people aren’t really up to speed on its ramifications.

Basically, there are two ways that things can work out from here. One way is outlined in “The Right to Read,” which is the story that got this whole post rolling in the first place. The other is an outcry from consumers so loud that media companies (and I’m talking about all of them here, from music & movies to books, software, and services) have no choice but to make certain concessions and adapt – giving us the rights we obviously want, while still being able to make a buck.

I buy DRM-free songs from iTunes specifically because I don’t want to see us end up in the kind of society outlined in The Right to Read. And the more people who read that story and understand what it means, the more informed their choices will be – and the more people they’ll educate in turn – until that wonderful “democracy” effect comes into play (through either government action or, preferably, the free market) and things change for the better.

I’ll keep my fingers crossed. In the meantime… spread the word, and try to live DRM-free.

Why I Don’t Play Newer Games (Mostly)

It is a (sad?) fact that I play far, far fewer games than I used to.

Suffice to say, there were thousands upon thousands of games available for my Atari 7800. Ditto for my original Nintendo (NES) and Super Nintendo (SNES). And when I owned those systems, I had pretty large libraries of games – certainly larger than I have now – and there were always more games I wanted (but had to wait for birthdays/Christmas… hey, games were expensive!).

I’m going to come back to the SNES later on, so if you don’t know what it is or what it looked like, go look it up now. It’s OK – I’ll wait.

When I got into college, the gaming scene was filled with things like the original PlayStation (PS1) and the Nintendo 64 (N64). This was the beginning of the end for me – which at first glance seems a little backwards. I mean, I was an adult now and had a job and money to buy the games I could never afford on my own as a kid, and I still liked playing games – so what gives?

This is going to start a lot of flame wars I’m sure, but I boil it down to one simple thing: too many buttons.

Look back at the controller for the SNES. A directional pad and 4 primary buttons. (The shoulder buttons were used rarely, by comparison, and Start and Select don’t count.) Simple. Elegant. And, perhaps more importantly, it’s what I grew up on. All button pressing was done with the tips of the thumbs – all of your other fingers were just there to hold the controller.

Jump forward to the PS1 and things get a little bit more complex – now there are 2 sets of shoulder buttons, but more or less the layout is the same. I liked the PS1, and still play games from it to this day.

Now look at the N64 (a system I never owned, but played in college). Look at that controller. It’s got: a directional pad, a joystick, 4 “C” buttons, A and B buttons, two shoulder buttons and a trigger button underneath. You can hold it with your hand in 2 different ways – it has those 3 “prongs” so you can hold either the directional pad or the joystick. And since the games that came out for the N64 were all trying to do revolutionary things with 3-D, they all tended to use all of those buttons.

Think about what that means.

Now, sure, you could just say “suck it up and learn the new controls,” but you could also say the same thing about computer user interfaces (a topic I am very familiar with and very vocal about). Has the shape of a mouse changed much in the last 10 years? Or the layout of menus or window controls? Not very much, if at all.

But for game consoles? The PS2 came along and gave us 2 joysticks! Both of which are also buttons! And don’t even get me started on things like the GameCube, the Xbox, the Xbox 360, or the PS3. (The Nintendo Wii is a refreshing breeze amongst all these game systems – a simple controller! But one that has an inherent power and flexibility… more on that later.)

The bottom line is, playing games that use more than a few buttons quickly becomes tedious and difficult for me. I just don’t have the time, patience, or I guess dexterity to learn to use my thumbs, forefingers, and middle fingers (on both hands) at the same time while trying to hold an oddly shaped, vibrating controller in my hands.

The user interface for these games is just too complex/difficult.

Especially now that games are so realistic. It just takes a lot of mental effort to remember that the realistic-looking character on the screen will only open a door when you press L2 (with your left middle finger) while steering him with your left thumb. I mean, c’mon!

As games become more and more complex, and more and more immersive, the user interface to these games (the controller) is going to have to evolve – and that doesn’t mean fancy boomerang shapes and more buttons!

In a way, Nintendo’s Wii has sort of figured it out – although I’m not sure the folks at Nintendo quite realize it yet. There’s also a reason that the Wii bears a passing resemblance to the products of another company that does user interfaces really, really well – and of course I’m talking about Apple.

Still, there’s hope. The other day I started playing a game I got for my birthday – Lego Star Wars. Here’s a game that gets UI right. To play the game, you really only need one joystick and 3 (maybe 4) buttons. (You can use more buttons, but they aren’t strictly necessary to play – and more importantly, to enjoy – the game.)

When Amanda can pick up a game and start kicking ass at it (she never reads manuals and is horrible at managing more than a few buttons at a time without lots of practice), that’s how I know a game has a good user interface. (Coincidentally, Lego Star Wars also does a lot of other things right – easy picking up & dropping out of a game, a good 2-player mode, and basically infinite lives.)

I’ve been thinking about picking up a Wii (or maybe even a Nintendo DS – again, fewer buttons!), but maybe I’ll hold off for one more generation of game consoles, and see whether the other companies “get it,” or whether I’ll have to start learning to operate controllers with my feet as well.

We’ll see.

The Great Wikipedia Schism

While I’ve always been a great supporter of Wikipedia, lately things have begun to change in ways that have me questioning whether it’ll work out the way I hoped.

Let me explain.

Lately, the higher-ups at Wikipedia have made some policy decisions which are arguably aimed at increasing the perceived “quality” or “reliability” or “professionalism” of Wikipedia in comparison to, say, Encyclopaedia Britannica or any other major encyclopedia. Some of these changes, though, in my opinion at least, go against the original spirit of Wikipedia – the spirit that originally attracted me to the site.

One example: Wikipedia is trying to enforce the idea that articles should only be about things that are “noteworthy.”

OK – so, what the hell is “noteworthy,” anyway?

I’d be hard-pressed to define it, and so would anyone else striving for an unbiased opinion. It just can’t be done. As soon as you bring something as ambiguous and subjective as “noteworthiness” into the picture, you’re just asking for trouble. It used to be enough if an article was well-written, factual (cited its sources), written from a neutral point of view, and contained no original research. It didn’t matter if it was an article on Barnard’s Star or the fictional Dahak starship from the sci-fi novels of David Weber – as long as it followed those few requirements, it was fine for Wikipedia. After all, what seems noteworthy to one person might seem totally useless and not worth remembering to another – and vice versa.

What’s worse is that because of this desire for “noteworthiness,” some articles are being deleted – and that really just goes against the spirit of an encyclopedia of human knowledge!

There are other things, of course – the removal of “trivia” sections; the removal of plot summaries & episode lists for TV shows – but really, the “noteworthy” thing is probably my biggest pet peeve. I just don’t think it can be reasonably enforced and really, something as subjective as that has no business being in the criteria for a Wikipedia entry.