Category Archives: Tech

Chrome vs. Firefox revisited

Attention Facebook readers: You might want to click the “View Original Post” link at the bottom of this note. Facebook sometimes messes up the formatting.

Back in May 2009, I wrote an article comparing the Chrome and Firefox browsers. Since then, it has been by far the most viewed page on my blog. From the day it was posted until today (almost six months), that particular article has accounted for about 80% of all pageviews on my blog. I’ve had days where 110 people visit my blog and 103 of them view that page and that page alone. I use MyBlogLog to track which pages are viewed the most and how people find my blog, and here’s a piece of the results for one day. Note that this is a fairly typical day. I don’t know why MyBlogLog can’t collapse all of the “chrome vs firefox” entries into one.


Anyway, after about six months of using Chrome pretty much exclusively, I decided to revisit this comparison and see how much of it is still valid. To that end, I reset my default browser back to Firefox for a week.

Note that I am comparing the “generally available” versions of Chrome and Firefox (3.5.5), not development or beta builds.

Advantages of Chrome

  1. Chrome starts up almost instantly, while Firefox takes several seconds before it’s ready to go. Both are still faster than IE for me.
  2. Chrome updates itself completely silently. Firefox tells you there’s an update available and asks if you want to install it. Actually doing the install is pretty painless, but it asks you if you want to install the update when you start the browser, which is usually when you are trying to do something with it. Frequently I don’t want to wait while it installs an upgrade and then restarts itself, so I end up trying to remember to do it when I’m done. I have no idea when Chrome updates itself, because it does it silently in the background and then the changes take effect the next time you shut it down and start it again.
  3. Chrome searches your bookmarks and previously visited sites extremely quickly, so when I start to type a URL, it comes up with probable matches really fast. For example, I don’t have twitter.com bookmarked, but I can get there by typing tw because by the time I hit enter, Chrome has searched my previously visited sites and autocompleted “tw” to “twitter.com”. Until I started using Firefox again, I did not realize how cool this feature was and how quickly I came to depend on it. I would visit a site and not bookmark it, and then the next day if I wanted to find it again, I could type whatever part of the URL I could remember into the address bar and it would just find it for me.

Advantages of Firefox

  1. Chrome still doesn’t have plug-in support. If this isn’t number one on the “must get this done” list for Chrome, someone needs to be fired. Yes, I know this is at least mostly working in the dev builds.
  2. When Chrome isn’t going really fast, it seems to be going really, really slow. I had a situation on my computer recently where everything seemed to be taking forever – compiling was taking 20-30 seconds per file (rather than under a second, as it should), and a test that was running at the same time was taking minutes rather than seconds. I looked at the task manager, and the two processes taking up the most CPU were Chrome and our stupid virus scanner that grinds my machine to a halt and that IT won’t let me configure, despite the fact that it prevents me from doing my job efficiently (but that’s a rant for another day). I shut down Chrome, and within a few seconds everything sped up noticeably (though not as much as it should have, because of the stupid virus scanner). I am going to keep an eye on this, but it may be a showstopper.
  3. Perhaps related to the previous problem – every now and again, usually when my machine is very busy, I enter a URL in the address bar, hit enter, and nothing happens. I have seen pauses of 30+ seconds before it even changes the status to “resolving”. Firefox doesn’t have these complete blackouts, but just goes really slow in those situations. I rarely see this or the problem above (#2) at work, but it happens a lot at home – I think it may actually be related to the VPN I use.

Dead Heat

  1. When I first started using Chrome, it was quite a bit faster than Firefox, especially on JavaScript-heavy web sites. But when I switched back to Firefox for this comparison, I didn’t notice much of a difference in speed, certainly not enough of a difference to consider it a Chrome advantage.
  2. Bookmark support has been improved in Chrome to the point where this is no longer an advantage of Firefox. Firefox supports keyword bookmarks, which Chrome does not, but Chrome’s bookmark searching is so fast that this is hardly necessary, other than the magic %s keyword searches that Firefox supports. XMarks support is still missing, though (it’s in beta).
  3. On a site with lots of Flash (i.e. games), sometimes everything seems to slow down to a crawl after 10-15 minutes or so. Sometimes it speeds up again after a while, but other times I have to just give up on the game. This happens in both Chrome and Firefox. Don’t know about IE.

The Result

For now, I’m going to stick with Chrome, but as I said above, I’m going to keep an eye out for machine slowdowns and see if closing Chrome fixes them. If that continues to happen, I will have to go back to Firefox.

I kind of miss the plug-in support from Firefox, but Chrome is still pretty peppy and quite honestly, I feel like Firefox is starting to pick up the bloat that IE has had for years. Chrome still feels small and sleek.

I’m surprised that adding plug-in support is taking as long as it is, but I also understand that this basically amounts to allowing the general public to add executable code to your application on the fly. Getting this right and making it usable and flexible while remaining robust is difficult.

C/C++: Five Things I Hate About You

Jeff Atwood said recently on a Stack Overflow podcast that if you can’t think of five things you hate about your favourite programming language, then you don’t know it well enough. I started writing C code in about 1988 and C++ in about 1992, so I think I can say I’m familiar with them. I know that C and C++ are different languages, but there’s enough overlap that I’m going to group them together. Here are five things I hate about C and C++.

  1. Lack of portability. Pure C or C++ code is generally portable, but we continually run into things like “standard” libraries that aren’t standard. Libraries for things like file I/O and threading can be vastly different on different platforms, so if your application has to run on multiple platforms, you have to write the same code several times in slightly different ways. There are functions that are defined in a different header file on one platform than on another. There are preprocessor macros that have a leading underscore on one platform and not on another. There are functions that exist on one platform that don’t exist – or work differently – on another. The C language has been around for almost forty years, and we still have to have #defines in our code to cover stricmp on one platform and strcasecmp on another. We don’t use exceptions in our code because different compilers deal with them differently, and we just started using templates because all the compilers we use finally support them in a similar enough way that they’re usable. I suppose technically these are problems with the implementations rather than the language itself.
  2. Undetectable number errors. How many times have you done x-- on an unsigned type only to find that you “decremented” it from 0 to 4294967295, and everything went haywire? Doing this is completely legal and the only way to prevent it is to manually check for 0 before you decrement, and make sure you do it in a thread-safe way. PITA.
  3. Lack of memory checking. If you allocate fifty bytes and then access fifty-one of them, that’s totally fine. Accessing that fifty-first byte may work, giving you random data, or it may crash. Writing that byte may work, overwriting some other variable and creating a terribly hard-to-find bug, or it may crash. Or even worse: it may overwrite some unused piece of memory, thus having no effect, most of the time (i.e. during development and testing) but then crash or overwrite memory occasionally (i.e. in customer deployments).
  4. Braces aren’t required for if statements (or for while and for statements). This is just asking for trouble. I’ve trained myself to see and fix things like this:
    if( condition )
        statement 1;
        statement 2;
        statement 3;

    and some editors and IDEs will automatically re-indent, making the problem obvious, but you can still miss them sometimes. In my code, I almost always put braces anyway, except for the occasional thing like this:

    if( condition1 ) continue;
    if( condition2 ) break;

  5. Named structures and typedefs are different. This has confused me for years. You can have a structure with a name, and also typedef it to another name, or you can typedef a structure without a name. For example, all of these are legal:

    // defines a structure called myStruct. You have to type "struct myStruct"
    // to use it
    struct myStruct {
        int a;
        int b;
    };

    // also defines a structure called myStruct, but you can use "myStruct"
    // as a type now. Or you can continue using "struct myStruct". The two
    // names do not have to be the same.
    typedef struct myStruct {
        int a;
        int b;
    } myStruct;

    // No different from the second example
    typedef struct {
        int a;
        int b;
    } myStruct;

    The second and third examples are exactly the same, though I remember having to go through a bunch of code and change typedefs of the third type to have a name after struct because the debugger (CodeWarrior, if I remember correctly) didn’t understand them unless the struct had a name.

Tool review: Microsoft Network Monitor 3.3

I have used Wireshark for packet sniffing and analysis for a number of years, starting back when it was called Ethereal. A little while ago I was using it to look at broadcast packets that our clients send out, and decided that it would be great if Wireshark could interpret our wire-level protocol and display meaningful information about the packets. After a bit of searching, I found that you can add plug-ins to Wireshark, allowing you to do whatever you want with the packet data. I found some detailed instructions on how to do this, beginning with:

  • Install a version of the Microsoft C/C++ compiler
  • Install a particular platform SDK
  • Install Cygwin
  • Install Python
  • Install Subversion
  • Get the Wireshark source
  • Configure the source
  • Build Wireshark

Once you’ve done all that, you can start looking at building your plug-in in C. I set up a Windows XP VM and spent a day or two doing all of this, but never got to the point of actually creating the plug-in. A few days later we had a team status meeting, during which I mentioned this project. A colleague, Peter, asked if I had looked at Microsoft NetMon, saying that he believed it allowed you to add your own parsers as well. I downloaded it and took a look. Thank you, Peter, for saving me days, if not weeks, of development time. In less time than it took me to set up the VM in preparation for writing a Wireshark protocol analyzer, I had analyzers written for the majority of both our UDP and our TCP protocols.

Writing parsers

As a packet sniffer, NetMon is not really much different from Wireshark, though I find the interface a little more intuitive. This might be because I’m running on Windows, and Wireshark has always looked to me like a Unix program that has been ported to Windows rather than an application written for Windows. Both support capture and display filters. NetMon has colour filters as well – particular packets or conversations can be coloured based on the filter results. You can view packets as they are captured, save them to a file, and load them back in again later.

But writing a parser is orders of magnitude easier than writing a Wireshark plug-in. You simply tell it what ports your protocol uses and what the protocol looks like in a proprietary language (called NPL – Network Monitor Parser Language) that’s vaguely C-like but very simple. Some properties of this language:

  • it handles bitfields, ASCII and Unicode text, and binary data, as well as various types of numeric values (8, 16, 32, or 64 bits, integer or floating-point, signed or unsigned, big- or little-endian)
  • you can define your own data types
  • there are a number of special data types built-in; if your packet contains a 32-bit IP address, for example, you can just specify it as IPv4Address and it will get interpreted and displayed as expected
  • you can make structs which group pieces of the data together, and arrays which hold collections of the same type of data
  • you use while loops and switch statements to modify behaviour. For example, your protocol might have a byte that indicates the type of packet, and then the structure of the packet depends on the value of that byte. No problem.
  • you can indicate both storage format and display format, so if you have a byte that’s 0 for a request and 1 for a response, you can display the words “request” and “response” rather than just 0 or 1. The rest of the code can reference this value by name and get 0 or 1. The display string can be as complicated as you want, even referencing other pieces of the packet by name.
  • it supports conversations, and there are variables that have global, conversation, packet, or local scope

The help file installed with the app describes each of the language features, and I found a document that describes an example protocol in great detail.


The biggest drawback of this tool is the parser editor. It’s not very powerful – it makes Notepad look feature-rich. I use Ctrl-Backspace (delete previous word) and Ctrl-Del (delete next word) a lot, since they’re supported in Windows Live Writer, Word, and emacs, but support here is spotty – sometimes it works, sometimes it deletes the wrong word.

The main feature it’s missing is undo. It doesn’t even have a single-level undo. If you hit backspace one too many times, you’d better remember what that last character was, because it’s gone. An editor that doesn’t support undo is pretty much unacceptable in this day and age, and I lost data more than once because of it. Once you realize that you can’t undo mistakes, you end up clicking Save a lot more often, and you do things like copy the file to a backup file before you make big changes. I put my files under source control and started checking them in periodically, which is a good idea anyway, but if the parser stuff wasn’t so damn cool, the lack of an undo feature might be a showstopper.

Emacs supports Ctrl-A and Ctrl-E to get to the beginning and end of the current line respectively, and sometimes I instinctively use those keystrokes in editors where they’re not supported, like this one. Unfortunately, Ctrl-A here means “select all”, so doing that and then typing something is disastrous: there’s no undo, so you’ve just lost your entire file. You need to quit the file (do not save!) and then reload it, losing whatever changes you had made. Even a single-level undo would save you from that.

The compiler has some problems as well – there were a number of times where I got compilation errors that were badly written or vague enough that I didn’t know what the problem was. It would point to what looked like a valid statement and say that it was unrecognized or invalid, and it turned out to be because of a missing (or extra) semi-colon on a different line, or a language rule that wasn’t obvious.

Once you’ve made the changes to your parser, you have to save it and then click “Reload Parsers”, which reloads all 370+ parser files it knows about. Surely there could be a way to just reload the one that I changed? Now, there are dependencies between files, and changing one file might require that a different file be reloaded, so reloading them all is the safest approach, but it’s slow. Ideally, the tool should be able to figure out the dependency tree and only reload files that depend on the ones that changed. And the tool should prompt me to save if I have an unsaved file when I click “Reload Parsers”.

If anyone from the NetMon dev team reads this, here’s a bug report: If I load a parser file, then paste some code into the file, it’s not marked as “dirty” until I actually type something. Also, if I load a display or capture filter from a file, this generally means “replace what’s in the textbox with the contents of the file”, not “insert the contents of the file into the textbox at the current cursor position”. I can see how that feature might be useful in combining filters, but it should not be the default.

As powerful as the NPL language is, there are things it simply can’t do. In my case, some of our packets can be encrypted or compressed, but the NPL language can’t decrypt or decompress them. It would be nice to be able to write a small plug-in that could do these types of things, but it’s not supported. The Wireshark approach would work for that.


For those analysis needs that are not satisfied by parsers, NetMon supports things called “experts”, which are external programs that can read the data from a capture file and analyze it in whatever way they want. It sounds similar to a parser except that it’s written in C or C++ (or C#, I think) and has the limitation that it only works on saved files, so you can’t look at the results in real-time as you can with a parser. I’ve started to write one of these to solve the decompression/decryption problem I mentioned above. There doesn’t seem to be a way to decrypt the data and write it out into a new capture file, but I can at least decrypt, parse, and display the data. I can reuse the parser code I’ve already written, since the program is dealing with pre-parsed information, but I have to grab each field individually and display it, so I essentially have to rewrite all the display code in C.


Overall, this is a very cool utility and has replaced Wireshark as my packet sniffer of choice. The documentation is pretty thorough and it includes some good examples. If all else fails, the parser code for all the other supported protocols is right there. There is a help forum to which I’ve posted a couple of questions and gotten quick and helpful responses. I wrote a while ago about how cool Windows Live Writer is, so kudos to Microsoft for yet another cool utility.

Alan Turing

Sorry, I’m a little late to the party on this one. After reading an article written by my colleague Glenn Paulley, I decided to write about it as well, mainly because Glenn’s blog and mine have different audiences. The story he writes about (and I’m about to cover) is both tragic and infuriating; I wouldn’t call the ending “happy”, but it’s certainly the best that could be expected under the circumstances. Note that this is not a technical article at all. It is about a computer scientist, but it’s mainly the story of a man.

If you’ve never studied computer science or cryptography, you have likely never heard of Alan Turing. Computer Science students don’t learn much about Turing the man, but you can’t study computer science for long before coming across his name. He was a brilliant mathematician and cryptanalyst who not only developed some of the most basic fundamentals of computer science and artificial intelligence, but helped to end World War II. Turing was one of the scientists who worked at Bletchley Park, and was instrumental in breaking the German “Enigma” code, among others. Turing was awarded the OBE (Officer of the Order of the British Empire) for his work during the war.

Turing also happened to be gay, which was illegal in Britain at the time (and remained so until the late 1960s). Only seven years after World War II ended, Turing was arrested, charged, and convicted of gross indecency. As a sentence, he was given a choice: chemical castration or prison. He chose the former and was given estrogen treatments to attempt to kill his libido. This was successful, but it also caused Turing to grow breasts. His security clearance was also revoked, ending his employment with the government. In 1954, two years after his conviction, Turing committed suicide by eating an apple laced with cyanide. There are some who say that his death was not a suicide at all, but accidental. Regardless, the death of this brilliant man at only 41 years of age was a tragedy.

In late July of this year, a British computer scientist named John Graham-Cumming started an online petition asking the British government to apologize for the treatment of Alan Turing. Within weeks he had several thousand signatures and on September 10th, British Prime Minister Gordon Brown issued an official apology to Turing.

Congratulations to John Graham-Cumming on getting this done, and kudos to Gordon Brown and the British government for doing the right thing and apologizing for the appalling treatment of Alan Turing.

Useful Windows Tools

Every year, Scott Hanselman posts a list of tools that he uses. After reading this year’s list, I decided to do my own. Why? Because I have many thousands of readers like he does? No. Because some of my readers are techies and would find the list useful? Well, maybe a couple of them. Because those readers who are not techies might find it interesting too? Not bloody likely. The real reason is the same as the reason for the majority of the rest of my postings: just ’cause.

Work Tools

  • 4NT – I use a zillion batch files at work for doing all kinds of repetitive tasks – anything I need to do more than once, I write a batch file for. The Windows batch file language is pretty lame, so Sybase has a site license for 4NT as a command shell replacement, and it’s so much more powerful than the Windows one. I don’t think you can get 4NT anymore, but the latest version is called Take Command. Some people at work are using that, but I’m still on 4NT. Not for any nostalgic reason, just because I can’t be bothered to change it. Windows 7 will ship with a thing called PowerShell, which is supposed to be pretty good, but I can’t imagine rewriting all the scripts I already have.
  • ActivePerl – I actually prefer writing python code to perl, but if you’re doing anything involving string manipulation or regular expressions, you can’t beat perl. Over the last couple of years I’ve rewritten a lot of my 4NT batch files in perl.
  • ActivePython – One of our testing tools at work uses python so I’ve been writing a lot of python over the last few years. I thought the whole whitespace thing was crazy at first, but as long as you have a good editor (see emacs below) that knows about that stuff, it’s not so bad.
  • emacs – We don’t use an IDE at work, so most developers use either emacs or Watcom vi (since we used to be Watcom). Many are switching over to vim rather than vi. I can use vi and did for many years, but I usually use emacs.
  • Thunderbird for both email and news. Our company uses Lotus Notes, but I gave up on that years ago. I used Outlook for a number of years and it was OK, but now and again it would get into a state where it wouldn’t download any emails but wouldn’t give any errors either. I switched to Thunderbird and have been happy ever since. A few add-ons make it complete:
  • VMWare – we have a VMWare server set up on a big kick-ass machine in the lab, and I have a couple of VMs that are running 24/7 on that machine. One is running XP and I use it for network stuff as well as NetWare development (so I don’t have to install all the NetWare stuff on my laptop), and the other runs 64-bit Vista. I used to use actual physical computers for this type of stuff, but this is just so much easier and more convenient. Even from home, I can remote desktop into them and even reboot them.
  • Remote Desktop – comes with Windows, but I had to mention it. I use this all the time for connecting to test machines, our build machines, and my VMWare VM’s.
  • TightVNC – for those older (Win 2000) machines that don’t support Remote Desktop, we use VNC. I also have a VNC server set up on a Unix machine, so I can use VNC to connect to that and get a Unix desktop on my Windows machine. There are lots of VNC clients / servers available, but I’ve found TightVNC gives pretty good performance, even when I’m at home (and therefore using wireless networking through a VPN).


Web surfing and websites

  • Google Chrome – I’ve been using this pretty much exclusively since about May, and I still love it. Still fast, and it has a bookmark editor now. It periodically and silently updates itself so you always have the latest patches. Once XMarks is available for Chrome, it will be perfect.
  • Firefox – I still use Firefox now and again for sites that Chrome doesn’t support. Actually, I can only think of one. We recently started using a tool at work for code reviews which has a web interface that doesn’t play nicely with Chrome, so I use Firefox for that. Required add-ons:
    • XMarks for synchronizing bookmarks. I don’t use bookmarks all that often anymore (I usually use delicious), but XMarks will also synchronize stored passwords, which is very useful.
    • AdBlock Plus
    • Gmail Notifier – puts an icon in the bottom corner of your browser with your current unread count.
    • NoScript
  • GMail is the best web-based email around. I get almost no spam that isn’t marked as spam, and being able to tag messages with multiple tags (rather than putting them into one and only one folder) is amazing. As you’d expect from Google, the search feature is also very good, and filters let you do clever things with messages as they arrive. For example, every time I publish a blog posting, I have it emailed to my gmail account. A filter then tags it with the “BlogArchive” tag and archives it without me even seeing it, so I have a backup of every article.
  • Google Reader for reading blogs and other RSS feeds.
  • StackOverflow – programming Q&A site. Very useful for learning stuff and getting questions answered, but it’s fun to try and answer questions as well. Rather humbling sometimes, when I see a question and think “I know how to do that!” but before I post my brilliant solution, I read another answer saying “You could do this, but that’s inefficient (or slow or dangerous or…). Here’s a better way”, which proceeds to explain something that is clearly superior to my idea. There are also Server Fault (for IT pros) and Super User (for general computer questions), as well as Meta (for questions about SO itself), but SO is still my favourite.
  • Delicious – I rarely ever save bookmarks through the browser anymore, I just use delicious. Far easier to type now that it’s delicious.com rather than del.icio.us.

Music and Video

  • iTunes – Gail has a Sony Walkman MP3 player and uses Windows Media Player to set up playlists and stuff. iTunes is just so much easier. It makes it easy to view any MP3 tags on your songs, and also makes it easy to select multiple songs and change attributes of all of them at once. You can set it up to detect a new CD being inserted in the drive, automatically rip it and add it to your library, and then eject it, so you can rip new CDs and sync them to your iPod just by putting the disk in the drive.
  • Videora iPod Converter converts (hence the name) video files from whatever format they’re in into the appropriate format for your iPod, and automatically adds it to iTunes as well. Very handy for downloading TV shows that you missed. I have a dock for my iPod that connects to the TV, so we can watch stuff through the TV rather than on the iPod or computer.
  • CDBurnerXP – Windows Vista has built-in CD burning support, but I prefer CDBurnerXP. It gives you the whole drag-and-drop interface for selecting files, tells you how close you are to filling the disk as you add stuff, makes it easy to erase rewritable CDs / DVDs, writes both audio and data CDs as well as data DVDs, it does everything.

General Utilities

  • Jungle Disk – backs up all of our digital pictures and stuff online using Amazon’s S3 service. I paid $20 for Jungle Disk originally, which gives me free upgrades for life, and I can install the software on as many machines as I want. I pay Amazon directly for the S3 storage, which for me is under $5 a month. (I wrote about this last summer.) The data is fully encrypted and the encryption key is not stored on the Amazon servers. Restoring is even easier – set up a network drive and just copy whatever files you want.
  • DropBox – you install the (free) software on multiple machines and point each of them at a directory on a local drive, and the software keeps the directories synchronized. To copy a file from one machine to another, just drop it into the DropBox directory and it’s instantly copied to whatever other machines are synchronizing. Couldn’t be simpler. There’s even a web interface so you can access your data on a machine that doesn’t have the DropBox software installed. I use this with…
  • KeePass – for storing and generating passwords. I created a KeePass database file in my DropBox directory on my work machine and it keeps track of my eBay, Paypal, Twitter, Linked In, banking etc. passwords, plus my router’s WPA key. DropBox then syncs the file with my DropBox directory at home. I can change a password in either place and it gets synced with the other. When I set my password for a site, I use KeePass to generate a random password, then I modify it in a way that only I know and store that. I also have a text file in my DropBox directory that holds the unmodified passwords, in case I need a password in a place where I can’t install KeePass. When I double click on the entry, KeePass copies the password into the clipboard so I can paste it into the browser. KeePass automatically clears the clipboard after 15 seconds so I don’t accidentally paste it anywhere else later.
  • FileZilla – The best GUI FTP client I’ve used. I don’t do much with FTP; updating my lacrosse pool website is about it, but FileZilla makes it easy.
  • Foxit Reader – got this one from Scott’s list above. I got tired of Adobe Reader continually getting bigger and bigger. All I want to do is read PDF’s, I shouldn’t need tons of software to do this. Plus I kept hearing about security problems with Adobe.
  • µTorrent makes downloading torrents brain-dead easy. Set up the directory where the downloaded files should go, then whenever you click on a torrent, it just does the right thing. It even stays in the background and does everything silently.
  • Microsoft Money – Microsoft is killing this product, which sucks because the alternative is Quicken, which I tried earlier this year and wasn’t too impressed. I have a pretty old version anyway, so as long as it keeps working, I’m fine.
  • IrfanView – the best application for image manipulation. Allows you to specify a directory full of image files and do a batch rename/conversion/both, which is useful for taking 10 MP images and scaling them down for displaying on a family website, for instance.
  • QuickTax – I buy this every year during tax season. Asks you all the relevant questions and fills in your forms for you, or you can enter stuff directly if you want. Gives you tips on saving tax, copies relevant data from last year’s forms, can print out the forms, and can give you all the information you need to submit electronically. Well worth the $40.

The Interview

I graduated from the University of Waterloo in 1992 with a BMath in computer science. During my last term at Waterloo, those who were graduating went through interviews for full-time jobs. I interviewed for several companies, but I was only really interested in two: Microsoft in Redmond, Washington and Corel in Ottawa. I did my sixth work term at Microsoft, and they flew me out there again for the grad interviews. Unfortunately, due to some administrative mix-up, they had set up interviews for me on the assumption that I was a co-op student looking for a four-month position, not a graduate looking for full-time work, so those interviews didn’t amount to anything. I have no memory of flying or driving to Ottawa for the Corel interview, but I remember it taking place there, so I must have made my way there somehow.

I went through three interviews that day. The first was with the HR person (whose name, I believe, was Sandra Gibson – I have no idea why I remember that), telling me about compensation and benefits and such. The second was with the man who would be my boss if I got the job, Roger Bryanton, in which he told me about what their group did and the positions available. He asked me some technical questions as well as some more general ones like what I’d be interested in working on. Then came the third one, which is the only one I really remember. The interviewer was a man named Pat Beirne, who was Corel’s chief engineer and the man who originally wrote much of their signature application, CorelDRAW. I didn’t know it at the time, but the man was basically a living legend among Corel people. Roger brought me into Pat’s office, introduced us and left. I knew this was going to be a technical interview, so I put on my virtual propeller hat and got ready for the questions. Pat stood up and walked over to the large whiteboard on the wall to my left. It’s been over seventeen years since that interview, but I still clearly remember what he said next:

I’m here to find out if you know what you say you know.

I didn’t lie on my resume. I didn’t say I was an expert in anything. I didn’t say I had extensive C experience when I really only had some C experience. I didn’t say I was proficient in something I’d never used. But when the chief freakin’ engineer of the company says something like that and you’re twenty-two years old, even if you didn’t lie, and regardless of your self-confidence level, you’re gonna get nervous. And I was.

“Let’s start with an algorithm,” he said. I don’t remember what it was for, but he asked me to write some C code that would solve some fairly simple problem. There was a loop and an array and some numbers, but that’s all I remember. I wrote it up on the whiteboard in about 15-20 lines of code. “Great,” he told me, and I finally breathed out. I’d done it – I’d proven that I knew what I said I knew! I’d gotten the job, right? Not quite yet – we weren’t done. Not even close.

“Now make it faster.”

“Ummmm… OK…. I guess there are a few things being done here that don’t always need to be done, so you could add an if statement around them, and that would be a little faster.”

“Good. Now instead of handling just ten values, make it handle any number of values.”

“Oh… ummm… rather than using a static array here, you could dynamically allocate it.”

“Excellent. Now make it use half as much memory.”

“Uhhhhhh… you could…. ummmm…..”

“Here’s a hint: none of the numbers you’re storing is bigger than 50,000.”

“Oh, OK, you could use a short int rather than int and that would use half as much space.”

“Very good. Now make it faster.”

We went on like that for hours. Well, it felt like hours. “Make it faster.” “Make the code smaller.” “Make it handle negative numbers.” “Make it faster again.” By the time I was done, I’m sure I had three machine instructions that would handle an infinite number of values in a nanosecond using zero memory.

Of course even then I knew that he wasn’t testing to see how small and fast I could make this particular algorithm. He was testing three things:

  1. How well I knew the C language, and programming concepts in general
  2. What kind of problem solving skills I had
  3. How I perform under pressure

These are in no particular order; in fact #1 was by far the least important of the three. If I had #1 but was short on #2 or #3 – well, thanks for coming in and we’ll be in touch. Someone with #2 and #3 but short on #1 – well, you can learn C. Which would you rather have on your team: a great C programmer who can only solve easy problems or falls apart under pressure, or a great problem solver who works well under pressure but doesn’t know C very well? The first one is useless – in fact he’s worse than useless, he’s a hindrance to the team. You take the second guy, send him on a five-day C course, and you’re all set. It doesn’t mean that he’s definitely going to turn into an awesome programmer, but he’s certainly got a better shot than the first guy.

“But how does the story end?”

I got the job, and worked for Roger on CorelSCSI Pro from June 1992 until August 1993, when I left Ottawa to start grad school at Western. One of the pieces of software I worked on at Corel was a CD-ROM driver for Novell NetWare, which did not support CD-ROMs at the time. When I started at Sybase in 1997, I was hired to replace someone who happened to be the NetWare guy. My boss saw NetWare on my resume, and I became the new NetWare guy. Twelve years later, I’m still the NetWare guy.

(Geek alert: technophobes stop reading now) I learned something about the C language that day as well – if you have a pointer to type X, then incrementing the pointer by one does not advance the pointer by one byte, it advances it by sizeof(X) bytes. During the interview, that bit of knowledge allowed me to make the code just a little smaller, but it’s such a fundamental part of how pointers work in C that I can’t begin to count the number of times I’ve made use of it since then. And I can honestly say that I learned it from Pat Beirne.

I want my ninety cents

The last time I posted about a problem I had with the Ontario Science Centre’s web site, I got comments from the webmaster himself, and the problem was resolved quickly. So I’m trying again.

I just ordered tickets to see Star Trek at the Omnimax Theater at the Science Centre in a couple of weeks. The options for getting your tickets are: print the tickets yourself, with a service charge of $0.90, or have them print the tickets for you and hold them at the will-call window for free. This of course makes no sense – somebody has to print the tickets, so either I pay for the privilege of doing the work myself (calling it a “service charge” when I’m the one doing the work is amusing), or I give them nothing to do the work for me. Anyway, I happen to be doing this at home on my work laptop, which means I can’t get to the printer upstairs. (I’m sure there’s a way to do it but I’ve never been able to figure it out.) So I decide to take the free option and have them print my tickets. When I get the final “this has been charged to your credit card” page, I see that I was charged the $0.90 anyway. Well, I’m against paying to print my own tickets on principle anyway, so I’m certainly not going to pay the fee to not have this service. So I decide to call the box office.

First off, they list the opening days and hours and website and stuff before listing the options. Advice: nuke that crap and have a “for opening hours, press 1” option. Secondly, the “stay on the line to speak to a representative” option punts you off to their directory, where the options are:

If you know the person’s name or number, please press 1. For additional information, please press 9.

Pressing 9 boots you back out to the main recording, where you have to listen to the days and hours and website info again. Now, this is at 8:30 on a Friday night, so it’s highly possible that the box office is closed (I don’t know if the box office has the same hours as the Science Centre itself). But in that case, they should have a recording saying that the place is closed rather than pushing you off to another recording, especially since this was the “to speak to a representative” option. If there’s no representative, don’t list that option!

The original overcharge of 90 cents was likely a tiny oversight, no big deal. But someone designed that phone system, and that person needs to be punished – perhaps they should be forced to actually use it.

Update: Once again, the Science Centre people have responded quickly. Within a few days of originally posting this, I got a call from Bob, the customer service manager, who recognized the problem and told me that they are in the process of changing both their phone system and their ticket printing system. He told me that he would refund my ninety cents and assured me that with their new system, there would be no charge for printing tickets yourself. Again, major kudos to Bob and the Science Centre folks for responding to me directly and addressing the problems, however minor they may be.

But I bet it’s fast over a 2400 baud modem

I listen to TWiT every week and Jason Calacanis is a regular guest (though rarely when Chauncey John C. Dvorak is on). The guy knows the tech industry inside out and backwards and is an astute businessman, and he’s not afraid to air his opinions on anything (which is probably why he and Dvorak don’t seem to get along – they both have strong “I’m right and you’re an idiot” beliefs). Calacanis seems to be one of those love-him-or-hate-him kind of guys, though I keep flip-flopping. He can sometimes be an annoying blowhard while other times be one of the most insightful tech guys around. I always enjoy TWiT when he’s on though since he’s always entertaining; he does a spot-on Christopher Walken impression.

He even showed up on a StackOverflow podcast a little while ago and spent the entire hour being clever and insightful, so I’m wondering if the blowhard is kind of a persona that gets people talking about him – he certainly seems to subscribe to the old mantra “I don’t care what you’re saying about me, as long as you’re saying something”.

He posted something on Twitter the other day telling people about his email list. Forget the irony of one of the pioneers of blogging using a mailing list. The really ironic thing about this is that the page to sign up for his mailing list is so 1998. No Javascript, no Flash or Silverlight, no images, doesn’t even use CSS. It’s just a very basic HTML 3 page with a couple of basic forms. It even uses UPPERCASE TAGS just like we did back in those early days of the web. Now granted, Jason didn’t create the page himself or anything; it looks like he’s just using some existing mailing list site to run things for him. This guy has his finger on the pulse of the technology industry – having him use a mailing list rather than a blog is a little confusing, but him agreeing to use this terribly dated web page to do it is completely inexplicable.

Chrome vs. Firefox

I have been a loyal Firefox user since version 0.8 or so, back in 2004 when it was still known as Firebird. When designing my web sites, I used Firefox exclusively, and before publishing them, I frequently forgot to make sure they worked properly in IE, which they usually didn’t because I used CSS standards (parts of which are either ignored or implemented wrong by IE) as much as possible. I installed the Adblock add-on the moment I heard about it, and have seen very few internet ads since then. It’s been great. There were only two major drawbacks to using Firefox:

  1. Some websites didn’t work properly in Firefox, either because they use evil ActiveX controls which only work on IE, or because they were simply developed using IE and other browsers were ignored. Notably, Sybase’s internal vacation request and scheduling system uses ActiveX so I have to use IE for that. Both of these issues are becoming less and less prevalent as browsers such as Firefox, Safari, Opera, and Chrome gain market share.
  2. Firefox uses a boatload of memory. I would sometimes have a Firefox window open with only one tab (usually showing my gmail inbox), and Task Manager would tell me it was using well over 200 MB of RAM.

Then Google Chrome was released, with the promise of much faster rendering and Javascript. I considered trying it out, but read a couple of reviews at the time saying that it was not bad, but not really “ready for prime time”. In recent weeks, I’ve read more reviews from people who have made the switch and are quite impressed with Chrome. A few weeks ago, after hearing from yet another source that Chrome used much less memory than Firefox, I decided to give it a try. Since then, I have used Chrome almost exclusively. I’ve noticed a few differences, both pro and con.

Advantages of Chrome

  1. Everything is faster. In particular, Javascript is much faster. Gmail is very snappy, and other sites that are heavy on the Javascript (like Stack Overflow) are also faster.
  2. Chrome uses much less memory. Right now, I have one Chrome window open, with one tab showing my gmail inbox. There are four (?) Chrome processes running, using a total of 43 MB of RAM. I’ve seen other times where I have a couple of tabs open, and there are seven or eight Chrome processes running. But the total amount of memory they’re using is still less than one Firefox.
  3. A problem in one tab that causes a crash will only cause that tab to vanish, not the whole application. I’ve only seen this happen once, and actually the tab didn’t vanish at all – the video that was supposed to play in it never did, but Chrome kept right on truckin’ along. Firefox doesn’t crash that often for me either, but when it does, the whole thing goes away.
  4. Some sites (like Google Reader or, again, Stack Overflow) have “tooltips” that don’t seem to work in Firefox, but do in Chrome and IE.
  5. Text areas are always resizable. Very nice.
  6. Chrome detects known malware sites and prevents you from going there and even from loading third-party javascript from them, though you can bypass the protection if you really want to. Firefox, without NoScript, will happily serve you up any nasty Javascript it’s told to.

Advantages of Firefox

  1. Firefox has a rich community of add-ons. For Chrome it’s already begun with user scripts, but there aren’t many of them and it’s a lot more manual work to install them, and you also have to use the less-stable beta branch version of Chrome. I’m sure that in future versions there will be automated installation and lots more to choose from, but for now Firefox wins. Some of the ones I love that have no equivalent in Chrome (yet):
    • NoScript disables Javascript entirely unless you manually enable it for the particular site you are on. I have it set so that sites I frequently visit have Javascript enabled just enough for the site to work. If a site uses its own stuff plus something from doubleclick.net, the doubleclick stuff is disabled. AFAIK, there’s no way to do this in Chrome, so I probably have doubleclick cookies on my machine now. Damn those doubleclick people, damn them all to hell. (Yes, I know they’re now Google people.)
    • AdBlock for Firefox rocks. So much so that I’ve linked to it twice in this article. With Chrome, I am seeing ads on pages that I never knew had ads. After a while I discovered a similar thing for Chrome called AdSweep, which worked pretty well, though I saw more ads than I did with Firefox. Unfortunately, AdSweep requires the beta branch, as I mentioned above.
    • XMarks (formerly FoxMarks) synchronizes your bookmarks and saved passwords between instances of Firefox (i.e. work and home). It doesn’t yet exist for Chrome.
  2. Firefox can re-open tabs that have been accidentally closed. At first I couldn’t find a way to do that with Chrome, but it turns out to be possible, though not exactly intuitive. When you open a new tab, it shows you some frequently-viewed and recently-viewed pages, and there’s also a list of “recently closed” pages.
  3. Firefox supports keymarks in their bookmarks, which are just shortcuts. For example, I can enter “fb” to go to Facebook. Chrome doesn’t support these directly, but does a very fast search (hey, it’s Google) on your bookmarks and brings up bookmarks that match what you’ve typed in the bar. However, Firefox keymarks support parameters, so I can do a search on IMDB by saving a bookmark whose URL contains “%s” (mine ends in “;s=all”). The %s is replaced with the parameter you enter, so if I enter “imdb glitter” in the address bar, it does an IMDB search on the Mariah Carey movie “Glitter“, if for some reason I wanted to. Chrome seems to understand “imdb” and immediately does an IMDB search, so that’s fine, but I have another one that accesses our internal bug tracking web site (called iReport). If I enter “ir 12345” in the Firefox address bar, the bookmark will create the proper URL to take me to the web page for iReport issue #12345. Doing the same in the Chrome address bar ignores the ir bookmark and does a Google search, which obviously doesn’t do what I want.
  4. In Firefox, there is a separate downloads window which lists what’s being (and has been) downloaded. If you’re downloading something large, you can minimize the actual browser window and just leave the downloads window open and watch the progress that way. You can even minimize the downloads window and watch the title of the button in the taskbar, since the title of the window contains the percentage complete. Very handy. In Chrome, downloads seem to be associated with the tab that started them. I downloaded a fairly large file earlier today using Chrome, and the progress only showed up in the window where I started the download. You can open a tab that shows download progress, but you still need the entire browser window open.
  5. Firefox allows you to select some text on the web page and “View selection source”, which is easier when debugging problems than downloading the entire source for the page and searching through it. No such option in Chrome.
  6. Firefox has the “Manage bookmarks” window which makes dealing with bookmarks easy. With Chrome, you have to do it one at a time, and there’s no way to sort bookmarks. However, I keep most of my bookmarks in an online bookmarking service anyway, so this doesn’t affect me much.
  7. On at least one message board site, the keyboard shortcuts to add italic and bold indicators to text don’t work on Chrome.

The result

I’m sticking with Chrome. There seem to be more advantages to Firefox but the only one that was really significant to me is NoScript, and many of the rest are fairly simple things that will likely be fixed before long (I know the sorting bookmarks one is already fixed, just not released yet). I’m generally pretty careful about what web sites I visit – if a site is in any way questionable, I don’t visit it at work, and at home I’m protected by OpenDNS, which I have configured to completely block all porn sites as well as known phishing and adware sites. Chrome’s built-in protection is nice too.

Other than that, the Firefox advantages are either no big deal or easily worked around. The speed of Chrome (not just browsing speed, but the overall speed of my machine is faster without Firefox using 1/4 of my RAM) is just too big of a win.

Update: I revisited this comparison six months later and posted an updated review.