Category Archives: Work

Nicky learns to debug

Nicky came with me to work this past Wednesday for “Take your kid to work day”. Not only did he see first hand what I do, he even helped me do it.

I showed him the “Database Explorer” software that I’m helping to develop for the next version of SAP HANA. It allows you, among many other things, to execute arbitrary SQL statements against any HANA database. But before he could have any idea what that was good for, I had to tell him what a database was. I simplified it to: a database is a collection of data stored in tables, where a table has a bunch of columns and each column has a type (e.g. text, numbers, or dates). As an example, we created a table called “person”, with columns like “name”, “height”, “weight”, and “birthdate”. I inserted a row for Nicky and one for me.
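Our session went something like this (the column types and all of the values here are my reconstruction for illustration, not the exact statements we typed):

```sql
CREATE TABLE person (
    name      VARCHAR(50),
    height    INTEGER,   -- centimetres
    weight    INTEGER,   -- pounds
    birthdate DATE
);

-- One row for each of us (values invented for illustration)
INSERT INTO person VALUES ('Nicky', 150, 90, '2003-04-25');
INSERT INTO person VALUES ('Graeme', 180, 200, '1969-08-12');

-- And the query that revealed the bug:
SELECT * FROM person;
```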

Then I showed him how we can use the SELECT statement to get the data back. So “select * from person” shows both of us, with our names, heights, weights, and… incorrect birthdates. Each date was off by one day. Hmmm… that’s weird. Perhaps I made a typo on one of them, but not likely both. I updated a row and double-checked the birthdate but when I retrieved the data again, it was still wrong. OK, well, guess what Nick? We get to look through the code, find the error, and fix it. This is what I do.

I knew where the data retrieval code was, so Nicky and I looked it up and found where we get the data from the database and format it for display. I added some code to print out the value we retrieved and found that we were actually retrieving the correct date. This meant that it had nothing to do with the insertion process, and that the database itself wasn’t involved. Then I added a line that displayed the value again after we formatted it and found that it was wrong. So it was definitely our formatting code that was to blame.


This is written in JavaScript, and we were creating a Date object from the original value. We looked up what the Date constructor expects, and found that the format we were passing in was incorrect. Then we parsed the year, month, and day out of the date and used those directly to create the Date object. We fired it up again, and presto, it worked. High five, Nick.
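One common cause of an off-by-one like this (not necessarily the exact bug we hit; the date value here is invented) is that an ISO date-only string is parsed as midnight UTC, so in any timezone west of Greenwich the local calendar date comes out a day earlier:

```javascript
// An ISO date-only string is parsed as midnight *UTC*, so in any timezone
// west of Greenwich the local calendar date comes out one day earlier.
const stored = '2005-03-14';

const wrong = new Date(stored);          // midnight UTC
// In Eastern time, wrong.getDate() would be 13, not 14.

// The fix we used: parse the parts and build the Date in local time.
const [year, month, day] = stored.split('-').map(Number);
const right = new Date(year, month - 1, day); // months are 0-based

console.log(right.getFullYear(), right.getMonth() + 1, right.getDate());
```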

Our next task was to write a barrier test. We have a suite of automated tests that must pass before anyone is allowed to submit a change to any piece of code. This is to make sure that nobody ever makes a change in the future that causes existing functionality to stop working. I knew there was a file that contained tests for many different data types, so I loaded that up to add a test for dates. Oddly, I found that there was one already there, but commented out. Then I read the comment, which stated that the test was failing but the author didn’t know why, so he was temporarily disabling it until he had some time to figure it out. The comment was signed “gperrow”.

I enabled the test, corrected one typo, and presto, we have a fixed bug and a barrier test. High five, Nick.

Nick said that between the bug squashing (he prefers that term to “debugging”) and playing with the Arduino and 3D printer in the morning, he had a really fun day, as did I.


Habitat for Humanity

Every year, SAP has an event called the Month of Service, where every employee is given time off from work to volunteer in one of a number of venues. Some are right in the building and only take a couple of hours; others are outside the building. The participation in this event is impressive – numbers from the Canadian SAP offices ranged from 42% (c’mon Calgary!) to 72% in Waterloo to a confusing 129% in Ottawa. In all, over 60% of Canadian SAP employees participated in some volunteer event, and in addition, SAP donated money to some of the charities involved. This is a great event and kudos to SAP for doing this.

Oddly, I don’t remember hearing about this event last year, though I am now aware that many employees did take part. I wanted to do my part and bump up those Waterloo numbers, so I chose Habitat for Humanity. They are doing a build in Kitchener, and an employee who volunteered with them last year was organizing it again this year. I love building stuff but I’m not very knowledgeable about such things, so a place like this is perfect for me: they’ll tell me exactly what to do, they have the tools and equipment available to do it, and they have truly knowledgeable people around to help and advise.

The block we were working on

When we first bought our house, we had lots of repairs to be done and I always enjoyed doing them. I can do simple electrical stuff and I don’t totally suck at working with wood, though I’m not going to be building a dining room suite anytime soon. I’ve installed phone lines and electrical outlets and replaced light fixtures and such, and I enjoyed helping my father-in-law and brother-in-law build our “cold room” (pantry) and workshop in the basement as well as replacing our deck. One thing I don’t enjoy is plumbing. For some reason I can just never get the hang of plumbing, though I have installed new showerheads a couple of times. That’s brain-dead easy, but Gail’s done some more difficult stuff like replacing the kitchen faucets and our bathroom sink and she’s pretty good at it.

But I digress. We arrived on site around 8am, where there were some snacks and coffee/tea available. There were about 15 of us from SAP plus a bunch of contractors. At 8:30 they had a safety presentation, and after that we all geared up in our borrowed CSA-approved boots and hardhats and headed out to the work site. The area they’re working on is huge (see the picture below, taken from the top of the scaffolding on the new block). The site manager was telling us that if they were to buy just enough land for one house in this area, it would cost nearly $200,000 – and that’s without a house. Much cheaper to buy a large lot and put up townhouses, which is what they’re doing in Kitchener. There was one block of six houses that were mostly finished, and they’d started another block of four in the back corner, which was where we were working.

We started by bringing a pickup truck’s worth of 4′x8′ sheets of plywood down to the site, and then passed them all up to the second floor. Those were later installed on the roof by some of my colleagues. We then installed a couple of 2x4s horizontally through the trusses, which I believe were to be used for running wire between rooms. The trusses needed spacers installed, so that was our next task. While we were getting ready for that, another contractor came by and asked if we had extra people, since he could use a hand. I volunteered, and so he and I headed to the other end of the building, where all of the siding had been done except for the very top pieces. My first task was to install five pieces of J-channel, so I needed to measure the pieces, cut them to length, and nail them in place. Once that was done, I cut the last pieces of siding for each of the five sections – this took a lot longer than I expected and I burned through a couple of utility knife blades doing it. Finally that was done and I was able to install the pieces and screw them into place.

The work site

That’s it. That’s all I got done all day. Doesn’t sound like much, does it? It was a lot of work though and my shoulders and legs were feeling it over the next couple of days.

There was a 10-15 minute break at 10:45 or so, where they supplied some ham sandwich fixin’s as well as some crackers, cheese, and fruit. Lunch was at 12:30 and was catered by people from a local church. They had brought a bunch of baked potatoes and all kinds of toppings: grated cheese, bacon, onions, diced tomatoes, chili, sour cream, and a bunch more. For dessert there was more fruit as well as brownies, lemon bread, date bread, and banana bread. The food was excellent.

I have lots of kudos to go around. First to my SAP colleagues and particularly Dave Brandow for organizing it. Secondly to the church people for supplying lunch – it was fantastic. Thirdly to Andrew and Darrell and Marcus and the other construction pros who were all helpful and very patient with a bunch of non-pros. Fourth, to the person who gave us the introduction and safety training – I believe her name was Janine but I feel terrible that I don’t remember for sure. She reminded me of Penelope Garcia from Criminal Minds – particularly her voice. And finally to all of the Habitat for Humanity people everywhere, those who donate, and those who volunteer. You are doing a great thing and I am looking forward to volunteering again next year.

Yahoo decides this mobile thing is a fad

According to All Things D, Yahoo has made a change to their company policy on working remotely. The new policy is, in a nutshell, don’t. Employees who currently work remotely will have to either move so they can work in a Yahoo office or resign. This seems to apply to workers who work 100% remotely as well as those who work from home one or two days a week. Does Yahoo really not understand mobile yet? The entire point of the mobile industry is to allow people to do stuff wherever they happen to be – you don’t have to go to your bank to do your banking. You can shop without going to a store. You can send email, surf the web, watch TV and movies, and listen to whatever music you want from anywhere. But Yahoo employees must be physically located in their offices in order to be productive? Really?

The reasoning Yahoo has given for making this decision makes little sense: they had lots of people who worked remotely and weren’t productive. So instead of firing the unproductive workers or making them come into the office, they decide to punish all of the productive remote workers as well.

Many tech companies talk about hiring the brightest and the best. Google is notorious for their hiring conservatism; they’d much rather pass on someone good than hire someone who turns out to be a bad fit. Yahoo is obviously not concerned with this. It sounds like they’d rather hire someone who lives physically close to a Yahoo office (or is willing to move) than someone awesome who doesn’t (and isn’t). Maybe they have great people up the wazoo and have decided they can afford to lose some of them, which they will. Maybe this is a cheap way of getting rid of some employees without having to pay them severance. That strategy only works if the remote employees are the ones you want to get rid of, and if you don’t mind losing some that you’d rather keep.

I work from home at least once a week (and more if there’s nasty weather), and have for ten years. Even though I don’t work for Yahoo, I take it personally when I read stuff like “Speed and quality are often sacrificed when we work from home”. I obviously can’t speak for everyone who works at home, but it’s quite the contrary for me. I frequently get a fair bit done at home – at least partially to avoid this very stereotype. If my manager decides that I don’t get as much done at home as in the office, he may decide to revoke this privilege, and that’s a privilege I greatly appreciate and don’t take for granted. I certainly have the occasional work at home day where I don’t get much done, but I also have the occasional work in the office day where I don’t get much done. I also have days both at home and in the office where I’m very productive. And this is all ignoring the fact that I work at least two hours longer when I work at home since I’m not driving to Waterloo and back.

I’ve done work in a number of different rooms in my house. I’ve brought my laptop and gotten work done in mechanic’s waiting rooms, doctor’s and dentist’s offices, hotel rooms, friends’ houses, my parents’ and in-laws’ places up north, and even a couple of Tim Horton’s. Every SAP employee worldwide is given a laptop so that they can work remotely if necessary. If I worked for Yahoo, their company policy would ensure that none of that would ever happen again.

Dear SAP/Sybase: I’d advise against this strategy. The goodwill that you’d lose from your employees would vastly outweigh any potential (and purely theoretical) productivity gains. Not only does it limit the people you can hire in the future, but I know of a few people who’d likely quit. In fact, I know of one brilliant engineer who you’d lose because he lives far away from the office and works from home a lot. And trust me, you really don’t want to lose this guy.

Yes, that’s right – you’d lose Ivan. Oh, and I’d probably be outta there too.

Disclaimer: I am not speaking for Ivan, nor am I making any kind of ultimatum to SAP/Sybase. Just saying that I disagree with this policy.

If the coffee machine breaks, just drink water

At work, we have a fancy coffee machine in the kitchen which is similar to the Tassimo thing that’s all the rage these days. (A friend of mine who didn’t drink coffee bought one for his wife, and now he drinks at least a cup a day. You can judge for yourself whether that’s a good thing or not.) The one at work takes little pouches (called “pods”) of coffee, tea, or hot chocolate, pushes hot water through them at high pressure, and gives you a steaming mug within about 30 seconds. I don’t drink coffee but I like the tea and hot chocolate it makes, and the fact that it’s ready so quickly is very convenient.

When it’s done making your beverage, it automatically drops the used pod out the bottom into a big bin that gets emptied regularly. Now and again a used pod will get stuck, but the people who supply us with the coffee pouches have posted a helpful (hand-written) list of instructions on how to clear it:

The order is VERY specific!!

Turn off, unplug. Open big door, then put your hand under silver packet door, pull off, set aside. Look inside. If you see a pod give 1/4 turn, GENTLY slide out the back (DON’T FORCE).

Plug in, turn on, close big door IN THAT ORDER.

Next, put silver packet door on by putting top into place, smack bottom with your hand. PACKET DOOR  MUST BE PUT ON LAST OR ELSE IT WILL NOT RESET! Good luck.

Good luck indeed. Sorry, but if your product needs this level of detailed instructions (complete with UPPERCASE COMMANDS) to fix a basic problem, you need to revisit your design. Luckily this has never happened to me but if it did, Tim Horton’s is only a 3 minute drive away.

Tool review: Microsoft Network Monitor 3.3

I have used Wireshark for packet sniffing and analysis for a number of years, starting back when it was called Ethereal. A little while ago I was using it to look at broadcast packets that our clients send out, and decided that it would be great if Wireshark could interpret our wire-level protocol and display meaningful information about the packets. After a bit of searching, I found that you can add plug-ins to Wireshark, allowing you to do whatever you want with the packet data. I found some detailed instructions on how to do this, beginning with:

  • Install a version of the Microsoft C/C++ compiler
  • Install a particular platform SDK
  • Install Cygwin
  • Install Python
  • Install Subversion
  • Get the Wireshark source
  • Configure the source
  • Build Wireshark

Once you’ve done all that, you can start building your plug-in in C. I set up a Windows XP VM and spent a day or two doing all of this, but never got to the point of actually creating the plug-in. A few days later we had a team status meeting, during which I mentioned this project. A colleague, Peter, asked if I had looked at Microsoft NetMon, saying that he believed it allowed you to add your own parsers as well. I downloaded it and took a look. Thank you, Peter, for saving me days, if not weeks, of development time. In less time than it took me to set up the VM in preparation for writing a Wireshark protocol analyzer, I had analyzers written for the majority of both our UDP and our TCP protocols.

Writing parsers

As a packet sniffer, NetMon is not really much different from Wireshark, though I find the interface a little more intuitive. This might be because I’m running on Windows, and Wireshark has always looked to me like a Unix program that has been ported to Windows rather than an application written for Windows. Both support capture and display filters. NetMon has colour filters as well – particular packets or conversations can be coloured based on the filter results. You can view packets as they are captured, save them to a file, and load them back in again later.

But writing a parser is orders of magnitude easier than writing a Wireshark plug-in. You simply tell it what ports your protocol uses and describe what the protocol looks like in a proprietary language called NPL (Network Monitor Parser Language) that’s vaguely C-like but very simple. Some properties of this language:

  • it handles bitfields, ASCII and Unicode text, and binary data, as well as various types of numeric values (8, 16, 32, or 64 bits, integer or floating-point, signed or unsigned, big- or little-endian)
  • you can define your own data types
  • there are a number of special data types built-in; if your packet contains a 32-bit IP address, for example, you can just specify it as IPv4Address and it will get interpreted and displayed as expected
  • you can make structs which group pieces of the data together, and arrays which hold collections of the same type of data
  • you use while loops and switch statements to modify behaviour. For example, your protocol might have a byte that indicates the type of packet, and then the structure of the packet depends on the value of that byte. No problem.
  • you can indicate both storage format and display format, so if you have a byte that’s 0 for a request and 1 for a response, you can display the words “request” and “response” rather than just 0 or 1. The rest of the code can reference this value by name and get 0 or 1. The display string can be as complicated as you want, even referencing other pieces of the packet by name.
  • it supports conversations, and there are variables that have global, conversation, packet, or local scope
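To give a flavour of the switch-on-type idea above, a parser fragment might look roughly like this (the protocol, field names, and exact syntax here are invented from memory – the installed help file has the real grammar):

```
struct MyPacket
{
    UINT8 MsgType;           // 0 = request, 1 = response
    switch (MsgType)
    {
        case 0: RequestBody  Body;
        case 1: ResponseBody Body;
    }
}
```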

The help file installed with the app describes each of the language features, and I found a document that describes an example protocol in great detail.


The biggest drawback of this tool is the parser editor. It’s not very powerful – it makes Notepad look feature-rich. I use Ctrl-Backspace (delete previous word) and Ctrl-Del (delete next word) a lot, since they’re supported in Windows Live Writer, Word, and emacs, but support here is spotty – sometimes it works, sometimes it deletes the wrong word.

The main feature it’s missing is undo. It doesn’t even have a single-level undo. If you hit backspace one too many times, you’d better remember what that last character was, because it’s gone. An editor that doesn’t support undo is pretty much unacceptable in this day and age, and I lost data more than once because of it. Once you realize that you can’t undo mistakes, you end up clicking Save a lot more often, and doing things like copying the file to a backup before you make big changes. I put my parser files under source control and started checking them in periodically, which is a good idea anyway, but if the parser stuff weren’t so damn cool, the lack of an undo feature might be a showstopper. Emacs supports Ctrl-A and Ctrl-E to get to the beginning and end of the current line respectively, and sometimes I instinctively use those keystrokes in editors where they’re not supported, like this one. Unfortunately, Ctrl-A here means “select all”, so doing that and then typing something is disastrous: there’s no undo, so you’ve just lost your entire file. You have to close the file (do not save!) and then reload it, losing whatever changes you had made. Even a single-level undo would save you from that.

The compiler has some problems as well – there were a number of times where I got compilation errors that were badly written or vague enough that I didn’t know what the problem was. It would point to what looked like a valid statement and say that it was unrecognized or invalid, and it turned out to be because of a missing (or extra) semi-colon on a different line, or a language rule that wasn’t obvious.

Once you’ve made changes to your parser, you have to save it and then click “Reload Parsers”, which reloads all 370+ parser files it knows about. Surely there could be a way to reload just the one I changed? Granted, there are dependencies between files, so changing one file might require that a different file be reloaded, which makes reloading them all the safest option – but it’s slow. Ideally, the tool would figure out the dependency tree and reload only the files that depend on the ones that changed. And the tool should prompt me to save if I have an unsaved file when I click “Reload Parsers”.

If anyone from the NetMon dev team reads this, here’s a bug report: If I load a parser file, then paste some code into the file, it’s not marked as “dirty” until I actually type something. Also, if I load a display or capture filter from a file, this generally means “replace what’s in the textbox with the contents of the file”, not “insert the contents of the file into the textbox at the current cursor position”. I can see how that feature might be useful in combining filters, but it should not be the default.

As powerful as the NPL language is, there are things it simply can’t do. In my case, some of our packets can be encrypted or compressed, but the NPL language can’t decrypt or decompress them. It would be nice to be able to write a small plug-in that could do these types of things, but it’s not supported. The Wireshark approach would work for that.


For those analysis needs that are not satisfied by parsers, NetMon supports things called “experts”, which are external programs that can read the data from a capture file and analyze it in whatever way they want. An expert sounds similar to a parser except that it’s written in C or C++ (or C#, I think) and has the limitation that it only works on saved files, so you can’t see the results in real time as you can with a parser. I’ve started to write one of these to solve the decompression/decryption problem I mentioned above. There doesn’t seem to be a way to decrypt the data and write it out into a new capture file, but I can at least decrypt, parse, and display the data. I can reuse the parser code I’ve already written, since the program is dealing with pre-parsed information, but I have to grab each field individually and display it, so I essentially have to rewrite all the display code in C.


Overall, this is a very cool utility and has replaced Wireshark as my packet sniffer of choice. The documentation is pretty thorough and it includes some good examples. If all else fails, the parser code for all the other supported protocols is right there. There is a help forum to which I’ve posted a couple of questions and gotten quick and helpful responses. I wrote a while ago about how cool Windows Live Writer is, so kudos to Microsoft for yet another cool utility.

Telecommuting tools

I wrote earlier this week about my experiences telecommuting, and after reading a comment left on that posting, I wanted to write a little about the tools that I use to be more productive when working at home. But first, a bit of history.

Back when I started at Sybase in August of 1997, my friend and colleague Lisa suggested I ask the IT people for an extra monitor, keyboard, mouse, and power cable so that if I wanted to work from home, I’d just have to bring my desktop machine home and plug ‘er in. I did this, and this made things pretty easy for the one day every few months that I worked at home. My desktop machine, running Windows NT 4.0, had a modem installed, and when I wanted to check my email, I had to unplug the phone on the desk and plug the cable into the modem, dial into Sybase, and then synchronize Lotus Notes. I only did this about once an hour because it was a pain. If I wanted to check some files out of source code control, I had to write down the name of the file in my notebook, manually reset the read-only bit on the file, and make a copy of the file in case I needed to revert it. Many times I forgot the copy and was unable to revert if I needed to. When I got to the office the next day, I’d have to go through the list of files that I wrote down and check each one out.

After a few years of this, management sent an email around asking if anyone would be interested in having a laptop rather than a desktop the next time that machines were refreshed. I responded with something like “Yesyesyesyesyesyes” several milliseconds after reading the email, and a few months later, I had an IBM laptop. This made things orders of magnitude better — I brought the extra monitor and stuff back to the office, and was then able to sit at the kitchen table when working. I had broadband internet at home by this point but no router, so I still had to use the modem to get email. Another couple of years later, I bought a wireless router for home, as well as a wireless PCMCIA card that I could plug into my laptop. I installed the Sybase VPN software and nirvana was achieved. I could then simply run Notes like I normally would to send and receive email, and I could also use our source code control software directly. I subsequently tired of Notes so I moved to Outlook and then a few years later, Thunderbird.

Back to the present. Here is a list of tools I use to make telecommuting easier:

  • Firefox for web, Thunderbird for email, MSN Messenger for IM (this is true in the office as well as at home)
  • A lot of people seem to use Skype for phone, but I don’t really use the phone all that often. My regular phone works just fine. It does have a speakerphone, which makes things easier, especially for long conversations. Our old phone had a headset that worked pretty well too. That allowed me to walk around while talking on the phone which I always tend to do when not typing.
  • Broadband internet (absolutely required!) and wireless network, though wired would work fine if the router was handy or there were drops available.
  • VPN software is obviously a must. I won’t say which VPN product Sybase uses for security reasons (security through obscurity, dontcha know!), but one of the “features” is that it automatically drops the VPN connection every 12 or 24 hours or something, even if the connection is in use, and with no way to cancel it. When the connection has been idle for a while, I can understand it but every now and again I’m in the middle of copying some large file to or from work and I get a popup saying something like “The VPN connection will be dropped in 2 minutes“. Since there’s no way to cancel it, the message may as well say “The VPN connection will be dropped in 2 minutes. I hope you’re not actually using it, but if you are, well, it sucks to be you.” I just have to hope the file copy finishes in that time, or that I can re-connect the VPN fast enough that the copy just continues. If not and the copy fails, I have to reconnect the VPN and start the copy all over again. My description makes it sound like a huge problem, but it’s actually only bitten me once or twice in however-many years. It’s just annoying that I have to reconnect, especially since the VPN software is buggy and sometimes crashes while connecting.
  • Remote Desktop when connecting to Windows machines if possible. Some of our older (Windows 2000) test machines don’t support this, so we use VNC for those. But Remote Desktop is preferable because it’s faster and replicates the user experience more closely. If you maximize the Remote Desktop screen and the machine you’re connected to isn’t heavily loaded, you can almost forget that you’re connected to a remote machine. This is not the case with VNC.
  • When doing Unix stuff, I use VNC to connect to a Unix machine in the office and then use that to rlogin to other Unix machines. This works quite nicely, except that every now and again, I’ll be in the middle of typing some stuff and a character will get repeated for no apparent reason. I’ll be typing and something like cd /tmp/grrrrrrraeme will show up. Very irritating. I’m sure it’s a problem with the VNC client software, because I occasionally see it in the process of repeating – like it thinks I’m holding the key down when I’m not – but when I hit that key again, it stops. I suspect this is because it got a KEYDOWN message but missed the corresponding KEYUP message. I have never seen this when VNC’ing into a Windows machine.
  • I have a couple of VMware VM’s set up on our VMware server so I can do stuff on a machine that’s in our engineering subnet when I’m at home. Another VM has all the NetWare development stuff installed on it, though I rarely need that anymore.
  • Apple iPod (5G, 80 GB) along with a Logitech Pure-Fi Express Plus dock for music. Another absolute must.


I’ve worked at Sybase since August 1997, and have been a part-time telecommuter since January 2004. I already worked at home infrequently when the need arose (as did many others in our group), but at that time, my (old) car had around 275,000 km on it, and I wanted to reduce the mileage I was putting on it so that it would hopefully last a little longer. I asked my boss if I could regularly work at home one day a week (every Friday). He asked his boss, who asked the President of the company (whom to this day I have never met), and they all OK’ed it on a trial basis. Five years later, I’m still at home every Friday, and sometimes on other days as well. If there is a lot of snow in the forecast, I will generally work from home; in the past, I have had days where it took me two hours to get to work and the same to get home, and wasting that much time (and gas) seems really dumb if I can work at home and avoid it all. I’ve done this for a couple of years now, and I’m sure there have been days where the traffic would have been fine despite the snow. But one day a few weeks ago it was snowy and I didn’t think it was that bad, so I figured I’d brave the weather. Stupid move. It took me a little over two hours to get to work, and then about an hour and a half to get home.

I’ve read a number of articles on telecommuting, and one of the pieces of advice I’ve seen most often is that you should treat working at home the same as going to work, meaning that you should sit down and work during your regular office hours, you should have a separate “office” space and not just sit at the kitchen table, things like that. I’ve even read about people who close the door to their “office” and force their families to either call or email if they need them, just as if they were at an external office. This seems a little extreme to me, but it does avoid constant interruptions. I would love to have a dedicated place in the house where I could work more comfortably than the dining room. We do have an office upstairs, but the desk is so cluttered with stuff that there’s no room for my laptop. If I were to clear off the desk and use that as my telecommuting “office”, I think I’d have to invest in a new chair. Hmmmm…. I’ve considered that idea in vague terms before but never really thought it through until now, and I’m starting to think that it’s a really good idea.

Treating working at home like working in the office is particularly important if you telecommute 5 days a week, since you don’t want to feel like you live in your office — you want a place that you can “walk out of” at 5:00 and feel like you’re back home. For me, I only work at home one day a week most of the time, so I set my laptop up at the dining room table and sit there. Sometimes I used to sit at the kitchen table, since it’s closer to the entertainment centre so I can plug my iPod in and listen to music while I work. I recently bought a speaker device for my iPod so I can listen in the dining room, so now I don’t need to move. But generally, it’s a normal working day. I get up at the same time, have a shower and get dressed, get the boys breakfast and make their lunches, just like any other day. It’s just that when I’d normally kiss everyone goodbye and leave, I simply walk into the dining room and sit down.

It does take discipline to work at home. It’d be very easy for me to sit with my laptop in front of the TV all day, but I know that I’d get much less (read: nothing) done, so the TV never goes on. Surfing the web is harder to avoid since the browser is right there, but I’m getting pretty good at not sitting on Facebook or writing blog entries all day. Most of the incentive to not do this comes from my work ethic — I know that if I’m goofing around when I’m supposed to be working, I’m essentially ripping off the company, and so I feel guilty. I do have to admit that some comes from the fact that working from home is a privilege that Sybase has given me. If they decide I’m not getting as much done when I work from home, they might decide that they don’t want me to do this anymore, and I don’t want to lose the privilege. The logic is a little circular: I want to keep the option of working from home (where I could, in theory, goof off), so when I actually work from home, I don’t goof off, lest they take the option away.

One of the huge advantages of my job, from the point of view of telecommuting, is that there’s not much I can do in the office that I can’t do from home. (Obviously teachers, policemen, retail workers, and anyone else who deals face-to-face with customers don’t have this luxury.) Copying large files over the network is much slower (a 100 Mb office line vs. VPN over wireless G). I also do a lot of network-related projects, and some of that simply doesn’t work well over a VPN. As I’ve mentioned before, the product I work on is a mobile database called SQL Anywhere (SA), and the clients use UDP broadcasts for locating the server. When I’m at home, my machine is essentially on its own private LAN separate from the work one (VPN does stand for Virtual Private Network, after all), so any broadcasting stuff doesn’t work properly since UDP broadcast packets don’t span subnets. I have a couple of VMWare images running on our VMWare server in the office, so whenever I need to do network stuff, I can simply remote desktop into one of those. I used to do a lot of work on the NetWare version of our product, and I can’t do NetWare stuff at home either. But we don’t support NetWare in the latest version of SA, and we get very few bug reports from previous versions (that’s obviously because my code is robust and efficient, not because we only have a handful of customers using NetWare). I have my NetWare development environment set up on a VM now, so I can do that from home anyway.
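To make the broadcast problem concrete, here is a toy sketch of UDP server discovery. This is not SQL Anywhere’s actual wire protocol — the probe message, reply format, and server name are all invented — and for safety the probe goes to the loopback address. In real life the client sends the probe to the subnet’s broadcast address, and routers (and therefore VPN tunnels) do not forward broadcast packets, which is exactly why discovery fails from home.

```python
import socket
import threading

PORT = 2638  # SQL Anywhere's default port, used here just for flavour

def serve_once(srv):
    # Pretend database server: answer one discovery probe with its name.
    data, addr = srv.recvfrom(64)
    if data == b"PROBE":
        srv.sendto(b"SERVER demo11", addr)

# Bind before starting the thread so the client can't race the server.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", PORT))
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

# Client side: a real discovery probe would go to the subnet's broadcast
# address (e.g. 10.1.255.255), which never crosses the VPN boundary.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2.0)
cli.sendto(b"PROBE", ("127.0.0.1", PORT))
reply, _ = cli.recvfrom(64)
print(reply.decode())
cli.close()
srv.close()
```

The workaround described above — remote desktop into a VM that sits on the office LAN — works because the probe then originates on the same subnet as the servers.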

The obvious advantage to telecommuting is the lack of travel time and effort — not only does it reduce the time spent travelling (on Fridays I generally spend the extra two hours working), but it also reduces the gasoline used and the extra mileage on the car. On days where the traffic or driving conditions are bad, it eliminates the risk of an accident on the road, and lowers my general stress level as well. It’s also very nice to be able to schedule things like dentist appointments and visits from service people (the furnace guy, the guy who will hopefully fix our dishwasher next week so I don’t have to wash a thousand dishes every night, etc.) on Fridays and not have to take vacation days.

Other than work stuff I can’t do from home, the main downsides to telecommuting are things like participation in meetings, whether scheduled or impromptu (Aside: “impromptu” is a really weird word), and socializing. Some things are just more difficult over email or IM.

From the company’s point of view, there is really only one advantage: keeping employees happy (and therefore keeping employees). I do love my job, but if Sybase didn’t allow me to work from home, I might have grown tired of the commute by now and left to find a job closer to home. In terms of job perks, it costs the company nothing, and is a display of trust on their part, further enhancing my overall job satisfaction.

I’ve written before about IvanAnywhere, the telepresence robot in our office controlled by my colleague Ivan Bowman, who lives in Nova Scotia. Ivan used to live and work in Waterloo, and now travels here a few times a year. But I’m curious how Ivan’s working relationships with colleagues he has never worked with “in person” differ from those with colleagues he has.

Technical Debt

Jeff Atwood wrote an article on his blog Coding Horror yesterday all about paying down your technical debt. This is when you do something “the quick and dirty way”, which then costs you “interest” in the future in terms of bug fixes, workarounds when new functionality is needed, and extra time for developers unfamiliar with the code to understand why something was done the way it was. There are certainly times in every developer’s life when you have a choice between doing something “the right way”, which might take weeks to design and implement properly, and doing it the easy way, which gets the job done for now but may have consequences later. If you’re under a tight deadline, the easy way often wins out — that’s your debt.

People often complain about Microsoft Windows being bloated, and that’s largely because of technical debt that they can’t easily pay off. When they released Windows NT in 1993, they made sure that all existing Windows and DOS programs would still run. That decision saved them — who’s going to upgrade to a brand new OS when there are no programs and drivers for it, and none of your existing stuff will work? — but they incurred a huge debt because of it. Backwards compatibility has always been a huge issue for Microsoft — it’s only recently (2007) that they released an OS (Vista) that won’t run 16-bit DOS software from the 80’s. I cannot imagine how much of the Windows source code is dedicated to running legacy software.

I love this “technical debt” metaphor, as we’ve gone through it a couple of times on our mobile database product, SQL Anywhere, most notably a few years ago on SQL Anywhere version 10.

One of the advantages of SQL Anywhere is the way we save data in the database file. We do it in such a way that a database created on any supported platform can be copied and used on any other supported platform. Also, if you create your database with one version of our product, you can continue to use it when we release updates for that version, or even completely new versions. Version 9.0.2 of our server, released in 2005, can still run databases created with Watcom SQL 3.2, released in 1992. I remember my time as an Oracle DBA – every time we upgraded Oracle, we had to “fix” the database, and by “fix” I mean we had to rebuild it or upgrade it or something. I don’t remember exactly what we had to do, but we had to do something. We also had Oracle on our test server, which was a different platform from the production server, which meant we couldn’t just copy the production database over for testing or debugging purposes. That was quite a pain.
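A toy illustration of the idea behind a platform-portable database file (this is not SQL Anywhere’s actual file format; the “page header” fields are invented): if multi-byte values are always written in one fixed byte order, a file created on a big-endian machine reads identically on a little-endian one, so the file can simply be copied between platforms.

```python
import struct

def write_page_header(page_no, row_count):
    # "<" forces little-endian layout regardless of the host CPU's
    # native byte order; "I" is an unsigned 32-bit integer.
    return struct.pack("<II", page_no, row_count)

def read_page_header(data):
    # The same fixed format string decodes the header on any platform.
    return struct.unpack("<II", data)

header = write_page_header(7, 42)
print(read_page_header(header))  # (7, 42) on any machine
```

Had the format used the host’s native byte order (`"=II"` or no prefix at all), a file written on one architecture would decode to garbage on another, which is roughly the portability problem the fixed on-disk format avoids.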

Anyway, while this was a very convenient feature, we did accrue some “technical debt”. This is not quite the same as described above, in that we never took the “quick and dirty way”, but we still had to have code in the server to support features that had been removed from the product and very old bugs that had long been fixed. After six major versions and thirteen years, there were a lot of these. After much discussion, we decided to take the big plunge with the 10.0 release (known internally as “Jasper” — the last few releases have all had code names from ski resorts, “Aspen”, “Vail”, “Banff”, “Panorama”, and the next one is “Innsbruck”), since we were adding a ton of other new functionality with that release. The decision: version 10 servers would not run databases created with version 9 or earlier servers. Everyone would have to do a full unload of all their data and reload it into a new database when upgrading to version 10, and they’d have to do this for all their databases. This would allow us to remove thousands of lines of code from the product, making it smaller, and since we have far fewer cases of “what capabilities does this database have?”, the code can be more efficient. As a simple example, we now know every database that the server can run supports strong encryption, checksums, clustered indexes, and compressed strings, among others, so we don’t need to check for those capabilities before using them. There are a lot more assumptions we can make about the layout of the database that make the code simpler, smaller, and more efficient. We can also add new features that might have clashed with old databases. We knew that the rebuild itself might be inconvenient, and upgrading to version 10 wouldn’t be nearly as seamless as previous upgrades, but we also knew that once the initial pain of the rebuild was over with, life would be much better for everyone.
We even put a lot of work into streamlining the rebuild process so that it was as fast and simple as possible.
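Here is a hypothetical sketch of the kind of capability check described above (the names, capabilities, and “compression” scheme are all invented for illustration, not taken from SQL Anywhere’s code): before the cleanup, every code path has to branch on what this particular database file supports; after the cleanup, the guarantee that every runnable file has the feature lets the check disappear.

```python
class DbFile:
    """Toy stand-in for an open database file and its capability set."""
    def __init__(self, capabilities, pages):
        self.capabilities = set(capabilities)
        self.pages = pages

def decompress(data):
    # Stand-in for real string decompression: expand "~" to two spaces.
    return data.replace(b"~", b"  ")

def read_page_v9(db, page_no):
    # Pre-cleanup: old files may or may not compress strings, so the
    # server must check the capability on every read.
    page = db.pages[page_no]
    if "compressed_strings" in db.capabilities:
        page = decompress(page)
    return page

def read_page_v10(db, page_no):
    # Post-cleanup: version 10 refuses to run older files, so
    # compression is guaranteed and the branch (and the legacy
    # uncompressed path) is gone.
    return decompress(db.pages[page_no])

old = DbFile([], [b"raw page"])
new = DbFile(["compressed_strings"], [b"raw~page"])
print(read_page_v9(old, 0))   # b'raw page'
print(read_page_v10(new, 0))  # b'raw  page'
```

Multiply one such branch by thousands of sites across the server and the appeal of a one-time forced rebuild becomes clear.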

As you can imagine, there was some resistance to this, and I’m sure product management had to handle more than one call from a customer asking “I have to do what with my multi-terabyte database?”, but to their credit, they stuck to their guns and told the customers that yes, we know it’s inconvenient, but it’s really for the best, and you’ll appreciate it once the rebuild is done. Or perhaps they blamed it on us, telling the customers “We know it’s a pain, but engineering won’t budge. They’re determined to do this.” Either way, it happened, and we did get some more bug reports because of problems with the rebuilds, but for the most part, things went pretty well. That pain paid off the technical debt that we’d accumulated over the previous decade.

Of course, we’ve since released version 11, which added new stuff to the database file, and we’re working on version 12 which adds even more, so now some of those “if the database file has this capability, then do something, otherwise do something else” conditions are creeping back into the product. So far, there aren’t a ton of them, so our current interest payments are pretty low, but perhaps in five or six more versions we’ll have accumulated enough technical debt that we’ll have to bite the bullet and pay it off again.

IvanAnywhere on Space TV

Space TV interviewed my co-workers Glenn Paulley, Ian McHardy, and Ivan Bowman about IvanAnywhere a few weeks ago, and the results aired last Friday night on their show “The Circuit”. The piece is online: go here and click the link at the top that says “Ivan Anywhere, the robot telecommuter”. There is also a direct link to the video, but note that the link resizes your browser window. The bit about Ivan is about four minutes long, and starts a minute or so into the video.

I PVR’ed the show, but I’ll be damned if I can figure out how to copy it to my computer. I thought I could record it straight to my digital video camera, but the camera doesn’t have inputs, so I’d have to play the video and then actually record the TV screen with the camera. Video and audio quality would both suck, so I didn’t bother. Of course, even if I could get it in digital format, I couldn’t post it to YouTube or anything, since it’s copyrighted.