Thursday, December 22, 2011

Twelve South Compass iPad Stand

I recently had a chance to try out the Twelve South Compass iPad stand. I’ve had my eye on it for a long time but never had the opportunity to give it a test run for myself.

The Packaging

Unlike pretty much every other accessory maker, Twelve South totally nails it when it comes to packaging. They know that the packaging is the first taste of the product that the customer gets. Well designed, tasteful, easy to open packaging gives the customer a glimpse of how much effort they put into designing the product inside.

In the case of the Compass stand, the package consists of a box containing the stand encased in a plastic shell. The box doesn’t have a lid so the customer can see the stand inside the box through the outer plastic shell. There’s no tape to fuss with, just open the packaging and slide out the box. The Compass sits on a cardboard table in the box and is held in place with white elastic. The fact that they chose to use elastic instead of twist ties is a gift in and of itself.

Underneath the cardboard table there’s a slipcase. The slipcase is simple and speaks for itself. Under the slipcase is a spartan instruction manual with the sole words “Thank You” emblazoned on the front. The instruction manual is fairly short: just one page of simple directions for how to use the stand. At the end of the instructions is a whimsical section on some uses for the box that the stand came in.

These little touches: the easy to open packaging, well designed box, the full front page of the instructions dedicated to thanking the customer, and whimsical instructions aren’t frivolous. They all come together to give the customer a great first experience. Not just a great first experience with the product, but an introduction to the company that made it. They exhibit the ethos of the maker: its attention to detail, its gratitude towards customers, and its fun side.

But enough about the packaging; let’s talk about the stand.

The Stand

The Compass isn’t just an iPad stand, it’s a work of art. The stand is solidly built and feels great. It’s heavier than you’d think for something of its size. The brushed finish nicely matches the finish on Apple’s own hardware making the Compass feel at home with your iPad and MacBook Air.

When folded up the Compass is very compact; it’s small enough that you could fit it easily in your pocket or bag and hardly know it’s there — aside from the weight, that is. The small carrying pouch for the Compass is a nice touch; it protects not only the stand but anything else that might be rattling around with it in your bag.

The Compass has 2 different modes when unfolded. It can either prop the iPad up like an easel or lay it down at an incline, providing a nice typing surface. When using the Compass as an easel the iPad can sit in either landscape or portrait orientation. In addition, the legs that the iPad stands on are quite a bit longer than needed, which means the Compass will work with both the iPad 1 and 2, with or without a case. Both of these features make the Compass one of the most versatile stands I’ve seen. That versatility, coupled with its elegant design and compact size, makes for a very compelling and attractive stand.

Unfortunately, as compelling and attractive as the stand is, it has some problems. The main problem I found with the Compass is that the hinge on the back leg was extremely loose. When handling the stand I had to be careful or it would open up. Putting it in its case could sometimes be troublesome as well. But these are minor inconveniences compared to the biggest problem caused by the loose hinge: after some use, the Compass would fold itself up and collapse.

The rubberized feet on the bottom of the legs, the shallow angle of the back leg, and the loose hinge worked together to produce a perfect storm. Tapping around the center of the screen would cause the front 2 legs of the stand to lift off the table. With all of the force on the back leg and the hinge so loose, the leg would eventually work its way into a position where it folded up and the stand collapsed onto the table. If the hinge were tighter not only would this not be a problem, but the stand would also feel much higher quality.

I’m sure they had good reasons for leaving the hinge as loose as it is. But a tighter hinge isn’t the only solution. A pin or something similar that holds the back leg in one of two positions would do the trick.

Ignoring the collapsing issue, I’m not sure that the Compass provides a lot of value if you have an iPad 2 with a Smart Cover. The only thing that the Compass provides over the Smart Cover is the freedom to choose orientation — portrait or landscape.

Conclusions

The Compass stand looks and feels great, but it’s expensive for the benefit you may get out of it. If you have an iPad 2 you likely already have a Smart Cover, which provides more value for your money. Regardless, the fact that the stand collapses after moderate tapping is a major problem in my books.

Thursday, November 24, 2011

The iPad HDMI Connector

I've been fighting a bit of a cold for the last few days. With a baby and a mom that I don't want to give it to, I've been sleeping downstairs on our futon. It's actually surprisingly comfortable. But one thing that's been getting tedious over the past few nights is our selection of movies. I usually like to turn on a movie as I wind down for the night, but since our Apple TV with Netflix is upstairs ensconced in the bedroom I've been forced to watch DVDs. As I'm sure you can imagine, it's been a while since we've bought any of those, so our selection is pretty thin.

Seeing as I have a perfectly good iPad and HDMI connector I thought I'd give watching Netflix on the TV using the iPad a shot. The good news is that it works as advertised; you plug it all together, set the input on the TV, and voila: your Netflix movie is on the TV. The bad news is the decidedly un-Apple-like experience of using the thing.

To start, it's ugly, if utilitarian. It connects through the only multipurpose port on the device: the 30-pin connector. The connector is really wide and the HDMI port is offset to one side with a 30-pin connector next to it. This design makes the whole thing feel awkward and bigger than it is. A better design might have been to stack the HDMI and 30-pin ports vertically so that the two ends of the adapter are closer in size.

A thick HDMI cable has a mind of its own in terms of how it bends. Through its own force of will it bends the HDMI connector in all sorts of directions that seem like they'll either break the wires in the connector by twisting them or pull the connector out of the iPad by its weight. I found the contortions of the HDMI connector with a thick HDMI cable to be really worrying.

If I were trying to use the HDMI connector to mirror the iPad's display for a presentation it would certainly feel awkward. But these things don't really matter that much if you're just using the dongle as a way of getting the iPad's screen onto the TV to watch a movie, as I was. Once the connector and HDMI cable were connected and the iPad was lying down these issues weren't so bad. But then I ran into another problem: the device can't be asleep while playing, and the Smart Cover has to be open.

Once you close the Smart Cover or press the sleep/wake button the signal to the TV shuts off. I can see why this is done when you're running on battery; you want to conserve power, so if someone has put the iPad to sleep then don't waste power sending the display over the HDMI connector. I get that, but it's a horrible user experience in some situations. A better design would have been to just turn off the display if full-screen video is playing when the device is put to sleep and the HDMI connector is attached. Then, after the video stops playing, automatically sleep the whole device after a minute or two. Wouldn't it be better for battery life to allow the display to be off when the user is doing something non-interactive over the HDMI connector?

In any case, it did what I needed done, so for that I'm thankful. But there are a few design bugs that I think need to be worked out. For an extra $60 you can get an Apple TV that does display mirroring without having to physically attach your iPad to the TV, plus a whole lot more. Either way you go, the iPad has to be awake while the display is mirrored to the TV, which is an issue I hope gets addressed in a future release.

Monday, October 31, 2011

SuperDuper! Pro Tip

As much as I love my new MacBook Air, the transition to the new machine hasn't been completely seamless. Like any transition to a new computer, there are bound to be hiccups. One of these hiccups was around my backups. Since I take my backups very seriously, this was something that really bothered me until I was able to figure out what was going wrong.

After migrating all of my data to the new machine I made sure to run SuperDuper! (the best disk cloning software in the universe). It ran perfectly for quite some time but after a few weeks it started to fail by running out of disk space on the backup drive.

When I first partitioned my backup drive I made the clone partition the same size as the boot drive: 120GB. But the MacBook Air has a larger drive in it, double the size at 250GB. I've been careful to make sure that I don't use more than 120GB on the new machine, so surely the problem wasn't that I was using more space than I should be. Or at least that's what I thought.

It turns out that Lion comes with some enhancements to Time Machine. Under Lion, if your backup drive is not connected, Time Machine will continue to run. The backups are stored in /.MobileBackups -- a hidden folder at the root of the boot drive. Since this is effectively temporary storage, Lion doesn't report any of the space used by this directory in the Unix df (disk free) command or in the Finder. Essentially, Lion hides the fact that it's using this space at all. It can do this because it will automatically remove old local backups to make room for new files written to disk if need be.
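
If you're curious how much space those local backups are actually eating, you can ask from Terminal. A quick sketch -- the directory is owned by root, so sudo is required:

sudo du -sh /.MobileBackups

Lion also ships a tmutil command, and if you'd rather not have local snapshots at all, sudo tmutil disablelocal turns the feature off entirely.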

Since SuperDuper! didn't know this, it would attempt to copy this directory to the backup drive. Ordinarily that wouldn't be a problem since the clone drive is usually the same size as the drive being cloned. That wasn't the case for me, however. After I figured this out I created a new backup script and excluded the /.MobileBackups directory. After applying the changes I haven't had any problems.

So, if SuperDuper! is complaining about running out of disk space when running on Lion, you might want to either:

  1. ensure that the drive you're cloning from is the same size as the drive you're cloning to; or
  2. exclude /.MobileBackups from your SuperDuper! backup script.

Thursday, October 27, 2011

MacBook Air

A few months back I bought a new MacBook Air. I absolutely love it, it's the best computer I've ever owned. I'm pretty sure I say that about every new computer I get, but this one is head and shoulders above the rest.

Upgrades in the past used to be just incremental updates to the processor, memory, and hard drive. This machine has all of that, but what's amazing is how much faster it feels than every other upgrade I've done. It's really no surprise when you think about it.

There are really only 2 main driving forces that relate to the performance of any application: its use of the processor, and its use of the hard drive1. Yes, I'm simplifying here, but typically when you want to optimize a program you need to look at those 2 things.

When a program uses a lot of processor cycles it's said to be CPU-bound. That is, the program's performance is bound by the speed of the processor. Conversely, when a program reads and writes to the hard drive a lot it's said to be I/O-bound; the performance of the program is bound by how quickly it can read and write data. Very rarely will you ever see a program that's solely CPU-bound or I/O-bound. Usually different parts of every program have different performance characteristics.
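
You can get a rough feel for the difference with the time command. This is just a sketch, not a rigorous benchmark: the "user" and "sys" numbers are time spent on the CPU, and whatever is left of "real" is time spent waiting, usually on the disk.

time perl -e '$x += sqrt($_) for 1..10_000_000'
time du -s /usr 2> /dev/null

The first is almost entirely CPU-bound, so real comes out close to user + sys. The second walks thousands of files on disk, so on the first run (before the filesystem cache warms up) real is noticeably larger than user + sys.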

So why is it no surprise that the MacBook Air feels so much faster than any other computer I've ever owned? The MacBook Air has a solid state drive. Solid state drives are a new class of storage media that optimize for speed while sacrificing total size. To give you an example, you can buy a 256GB SSD for roughly the same price as a 4TB spinning rust2 drive. But the speed of these drives is amazing due to how they work. SSDs have more-or-less direct access to any piece of information on the drive. Traditional hard drives have to wait for a spinning platter to come within range of a little arm that picks the data off the drive. There are physics involved here; traditional hard drives will never be as fast as an SSD, but if you need a lot of cheap storage space they're still hard to beat.

You can get an SSD for pretty much any computer, but at this point they're a fairly expensive upgrade. It's been said before that installing an SSD is just like getting a new computer. And while my MacBook Air is certainly a new computer, it feels amazing every time I use it. It's fast. Faster than anything I've ever used before, and that's mostly thanks to the SSD.

There's a lot more that can be said about the MacBook Air. It's simple, tossing out things that most people rarely need anymore like DVD drives, FireWire, and extra USB ports. It's ultra lightweight, something that's really nice to have regardless of whether you care about the weight or not. The choices you have to make are pretty minimal; pick a screen size, processor and memory. The price is very reasonable for a premium product. The fit and finish is excellent. All told, the MacBook Air truly is the Volkscomputer.

  1. Yes, I'm simplifying quite a bit here. Performance can also be affected by how much parallel computation a program can do, the layout of the program in memory so that it makes efficient use of the CPU's caches, etc.
  2. The term spinning rust refers to the fact that traditional mechanical hard drives contain rust colored platters that spin. Wikipedia has a great article about how these types of drives work.

Sunday, October 2, 2011

5 Things You Can Do to Ensure Safety of Your Data and Recoverability of Your Computer

In my previous article I described some of the problems with the approach most people use to secure their data. The problems were all essentially the same: security theatre. In this article I'll outline 5 things that you can do to ensure that your data is safe and increase the likelihood that you'll be able to get your computer returned.

The advice in this column isn't meant to be prescriptive. Instead, read through the suggestions and make sure that they make sense to you, and for your situation. If you have backups and don't need to be bothered with ensuring the returnability of your computer, by all means tighten your machine down. If you're like the rest of us, read on.

1. Backup Your Data -- Offsite

This one's a no-brainer. Backing up your data is one thing, but making sure that you have a good copy of it off site is another. If someone breaks into your house a backup isn't going to be much good to you if it's sitting on the external hard drive conveniently located next to your computer. It doesn't matter if you use one of those automated off site backup solutions like Backblaze or Carbonite, or if you use an old fashioned sneakernet like me. Just make sure you have a recent copy of your data off site.

2. Make Your Computer as Inviting as Possible

If you're used to a higher level of security, this tip might not make a ton of sense. It's true, your computer will be wide open if you do this. While you may want to lock down your computer for the most part, in order to ensure the safe return of your computer you'll want to make it as easy and inviting as possible for a thief to use. If you make it too difficult either they'll never use it, or they'll find someone to wipe it clean so that they can start fresh. If they do the latter you'll never see your data again.

So what do I mean by "make your computer as inviting as possible"? I mean that you should:

  1. Set up your account to automatically log in.
  2. Remove a power-on password
  3. Remove disk encryption

By doing these things you'll ensure that anyone that sits down at your computer will be able to use it for whatever purposes they want. It also means that everything on your computer will be wide open to anyone that wants access to it. To fix that you're going to want to:

  1. Lock your keychain
  2. Use encrypted disk images

3. Lock your Keychain

The Keychain Access application on the Mac is the unsung hero of password management. Applications use it to store credentials for web sites you go to and services you use. The Finder uses it to store passwords for remote file shares, logins for wireless access points, etc. By default the password to unlock your keychain is synchronized with your login password, and the Keychain remains unlocked while you're logged in. These defaults optimize for user experience, not necessarily security. But hey, at least these can be configured.

To change these settings you're going to want to open the Keychain Access application and open its preferences. From the preferences window select the First Aid tab and uncheck the last 2 checkboxes: Set login keychain as default, and Keep login keychain unlocked.

With these settings changed, you will need to enter your password whenever an application wants to access some data within the keychain. This will certainly be more annoying than the defaults, but your passwords and anything else stored in the keychain will remain safe should your computer fall into the wrong hands.
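
If you prefer the command line, the security tool that ships with Mac OS X can do the same sort of thing. As a rough sketch (the path is the default location of the login keychain, and the timeout flag is from memory, so double-check the man page):

security lock-keychain ~/Library/Keychains/login.keychain
security set-keychain-settings -u -t 300 ~/Library/Keychains/login.keychain

The first command locks the login keychain immediately; the second tells it to lock itself again after 5 minutes (300 seconds) of inactivity.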

4. Use Encrypted Disk Images

As I discussed in a previous post, encrypting your entire hard disk is a one-way street. Your data will be safe if your computer gets lost or stolen, but it also means that the computer is completely useless to anyone that finds it. But what if you have sensitive data on your computer? Clearly you want that data to be secure; you just don't want blanket security across the entire hard drive. That's where encrypted disk images come in.

Disk Utility will allow you to create encrypted disk images to store any sensitive data. You can make them virtually any size you want, and use either 128- or 256-bit AES encryption. As of later releases of Mac OS X you can also use a sparse image format. Sparse formats allow you to create a disk of virtually any size, but it will only take up as much physical space on disk as the files that are contained within it. For example, if you had a 500MB sparse image but only put 50MB of data in it, the image on disk would only be about 50MB. The best format to use is a whole other discussion. But for our purposes it doesn't matter which one you pick, just make sure it's encrypted.
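
Disk Utility is the friendly way to create one, but the same image can be built from Terminal with hdiutil. Something along these lines -- the size, volume name, and filename are just placeholders:

hdiutil create -size 500m -type SPARSE -fs HFS+ -encryption AES-256 -volname Sensitive Sensitive.sparseimage

You'll be prompted for a passphrase, and the resulting image can be opened later with hdiutil attach Sensitive.sparseimage or by double-clicking it in the Finder.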

Once you have an encrypted disk image you can then store all of your files within the image. Images can be configured to be mounted automatically upon login by adding them as a login item, but if you don't need to access those files very frequently, it's best to leave the images unmounted until they're needed.

With all of your sensitive data stored in encrypted disk images you can be assured that your data will be safe if your computer gets lost or stolen.

5. Install a Snooping Tool

Lastly, to have any hope of getting a stolen computer back, your best bet is to install a snooping tool. These tools take screen shots and pictures with a computer's camera, report location and IP information, and do many other things to snoop on a thief or help get your computer back. An excellent and free tool that does this is Prey.

Once installed, Prey sits idle until you log into the web site to report your computer as lost or stolen. From there you can configure it to snoop on the thief at selected intervals. Using the information gathered by Prey, and with the help of the police, many people have been able to retrieve their stolen computers.

Sunday, September 25, 2011

Safety and Security

It occurred to me recently that if someone were to steal a computer there are really 2 things the victim needs to think about: the privacy of their data, and whether or not they'll be able to get their computer back. These goals aren't entirely opposed, but it does require a little bit of work in order to have it both ways.

There are lots of choices available to ensure the privacy of your data. You can use a power-on password to prevent the machine from being booted without the correct password. This is a fairly good trade-off between hassle and protection, but it does have some serious flaws. A power-on password doesn't actually do anything more than the name implies. The data on the disk is still unencrypted. If someone wants to get at the data, they can simply put the hard drive in another machine and have unrestricted access to anything on the disk. A power-on password will just make the machine less valuable to the average thief, but they won't know that until after they've taken it.

Another option is to leave the machine without a power-on password but instead password protect your account. In this scenario, the computer will boot into the OS, but will not allow anyone to use it without first logging in. The only real difference between account passwords and power-on passwords is when the password is required. Account passwords have a side benefit in that if you forget your password you can re-install the OS and retain all of the user files from the previous install. Your data still isn't safe; someone can put the hard drive into another machine to get access to your information. But it does make the machine slightly less valuable. And as with power-on passwords, they won't know until after they've taken it.

These solutions are all privacy theatre; they appear to protect your data when in actuality your data is still unsafe, just more difficult to access. As any security expert1 will tell you, security through obscurity is not security. The only true way to protect your data is to encrypt it. Most operating systems support some form of full disk encryption, whether it's built into the operating system, a la Mac OS X Lion, or provided by a 3rd party like PGP. Full disk encryption does what the name implies: it encrypts the entire contents of the disk. If someone were to take the encrypted disk from one computer and put it into another, the contents of the disk would still be inaccessible without a password. This is about your only option in terms of ensuring the privacy of your data.

But this is all for naught if you need your data and your computer gets stolen. It doesn't matter how private your data is if you'll never see it again2. If you've got a backup strategy then at least your data will be safe, but computers aren't cheap and if you're like me, you'd like to get it back3. So what can you do to protect your data and help to ensure that you get your computer back if it gets stolen?

That's a good question, worthy of its own post. Come back next week to find out.

  1. Is there really such a thing as a security expert?
  2. It does matter since all of your secrets will remain secrets. But if you're storing that data it's likely that you need it.
  3. Really, I'm not sure I'd want it back.

Saturday, August 13, 2011

Starting a New Project

I'm between personal projects right now. I don't have any ideas for interesting applications or libraries to write, so I've been doing a lot of thinking. I've come to the conclusion that when starting a new project, there's simply too much room for choice. I don't think I'm alone in feeling this way, but I'm convinced that it's not a problem most developers experience.

There's only one reason anyone starts a new project: to solve a problem. The average developer doesn't need to look much further than their known solution domain to start their new project. They have one language and some libraries that they're comfortable with and they forge the solution using them, not pausing to think about whether the tools are the right fit for the solution.

Thinking about whether the tools are right for the job is something that more senior developers tend to consider. They tend to think about their craft in a way that's different from other developers. Through experience they've developed a spidey-sense about how well tools and solutions work together, and when that spidey-sense starts tingling they can look back on their experience to find a set of tools that will work well for the solution, or at least something that they know will work.

The problem I think I'm running into is that, while I think I've developed that spidey-sense, I'm not satisfied with any of the tools I have for something so personal as a project to call my own. Sure, I can look at my experience and pull out some tools to get the job done, but I also want to enjoy the experience. So what's a guy to do when he's unhappy with all of the tools he has? Start learning how to use some new ones.

When I speak of tools I don't mean just frameworks and libraries; languages are tools too. They define what you can do and how you can do it. None of the languages I know feel right for me. They're all too constricting, or inconsistent, or too new, or too complicated, etc. As for frameworks and libraries: there are just too many of them, and most of them have the same afflictions. Supposing that I do find a language and framework that I like, there's always the nagging question of where the holes are. Usually these tools were written for a specific need. Once that need was addressed they weren't developed further, and therefore there will be things that I want to do that either just aren't possible or are only possible with herculean effort. Neither of these is what I would consider a good time.

Unfortunately, as near as I can tell I have 2 choices. Either go with the status quo or blindly forge ahead into the unknown with tools and languages that I don't know. The former, while boring, is less frustrating. I know I'll be able to accomplish whatever I want to accomplish. I may not have the best time doing it, and I probably won't learn much of anything. But at least I'll have produced what I set out to produce. The latter offers no guarantees. I might succeed in accomplishing what I set out to do, or I might not. I might enjoy the experience, or I might not. At least with the latter I'll learn something new, which definitely counts for something.

So where do I go from here? While waiting for an idea to pop into my head I've started going through the Ruby Koans. After that I'll have to take a look at Scala. Maybe inspiration will strike, maybe it won't. But at least I'll have a deeper pool of knowledge to pull from when inspiration does strike.

Wednesday, May 11, 2011

The JavaScript Curse

There are 2 fundamentally opposing approaches to programming language design: large and small. Large languages have lots of features, most of which will include some sort of syntax. As the language gets larger, so too does the syntax. This means that there can be a lot of memorization involved for the programmer. In extreme cases (C++) no one really uses the entire language. Instead everyone uses their own subset that they’re comfortable with. One nice thing about large languages is that they have codified a way of performing certain tasks. For example: there is a single way to define a class. There are only a handful of ways to loop over a list or branch.

Small languages by contrast are very spartan in terms of their syntax, but this by no means affects what the language is capable of. Small language designers tend to pull in very few, but very powerful, features that can be combined in numerous ways. Since the language is so small everyone tends to use all of it. However, there is such a thing as too much power. While larger languages have a chosen and blessed way of performing certain fundamental tasks, small languages do not. Instead each developer comes up with their own way of doing things. This is the crux of the Lisp Curse.

Lisp is so powerful that problems which are technical issues in other programming languages are social issues in Lisp.

I posit that this same curse affects JavaScript; however, the penalty for the curse isn’t nearly as high in JavaScript as it is in Lisp. Before I explain why I think that, let me first try to convince you that JavaScript is also affected by the curse.

Classes vs Prototypes

JavaScript took a different approach when it came to designing its object system. Unlike most other object-oriented languages, JavaScript uses prototypal inheritance instead of the more common class-based model. We could argue until we’re blue in the face about which is better or more expressive, but that’s not the point. Developers weren’t happy with this model. And so, since the language is extremely malleable and the effort required to implement class-based inheritance is minimal, everyone and their dog rolled their own.

Now we have a fractured situation. Every JavaScript library has its own prescriptive way of doing inheritance. Learning a new library is no longer about learning an API; now you have to learn an object system and its library-specific syntax too. Granted, all of the different syntaxes are made of the common building blocks present in JavaScript, but how they’re assembled is generally quite different. The issue of how-do-I-write-a-class is now a social issue since how you do it depends on the library you or your company decides to work with.

Concurrency

JavaScript is a single-threaded language. This means there is only ever a single thread of JavaScript code running at a time. Some might consider this a good thing; I know I do. It makes JavaScript programs easier to reason about since you don’t have to worry about deadlocks, race conditions or much of the complexities of writing a multi-threaded program. It also means that the language needs a strategy for performing long running operations “in the background.”

In the context of a web browser, a typical long running operation would be using the XMLHttpRequest object to make an HTTP request. Due to network conditions or response size the amount of time it takes to make a request and get a response varies. If this were done in a single thread as a blocking call the JavaScript thread would pause until the response was received. This in turn means that the browser’s UI thread would block as well. Effectively it would look to the user as if the browser had hung. The solution to this problem is to perform the request asynchronously (hence the first A in AJAX – Asynchronous JavaScript And XML).

Asynchronous requests work by registering a callback and performing the operation in another OS thread. Given the restrictions of the language, this implies that the asynchronous call is done by a native component (XMLHttpRequest). When the operation is complete or something interesting happens, a call to the callback is issued. This more or less works well. But cracks start to appear when you need to chain asynchronous operations together or perform several operations concurrently and proceed when they all complete.

In my experience this is something that happens quite often when writing programs in node.js. At least, when you’re doing asynchronous I/O with node.js. The language itself gives you no help at all. Instead you need to either handle the complexity yourself or use a library that someone else built. While this is not as trivial as implementing classes, it’s a fairly well understood problem and a library can be written in fairly short order. As with classes, this issue, which should be addressed at the language level, ends up being addressed at the library level, which in turn becomes a social issue.

The Way Forward

Hopefully I’ve convinced you that JavaScript has at least a few problems. The language is extremely powerful in terms of its capabilities. But some of the fundamental building blocks that are needed or expected aren’t present. And due to the power of the language, developers can and do go off and write the pieces that they’re missing.

This effort at the library level, while fun and impressive, is ultimately not productive. Everybody either writes their own library or uses an existing one, leading to fragmentation. Since there are hundreds of different libraries for doing something that should have been provided at the language level, the language suffers.

This is similar to the Lisp Curse in that the power of the language enables developers to do this sort of thing. The language can be tailored to the needs of the developer or software under development. While this was a curse for Lisp which led to its niche status, the same fate will not befall JavaScript.

There were several issues that led to Lisp’s downfall, but the biggest one, aside from having too much power, was the ability for developers to switch platforms. Developers could retreat to another language that better suited their needs. They could do this because Lisp works at the machine level. Programs developed in Lisp are compiled directly into machine code. This effectively puts Lisp on the same footing as C, C++, and other native languages. With JavaScript this isn’t the case.

JavaScript is the native language of the web. To compare, writing in JavaScript is akin to writing in assembler. There isn’t a level below JavaScript that a web developer can write to. It goes without saying that JavaScript as a language is fairly stagnant. Sure, the mailing lists are quite active, and Mozilla is incorporating new features from the standards body. But ultimately there is a several-year delay on the widespread use of these language features. Older browsers that are still in use and supported don’t get these updates. As a result, developers tend to aim for the lowest common denominator. In order to get a more modern language there is only one way to go: up.

Languages and tools have started to emerge over the past few years that allow developers to write code for the web in languages other than JavaScript. Google Web Toolkit, CoffeeScript, and Objective-J are a few examples. These languages treat JavaScript as their target, much like a traditional compiler would treat assembler or machine code. But why are these new languages cropping up instead of enhancements to JavaScript? I think it goes without saying.

So while the Lisp Curse was fatal to Lisp, for JavaScript it is simply a nuisance that most developers have to deal with. For a few, there are languages that compile down into JavaScript and can provide the missing features that we’ve all come to expect from a modern language. But, as I’ll discuss in a later post, these present their own issues.

Thursday, March 10, 2011

Xcode 4 Features Single Window Interface, New Price Tag

Yesterday afternoon I noticed a big storm of tweets about Xcode 4. One in particular caught my interest:

@JimRoepcke Can you imagine being a Mac App Store Reviewer, looking at your queue and seeing Xcode 4?

At first I figured it was just a thought experiment, but a few minutes later I started seeing more tweets about Xcode 4 and the Mac App Store. Since the Developer Preview for Lion was sent out through the store I put two and two together and figured that Xcode 4 had finally been released. What I found out shortly thereafter is that you now have to pay for it.

That's right, Apple now charges for its developer tools. The whopping 4.6GB download (that's giga, with a G) costs $4.99. Naturally, this set Reddit, Hacker News and my Twitter feed on fire.

Now, I'm not opposed to paying for developer tools. Back in the day I used to do a lot of development on Windows. I went through several versions of Delphi (best language evar) and numerous editions of Visual Studio. The price though, $4.99? What's that about?

After a few minutes of thought I tweeted that the price seemed too low to offset the bandwidth costs. But bandwidth costs aren't what this is about. Neither is offsetting the cost of development for that matter. Remember a few years back when iPod Touch owners were charged a nominal fee to upgrade to iOS 3.0? After some thought I'm guessing that the same thing is happening again. And like iOS on the iPod Touch, later versions of Xcode will probably not cost anything.

Putting the precedent of iOS 3.0 aside, Apple has come out on several occasions and touted that they give away all the same tools to build apps that they themselves use. This has been a big point for them in the past and it's unlikely that they'd want to make a change now. Granted, Apple has been known to change its mind and make decisions that are in its best interests. But $4.99 is too low of a price to charge if they're planning on breaking even. Especially when you look at the fact that Google gives away all of the development tools for Android -- a platform that currently has more market share.

No, I'm pretty sure that this isn't a money grab. It's probably just a blip required by the bean counters.

Tuesday, March 8, 2011

My Backup Strategy

Both my wife and I have suffered through a catastrophic hard drive failure in the past. Neither of us really enjoyed losing irreplaceable photos or spending countless hours trying to piece back together our digital lives. Since then, we've learned our lesson and taken a few steps to reduce the likelihood of ever going through that again. So what exactly did we do? In short, we've started applying the 3-2-1 backup rule:

We recommend keeping 3 copies of any important file (a primary and two backups). We recommend having the files on 2 different media types (such as hard drive and optical media), to protect against different types of hazards. 1 copy should be stored offsite (or at least offline).

The Hardware

Recently I bought two 1TB hard drives. I opted for the Western Digital Caviar Green. They offered a decent amount of space and performance for the price. An added benefit of these drives is that they're fairly quiet and turn themselves off when they aren't in use. For drives that are only in use sporadically throughout the day these were perfect.

To hold these drives, I first considered using a SATA drive dock. However, being a Mac user I prefer the things on my desk to be well designed and look great, and I had a really hard time finding a dock that met my standards. Design concerns aside, I've got a few cats that like to walk all over my desk; I'm not sure that having exposed electronics around would be such a good idea. Moreover, I also needed a safe place to store the drive that wasn't in use. Hard cases do exist, but NCIX, the place I ordered my backup stuff from, didn't sell any -- yet another nail in the SATA dock coffin.

Instead, I opted to get a few enclosures by Macally -- the G-S350SUAB to be precise. These enclosures look just like tiny Mac Pro towers. Being made of aluminum they don't require any fans to keep cool, which keeps them quiet, and since the tolerances are fairly tight they don't rattle when the drive spins up.

The Software

I sliced up each of the drives into 3 partitions: a 120GB, 240GB, and 640GB.
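
For what it's worth, the slicing can also be done from Terminal with diskutil instead of Disk Utility. A rough sketch, assuming the external drive shows up as disk2 (check with diskutil list first -- the wrong identifier will happily erase the wrong drive):

diskutil partitionDisk disk2 3 GPT JHFS+ Clone 120G JHFS+ TimeMachine 240G JHFS+ Archive R

JHFS+ is journaled HFS+, and the trailing R gives the last partition whatever space remains on the drive.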

The 120GB partition is the same size as the internal drive in my MacBook Pro and is used as a clone of the internal drive -- cloned with SuperDuper!, an awesome tool for cloning drives on the Mac. Having a clone means that I don't need to go to the trouble of replacing the internal drive, installing the OS, and restoring data immediately. All I need to do is reboot off of the clone and I'm up and running with a fairly recent backup. However, just having a clone isn't enough. Typically cloning takes a long time to run and therefore is done less often. In my case it's done nightly, but sometimes I'll go a few days without running it.

With the 240GB partition I use Time Machine. Every hour or so Time Machine will make an incremental backup of what's on my Mac. Having this partition be larger than the internal drive means that I can keep several revisions of files in case I need to restore old versions or deleted files. Incremental backup decreases the mean time between backups. I can't boot off of the Time Machine backup, but the number of files changed since the last clone will probably be small and can be restored to the clone if need be.

The last, largest partition stores archived data -- photos, music, old projects, etc. Stuff that I don't need to work with regularly. The archived files on this partition are cloned from a Time Capsule I have running on the network (2 copies of everything, remember).

In order to get the offline side of the 3-2-1 backup I swap out the hard drives once a week. If the house were to burn down, or someone stole everything, the most I'd be out is a week's worth of work. And since most of my work is stored in Dropbox anyway it's likely that I'd lose less than that.

Weaknesses and Pain Points

So far this strategy is working fairly well. It can be a pain to have to swap the drives, but since I only have to do this once a week the pain is tolerable. I could have opted for an online storage system like Carbonite or Crashplan but decided that I wanted to do the whole thing myself without worrying about long restore operations, monthly fees, or feeling socially obligated to host someone else's backup. This does mean, however, that if there were an earthquake and the city was levelled I'd lose my data, but I'm pretty sure that my data would be the last thing I'd be thinking of if that were to happen.

Time Machine automatically remembers some identifying information about the drive used as the backup drive. When I do my weekly swap I have to force Time Machine to do an initial backup. This prompts Time Machine to warn me that I might be backing up to another drive, and it performs a fairly long scan and backup on that first run. It doesn't back up every file; it's still smart about backing up just the files that have changed, so it's not as bad as it could be.

Conclusion

In the end I'm fairly happy with my backup plan. It's not perfect, but I feel safe knowing that my data is well protected and that the chance of me losing all of my and my family's important data is low.

Sunday, February 13, 2011

Scripting Inkscape

tl;dr

$INKSCAPE/Contents/Resources/bin/inkscape in.svg -e out.png -D -h 100 -w 100

SVGs, like all other vector graphics formats, are scalable and work well for any sort of graphic that needs to look good at different resolutions. So if you have an image that needs to be displayed at many different sizes, vector graphics are a good bet.

Vector graphics take much more overhead to render than raster images do. Since raster graphics are simply an array of bytes in memory that represent each pixel of the image, they are relatively simple for a computer to perform operations on. Vector graphics on the other hand are usually stored as a series of drawing operations: draw a rectangle, fill it with a gradient, clip it with a circle, etc. Because these operations have to be executed in order, sometimes drawing over the same pixels a few times, they can be slower. Hence, vector graphics applications such as Inkscape have a rasterizer in them that takes the SVG and converts it into a raster image. You might think of a vector graphic as a recipe, or the source code, for a raster graphic.

Using Inkscape to rasterize images can be a chore since you have to manually adjust the size and filename of the output graphic for each different size that you want to generate. Luckily Inkscape includes a command-line rasterizer that you can script!

On the Mac, the actual executable is within the Inkscape.app bundle in Contents/Resources/bin/inkscape. You can invoke it from the terminal with options to control the height and width of the output graphic and its name, as well as a few other things. Running inkscape --help will list all of the available options. The ones we're concerned with are: -e, -C/-D, -h, and -w. -D is used when you want to export the drawing and -C when you want to export the entire page. The difference between the two is that exporting the page will export everything, including white space around your drawing, while exporting the drawing will trigger Inkscape to find the smallest rectangle that includes everything you've drawn (which might be smaller than the page) and export that.

To export the drawing contained in "Scenery.svg" to a 100x100 PNG called "Scenery.png" you would use the following command in Terminal:

$INKSCAPE/Contents/Resources/bin/inkscape Scenery.svg -e Scenery.png -D -h 100 -w 100

Where $INKSCAPE is the location of Inkscape.app.

Since rasterizing SVGs is something you might be doing frequently you can always write a script to run the above command with different sizes and output names. If you want to get fancy you could even automate the process using Make or another build tool.
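
As a starting point, a tiny shell script along these lines will spit out several sizes in one go (assuming, as above, that $INKSCAPE points at Inkscape.app):

for size in 32 64 128 256 512; do
  "$INKSCAPE/Contents/Resources/bin/inkscape" Scenery.svg -e "Scenery-$size.png" -D -h $size -w $size
done

From there it's a short hop to a Makefile rule that only re-exports the PNGs when the SVG changes.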