Burned by Chrome

Last night I was adding a new wifi device to my home network.  One of the security features I make use of is MAC address filtering (and yes, I know a motivated attacker can spoof a MAC address easily enough, but there are many easier targets in my neighbourhood).  My primary router is a Linksys WRT54GL running DD-WRT.  If you have one of the Linksys WRT54G* routers, my experience is that the stock firmware is terrible – you should run either DD-WRT or Tomato, and if you insist on staying stock, at least get the latest version from Linksys.

I’ve been playing with Google Chrome a little bit.  The various “run it under Linux” solutions simply don’t work well enough to bother with, so until Google gets a Linux version going I’ve been playing with it under Windows.  There are some things I like about Chrome: the ability to tear off a tab into a new window, the omnibar, and the process-per-tab model is pretty nifty.  On the downside are plug-in stability (Adobe Reader really sucked in the first release, better in the update), general stability (I’ve crashed it a few times) and compatibility (some websites don’t recognize it).  I’m not ready to make it my primary browser, but I do use it regularly.

In order to edit the MAC filter list, I needed to navigate some of the DD-WRT administration pages and make some changes.  I was surprised to get an error message from Chrome when trying to save my changes; it turns out I was hitting a bug that was only very recently fixed.  So, no worries – I just switched to Firefox and redid the change.  However, to my dismay, it seems that the partial commit from Chrome caused my router to completely clear its nvram (in all honesty I can’t prove this, but the sequence of events makes sense).

DD-WRT has been very stable for me for quite some time.  I was running v23 SP2 and had looked at the newer v24 version since it had some neat bandwidth monitoring features, but had decided to hold off since migrating my configuration would be a bunch of work (and a risk if I got it wrong).  Lucky for me, I had previously taken a dump of the nvram by logging into the router via telnet and issuing a “nvram show” command; having this dump gave me a reference to help move my config over.

Faced with the complete loss of my configuration, I figured it was as good a time as any to upgrade to the latest version (v24 SP1).  The upgrade went smoothly, and the new firmware looks really slick.  In a way I’m sort of glad that I finally got around to doing it.

FrankenPod

This is the follow-up post to my tale of two iPods.  Tonight I used the ‘extra hour’ to swap the logic board from the ‘used’ iPod I picked up a week ago into my busted one.

The first step was to take the used iPod apart.  By starting with the used one, I could learn how to do it before taking mine apart. I knew the used one was a little beat up, but I was still surprised by how dirty it was inside as well (mostly lint).  Below is a picture of it fully disassembled.

I found a pretty good .pdf file on powerbookmedic.com that documented the teardown, but I also had to refer to the ifixit.com site to get a better idea of how some of the cable clips worked.  Now that I’ve got hands-on experience doing it, the process is pretty straightforward.  Even cracking the iPod apart is quite easy now.  I do need to point out how insanely small those six screws are.

After taking everything apart, I wanted to verify that the donor logic board was working. In the picture below you should be able to see the “Please Wait.  Very Low Battery.” message.  When I tried this with my non-working logic board, the screen did not light up at all.

Once I had both iPods completely disassembled, I performed the logic board swap and began to re-assemble my (hopefully working) iPod.  The reason for going to this extreme is that the used iPod was pretty beat up, and the only part I wanted to take from it was the logic board.  In the picture below, I’ve swapped the logic board and reassembled the click-wheel and screen into the front panel.

From this point it was only a matter of minutes before I had a completely assembled iPod and was able to connect it to my PC.  Again I was greeted by the ‘very low battery’ message and a pretty long wait – long enough that I was starting to think I had done something wrong.

I was relieved when enough juice had made it into the battery and my Ubuntu system recognized the device.  From that point on it was pretty smooth sailing.  I booted up the VMware image of Windows XP that I use for iTunes and there were no problems connecting and synchronizing the iPod.  I had wondered if the logic board would be tied to my serial number, but apparently the data on the drive alone defines the iPod.

I then proceeded to reassemble the used iPod with the bad logic board.  While I will likely sell this used one for parts, I figure it may as well be in one piece instead of a jumble of parts.  Once I did this, I was surprised to hear whirring coming from the device (it sounded like the hard drive spinning).  It was unresponsive to the reboot sequence (menu + center) and continued to whirr away.  I figure this explains why my iPod had such a flat battery: clearly, when the logic board failed it went into this mode and drained the battery completely.

A few minutes later I realized that the used iPod was getting warm to the touch.  This was a little bit alarming, so I popped the cover off and disconnected the battery.  The battery was quite hot; clearly some unexpected load was being drawn by the bad logic board.

At this point it looks like things went as planned.  I was able to transplant the logic board from the pretty beat up used iPod into my “like new” one.  I also have pretty good evidence to back up my guess that the logic board was the problem.  And I was able to do it for less than a refurb nano would have cost me.

I may still use the shuffle since it is so easy to carry around, but I’ll certainly appreciate the video capability on my next boring plane ride.


Java Performance in 64bit land

If you were buying a new car and your primary goal was performance, or more specifically raw power, then given the choice between a 4 cylinder and an 8 cylinder engine, the choice is obvious: bigger is better. Generally when we look at computers the same applies, or at least that is how the products are marketed. Thus a 64bit system should outperform a 32bit system, in the same way that a quad core system should be faster than a dual core.

Of course, what a lot of the world is only starting to understand is that more isn’t always better when it comes to computers. When dealing with multiple CPUs, you’ve got to find something useful for those extra processing units to do. Sometimes your workload is fundamentally single-threaded and you have to let all those other cores sit idle.

The 32bit vs. 64bit distinction is a bit more subtle. The x86-64 architecture adds not only bigger registers to the x86 architecture, but more of them. Generally this translates to better performance in benchmarks (as having more registers allows the compilers to create better machine code). Unfortunately, until recently, moving from a 32bit Java to a 64bit Java meant taking a performance hit.

When we go looking at Java performance, there are really two areas of the runtime that matter: the JIT and the GC. The job of the JIT is to make the code that is running execute as fast as possible. The GC is designed to take as little time away from the execution of code as possible (while still managing memory). Thus Java performance is all about making the JIT generate better code (more registers help), and reducing the time the GC needs to manage memory (bigger pointers make this harder).

J9 was originally designed for 32bit systems, and this influenced some of the early decisions we made in the code base. Years earlier I had spent some time trying to get our Smalltalk VM running on a PowerPC system in 64bit mode, and had reached the conclusion that the most straightforward solution was simply to make all of the data structures (objects) twice as big to handle the 64bit pointers. With J9 development (circa 2001), one of the first 64bit systems we got our hands on was a DEC Alpha, so we applied the same straightforward ‘fattening’ solution, allowing a common code base to support both 32bit and 64bit.

A 64bit CPU will have a wide data bus, but recall that this same 64bit CPU can run 32bit code as well, and it still has that big wide data bus to move things around with. So with our 64bit solution of letting the data be twice as big, we’re actually at a disadvantage relative to 32bit code on the same hardware. This isn’t a problem unique to J9, or even to Java – all 64bit programs need to address this data expansion. It turns out that the dynamics of the Java language just tend to make this a more acute problem, as Java programs tend to be all about creating and manipulating objects (aka data structures).
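
As a rough illustration (the exact numbers vary by JVM, so treat these as ballpark figures), consider a hypothetical class whose instance data is almost entirely references:

    // Hypothetical example: a node made up almost entirely of references.
    class TreeNode {
        TreeNode left;    // 4 bytes on a 32bit JVM, 8 bytes on a plain 64bit JVM
        TreeNode right;   // same again
        Object   payload; // and again
        int      key;     // 4 bytes either way
    }

The object header grows on 64bit too, so a heap full of nodes like this can come close to doubling in size once the pointers are fattened: more bytes to move over that bus, and more memory for the GC to manage.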

The solution to this performance issue is to be smarter about the data structures. This is exactly what we did in the IBM Java6 JDK with the compressed references feature. We can play tricks (and not get caught) because the user (the Java programmer) doesn’t know the internal representation of Java objects.

The trade-off is that by storing less information in the object, we limit the total amount of memory that can be used by the JVM. This is currently an acceptable solution, as computer memory sizes are nowhere near the full 64bit address range. We only use 32 bits to store pointers, and take advantage of 8 byte aligned objects to get a few free bits (the real address is the stored value shifted left by 3). Thus the IBM Java6 JDK using compressed references (-Xcompressedrefs) can address up to 32GB of heap.
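
To make the arithmetic concrete, here is a minimal sketch of the shift trick in plain Java (purely illustrative; it is not the actual JVM code, which also has to deal with details like a heap base offset and null references):

    // Illustrative only: objects are 8 byte aligned, so the low 3 bits of any
    // object address are always zero and don't need to be stored.
    public class CompressedRefSketch {
        // "Compress": drop the 3 alignment bits so the address fits in 32 bits.
        // This works for any address below 2^35 bytes (32GB).
        static int compress(long address) {
            return (int) (address >>> 3);
        }

        // "Decompress": widen back to 64 bits and restore the alignment bits.
        static long decompress(int ref) {
            return (ref & 0xFFFFFFFFL) << 3;
        }

        public static void main(String[] args) {
            long address = 0x7FFFFFFF8L; // 8 bytes short of the 32GB limit
            int compressed = compress(address);
            System.out.printf("%x -> %x -> %x%n", address, compressed, decompress(compressed));
        }
    }

With 32 stored bits plus the 3 free alignment bits, the largest encodable address is 2^35 bytes, which is exactly where the 32GB figure comes from.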

We’re not the only ones doing this trick: Oracle/BEA have the -XXcompressedRefs option and Sun has the -XX:+UseCompressedOops option. Of course, each vendor’s implementation is slightly different, with different limitations and levels of support.  Primarily you see these flags used in benchmarking, but as some of our customers start to run into heap size limitations on 32bit operating systems, they are looking to move to 64bit systems (but would like to avoid giving up any performance).

There is a post on the WebSphere community blog that talks about the IBM JDK compressed references and has some pretty graphs showing the benefits.  And Billy Newport gives a nice summary of why this feature is exciting.