
brainstorming - the rich thin client

Warning - I write this standing on a soapbox, and this is pretty open and unorganized. This is how I brainstorm in general. I've decided to start putting these up on my wiki instead of keeping them in my head.

I am big on thin clients lately. I believe there is a general sine-waviness in the industry that causes things to move towards ubiquitous computing and then back to having everything localized again, because some things about thin clients just plain suck: namely bandwidth, latency, and the power of the client.

Lately, though, you can get a Cortex-A8-based SoC fairly cheap. And ubiquitous (buzzword alert) bandwidth really is starting to become a reality. So I think we're hitting the era of the rich thin client. It's cheap, too: think of the rich thin client as everything you see in a couple-hundred-dollar netbook today. There really is a lot of power there.

Netbooks are able to sell themselves. I haven't seen any artsy commercials proclaiming how cool you will be if you have one. They're just cheap and useful. And these devices are not only economically feasible, they're nearly disposable. I have a friend who drives one of those boxy Scion xBs - he calls it his throwaway car and jokes that he has another one still in the plastic.

They have a right-sizing problem though. They're starting to get discrete GPUs, larger and higher-resolution screens, multiple cores, etc., and that's going to blur things quite a bit. You can already configure one to cost eight times what the Eee PC goes for. So I think the enthusiasm behind them will also be their downfall - unless the need for these devices to become more powerful goes away, the line between a netbook and a full-blown laptop is going to get very, very blurry.

Again, I believe the reason a netbook is popular is that it's cheap and it's 'rich' enough to do what you need. Heck, you could call it a different factoring of the hardware platform in the iPod touch if you want.

That hardware platform, respun in any form factor is the perfect rich thin client. It gives me accelerated access to a framebuffer, plenty of kinds of input, and a fat link to the ether. And this wonderful bit in between where it doesn't matter if I'm not always connected because the machine is powerful enough on its own to do some things.

Wouldn't it be great if someone stepped up to the plate and created a software platform for these devices that made the line between the work done on the client and the work done in the (dammit) cloud transparent to the user?

I think some folks when asked this question would say "Hey we're done, it's called web 2.0, and Palm WebOS is the perfect realization of that."

Unfortunately it's not. It's a good step forward that I believe was delivered before completion. The reasons WebOS sucks can be fixed, but so far the things it's missing are things any good operating system should have. Heck, I'd love to take the WebKit porting layer, build it directly on top of a super-thin hardware abstraction, and add modern operating system features. WebOS sits on top of Linux, but it doesn't take advantage of a lot of what a full-blown operating system provides, simply because it tries to be the operating system inside the WebKit abstraction.

Inside its "nearly everything is JavaScript running in a browser" model (which is fine, given canvas and local storage), WebOS lacks real support for isolating tasks from each other, including prioritization and accounting - both brain-dead-important things for any multitasking operating system.
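For illustration, here's a minimal C sketch of the kind of prioritization and accounting primitives I mean - just the stock Linux setpriority()/getrusage() calls, nothing WebOS-specific, and obviously a real platform would build policy on top of these rather than expect every app to call them itself.

    /* Minimal sketch: the prioritization and accounting primitives the
     * underlying Linux kernel already exposes.  Nothing here is WebOS-
     * specific; the point is that the building blocks exist one layer down. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>
    #include <unistd.h>

    static void do_background_work(void)
    {
        /* Stand-in for something expensive, e.g. laying out a heavy page. */
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 100000000UL; i++)
            x += i;
    }

    int main(void)
    {
        /* Prioritization: deprioritize this process so an incoming-call UI
         * (or any foreground task) can preempt it cleanly. */
        if (setpriority(PRIO_PROCESS, 0, 10) != 0)
            perror("setpriority");

        do_background_work();

        /* Accounting: ask the kernel what this task actually cost. */
        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) == 0) {
            printf("user cpu: %ld.%06lds  sys cpu: %ld.%06lds  max rss: %ld kB\n",
                   (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
                   (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec,
                   ru.ru_maxrss);
        }
        return 0;
    }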

Because of this, I had a phone call go to voicemail simply because I was loading a traffic web page and the machine couldn't render the incoming call info - it drew black, then zoomed out from the web browser app, and then sent my friend to voicemail, all without me touching it. I also tried an HTML5 page that used canvas, and it leaked so many JS allocations that the machine popped an out-of-memory dialog. Fine. Navigate away from the page and the device should have memory again, right? Nope. Close the web browser? Nope. There wasn't even enough memory available to open the system dialog to reboot.

WebOS also does not use acceleration for anything outside of video, from what I've found so far. (At minimum I'd expect fast blits between memory allocated by the application and the framebuffer or window backing store.)
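To be concrete about the baseline I'm talking about, here's a rough C sketch that blits an application-owned buffer straight into the Linux framebuffer via the stock fbdev interface (/dev/fb0). This is just a memcpy per row - the unaccelerated floor - and the point of a rich thin client SoC is that even this copy should be handed off to the 2D engine rather than eaten by the CPU.

    /* Rough sketch: the unaccelerated baseline blit from an application
     * buffer into the framebuffer via Linux fbdev (/dev/fb0).  Assumes a
     * 32bpp mode; error handling trimmed.  An accelerated path would hand
     * this copy to the SoC's 2D engine instead of the CPU. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0)
            return 1;

        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo);

        uint8_t *fb = mmap(NULL, finfo.smem_len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED)
            return 1;

        /* Application-owned backing store: a plain malloc'd pixel buffer. */
        size_t app_pitch = vinfo.xres * 4;
        uint8_t *app_buf = malloc(app_pitch * vinfo.yres);
        memset(app_buf, 0x80, app_pitch * vinfo.yres);   /* flat gray */

        /* The blit itself: one memcpy per row, honoring the fb's own pitch. */
        for (uint32_t y = 0; y < vinfo.yres; y++)
            memcpy(fb + y * finfo.line_length, app_buf + y * app_pitch, app_pitch);

        free(app_buf);
        munmap(fb, finfo.smem_len);
        close(fd);
        return 0;
    }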

These things can be fixed but really should have been designed in from the ground up. Fortunately or unfortunately, this is the best realized standard I've seen for a software platform on a rich thin client.

Jury-rigging existing architectures to support thin clients is an interesting spectrum. At one end you have "give me a frame and the capabilities of HTML5, or just Flash, and I'll do the rest" - which is how I look at Google Gears, Adobe Flex, etc. I don't have enough depth in that area, but so far I really believe it's the wrong direction. At the other end of the spectrum you have the Pano Logic: a shiny little box that offers most of the I/O you get from an actual PC - audio, video, USB, Ethernet - all glued together by a Xilinx Spartan-3E 1600K. That may well have an ARM core on it, but they call it "CPU-less." The device talks over Ethernet to a server, which can be in their (dammit) cloud, running software that arbitrates between the device and a VMware session.

The entry-level Pano Logic 'license' unfortunately costs around $2000 for 5 devices and a year of service. (Tangent warning) I picked up 5 on eBay for $20 to check them out. They have a pretty incredible design, and if I can't get them to work with VMware, I'll just probe all the connections to the FPGA and make the device do something different. I have what I believe is an original, expedient hack for doing this that I'll write about later - if you think about it, it'll probably be obvious; I used the same approach for the 32x24-pixel RGB LED displays.

Anyway, to summarize:

  • The netbook is the hardware platform headed towards rich thin client computing. Forcing the evolution of that platform by souping it up with expensive hardware is the wrong way to go. That expensive hardware will eventually get cheaper because the process to make it becomes cheaper, while even more powerful and expensive hardware keeps becoming available. It's not a rich thin client because it uses the latest powerful, expensive hardware; it's a rich thin client because it uses proven hardware that is now cheap to make. That 'rich thin client' hardware will always sit at the shallow end of the affordable computing spectrum, just as the souped-up hardware sits at the other edge. Blurring the line is foolish and will kill this hardware platform's ability to sell itself. The nature of this platform lends itself towards the development of shared computing in a self-sustaining way.
  • For the software, execution on the device versus in the cloud should be very blurred. Some things are better right next to the input and the framebuffer. Other things are expensive, but also pipelineable and cacheable, and should be run on the big iron in the cloud. Often both are parts of the same task. The external requirements are being filled by the evolution of computing: we have pervasive rich-client hardware, a low-latency, high-bandwidth network, and we are riding yet another wave towards doing more work in the cloud. The software platform for this is currently evolving in a genetic manner that is much more haphazard than the evolution of the hardware. 'Survival of the species' lends itself more towards physical things - with hardware we are eager to throw out the old for the new, but after hitting a certain point of pervasiveness, software just won't die. It is much more expensive and time consuming for specific hardware to reach that same level of pervasiveness.
  • There have been many interesting attempts to build platforms for different pieces of the offload-work software puzzle, and currently developers pick and choose those pieces to put together. For example, there are already C compilers for offloading parallelizable work to DSPs, FPGAs, GPUs, and CPU clusters. But there is no overall coherent vision for doing this. It is meta to the role of today's operating system design, yet ideally it should not rest completely on the shoulders of the task being performed either. It is a reaching statement, but I believe any effort towards a good rich thin client platform will step into and help standardize this arena, by realizing a structure that helps engineers more easily balance the location of resources, the latency between them, and their execution properties against the tasks themselves (see the rough sketch after this list). I believe this is the meta operating system, and it will need to span machines.
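To make that last point a little more concrete, here's a toy C sketch of the balancing act I mean: a made-up cost model that decides whether a chunk of work runs on the device or gets shipped to the cloud, given its size, the link, and the relative horsepower on each side. The names and numbers are mine and purely illustrative - a real meta operating system would measure these continuously and make the call itself - but it shows the shape of the decision.

    /* Toy sketch of the device-vs-cloud balancing act.  All names and
     * numbers here are illustrative, not a real scheduler: the idea is just
     * that the decision falls out of work size, data size, link latency and
     * bandwidth, and the relative horsepower on each side. */
    #include <stdio.h>

    struct task {
        double work_gflop;      /* compute the task needs, in GFLOPs        */
        double data_mbytes;     /* input+output that must cross the link    */
    };

    struct environment {
        double local_gflops;    /* what the SoC can sustain                 */
        double remote_gflops;   /* what the big iron can sustain            */
        double link_mbytes_s;   /* usable bandwidth                         */
        double rtt_s;           /* round-trip latency                       */
    };

    /* Returns 1 if shipping the task to the cloud looks cheaper. */
    static int run_in_cloud(const struct task *t, const struct environment *e)
    {
        double local_s  = t->work_gflop / e->local_gflops;
        double remote_s = e->rtt_s
                        + t->data_mbytes / e->link_mbytes_s
                        + t->work_gflop / e->remote_gflops;
        return remote_s < local_s;
    }

    int main(void)
    {
        struct environment env = { 0.5, 100.0, 1.0, 0.08 };

        struct task ui_blit   = { 0.01, 8.0 };   /* big data, tiny compute  */
        struct task recognize = { 50.0, 0.5 };   /* tiny data, big compute  */

        printf("ui_blit:   %s\n", run_in_cloud(&ui_blit, &env)   ? "cloud" : "device");
        printf("recognize: %s\n", run_in_cloud(&recognize, &env) ? "cloud" : "device");
        return 0;
    }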
Last modified 22:57 Thu, 10 Sept 2009 by Main. Except where expressly noted, this work is licensed under a Creative Commons License.