the open web, web development, and more

Showing posts with label web applications. Show all posts

Wednesday, July 8, 2009

Better Objects Using jQuery

I've been working almost exclusively with JavaScript building single page applications recently. jQuery has made developing the apps enjoyable but one area I find a bit awkward is creating complex objects.

Classes work well but can be cumbersome when creating a user interface since the object reference returned from the constructor is not a DOM node but a JavaScript object. Therefore, in order to insert the user interface component into the page, a special method like getHTML must be available for that object. This isn't terrible but creates more complicated code. Here's an example user interface widget - a simple box displaying a name:

function Widget(name) {
    var container = $("<div/>");

    function setName(newName) {
        name = newName;
        container.html(name);
    }

    this.getHTML = function () {
        setName(name);
        return container;
    };

    this.getName = function () {
        return name;
    };

    this.setName = function (newName) {
        setName(newName);
    };
}

And the code that uses it:

var nameWidget = new Widget("Dave");
var widgetNode = nameWidget.getHTML();
$(document.body).append(widgetNode);

This isn't too bad but gets a bit ugly when you need the ability to hide, show, animate, or do anything interesting. The choice is either to attach more methods to the Widget class or to work with the jQuery object returned by getHTML. Pattern-wise, adding more methods to Widget is the cleanest but could require creating a lot of methods. Working with the jQuery object returned by getHTML is easy but leads to managing two objects in your code: nameWidget and widgetNode.

What would be really nice is the ability to call any jQuery method on the nameWidget object PLUS all the methods specific to the Widget class. It occurred to me that this could be possible with the addition of a small jQuery Plugin I've dubbed Methods.

(function ($) {

    $.fn.methods = function (arg0) {
        if (arg0) {
            return this.each(function () {
                $(this).data("methods", arg0);
            });
        }
        return this.data("methods");
    };

})(jQuery);

This plugin adds the methods method to jQuery. A collection of functions can be passed to methods to attach them to a specific jQuery object; calling methods with no arguments then returns the collection. Here's a rewrite of the Widget class to explain this better:

function widget(tag, name) {
    var container = $(tag)
        .html(name)
        .methods({
            getName: function () {
                return name;
            },

            // Return "this" for chainability
            setName: function (newName) {
                name = newName;
                container.html(name);
                return this;
            },

            // Add the end method for chainability
            end: function () {
                return container;
            }
        });

    return container;
}

And the code that uses this:

var nameWidget = widget("<div/>", "Dave");
$(document.body).append(nameWidget);

Now we have a nameWidget that directly corresponds to the user interface component and, using jQuery, can be inserted anywhere in the page. To access the object's specific methods, you simply call methods. Here's an example that shows the combination of the methods and chaining:

nameWidget
.hide()
.methods()
.setName("Bob")
.end()
.show()
.addClass("widget");

Since the Methods plugin uses jQuery's data method behind the scenes, the nameWidget can actually be retrieved through a DOM lookup.

var nameWidget = $(".widget")
.hide()
.methods()
.setName("Sally")
.end()
.show();

And that's it. In the end, Methods is a simple jQuery plugin that lets you create complex objects with custom methods and closures and can still be used like a regular jQuery object.
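If you're curious why each widget keeps its own state, the trick is that Methods stores each object's bag of closures keyed by the object itself. Here's a framework-free sketch of that idea (using a plain Map instead of jQuery's data, so there's no DOM involved):

```javascript
// A framework-free sketch of the idea behind the Methods plugin:
// each object gets its own bag of closures, stored in a map keyed
// by the object itself, so two widgets created from the same
// factory keep independent state.
var methodStore = new Map();

function attachMethods(obj, fns) {
    methodStore.set(obj, fns);
    return obj;
}

function methodsOf(obj) {
    return methodStore.get(obj);
}

function makeWidget(name) {
    var widget = {}; // stand-in for a jQuery-wrapped DOM node
    return attachMethods(widget, {
        getName: function () { return name; },
        setName: function (newName) { name = newName; return this; }
    });
}
```

Because getName and setName close over the name argument of each makeWidget call, changing one widget's name never affects another's.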

Tuesday, March 11, 2008

Firefox 3 Beta 4 and Prism 0.9 Work Very Well

As promised, here is a follow up on my previous post about running the latest version of Prism as an extension. Good news is: Prism on Ubuntu actually works now.

It would seem that installing Firefox 3 beta 4 from the Mozilla site solved a lot of the problems I was having. So far I haven't had any problems creating Webapp shortcuts for the Google Apps I use and as an added bonus it is easy to select an alternative icon (the favicon is a bit too small for some uses).

The most noticeable improvement is really not Prism related but due to the improvements in the Gecko Platform (see beta 4 release notes). Webapps start significantly faster now.

Another nice feature is that Prism instances now run in separate processes. It used to be that all Webapps shared the same process, so a poorly written Webapp could take down every Prism instance. Additionally, since cookies are isolated to each instance of Prism, the chance of one Webapp mounting an XSRF attack is greatly reduced, since it won't have access to session cookies from other Webapps.

Happy Webapping

Friday, March 7, 2008

Prism: Now an Extension for Firefox 3

Mozilla has announced that Prism version 0.9 is now available as an extension for Firefox 3. See the official Mozilla Labs announcement or Mark Finkle's blog for more information.

Installing the extension is simple but does require Firefox 3 since it relies on Firefox 3's ability to allow other applications to use its XULRunner platform. As an Ubuntu user, the simplest way to install Firefox 3 is to enable the Unsupported Updates Repository and use Synaptic. Update: Save yourself some headaches with the Prism extension and follow the instructions at Tombuntu (still very simple and applies to Firefox beta 4 too). Once installed, start Firefox 3 and install Prism.

My first (and only) attempt at converting a website to an application failed. The problem was that the wizard wouldn't actually save the Web Application icon to the desktop. And without this I saw no way to launch Prism. I hope this is a Linux specific bug that will be ironed out soon. I'll post again once I get to play with Prism more.

As an aside, using Firefox 3 was nice. Compared to Firefox 2, it felt a lot snappier and faster at rendering pages. Hopefully some lagging extensions will support Firefox 3 soon.

Wednesday, December 5, 2007

Picnik For Linux Users

Much of the blogosphere has been talking about Flickr finally adding some more advanced photo editing features through a partnership with Picnik. Advanced photo editing features have been on my Flickr wish list for a long time, so I was excited to take it for a test spin and see how well it works.

Having tried Picnik in the past, I was less than impressed as a Linux user. On my first attempt to upload an image I came across a Linux specific bug that prevented me from seeing any of my image files in their file picker. A month or so after reporting the bug I heard back from Picnik and the bug had been fixed (a bad regex was to blame). Great, now I can try Picnik again, or so I thought. Although I could select files, every attempt to upload failed (badly - taking Firefox down with it). So I reported the bug and promptly forgot about Picnik after being told it was due to a bug in Adobe's Linux Flash player and that I should take it up with Adobe.

Fast forward to today and Picnik is available through Flickr - and working much better on Linux (this is probably due to the fact that I don't have to use Picnik's file uploader since files are already on Flickr).

How is Picnik from a Linux user's point of view? Awesome, because it brings photo editing tools into the work flow of uploading and organizing photos in Flickr. Picnik also does a good job of making image editing tools more understandable than Photoshop or the GIMP, increasing the chance that I will actually use them.

Picnik still leaves room for improvement however. The application (flash-based) took a long time to load (a full 60 seconds). For now, I'm attributing the slow load to poor Picnik server performance after the Flickr launch. This will be a show stopper if it doesn't improve though. Also, some of the image editing tools performed quite well while other tools made the application unresponsive and hard to work with. Perhaps these are rough edges in the Linux Flash Player or just problems with Picnik itself.

Overall, Picnik is a nice option to have integrated into Flickr. As I get more time to play with the tools, I'll post tips and tricks that might be useful.

Sunday, November 4, 2007

Prism Available for Linux

As previously posted, the latest version of WebRunner has been released under the name Prism. Until recently, Prism was only available for Windows users but now Linux and Mac users can use Prism too. Running Web Apps in Prism has three advantages:

  • Reduces the frequency of Firefox crashes. I'm not sure why Firefox crashes as often as it does but I suspect it has to do with all the extensions and running Web Apps like GMail all day long. Hopefully Firefox 3 will fix these problems, but isolating GMail to Prism has already helped a lot.
  • Reduces the chance of a cross-site request forgery (XSRF), since cookies from long-running Web Apps are not available to malicious sites (assuming you would come across a malicious site using your regular browser and not Prism).
  • Reduces needless browser UI elements, freeing up more space for the Web App itself.
I have been using WebRunner/Prism for a bunch of Web Apps I use and, generally, I have been happy with the results. The latest release under the Prism name has made some key improvements:
  • Spell checker works all the time. I'm highly dependent on the spell checking feature in Firefox in order to compose semi-intelligent email. Suffice it to say, using GMail in WebRunner made for confusing, if not humorous, reading on behalf of my email recipients.
  • The cursor is visible when composing email in GMail. Sounds weird but actually trying to write an email without a cursor is even weirder.
  • Prism start-up time is faster. Not only compared to WebRunner but Firefox 2 as well. It also seems that Prism is able to load the Web App faster, but I have no tests to back this up.
  • Favicons appear in the panel. Before the Prism release all Web Applications running in WebRunner used the same WebRunner icon. This improvement sounds trivial but is a big usability improvement, making it much easier to move between minimized applications.
There are a few annoyances/bugs with Prism worth noting:
  • Opening multiple Web Applications simultaneously throws errors
  • No Linux version of the favicon resizing/styling tool available to Windows users
  • General rendering weirdness due to the fact that Prism uses the still-evolving Mozilla 1.9
  • Lack of Prism documentation on webapp bundles that were used in WebRunner
  • Cookies are shared between all Web Apps running in Prism. This is the norm for a browser, but leads to the xsrf vulnerability. Although less of a concern since most Web Apps running in Prism will be trusted, it would still be nice to tighten the security. Perhaps Web Apps launched from a webapp bundle instead of a uri could keep cookies isolated to themselves?
If you are a Web Developer and interested in seeing how users might be interacting with your Web Applications in the future, definitely give Prism a spin. Tombuntu posted instructions on installing WebRunner on Ubuntu and these are still applicable to Prism. Personally, I extracted the Prism archive to /opt/prism and then saved all webapp bundles in my home directory.

Friday, October 26, 2007

WebRunner is Now Prism and an Official Mozilla Labs Project

If you are a regular reader of Planet Mozilla this is not news to you. However, if you are not a reader but you are interested in seeing how Web Applications will evolve, then the news of Prism becoming an official Mozilla Labs project might interest you.

Prism is currently only available for Windows but this should change in the next week or so with the release of a Linux and Mac version. However, WebRunner (the precursor to Prism) is available for all platforms now. My advice is to wait until Prism is available on your platform as it has a number of bug fixes that have made WebRunner annoying to use (no spell checker being the biggest). My experience with WebRunner has been mixed. The idea is good, but the project was very much in its infancy and not ready to replace Firefox. The fact that Prism is an official project now bodes well for its future and I look forward to new versions.

The big question, however, is: how many more times will WebRunner/Prism be renamed?

Wednesday, August 15, 2007

Thoughts on GBrowser

The web is buzzing with rumours of Google releasing its own browser. Here are my additions to the rampant speculation.

Seems to me that the most likely scenario is that GBrowser would compete with Adobe Air and provide an off-line environment for Web Applications to run in. Not only would it provide off-line support but it would likely provide a faster runtime than current browsers provide.

This makes sense as it would not compete with Firefox (Google has invested a lot into it, and Mozilla has asserted that their primary goal is to create a general-purpose browser) and it could greatly improve the performance and features of Google's Web Applications that are trying to compete with desktop applications.

Hopefully, if GBrowser does materialize it will be open sourced and available on Linux, Mac, and Windows. Here's hoping!

Friday, August 3, 2007

My Flickr Wish List


The G.I. Joe Folk Art image was taken at a little cafe in Victoria, BC called Habit. Quite possibly the Coolest. Coffee-house-art-show. Ever. Check it out if you are in town.

The image is composed of 4 digital photographs that I had to painfully stitch together using the GIMP. As I was doing this, I began to think of all the features I wish Flickr had:

  • Photo Stitching Tool
  • Other photo editing tools (filters, effects etc. - perhaps like www.picnik.com)
  • Trip Plotter - a tool to place all your photos in a set onto a private map and add rich annotations (useful for documenting trips)
  • Some way to download all your photos and meta information
  • Last but not least, any sort of evolution in the service. Is it just me or has Flickr not introduced any new features in a while?

Wednesday, June 20, 2007

A Box of Google Gears

Google Gears is a plugin that has been a long time coming and addresses some of the ideas I discussed in my earlier post, "Offline Web Applications". My only exposure to it has been as a Google Reader user and, to be honest, I was a bit disappointed. My main gripe is that it requires the browser to remain open after going offline. It works fine if you go offline, then disconnect from the Internet and keep your browser open. But shutting down the browser, then restarting without an Internet connection does not work at all. Also, reloading Google Reader while in offline mode (without an Internet connection) does not yield pleasant results.

I don't suspect many users will find this acceptable. But since this is a Google-backed plugin I have hope that it will improve soon. In fact, I would not be surprised if Google rushed the plugin out the door in reaction to the spate of RIA Frameworks being announced.

Even though the plugin needs some refining, the potential is there for Web Applications to become viable alternatives to Desktop Applications. Fast-forward a few years and Google Gears is probably more stable and sophisticated. In fact, the API it exposes is likely included by default in most browsers (i.e. Firefox 3). More importantly, Web Application Developers have figured out how to incorporate offline mode seamlessly into the user experience.

But how do Web Applications gain greater acceptance in the enterprise? Perhaps by bringing the server closer to the user. This could come in the form of a box that plugs into a network and services requests for certain URLs itself (i.e. intercepts requests for Google Docs or GMail), runs code locally against a local database, and responds back to the browser. Then, depending on how the box is configured, synchronizes data back to an origin server. With the box in place, data is stored in-house (with optional offsite storage) and application response rates are much faster. Devices like the Asus Eee PC or OLPC could replace the desktop for most needs and would have the added benefit of continuing to work as normal outside of the enterprise network since requests for Web Applications would go directly to the origin server.

This box could be equally useful in a home setting (likely even more accepted) as a combo unit that acts as a wifi access point, router, and file server in addition to its application server role. Imagine using an App like Flickr served from a Gears box (leap of faith: assume box is open). Upload rates would be super fast as would editing and organizing your media. So fast in fact, there would be no need to use a Desktop App to first organize and upload your media (Goodbye F-Spot). What does Google have to do with this box? Nothing, other than they seem to be a company that wants to push the usefulness of Web Applications and become a dominant application provider.

But that's enough baseless speculation over the future of computing. At this point I am merely hopeful that in a few years Web Applications will have the option to be delivered this way.

Friday, May 25, 2007

Offline Web Applications

So, apparently, the Achilles heel of Web Applications is being worked on. Mozilla has added DOM Storage (a simple client-side key/value store accessible through JavaScript) to Firefox 2.
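For the curious, the draft's Storage interface boils down to a setItem/getItem pair. Here's a tiny sketch of how a Web App might keep a draft across page loads; in a browser you'd pass window.sessionStorage, and the "mail-draft" key is just an invented example:

```javascript
// Sketch of using the DOM Storage interface (setItem/getItem) to
// keep an email draft. `storage` is any object with those two
// methods -- in a browser, window.sessionStorage would do.
function saveDraft(storage, text) {
    storage.setItem("mail-draft", text);
}

function loadDraft(storage) {
    // getItem returns null for missing keys, so fall back to ""
    return storage.getItem("mail-draft") || "";
}
```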

I won’t claim to have read the Web Applications 1.0 working draft (which proposes DOM Storage and other features) in its entirety. However, from what I got out of it, it doesn’t seem to go far enough. Web Apps will benefit from offline storage but they would also benefit from greater interaction/freedom with the host Operating System. Not as much freedom as a traditional Desktop App but a larger sandbox than current Web Apps operate in now. In a way, the user should be able to “Dock” a Web Application in order to expand the sandbox security model. Maybe this is just wishful thinking, but here are my thoughts on Dockable Web Applications.

Example User Flow

  1. User visits a website using a web browser and tries out a web application
  2. User sees an indication that the web application is “dockable”
  3. User clicks on dockable indication and drags it to a docking area
    • Docking is, of course, optional and should never happen without a user initiating the event
    • Ideally, the docking indication is shown in the browser's interface and not the web page itself. This way, an OS/Browser combination that does not support docking will not indicate a docking ability.
    • A good UX for this action could be a visual representation of the web page being torn from the browser and consumed by the docking area
  4. Application is docked
    • Becomes Accessible through Applications or Start Menu
    • Components indicated by manifest are downloaded and saved to computer
  5. User launches Docked Web Application (DWA)
  6. DWA opens but in the Dock (browser minus location bar, navigation, etc.)
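To make step 4 a little more concrete, here's one guess at what a DWA manifest might contain, sketched as a JavaScript object literal. Every field name here is invented; no such format exists yet:

```javascript
// A hypothetical DWA manifest. All of these fields are invented for
// illustration -- the format the Dock would actually use is an open
// question.
var manifest = {
    name: "Example Mail",
    origin: "http://mail.example.com",

    // components downloaded and saved to the computer when docked
    components: [
        "index.html",
        "css/app.css",
        "js/app.js",
        "images/icon-48.png"
    ],

    // how often (in minutes) the Dock checks the origin server
    // for updated resources
    updateIntervalMinutes: 60
};
```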

How it might work

This is a bit tricky. A DWA would have to be designed with docking in mind. However, reuse of existing Web App resources like HTML, CSS, JavaScript, and Images should be a primary goal so that a DWA is not a complete rewrite of an existing Web App.

The Dock host environment can provide JavaScript with an API to additional features not found in a browser. This API could provide JavaScript with the ability to read and write to certain areas of the file system, access to an application-specific local database (i.e. the Dock comes with a database program and each DWA can create a new database for itself), and perhaps image manipulation libraries. With the API, any existing JavaScript that makes XmlHttpRequests to fetch server-side data can first method sniff for the Dock API and, if found, route the request locally instead.

With this in mind, a DWA must have another design change. HTML cannot be personalized (rendered dynamically via server-side technologies) since the DWA would have no server to generate it. Instead, a Web App would use HTML and CSS to provide the structure, style, and sensible default content, and JavaScript to make XmlHttpRequests which would then personalize it. When "docked", this same app would make API requests to the local file system or database and receive data (strings, XML, JSON, etc.) back, which would be parsed and handled just as if it came from a remote server.
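The "method sniff" described above might look something like this. Both dockAPI and its request method are invented names for whatever the Dock would actually expose:

```javascript
// Sketch of the method sniff: if a (hypothetical) Dock API is
// present, route the data request to the local database / file
// system; otherwise fall back to the normal XmlHttpRequest path.
function fetchData(url, dockAPI, xhrFetch) {
    if (dockAPI && typeof dockAPI.request === "function") {
        // docked: answered locally, no network needed
        return dockAPI.request(url);
    }
    // undocked: answered by the remote server as usual
    return xhrFetch(url);
}
```

Either way, the calling code parses the response the same way, which is what lets a DWA reuse the existing Web App's JavaScript.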

Lastly, the DWA must provide some initialization scripts to set up the local environment. These would be JavaScript files that use the Dock's API to save files to certain locations, create directories, and create necessary database tables if required.
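An initialization script might then read like the following sketch. The dock object and all of its methods (mkdir, saveFile, createTable) are hypothetical:

```javascript
// Sketch of a DWA initialization script, run once when the app is
// docked. Every method on `dock` is invented for illustration.
function initialize(dock) {
    // set up the local directory layout
    dock.mkdir("drafts");
    dock.saveFile("drafts/README.txt", "Saved drafts live here.");

    // create the table(s) the app needs in its local database
    dock.createTable("messages", {
        id: "integer primary key",
        subject: "text",
        body: "text"
    });
}
```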

With data saved locally, the DWA must eventually synchronize its content to the remote server. To accomplish this, the Dock should provide an API to send and receive updates to its local content when it detects it is online. The frequency at which it pings the remote server ought to be configurable by the DWA. Also, at configurable intervals, the Dock can check with the remote server and download any updates to its resources: HTML, CSS, JavaScript, Images, etc.
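One way to picture the synchronization side: local changes queue up and get flushed whenever the Dock reports it is online. The isOnline and send callbacks below are placeholders for whatever the Dock would actually provide:

```javascript
// Sketch of the sync idea: record changes locally, flush them to
// the remote server when online. `isOnline` and `send` stand in
// for hypothetical Dock facilities.
function createSyncQueue(isOnline, send) {
    var pending = [];
    return {
        record: function (change) {
            pending.push(change);
        },
        // called at the DWA's configured interval; returns how many
        // changes were pushed to the server
        flush: function () {
            if (!isOnline()) { return 0; }
            var sent = pending.length;
            pending.forEach(send);
            pending = [];
            return sent;
        }
    };
}
```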

Additional Considerations:

  • Where are passwords/usernames stored to allow the Dock to send and receive data and updates?
  • What is the security model (should the DWA only be able to send and receive data with the source domain)?
  • Is a shared image directory of icons required? - No, since the Web App version would not have access to them.
  • Since it is possible to access a DWA and at the same time use its online counterpart, how will conflicts in synchronization be resolved?
  • What happens to XmlHttpRequests when a DWA has access to the Internet? Are requests still routed locally then synced or is this bypassed?

Steps to a Prototype

What is the simplest possible way a working prototype could be made?
Keep the problem small and don’t try to solve all the details. That being said, the Mozilla Foundation is already providing some needed “Dock” features in the Firefox 2.0 release: namely, DOM Storage and the ability to build other applications based on the XULRunner framework.

The “Dock” environment could be built as a simple XUL application and provide a convenient API to SQLite. It can also, hopefully, provide the ability to save files to the file system.

A simple Firefox extension could be created to act as the DWA notifier. It would simply scan the currently viewed Web page and indicate whether the web application is also a DWA (nothing complicated, just a meta tag in the HTML page indicating yes or no).
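The extension's check really could be that simple. Here's a sketch that scans page markup for a hypothetical meta tag named "dockable" (a real extension would inspect the live DOM and handle attribute order, but the idea is the same):

```javascript
// Sketch of the DWA notifier check: look for a hypothetical
// <meta name="dockable" content="yes"> tag in the page markup.
// The tag name and value are invented for illustration.
function isDockable(html) {
    return /<meta\s+name=["']dockable["']\s+content=["']yes["']/i.test(html);
}
```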

Does this sound like lunacy? Is it already being developed?

Sunday, March 18, 2007

Web Apps: The good and the ugly.

Admittedly, I’m a fanboy when it comes to web applications. This isn’t surprising since developing web applications is my job. But, in all honesty, I find myself using them more and more. Not only for the new functionality not available in desktop apps (think community sites like Yahoo Groups or Flickr) but, increasingly, in place of desktop apps that essentially do the same thing (think Google Docs).

So, why are Web Applications so great? I see a number of reasons:

  • No installation (Enter a URL, create an account, and go)
  • No upgrade hassles. You get new features frequently without a version number to care about.
  • Safe to run. No worries about installing spyware or poorly written software that can run amok on your computer (as a non-IE user at least)
  • Usually compatible with your platform (if it works in Firefox, it will usually work on Linux, Mac OS, Windows, etc.)
  • Safely and securely stores your data. It is safe from your own blundering and from the devious activities of others.
  • Access from anywhere. You can use the same applications from home, work, an Internet cafe, a library, etc.

Now most, if not all, of the strengths above can be argued, but I think on average they hold true. However, one huge weakness that needs to be remedied is the lack of an offline state.

Take for example, the web application GMail. As soon as Google made GMail available for other domains I jumped on it. It works great, ample space, powerful search, get to keep my email address, and no losing data from hardware failures or user errors (OS upgrades gone awry without backups). Wonderful, right? Except now I am completely and utterly tied to the Internet if I want to look up a phone number or read an old email.

This really needs to improve. Internet access is not all pervasive, and where it is available it is certainly not 100% reliable. Not being able to send an email without Internet access is understandable, but not being able to compose an email is frustrating. Obviously, I could switch to a desktop application for an email client. But then I would lose all the benefits outlined above. Rather, I would like to see web applications evolve and provide an offline state.

It is one thing to ask for this seemingly contradictory feature and another to come up with a realistic framework that current web sites are willing to implement.