Part 2: A change for the worse?
In the first part we recalled that using full-fledged PCs for running the user interface of enterprise applications had become expensive. Each new application required a new front end on the client, which in turn pushed the client hardware to its limits increasingly often. Keep in mind that at that time (the first half of the 2000s) there were no dual cores, no gigabytes of RAM, no 64-bit systems, no gigabit Ethernet - at least not in the offices.
So, was the idea of building the user interfaces of enterprise applications using traditional client technologies bad? By no means. It offered tight integration with the client, for example by accessing local hardware (printer, scanner, chip card reader, ...) or by communicating with other apps. Today, Android developers take it for granted that they can utilize functionality of other apps simply by firing and consuming intents. In the early 2000s (and even before) that would have been possible, too. Typical Windows apps relied heavily on the Component Object Model, which exposed functionality of a program to other apps. Sadly, competing technologies relied on incompatible object models. Out of the box, it was impossible to have a Java Swing-based client app talk to, say, MS Office, and vice versa. The constraints imposed by the hardware have already been mentioned. As I wrote in the first part of this series, the solution seemed simple.
A web browser seemed like a reasonable execution environment for user interfaces. If the user interface is rendered by the browser, there is no need for an additional rollout when a new enterprise application is introduced. What the browser would render had to be prepared by the backend and then sent to the client. Hence, this transmission contained both data and display instructions. User input was sent back to the backend and processed there. Early web frameworks produced user interfaces that could not compete with well-designed rich client applications: no client-side validation of user input, poor usability, delays caused by server roundtrips, ... Even a decade later some aspects still require ridiculous workarounds. For example, have you ever asked yourself why generally agreed-upon keyboard shortcuts (hotkeys) are rarely used in web-based apps?
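To make the "data plus display instructions" point concrete, here is a minimal, entirely hypothetical sketch in Java. The class and method names are my own invention; the point is only that the backend bakes domain data and its presentation into one markup payload, so even trivial interactions require another server roundtrip.

```java
import java.util.List;

// Hypothetical sketch: the backend turns domain data into markup,
// coupling data and display instructions in a single payload.
public class OrderPage {

    // Renders the data together with its display instructions as one HTML string.
    static String render(List<String> orderIds) {
        StringBuilder html = new StringBuilder("<ul>");
        for (String id : orderIds) {
            html.append("<li><a href=\"/order/").append(id).append("\">")
                .append(id).append("</a></li>");
        }
        return html.append("</ul>").toString();
    }

    public static void main(String[] args) {
        // Every click on one of these links triggers a full server roundtrip.
        System.out.println(render(List.of("A-1", "A-2")));
        // → <ul><li><a href="/order/A-1">A-1</a></li><li><a href="/order/A-2">A-2</a></li></ul>
    }
}
```

Because data and presentation travel as one blob, the client cannot validate or react locally; it can only send the input back and wait for a freshly rendered page.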
Anyway... This is not meant to be a rant against certain technologies. I am merely trying to set the stage for what I would like to call the mobile enterprise, that is, how organizations and their applications can embrace mobile devices. To do this, quite a few prerequisites must be met. A few of them are:
- Applications need to be distributed.
- Applications need to be properly layered.
- Functionality must be accessible individually.
If a physically distant client program is used as the user interface of an enterprise application, there MUST be a public interface. Whether it is well-written and thoughtfully designed remains to be seen, but at least it is there. My experience is that in typical web apps the separation between the business logic and the UI layer is often fuzzy, if present at all. If everything melts into a single .war or .ear file, why bother with a costly separation of layers? Test yourself: what is a front controller or a business delegate?
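For readers who did not pass the test: the business delegate is a classic J2EE pattern in which the UI layer talks to a thin delegate object instead of invoking business services directly, which keeps the layers separable. The following is a minimal sketch with invented names, not a production implementation; in a real system the service would sit behind a remoting mechanism rather than a local instantiation.

```java
// Hypothetical sketch of the Business Delegate pattern.
public class DelegateDemo {

    // The public contract of the business tier.
    interface OrderService {
        String statusOf(String orderId);
    }

    // Concrete service; in a real system this would live behind EJB, RMI, or HTTP.
    static class OrderServiceImpl implements OrderService {
        public String statusOf(String orderId) {
            return "SHIPPED"; // stubbed business logic
        }
    }

    // The delegate hides lookup and remoting details from the UI layer.
    static class OrderDelegate {
        private final OrderService service = new OrderServiceImpl(); // lookup stub

        String orderStatus(String orderId) {
            return service.statusOf(orderId);
        }
    }

    public static void main(String[] args) {
        // The UI layer depends only on the delegate, never on the service implementation.
        System.out.println(new OrderDelegate().orderStatus("A-1")); // prints SHIPPED
    }
}
```

The point of the indirection is exactly the separation the paragraph above calls for: the UI can be swapped (web, rich client, mobile) without touching the business tier, because the contract between them is explicit.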
The takeaway of this part: the need to properly structure an application and to establish well-defined interfaces is as urgent as ever. How this can be achieved shall be the topic of a future installment.
Go to Thoughts on the mobile enterprise #1