Monday, July 2, 2012

Business Analysis and Process Design

Process Design Considerations

In this blog we look at some of the effects that might follow from an analyst's re-design of processes. This section is important as it provides a broad view of some hidden dangers that a novice analyst may easily overlook. These impacts are, again, social, organisational and specific to the operation of the task being modified. We begin with social impacts.

Democratisation of the Workplace

Rochlin (1997) suggests that ready access to information may empower some individuals to make better decisions, but that this does not necessarily extend to a social context in which centralised coordination may have gained in power. Within organisations, Rochlin (1997) questions the arguments of technology promoters that both workers and managers have better information, more autonomy and greater control. Instead, Rochlin (1997) argues that any new technology has an initial democratising effect, but once this transient phase passes both workers and managers find their discretion and autonomy reduced while their workloads increase and they face acquiring and maintaining a more complex body of knowledge. He argues that workers seeking autonomy either forego benefits available to other workers, or are bound by new rules that are strict in relation to their domain of "autonomous" behaviour. Increasingly, according to Rochlin (1997), these rules are imposed by people who are more skilled in computers than in the practices they are designing (business analysts?), and often these designers are external to the organisation.

Attempts to defend the role of human experience and integrative judgement are confronting "a new breed of neo-Taylorists who seek to automate everything in sight in the name of reliability, efficiency, and progress" (Rochlin 1997, p. 10). However, Rochlin (1997) warns that the outcome of redesigning complex task environments is difficult to predict. If new approaches are introduced carefully and with sensitivity (see the XP development principles in Lesson 6, for example), human capabilities can be augmented, particularly in stable environments where people have time to incorporate the new approaches into their cognitive frames. In such cases, people can become comfortable and expert in the new environment, whether work or social (Rochlin 1997). However, things often do not stay stable, particularly as computers allow high degrees of interconnectivity and complexity, both within organisations and to the political and social environments outside (eg: Facebook?). However, Rochlin (1997) warns:

"because computers are inherent promoters of efficiency, and because people who run the organisations and activities in question are constantly seeking to improve it, the tendency to tighten coupling, to reduce slack and time of response, also tends to increase" (Rochlin 1997, p. 11).

These concepts of slack and coupling in relation to process design are important, and we will return to them later in the article.

Processes of Design

Rochlin (1997) cites Landauer (1995) in relation to one problem of software design (p. 33):
"Unfortunately, the overwhelming majority of computer applications are designed and developed solely by computer programmers who know next to nothing about the work that is going to be done with the aid of their program. Programmers rarely have contact with users; they almost never test the system with users before release ... So where do new software designs come from? The source is almost entirely supply-side, or technology push."
This instantly raises a number of questions. One is how true this is over a decade later. We look in a minute at the imbalances between users and designers, which largely remain today. However, it is likely that Agile and XP approaches are largely directed at this problem of involving the user. Consider also how much design is done by Google and Facebook without involvement of users outside of the designing community. Another question that arises is whether this problem is overcome by having business analysts who can take a broader view of the problems being addressed. This may widen the focus of problem solving somewhat, but there is no guarantee that business analysts do not, as designers, suffer the same shortcomings as the computer programmers described above. One reason for this is that, according to Rochlin (1997), designers and users come from different communities, work in different environments and within different organisational contexts. Rochlin (1997) also argues that designers tend to have a privileged position in relation to dialogue, whereas users can provide input and give advice, but only from a subordinate position. A critical issue that affects operability, reliability and performance is whether or not users have real control over whether new techniques are introduced at all, rather than just some control over how they are introduced.

Rochlin (1997) later considers how technology has progressively separated people from control over their work. The first stage of this was the introduction of production lines, which transformed craftspeople into machine operators. The second stage was computerisation of production lines, which shifted people back another step: now they do not even operate the machines that manufacture, but simply supervise the controls of the computer which does this. Thus workers are no longer skilled in a variety of areas of production, but rather in standardised aspects of control. Rochlin (1997) argues that this separation of people from their work now extends even to knowledge-based work. Managers and supervisors often do not manage and supervise, but operate computers that manage and supervise. This leads to new forms of technical systems whose risks are not yet understood. Interestingly, Rochlin (1997) devoted two chapters to the risks associated with the use of such systems in computer trading and finance, this of course prior to the Global Financial Crisis (GFC).

Rochlin's (1997) explanation of the consequences of computerisation may offer some insights as to why productivity may not increase:
"to compensate for the loss of broadly skilled workers with integrative knowledge of plant and process, more elaborate and detailed models of work and task were created; control was asserted by increasing the direct, line authority of foremen and other intermediate managerial classes, under the assumption that better data and information would continue to ensure smooth integration. As the transition from skilled to standardized labor proceeded, more and more oversight was demanded for process reliability, which in turn increased organisational complexity and the need for managerial coordination. Despite its absence from traditional measures of productivity, the extensive bureaucratization, with the attendant increase in office and managerial staff it required, was now seen as necessary and productive, rather than as an expensive and wasteful consumer of resources" (p. 57).
Rochlin (1997) argues that this led to the replacement of workers who had detailed knowledge of their fields with professional engineers who had knowledge of general theories as well as management practices and objectives. Associated with this was a shift in importance from workers to plant and machinery. This in turn elevated the status of those responsible for the formal design and organisation of plant flows.

Braverman (1975) also discussed this theme, but argued that while technology deskilled workers, the main problem was not the changing relation between workers and their machinery but rather the increased control exercised by management in the name of technical efficiency. Rochlin (1997) suggests that it is this control of the workplace that threatens white-collar workers more than intelligent machines or office automation do. This threat was also discussed by Ford (2009), as covered in an earlier blog.

Rochlin (1997) suggests that technology has changed hierarchical workplaces. Traditionally, middle managers would collect, process and forward information up the chain. But now managers at higher levels can not only oversee the work of subordinates, at any level, but also monitor them. This gives them unprecedented opportunities to micro-manage, or otherwise interfere in, the work of those subject to their authority, often in relation to tasks and processes for which they have no expertise or formal training and where blame for mistakes will fall on others. Rochlin (1997) refers to research that revealed that plant managers overwhelmingly reported a desire for a central control screen from which they could run the entire plant. A similar study found that information systems in organisations were being used to produce systems with centralised knowledge and top-down control along the ideals of Taylor's 'scientific management'. Rochlin (1997) fears that this will lead to operators who appear to be autonomous, but will actually be working in jobs that are so bounded and circumscribed that they have no room for skill development or discretion. Managers may find themselves in similar situations, unable to deviate from the plans and programs of the organisation. Rochlin (1997) quotes Zuboff in relation to this threat to workers:

"Post-industrial technology threatens to throw them out of production, making them into dial watchers without function or purpose"

However, there is a related problem here, called "defunctionalisation", which concerns the loss of skills and expertise.

Expertise, Skills and Automation

In discussing expertise and skills, Rochlin (1997) distinguishes between someone who is proficient and someone who is an expert. Proficiency can be gained by rational and reductionist approaches whereby people are trained to follow logically deductive chains (i.e. rote learning or practice). But such learning does not produce an expert; that requires discretion and trial-and-error experience. The result is more of an integrated representation of knowledge (tacit knowledge) than a series of causally linked steps (Rochlin, 1997). There are vast differences, then, between the abilities of an expert and those of someone who is merely proficient, and we will return to this shortly.

Operational divisions of plants often regard themselves as repositories of expert knowledge of the systems they work with. They have experience with the actual working of the plant rather than the more formal understandings that engineers and managers get from specifications, rules and procedures. And while the knowledge of engineers is respected, operators can worry about interference from people who have no hands-on or practical experience. Concerns are particularly raised when professional consultants are used to improve performance or reliability at the human-machine interface (Rochlin 1997).

Rochlin (1997) describes the opinions of nuclear plant operators as follows:

"the 'beards' come in here, look around for an hour or two, and then go back and write up all these changes to the controls. Because it will cost the company a fortune, the decisions are going to be made upstairs (i.e. by managers and professional engineers). We'll have to argue for hours and hours about changes that will interfere with the way we work. And sometimes they make them anyway" (p. 110).

These are important considerations when you are concerned with air traffic control centres, utility grids, and military combat centres. Rochlin (1997) cites studies in which 90% of pilots felt that the logic of designers was substantially different to that of pilots and other users.

Criticism of designers who deliberately limit what pilots can make aircraft do has led to this type of phenomenon being called "glass cockpit syndrome". The term has been adopted more widely to apply to situations where human operators are separated from direct, tactile modes of control and instead placed in automated control rooms where computerised displays are their only sensory inputs (Rochlin, 1997). Glass cockpit syndrome has been reported not only by pilots and air traffic controllers, but also by nuclear plant operators and operators of other similarly hazardous and complex systems (Rochlin, 1997). One source of difficulty is that the technical constructs integrate technical, process and situational complexity into a single spatio-temporal image. Another source of concern comes from observations of teamwork where cultures of safety have been based on the "silent" practices of mutual manual monitoring and tacit task allocation. The effects of automation on these interactions are unclear, as is the question of how these functions might be re-allocated if the human operators have to step in and retake control of the system, as may happen in an emergency (Rochlin, 1997). A third concern is that the introduction of automated control allows systems to deal with more traffic: levels of traffic that cannot be controlled without the automated systems. Air-traffic control is an example here. Operators at many airports are capable of managing the system manually if computers go down, using the paper slips they maintain alongside the automated tools. Automation threatens to allow higher densities of traffic and numbers of airplanes which could not be managed manually. Air traffic controllers have, in fact, been identified as a unique case, due to their access to decision makers and the dependence of those decision makers on the controllers (as most powerful people are regular users of airports). It has been suggested that because of these factors air-traffic controllers have been able to resist design changes to their work environment that they consider dangerous or detrimental (Rochlin, 1997).


Human concerns about "Glass Cockpits" (Rochlin, 1997). Aviation accident investigations have linked the last three of these to new categories of errors and mistakes, some with fatal consequences.
  • Too much workload associated with re-programming flight management systems.
  • Too much heads-down time in the cockpit attending to the systems.
  • Deterioration of flying skills because of over-reliance on automation.
  • Increasing complacency, lack of vigilance, and boredom.
  • Lack of situational awareness when automated systems fail, making it difficult to identify and correct problems.
  • Reluctance to take over from automated systems, even in the face of compelling evidence that something is wrong.


However, few groups have the sort of control over their environment that air traffic controllers enjoy. Nuclear and chemical plant operators, and many others, face automation but lack the public access and visibility to challenge changes to their work environment (Rochlin, 1997).

Their concern is that with more automation old expertise will be lost, as new staff are trained in computer and management skills at the expense of developing a deeper knowledge of the systems they are operating - the deeper knowledge that comes from real experience of controlling those systems. In fact, there are fears that automating the 'easy' tasks of system control will actually make it harder for operators to control the system when the 'hard' problems arise (Rochlin, 1997). Rochlin (1997) describes the instincts that are developed from coal-face experience of controlling systems directly, rather than operating a computer control:
"This was brought home to me quite sharply when I was interviewing in a nuclear power plant control room, and heard an operator say that he "did not like the way that pump sounded" when it started up. Although the instrumentation showed no malfunction, the pump was stripped down at the operator's recommendation, at which point it was found that one of the bearings was near failure. My research notes (and those of others doing similar work) contain dozens of similar stories, ranging from detection of the onset of mechanical failures to air traffic controllers intervening in an apparently calm situation because they did not like the way the 'pattern' of traffic was developing" (p. 124).
The possible consequences of automation are explained by Rochlin (1997) as follows:
"Human learning takes place through action. Trial-and-error defines limits, but its complement, trial-and-success, is what builds judgment and confidence. To not be allowed to err is to not be allowed to learn; to not be allowed to try at all is to be deprived of the motivation to learn. This seems a poor way to train a human being who is supposed to act intelligently and correctly when the automated system fails or breaks down - that is, in a situation that comes predefined as requiring experience, judgment, and confidence as a guide to action" (p. 126).
Rochlin (1997) goes on to suggest that computerised control systems could be designed sensitively and interactively to support both safety and performance. However, he argues that this is not what usually happens. What does happen is that computer implementations sooner or later lead organisations to try to maximise efficiency. This reduces the margin of time in which a human could assess a situation and take appropriate action when problems arise, making human oversight of such systems effectively useless. The engineering solution to such a risk is to provide additional redundancy: systems that operate in parallel for monitoring and control tasks. But Rochlin (1997) suggests that redundancy still does not provide the essential resource necessary to guard against failure: slack, in the sense of a small excess margin that leaves resources available to deal with problems. In the opinion of the author (M. Mitchell), slack is a critical issue not only in control systems, but also in organisation design, as it is slack that allows degrees of variation and adaptation in times of difficulty. Note that in such times it is human experience and judgement that come into play, thus the value of having staff whose experience extends beyond just operating machines or following pre-determined processes.

C3I

Rochlin (1997) distinguishes clearly between command and control (the first two elements of the military term C3I: command, control, communications and intelligence). Contrary to what is suggested in much management literature, these two terms are not synonymous. Control involves feedback mechanisms which allow learning in relation to a specific purpose. It is suited to situations that are deterministic and relatively certain (eg: thermostat control in a house). Command, on the other hand, draws on learning from a wide range of circumstances associated with various purposes. Command is used where there is significant uncertainty, and draws on a much broader set of experience combined with heuristics (rules of thumb) and/or intuition.
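To make "control" concrete, here is a minimal Java sketch of a thermostat-style feedback loop; the Sensor and Heater interfaces are invented stand-ins for real hardware. Note that nothing in this loop draws on wider experience or copes with novel situations - that is precisely what separates control from command.

    // A minimal thermostat-style feedback loop: measure, compare, act.
    // Sensor and Heater are hypothetical interfaces standing in for hardware.
    interface Sensor { double readTemperature(); }
    interface Heater { void setOn(boolean on); }

    public class Thermostat {
        private final Sensor sensor;
        private final Heater heater;
        private final double setPoint;  // desired temperature (degrees C)
        private final double deadBand;  // margin to avoid rapid switching

        public Thermostat(Sensor sensor, Heater heater, double setPoint, double deadBand) {
            this.sensor = sensor;
            this.heater = heater;
            this.setPoint = setPoint;
            this.deadBand = deadBand;
        }

        // One iteration of the feedback loop.
        public void step() {
            double t = sensor.readTemperature();
            if (t < setPoint - deadBand) {
                heater.setOn(true);    // too cold: turn the heater on
            } else if (t > setPoint + deadBand) {
                heater.setOn(false);   // too warm: turn the heater off
            }
            // within the dead band: leave the heater as it is
        }
    }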

Rochlin (1997) believes that technocrats often mistakenly try to treat command problems as though they were control problems (thus treating command-and-control as a single indivisible term). The belief of these technocrats seems to be that increasingly complex operational environments - which emerge from having multiple missions, highly differentiated and specialised units, complex bureaucracies, etc - can be controlled using information systems involving increasingly complex models and integrated networks. Rochlin (1997) argues that this is a mistake. They would do better to accept the "necessity to cope with the irreducable increase in uncertainty" (p. 189) brought about by the factors above. What is required instead is people who are able to make decisions based on partial knowledge and information and correct on the fly through processes of trial and error (this links back to his point in an earlier section that people develop this ability through direct 'real' experience, not by operating machines). Instead, in modern business environments, power has been transferred from people whose experience is tacit and difficult to quantify to people with data and model manipulation skills. This change has been coupled with a corporate culture "where quantitative analysis and esoteric computer skills were becoming increasingly valued" (p. 190) (Rochlin, 1997). He argues that a similar change has occurred in the military: flexible and adaptive control by people high in the hierarchy has been fostered (based on the availability of information) rather than allowing those in the field to exercise powers of command. Rochlin (1997) refers to this as a "dream of being able to cut through the fog of war". He cites several disastrous historical failures based on these approaches. One example was the battle of the Somme in 1916, where, Rochlin argues, "More of a premium was put on retaining control to assure that the battle went according to the pre-scheduled timetable than to managing the actual advance toward the German lines". The consequences of this resound through the history books. A critical point is that the troops on the ground could see the strategy failing within a few hours, but were prohibited by the control structure from exploiting any available advantages. General Haig, on the other hand, did not realise the strategy had failed until days later. Rochlin (1997) compares the minutely detailed and rigorous plans of the Fourth Army at the Somme with the battle of Waterloo, where Wellington's victory over Napoleon was achieved without a written battle plan.

Rochlin (1997) goes on to analyse the campaigns in Korea and Vietnam. In Korea, the US forces were not organised into separate units with self-contained targets that could be autonomously pursued. Instead the units were "loosely-coupled", which in practice meant encumbered with the need to negotiate with each other in "real-time". Such reliance on communication also allowed commanders to exercise too much control, leaving no room for discretion and adjustment, with potentially disastrous consequences. Rochlin (1997) also describes the information "pathologies" introduced by centralisation in the Vietnam war. Rochlin (1997) states that, based on the Vietnam experience, General Heiser recommended resorting to a less centralised system, thus reducing requirements for information even if it created some 'slack' in resources. Rochlin (1997) concludes this section with the following quote from van Creveld:

"To study command as it operated in Vietnam is, indeed, almost enough to make one despair of human reason; we have seen the future, and it does not work" (p. 199).

Automation, Standardisation and Slack

By now we have established that Rochlin (1997) is concerned about the limiting effects of automation on:
  • individual human development;
  • the evolution of human knowledge and skills; and
  • safety in times of emergency.
Rochlin (1997) argues that "what is lost in many cases is not just variety, and specific human skills, but the capacity to nurture, enhance, and expand them through the messy processes of direct, trial-and-error learning" (pg 213).

Rochlin (1997) calls the elaborate, long-term collective effects of computerising and networking everything the "computer trap". He is concerned that this process (in the large) is mostly unexamined and that its effects may not only be irreversible, but may also create large scale vulnerabilities in the social and socio-technical systems that are essential to managing the structures and complexities of modern life.
Rochlin (1997) argues that one factor behind all this seems to be a push to eliminate from hazardous systems all possible sources of "human error". No matter what type of system is being dealt with, this push tends to tighten coupling, shorten response times and reduce redundancy. The final effect is that mechanisms of operation become so deeply embedded in computer systems that human operation in times of emergency is impossible.

References

Braverman, H. 1975. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. Monthly Review Press, New York.
Ford, M.R. 2009. The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future. Acculant.
Landauer, T.K. 1995. The Trouble with Computers: Usefulness, Usability, and Productivity. The MIT Press.
Rochlin, G.I. 1997. Trapped in the Net: The Unanticipated Consequences of Computerization. Princeton University Press.

Monday, July 18, 2011

Business Analysis, Automation, Offshoring and Jobs

One aspect of traditional business analysis was to look at how tasks within an organisation could be made more efficient. Initially this meant looking at how database systems and software applications could be created to support people in organisations, with the intention of lifting the productivity of those people. One effect of this (if done successfully) is that businesses need to employ fewer people to do the same amount of work. With the emergence of the internet it is now apparent that a large number of organisations have participated, or are participating, in a new wave of automation that significantly affects jobs. This is particularly obvious in the retail sector, where the low overheads of an online business allow prices that challenge many brick-and-mortar retailers (see for example: Harvey's war on Cheap TVs, Harvey Norman Reveals Online Move and Second Wave of Closures Hits Angus-Robertson). Of course, there is another element to this: offshoring. While business analysis may involve looking at which processes can be offshored, Ford (2009) takes this further by linking the offshoring of jobs with the automation of jobs, and suggests that in many cases this is an either-or choice.

Ford (2009) discusses how "knowledge economy jobs" are the ones most ripe for automation (counter-intuitively, as they are supposed to be high-skill jobs); however, a common precursor to automation of these jobs is that they are offshored to countries with lower wages. This again impacts local employment as these jobs are lost. See, for example, Online Shopping Threatens Local Jobs and also Foxconn Robots. Of course, one may argue that retail jobs are not really knowledge economy jobs. This is true, and I will present Ford's (2009) arguments that knowledge jobs are subject to the same offshoring/automation pressures in more detail shortly. Before that, let's just quickly consider the effects of either automation or offshoring. One effect is that the price of goods and services goes down as costs are reduced. This is the claimed economic positive. However, another effect is that some people may now be unemployed or underemployed (at least temporarily). This effect is more dubious: in theory the reduction in income is offset by cheaper goods and services, and - the big leap of faith - the loss of jobs in one area allows people to be re-deployed in other areas of the local economy. Why wouldn't this happen? Rather than entering into an in-depth analysis, let me just suggest that it doesn't always work very well, with the American Rust Belt perhaps being one example (the last two paragraphs of this article are interesting in this regard). In such cases, mass relocations of people may be necessary, leading to other, hidden, costs such as loss of amenity from prior infrastructure investments, etc. Ford (2009) argues that there is also a psychological effect at play here. As people see jobs disappearing due to offshoring and/or automation, they reduce spending. This drop in demand for goods and services may in itself be sufficient to lead to further job losses.

Now let us examine Ford's (2009) case that knowledge workers' jobs are at risk from automation. The basic premise on which he builds his argument is that people underestimate what can be done with software and hardware. Jobs, or aspects of jobs, that appear to require high levels of knowledge and skill can be automated sufficiently well to replace humans, even if the quality of outcomes is not quite as high in some cases (although it may well be higher). The second aspect of his argument is that knowledge workers are often highly educated and highly paid. Their high pay is what makes automation (or offshoring) worth attempting, as the savings are considerably greater than for a lower paid job. Even if only some aspects of a knowledge job are automated, the workforce can potentially be reduced considerably. However, Ford imagines a far bleaker scene: wholesale automation of jobs across the board, well beyond the capacity of any 'new' jobs that might be created to absorb the losses. He imagines a world where the rich get richer while the poor get poorer - until the economy stalls entirely, at which point all wealth declines as demand for goods and services crashes (this is somewhat consistent with Wealth Trickling Up Not Down). Ford thinks this was in fact a contributing factor to the 2008 Global Financial Crisis (GFC).

There are a number of reasons why Ford thinks that automation will be so successful. While offshoring reduces costs, eventually companies will look to automation. This will allow them to compete with cheap labour AND produce the high quality, precision products that people expect. Another factor is that products themselves are being designed so as to support automation. Even a job such as motor mechanic, which requires human skills such as visual recognition and physical manipulation, he sees as threatened. Already cars can be repaired based on computerised diagnostic tools, and it seems feasible that they may be designed in such a manner as to simplify automation via robotics. However, there are many more vulnerable jobs which require less human skill (see for example Food Inc). Supermarket shelf stocking is one, given an appropriately designed physical layout. As another example, we are all now familiar with the self-serve automated checkouts at our local supermarkets. In fact, Ford picks up on this very type of activity: in the service sector, automation pushes the difficult parts of the job onto the customer. Another group he sees at immediate risk are those with what he calls "interface" jobs. These are people who essentially collect together documents and enter them into systems or send them to other systems. As more businesses provide documents online (eg: bank statements) these types of exchange between systems will become easier to automate.

Ford is not the only one who has spotted this trend towards eliminating highly paid, highly educated, specialised workers. Schmidt (2000) also points out that highly trained workers are expensive, which drives employers to try to reduce the discretion of professionals by either standardising the work procedure or introducing "'expert' computer systems" (p. 38), the intention of which is to "transform the employee's decision making into a routine or rote activity and tend to strip the work-result of any imprint of the employee's own thinking" (p. 36).

Ford continues in his book to identify a number of social dangers that may arise from increasing automation and wealth inequality. This raises some very interesting questions, most of which I will leave for discussion at another time. However, one of interest is the predicament of China. China has developed as the "world's factory" due to its low-cost, low-paid labour force. These people expect at some stage to reap the benefits of their industrialisation by achieving higher standards of living. However, Ford fears that any attempt to improve the conditions or pay of Chinese workers will lead to automation of China's factories and mass unemployment in China (see the Foxconn article again). He also accuses automation and off-shoring of contributing to the GFC, but more on this later.


References

Ford, M. 2009. The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future. Acculant.

Schmidt, J. 2000. Disciplined Minds: A Critical Look at Salaried Professionals and the Soul-Battering System that Shapes their Lives. Rowman and Littlefield.


Tuesday, July 12, 2011

Technical Aims and Approaches

Issues for Technology in Business

Common technical issues that arise in relation to software systems (beyond meeting functional requirements) are:
  • Reuse – avoiding repeating work.
  • Reliability – having results that are accurate (eg: correct reports, correct customer information).
  • Availability – systems being available when required (controlled down time).
  • Scalability – the system's ability to cope with increases in workload (eg: large rise in customer base and therefore transactions).
  • Performance – system response is timely.
  • Agility – the system is adaptable to changes in the business and its environment.
Apart from supporting the day-to-day transaction and decision requirements of organisations, there is a need for IT systems to support Agile Information Organisations. Awazu and Desouza (2005) argue that organisations should:
  1. Sense signals in the environment
  2. Process them adequately
  3. Mobilize resources and processes to take advantage of opportunities
  4. Continuously learn and improve operations

Approaches to the Issues

There are a variety of solutions to the technical issues and problems identified above. Some of these, which we will be looking at, are:
  • Application servers and middleware.
  • Web-services.
  • Services Oriented Architecture (SOA).
  • Outsourcing
  • Proprietary solutions (eg: SAP)
All these approaches conceptually consist of three layers representing different aspects of the system. These layers are: the presentation layer, which is responsible for the user interface; the application layer, which represents the business rules; and the data layer, which represents the database system storing and retrieving information. These are shown in the following diagram:



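The same separation can be sketched directly in code. The following is only an illustration (all class and method names are invented): each layer is a separate component that knows only about the layer beneath it.

    // Data layer: storing and retrieving information.
    interface CustomerRepository {
        String findCustomerName(int customerId);
    }

    // Application layer: the business rules.
    class GreetingService {
        private final CustomerRepository repository;

        GreetingService(CustomerRepository repository) {
            this.repository = repository;
        }

        String greetingFor(int customerId) {
            // A (trivial) business rule: how a customer is to be addressed.
            return "Dear " + repository.findCustomerName(customerId);
        }
    }

    // Presentation layer: the user interface (here, just the console).
    class ConsoleUi {
        public static void main(String[] args) {
            CustomerRepository repo = id -> "Alice";     // stub data layer
            GreetingService service = new GreetingService(repo);
            System.out.println(service.greetingFor(42)); // prints: Dear Alice
        }
    }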
Traditionally in large organisations the three layers were implemented primarily on a single centralised mainframe or minicomputer system, as shown in the next diagram:


However, more modern systems tend to use multiple computers and spread the tasks across them. For example, a web system would take advantage of the user's computer and browser software to handle much of the presentation of the application (the user interface), although the code sent to the browser is still constructed on the server, with perhaps many smaller computers being used to run the business rules and database systems, for example as follows:


In these cases the software on the middle layer is called middleware, which is often implemented based around an application server.
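To illustrate the point that the browser only displays what the server constructs, here is a minimal servlet sketch (the class name and content are invented): the HTML the user sees is assembled in the middle tier and sent to the browser.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // The presentation is constructed here, on the server, then shipped
    // to the user's browser for display.
    public class WelcomeServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            out.println("<html><body>");
            out.println("<h1>Welcome to the bookshop</h1>"); // built server-side
            out.println("</body></html>");
        }
    }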

Application Servers

Application servers provide an infrastructure for building systems that attempts to address the problems of reuse, reliability, availability, scalability, performance and agility. This type of approach has become popular for rapidly growing internet businesses as well as for existing businesses expanding services for e-Commerce. An application server is basically like an operating system for a business, just as Windows is an operating system for a personal desktop. Windows supports all sorts of personal applications such as word processors, music players, web-browsers etc. An application server supports all sorts of business applications: sales, accounts receivable, logistics, customer relationship management, etc.

All business applications have a core component, which is the business rules they implement and enforce. On one side of this core is the data being processed by the rules (usually provided by a database system) and on the other side is the user's view of the application. These three abstract components of an application are called the data layer, the business rules (or logic) and the presentation layer. Two common 'operating systems' for supporting business applications are the Java Enterprise Edition (JEE) and Microsoft's .NET framework. For each of the three abstract components of a business system, JEE provides a container (a sub-system of the business operating system) to house the relevant aspect of the application.

These are as follows (a code sketch follows the list):

  • For the presentation layer: the Web container, along with the Client Application Container.
  • For the application layer (business rules): the Enterprise Java Bean (EJB) container (Java Beans are the programming elements from which business applications are constructed).
  • For the data layer: the application server with the Java Bean container (to connect to database systems).
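As a rough sketch of how these containers divide the work, the fragment below uses the EJB 3 annotation style (JEE 5 onwards). The class and field names are invented; the point is that the container, not the application code, supplies transactions, pooling and the database connection.

    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    // Data layer: a JPA entity the container maps to a database table.
    @Entity
    class CustomerOrder {
        @Id @GeneratedValue Long id;
        long customerId;
        long productId;
        int quantity;

        CustomerOrder() {}

        CustomerOrder(long customerId, long productId, int quantity) {
            this.customerId = customerId;
            this.productId = productId;
            this.quantity = quantity;
        }
    }

    // Application layer: a business rule housed in the EJB container.
    @Stateless
    public class OrderService {
        @PersistenceContext
        private EntityManager em; // injected by the application server

        public void placeOrder(long customerId, long productId, int quantity) {
            // The container wraps this call in a transaction automatically.
            em.persist(new CustomerOrder(customerId, productId, quantity));
        }
    }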
An application server architecture may look something like the following:



In this case, Tier 2 is the middleware layer and the business applications are the oblongs, which are constructed from programming objects. Both Tier 2 and Tier 3 may consist of many computers, with the work being automatically spread across these machines by the application server software. The resulting logical and physical organisation of the system looks something like the following:



Web Services

Traditionally the web has been built up from HTML documents which referenced each other. Traversing links, downloading files and initiating purchases and on-line transactions have been done manually by a user through a browser (typically, anyway - there are also many web-crawlers, softbots, etc operating over the web). Web-services move the model from end-users initiating transactions to programs initiating transactions on the user's behalf. Services can be described, published, discovered and utilised dynamically. This allows auctions, marketplaces and intelligent agents to exploit these new features.

Business functions can be published on the web and are accessible universally, so clients can invoke methods on objects remotely through the web. The services available are maintained in a directory and can be looked up and accessed using standard description languages and protocols - often automatically by software. Web-services are a combination of the web with distributed components (also called objects) and XML (eXtensible Markup Language) technologies.
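A minimal sketch of this shift: below, a program rather than a person at a browser invokes a service over the web (the endpoint URL and its XML response are hypothetical).

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // A program, not a human user, initiating a web transaction.
    public class QuoteClient {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/services/quote?product=widget");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "text/xml"); // results exchanged as XML
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // e.g. an XML document describing the quote
                }
            }
        }
    }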

Web services are organised using directories (similar to telephone books) as follows (a sketch in code follows the list):
  • White pages - Provides info about a service provider
    • business name
    • text description
    • contact info
    • identifiers - tax number, etc.
  • Yellow Pages - Business categories:
    • Industry - US gov. industry codes
    • Products/services - ECMA
    • Location - Geographical taxonomy
    • Implemented as name/value pairs.
  • Green Pages - Describes how to do e-business with companies
    • Provides business processes, service descriptions, binding info.
    • Platform independent
    • Services are categorized
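One way to picture a directory entry is as a simple data structure mirroring the white/yellow/green split. The sketch below is illustrative only; it is not an actual UDDI schema.

    import java.util.List;
    import java.util.Map;

    // An illustrative entry in a web-services directory.
    public class DirectoryEntry {
        // White pages: who the service provider is.
        String businessName;
        String description;
        String contactInfo;
        String taxIdentifier;

        // Yellow pages: business categories as name/value pairs,
        // e.g. "industry" -> "retail", "location" -> "AU/NSW".
        Map<String, String> categories;

        // Green pages: how to do e-business with the provider
        // (process, service description and binding information).
        List<String> serviceDescriptions;
    }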

Services Oriented Architecture (SOA)

Components in application servers can fairly easily be exposed as web-services (see the sketch below). This allows two types of reuse:
  1. Internal to the business.
  2. External services for others to use, with an arranged fee system.
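As a sketch of how easily a component might be exposed, the fragment below uses the JAX-WS annotations that ship with JEE (and with Java SE 6 onwards). The service name, address and logic are invented for illustration.

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // A component exposed as a web-service: its public methods become
    // operations that remote clients can invoke.
    @WebService
    public class PaymentService {

        public boolean authorise(String cardNumber, double amount) {
            // Placeholder rule: a real service would contact a card network.
            return amount > 0 && cardNumber != null && !cardNumber.isEmpty();
        }

        public static void main(String[] args) {
            // Publish at a (hypothetical) address; the platform generates the
            // WSDL description that clients use to find and call the service.
            Endpoint.publish("http://localhost:8080/services/payment", new PaymentService());
        }
    }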
Some organisations may specialise in providing web-services for others to use, for example, credit card payment processing. Touted benefits of SOA include:
  • Faster time to market – pull together existing services as components in some new service enterprise.
  • Lower cost – as services are used by multiple clients, rather than just a single developing organisation, the costs can be spread over all users.
  • Supports agility – new services can be quickly created or adapted based on existing service components.
Possible problems with building systems using SOA are (Craggs, 2007):
  • Finding services which do exactly what is needed.
  • Overheads in using XML as a data format rather than more efficient custom formats.
  • Poorly prepared business cases which do not get management support and therefore are not given adequate budgets.
An example of SOA in practice is provided by Kevin Clugage, Product Director for Oracle Fusion Middleware (paraphrased):
Consider a European leasing company with multiple lines of business for different types of leased assets, each one supported by different legacy systems. The disparate back-end systems created a lot of inconsistencies and process delays for customer service representatives, who were managing multiple order-to-quote processes for the same customers across these different lines of business. The company was able to rapidly build a composite application on top of those systems that enabled their customer service reps to process orders for each individual line of business through one consistent interface. A typical implementation time for this type of system could be one to two years, but by leveraging their existing systems to build a composite application, this company was able to complete the project in only six months.

Outsourcing and Proprietary Solutions

Outsourcing includes Application Service Providers (ASPs), such as web-service providers. Ideally, services provided should be as reliable as utilities such as an electricity or water supply. This has led to the term utility computing. In practice, there are risks with outsourcing and using 3rd party software. Some of these risks are:
  • Shirking – vendor bills for more work than provided, or replaces highly qualified staff with lesser qualified staff.
  • Poaching – vendor develops a strategic application for one client then uses it for others.
  • Opportunistic repricing – client enters into a long-term contract, then vendor increases prices or charges for unexpected extras.
  • Loss of business knowledge – vendor gains knowledge about your business, making you more dependent.
  • Scope creep – the work they are doing and charging for starts to extend beyond what was originally intended.
Some options to reduce the risks of outsourcing include:
  • Short-term contracts may allow companies to adapt to changing environments.
  • Only contract out select services.
  • Align payments to ASPs to measurable performance.
  • Divide large projects into smaller pieces to reduce risks.
Despite the risks, outsourcing may offer benefits to organisations that lack the skills or expertise to develop their own in-house solutions for specialised problems. Another problem with outsourcing to a provider who offers the same service to all customers is that it reduces the opportunities for an organisation to customise its operations and services and thus differentiate itself in the marketplace. This problem also arises when using common business packages or solutions such as those offered by organisations like SAP. It may be best to outsource, or buy existing solutions for, problems that are not core to your business, while developing your core business applications in-house, either with employed staff or a contracted development organisation such as ThoughtWorks.

References

Awazu, Y. and Desouza, K.C. 2005. Designing Agile Information Organisations: Information, Knowledge, Work, Technology. Available from: http://ifipwg82.org/Oasis2005/Awazu%20and%20Desouza.pdf
Craggs, S.T. 2007. SOA is Rubbish. Integration Consortium. Available from: http://www.btquarterly.com/?page=Application-Infrastructure
Kroenke, D.K. 2008. Experiencing MIS. Pearson. Chapter Extension 20.
Matena, V. et al. 2003. Applying Enterprise JavaBeans: Component Based Development for the J2EE Platform. 2nd Edition. Addison-Wesley.
Turban et al. 2007. Information Technology for Management: Transforming Organisations for the Digital Economy. Wiley. Chapter 14.

Monday, July 11, 2011

Business Process Modelling Basics

This material is sourced (except where noted) from the BPMN Introductory Whitepaper available here.

Business Process Modelling Language

Business Process Modelling Language (BPML), like UML and many other graphical modelling languages, aims to represent processes. However, it is intended in particular to be a non-technical language for non-technical business users. As such it allows quite detailed descriptions of processes, but not down to the technical level of UML. In this lesson we will consider some simple representations of processes as an introduction to BPML. You will only be required to produce diagrams to the level described here, although more sophisticated representations are possible. As with UML, the analyst determines how much detail should be included and what is represented. Also like UML, BPML is primarily a communication and documentation tool, intended to help both users and analysts understand how processes currently work, or will work. Also like UML it relies on a graphical representation, in this case called a Business Process Diagram (BPD), which is produced using Business Process Modelling Notation (BPMN).

Business Process Diagrams

BPDs are designed to be simple while allowing for the complexity of business processes. They have four categories of graphical elements:

  • Flow Objects
  • Connecting Objects
  • Swimlanes
  • Artifacts

Flow Objects

There are three core elements in the flow objects category:
  • Event – represented by a circle. Shows something that "happens". There are three different types, depending on when they affect the flow.
  • Activity – represented by a round-cornered rectangle. Represents work performed; can be atomic or compound.
  • Gateway – represented by a diamond shape. Used to represent decisions or alternatives.
These three objects are shown in their diagrammatic form below:



Activities and gateways are similar to their graphical equivalents in flowcharts. Activities can be thought of as some action taking place, while gateways indicate some choice or alternative paths for a process to progress along. Events, however, have no obvious flowchart analogy, so we will look at these in more detail.

Events

Recall that these are represented by circles. Events affect the flow of a process and usually have a cause or an impact (also called triggers and results). There are three types of events, differing by when they affect the flow:
  1. Start
  2. Intermediate
  3. End
A Start event indicates that something has happened in the environment that triggers a process to start, for example a customer telephone call or email being received. It is shown by a circle with a thin border. An End event shows the logical completion of a process; it is depicted by a circle with a single thick border. An Intermediate event shows the temporary suspension of a process and may be used to continue the process on a separate diagram.

Connecting Objects

Connecting objects are arrows that indicate various relationships between other graphical elements on the diagram. There are three objects in this category:
  1. Sequence flow – represented by a solid line and arrowhead. Shows sequence (order). Most usually connects the events and activities that make up the process, showing the path of activity followed under various circumstances.
  2. Message flow – represented by a dashed line and open arrowhead. Shows the flow of messages or information. These arrows typically show communication between processes, for example information flowing from a customer's ordering process to a business's sales process and vice-versa.
  3. Association – represented by a dotted line with a line arrowhead. Associates things with flow objects (such as inputs and outputs). May indicate various data flowing in and out of the process but not directly into other processes (i.e. data may be stored before later use by another process, or by the same process later).

 

 

Pools and Swimlanes

Businesses are often divided logically into functional departments (eg: accounting, marketing, sales, warehouse etc). Swimlanes show each of the departments relevant to a process. Using swimlanes it can be seen when a process crosses from the responsibility of one department to another. If two or more organisations are involved in a transaction (and typically for our purposes two are) then each organisation may have its own internal swimlanes (i.e. departments). In this case, each organisation is considered as a pool consisting of its own swimlanes. Therefore the two graphical objects relating to different organisations and their departments are:
  • Pool – represents a participant in a process. Usually in the context of B2B situations.
  • Lane – represents a sub-partition within a pool. Lanes are used to organise and categorise activities.
The diagram below shows a pool (which could represent one party/organisation in the process) and a pool divided into swimlanes (showing two departments in one party/organisation associated with the process; note that only those departments relevant to the process are depicted in a BPD).




Activities in pools are self-contained processes. Sequences cannot cross between pools, but messages may be used to show communication between pools. For example, in the following diagram a process cannot cross from one organisation to another, but messages can, and always in sequence: like a telephone conversation, each request receives an answer before the next request is placed.




In the following example, we can see one party/organisation divided into swimlanes. Typically this will be the organisation you are analysing. Unless you are also analysing the other organisation you typically will not show their internal departments (i.e. their pool will not have swimlanes), as you cannot control their internal operations and divisions. You do, however, need to understand how they will interact with your process (i.e. the messages you will exchange with them), and for this purpose it is necessary to show some details of their internal processes (in a single lane). The diagram below shows that within swimlanes it may be possible for a process to branch (fork) off multiple parallel processes internally.



Artifacts

Context can be added through the creation of artifacts. Three types are defined (but others can be added by the modeller):
  • Data Object – represented by a page icon. These show what data is required or produced by activities. They are connected to activities through associations.
  • Group – represented by a round-cornered rectangle with a dashed outline. Groups items for documentation or analysis purposes, but not for sequencing.
  • Annotation – represented by a half box and a dotted line. These are a mechanism to provide text information.


Uses of BPMN

BPMN is designed to cover process segments as well as end-to-end business processes at different levels of granularity. Within these objectives, two basic models are possible with BPD:
  1. Collaborative (B2B) Processes
  2. Internal Business Processes

Collaborative B2B Processes

These diagrams show the interactions between 2 or more business entities from a global point of view (i.e. they do not favour any particular participant; compare with a context diagram). The interactions consist of sequences of activities and message exchange patterns.
The activities of the collaboration participants are considered as "touch-points": activities that are visible to the public for each participant. They are called public or abstract processes (if looking at just one participant).
The diagram below shows an example of a collaboration. Each of the processes will have more detail internally, but we are only interested in the abstraction seen by the public:




Internal Business Processes

Internal business processes are based on the viewpoint of a single organisation, although they will still show interactions with external participants. Activities that are not visible to the public are depicted. The sequence flow for a single process cannot cross out of a pool. We may start with a high-level process and then show more detail in additional diagrams (drill-down). For our Bookshop example the sub-process Send Catalog may be represented in more detail as:




Note that purists will argue that a lane should not have two or more consecutive activities without crossing lanes, such as is seen with the last three activities in the Sales lane.

Why Use BPMN?

BPMN provides a single standard notation intended to replace a range of varying notations used across industry. The process models developed by business people need to be manually translated into technical execution models for IT people; BPMN provides a mapping to technical specifications while avoiding the overt complexity and technical focus of UML.
