Wednesday, August 26, 2009

The Age of Adaptability

Part 8 of Prisoners of the Real


Successful organizations, according to the equilibrium theorists of the 1950s, strike a balance between inducements and contributions. Inducements are "payments" made by the organization – wages paid to workers, services provided to customers, or income produced for investors. Though they vary widely, inducements can be measured independently of their use to those who receive them, and each has a specific value in a given situation. Contributions, on the other hand, are payments made to the organization – work from workers, fees from clients, or capital from investors. They too can be measured and assigned "utility" values.


The main idea was that individuals and groups receive inducements from the organization, in return for which they make contributions to it. A related idea was that organizations must balance production and satisfaction, known in systems parlance as group need satisfaction (GNS) and formal achievement (FA).


Striking the proper balance between inducements and contributions is the main trick. In equilibrium theory, success depends primarily on the answers to two questions: how desirable is it to leave the organization, and how easy is it? If the inducements are greater than the contributions, people should be less likely to leave. If the inducements are too great, however, the organization will very likely go under.


Management theorists think mainly in terms of behavior. In the case of equilibrium theory, they attempted to take into account the "subjective and relative character of rationality," something that had been missed by classical theorists. Nevertheless, they relied on a theory of rational choice that continued to simplify real situations. People make choices, they argued, either as a routine response to a repeated stimulus or as a problem-solving response to a novel situation. Still, an acceptably "rational" choice might be simply "satisfactory" rather than "optimal."


This distinction was critical. An optimal choice is only possible if all conceivable alternatives can be compared and the preferred one freely selected, a relatively uncommon state of affairs. A satisfactory choice, on the other hand, can be made with minimally acceptable alternatives. Put another way, you can search through a haystack for the sharpest needle, but most people will settle for any needle that is sharp enough. Rational behavior, as defined by equilibrium theorists, called for simplified situations that captured the basic aspects of a problem without worrying about all the complexities.
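The needle-in-a-haystack contrast can be sketched in code. This is a hypothetical illustration, not anything drawn from the equilibrium theorists themselves: an optimizer must examine every alternative to find the best one, while a satisficer stops at the first alternative that clears a minimal threshold.

```python
import random

def optimal_choice(alternatives, score):
    """Examine every alternative and return the highest-scoring one.
    Only possible when the full set of alternatives is available."""
    return max(alternatives, key=score)

def satisficing_choice(alternatives, score, threshold):
    """Return the first alternative whose score clears the threshold --
    'sharp enough' -- without examining the rest. Returns None if no
    minimally acceptable alternative turns up."""
    for alt in alternatives:
        if score(alt) >= threshold:
            return alt
    return None

# Hypothetical haystack: needles of varying sharpness.
random.seed(1)
needles = [random.random() for _ in range(1000)]
sharpest = optimal_choice(needles, score=lambda n: n)          # scans all 1000
sharp_enough = satisficing_choice(needles, lambda n: n, 0.9)   # stops early
```

The satisficer typically inspects far fewer alternatives, which is the whole point: it trades the guarantee of the best outcome for a tractable search.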


In addition to this "satisficing" approach to choice, equilibrium theory considers several other aspects of rational behavior. Various actions and their consequences, for example, are examined sequentially, aided by systematic tools such as checklists. In recurring situations, routines are developed to serve as satisfactory alternatives. These routines can be based on past experience, and can break tasks into their elementary steps. Under the assumption that routines can be adapted to meet any organization's objectives, matters such as the time needed, the appropriate activities and the specifications of the product itself become merely technical questions. As a consequence, however, individual discretion is severely limited to matters of form, and routines are often executed in a semi-independent fashion.


Basically, rational behavior is supposed to be goal-oriented and adaptive. It should deal with a few things at a time, under the assumption that simultaneous action to both maintain and improve things isn't normally possible. Thus, a "one-thing-at-a-time" approach, with each thing placed in an orderly sequence, becomes fundamental to the structure of the organization. Speaking for many management professionals in the post-war period, equilibrium theorists asserted that work groups can only function effectively within "boundaries of rationality," and only if they recognize that "there are elements of the situation that must be or are in fact taken as givens."


By the 1970s, the burgeoning science of management had spun off a variety of approaches to organizational and human resource development. While not all of them classified managers as scientists, most did urge them to appreciate and emulate scientific methods and attitudes. The scientist, a model of educated skepticism and caution, attempts to arrive at verifiable information through logical and experimental procedures. Results rest upon premises that are themselves verifiable, or at least aren’t inconsistent with the body of recognized scientific information. Any manager who adopts this style must accept the intrinsic superiority of the analytical framework of thought. He has to accept as reality what he sees, and also believe that by describing how the parts work he can eventually gain control of the whole.


Two of the most prominent schools of management science to evolve were the behaviorist school and operations research (OR). The former is inductive and problem-centered, focusing on observable and verifiable human behaviors within organizations. Behavior-oriented management emphasizes how decisions are made, and focuses on describing the nature of operations. The latter is deductive, and looks at decisions in terms of how they ought to be made.


Operations research prescribes the nature of operations, often by building a mathematical model of the problem. Behavioral theory draws from psychology, sociology and anthropology within a framework of scientific procedure. OR is tied to mathematics, statistics and economics in its search for optimal solutions. Over the past few decades most management practice has incorporated aspects of both.


Although most leaders say they believe that there are limits to the use of scientific method, they continue the search for fully refined "operational" methods. Despite the increasing complexity of organizational life, management's bottom line remains its faith in the possibility of constructing systems based on what philosophers call "the true nature of the real."


The first step for any organization is, of course, planning. For rational managers, the planning process involves mainly defining the means necessary to reach desired ends. Plans provide standards against which performance can be measured; they are the basis of organizational control, and their main value to most managers is their ability to help reduce ambiguity.


There are certainly various kinds of planners. A short-range planner, for example, who thinks that only the objectives visible on the horizon can be realized, might be called "practical" or "realistic." A long-range planner, in contrast, might argue that the only truly important objectives are those beyond the visible horizon. Such a planner would undoubtedly be called a "visionary" or "idealist." The difference between these apparent extremes, however, isn’t as great as it sounds.


In government and the corporate world, most idealists and realists agree that the main tasks are to prepare early and be ready to deal with rapid change. For both types, planning is "a process of preparing for the commitment of resources in the most economical fashion and, by preparing, of allowing this commitment to be made faster and less disruptively." That definition, provided by Columbia's E. Kirby Warren in his book, Long Range Planning: The Executive Viewpoint, assumes that the purposes of planning are to speed the flow of information, help managers understand how today's decisions will affect the organization, and keep disruptions to a minimum.


The central goal of planning is to increase predictability. In order to do this, however, managers must often depend on others. As the economist F. A. Hayek explained, "The more complicated the whole, the more dependent we become on that division of knowledge between individuals whose separate efforts are coordinated by the impersonal mechanisms for transmitting the relevant information." This principle of "bounded rationality" has led many managers to favor decentralized approaches to planning, since usually no one person can balance all the considerations that have a bearing on important decisions. But delegation often leads to conflicts, since various analyses must be combined in one general plan. Thus, despite efforts to decentralize planning, decision-making must remain relatively centralized.


In order to resolve conflicts, managers rely on "satisficing" techniques such as performance measurement and estimates of costs and returns. In educational and other human resource organizations, administrators call such an approach "participatory planning." This may involve the sharing of decision-making authority, less intrusive supervision, and more accessible relationships. The goals are generally improvement of efficiency, orderly operations, better staff morale, and positive relations between the administrator and the public.


Despite a participatory style, which is basically a way to divide the labor of planning in order to solve problems more rapidly, managers and administrators remain the regulators of the process, bargaining or relying more heavily on analysis as circumstances dictate. In either case, rational planning assumes that if people pursue their various self-interests within an appropriate structure, they will be led by an invisible hand to promote ends that may have little or nothing to do with their intentions.


Next: Living with Rational Management


Previous:

The Creative Also Destroys

Deconstructing Leadership

Anatomy of Insecurity

Managers and Their Tools

The Corporate Way of Life

The Dictatorship of Time

Rules for Rationals
