My Journey to the IIM !!

Re: WORST CASE SOME ONE!!!

When Does Deflation Occur?
Conquer the Crash 2nd Edition, Chapter 9
by Robert Prechter


Defining Inflation and Deflation
Webster's says, "Inflation is an increase in the volume of money and credit relative to available goods," and "Deflation is a contraction in the volume of money and credit relative to available goods." To understand inflation and deflation, we have to understand the terms money and credit.

Defining Money and Credit
Money is a socially accepted medium of exchange, value storage and final payment. A specified amount of that medium also serves as a unit of account. According to its two financial definitions, credit may be summarized as a right to access money. Credit can be held by the owner of the money, in the form of a warehouse receipt for a money deposit, which today is a checking account at a bank. Credit can also be transferred by the owner or by the owner's custodial institution to a borrower in exchange for a fee or fees - called interest - as specified in a repayment contract called a bond, note, bill or just plain IOU, which is debt. In today's economy, most credit is lent, so people often use the terms "credit" and "debt" interchangeably, as money lent by one entity is simultaneously money borrowed by another.

Price Effects of Inflation and Deflation
When the volume of money and credit rises relative to the volume of goods available, the relative value of each unit of money falls, making prices for goods generally rise. When the volume of money and credit falls relative to the volume of goods available, the relative value of each unit of money rises, making prices of goods generally fall. Though many people find it difficult to do, the proper way to conceive of these changes is that the values of units of money are rising and falling, not the values of goods. The most common misunderstanding about inflation and deflation - echoed even by some renowned economists - is the idea that inflation is rising prices and deflation is falling prices. General price changes, though, are simply effects.

The price effects of inflation can occur in goods, which most people recognize as relating to inflation, or in investment assets, which people do not generally recognize as relating to inflation. The inflation of the 1970s induced dramatic price rises in gold, silver and commodities. The inflation of the 1980s and 1990s induced dramatic price rises in stock certificates and real estate. This difference in effect is due to differences in the social psychology that accompanies inflation and disinflation, respectively, as we will discuss briefly in Chapter 12.

The price effects of deflation are simpler. They tend to occur across the board, in goods and investment assets simultaneously.

The Primary Precondition of Deflation
Deflation requires a precondition: a major societal buildup in the extension of credit (and its flip side, the assumption of debt). Austrian economists Ludwig von Mises and Friedrich Hayek warned of the consequences of credit expansion, as have a handful of other economists, who today are mostly ignored. Bank credit and Elliott wave expert Hamilton Bolton, in a 1957 letter, summarized his observations this way:

In reading a history of major depressions in the U.S. from 1830 on, I was impressed with the following:

(a) All were set off by a deflation of excess credit. This was the one factor in common.

(b) Sometimes the excess-of-credit situation seemed to last years before the bubble broke.

(c) Some outside event, such as a major failure, brought the thing to a head, but the signs were visible many months, and in some cases years, in advance.

(d) None was ever quite like the last, so that the public was always fooled thereby.

(e) Some panics occurred under great government surpluses of revenue (1837, for instance) and some under great government deficits.

(f) Credit is credit, whether non-self-liquidating or self-liquidating.

(g) Deflation of non-self-liquidating credit usually produces the greater slumps.

Self-liquidating credit is a loan that is paid back, with interest, in a moderately short time from production. Production facilitated by the loan - for business start-up or expansion, for example - generates the financial return that makes repayment possible. The full transaction adds value to the economy.

Non-self-liquidating credit is a loan that is not tied to production and tends to stay in the system. When financial institutions lend for consumer purchases such as cars, boats or homes, or for speculations such as the purchase of stock certificates, no production effort is tied to the loan. Interest payments on such loans stress some other source of income.

Contrary to nearly ubiquitous belief, such lending is almost always counter-productive; it adds costs to the economy, not value. If someone needs a cheap car to get to work, then a loan to buy it adds value to the economy; if someone wants a new SUV to consume, then a loan to buy it does not add value to the economy. Advocates claim that such loans "stimulate production," but they ignore the cost of the required debt service, which burdens production. They also ignore the subtle deterioration in the quality of spending choices due to the shift of buying power from people who have demonstrated a superior ability to invest or produce (creditors) to those who have demonstrated primarily a superior ability to consume (debtors).

Near the end of a major expansion, few creditors expect default, which is why they lend freely to weak borrowers. Few borrowers expect their fortunes to change, which is why they borrow freely. Deflation involves a substantial amount of involuntary debt liquidation because almost no one expects deflation before it starts.

What Triggers the Change to Deflation
A trend of credit expansion has two components: the general willingness to lend and borrow and the general ability of borrowers to pay interest and principal. These components depend respectively upon (1) the trend of people's confidence, i.e., whether both creditors and debtors think that debtors will be able to pay, and (2) the trend of production, which makes it either easier or harder in actuality for debtors to pay. So as long as confidence and production increase, the supply of credit tends to expand. The expansion of credit ends when the desire or ability to sustain the trend can no longer be maintained.

As confidence and production decrease, the supply of credit contracts.

The psychological aspect of deflation and depression cannot be overstated. When the social mood trend changes from optimism to pessimism, creditors, debtors, producers and consumers change their primary orientation from expansion to conservation. As creditors become more conservative, they slow their lending. As debtors and potential debtors become more conservative, they borrow less or not at all. As producers become more conservative, they reduce expansion plans. As consumers become more conservative, they save more and spend less. These behaviors reduce the "velocity" of money, i.e., the speed with which it circulates to make purchases, thus putting downside pressure on prices. These forces reverse the former trend.

The structural aspect of deflation and depression is also crucial. The ability of the financial system to sustain increasing levels of credit rests upon a vibrant economy. At some point, a rising debt level requires so much energy to sustain - in terms of meeting interest payments, monitoring credit ratings, chasing delinquent borrowers and writing off bad loans - that it slows overall economic performance. A high-debt situation becomes unsustainable when the rate of economic growth falls beneath the prevailing rate of interest on money owed and creditors refuse to underwrite the interest payments with more credit.
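
To make the growth-versus-interest condition concrete, here is a minimal Python sketch (not from the book; the rates, and the assumption that unpaid interest is simply rolled into new borrowing, are purely illustrative):

# Illustrative sketch: why a debt burden becomes unsustainable when economic
# growth falls below the interest rate on the money owed.

def debt_to_income_path(debt, income, interest_rate, growth_rate, years):
    """Return the debt/income ratio each year, assuming unpaid interest is rolled into new debt."""
    ratios = []
    for _ in range(years):
        debt *= (1 + interest_rate)    # unpaid interest is added to the debt
        income *= (1 + growth_rate)    # the economy grows (or stagnates)
        ratios.append(debt / income)
    return ratios

# Growth above the interest rate: the burden shrinks over time.
print(debt_to_income_path(100, 100, interest_rate=0.04, growth_rate=0.06, years=10)[-1])   # ~0.83

# Growth below the interest rate: the burden compounds without limit.
print(debt_to_income_path(100, 100, interest_rate=0.06, growth_rate=0.02, years=10)[-1])   # ~1.47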

When the burden becomes too great for the economy to support and the trend reverses, reductions in lending, spending and production cause debtors to earn less money with which to pay off their debts, so defaults rise. Default and fear of default exacerbate the new trend in psychology, which in turn causes creditors to reduce lending further. A downward "spiral" begins, feeding on pessimism just as the previous boom fed on optimism. The resulting cascade of debt liquidation is a deflationary crash. Debts are retired by paying them off, "restructuring" or default. In the first case, no value is lost; in the second, some value; in the third, all value. In desperately trying to raise cash to pay off loans, borrowers bring all kinds of assets to market, including stocks, bonds, commodities and real estate, causing their prices to plummet. The process ends only after the supply of credit falls to a level at which it is collateralized acceptably to the surviving creditors.
 
Re: WORST CASE SOME ONE!!!

Inflation is defined as a sustained increase in the general level of prices for goods and services. It is measured as an annual percentage increase. As inflation rises, every dollar you own buys a smaller percentage of a good or service.

The value of a dollar does not stay constant when there is inflation. The value of a dollar is observed in terms of purchasing power, which is the real, tangible goods that money can buy. When inflation goes up, there is a decline in the purchasing power of money. For example, if the inflation rate is 2% annually, then theoretically a $1 pack of gum will cost $1.02 in a year. After inflation, your dollar can't buy the same goods it could beforehand.
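
A quick sketch of that arithmetic in Python (the 2% rate comes from the example above; the ten-year horizon is just illustrative):

# Rough sketch: how prices rise and purchasing power falls at a constant
# annual inflation rate.

def price_after_inflation(price, annual_rate, years):
    """Price of the same good after compounding inflation for `years` years."""
    return price * (1 + annual_rate) ** years

def purchasing_power(dollars, annual_rate, years):
    """What a fixed number of dollars buys, measured in today's goods."""
    return dollars / (1 + annual_rate) ** years

print(price_after_inflation(1.00, 0.02, 1))    # the $1 pack of gum costs about $1.02 next year
print(purchasing_power(100, 0.02, 10))         # $100 buys only about $82 of today's goods in ten years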

There are several variations on inflation:
Deflation is when the general level of prices is falling. This is the opposite of inflation.
Hyperinflation is unusually rapid inflation. In extreme cases, this can lead to the breakdown of a nation's monetary system. One of the most notable examples of hyperinflation occurred in Germany in 1923, when prices rose 2,500% in one month!
Stagflation is the combination of high unemployment and economic stagnation with inflation. This happened in industrialized countries during the 1970s, when a bad economy was combined with OPEC raising oil prices.

In recent years, most developed countries have attempted to sustain an inflation rate of 2-3%.

Causes of Inflation
Economists wake up in the morning hoping for a chance to debate the causes of inflation. There is no one cause that's universally agreed upon, but at least two theories are generally accepted:

Demand-Pull Inflation - This theory can be summarized as "too much money chasing too few goods". In other words, if demand is growing faster than supply, prices will increase. This usually occurs in growing economies.

Cost-Push Inflation - When companies' costs go up, they need to increase prices to maintain their profit margins. Increased costs can include things such as wages, taxes, or increased costs of imports.

Costs of Inflation
Almost everyone thinks inflation is evil, but it isn't necessarily so. Inflation affects different people in different ways. It also depends on whether inflation is anticipated or unanticipated. If the inflation rate corresponds to what the majority of people are expecting (anticipated inflation), then we can compensate and the cost isn't high. For example, banks can vary their interest rates and workers can negotiate contracts that include automatic wage hikes as the price level goes up.
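
A small sketch of the anticipated-inflation point, using the standard Fisher relation between real and nominal interest rates (the rates here are invented, not taken from the text above):

# A lender who anticipates inflation correctly can build it into the nominal
# rate and preserve the real return; unanticipated inflation transfers
# purchasing power from creditor to debtor.

def nominal_rate(real_rate, expected_inflation):
    """Fisher relation: (1 + nominal) = (1 + real) * (1 + expected inflation)."""
    return (1 + real_rate) * (1 + expected_inflation) - 1

def realized_real_return(nominal, actual_inflation):
    """Real return actually earned once inflation is known."""
    return (1 + nominal) / (1 + actual_inflation) - 1

nominal = nominal_rate(real_rate=0.03, expected_inflation=0.02)    # ~5.1% quoted rate

print(realized_real_return(nominal, actual_inflation=0.02))   # ~3.0%: inflation anticipated, lender unharmed
print(realized_real_return(nominal, actual_inflation=0.08))   # ~-2.7%: unanticipated inflation, creditor loses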

Problems arise when there is unanticipated inflation:

Creditors lose and debtors gain if the lender does not anticipate inflation correctly. For those who borrow, this is similar to getting an interest-free loan.
Uncertainty about what will happen next makes corporations and consumers less likely to spend. This hurts economic output in the long run.
People living off a fixed-income, such as retirees, see a decline in their purchasing power and, consequently, their standard of living.
The entire economy must absorb repricing costs ("menu costs") as price lists, labels, menus and more have to be updated.
If the inflation rate is greater than that of other countries, domestic products become less competitive.

People like to complain about prices going up, but they often ignore the fact that wages should be rising as well. The question shouldn't be whether inflation is rising, but whether it's rising at a quicker pace than your wages.
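
A tiny sketch of that wage-versus-price comparison (the numbers are invented):

# What matters is the change in real wages: whether pay rises faster than prices.

def real_wage_change(wage_growth, inflation):
    """Exact change in the purchasing power of a wage."""
    return (1 + wage_growth) / (1 + inflation) - 1

print(real_wage_change(wage_growth=0.04, inflation=0.03))   # ~ +1.0%: better off despite rising prices
print(real_wage_change(wage_growth=0.02, inflation=0.05))   # ~ -2.9%: worse off even with a raise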

Finally, inflation is a sign that an economy is growing. In some situations, little inflation (or even deflation) can be just as bad as high inflation. The lack of inflation may be an indication that the economy is weakening. As you can see, it's not so easy to label inflation as either good or bad - it depends on the overall economy as well as your personal situation.
 
Re: WORST CASE SOME ONE!!!

It seems that people often confuse the cause of inflation with the effect of inflation and unfortunately the dictionary isn't much help. As you can see in my article What is the Real Definition of Inflation? the modern definition of inflation is
"A persistent increase in the level of consumer prices or a persistent decline in the purchasing power of money..."

In other words according to this definition inflation is things getting more expensive.

But that is really the effect of inflation not inflation itself. The American Heritage® Dictionary of the English Language, Fourth Edition, Copyright © 2000 Published by Houghton Mifflin Company goes on to say:

...caused by an increase in available currency and credit beyond the proportion of available goods and services.

In other words, the common usage of the word inflation is the effect that people see. When they see prices in their local stores going up they call it inflation.

But what is being inflated? Obviously prices are being inflated. So this is actually "price inflation".

Price inflation is a result of "monetary inflation".

Or "monetary inflation" is the cause of "price inflation".

So what is "monetary inflation" and where does it come from?

"Monetary inflation" is basically the government figuratively cranking up the printing presses and increasing the money supply.

In the old days that was how we got inflation. The government would actually print more dollars. But today the government has much more advanced methods of increasing the money supply. Remember, "monetary inflation" is the "increase in the amount of currency in circulation".

But how do we define currency in circulation? Is it just the cash in our pockets? Or does it include the money in our checking accounts? How about our savings accounts? What about Money Market accounts, CD's, and time deposits?

"The Federal Reserve tracks and publishes the money supply measured three different ways-- M1, M2, and M3.

These three money supply measures track slightly different views of the money supply with M1 being the most liquid and M3 including giant deposits held by foreign banks. And M2 being somewhere in between i.e. basically Cash, Checking and Savings accounts.

Interestingly, the FED has decided to stop tracking M3 effective March 23, 2006 for some mysterious reason. See the article on M3 Money Supply for what they could be hiding.

But back to the question of the cause of inflation. Basically when the government increases the money supply faster than the quantity of goods increases we have inflation. Interestingly as the supply of goods increase the money supply has to increase or else prices actually go down.
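
In other words, inflation roughly tracks the gap between money growth and the growth of goods. A minimal sketch of that relationship, assuming the speed at which money changes hands stays constant (the growth rates are invented):

# Prices move with the ratio of money to goods, so the change in prices is
# (1 + money growth) / (1 + goods growth) - 1.

def inflation(money_growth, goods_growth):
    return (1 + money_growth) / (1 + goods_growth) - 1

print(inflation(money_growth=0.07, goods_growth=0.03))   # ~ +3.9%: money outruns goods, prices rise
print(inflation(money_growth=0.02, goods_growth=0.05))   # ~ -2.9%: goods outrun money, prices fall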

Many people mistakenly believe that prices rise because businesses are "greedy". This is not the case in a free enterprise system. Because of competition the businesses that succeed are those that provide the highest quality goods for the lowest price. So a business can't just arbitrarily raise its prices anytime it wants to. If it does, before long all of its customers will be buying from someone else.

But if each dollar is worth less because the supply of dollars has increased, all business are forced to raise prices just to get the same value for their products.
 
Re: WORST CASE SOME ONE!!!

In common usage deflation is generally considered to be "falling prices". But there is much more to it than that. Often people confuse deflation with disinflation or with Depression (as in "the Great Depression"). These three terms are related but not synonymous.

According to Investorwords.com the definition of Deflation is "a decline in general price levels, often caused by a reduction in the supply of money or credit. Deflation can also be brought about by direct contractions in spending, either in the form of a reduction in government spending, personal spending or investment spending. Deflation has often had the side effect of increasing unemployment in an economy, since the process often leads to a lower level of demand in the economy. The opposite of inflation."

What Causes Deflation?
Although everything said above is true it doesn't present the true nature of deflation. It tries to define it by presenting several possible causes. For a true understanding of both Inflation and Deflation we need to understand Supply and Demand. Just like every other commodity there is a supply of and a demand for "Money".

In this article I am not going to address the issue of what true money is; for the sake of this article, we will assume money is simply something other people are willing to accept in exchange for goods or services.

Price levels are the direct result of the relationship between the supply and the demand for any given item. But the value of the money used to pay for those items is also subject to the same relationship.

For the sake of simplicity let's assume that we are on an island and there are ten equally desirable goods in our universe and ten $1.00 bills available to purchase them with. We can safely assume that each item will end up costing $1.00 each.

If the quantity of money increases to $20 (without increasing the quantity of goods) the price of the goods will increase to $2.00 - that is inflation.

If, however, the quantity of money decreases to $5.00 the price will fall to 50¢ (deflation). This is what the first part of the above definition is referring to. The money supply can also be reduced if someone on our island hoards half of it and refuses to spend it on anything no matter what. This is the second part of the definition (reduction in spending).

So far we have only looked at part of the equation, the supply of money. But what happens if the quantity of goods available increases? What if instead of having ten items we build ten more? We now have twenty items and only $10.00, so once again each item is worth 50¢.
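
The island arithmetic above, spelled out as a tiny Python sketch (same numbers as in the text):

# The price of each good is simply the money available divided by the goods available.

def price_per_item(total_money, total_goods):
    return total_money / total_goods

print(price_per_item(10, 10))   # $1.00 each: the starting point
print(price_per_item(20, 10))   # $2.00 each: more money, same goods (inflation)
print(price_per_item(5, 10))    # $0.50 each: less money, same goods (deflation)
print(price_per_item(10, 20))   # $0.50 each: same money, more goods (the "good" deflation)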

This form of deflation is the good type. Everyone assumes that deflation is bad because the last major deflation we had was during the "Great Depression", so deflation and Depression are synonymous in many people's minds. In actuality, if prices go down because goods can be manufactured more cheaply, this ends up increasing everyone's wealth.

This is exactly what happened in the late 1990s: with cheap productive capacity available from former Communist countries, the quantity of goods increased while the money supply increased at a slower rate.

What about Demand?
What about the demand for goods? If everyone on our island already has one of the items available and no one needs any more, naturally the price will also fall as sellers try to find someone to take them off their hands.

So far we have dealt with the supply of money, the supply of goods and the demand for goods, but what about the demand for money?

Is it possible that the demand for money could increase or decrease? Generally, the demand for money is measured by how much people are willing to pay to borrow it (i.e. interest rates). If inflation is high, interest rates will have to be higher to compensate for the loss of purchasing power. But also if the demand for money rises banks can charge more to loan it. Conversely, if the demand for money falls interest rates will also fall.

So there are four causes for Deflation.

Decreasing Money Supply
Increasing Supply of Goods
Decreasing Demand for Goods
Increasing Demand for Money
Is Deflation Good or Bad?
Actually, deflation itself is neither good nor bad. Whether people suffer or rejoice depends on the cause of the deflation. As I said, if the cause is an increasing supply of goods, that would be good. Another example of this is the late 1800s, when the Industrial Revolution dramatically increased productivity.

However, if deflation is caused by a decreasing supply of money, as in the Great Depression, that is bad. The stock market crash sucked the liquidity out of the marketplace, the economy contracted, people lost their jobs, and banks stopped loaning money because people were defaulting. The problem compounded as more people lost their jobs and demand fell further, causing still more people to lose their jobs, and so on.

So deflation can be caused by several different things and thus can be good or bad depending on the cause.
 
Re: WORST CASE SOME ONE!!!

Inflation as a Problem
Clearly, price inflation is a major modern economic phenomenon, but why is it a problem? As we have observed, there are no natural units for the measurement of purchasing power, and there is no natural landmark to tell us that one purchasing power is better than another, either. If it is important (say) that the dollar have the purchasing power it had in 1918, we could arrange that by printing bills with one fewer zero, so the $100 bill would become a $10 bill, and so on. And some countries have done that -- though, in most cases, they were countries where the price level had gone up a hundred or a thousand times or more. So why does it matter how many zeros we have on our bills?
It probably doesn't, but most economists would say that even though the price level doesn't matter, changes in the price level do matter. A rising price level -- inflation -- has the following disadvantages:
  • It creates uncertainty, in that people do not know what the money they earn today will buy tomorrow.
  • Uncertainty, in turn, discourages productive activity, saving and investing.
  • Inflation reduces the competitiveness of the country in international trade. If this is not offset by a devaluation of the national currency against other currencies, it makes the country's exports less attractive, and makes imports into the country more attractive, which in turn tends to create unbalance in trade.
  • Inflation is a hidden tax on "nominal balances." That is, people who hold bonds and bank accounts in dollars lose the value of those accounts when the price level rises, just as if their money had been taxed away.
  • The inflation tax is capricious -- some lose by it and some do not without any good economic reason.
  • As the purchasing power of the monetary unit becomes less predictable, people resort to other means to carry out their business, means which use up resources and are inefficient.
These inconveniences might be reason enough to call for a stable price level. If it doesn't matter what the price level is, and changing it causes problems, why not keep it stable? But the strong concern about inflation in some countries stems from experiences that are worse than inconvenient: hyperinflation.
 
Re: WORST CASE SOME ONE!!!

The term "hyperinflation" refers to a very rapid, very large increase in the price level. Measurement problems will be too minor to notice on this scale. There is no strict formal definition for the term, but cases of hyperinflation tend to be expressed in terms of multiples rather than percentages. "For example, in Germany between January 1922 and November 1923 (less than two years!) the average price level increased by a factor of about 20 billion." Some representative examples of hyperinflation include
"Hyperinflation
  • 1922 Germany 5,000%
  • 1985 Bolivia >10,000%
  • 1989 Argentina 3,100%
  • 1990 Peru 7,500%
  • 1993 Brazil 2,100%
  • 1993 Ukraine 5,000%"
These quotations from other web pages are given mainly as examples of what people have in mind when they talk about hyperinflation, and I cannot say just how accurate the figures are. In any case, figures for the purchasing power lost in hyperinflations can only be rough estimates. Numismatics (coin and currency collecting) gives some examples of just how far hyperinflations can go: an information page for currency collectors tells us that, in the Hungarian hyperinflation after World War II, bills for one hundred million trillion pengos were issued (the pengo was the Hungarian currency unit) and bills for one billion trillion pengos were printed but never issued. (I'm using American terms here -- the British express big numbers differently).
The story behind the German hyperinflation illustrates how all hyperinflations have come about, and is of particular interest in itself. After World War I, Germany had a democratic government, but little stability. A general named Kapp decided to make himself dictator, and marched his troops and militias into Berlin in an attempted coup d'etat known as the "Kapp Putsch." However, the German people resisted this attempt at dictatorship with nonviolent noncooperation. The workers went out in a general strike and the civil servants simply refused to obey the orders of Kapp and his men. Unable to take command of the country, Kapp retreated and ultimately gave up his attempt.
However, the German economy, never very sound, was further disrupted by the conflict surrounding Kapp's putsch and by the strike against it; and production fell and prices rose. The rise in prices destroyed the purchasing power of wages and government revenues, and the government responded to this by printing money to replace the lost revenues. This was the beginning of a vicious circle. Each increase in the quantity of money in circulation brought about a further inflation of prices, reducing the purchasing power of incomes and revenues, and leading to more printing of money. In the extreme, the monetary system simply collapses. In Germany, people would rush out to spend the day's wages as fast as possible, knowing that only a few hours' inflation would deprive today's wages of most of their purchasing power. One source says that people might buy a bottle of wine in the expectation that on the following morning, the empty bottle could be sold for more than it had cost when full. Those with goods to barter resorted to barter to get food; those with nothing to barter suffered.
This is the way that hyperinflations happen: by a self-reinforcing vicious cycle of printing money, leading to inflation, leading to printing money, and so on. This is one reason why inflation is feared. There is always the concern that even a little inflation this year will lead to more next year, and so on. But some countries have experienced very great inflations -- 50 to 100% per year -- without ever falling into the cycle of hyperinflation, and there has never been a hyperinflation that could not have been avoided by a simple government determination to stop the expansion of the money supply. The key point is this: the monetary system can function reasonably well as long as the value of the monetary unit is reasonably stable and predictable, and the high standards of living of modern societies cannot exist without a functioning monetary system.
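
A back-of-the-envelope sketch of how monthly rates compound into the enormous factors quoted above (the rates below are round illustrative numbers, not precise historical figures):

# How many times prices multiply in a year at a constant monthly inflation rate.

def yearly_factor(monthly_rate):
    return (1 + monthly_rate) ** 12

print(yearly_factor(0.02))    # ~1.27x: a steady 2% a month
print(yearly_factor(1.00))    # 4096x: prices doubling every month
print(yearly_factor(25.0))    # ~1e17: the 2,500%-a-month figure quoted earlier for 1923 Germany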
 
Re: WORST CASE SOME ONE!!!

The fourth macroeconomic problem mentioned in this chapter is the problem of economic stagnation.
Stagnation
A stagnation is a period of many years of slow growth of gross domestic product, in which the growth is, on the average, slower than the potential growth in the economy. We should stress that this is quite controversial. There are economists who do not believe that stagnation exists as a problem. The difficulty is with the idea that growth is "less than the potential growth in the economy." But what is the potential? Those who see a great potential will see stagnation where those who see less potential will not.
In any case, the idea of stagnation was first discussed in the 1930's. The Great Depression, a period in which the growth of national product was generally negative, was thought of by some economists as a symptom of "secular stagnation." The word "secular" in this context meant that the stagnation was long-term and persistent rather than merely cyclical, with causes largely beyond the control of the government. The closing of the frontier, slowing technological progress, and higher savings rates because of higher average incomes were mentioned as possible causes.
Some economists believe that the U. S. A. has suffered from a stagnation in recent decades. One reason for their thinking is expressed in the following table:
Growth of Real GDP by Decades, U.S.A.

Decade          Rate of growth
1960's          4.46%
1970's          3.24%
1980's          2.84%
1990-1995       1.81%
Remember that in this table we are looking at rates of growth, not the levels of RGDP. Real GDP was greater in the 1990's, but was rising at a slower rate than in the 1960's. Since the middle of the 1990's, we have had a period of fairly steady higher rates of economic growth, around 4%, so it may be that the period of stagnation is over. It is clear that in 1970-1995, American economic growth slowed down. But is a growth slowdown a problem? It might not be a problem, depending on the reasons for the slowdown.
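
For readers who want to see what those decade rates imply, here is a small sketch that compounds an arbitrary starting index forward at the rates in the table (treating 1990-1995 as roughly five years):

# The level of real GDP keeps rising even as its growth rate slows.

def grow(level, annual_rate, years):
    """Compound a level forward at a constant annual rate."""
    return level * (1 + annual_rate) ** years

gdp = 100.0   # index level at the start of the 1960s (arbitrary)
for decade, rate, years in [("1960's", 0.0446, 10), ("1970's", 0.0324, 10),
                            ("1980's", 0.0284, 10), ("1990-1995", 0.0181, 5)]:
    gdp = grow(gdp, rate, years)
    print(decade, round(gdp, 1))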
 
Re: WORST CASE SOME ONE!!!

There are several reasons why actual or potential Real GDP growth might slow down. The table below shows evidence on some of these possibilities. If both potential and actual growth have slowed to about the same extent, then perhaps we do not have a stagnation problem. (The Table is derived from data from the Bureau of Labor Statistics, Penn World Tables, The Economic Report of the President, and the U. S. Census Bureau).
Population growth might slow. Population growth increases both the demand for goods and services and the supply of labor to produce them, so slower population growth would mean slower potential economic growth. American population growth has slowed to some extent.

Fewer people might choose to work. The proportion of the population who choose to work is called the "rate of labor force participation." A decrease in the rate of labor force participation would slow the potential growth of output, while an increase in the rate of labor force participation would increase it. The American rate of labor force participation has tended to increase in the last few decades, to some extent offsetting the slowing of population growth.

The growth of labor productivity might slow. One of the most important sources of economic growth is the increase in output per worker. Labor productivity is output per unit of labor. If the growth of labor productivity is slower, the growth of total output will also be slower. Productivity growth itself might be stagnant -- that is, less than its own potential -- so it is not clear whether a decrease in productivity growth would be associated with stagnation or not. If productivity growth is itself below potential, we would see that as stagnation; but if potential and actual productivity growth have decreased by about the same amount, then we would not see that as stagnation.
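
A rough sketch of how these three pieces fit together (all numbers invented): output growth is approximately population growth plus the change in labor force participation plus labor productivity growth.

# Exact version of the approximate sum of the three components.

def output_growth(pop_growth, participation_growth, productivity_growth):
    return (1 + pop_growth) * (1 + participation_growth) * (1 + productivity_growth) - 1

# Slower population growth partly offset by rising participation,
# but with slower productivity growth the total still falls.
print(output_growth(0.012, 0.003, 0.028))   # ~4.3%
print(output_growth(0.009, 0.005, 0.014))   # ~2.8%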
 
Re: WORST CASE SOME ONE!!!

Cost Curves and How They Relate
In a supplemental note to a reprint of the classic "Cost Curves and Supply Curves," Jacob Viner relates his confusion about how long-run and short-run cost curves relate.[1] He asked a draftsman to make the long-run curve a U-shaped envelope that consisted of the minimum points on all short-run curves. The draftsman pointed out the impossibility of this construction, causing Viner to realize that what he really wanted was that the long-run curve consist of points on the average cost curves at the minimum-cost scale for each quantity, the well-established envelope. Despite more than a half-century of fine-tuning, imprecision about how the various curves -- long-run and short-run, average and marginal -- relate remains a source of potential confusion, at least when drafting is involved.
This paper presents a set of Excel workbooks (see footnote 4 for information on downloading the workbooks) that produce graphs of long-run and short-run cost curves that are consistent. That is, the long-run curve is the proper envelope of the short-run curves, and the average and marginal curves (both long-run and short-run) are consistent. Before presenting the workbooks, we review the relationships that must hold if graphs are to represent economic relationships accurately.
The paper is organized as follows. The next section reviews textbook materials: how the various long-run and short-run curves relate to each other. It is followed by a section that discusses the specific functional forms we use to illustrate these relationships. Then a section describes how a set of Excel workbooks provides a graphic representation of these curves and describes the options the user has when employing these workbooks. Finally, we extend the analysis briefly to include revenue as well as costs, for both price-taking and price-making firms.




The Cost Curves and How They Bend
The typical microeconomics textbook and classroom development of cost curves consists of two parts. One shows how the per-unit cost curves (average and marginal) relate to total costs. The second shows how long-run cost curves (total, average, and marginal) relate to their short-run counterparts.
First consider the relationships between average and marginal curves. When the average value involves a linear relationship, the representation is simple: the marginal curve is half the horizontal distance to the average curve. For nonlinear cost curves, however, drawing the marginal curves so that they correspond to the average curve (or vice versa) can be tedious. Too often we simply sketch a marginal cost curve that cuts the average cost curve at its minimum point and assume that this is good enough. Even textbook authors commit this error fairly frequently. (This is not an exercise in textbook bashing. We do not cite the textbooks that commit the errors noted below. Readers may contact the authors for examples.)
Failure to draw the curves consistently causes at least two inconsistencies. One is that the quantity at which marginal revenue equals marginal cost will not be the quantity at which profit -- (price less average cost) times quantity -- is, in fact, maximized. The other is that profit as defined above will not equal profit defined as the area between the marginal revenue and marginal cost curves. One of the leading principles textbooks contains a graph in which the area between the marginal revenue curve and the marginal cost curve is roughly two thirds larger than the area defined in terms of price, average cost, and quantity. Such a discrepancy is large enough to confuse students.
The representation of long-run vs. short-run curves contains the above difficulties and at least one more. When drawing the long-run/short-run per-unit curves, a common error is to forget that, when the average curves are tangent, the marginal curves must intersect. Generally textbook authors are careful to get this right (or to avoid it by drawing sections of the long-run marginal curves that do not approach the point of necessary intersection), but the occasional error occurs. Certainly in hand-drawing examples for classroom or for examinations, it is easy to overlook this necessary relationship.
To summarize: For representations of cost curves to be consistent, the conditions below must pertain.
  • For both long-run and short-run curves, the average and marginal curves must derive from the same total cost curve;
  • At the quantity for which long-run and short-run average cost curves are tangent, the accompanying marginal cost curves must intersect.
One other condition must be imposed if the long-run average (or total) cost curve is to be an envelope consisting of minimum points on a series of short-run average (or total) curves:
  • The short-run average cost curve must exhibit more curvature than its long-run counterpart (or equivalently, the short-run marginal cost curve must intersect its long-run counterpart from below).
Functional Form
Our task is to define a family of long-run cost curves and short-run cost curves such that the above interrelations can be ensured. Each of the restrictions cited above provides two conditions at some specified quantity (one for the average function's average value and the other for its derivative, the marginal value). This requires that the functional form used be defined by two parameters. For tractability, we use the polynomial form. The third condition establishes limits on the orders of polynomials that can be employed. After considerable experimentation, we determined that the following curves perform quite well.
The long-run total cost curve has the form:[2]

1. LTC = aQ^2 + bQ^0.5 [3]

The short-run total cost curve's form is:

2. STC = mQ^3 + n

This form for the STC has one drawback: the average variable cost curve approaches zero as quantity decreases. Circumventing this difficulty involves arbitrary impositions on the functional form. Rather than encumber the analysis with such impositions (which can result in absurd results like downward-sloping total cost curves), we add a set of worksheets for short-run curves, based on the functional form:

2'. SAC = p + q(Q - Q*)^2 (or equivalently SAC = rQ^2 + sQ + t)

Implementing this form requires two arbitrary impositions: that SAC approach a specific finite value as Q approaches zero, and that fixed cost be a specified fraction of total cost at a specified value of Q.
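
To see how these two-parameter forms pin down consistent curves, here is a minimal Python sketch (it is not the authors' Excel workbook, and the inputs Q0, C0 and Q1 are invented). It solves for a and b from a chosen minimum of long-run average cost, fits the form (2) short-run curve tangent to the long-run curve at Q1, and checks the tangency and intersection conditions listed earlier:

# Long-run form: LTC = a*Q**2 + b*Q**0.5, so LAC = a*Q + b*Q**-0.5 and
# LMC = 2*a*Q + 0.5*b*Q**-0.5.  Short-run form: STC = m*Q**3 + n.

Q0, C0 = 10.0, 6.0   # quantity and value of the minimum long-run average cost (user inputs)
Q1 = 14.0            # quantity at which the short-run curve is tangent (user input)

# LAC is minimized where a = 0.5*b*Q0**-1.5; combined with LAC(Q0) = C0 this gives 3*a*Q0 = C0.
a = C0 / (3 * Q0)
b = 2 * a * Q0 ** 1.5

def LTC(Q): return a * Q ** 2 + b * Q ** 0.5
def LMC(Q): return 2 * a * Q + 0.5 * b * Q ** -0.5
def LAC(Q): return LTC(Q) / Q

# Tangency at Q1: STC(Q1) = LTC(Q1) and SMC(Q1) = LMC(Q1), with SMC = 3*m*Q**2.
m = LMC(Q1) / (3 * Q1 ** 2)
n = LTC(Q1) - m * Q1 ** 3

def STC(Q): return m * Q ** 3 + n
def SMC(Q): return 3 * m * Q ** 2
def SAC(Q): return STC(Q) / Q

print(round(LAC(Q0), 3))               # 6.0: the specified minimum of LAC
print(round(SAC(Q1) - LAC(Q1), 9))     # ~0: the average curves touch at Q1
print(round(SMC(Q1) - LMC(Q1), 9))     # ~0: the marginal curves cross at Q1
print(SAC(1.1 * Q1) > LAC(1.1 * Q1))   # True: SAC lies above LAC away from the tangency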

The Spreadsheets
To make the workbooks as useful as possible, we allow considerable flexibility.[4] Consider first the workbook that relates to costs alone. The user provides two pieces of data that establish the long-run cost curves. These are the quantity at which the long-run cost curve is minimized and the long-run average cost at that quantity. The first two sheets in the workbook report the cost curves consistent with this information. First, the total cost curve (both the graph and a table of points on the graph) appears; then, the next sheet shows the long-run total cost and the associated per-unit (average and marginal) curves.
The next three sheets involve the short run. The user provides a quantity at which long-run total cost equals short-run total cost. The first of this set of sheets returns the long-run and short-run total cost curves (and associated tabled values). The next sheet does the same for per-unit curves. The third sheet shows the per-unit short-run curves: short-run average cost, average variable cost, short-run marginal cost, and average fixed cost.
The figure below is representative. The user specifies the quantity at which long-run average cost is minimized (Q0), the long-run average cost at that quantity (C0), and the quantity at which long-run and short-run average cost curves are tangent (Q1). (Also, for purposes of controlling appearance, the user may change the size of the increments between adjacent observed quantities.) The resulting short-run average cost at Q1 is reported along with the graphs of the pertinent average and marginal relationships. The user may type a chosen value for the variables or may use the scroll bars. The "Reset Values" button returns the values to their default values. The other two buttons provide navigation, to the "Definitions" sheet or to the table on this sheet on which the graphs are built.

Figure 1. Worksheet from CostCurves_Basic.
Click on the title to download the workbook.







The sheets described above provide an accurate drawing of long-run and short-run cost curves, depicting the pertinent relationships among them. As noted above and as observed in Figure 1, the choice of functional form for the long-run curves dictates that AVC and SMC achieve their minimum at a quantity of zero. Inter alia, this implies that the firm's short-run supply curve begins at the origin. To allow more flexibility, an additional set of graphs based on a cubic short-run cost curve is appended. Figure 2 shows one such curve.




Figure 2. Worksheet from CostCurves_quadratic_AVC
Click on the title to download the workbook.

[SIZE=+1]Costs and Revenue[/SIZE]
[SIZE=+0]While the primary purpose of this article is to provide an easy way to depict cost curves accurately, we add two more workbooks that show revenue as well as cost. The first depicts a price-taking firm, the second a price-making firm. In the former, the user specifies a price; in the latter, the price intercept for the demand curve. The sheets that previously showed total cost now also show total revenue and profits along with total cost.[/SIZE][SIZE=-1]5 [/SIZE] [SIZE=+0] The sheets that show per-unit costs now also show the demand curve and the marginal revenue curve. In each case, the profit-maximizing quantity (and, for the price-making firm, the price) and the maximum profit level are shown.[/SIZE] Figure 3 shows one of the graphs, this one for a price-making firm. The graph shows the short run only, but the table reminds the user that the firm will behave differently given more time to adjust.

Figure 3. Worksheet from CostCurves_w_revenue
Click on the title to download the workbook.
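
As a rough illustration of the revenue extension (again, not the authors' workbook; the demand and cost parameters below are invented), the following sketch finds the profit-maximizing quantity for a price-making firm by locating the quantity at which marginal revenue equals short-run marginal cost, and confirms that the same quantity maximizes profit:

# Linear demand P = A - B*Q and short-run cost STC = m*Q**3 + n.

A, B = 20.0, 0.5     # demand intercept and slope (invented)
m, n = 0.01, 50.0    # short-run cost parameters (invented)

def price(Q):   return A - B * Q
def revenue(Q): return price(Q) * Q
def MR(Q):      return A - 2 * B * Q          # marginal revenue for linear demand
def STC(Q):     return m * Q ** 3 + n
def SMC(Q):     return 3 * m * Q ** 2
def profit(Q):  return revenue(Q) - STC(Q)

# Crude grid search for the quantity where MR = SMC (fine enough for a sketch).
grid = [q / 100 for q in range(1, 4000)]
Q_star = min(grid, key=lambda q: abs(MR(q) - SMC(q)))

print(round(Q_star, 2), round(price(Q_star), 2), round(profit(Q_star), 2))
print(round(max(profit(q) for q in grid), 2))   # grid maximum of profit -- effectively profit(Q_star)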

[SIZE=+1]Conclusion[/SIZE]
It is important that we represent economic relationships accurately. Failure to do so can confuse students, whose grasp of graphical representations is often tenuous at best. This paper provides a means to draw total and per-unit cost curves for either the short run or the long run and to depict the relationship between costs in the short run and the long run. The analysis also incorporates revenue relationship, thereby showing how profits relate to production.
The main purpose of the paper is to present the instructor a way to draw these relationships accurately using Microsoft Excel. The spreadsheets can be used for developing displays in classroom instruction and for handouts. Instructors may also use the workbooks as the basis for homework assignments. Students can explore how changes in the model's parameters affect efficient output levels and the resulting levels of profits.
 
Re: WORST CASE SOME ONE!!!

Mintzberg: The Managerial Roles
Mintzberg (1973) groups managerial activities and roles as involving:


Managerial activities and their associated roles:

Interpersonal roles - arising from formal authority and status and supporting the information and decision activities: figurehead, liaison, leader

Information processing roles: monitor, disseminator, spokesman

Decision roles - making significant decisions: improver/changer, disturbance handler, resource allocator, negotiator




The broad proposition is that, as a senior manager enacts his/her role, these will come together as a gestalt (integrated whole) reflecting the manager's competencies associated with the roles. In a sense therefore they act as evaluation criteria for assessing the performance of a manager in his/her role.


Figurehead.
Social, inspirational, legal and ceremonial duties must be carried out. The manager is a symbol and must be on-hand for people/agencies that will only deal with him/her because of status and authority.

The leader role
This is at the heart of the manager-subordinate relationship and of managerial power. It is pervasive wherever subordinates are involved, even where the relationship is not directly interpersonal. The manager

defines the structures and environments within which sub-ordinates work and are motivated.
oversees and questions activities to keep them alert.
selects, encourages, promotes and disciplines.
tries to balance subordinate and organisational needs for efficient operations.

Liaison:
This is the manager as an information and communication centre. It is vital to build up favours. Networking skills to shape and maintain internal and external contacts for information exchange are essential. These contacts give access to "databases" - facts, requirements, probabilities.

As 'monitor'
- the manager seeks/receives information from many sources to evaluate the organisation's performance, well-being and situation. Monitoring of internal operations, external events, ideas, trends, analysis and pressures is vital. Information to detect changes, problems & opportunities and to construct decision-making scenarios can be current/historic, tangible (hard) or soft, documented or non-documented. This role is about building and using an intelligence system. The manager must install and maintain this information system, by building contacts & training staff to deliver "information".

As disseminator
- the manager brings external views into his/her organisation and facilitates internal information flows between subordinates (factual or value-based).
The preferences of significant people are received and assimilated. The manager interprets/disseminates information to subordinates e.g. policies, rules, regulations. Values are also disseminated via conversations laced with imperatives and signs/icons about what is regarded as important or what 'we believe in'.

There is a dilemma of delegation. Only the manager has the data for many decisions, and often in the wrong form (verbal/memory vs. paper). Sharing is time-consuming and difficult. He/she and staff may already be overloaded. Communication consumes time. The adage 'if you want to get things done, it is best to do it yourself' comes to mind. Why might this be a driver of managerial behaviour (reluctance or constraints on the ability to delegate)?


As spokesman (P.R. capacity)
- the manager informs and lobbies others (external to his/her own organisational group). Key influencers and stakeholders are kept informed of performances, plans & policies. For outsiders, the manager is an expert in the field in which his/her organisation operates.

A senior manager is responsible for his/her organisation's strategy-making system - generating and linking important decisions. He/she has the authority, information and capacity for control and integration over important decisions.
As initiator/changer
- he/she designs and initiates much of the controlled change in the organisation. Gaps are identified, improvement programmes defined. The manager initiates a series of related decisions/activities to achieve actual improvement. Improvement projects may be involved at various levels. The manager can


delegate all design responsibility, selecting and even replacing subordinates.
empower subordinates with responsibility for the design of the improvement programme but, e.g., define the parameters/limits and veto or give the go-ahead on options.
supervise design directly.
Senior managers may have many projects at various development stages (emergent/dormant/nearly-ready), working on each periodically, interspersed with waiting periods for information feedback or progress etc. Projects roll on and roll off.


the disturbance handler
- is a generalist role i.e. taking charge when the organisation hits an iceberg unexpectedly and where there is no clear programmed response. Disturbances may arise from staff, resources, threats or because others make mistakes or innovation has unexpected consequences. The role involves stepping in to calm matters, evaluate, re-allocate, support - removing the thorn - buying time. The metaphors here are
If you are up to your backside in alligators it is no use talking about draining the swamp.

and

Stop the bleeding as only then can you take care of the long term health of the patient. (not Mintzberg's anecdote)


As resource allocator
- the manager oversees allocation of all resources (£, staff, reputation). This involves:

scheduling own time
programming work
authorising actions
With an eye to the diary (scheduling) the manager implicitly sets organisational priorities. Time and access involve opportunity costs. What fails to reach him/her, fails to get support.

The managerial task is to ensure the basic work system is in place and to programme staff overloads - what to do, by whom, what processing structures will be used.

Authorising major decisions before implementation is a control over resource allocation. This enables coordinative interventions e.g. authorisation within a policy or budgeting process in comparison to ad-hoc interventions. With limited time, complex issues and staff proposals that cannot be dismissed lightly, the manager may decide on the proposer rather than proposal.

To help evaluation processes, managers develop models and plans in their heads (they construe the relationships and signifiers in the situation). These models/constructions encompass rules, imperatives, criteria and preferences to evaluate proposals against. Loose, flexible and implicit plans are up-dated with new information.


The negotiator
- takes charge over important negotiating activities with other organisations. The spokesman, figurehead and resource allocator roles demand this.


--------------------------------------------------------------------------------

Conclusions?
The roles point to managers needing to be organisational generalists and specialists because of

system imperfections and environmental pressures.
their formal authority is needed even for certain basic routines.
in all of this they are still fallible and human
The ten roles offer a richer account of managerial tasks than the leadership models of Blake or Hersey and Blanchard etc. They explain (and justify/legitimise) managerial purposes (contingency theory) in terms of


designing and maintaining stable and reliable systems for efficient operations in a changing environment.
ensuring that the organisation satisfies those that own/control it.
boundary management = maintaining information links between the organisation and players in the environment.


--------------------------------------------------------------------------------

Seminar Questions


How do these role propositions compare with your current role behaviour and your need to change your role capabilities in the future?

How do such descriptions contribute to
an ideology of management?
manager training and development?
management recruitment and selection?


Taking a stakeholder perspective on organisational management and the role of various managers - how would these views on managerial roles be modified?

What do these role descriptions offer practising managers - anything?

How robust is this type of theorising?
 
Re: WORST CASE SOME ONE!!!

Managerial Roles According To Henry Mintzberg
Interpersonal roles.

For each role below, a description of the actions involved is followed by examples from managerial practice requiring activation of the corresponding role.

1. Figurehead
Symbolic leader of the organization performing duties of social and legal character
Attending ribbon-cutting ceremonies, hosting receptions, presentations and other activities associated with the figurehead role

2. Leader
Motivating subordinates, interaction with them, selection and training of employees
Virtually all managerial operations involving subordinates

3. Liaison
Establishing contacts with managers and specialists of other divisions and organizations, informing subordinates of these contacts
Business correspondence, participation in meetings with representatives of other divisions (organizations)




Informational roles.
1. Monitor (receiver)
Collecting various data relevant to adequate work
Handling incoming correspondence, periodical surveys, attending seminars and exhibitions, research tours

2. Disseminator of information
Transmitting information obtained from both external sources and employees to interested people inside the organization
Dissemination of information letters and digests, interviewing, informing subordinates of the agreements reached

3. Spokesperson
Transmitting information on the organization's plans, current situation and achievements of the divisions to outsiders
Compiling and disseminating information letters and circulars, participation in meetings with progress reports




Decisional roles.
1. Entrepreneur (initiator of change)
Seeking opportunities to develop processes both inside the organization and in the systems of interaction with other divisions and structures; initiating implementation of innovations to improve the organization's situation and employee well-being
Participation in meetings involving debate and decision making on forward-looking issues, and also in meetings dedicated to implementation of innovations

2. Disturbance handler
Taking care of the organization, correcting ongoing activities, assuming responsibility when factors threatening the normal work of the organization emerge
Debating and decision making on strategic and current issues concerning ways of overcoming crisis situations

3. Resource allocator
Deciding on expenditure of the organization’s physical, financial and human resources
Drawing up and approving schedules, plans, estimates and budgets; controlling their execution

4. Negotiator (mediator)
Representing the organization in all important negotiations
Conducting negotiations, establishing official links between the organization and other companies
 
Re: WORST CASE SOME ONE!!!

Decision Making and Problem Solving
by Herbert A. Simon and Associates
Associates: George B. Dantzig, Robin Hogarth, Charles R. Plott, Howard Raiffa, Thomas C. Schelling, Kenneth A. Shepsle, Richard Thaler, Amos Tversky, and Sidney Winter.
Simon was educated in political science at the University of Chicago (B.A., 1936, Ph.D., 1943). He has held research and faculty positions at the University of California (Berkeley), Illinois Institute of Technology and since 1949, Carnegie Mellon University, where he is the Richard King Mellon University Professor of Computer Science and Psychology. In 1978, he received the Alfred Nobel Memorial Prize in Economic Sciences and in 1986 the National Medal of Science.
Reprinted with permission from Research Briefings 1986: Report of the Research Briefing Panel on Decision Making and Problem Solving © 1986 by the National Academy of Sciences. Published by National Academy Press, Washington, DC.
Introduction
The work of managers, of scientists, of engineers, of lawyers--the work that steers the course of society and its economic and governmental organizations--is largely work of making decisions and solving problems. It is work of choosing issues that require attention, setting goals, finding or designing suitable courses of action, and evaluating and choosing among alternative actions. The first three of these activities--fixing agendas, setting goals, and designing actions--are usually called problem solving; the last, evaluating and choosing, is usually called decision making. Nothing is more important for the well-being of society than that this work be performed effectively, that we address successfully the many problems requiring attention at the national level (the budget and trade deficits, AIDS, national security, the mitigation of earthquake damage), at the level of business organizations (product improvement, efficiency of production, choice of investments), and at the level of our individual lives (choosing a career or a school, buying a house).
The abilities and skills that determine the quality of our decisions and problem solutions are stored not only in more than 200 million human heads, but also in tools and machines, and especially today in those machines we call computers. This fund of brains and its attendant machines form the basis of our American ingenuity, an ingenuity that has permitted U.S. society to reach remarkable levels of economic productivity.
There are no more promising or important targets for basic scientific research than understanding how human minds, with and without the help of computers, solve problems and make decisions effectively, and improving our problem-solving and decision-making capabilities. In psychology, economics, mathematical statistics, operations research, political science, artificial intelligence, and cognitive science, major research gains have been made during the past half century in understanding problem solving and decision making. The progress already achieved holds forth the promise of exciting new advances that will contribute substantially to our nation's capacity for dealing intelligently with the range of issues, large and small, that confront us.
Much of our existing knowledge about decision making and problem solving, derived from this research, has already been put to use in a wide variety of applications, including procedures used to assess drug safety, inventory control methods for industry, the new expert systems that embody artificial intelligence techniques, procedures for modeling energy and environmental systems, and analyses of the stabilizing or destabilizing effects of alternative defense strategies. (Application of the new inventory control techniques, for example, has enabled American corporations to reduce their inventories by hundreds of millions of dollars since World War II without increasing the incidence of stockouts.) Some of the knowledge gained through the research describes the ways in which people actually go about making decisions and solving problems; some of it prescribes better methods, offering advice for the improvement of the process.
Central to the body of prescriptive knowledge about decision making has been the theory of subjective expected utility (SEU), a sophisticated mathematical model of choice that lies at the foundation of most contemporary economics, theoretical statistics, and operations research. SEU theory defines the conditions of perfect utility-maximizing rationality in a world of certainty or in a world in which the probability distributions of all relevant variables can be provided by the decision makers. (In spirit, it might be compared with a theory of ideal gases or of frictionless bodies sliding down inclined planes in a vacuum.) SEU theory deals only with decision making; it has nothing to say about how to frame problems, set goals, or develop new alternatives.
Prescriptive theories of choice such as SEU are complemented by empirical research that shows how people actually make decisions (purchasing insurance, voting for political candidates, or investing in securities), and research on the processes people use to solve problems (designing switchgear or finding chemical reaction pathways). This research demonstrates that people solve problems by selective, heuristic search through large problem spaces and large data bases, using means-ends analysis as a principal technique for guiding the search. The expert systems that are now being produced by research on artificial intelligence and applied to such tasks as interpreting oil-well drilling logs or making medical diagnoses are outgrowths of these research findings on human problem solving.
What chiefly distinguishes the empirical research on decision making and problem solving from the prescriptive approaches derived from SEU theory is the attention that the former gives to the limits on human rationality. These limits are imposed by the complexity of the world in which we live, the incompleteness and inadequacy of human knowledge, the inconsistencies of individual preference and belief, the conflicts of value among people and groups of people, and the inadequacy of the computations we can carry out, even with the aid of the most powerful computers. The real world of human decisions is not a world of ideal gases, frictionless planes, or vacuums. To bring it within the scope of human thinking powers, we must simplify our problem formulations drastically, even leaving out much or most of what is potentially relevant.
The descriptive theory of problem solving and decision making is centrally concerned with how people cut problems down to size: how they apply approximate, heuristic techniques to handle complexity that cannot be handled exactly. Out of this descriptive theory is emerging an augmented and amended prescriptive theory, one that takes account of the gaps and elements of unrealism in SEU theory by encompassing problem solving as well as choice and demanding only the kinds of knowledge, consistency, and computational power that are attainable in the real world.
The growing realization that coping with complexity is central to human decision making strongly influences the directions of research in this domain. Operations research and artificial intelligence are forging powerful new computational tools; at the same time, a new body of mathematical theory is evolving around the topic of computational complexity. Economics, which has traditionally derived both its descriptive and prescriptive approaches from SEU theory, is now paying a great deal of attention to uncertainty and incomplete information; to so-called "agency theory," which takes account of the institutional framework within which decisions are made; and to game theory, which seeks to deal with interindividual and intergroup processes in which there is partial conflict of interest. Economists and political scientists are also increasingly buttressing the empirical foundations of their field by studying individual choice behavior directly and by studying behavior in experimentally constructed markets and simulated political structures.
The following pages contain a fuller outline of current knowledge about decision making and problem solving and a brief review of current research directions in these fields as well as some of the principal research opportunities.
Decision Making
SEU THEORY
The development of SEU theory was a major intellectual achievement of the first half of this century. It gave for the first time a formally axiomatized statement of what it would mean for an agent to behave in a consistent, rational manner. It assumed that a decision maker possessed a utility function (an ordering by preference among all the possible outcomes of choice), that all the alternatives among which choice could be made were known, and that the consequences of choosing each alternative could be ascertained (or, in the version of the theory that treats of choice under uncertainty, it assumed that a subjective or objective probability distribution of consequences was associated with each alternative). By admitting subjectively assigned probabilities, SEU theory opened the way to fusing subjective opinions with objective data, an approach that can also be used in man-machine decision-making systems. In the probabilistic version of the theory, Bayes's rule prescribes how people should take account of new information and how they should respond to incomplete information.
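A minimal sketch of the SEU choice rule just described, in Python; the alternatives, probabilities, and utilities are hypothetical numbers invented for illustration, not anything from the report. The rule is simply: choose the alternative with the highest probability-weighted utility.

    # Each alternative maps to a list of (probability, utility) pairs.
    # All numbers here are illustrative assumptions.
    alternatives = {
        "invest":    [(0.6, 120), (0.4, -50)],
        "hold cash": [(1.0, 10)],
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
    print(best, {a: expected_utility(o) for a, o in alternatives.items()})
    # "invest" wins here (expected utility 52 versus 10), given these made-up numbers.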
The assumptions of SEU theory are very strong, permitting correspondingly strong inferences to be made from them. Although the assumptions cannot be satisfied even remotely for most complex situations in the real world, they may be satisfied approximately in some microcosms--problem situations that can be isolated from the world's complexity and dealt with independently. For example, the manager of a commercial cattle-feeding operation might isolate the problem of finding the least expensive mix of feeds available in the market that would meet all the nutritional requirements of his cattle. The computational tool of linear programming, which is a powerful method for maximizing goal achievement or minimizing costs while satisfying all kinds of side conditions (in this case, the nutritional requirements), can provide the manager with an optimal feed mix--optimal within the limits of approximation of his model to real world conditions. Linear programming and related operations research techniques are now used widely to make decisions whenever a situation that reasonably fits their assumptions can be carved out of its complex surround. These techniques have been especially valuable aids to middle management in dealing with relatively well-structured decision problems.
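To make the feed-mix example concrete, here is a minimal linear-programming sketch using scipy.optimize.linprog. The feeds, prices, and nutritional requirements are hypothetical placeholders, not data from the report; the point is only the form of the problem: minimize cost subject to "at least this much of each nutrient" constraints.

    from scipy.optimize import linprog

    # Cost per kilogram of two hypothetical feeds: corn and soymeal.
    cost = [0.12, 0.30]

    # Nutrient content per kilogram of each feed (illustrative values).
    protein = [0.09, 0.44]   # fraction of protein
    energy  = [3.4, 2.9]     # megacalories

    # linprog enforces A_ub @ x <= b_ub, so "at least" requirements are negated.
    A_ub = [[-protein[0], -protein[1]],
            [-energy[0],  -energy[1]]]
    b_ub = [-16.0,    # at least 16 kg of protein per 100 kg of mix
            -300.0]   # at least 300 Mcal per 100 kg of mix

    # The mix must total 100 kg.
    A_eq = [[1.0, 1.0]]
    b_eq = [100.0]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None), (0, None)])
    print(res.x, res.fun)   # optimal kilograms of each feed, and total cost

With these invented numbers the solver puts just enough soymeal in the mix to meet the protein requirement and fills the rest with the cheaper corn, which is exactly the "optimal within the limits of the model" behavior the text describes.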
Most of the tools of modern operations research--not only linear programming, but also integer programming, queuing theory, decision trees, and other widely used techniques--use the assumptions of SEU theory. They assume that what is desired is to maximize the achievement of some goal, under specified constraints and assuming that all alternatives and consequences (or their probability distributions) are known. These tools have proven their usefulness in a wide variety of applications.
THE LIMITS OF RATIONALITY
Operations research tools have also underscored dramatically the limits of SEU theory in dealing with complexity. For example, present and prospective computers are not even powerful enough to provide exact solutions for the problems of optimal scheduling and routing of jobs through a typical factory that manufactures a variety of products using many different tools and machines. And the mere thought of using these computational techniques to determine an optimal national policy for energy production or an optimal economic policy reveals their limits.
Computational complexity is not the only factor that limits the literal application of SEU theory. The theory also makes enormous demands on information. The utility function, the range of available alternatives, and the consequences following from each alternative must all be known. Increasingly, research is being directed at decision making that takes realistic account of the compromises and approximations that must be made in order to fit real-world problems to the informational and computational limits of people and computers, as well as to the inconsistencies in their values and perceptions. The study of actual decision processes (for example, the strategies used by corporations to make their investments) reveals massive and unavoidable departures from the framework of SEU theory. The sections that follow describe some of the things that have been learned about choice under various conditions of incomplete information, limited computing power, inconsistency, and institutional constraints on alternatives. Game theory, agency theory, choice under uncertainty, and the theory of markets are a few of the directions of this research, with the aims both of constructing prescriptive theories of broader application and of providing more realistic descriptions and explanations of actual decision making within U.S. economic and political institutions.
LIMITED RATIONALITY IN ECONOMIC THEORY
Although the limits of human rationality were stressed by some researchers in the 1950s, only recently has there been extensive activity in the field of economics aimed at developing theories that assume less than fully rational choice on the part of business firm managers and other economic agents. The newer theoretical research undertakes to answer such questions as the following:
• Are market equilibria altered by the departures of actual choice behavior from the behavior of fully rational agents predicted by SEU theory?
• Under what circumstances do the processes of competition "police" markets in such a way as to cancel out the effects of the departures from full rationality?
• In what ways are the choices made by boundedly rational agents different from those made by fully rational agents?
Theories of the firm that assume managers are aiming at "satisfactory" profits or that their concern is to maintain the firm's share of market in the industry make quite different predictions about economic equilibrium than those derived from the assumption of profit maximization. Moreover, the classical theory of the firm cannot explain why economic activity is sometimes organized around large business firms and sometimes around contractual networks of individuals or smaller organizations. New theories that take account of differential access of economic agents to information, combined with differences in self-interest, are able to account for these important phenomena, as well as provide explanations for the many forms of contracts that are used in business. Incompleteness and asymmetry of information have been shown to be essential for explaining how individuals and business firms decide when to face uncertainty by insuring, when by hedging, and when by assuming the risk.
Most current work in this domain still assumes that economic agents seek to maximize utility, but within limits posed by the incompleteness and uncertainty of the information available to them. An important potential area of research is to discover how choices will be changed if there are other departures from the axioms of rational choice--for example, substituting goals of reaching specified aspiration levels (satisficing) for goals of maximizing.
Applying the new assumptions about choice to economics leads to new empirically supported theories about decision making over time. The classical theory of perfect rationality leaves no room for regrets, second thoughts, or "weakness of will." It cannot explain why many individuals enroll in Christmas savings plans, which earn interest well below the market rate. More generally, it does not lead to correct conclusions about the important social issues of saving and conservation. The effect of pensions and social security on personal saving has been a controversial issue in economics. The standard economic model predicts that an increase in required pension saving will reduce other saving dollar for dollar; behavioral theories, on the other hand, predict a much smaller offset. The empirical evidence indicates that the offset is indeed very small. Another empirical finding is that the method of payment of wages and salaries affects the saving rate. For example, annual bonuses produce a higher saving rate than the same amount of income paid in monthly salaries. This finding implies that saving rates can be influenced by the way compensation is framed.
If individuals fail to discount properly for the passage of time, their decisions will not be optimal. For example, air conditioners vary greatly in their energy efficiency; the more efficient models cost more initially but save money over the long run through lower energy consumption. It has been found that consumers, on average, choose air conditioners that imply a discount rate of 25 percent or more per year, much higher than the rates of interest that prevailed at the time of the study.
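The "implied discount rate" reasoning behind the air conditioner finding can be sketched in a few lines. The prices and energy savings below are made up for illustration (the report does not give the study's figures); the calculation finds the discount rate at which the efficient model's extra purchase price exactly equals the present value of its future savings. A buyer who still chooses the cheaper, inefficient model is acting as if her personal discount rate exceeds that break-even rate.

    def npv_of_savings(rate, annual_saving, years):
        # Present value of a stream of equal annual savings.
        return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

    price_premium = 200.0    # hypothetical: efficient model costs $200 more
    annual_saving = 60.0     # hypothetical: saves $60 per year in electricity
    lifetime = 10            # hypothetical: 10-year service life

    # Bisection for the rate at which the buyer is exactly indifferent.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if npv_of_savings(mid, annual_saving, lifetime) > price_premium:
            lo = mid   # savings still worth more than the premium; rate is higher
        else:
            hi = mid
    print(f"break-even discount rate: {lo:.1%}")
    # With these assumed numbers the break-even rate is in the high twenties,
    # of the same order as the 25-percent-plus rates cited in the text.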
As recently as five years ago, the evidence was thought to be unassailable that markets like the New York Stock Exchange work efficiently--that prices reflect all available information at any given moment in time, so that stock price movements resemble a random walk and contain no systematic information that could be exploited for profit. Recently, however, substantial departures from the behavior predicted by the efficient-market hypothesis have been detected. For example, small firms appear to earn inexplicably high returns on the market prices of their stock, while firms that have very low price-earnings ratios and firms that have lost much of their market value in the recent past also earn abnormally high returns. All of these results are consistent with the empirical finding that decision makers often overreact to new information, in violation of Bayes's rule. In the same way, it has been found that stock prices are excessively volatile--that they fluctuate up and down more rapidly and violently than they would if the market were efficient.
There has also been a long-standing puzzle as to why firms pay dividends. Considering that dividends are taxed at a higher rate than capital gains, taxpaying investors should prefer, under the assumptions of perfect rationality, that their firms reinvest earnings or repurchase shares instead of paying dividends. (The investors could simply sell some of their appreciated shares to obtain the income they require.) The solution to this puzzle also requires models of investors that take account of limits on rationality.
THE THEORY OF GAMES
In economic, political, and other social situations in which there is actual or potential conflict of interest, especially if it is combined with incomplete information, SEU theory faces special difficulties. In markets in which there are many competitors (e.g., the wheat market), each buyer or seller can accept the market price as a "given" that will not be affected materially by the actions of any single individual. Under these conditions, SEU theory makes unambiguous predictions of behavior. However, when a market has only a few suppliers --say, for example, two--matters are quite different. In this case, what it is rational to do depends on what one's competitor is going to do, and vice versa. Each supplier may try to outwit the other. What then is the rational decision?
The most ambitious attempt to answer questions of this kind was the theory of games, developed by von Neumann and Morgenstern and published in its full form in 1944. But the answers provided by the theory of games are sometimes very puzzling and ambiguous. In many situations, no single course of action dominates all the others; instead, a whole set of possible solutions are all equally consistent with the postulates of rationality.
One game that has been studied extensively, both theoretically and empirically, is the Prisoner's Dilemma. In this game between two players, each has a choice between two actions, one trustful of the other player, the other mistrustful or exploitative. If both players choose the trustful alternative, both receive small rewards. If both choose the exploitative alternative, both are punished. If one chooses the trustful alternative and the other the exploitative alternative, the former is punished much more severely than in the previous case, while the latter receives a substantial reward. If the other player's choice is fixed but unknown, it is advantageous for a player to choose the exploitative alternative, for this will give him the best outcome in either case. But if both adopt this reasoning, they will both be punished, whereas they could both receive rewards if they agreed upon the trustful choice (and did not welch on the agreement).
The terms of the game have an unsettling resemblance to certain situations in the relations between nations or between a company and the employees' union. The resemblance becomes stronger if one imagines the game as being played repeatedly. Analyses of "rational" behavior under assumptions of intended utility maximization support the conclusion that the players will (ought to?) always make the mistrustful choice. Nevertheless, in laboratory experiments with the game, it is often found that players (even those who are expert in game theory) adopt a "tit-for-tat" strategy. That is, each plays the trustful, cooperative strategy as long as his or her partner does the same. If the partner exploits the player on a particular trial, the player then plays the exploitative strategy on the next trial and continues to do so until the partner switches back to the trustful strategy. Under these conditions, the game frequently stabilizes with the players pursuing the mutually trustful strategy and receiving the rewards.
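A minimal simulation of the repeated game with a tit-for-tat player, as a sketch of the behavior described above. The report gives the payoff structure only qualitatively, so the payoff numbers below are illustrative assumptions, and the "occasional defector" opponent is invented for the example.

    # (my move, other's move) -> my payoff; 'C' = trustful, 'D' = exploitative
    PAYOFF = {
        ('C', 'C'): 3, ('C', 'D'): 0,
        ('D', 'C'): 5, ('D', 'D'): 1,
    }

    def tit_for_tat(history):
        # Cooperate on the first trial, then copy the opponent's previous move.
        return 'C' if not history else history[-1][1]

    def occasional_defector(history):
        # Hypothetical opponent: exploit on the third trial, otherwise cooperate.
        return 'D' if len(history) == 2 else 'C'

    hist_a, hist_b = [], []    # each entry: (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(10):
        a = tit_for_tat(hist_a)
        b = occasional_defector(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append((a, b))
        hist_b.append((b, a))
    print(score_a, score_b)
    # The single defection is punished once on the following trial, after which
    # play settles back into the mutually trustful pattern the text describes.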
With these empirical findings in hand, theorists have recently sought and found some of the conditions for attaining this kind of benign stability. It occurs, for example, if the players set aspirations for a satisfactory reward rather than seeking the maximum reward. This result is consistent with the finding that in many situations, as in the Prisoner's Dilemma game, people appear to satisfice rather than attempting to optimize.
The Prisoner's Dilemma game illustrates an important point that is beginning to be appreciated by those who do research on decision making. There are so many ways in which actual human behavior can depart from the SEU assumptions that theorists seeking to account for behavior are confronted with an embarrassment of riches. To choose among the many alternative models that could account for the anomalies of choice, extensive empirical research is called for--to see how people do make their choices, what beliefs guide them, what information they have available, and what part of that information they take into account and what part they ignore. In a world of limited rationality, economics and the other decision sciences must closely examine the actual limits on rationality in order to make accurate predictions and to provide sound advice on public policy.
EMPIRICAL STUDIES OF CHOICE UNDER UNCERTAINTY
During the past ten years, empirical studies of human choices in which uncertainty, inconsistency, and incomplete information are present have produced a rich collection of findings which only now are beginning to be organized under broad generalizations. Here are a few examples. When people are given information about the probabilities of certain events (e.g., how many lawyers and how many engineers are in a population that is being sampled), and then are given some additional information as to which of the events has occurred (which person has been sampled from the population), they tend to ignore the prior probabilities in favor of incomplete or even quite irrelevant information about the individual event. Thus, if they are told that 70 percent of the population are lawyers, and if they are then given a noncommittal description of a person (one that could equally well fit a lawyer or an engineer), half the time they will predict that the person is a lawyer and half the time that he is an engineer--even though the laws of probability dictate that the best forecast is always to predict that the person is a lawyer.
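The arithmetic behind the lawyer-engineer example is a one-line application of Bayes's rule: a description that fits either profession equally well leaves the 70 percent prior untouched, so "lawyer" remains the better forecast every time. A quick check:

    prior_lawyer, prior_engineer = 0.70, 0.30

    # A noncommittal description is equally likely for either profession.
    p_desc_given_lawyer = 0.5
    p_desc_given_engineer = 0.5

    evidence = (p_desc_given_lawyer * prior_lawyer
                + p_desc_given_engineer * prior_engineer)
    posterior_lawyer = p_desc_given_lawyer * prior_lawyer / evidence
    print(posterior_lawyer)   # 0.7 -- uninformative evidence leaves the prior unchanged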
People commonly misjudge probabilities in many other ways. Asked to estimate the probability that 60 percent or more of the babies born in a hospital during a given week are male, they ignore information about the total number of births, although it is evident that the probability of a departure of this magnitude from the expected value of 50 percent is smaller if the total number of births is larger (the standard error of a percentage varies inversely with the square root of the population size).
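The hospital example can be checked directly with the binomial distribution. Using hypothetical weekly totals of 15 and 150 births (the report names no particular numbers), and assuming each birth is a boy with probability one half, the chance of a 60-percent-or-more male week is far larger in the small hospital:

    from math import comb, ceil

    def prob_share_at_least(share, n, p=0.5):
        # Probability that at least share*n of n independent births are boys.
        k_min = ceil(share * n)
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

    print(prob_share_at_least(0.60, 15))    # small hospital: roughly 0.30
    print(prob_share_at_least(0.60, 150))   # large hospital: under 0.01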
There are situations in which people assess the frequency of a class by the ease with which instances can be brought to mind. In one experiment, subjects heard a list of names of persons of both sexes and were later asked to judge whether there were more names of men or women on the list. In lists presented to some subjects, the men were more famous than the women; in other lists, the women were more famous than the men. For all lists, subjects judged that the sex that had the more famous personalities was the more numerous.
The way in which an uncertain possibility is presented may have a substantial effect on how people respond to it. When asked whether they would choose surgery in a hypothetical medical emergency, many more people said that they would when the chance of survival was given as 80 percent than when the chance of death was given as 20 percent.
On the basis of these studies, some of the general heuristics, or rules of thumb, that people use in making judgments have been compiled--heuristics that produce biases toward classifying situations according to their representativeness, or toward judging frequencies according to the availability of examples in memory, or toward interpretations warped by the way in which a problem has been framed.
These findings have important implications for public policy. A recent example is the lobbying effort of the credit card industry to have differentials between cash and credit prices labeled "cash discounts" rather than "credit surcharges." The research findings raise questions about how to phrase cigarette warning labels or frame truth-in-lending laws and informed consent laws.
METHODS OF EMPIRICAL RESEARCH
Finding the underlying bases of human choice behavior is difficult. People cannot always, or perhaps even usually, provide veridical accounts of how they make up their minds, especially when there is uncertainty. In many cases, they can predict how they will behave (pre-election polls of voting intentions have been reasonably accurate when carefully taken), but the reasons people give for their choices can often be shown to be rationalizations and not closely related to their real motives.
Students of choice behavior have steadily improved their research methods. They question respondents about specific situations, rather than asking for generalizations. They are sensitive to the dependence of answers on the exact forms of the questions. They are aware that behavior in an experimental situation may be different from behavior in real life, and they attempt to provide experimental settings and motivations that are as realistic as possible. Using thinking-aloud protocols and other approaches, they try to track the choice behavior step by step, instead of relying just on information about outcomes or querying respondents retrospectively about their choice processes.
Perhaps the most common method of empirical research in this field is still to ask people to respond to a series of questions. But data obtained by this method are being supplemented by data obtained from carefully designed laboratory experiments and from observations of actual choice behavior (for example, the behavior of customers in supermarkets). In an experimental study of choice, subjects may trade in an actual market with real (if modest) monetary rewards and penalties. Research experience has also demonstrated the feasibility of making direct observations, over substantial periods of time, of the decision-making processes in business and governmental organizations--for example, observations of the procedures that corporations use in making new investments in plant and equipment. Confidence in the empirical findings that have been accumulating over the past several decades is enhanced by the general consistency that is observed among the data obtained from quite different settings using different research methods.
There still remains the enormous and challenging task of putting together these findings into an empirically founded theory of decision making. With the growing availability of data, the theory-building enterprise is receiving much better guidance from the facts than it did in the past. As a result, we can expect it to become correspondingly more effective in arriving at realistic models of behavior.
Problem Solving
The theory of choice has its roots mainly in economics, statistics, and operations research and only recently has received much attention from psychologists; the theory of problem solving has a very different history. Problem solving was initially studied principally by psychologists, and more recently by researchers in artificial intelligence. It has received rather scant attention from economists.
CONTEMPORARY PROBLEM-SOLVING THEORY
Human problem solving is usually studied in laboratory settings, using problems that can be solved in relatively short periods of time (seldom more than an hour), and often seeking a maximum density of data about the solution process by asking subjects to think aloud while they work. The thinking-aloud technique, at first viewed with suspicion by behaviorists as subjective and "introspective," has received such careful methodological attention in recent years that it can now be used dependably to obtain data about subjects' behaviors in a wide range of settings.
The laboratory study of problem solving has been supplemented by field studies of professionals solving real-world problems--for example, physicians making diagnoses and chess grandmasters analyzing game positions, and, as noted earlier, even business corporations making investment decisions. Currently, historical records, including laboratory notebooks of scientists, are also being used to study problem-solving processes in scientific discovery. Although such records are far less "dense" than laboratory protocols, they sometimes permit the course of discovery to be traced in considerable detail. Laboratory notebooks of scientists as distinguished as Charles Darwin, Michael Faraday, Antoine-Laurent Lavoisier, and Hans Krebs have been used successfully in such research.
From empirical studies, a description can now be given of the problem-solving process that holds for a rather wide range of activities. First, problem solving generally proceeds by selective search through large sets of possibilities, using rules of thumb (heuristics) to guide the search. Because the possibilities in realistic problem situations are generally multitudinous, trial-and-error search would simply not work; the search must be highly selective. Chess grandmasters seldom examine more than a hundred of the vast number of possible scenarios that confront them, and similar small numbers of searches are observed in other kinds of problem-solving search.
One of the procedures often used to guide search is "hill climbing," using some measure of approach to the goal to determine where it is most profitable to look next. Another, and more powerful, common procedure is means-ends analysis. In means-ends analysis, the problem solver compares the present situation with the goal, detects a difference between them, and then searches memory for actions that are likely to reduce the difference. Thus, if the difference is a fifty-mile distance from the goal, the problem solver will retrieve from memory knowledge about autos, carts, bicycles, and other means of transport; walking and flying will probably be discarded as inappropriate for that distance.
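A toy sketch of means-ends analysis in the spirit of the transport example: compare the current state with the goal, measure the difference, and retrieve from a small "memory" the operator best suited to reducing a difference of that size. The operator table below is invented purely for illustration.

    OPERATORS = [
        # (name, smallest distance in miles it is worth using for, miles covered per step)
        ("walk",    0,   3),
        ("bicycle", 2,  15),
        ("car",    10, 200),
    ]

    def means_ends(distance_to_goal):
        plan = []
        while distance_to_goal > 0:
            # Retrieve the most powerful operator applicable to the current difference.
            suitable = [op for op in OPERATORS if op[1] <= distance_to_goal]
            name, _, reach = max(suitable, key=lambda op: op[2])
            step = min(reach, distance_to_goal)
            plan.append((name, step))
            distance_to_goal -= step
        return plan

    print(means_ends(50))   # [('car', 50)] -- walking is never even considered at this distance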
The third thing that has been learned about problem solving--especially when the solver is an expert--is that it relies on large amounts of information that are stored in memory and that are retrievable whenever the solver recognizes cues signaling its relevance. Thus, the expert knowledge of a diagnostician is evoked by the symptoms presented by the patient; this knowledge leads to the recollection of what additional information is needed to discriminate among alternative diseases and, finally, to the diagnosis.
In a few cases, it has been possible to estimate how many patterns an expert must be able to recognize in order to gain access to the relevant knowledge stored in memory. A chess master must be able to recognize about 50,000 different configurations of chess pieces that occur frequently in the course of chess games. A medical diagnostician must be able to recognize tens of thousands of configurations of symptoms; a botanist or zoologist specializing in taxonomy, tens or hundreds of thousands of features of specimens that define their species. For comparison, college graduates typically have vocabularies in their native languages of 50,000 to 200,000 words. (However, these numbers are very small in comparison with the real-world situations the expert faces: there are perhaps 10^120 branches in the game tree of chess, a game played with only six kinds of pieces on an 8 x 8 board.)
One of the accomplishments of the contemporary theory of problem solving has been to provide an explanation for the phenomena of intuition and judgment frequently seen in experts' behavior. The store of expert knowledge, "indexed" by the recognition cues that make it accessible and combined with some basic inferential capabilities (perhaps in the form of means-ends analysis), accounts for the ability of experts to find satisfactory solutions for difficult problems, and sometimes to find them almost instantaneously. The expert's "intuition" and "judgment" derive from this capability for rapid recognition linked to a large store of knowledge. When immediate intuition fails to yield a problem solution or when a prospective solution needs to be evaluated, the expert falls back on the slower processes of analysis and inference.
EXPERT SYSTEMS IN ARTIFICIAL INTELLIGENCE
Over the past thirty years, there has been close teamwork between research in psychology and research in computer science aimed at developing intelligent programs. Artificial intelligence (AI) research has both borrowed from and contributed to research on human problem solving. Today, artificial intelligence is beginning to produce systems, applied to a variety of tasks, that can solve difficult problems at the level of professionally trained humans. These AI programs are usually called expert systems. A description of a typical expert system would resemble closely the description given above of typical human problem solving; the differences between the two would be differences in degree, not in kind. An AI expert system, relying on the speed of computers and their ability to retain large bodies of transient information in memory, will generally use "brute force"--sheer computational speed and power--more freely than a human expert can. A human expert, in compensation, will generally have a richer set of heuristics to guide search and a larger vocabulary of recognizable patterns. To the observer, the computer's process will appear the more systematic and even compulsive, the human's the more intuitive. But these are quantitative, not qualitative, differences.
The number of tasks for which expert systems have been built is increasing rapidly. One is medical diagnosis (two examples are the CADUCEUS and MYCIN programs). Others are automatic design of electric motors, generators, and transformers (which predates by a decade the invention of the term expert systems), the configuration of computer systems from customer specifications, and the automatic generation of reaction paths for the synthesis of organic molecules. All of these (and others) are either being used currently in professional or industrial practice or at least have reached a level at which they can produce a professionally acceptable product.
Expert systems are generally constructed in close consultation with the people who are experts in the task domain. Using standard techniques of observation and interrogation, the heuristics that the human expert uses, implicitly and often unconsciously, to perform the task are gradually educed, made explicit, and incorporated in program structures. Although a great deal has been learned about how to do this, improving techniques for designing expert systems is an important current direction of research. It is especially important because expert systems, once built, cannot remain static but must be modifiable to incorporate new knowledge as it becomes available.
DEALING WITH ILL-STRUCTURED PROBLEMS
In the 1950s and 1960s, research on problem solving focused on clearly structured puzzle-like problems that were easily brought into the psychological laboratory and that were within the range of computer programming sophistication at that time. Computer programs were written to discover proofs for theorems in Euclidean geometry or to solve the puzzle of transporting missionaries and cannibals across a river. Choosing chess moves was perhaps the most complex task that received attention in the early years of cognitive science and AI.
As understanding grew of the methods needed to handle these relatively simple tasks, research aspirations rose. The next main target, in the 1960s and 1970s, was to find methods for solving problems that involved large bodies of semantic information. Medical diagnosis and interpreting mass spectrogram data are examples of the kinds of tasks that were investigated during this period and for which a good level of understanding was achieved. They are tasks that, for all of the knowledge they call upon, are still well structured, with clear-cut goals and constraints.
The current research target is to gain an understanding of problem-solving tasks when the goals themselves are complex and sometimes ill defined, and when the very nature of the problem is successively transformed in the course of exploration. To the extent that a problem has these characteristics, it is usually called ill structured. Because ambiguous goals and shifting problem formulations are typical characteristics of problems of design, the work of architects offers a good example of what is involved in solving ill-structured problems. An architect begins with some very general specifications of what is wanted by a client. The initial goals are modified and substantially elaborated as the architect proceeds with the task. Initial design ideas, recorded in drawings and diagrams, themselves suggest new criteria, new possibilities, and new requirements. Throughout the whole process of design, the emerging conception provides continual feedback that reminds the architect of additional considerations that need to be taken into account.
With the current state of the art, it is just beginning to be possible to construct programs that simulate this kind of flexible problem-solving process. What is called for is an expert system whose expertise includes substantial knowledge about design criteria as well as knowledge about the means for satisfying those criteria. Both kinds of knowledge are evoked in the course of the design activity by the usual recognition processes, and the evocation of design criteria and constraints continually modifies and remolds the problem that the design system is addressing. The large data bases that can now be constructed to aid in the management of architectural and construction projects provide a framework into which AI tools, fashioned along these lines, can be incorporated.
Most corporate strategy problems and governmental policy problems are at least as ill structured as problems of architectural or engineering design. The tools now being forged for aiding architectural design will provide a basis for building tools that can aid in formulating, assessing, and monitoring public energy or environmental policies, or in guiding corporate product and investment strategies.
SETTING THE AGENDA AND REPRESENTING A PROBLEM
The very first steps in the problem-solving process are the least understood. What brings (and should bring) problems to the head of the agenda? And when a problem is identified, how can it be represented in a way that facilitates its solution?
The task of setting an agenda is of utmost importance because both individual human beings and human institutions have limited capacities for dealing with many tasks simultaneously. While some problems are receiving full attention, others are neglected. Where new problems come thick and fast, "fire fighting" replaces planning and deliberation. The facts of limited attention span, both for individuals and for institutions like the Congress, are well known. However, relatively little has been accomplished toward analyzing or designing effective agenda-setting systems. A beginning could be made by the study of "alerting" organizations like the Office of Technology Assessment or military and foreign affairs intelligence agencies. Because the research and development function in industry is also in considerable part a task of monitoring current and prospective technological advances, it could also be studied profitably from this standpoint.
The way in which problems are represented has much to do with the quality of the solutions that are found. The task of designing highways or dams takes on an entirely new aspect if human responses to a changed environment are taken into account. (New transportation routes cause people to move their homes, and people show a considerable propensity to move into zones that are subject to flooding when partial protections are erected.) Very different social welfare policies are usually proposed in response to the problem of providing incentives for economic independence than are proposed in response to the problem of taking care of the needy. Early management information systems were designed on the assumption that information was the scarce resource; today, because designers recognize that the scarce resource is managerial attention, a new framework produces quite different designs.
The representation or "framing" of problems is even less well understood than agenda setting. Today's expert systems make use of problem representations that already exist. But major advances in human knowledge frequently derive from new ways of thinking about problems. A large part of the history of physics in nineteenth-century England can be written in terms of the shift from action-at-a-distance representations to the field representations that were developed by the applied mathematicians at Cambridge.
Today, developments in computer-aided design (CAD) present new opportunities to provide human designers with computer-generated representations of their problems. Effective use of these capabilities requires us to understand better how people extract information from diagrams and other displays and how displays can enhance human performance in design tasks. Research on representations is fundamental to the progress of CAD.
COMPUTATION AS PROBLEM SOLVING
Nothing has been said so far about the radical changes that have been brought about in problem solving over most of the domains of science and engineering by the standard uses of computers as computational devices. Although a few examples come to mind in which artificial intelligence has contributed to these developments, they have mainly been brought about by research in the individual sciences themselves, combined with work in numerical analysis.
Whatever their origins, the massive computational applications of computers are changing the conduct of science in numerous ways. There are new specialties emerging such as "computational physics" and "computational chemistry." Computation--that is to say, problem solving--becomes an object of explicit concern to scientists, side by side with the substance of the science itself. Out of this new awareness of the computational component of scientific inquiry is arising an increasing interaction among computational specialists in the various sciences and scientists concerned with cognition and AI. This interaction extends well beyond the traditional area of numerical analysis, or even the newer subject of computational complexity, into the heart of the theory of problem solving.
Physicists seeking to handle the great mass of bubble-chamber data produced by their instruments began, as early as the 1960s, to look to AI for pattern recognition methods as a basis for automating the analysis of their data. The construction of expert systems to interpret mass spectrogram data and of other systems to design synthesis paths for chemical reactions are other examples of problem solving in science, as are programs to aid in matching sequences of nucleic acids in DNA and RNA and amino acid sequences in proteins.
Theories of human problem solving and learning are also beginning to attract new attention within the scientific community as a basis for improving science teaching. Each advance in the understanding of problem solving and learning processes provides new insights about the ways in which a learner must store and index new knowledge and procedures if they are to be useful for solving problems. Research on these topics is also generating new ideas about how effective learning takes place--for example, how students can learn by examining and analyzing worked-out examples.
Extensions of Theory
Opportunities for advancing our understanding of decision making and problem solving are not limited to the topics dealt with above, and in this section, just a few indications of additional promising directions for research are presented.
DECISION MAKING OVER TIME
The time dimension is especially troublesome in decision making. Economics has long used the notion of time discounting and interest rates to compare present with future consequences of decisions, but as noted above, research on actual decision making shows that people frequently are inconsistent in their choices between present and future. Although time discounting is a powerful idea, it requires fixing appropriate discount rates for individual, and especially social, decisions. Additional problems arise because human tastes and priorities change over time. Classical SEU theory assumes a fixed, consistent utility function, which does not easily accommodate changes in taste. At the other extreme, theories postulating a limited attention span do not have ready ways of ensuring consistency of choice over time.
AGGREGATION
In applying our knowledge of decision making and problem solving to society-wide, or even organization-wide, phenomena, the problem of aggregation must be solved; that is, ways must be found to extrapolate from theories of individual decision processes to the net effects on the whole economy, polity, and society. Because of the wide variety of ways in which any given decision task can be approached, it is unrealistic to postulate a "representative firm" or an "economic man," and to simply lump together the behaviors of large numbers of supposedly identical individuals. Solving the aggregation problem becomes more important as more of the empirical research effort is directed toward studying behavior at a detailed, microscopic level.
ORGANIZATIONS
Related to aggregation is the question of how decision making and problem solving change when attention turns from the behavior of isolated individuals to the behavior of these same individuals operating as members of organizations or other groups. When people assume organizational positions, they adapt their goals and values to their responsibilities. Moreover, their decisions are influenced substantially by the patterns of information flow and other communications among the various organization units.
Organizations sometimes display sophisticated capabilities far beyond the understanding of single individuals. They sometimes make enormous blunders or find themselves incapable of acting. Organizational performance is highly sensitive to the quality of the routines or "performance programs" that govern behavior and to the adaptability of these routines in the face of a changing environment. In particular, the "peripheral vision" of a complex organization is limited, so that responses to novelty in the environment may be made in inappropriate and quasi-automatic ways that cause major failure.
Theory development, formal modeling, laboratory experiments, and analysis of historical cases are all going forward in this important area of inquiry. Although the decision-making processes of organizations have been studied in the field on a limited scale, a great many more such intensive studies will be needed before the full range of techniques used by organizations to make their decisions is understood, and before the strengths and weaknesses of these techniques are grasped.
LEARNING
Until quite recently, most research in cognitive science and artificial intelligence had been aimed at understanding how intelligent systems perform their work. Only in the past five years has attention begun to turn to the question of how systems become intelligent--how they learn. A number of promising hypotheses about learning mechanisms are currently being explored. One is the so-called connectionist hypothesis, which postulates networks that learn by changing the strengths of their interconnections in response to feedback. Another learning mechanism that is being investigated is the adaptive production system, a computer program that learns by generating new instructions that are simply annexed to the existing program. Some success has been achieved in constructing adaptive production systems that can learn to solve equations in algebra and to do other tasks at comparable levels of difficulty.
Learning is of particular importance for successful adaptation to an environment that is changing rapidly. Because that is exactly the environment of the 1980s, the trend toward broadening research on decision making to include learning and adaptation is welcome.
This section has by no means exhausted the areas in which exciting and important research can be launched to deepen understanding of decision making and problem solving. But perhaps the examples that have been provided are sufficient to convey the promise and significance of this field of inquiry today.
Current Research Programs
Most of the current research on decision making and problem solving is carried on in universities, frequently with the support of government funding agencies and private foundations. Some research is done by consulting firms in connection with their development and application of the tools of operations research, artificial intelligence, and systems modeling. In some cases, government agencies and corporations have supported the development of planning models to aid them in their policy planning--for example, corporate strategic planning for investments and markets and government planning of environmental and energy policies. There is an increasing number of cases in which research scientists are devoting substantial attention to improving the problem-solving and decision-making tools in their disciplines, as we noted in the examples of automation of the processing of bubble-chamber tracks and of the interpretation of mass spectrogram data.
To use a generous estimate, support for basic research in the areas described in this document is probably at the level of tens of millions of dollars per year, and almost certainly, it is not as much as $100 million. The principal costs are for research personnel and computing equipment, the former being considerably larger.
Because of the interdisciplinary character of the research domain, federal research support comes from a number of different agencies, and it is not easy to assess the total picture. Within the National Science Foundation (NSF), the grants of the decision and management sciences, political science, and economics programs in the Social Sciences Division are to a considerable extent devoted to projects in this domain. Smaller amounts of support come from the memory and cognitive processes program in the Division of Behavioral and Neural Sciences, and perhaps from other programs. The "software" component of the new NSF Directorate of Computer Science and Engineering contains programs that have also provided important support to the study of decision making and problem solving.
The Office of Naval Research has, over the years, supported a wide range of studies of decision making, including important early support for operations research. The main source of funding for research in AI has been the Defense Advanced Research Projects Agency (DARPA) in the Department of Defense; important support for research on applications of AI to medicine has been provided by the National Institutes of Health.
Relevant economics research is also funded by other federal agencies, including the Treasury Department, the Bureau of Labor Statistics, and the Federal Reserve Board. In recent years, basic studies of decision making have received only relatively minor support from these sources, but because of the relevance of the research to their missions, they could become major sponsors.
Although a number of projects have been and are funded by private foundations, there appears to be at present no foundation for which decision making and problem solving are a major focus of interest.
In sum, the pattern of support for research in this field shows a healthy diversity but no agency with a clear lead responsibility, unless it be the rather modestly funded program in decision and management sciences at NSF. Perhaps the largest scale of support has been provided by DARPA, where decision making and problem solving are only components within the larger area of artificial intelligence and certainly not highly visible research targets.
The character of the funding requirements in this domain is much the same as in other fields of research. A rather intensive use of computational facilities is typical of most, but not all, of the research. And because the field is gaining new recognition and growing rapidly, there are special needs for the support of graduate students and postdoctoral training. In the computing-intensive part of the domain, desirable research funding per principal investigator might average $250,000 per year; in empirical research involving field studies and large-scale experiments, a similar amount; and in other areas of theory and laboratory experimentation, somewhat less.
Research Opportunities: Summary
The study of decision making and problem solving has attracted much attention through most of this century. By the end of World War II, a powerful prescriptive theory of rationality, the theory of subjective expected utility (SEU), had taken form; it was followed by the theory of games. The past forty years have seen widespread applications of these theories in economics, operations research, and statistics, and, through these disciplines, to decision making in business and government.
The main limitations of SEU theory and the developments based on it are its relative neglect of the limits of human (and computer) problem-solving capabilities in the face of real-world complexity. Recognition of these limitations has produced an increasing volume of empirical research aimed at discovering how humans cope with complexity and reconcile it with their bounded computational powers. Recognition that human rationality is limited occasions no surprise. What is surprising are some of the forms these limits take and the kinds of departures from the behavior predicted by the SEU model that have been observed. Extending empirical knowledge of actual human cognitive processes and of techniques for dealing with complexity continues to be a research goal of very high priority. Such empirical knowledge is needed both to build valid theories of how the U.S. society and economy operate and to build prescriptive tools for decision making that are compatible with existing computational capabilities.
The complementary fields of cognitive psychology and artificial intelligence have produced in the past thirty years a fairly well-developed theory of problem solving that lends itself well to computer simulation, both for purposes of testing its empirical validity and for augmenting human problem-solving capacities by the construction of expert systems. Problem-solving research today is being extended into the domain of ill-structured problems and applied to the task of formulating problem representations. The processes for setting the problem agenda, which are still very little explored, deserve more research attention.
The growing importance of computational techniques in all of the sciences has attracted new attention to numerical analysis and to the topic of computational complexity. The need to use heuristic as well as rigorous methods for analyzing very complex domains is beginning to bring about a wide interest, in various sciences, in the possible application of problem-solving theories to computation.
Opportunities abound for productive research in decision making and problem solving. A few of the directions of research that look especially promising and significant follow:
• A substantially enlarged program of empirical studies, involving direct observation of behavior at the level of the individual and the organization, and including both laboratory and field experiments, will be essential in sifting the wheat from the chaff in the large body of theory that now exists and in giving direction to the development of new theory.
• Expanded research on expert systems will require extensive empirical study of expert behavior and will provide a setting for basic research on how ill-structured problems are, and can be, solved.
• Decision making in organizational settings, which is much less well understood than individual decision making and problem solving, can be studied with great profit using already established methods of inquiry, especially through intensive long-range studies within individual organizations.
• The resolution of conflicts of values (individual and group) and of inconsistencies in belief will continue to be highly productive directions of inquiry, addressed to issues of great importance to society.
• Setting agendas and framing problems are two related but poorly understood processes that require special research attention and that now seem open to attack.
These five areas are examples of especially promising research opportunities drawn from the much larger set that are described or hinted at in this report.
The tools for decision making developed by previous research have already found extensive application in business and government organizations. A number of such applications have been mentioned in this report, but they so pervade organizations, especially at the middle management and professional levels, that people are often unaware of their origins.
Although the research domain of decision making and problem solving is alive and well today, the resources devoted to that research are modest in scale (of the order of tens of millions rather than hundreds of millions of dollars). They are not commensurate with either the identified research opportunities or the human resources available for exploiting them. The prospect of throwing new light on the ancient problem of mind and the prospect of enhancing the powers of mind with new computational tools are attracting substantial numbers of first-rate young scientists. Research progress is not limited either by lack of excellent research problems or by lack of human talent eager to get on with the job.
Gaining a better understanding of how problems can be solved and decisions made is essential to our national goal of increasing productivity. The first industrial revolution showed us how to do most of the world's heavy work with the energy of machines instead of human muscle. The new industrial revolution is showing us how much of the work of human thinking can be done by and in cooperation with intelligent machines. Human minds with computers to aid them are our principal productive resource. Understanding how that resource operates is the main road open to us for becoming a more productive society and a society able to deal with the many complex problems in the world today.
 
Re: WORST CASE SOME ONE!!!

Extension Organization of the Future: Linking Emotional Intelligence and Core Competencies
Deliece Ayers


Barbara Stone
Extension Planning and Performance Specialist


Texas Agricultural Extension Service
Texas A&M University
College Station, Texas


Introduction

Recruiting, hiring, and keeping desirable employees are fundamental goals in institutions. Developing an exceptional workforce, however, has usually been a hit-and-miss process. Traditionally, organizations have relied on scholastic achievement, standardized tests, and an assortment of other pedagogical measures to recruit and keep good workers; oftentimes decisions about hiring have been made within the first thirty seconds of an interview.

In 1973, however, David C. McClelland, in a paper titled "Testing for Competence Rather Than Intelligence," related what every public school teacher knows: academic over-achievers are not always the most successful people in their professions (Spencer and Spencer, 1993). McClelland concluded that job selection and performance should be based on desired, observable behaviors, instead of on traditional standardized tests.

National and international companies like Amoco, DuPont, Federal Express, Procter and Gamble, and Sony are developing competency models to improve the quality of the employees hired and to improve employee performance in the workplace. A national survey of American employers revealed that six of seven desired traits for entry-level workers were non-academic (Goleman, 1998, pp. 12-13). These six traits related to Emotional Intelligence (EI), defined as "an understanding of how you and others feel, and what to do about it" (Sims, 1998, p. E2).

A 1996-97 study of the Texas Agricultural Extension Service was designed to identify outstanding characteristics of Extension educators. The results were similar to the business survey. The majority of core competencies related to emotional intelligence. This paper discusses emotional intelligence as it relates to a competency model for a large Extension organization.

Emotional Intelligence

Generations ago, John Adams, Robert E. Lee and others understood the concept of emotional intelligence. They valued ambition, achievement, flexibility, and sensitivity and knew that people could learn to delay immediate self-gratification (Eicher, 1997; Ellis, 1994). More recently, authors like Daniel Goleman (1998) and Hendrie Weisinger (1998) have discussed the importance of emotional intelligence at work.

Emotional intelligence at work is the ability to understand yourself and others well enough to express emotions in a healthy way, which is critical to job success and career satisfaction (Sims, 1998). Goleman says that professionally successful people have high emotional intelligence in addition to the traditional cognitive intelligence or specialized content knowledge (Goleman, 1998, Linkage Conference). For example, having the expertise to conduct soil sample analyses is important, but it is critical to have the ability to communicate in an effective and sensitive way when the soil results need to be interpreted, or when they are late, or when another sample is needed because the first one was lost or inappropriately secured, or when one of the five or six people you have to work with to get the analyses done is argumentative or uncooperative.

Likewise, it may be important to be able to successfully execute an Extension program, but it may be more essential to be able to effectively interpret more non-traditional programs and graciously retire programs that no longer add value, and then effectively communicate the decision to loyal customers. Similarly, it is good to be technologically adept, but it is invaluable to be able to accept the ambiguity that comes with change, or with the evolution of a project, or with working with diverse groups of people. Successful people, then, have high-level critical thinking skills, technical expertise, and, most importantly, emotional maturity/emotional intelligence.

Emotional Intelligence and Core Competencies

In his book, Emotional Intelligence: Why It Can Matter More Than IQ, Goleman states that self-motivation, self-control, persistence, and zeal are all a part of Emotional Intelligence (Goleman, 1997). Not surprisingly, these behaviors are also associated with core competencies. Competencies are desired behaviors, and core competencies are "personal competencies required of everyone..." (Anderson, 1998, p. 211).

Core competencies identified in the Texas Extension study paralleled the emotional intelligence competencies. For example, "Personal Learning/Self-Development," "Achievement Motivation," and "Initiative" are part of the Extension Competency Model. The definitions of these competencies feature emotional intelligence characteristics. For instance, Personal Learning/Self-Development is defined by the authors as "an important part of creativity in which a person is willing to consider different ideas, and is willing to change and grow according to legitimate feedback from others."

Because emotional intelligence can better predict job success than traditional measures, and because it can be learned, emotional intelligence in competency curriculum development is practical and advisable. The Texas Agricultural Extension Service has begun writing competency activities that introduce emotional intelligence concepts and that develop emotional intelligence attributes. In the Organizational Savvy Competency, participants who seek to develop this competency are asked to summarize ideas about social harmony, group dynamics, and academic talent as they relate to emotional intelligence.

Employees who wish to increase their organizational savvy are asked to record when they have provided leadership for a collaborative project, when they have tried to meet new agents by calling and welcoming them to Extension, and when they have identified national and international trends that might affect the Extension organization. Other activities in the Organizational Savvy Competency refer interested employees to books and web sites that keep them current about a broad scope of Extension programs.

In the Personal Learning/Self-Development Competency, participants are encouraged to take assigned personality tests for self-awareness and interpersonal understanding in the "Activities" section. In the Initiative Competency, employees are encouraged to look for several alternatives before making a decision; they are asked to practice this skill and record their experiences.

These competency-based development activities help employees increase their emotional intelligence, and they help them strengthen areas they have personally identified or that have been identified by their colleagues or supervisors.

Summary

The current emphasis on emotional intelligence supports the identified Extension core competencies, in that the Extension competencies are comparable to emotional intelligence competencies. Emotional intelligence and competencies are a natural fit, and they will, no doubt, become increasingly popular as Extension organizations recognize the importance of emotional intelligence and competencies in building the Extension workforce for the 21st century.
 
Re: WORST CASE SOME ONE!!!

Group decision-making.
by Schwartz, Andrew E.


Abstract- Many managers like to believe that they are accomplished in such group decision-making processes as action planning, goal setting and problem-solving. However, their ability to implement such techniques effectively is often hindered by their lack of understanding of the dynamics of these group decision-making processes. As a result, these managers often end up perpetuating problems that they themselves create through their insensitivity to the needs of other group members. Hence, instead of achieving a consensus, such managers only serve their own interests by leading the group to situations such as decision-making by lack of response or by authority rule. Sometimes, they lead the group toward decision-making by minority rule or by majority rule, as the case might be. The better way would be for them to track how decisions are made and ensure that they are reached through genuine consensus.


Decision By Lack of Response (The "Plop" Method)

The most common--and perhaps least visible--group decision-making method is that in which someone suggests an idea and, before anyone else has said anything about it, someone else suggests another idea, until the group eventually finds one it will act on. This results in shooting down the original idea before it has really been considered. All the ideas that are bypassed have, in a sense, been rejected by the group. But because the "rejections" have been simply a common decision not to support the idea, the proposers feel that their suggestions have "plopped." The floors of most conference rooms are littered with "plops."

Decision by Authority Rule

Many groups start out with--or quickly set up--a power structure that makes it clear that the chairman (or someone else in authority) will make the ultimate decision. The group can generate ideas and hold free discussion, but at any time the chairman may say that, having heard the discussion, he or she has decided upon a given plan. Whether this method is effective depends a great deal upon whether the chairman is a sufficiently good listener to have culled the right information on which to make the decision. Furthermore, if the group must also implement the decision, then the authority-rule method produces a bare minimum of involvement by the group (basically, they will do it because they have to, not necessarily because they want to). Hence it undermines the potential quality of implementation.

Decision by Minority Rule

One of the most-often-heard complaints of group members is that they feel "railroaded" into some decision. Usually, this feeling results from one, two, or three people employing tactics that produce action--and therefore must be considered decisions--but which are taken without the consent of the majority.

A single person can "enforce" a decision, particularly if he or she is in some kind of chairmanship role, by not giving opposition an opportunity to build up. For example, the manager might consult a few members on even the most seemingly insignificant step and may get either a negative or positive reaction. The others have remained silent. If asked how they concluded there was agreement, chances are they will say, "Silence means consent, doesn't it? Everyone has a chance to voice opposition." If the group members are interviewed later, however, it sometimes is discovered that an actual majority was against a given idea, but that each one hesitated to speak up because she thought that all the other silent ones were for it. They too were trapped by "silence means consent."

Finally, a common form of minority rule is for two or more members to come to a quick and powerful agreement on a course of action, then challenge the group with a quick, "Does anyone object?," and, if no one raises their voice within two seconds, they proceed with "Let's go ahead then." Again the trap is the assumption that silence means consent.

Decision by Majority Rule (Voting and Polling)

More familiar decision-making procedures are often taken for granted as applying to any group situation because they reflect our political system. One simple version is to poll everyone's opinion following some period of discussion. If the majority of participants feels the same way, it is often assumed that is the decision. The other method is the more formal one of stating a clear alternative and asking for votes in favor of it, votes against it, and abstentions.

On the surface, this method seems completely sound, but surprisingly often it turns out that decisions made by this method are not well implemented, even by the group that made the decision. What is wrong? Typically, it turns out that two kinds of psychological barriers exist:

First, the minority members often feel there was an insufficient period of discussion for them to really get their point of view across; hence they feel misunderstood and sometimes resentful.

Second, the minority members often feel that the voting has created two camps within the group and that these camps are now in a win-lose competition: The minority feels that their camp lost the first round, but that it is just a matter of time until it can regroup, pick up some support and win the next time a vote comes up.

In other words, voting creates coalitions, and the preoccupation of the losing coalition is not how to implement what the majority wants, but how to win the next battle. If voting is to be used, the group must be sure that it has created a climate in which members feel they have had their day in court--and where all members feel obligated to go along with the majority decision.

The Better Way

Because there are time constraints in coming to a group decision and because there is no perfect system, a decision by consensus is one of the most effective methods. Unfortunately, it is also one of the most time-consuming techniques for group decision-making. It is important to understand that consensus is not the same thing as unanimity. Rather, it is a state of affairs where communications have been sufficiently open (and the group climate has been sufficiently supportive) to make everyone in the group feel that they have had their fair chance to influence the decision. Someone then tests for the "sense of the meeting," carefully avoiding formal procedures like voting. If there is a clear alternative to which most members subscribe and if those who oppose it feel they have had their chance to influence, then a consensus exists. Operationally, it is defined by the fact that those members who would not have chosen the majority alternative nevertheless understand it clearly and are prepared to support it, recognizing that it is probably about as good as the alternatives they preferred.

In order to achieve such a condition, time must be allowed by the group for all members to state their opposition--and to state it fully enough to get the feeling that others really do understand them. This condition is essential if they are later to free themselves of the preoccupation that they could have gotten their point of view across if others had understood what they really had in mind. Only by careful listening to the opposition can such feelings be forestalled, thereby allowing effective group decisions to be reached.

Of course, recognizing the several types of group decision-making is only part of the process. Managers must also determine which approach is best suited to their own situation.

What are the actual steps in a decision made by a group?

1. Identify the Problem. Tell specifically what the problem is and how you experience it. Cite specific examples.

"Own" the problem as yours -- and solicit the help of others in solving it, rather than implying that it's someone else's problem that they ought to solve. Keep in mind that if it were someone else's problem, they would be bringing it up for discussion.

In the identification phase of problem-solving, avoid references to solutions. This can trigger disagreement too early in the process and prevent the group from ever making meaningful progress.

Once there seems to be a fairly clear understanding of what the problem is, this definition should be written in very precise language. If a group is involved, it should be displayed on a flip chart or chalkboard.

2. Clarify the Problem. This step is most important when working with a group of people. If the problem is not adequately clarified so that everyone views it the same, the result will be that people will offer solutions to different problems. To clarify the problem, ask someone in the group to paraphrase the problem as they understand it. Then ask the other group members if they see it essentially the same way. Any differences must be resolved before going any further.

In clarifying the problem, ask the group the following questions: Who is involved with the problem? Who is likely to be affected? Can we get them involved in solving the problem? Who legitimately or logically should be included in the decision? Are there others who need to be consulted prior to a decision?

These questions assume that commitment from those involved (and affected by the problem) is desirable in implementing any changes or solutions. The best way to get this commitment is to include those involved and affected by the problem in determining solutions.

3. Analyze the Cause. Any deviation from what should be is produced by a cause or interaction of causes. In order to change "what is" to "what is wanted," it is usually necessary to remove or neutralize the cause in some way. This calls for precise isolation of the most central or basic cause (or causes) of the problem and requires close analysis of the problem to clearly separate the influencing from the non-influencing factors.

This is probably an easier process to follow when dealing with problems involving physical things rather than with interpersonal or social issues. Typically, interpersonal and social problems are more likely to spring from a dynamic constellation of causes that will be more difficult to solve if the causes are only tackled one at a time. Still, whether dealing with physical or social problems, it is important to seek those causes that are most fundamental in producing the problem. Don't waste energy on causes that have only a tangential effect.

4. Solicit Alternative Solutions To the Problem. This step calls for identifying as many solutions to the problem as possible before discussing the specific advantages and disadvantages of each. What happens frequently in problem-solving is that the first two or three suggested solutions are debated and discussed for the full time allowed for the entire problem-solving session. As a result, many worthwhile ideas are never identified or considered. By identifying many solutions, a superior idea often surfaces that reduces or even eliminates the need for discussing details of more debatable issues. These solutions may be logical attacks at the cause or they may be creative solutions that need not be rational. Therefore, it is important at this step to limit the time spent discussing any one solution and to concentrate instead on generating as many as possible.

5. Select One or More Alternatives for Action. Before selecting specific alternatives for action, it is advisable to identify criteria the desired solution must meet. This can eliminate unnecessary discussion and help focus the group toward the solution (or solutions) that will most likely work.

At this point, it becomes necessary to look for and discuss the advantages and disadvantages of options that appear viable. The task is for the group members to come to a mutual agreement on which solutions to actually put into action. It is desirable for positive comments to be encouraged (and negative comments to be ignored or even discouraged) about any of the solutions. One solution should be the best, of course, but none should be labeled as a "bad idea."

6. Plan for Implementation. This requires looking at the details that must be performed by someone for a solution to be effectively activated. Once the required steps are identified, it means assigning these to someone for action; it also means setting a time for completion.

Not to be forgotten when developing the implementation plan: Who needs to be informed of this action?

7. Clarify the Contract. This is to ensure that everyone clearly understands what the agreement is that people will do to implement a solution. It is a summation and restatement of what people have agreed to do and when it is expected they will have it done. It rules out possible misinterpretation of expectations.

8. The Action Plan. Plans are only intellectual exercises unless they are transformed into action. This calls for people assigned responsibility for any part of the plan to carry out their assignments according to the agreed upon contract. This is the phase of problem-solving that calls for people to do what they have said they would do.

9. Provide for Evaluation And Accountability. After the plan has been implemented and sufficient time has elapsed for it to have an effect, the group should reconvene and discuss evaluation and accountability. Have the agreed upon actions been carried out? Have people done what they said they would do?

If they have not accomplished their assignments, it is possible that they ran into trouble that must be considered. Or it may be that they simply need to be reminded or held accountable for not having lived up to their end of the contract. Once the actions have been completed, it is necessary to assess their effectiveness. Did the solution work? If not, can a revision make it work? What actions are necessary to implement changes?
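
As a purely illustrative aid (not part of the original article), the nine steps above can be treated as a simple checklist that a facilitator tracks from meeting to meeting. The short Python sketch below is hypothetical: the step wording, the ProblemSolvingSession class, and the example problem are invented for illustration only.

```python
# Minimal, hypothetical sketch: the nine-step process as a trackable checklist.
from dataclasses import dataclass, field

STEPS = [
    "Identify the problem",
    "Clarify the problem",
    "Analyze the cause",
    "Solicit alternative solutions",
    "Select one or more alternatives for action",
    "Plan for implementation",
    "Clarify the contract",
    "Carry out the action plan",
    "Provide for evaluation and accountability",
]

@dataclass
class ProblemSolvingSession:
    problem_statement: str
    completed: list = field(default_factory=list)    # steps finished so far
    assignments: dict = field(default_factory=dict)  # step -> (who, due date)

    def complete_step(self, step: str, who: str = "", due: str = "") -> None:
        if step not in STEPS:
            raise ValueError(f"Unknown step: {step}")
        self.completed.append(step)
        if who:
            self.assignments[step] = (who, due)

    def next_step(self) -> str:
        for step in STEPS:
            if step not in self.completed:
                return step
        return "All steps complete; reconvene to evaluate results."

# Example usage with an invented problem statement.
session = ProblemSolvingSession("Late test results are frustrating customers")
session.complete_step("Identify the problem")
session.complete_step("Clarify the problem")
print(session.next_step())  # -> "Analyze the cause"
```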

Other Considerations

Keeping adequate records of all steps completed (especially brainstorming) can allow energy to be "recycled." Falling back on thinking that was previously done makes it unnecessary to "plow the same ground twice."

When entering into problem-solving, remember that it is unlikely that the best solution will be found on the first attempt. Good problem-solving can be viewed as working like a guidance system: The awareness of the problem is an indication of being "off course," requiring a correction in direction. The exact form the correction is to take is what problem-solving is aimed at deciding. But once the correction (the implemented solution) is made, it is possible that, after evaluation, it will prove to be erroneous--perhaps even throwing you farther off course than in the beginning.

If this happens, the task becomes to immediately compute what new course will be effective. Several course corrections may be necessary before getting back on track to where you want to go. Still, once the desired course is attained, careful monitoring is required to avoid drifting off course again unknowingly. Viewing problem-solving in this realistic manner can save a lot of the frustration that comes from expecting it to always produce the right answers.
 
Re: WORST CASE SOME ONE!!!

Organizational Redesign is structuring an organization, division or department to optimize how it supplies products and services to its clients and customers.

The focus and, consequently, the examples in this presentation will be on organizations. The same principles with some "translation" apply to divisions and departments.

Designing an organizational structure is dependent upon:

The kind and quality of information it gathers from its customers, suppliers and partners
How the company gathers the information
How it interacts with each of these constituents
How this information flows through the organizational structures
Who has access to it and who doesn't
How the information is utilized in making decisions
How the information is stored and analyzed for ease of use
Whether the organizational processes and systems mirror the information flow
ISN'T IT DONE THAT WAY NOW?
In general, NO!!

Most organizations look like this:

[Diagram not reproduced here.]
SO, WHAT'S THE PROBLEM?
The typical organization structure results in many of the problems with which we are asked to deal, such as:

Conflict between departments (e.g., the perennial one between Sales and Operations)
Long lead times in developing new products and services
Quality problems, billing inaccuracies, etc.
Inefficiencies (which are usually blamed on individuals)
Not being able to keep up with customer demands
Low employee morale (often related to staff not being empowered to make decisions)
Departmental goals and performance measures not being cascaded down through the entire organization (goals stop at the top of the hierarchy without a real appreciation of how "it all fits together")
WHY DOES IT HAPPEN?
The early stages of a business's relationship to its customers often look like this:

[Diagram not reproduced here.]
Information about a customer would be gathered by one department which then would parcel it out to the other departments as it saw fit. Frequently, the information wasn't disseminated and discussed between and among the departments.

This structure and flow of information is usually sufficient for an early stage or smaller company to function. The information needed about customers is usually limited ("Do they like it or don't they?").

However, for large and rapidly growing companies that have been accumulating competitors by the bushelful, the picture changes:

[Diagram not reproduced here.]
The company is larger, there are many more customers and different kinds of customers, each with a different variety of needs, expectations, strategies, etc.

The old structure, however, persists in too many companies with the one department (usually sales and/or marketing) remaining as the "gatekeeper" for the dispersion of customer information.

The net result is that the information each department requires to do its job is either lacking, late or incorrect.

No wonder, then, the problems that plague large and growing companies!! And, we would venture to say, companies that are stagnant also fail to consider the importance of their structure.

THE SOLUTION? REORGANIZATION
Example #1:

The Problem: A firm supplies large medical equipment to hospitals, care centers, etc. Their problem was that the cost of inventory (of both parts and equipment) was eroding their margins, along with a growing failure to deliver service on time, the emergence of quality issues and similar concerns. The firm's structure was traditional, with each department being its own "silo".

Historically, Marketing's discovery of new markets led Sales to sell any new product it could get hold of (typically whatever was most financially rewarding for the salespeople). The net result was the lack of a "family of products" and, therefore, an absence of standardized (and fewer) parts, a need for an ever-increasing number of Service personnel, and an ever-growing need for training of those Service personnel.

The Solution: Replacing the silos with cross-functional teams at the top, i.e., teams with members from each discipline. The cross-functional teams retained the old designations (Marketing, Sales, Service, Operations).

The Result: All four teams focused on who their customer was, is and should be, and what could be in the best interest of these units. For example, the Supply team (composed of representatives from each discipline and headed by a Supply person) had as many suggestions as to which customer niche should be targeted as the Marketing team. They offered the characteristics of an ideal customer based on repair rates, additional services required, machine capabilities, etc. Sales and Purchasing, thus, had their marching orders; Marketing had the necessary constraints placed on where they could go to find customer niches; and, as a result, Supply lowered their costs. Compensation programs for sales people were adjusted accordingly. Their IT system was restructured so as to capture the appropriate customer information and shuttle the information each team required.

Example #2:

The Problem: A large and growing infertility medical practice was experiencing complaints from patients, problems with staff, quality problems (e.g., patients kept waiting, late test results, etc.). Based on interviews with the entire staff and workflow observations, it was clear that there was a serious rift between the Finance/Business and the Clinical Departments. People in one department complained bitterly about people in the other.

The Solution: Determining and designating who in fact was the "customer", namely, the patient and her husband. They had to be served.

To mirror this, the entire company was placed under the Clinical Department after a director with the requisite skills (both clinical and financial) and background was found. Patient care became pre-eminent. For example, the receptionist (a clinical function) had immediate access to basic insurance information that patients sought - previously a patient seeking information talked to at least three people before obtaining the data she needed. The design of the offices, which earlier had been a function of what the Business section deemed financially prudent, became a clinical decision, resulting in much more inviting and pleasant surroundings. The business functions (billing, collections, financials) were redesigned so that any information needed about patients was immediately accessible by any department that needed it (marketing, clinical, etc.). In addition, programs to ensure the comfort of the patient couple were introduced.

Also, corporate goals and performance standards were created and cascaded downward through each section so that everyone knew what was expected of them. This forced each department of the company to "negotiate" not only their goals but to ensure that their action steps were supportive of and integrated with those of the other departments.

The Result: The number of new and retained patients has increased, expenses have been reduced, collections from insurance companies and co-payments have ratcheted upward dramatically, morale is good and growing.

THE STEPS
Determining How the Company Goes to Market
Sketch how the current organizational structure (e.g., departments, roles, responsibilities, information flow, decision-making, etc.) supports how the company goes to market. Include:
What the current structure does well.
What the current structure does not do well.
If possible, "numbers" that put a value on what is done well and what is not.
Draw an ideal organizational structure (first draft) that better reflects how the company goes to market. This step is crucial in establishing the value of the organizational change. Focus on:
How it can improve upon the current situation (in "numbers")
What it can improve upon.
How it will affect the organization and its parts, processes and people.



Planning
Determine who should be involved in the planning process, in particular "RACI", i.e., who is Responsible, Accountable, Consulted and who should be kept Informed (a small illustrative sketch of a RACI chart follows this list).
List the major players who perform or are involved in the key processes that support the current structure.
What would the ideal organization (processes, roles, people) look like (first draft)? Who would fill what position? How can/might/should the current players be utilized in this new schema?
What new equipment, technology, resources, people, skills or systems would be needed in the new structure?
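
As a purely illustrative sketch (the tasks, roles, and names below are invented and not drawn from the cases above), a RACI chart for the planning phase can be kept as a simple mapping from each planning task to the people holding each role:

```python
# Hypothetical RACI chart for a reorganization planning effort (illustration only).
raci = {
    "Sketch current go-to-market structure": {
        "Responsible": ["Project Manager"],
        "Accountable": ["Sponsor"],
        "Consulted":   ["Department Heads"],
        "Informed":    ["All Staff"],
    },
    "Draft ideal organizational structure": {
        "Responsible": ["Design Team"],
        "Accountable": ["Sponsor"],
        "Consulted":   ["Sales", "Operations", "IT"],
        "Informed":    ["Oversight Committee"],
    },
}

def who_is(role: str, task: str) -> list:
    """Return the people holding a given RACI role for a task."""
    return raci.get(task, {}).get(role, [])

print(who_is("Consulted", "Draft ideal organizational structure"))
# -> ['Sales', 'Operations', 'IT']
```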



Implementation
Develop a schedule (dates and RACI) for the change from the current situation to the ideal state. Create flowcharts that capture the changeover. Be specific about:
When and how the change from the old to the new will occur.
Impediments that might appear during the transition (e.g., a huge amount of business that might distract people's attention). Create scenarios of what might occur and how they can be handled.
Create a program that would prepare employees for the change.



Administrative Issues
Salary adjustments?
Assignment of roles: Sponsor, Project Manager, Oversight Committee, Teams
Regular communication to staff regarding the progress, decisions, plans, etc., of the project.
A written plan that is shared with key personnel, referred to regularly, and updated when necessary.
Scheduled "monitoring" meetings between the Project Team, Sponsor, Oversight Committee.
 
Re: WORST CASE SOME ONE!!!

A Tripartite Model of Motivation for Achievement: Attitude/Drive/Strategy*
Bruce W. Tuckman, The Ohio State University

*Paper presented in the Symposium: Motivational Factors Affecting Student Achievement – Current Perspectives. Annual Meeting of the American Psychological Association, Boston, August 1999.

Abstract

This paper presents a model of motivation for achievement that includes three generic motivational factors that influence outcome attainment: (1) attitude or belief about one’s capability to attain the outcome; (2) drive or desire to attain the outcome; (3) strategy or techniques employed to attain the outcome. Recent experimental research evidence is presented to illustrate the contributive influence of each proposed factor on academic engagement and achievement, followed by some empirically-derived causal models that link the various factors to achievement outcomes.

A Tripartite Model of Motivation for Achievement: Attitude/Drive/Strategy
Motivating students to achieve in school is a topic of great practical concern to teachers and parents, and of great theoretical concern to researchers. New books on the topic appear with increasing frequency and relevant research is proliferating at a rapid rate. Higher education institutions are beginning to provide assistance to students, especially new ones, in developing so-called study skills and self-regulatory skills such as time management. One of the greatest challenges and opportunities of the 21st century will be for schools at all levels to focus more on assisting students to become motivated in order that they can succeed in school.

The purpose of this paper is to present a proposed model of motivation for achievement, as applied particularly to the educational setting. This model focuses on three generic variables: (1) attitude or beliefs that people hold about themselves, their capabilities, and the factors that account for their outcomes; (2) drive or the desire to attain an outcome based on the value people place on it; (3) strategy or the techniques that people employ to gain the outcomes they desire. Each of these variables will be described in more detail, and evidence will be provided to support the contention that each exerts an important influence on motivation to achieve in an academic environment.

Achievement outcomes have been regarded as a function of two characteristics, "skill" and "will" (McCombs and Marzano, 1990), and these must be considered separately because possessing the will alone may not ensure success if the skill is lacking. The focus in this model is on will, or the motivation to achieve the outcome, and it will be considered separately from level of skill. Where achievement measures such as scores on course examinations or grades are used as criteria of motivation to achieve, measures of skill have to be separated out or controlled for. To measure motivation for achievement directly, measures of engagement must be examined.

Cognitive engagement represents the amount of effort spent in either studying or completing assignments. It is the result of motivation, not its source. Pintrich and Schrauben (1992) review a large body of research that suggests that (1) the value of an outcome to the student affects that student's motivation, and (2) motivation leads to cognitive engagement, such engagement manifesting itself in the use or application of various learning strategies.

Many of the studies Pintrich and Schrauben describe employed self-reports of learning strategy use as a measure of cognitive engagement. Such studies become dependent on what students claimed to be doing as a way of determining that they were indeed engaged in a task. To avoid this dependence on students’ self-reports, the studies I carried out (either alone or in collaboration with Tom Sexton) and report on in this paper operationalized cognitive engagement as effort expenditure or actual performance on a homework task: writing test items on text chapters, which students had the option to complete for extra credit. In studies I carried out and report on using achievement test scores as a measure of motivation to achieve, relevant data on students' skill level was used as a control.

In the next three sections, I will briefly review findings that suggest a relationship between each of three proposed causal variables and motivation to achieve. Following that, I will present some evidence about their combined impact as revealed through causal modeling.

Attitude

The attitude that is often used in conjunction with motivation to achieve is self-efficacy, or how capable people judge themselves to be to perform a task successfully (Bandura, 1977). Bandura (1997) provides extensive evidence and documentation for the conclusion that self-efficacy is a key factor in the extent to which people can bring about significant outcomes in their lives. Specifically, there is considerable evidence to support the contention that self-efficacy beliefs contribute to academic achievement by enhancing the motivation to achieve. For example, Schunk (1989), in a number of studies, has shown that children with the same level of intellectual capability differ in their performance as a function of their level of self-efficacy.

In my own (collaborative) work comparing the task performance of students at high, intermediate, and low levels of self-efficacy with regard to the task (Tuckman and Sexton, 1990), the highest self-efficacy group was found to be twice as productive as the middle group, and 10 times as productive as the low group. Moreover, the high group outperformed their own expectations by 22%, the intermediate group equaled their own expectations, and the low group fell below their own expectations by 77%. The results reflect a clear relationship between self-efficacy beliefs and academic productivity.

Efficacy beliefs have also been shown to play a mediational role in academic attainment, especially between instructional or induced-strategy treatments and academic outcomes. Schunk and Gunn (1986) report that providing children with strategy instruction and training in self-monitoring and self-correcting increased performance both directly and through the enhancement of self-efficacy. Schunk and Rice (1993) found that training in verbal self-guidance increased both self-efficacy and reading comprehension skill.

In one of my (collaborative) studies (Tuckman and Sexton, 1991), encouraging feedback was found to increase self-efficacy on the task and subsequent performance on the task. Statistical analyses showed that when performance was held constant, encouragement was seen to affect self-efficacy, but when self-efficacy was held constant, encouragement had no effect on performance. Hence, self-efficacy functioned as a mediator of performance.
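
The logic of that analysis (an effect that disappears once the mediator is held constant) can be illustrated with a small regression sketch. The code below is only schematic, using invented data and ordinary least squares; it is not the analysis actually reported in Tuckman and Sexton (1991).

```python
# Schematic mediation illustration with invented data:
# encouragement raises self-efficacy, and self-efficacy drives performance.
import numpy as np

rng = np.random.default_rng(0)
n = 200
encouragement = rng.integers(0, 2, n).astype(float)        # 0 = none, 1 = encouraged
self_efficacy = 2.0 * encouragement + rng.normal(0, 1, n)   # raised by encouragement
performance = 1.5 * self_efficacy + rng.normal(0, 1, n)     # driven by self-efficacy

def coefs(y, *predictors):
    """Ordinary least-squares coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Encouragement predicts performance on its own...
print(coefs(performance, encouragement)[1])                  # clearly nonzero
# ...but adds little once self-efficacy is held constant,
# which is the pattern expected when self-efficacy mediates the effect.
print(coefs(performance, self_efficacy, encouragement)[2])   # near zero
```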

Using "control beliefs," a somewhat more complex construct of beliefs than self-efficacy, one that combined capacity and strategy beliefs with more generalized expectations, Skinner, Wellborn, and Connell (1990) sought to predict achievement. They found that elementary school children’s perceived control influenced academic performance by promoting or undermining engagement in learning activities.

The relation between self-efficacy and performance is best summed up by Bandura (1997, p. 61):

"The evidence is relatively consistent in showing that efficacy beliefs contribute significantly to level of motivation and performance. They predict not only the behavioral changes accompanying different environmental influences but also differences in behavior between individuals receiving the same environmental influence, and even variation within the same individual in the tasks performed and those shunned or attempted but failed."

Drive

Is attitude about one’s capability alone enough to account for motivation to achieve? Evidence suggests otherwise. Kirsch (1982) presented subjects with a hypothetical feared task, specifically picking up a snake and holding it in front of their face, and asked them whether they would be able and willing to do it. They reported having neither capability nor inclination. He then offered them a progressively stronger incentive (namely, more money), and eventually reached a level where all subjects reported both the capability and willingness to perform the feared task. He also found that subjects would continuously perform a task for which they had little expectation for success, namely throwing a wadded-up piece of paper across a room into a wastebasket, if the consequence for success was a considerable reward and the punishment for failure was zero (Kirsch, 1985).

Maddux, Norton, and Stoltenberg (1986) also took issue with self-efficacy theory for disregarding outcome value as a potential influence on behavioral intentions. They showed that outcome value had a significant influence on behavioral intentions, especially among people high in self-efficacy. Also, the distinction between intrinsic and extrinsic motivation (Deci and Ryan, 1985) is an acknowledgement of the role of the value of a behavior in the determination of whether or not the behavior is performed.

One potential source of the drive to perform is the incentive value of the performance. Incentive theories of motivation (e.g., Rotter, Phares and Chance, 1972; Overmier and Lawry, 1979) suggest that people will perform an act when its performance is likely to result in some outcome they desire, or that is important to them. For example, in anticipation of a situation in which a person is required to perform, that person may expend considerable effort in preparation because of the mediation provided by the desire to achieve success or avoid failure. That desire would be said to provide incentive motivation for the person to expend the effort. Accordingly, a test, as a stimulus situation, may be theorized to provoke students to study as a response, because of the mediation of the desire to achieve success or avoid failure on that test. Studying for the test, therefore, would be the result of incentive motivation.

I was involved in four experiments on the effect of incentive motivation on academic achievement, each of which used tests as a mediator. They were done using the spotquiz, a weekly, announced test made up of seven completion-type items on the textbook chapter assigned for that week, as a source of incentive motivation to motivate timely processing of the text. No direct overlap existed between spotquizzes and the tests used to measure final achievement, so they would not function as a form of practice. In the first experiment (Tuckman, 1996), using a five-week segment of an undergraduate college course, students taking spotquizzes were compared to students of comparable aptitude employing the learning strategy of identifying, defining, and elaborating upon 21 key textbook terms per chapter in required homework assignments, and those who neither took spotquizzes nor completed homework assignments. The homework group was used as a control for time-on-task. Large significant differences on the final achievement test favored the spotquiz group, whose performance exceeded the homework group by 16% and the control group by 24%.

In the second experiment (Tuckman, 1996), carried out over an entire 15-week college course, a spotquiz group was compared to another group of students of comparable aptitude, taking the same course at the same time, that completed the same terms-definitions-elaborations homework assignments as in the first experiment. Students were further subdivided into high, medium, and low on prior grade point average. On the three exams, results significantly favored the spotquiz group (this time by 4%), despite students reporting spending no more time studying for spotquizzes than was required by the other group to complete the homework assignments. Moreover, comparisons of treatments by GPA level yielded a significant interaction, and revealed that the major beneficiaries of the spotquizzes were low GPA students, precisely those who tend to devote the least amount of time to schoolwork. Among these low GPA students, those taking the spotquizzes outachieved homework students by an average of 14% across the three course examinations. No differences across treatments were found for either high or medium GPA students. The spotquizzes appeared to provide students with an incentive or drive to study on a regular basis.

In the third experiment (Tuckman, 1998), weekly spotquizzes were compared to the learning strategy of chapter outlining as a weekly homework assignment. In this study, students were classified as high, medium or low procrastinators based on their scores on the Procrastination Scale (Tuckman, 1991), a 32-item self-report inventory on which students indicated their tendency to delay starting on tasks and assignments. Spotquiz students significantly outachieved homework students of comparable aptitude by 7% on the course examination. A significant interaction was based on the finding that, while low and medium procrastinators in the two treatment groups did not differ significantly in achievement, high procrastinators who took spotquizzes achieved 18% better on the course exam relative to the achievement of their counterparts who completed homework assignments. The incentive motivation or drive provided by frequent quizzing enabled students to manifest higher achievement, while reporting spending less time studying for spotquizzes than homework students reported spending doing their assignments. In the fourth study (Tuckman and Trimble, 1997), spotquizzes were also found to enhance the achievement of middle-school students in a science course.

Other studies support the importance of drive or value, using sources other than incentives, as a factor related to achievement (Pintrich and Schrauben, 1992). Pintrich and De Groot (1990) found a significant negative correlation between test anxiety, often considered a manifestation of drive, and achievement among seventh graders, while Bandura, Zimmerman, and Martinez-Pons (1992) found a strong relationship between high school students' grade goals, another reflection of value or drive, and their school achievement.

Wigfield and Eccles (1992), building on the work of Atkinson (1966), argue that incentive value of a task is an important determinant of task choice, and that individuals will tend to do tasks that they positively value and avoid those that they negatively value. The work by myself and others cited here tends to show that enhancing the incentive value of studying, and thereby a person’s drive to engage in that task, increases level of achievement as a result, and shows drive or desire to be an important component of motivation.

Strategy

Work has been done by myself and others showing a relation between strategy and success in school and in a variety of other areas as well. Indeed, the entire concept of self-regulation has burst upon the motivation scene to reflect the connection between specific strategies and performance outcomes, exemplified by the considerable work of Schunk and Zimmerman (e.g., Schunk, 1989; Schunk and Zimmerman, 1998a, b; Zimmerman, 1989, 1990; Zimmerman and Martinez-Pons, 1988), including a paper given earlier in this symposium. Strategies that have been shown to have a particular impact on achievement (Zimmerman, 1989) are self-observing, self-judging, and self-reacting (e.g., goal setting, planning), and more recently, self-evaluation and monitoring, goal setting and strategic planning, strategy implementation and monitoring, and strategic outcome monitoring (Zimmerman, 1998). Another paper given earlier in this symposium by Gwen Quinn, one of my former students, deals with a detailed goal setting and planning strategy called the "Doing Something Better Plan" (Tuckman, 1995).

In one of my own studies (Tuckman, 1990), I compared goal setting, group outcome and control conditions on the performance of students at three levels of self-efficacy. I found that the unique combination of strategy condition and self-efficacy level determined the amount of performance. The goal setting strategy yielded the best performance from low self-efficacy students, the group outcome strategy yielded the best performance from middle self-efficacy students, and the no induced strategy or control yielded the best from high self-efficacy students. Similarly, Tuckman and Sexton (1992) showed that in a competitive performance situation, a feedback strategy worked better than a no feedback strategy for low and intermediate self-efficacy students while the reverse held true for high self-efficacy students.

In the last decade, the evidence compiled for the role of strategies in the motivation for achievement has been considerable, especially within the framework of self-regulated learning. Beyond believing in one’s own capability, and having the desire to achieve a particular outcome, being able to carry out specific strategies associated with success in a variety of fields (e.g., writers, athletes, musicians, students) appears critical (Zimmerman, 1998).

Models of the Combined Variables

Zimmerman (1989) identified the three elements of self-regulated learning as "students’ self-regulated learning strategies, self-efficacy perceptions of performance skill, and commitment to academic goals (p. 329)." Pintrich and De Groot (1990), in a correlational study of 7th graders’ school achievement, identified the following five variables as predictive: (1) self-efficacy, (2) intrinsic value, (3) test anxiety, (4) strategy use, and (5) self-regulation. The first is a reflection of attitude, the second and third: drive, and the last two: strategy. I did a similar study of college students (Tuckman, 1993) using factor analysis and identified three factors: (1) an attitude factor, primarily representing self-efficacy; (2) a drive factor, representing self-reported grade importance, test anxiety, and two behavioral measures that reflected grade importance; (3) a factor that primarily represented ability (i.e., aptitude and achievement test scores), but that also included cognitive strategy. Self-regulation tended to load in the attitude factor.

Zimmerman, Bandura, and Martinez-Pons (1992) reported a path analysis for final grades of 9th and 10th graders. Predictor variables were prior grades, parent grade goals, student grade goals, self-efficacy for self-regulated learning, and self-efficacy for academic achievement. Their results show the influence on achievement (as measured by grades) of the attitude factor (the two self-efficacy measures; the direct effect of self-efficacy on performance has also been shown by Pajares and Miller, 1994) and the drive factor (as reflected by student grade goals and parent grade goals). The strategy factor could not appear because they did not include a measure of strategy use, only of the belief in being capable of it.

Another causal model of academic achievement is provided by Abry (1998) as reported in this symposium. He found metacognitive strategies (planning, monitoring, and utilization of feedback) and attitude (self-efficacy, locus of control) to predict achievement. He also included cognitive strategies (coding, elaborating, organizing) and found them to predict achievement. He did not include any measure of drive.

Finally, a causal model that Abry and I did together (Tuckman and Abry, 1998) included measures of all three constructs: attitude (self-efficacy), drive (intrinsic value, test anxiety, student goals, parent goals), and strategy (self-regulation). It also included a somewhat skill-based variable, prior grade point average. The model shows that all seven predictors were represented in the causal path, with significant loadings.

The model shows an interesting pattern. Student goals, based on the answers to two questions ("What grade have you set as your personal goal for this course?" and "What grade would you regard as minimally satisfying for this course?", from Zimmerman et al., 1992), appeared as the major mediating variable. It was influenced by grade point average, parent goals, and self-efficacy for the course. Even though the act of setting goals is a strategy, the goals themselves are, in my estimation, a measure of drive in that the level of the goal helps propel the person toward a particular level of achievement. Locke and Latham (1990, p. 2) define goals as "something that the person wants to achieve," and see them causing people to marshal their resources and mobilize their effort for their attainment, while Dweck (1992) considers a goal to be a specific outcome that someone is striving to achieve. The verbs "want," "mobilize," "marshal," and "strive" suggest the concept of drive.

Self-efficacy, an attitude, was found to exert its influence on achievement through student goals, rather than directly. Hence, beliefs in yourself appear to influence goals for which you strive. This relationship is consistent with that reported by Locke and Latham (1990). Zimmerman et al (1992) found self-efficacy also to influence grades indirectly through student grade goals, but they found it to influence grades directly as well.

Conclusion

While I have not provided an exhaustive literature search on the topic, the work I have described suggests that attitude, drive and strategy each make a distinguishable but interrelated contribution to motivation for achievement. Without attitude, there is no reason to believe that one is capable of the necessary action to achieve, and therefore no reason to even attempt it. Without drive, there is no energy to propel that action. And without strategy, there is nothing to help select and guide the necessary action. While other theories focused on one or two of these constructs, I would argue that a more complete understanding is provided by a consideration of all three.

There is also an implication for practice or application in educational settings, insofar as motivation for achievement is a quality with high societal value. Efforts should be made by teachers to enhance students’ attitudes or beliefs in their own capability, to impel or propel engagement in the learning process, and to teach students about relevant strategies that can be used. A considerable amount of material on "teaching" motivation by changing attitudes and strategies is currently available (see, for example, Pressley, Woloshyn, and Associates, 1995; Zimmerman, Bonner, and Kovach, 1996), but the greatest unmet need regarding effective enhancement techniques would appear to be in the area of drive.
 
Re: WORST CASE SOME ONE!!!

Work Motivation

Work motivation is one of the key areas of organizational psychology. Organization theory is frequently described as an interdisciplinary study that examines the structure and functioning of organizations and the behavior of the people within organizations. Usually the term organizational psychology refers to the area of industrial psychology derived from social and personality psychology. Baron and Greenberg (1990) stated that organizational psychology is the field that focuses on understanding and predicting human behavior in organizational settings. Here we discuss aspects of work motivation.
McGregor's Theory X and Theory Y

Douglas McGregor (1960) summarized two possible views of management in worker motivation. Theory X is the traditional view of direction and control. It states that the worker dislikes work and tries to avoid it. The function of management, therefore, is to force the employee to work, through coercion and threats of punishment. The worker prefers in most cases to be directed and wants to avoid responsibility. The main motivator for the worker, therefore, is money.
Theory Y is the humanistic/self-actualization approach to human motivation. Sometimes called the human resources model, it states that work is natural and can be a source of satisfaction, and that when it is, the worker can be highly committed and motivated. Workers often seek responsibility and need to be more fully involved with management to become motivated. Theory Y is most likely to be used when management utilizes worker participation in organizational decisions. In their book In Search of Excellence, Peters and Waterman (1982) stated that one of the chief differences between American and Japanese management is that American managers tend to use Theory X and Japanese managers tend to use Theory Y. This difference may be lessening, as evidenced by the practices of the management of the General Motors Saturn plants.
In his book Theory Z, William Ouchi (1981) described the characteristics of the Japanese companies that produce high employee commitment, motivation, and productivity. Many Japanese employees are guaranteed a position for life, increasing their loyalty to the company. Careful evaluation occurs over a period of time, and the responsibility for success or failure is shared among employees and management. Most employees do not specialize in one skill area, but work at several different tasks, learning more about the company as they develop. And Japanese companies are often concerned about all aspects of their employees' lives, on and off the job. According to Ouchi, Type Z organizations tend to have stable employment, high productivity, and high employee morale and satisfaction. Many of these outcomes are similar to Theory Y, and research will continue to evaluate the feasibility of implementing some of them in American companies (Landy, 1989).
Organizational psychologists have become interested in devising strategies to help workers enhance their quality of work life (QWL). Lawler (1982) suggested several strategies for raising job satisfaction and QWL, including improving work conditions and security, increasing worker responsibility, and providing financial stability. Other strategies include enhancing the worker's sense of self-worth and providing opportunities for social relationships to develop within the organization. Job satisfaction is an area of organizational psychology that will continue to be important in the future.
 
Re: WORST CASE SOME ONE!!!

Equity Motivation

We want fairness in our lives, whether it is in our social relationships or in our rewards for work. Equity theory predicts that people seek equitable rewards, or that people should be rewarded in proportion to their effort. Thus, in comparison to our coworkers, if we work harder, we expect higher compensation. If we believe we are being overpaid or underpaid, we are motivated to restore equity by working more or less (Greenberg, 1982).
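Equity theory is commonly expressed as a comparison of one's own outcome-to-input ratio with a coworker's ratio. The sketch below is purely illustrative: the function, tolerance, and numbers are invented and are not taken from Greenberg's studies.

```python
# Illustrative equity comparison: outcomes (e.g., pay) relative to inputs
# (e.g., effort or hours), compared against a coworker. Values are invented.
def equity_status(my_outcome, my_input, other_outcome, other_input, tol=0.05):
    my_ratio = my_outcome / my_input
    other_ratio = other_outcome / other_input
    if my_ratio > other_ratio * (1 + tol):
        return "overrewarded"   # predicted response: restore equity by working more
    if my_ratio < other_ratio * (1 - tol):
        return "underrewarded"  # predicted response: restore equity by working less
    return "equitable"

print(equity_status(my_outcome=900, my_input=40, other_outcome=1000, other_input=40))
# -> 'underrewarded'
```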
In his 1988 study of equity in the workplace, Jerald Greenberg measured productivity of employees of an insurance company when they were temporarily moved to a different office because of construction. The temporary offices belonged to employees that were higher, lower, or equal in rank to the subjects.
Greenberg found that those employees assigned to the higher-rank offices increased productivity, and employees assigned to lower-rank offices decreased productivity. The employees presumably felt that they were being rewarded (high-rank office) or punished (low-rank office) and needed to adjust their performance to match their compensation. In general, we are motivated to ensure fairness in our lives.
 