Technology Forecasting
Dave Tutelman - June 9, 2008
This article is a companion to a technology forecast I made in 2008. There were several places in that forecast where it would help your understanding to know the thought process that took me to a conclusion. That is because technology forecasting is itself a "vocational skill", which involves being able to apply some well-understood principles.
The 2008 golf technology forecast is hardly my first experience in the
field. From
the late 1970s through the early 1990s, one of my responsibilities at
Bell Labs was to do 5-year forecasts for various technologies
associated with computers and telecommunications. So I've done it
professionally -- to earn my living. The examples in this article are
taken from the two industries where I have forecasting experience: golf
equipment and computers/telecom.
How does one do a technology forecast? Of course, it always helps to be an expert in the relevant subject matter, but there are some basic principles and forecasting techniques that apply to almost every area of technology. Here are some of the ways a professional forecaster looks at things.
What drives technology evolution, anyway?
Think about the two adages:
- Build a better mousetrap and the world will beat a
path to your door.
- Necessity is the mother of invention.
These are diametrically opposed views of what drives the evolution of technology. The views might be expressed as technology-driven vs. market-driven. Another way of expressing the dichotomy is supply-driven vs. demand-driven.
- The first says that evolution is technology driven.
If you come up with
a truly better technology, then people will buy it.
- The
second says that evolution is market-driven. If there is a big enough
need,
the technology will (or must) be invented to fill the need.
It can work both ways. But not equally frequently.
For the last 30 years (at least in the companies where I worked), the official position was that everything is market-driven. That does not match my observation. True, some products (seldom whole technologies, but products) were limited by willingness-to-pay, the primary criterion according to believers in the market-driven universe. But most of the big advances in computers and telecommunications came about because good technology became available and people found a use for it, not because a need existed and brilliant scientists invented a technology to fill the need. Sorry, my MBA friends, but most of the history I know does not support what you say. Products can be developed to fill a market need, but rarely if they require a significant evolution of technology.
How
about the golf equipment industry? I point to two periods that I have
lived through: the first half of the
1990s and the period since:
- 1991-1996:
This was a period of extreme technology advance, paced by the
acceptance of titanium driver heads and carbon fiber shafts. Were
either of these advances demand-driven? I maintain they were
supply-driven. Titanium and carbon-fiber technologies were around for
decades, but they were largely
the province of the aerospace industry. In the early 1990s,
the
Clinton administration cut back on the aerospace and defense budget
that the Reagan administration had beefed up. That left a lot of
unemployed rocket scientists, especially in southern California. It may
be coincidence that the biggest concentration of defense-technology
cuts was geographically the same as the concentration of the biggest
golf club manufacturers. It is not
a coincidence that the golf club industry started serious use of
aerospace technology during that period. The engineers who knew how to
use titanium and carbon/resin composites were suddenly available for
the golf club companies to hire. And that availability of the
technology -- not some deep-seated market need -- caused the technology
to be adopted.
- 1996-2008:
I maintain that this was a period almost devoid of fundamental
technology advance. The marketing departments of all the golf club
companies will argue with me there. Let them argue; they are wrong. The
advances were incremental at best and negligible at worst. Gradual
weight
reduction and quality improvement of graphite shafts is probably the
most
significant advance in this period. As for the rest, they were one of:
- Rule-driven, like the square driver head profiles
to
maximize moment of inertia within the "box" allowed by the Rules.
- Rule-beating, like the stealthy introduction of
(obviously illegal) spring-face drivers in the late 1990s.
- Advertising-driven, like almost everything else.
This is not so much to fulfill a market need as to fill a marketing
need, the need to sell more golf clubs. Not the same thing at all; in
fact, usually quite the opposite. If it is advertising-driven, its
purpose is to create a
need, not fill a need. Don't confuse market-driven with
marketing-driven.
Why do I even bring this up? And why so early in the
discussion?
Because if evolution were demand-driven, then technology forecasting
would be done very differently. If it were, then the only way to
forecast technology would be to forecast demand and say,
"Well, a technology will be developed to meet this need," and go from
there. There are a few fields where this approach might bear fruit.
Food, energy, and housing have significant trend lines in demand (e.g., population growth) as well as supply. But the fields I have experience forecasting
are far more
supply-driven than demand-driven.
Forecast horizon
The first thing you need to know when preparing a forecast is the forecast horizon. In the 1980s, computers and communications were moving very quickly. At that time:
- A forecast for five years or less had a chance of being accurate. But it was often on the optimistic side as far as schedule was concerned. That is, a good forecast was likely to be right, but the five-year prediction might take seven years to actually happen.
- A forecast for 15-20 years was almost impossible, except for extrapolating trends that had already had a long duration. And even there, you can almost never predict the actual mechanism that will allow the trend to progress -- just that the trend will continue. The reason it is hard to predict is that there is always something unforeseen -- "And now for something completely different," to borrow a phrase from Monty Python -- that makes a fundamental change in the field. Changes like that can usually be seen five years in advance, but not fifteen.
So what sort of horizons does the golf industry have?
- Consumer golf products are probably being introduced today even faster than computers and telecom were in the 1980s. So a five-year forecast is probably feasible, and a fifteen-year forecast is at best a roll of the dice. My forecast above is a five-year forecast at most. (Note that not all new consumer golf products involve actual advances in technology. Too many don't, even if they claim they do. But at least some new products do involve new technology.)
- Products for clubfitters, on the other hand, are more like "capital goods" -- means of production -- and don't have nearly as short a product life cycle. You might expect a longer-term forecast for them, and you would probably be right. However, there is one mitigating factor: technology for clubfitting is seldom a separate technology. Remember, the market for clubfitting tools is tiny, too small to support much fundamental technological R&D. So it is going to have to ride the coattails of technology being developed for much larger markets. (See learning curve below.) When we look at which technologies will be "borrowed" from which markets (digital cameras, digital scales, computer image processing), it looks like a five-year forecast is also appropriate here.
Trend lines
In every field of endeavor,
there are long-term trends that continue for years or even decades. For
instance, when my job was forecasting computing and telecom technology,
the trend that dominated any prediction was that the cost of computing
was coming down by about a factor of two every two years. This has held
true, more or less, for over 40 years. There are periods where the
number is more or less than that, but those periods seldom last more
than five years. If you look at the trend in ten-year slices, it is
fairly constant.
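To see what that rule implies over a forecast horizon, here is a minimal sketch (my own illustration, not anything from the original forecasts) that simply compounds the halving:

```python
# Illustration only: compound the "computing cost halves every two years" trend.
def cost_factor(years, halving_period=2.0):
    """Fraction of today's cost remaining after the given number of years."""
    return 0.5 ** (years / halving_period)

for years in (2, 10, 20, 40):
    print(f"after {years:2d} years, cost is about 1/{1 / cost_factor(years):,.0f} of today's")
```

Over forty years the rule compounds to roughly a million-fold reduction, which is why a few years of faster or slower progress barely shows up when you look at the trend in ten-year slices.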
While a trend
itself may be obvious, the implications require
imagination and subject-matter expertise on the part of the forecaster.
For instance, in the late 1970s, Larry Roberts (once the director of
ARPA)
pointed out that computing costs were coming down faster than
transmission costs. His interpretation was that it would soon be
worthwhile
(and
more worthwhile the longer you waited) to do computing to save
transmission. For instance:
- Spend computing cycles to compress a file before you
transmit it, and uncompress it afterwards.
- Do
packet switching (as the Internet does) rather than circuit switching.
Packet switching requires more digital processing, but often requires
sending fewer bytes of data.
It took over a decade before Roberts' prescription became economically
feasible, but it seems to be generally accepted today. In my opinion,
this is a classic success story for technology forecasting from trend
lines.
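To make Roberts' trade-off concrete, here is a minimal sketch of "computing to save transmission", using Python's standard zlib module purely as an illustration (it is not anything Roberts specifically proposed):

```python
import zlib

# Illustration only: spend CPU cycles to shrink what must be transmitted.
message = ("Technology forecasting is a vocational skill. " * 200).encode("utf-8")

compressed = zlib.compress(message, 9)   # computing spent at the sender...
restored = zlib.decompress(compressed)   # ...and again at the receiver

assert restored == message
print(f"raw bytes:         {len(message)}")
print(f"transmitted bytes: {len(compressed)}")
print(f"savings:           {100 * (1 - len(compressed) / len(message)):.1f}%")
```

Whether the trade is worth making depends on the relative cost of cycles and bandwidth -- which is exactly the pair of trend lines Roberts was comparing.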
Every
technology forecaster learns to look for such trends in the subject
matter of the forecast. What might such trends be in golf? One of the
most obvious is driving distance. During a forecasting exercise a few
years ago, I was able to find more
than sixty years of data to plot such a trend. A quick scan for this
2008 forecast didn't turn up the data, but it would have been
interesting as a starting
point. It would also be very interesting to see if the USGA's efforts
to limit further distance increases have the effect of stifling the
trend.
Bowers Limits
Many improvement trends have fundamental limits.
Forecasters look at a trend that is about to bump into such a
limit
(within the horizon of the forecast), and are forced to ask, "Will this
limit end the trend? Or will the trend find some way around the limit?"
Sometimes the limit wins, as we would expect. But -- more often than
you'd expect -- developers find a way to continue the trend.
Here's an
example from the computing cost trend I mentioned above. For
several decades, the trend depended directly on the fact that the
number of transistors that could be fit on a chip was growing
exponentially. In fact, the cost trend was often equated with the trend
that counted
transistors on a chip. But that trend faced an eventual, seemingly immutable limit: the size of a molecule.
Transistors require a certain number of molecules for reliable
operation, even in the purest theoretical sense. It was pointed out
many times that the trend was only a decade from bumping into this
fundamental limit.
But the cost trend continued, even though the
transistors-per-cubic-inch trend had to stop when its limit was
reached. How did computing manage to sidestep the limit?
- Semiconductor manufacturers learned how to get
acceptable yields from larger chips. So more transistors per chip were
possible than had been thought before.
- The volume of computers grew by leaps and bounds
starting in the 1990s, as computers became consumer goods rather than
capital equipment. This greatly lowered the cost per chip, due to the learning curve.
- Computer designers learned to get multiple chips to
cooperate in computing, through parallel computing and array computing.
The potential of a limit to the trend still lurks. But research continues to look for ways to stave off the limits. Research on quantum computing and organic computing really has as its objective the ability to continue the trend for a few more decades.
I first learned about this from an article in the 1960s, which called fundamental limits like this "Bowers limits", after a tech forecaster who noticed their impact. I can't find Bowers in the literature any more, but I will continue to call them that.
Does golf technology have any Bowers limits? Let's look at driving
distance. The Rules authorities (USGA and R&A) have long tried
to limit driving distance by placing bounds on the measurable
properties of clubs and balls. We don't know what distance
numbers
today might have been without their efforts, but they have not
prevented a
distinct trend to longer and longer drives. As the USGA continues to
legislate, here are a few things that might trump the limits it tries
to impose:
- Conditioning. A lot of the advance in driving distance at the highest level (the tour players) is due to physical conditioning. Exercise and body-training were simply not a big issue ten years ago, and are definitely a big deal today. Conditioning may well trickle down to the amateur golfer, which could result in much longer average drives at the public courses and country clubs.
- Instruction
and swing research. Our idea of what constitutes a good
swing itself has changed over the years. And our ability to teach the
swing has also changed. If this continues, it could continue the trend
of longer drives, particularly among recreational golfers.
(The
touring pros are already much closer to the best we know how to do.)
- Loopholes.
Golf club designers may find distance determinants the Rules do not
cover, and exploit them to increase distance. I don't know what these
may be. (If I knew, I'd be out making a million bucks selling
bigger-hitting clubs myself.) But they may be there.
- Ignoring
the Rules. Except
at the highest levels of the game, there
is a remarkable ignorance of the Rules -- and often deliberate disdain.
The USGA has done little to counter this groundswell; they remain a
conservative organization determined to maintain "tradition" (as they
see it) in golf. At some point, those who don't participate in officially sanctioned competition are likely to just ignore the Rules they don't like.
(I can point to many examples of such on-course behavior already.) If
the
manufacturers sense this mood about the driving distance limitations,
they may go ahead and sell non-conforming clubs and balls and let the
market decide.
The Learning Curve
The term "learning curve" is thrown around very
loosely. Like too much
terminology today, it has a specific, rigorous meaning but has been
borrowed and corrupted by popular use. "Learning curve" or "experience
curve" comes from production experience, and it holds that,
for every doubling of production, the cost per unit produced drops by X% (where X is usually
between 10% and 30%). This is a result of many things. For instance, as
you have more production behind you:
- Labor is more experienced, therefore quicker (more
productive) and surer (fewer rejects).
- Discounts for raw materials improve.
- Administrative costs, setup charges, and factory
costs are shared over more units.
The
picture is an example of a fairly typical 20% learning curve; the unit
cost drops by 20% for each doubling of total production. After the
manufacture of a thousand units, the per-unit cost has dropped to about
a tenth the cost of the first unit. By a million units, the cost is a
hundredth of the original -- and
ten times lower than it was at a thousand units.
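Here is a minimal sketch of the arithmetic behind those numbers (my own illustration of the standard learning-curve formula, not something taken from the article's figure):

```python
import math

# Illustration only: the classic learning/experience curve.
# Each doubling of cumulative production multiplies the unit cost by (1 - x).
def unit_cost(n_units, first_unit_cost=1.0, x=0.20):
    """Approximate cost of the n-th unit produced, for learning rate x."""
    return first_unit_cost * (1 - x) ** math.log2(n_units)

for n in (1, 1_000, 1_000_000):
    print(f"unit {n:>9,}: about {unit_cost(n):.3f} of the first unit's cost")
```

The roughly tenfold drop by a thousand units and hundredfold drop by a million units fall straight out of that exponent.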
This
is a very important tool for technology forecasting. If I am doing a
forecast for a low-volume industry (say, tools for custom clubfitters),
I like to look for a high-volume industry that requires the same
technology. The high-volume industry will drive down the cost of the
technology (due to the learning curve) and the low-volume industry can
benefit. For instance, it is unlikely that the market for a digital
shaft flex tool is over a thousand units. But digital scales are being
sold by the millions. If I am designing a shaft flex meter (which
measures the force to deflect a shaft), then I can jump on the
high-volume bandwagon of
digital scales and drop my cost of force-sensing technology by a lot.
Sailing Ship Effect
Quoted from The
Centre for Innovation Studies:
Sometimes
the advent of a new technology stimulates a competitive response, and
by virtue of Herculean efforts, driven by desperation, the incumbent
technology manages to improve its performance way beyond the limits.
For example, in
the 50 years after the introduction of the steam ship, sailing ships
made more improvements than they had in the previous 300 years. [Ward,
“The Sailing ship effect”.]
But these
efforts are doomed, as the new technology has much more potential for
improvement. These efforts can hold off extinction for a while, but
eventually fail.
While such improvement is eventually limited, almost every field has examples where the effect significantly delayed deployment of the new technology. Forecasters have to be aware of the sailing ship effect and its ability to delay the fulfillment of a technology prediction.
Are
there any "sailing ships" in golf today? I don't see any that jump
right out at me, but the wooden clubhead was certainly a sailing ship
in
the 1990s. By that time, steel and titanium metalwoods were demonstrably
superior, but their acceptance was very slow at the highest levels of
the game. The reasons included the reluctance of successful pros to change something that worked for them, as well as improvements in wooden heads that partially duplicated a few of the advantages of metalwoods. (Things like graphite shafts and conscious weight distribution.) These could have been done for wooden clubs much earlier, but they only became competitively necessary in the 1990s, after metalwoods had demonstrated what could be achieved.
The Delphi method -- and why not
This is perhaps the most common method used for forecasting in
industry. The definition, taken from Wikipedia,
is:
The Delphi
method is a systematic, interactive forecasting
method which relies on a panel of independent experts. The carefully
selected experts answer questionnaires in two or more rounds. After
each round, a facilitator provides an anonymous summary of the experts’
forecasts from the previous round as well as the reasons they provided
for their judgments. Thus, participants are encouraged to revise their
earlier answers in light of the replies of other members of the group.
It is believed that during this process the range of the answers will
decrease and the group will converge towards the "correct" answer.
Finally, the process is stopped after a pre-defined stop criterion
(e.g. number of rounds, achievement of consensus, stability of results)
and the mean or median
scores of the final rounds determine the results.
This sounds very sensible, and often it is. But I have had
problems with it in practice. The Wikipedia
article lists some of the problems, but I have encountered
one that they missed. When you assemble a group of independent experts,
there will almost always be several different opinions represented --
and differing views of how the future will (or ought to) turn out. That
is not surprising at all; the Delphi approach of multiple
rounds to converge on the "correct" answer is assumed to solve the
problem.
But experts are people, too. Not only are there differences of opinion
(understandable and necessary) but also differences of personality.
Some of the participants will be better known than others (even famous
or at least
widely respected). Some of the participants will be more persuasive.
Some will have more faith in their opinions and defend
them more tenaciously. These personality differences sometimes have
more influence on the final consensus of the group than the actual
merits of the argument.
Here is the classic example from my own experience. In 1983, I did a forecast
that included flat-panel displays to make computers more compact and/or
portable. My panel of experts included a very dynamic and persuasive
individual whose own research area was plasma panels. He was
able
to dominate the group to the extent that the consensus was that plasma
would be the technology that would prevail. That made no sense to me --
LCD technology had both the learning
curve
and power consumption in its favor -- but I reported it as the
conclusion of my experts. Now, with 25 years of hindsight, we know that
LCD was the winning technology for that application, and plasma panels
dominate only in home-theatre-size screens.
For that reason, I try to avoid Delphi when I do forecasting. Perhaps
it works better with written questionnaires, where the experts never
meet face to face, and preferably anonymity is maintained. But I only
have experience with in-person panels. If I can
get the same group of experts into a playful mood (see "brainstorming" below) I tend
to get better results. Rather than seriously defend an entrenched
opinion, they get into the spirit of making fun of the whole field --
and thereby are more creative and less parochial about their own views.
Politics and bias
It
would be nice if we could do honest, unbiased forecasts. But sometimes
that is not the job. More often than most forecasters would like to
admit, the purpose of the "forecast" is to justify a predetermined
position. Generally this occurs when one of the following is true:
- Your boss knows what he wants the answer to be, and has let you know. ("Boss" may be extended to the person or organization funding the work.) Having encountered this too many times over the years, I have taken to calling it "sponsored research", and have learned to distrust its results.
- The person doing the actual forecasting has a stake in the result. For instance, the forecaster himself may have taken a position in the past and does not want his previous opinion shown to be wrong now. This bias may be subtle, even unconscious -- but it can still affect the prediction. Usually, when I do a forecast, I try to audit the result for this sort of bias on my own part; in all honesty, I do not always succeed.
Here is an example from my past, where money and politics drove a forecast -- to an incorrect prediction. (The politics may be what we normally call "politics" -- the unfortunate but realistic workings of our government. Or it may be company politics, or just interpersonal relations.)
Local growth forecasts
When I joined Bell Labs in 1962, it was the R&D
arm of the Bell
System,
the nation's phone company. All new members of technical staff had to learn how a phone company works, in order to engineer for the real world. Part of my education was a six-week stint at New England
Telephone Company, in a group of ten new Bell Labs employees.
We
spent about a week with each of the major departments.
One of those
weeks was with the department that did business planning for the
company. Perhaps their most important function was the forecast of
telephone growth in each locality the company served. The thing that
made this so important is that the budgeting, ordering, and hiring for
the next three years would be based on this forecast. If the forecast
were too high, the company would be overstocked and pay idle
installers. If the forecast were too low, the company would have
to scramble to hire, train, and stock to keep up with the demand for
telephones; this tended to be more expensive and inefficient than being
prepared in advance. So an incorrect forecast in either direction would waste money.
(Actually, the commercial forecast comprised several projections, for periods as short as three months and as long as three years. But let's keep the story simple and talk about the 1-year forecast portion.)
The local
forecasters made their projections mostly by extrapolating past growth.
Very sensible! And completely in harmony with trend-line forecasting.
They look at the phone population a year ago and again
today, treat that as a trend line, and extend it a year into the
future. In the graph, Year 0 is the present, with 10,000 phones in the
region under study. The forecaster goes to Year -1 (a year in the past)
and notes that there were just over 6000 phones in that same area last
year. That's a change of just under 4000 phones. Extend this in a
straight line one year into the future, and we have just under 14,000
phones.
That looks like a good way to do things. Just one trouble: if we go back after the fact and see what actually happened, the
resulting forecast is almost always on the low side. Often
significantly low.
Why should a seemingly sensible forecasting process result in such a
biased result? And it is
biased. If it were simply a hard thing to predict, then the misses
would be about equal high and low. But this process always misses low.
So there is something wrong -- and definitely biased -- about the
process itself.
Bear
in mind that we (the Bell Labs trainees) were still wet behind the
ears, and we knew it. We were there to learn how the real world works. But we had a lot of
trouble squaring that consistently low forecast with a good way to run
a company. So we had an evening meeting at the hotel where we were
staying. We met armed with the data for the past ten years, a year at a
time. We were going to try various ways to forecast the data from old,
non-current data. We had the advantage that we could look "ahead" a year to data we already had but
didn't use in our forecast, to see how well we did.
Remember that this was the first half of the 1960s, when the economy was growing by leaps and bounds. Telephone growth was
substantial in the suburbs -- which is where we were looking at
forecasting. In fact, the growth was exponential -- literally. The
year-by-year growth did not follow a straight line, but rather grew
faster than a straight line. Compare the exponential growth that we observed (blue line) with a linear forecast (dotted red line). The linear forecast is always low if the true trend is exponential growth.
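Here is a minimal sketch of the two extrapolations, using hypothetical numbers consistent with the graph described above (about 6,000 phones a year ago, 10,000 today); it is my own illustration, not the procedure the phone company actually ran:

```python
# Illustration only: one-year-ahead forecast from two observations.
last_year, this_year = 6_000, 10_000   # phones in service (hypothetical region)

# Linear extrapolation: assume the same absolute increase repeats.
linear_forecast = this_year + (this_year - last_year)

# Exponential extrapolation: assume the same growth ratio repeats.
exponential_forecast = this_year * (this_year / last_year)

print(f"linear forecast:      {linear_forecast:,.0f} phones")       # 14,000
print(f"exponential forecast: {exponential_forecast:,.0f} phones")  # about 16,667
```

When the underlying growth really is exponential, the linear number comes in low every single year -- exactly the bias we kept seeing in the historical data.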
The following morning, we reported our findings to our hosts, along
with a recommendation to do a best-fit exponential forecast rather than
a best-fit straight line. They gave us a sheepish look and said they
were not surprised, but nothing was likely to change. The scenario
they painted for us was as follows:
Forecaster: "I see a growth of 6000 phones in my region over the next
year."
Management: "But last year there was a growth of only 4000 phones."
F: "That's right."
M: "And last year was the best year we've ever seen."
F: "Right again."
M: "So why should I believe that next year will be even better."
F:
"Because the growth is exponential, not linear." [Manager's eyes glaze
over.] "It has been growing by larger amounts each year, so it is
reasonable to expect that to continue."
M: "Even if I believed
you, I can't go to my boss and tell him there
will be even more growth next year than the big year we had last year.
Growth means money that
he will have to budget. He can't get it from anywhere else, because
every other region is predicting growth as big as last year's growth.
And the company's budget for growth is limited. We would have to go to
the Public Utility Commission that regulates telephone rates and ask
for an increase."
F: "But the growth is
going to happen! If it happens and we are not budgeted for it, it will wind up costing
us more to serve the growth."
M: "We'll just have to deal with it as an emergency if it happens. We
can't go to the Public Utility Commission and ask for a rate increase
on the basis of your forecast. They will be very skeptical. And they have motivation not to believe the higher growth.
It is their job to protect the public from higher rates."
F: "But, if the growth occurs unbudgeted, then next year we will have
spent more money than we should have. So they will wind up approving even higher rates next year,
because by law they have to allow us a return on our investment."
M: "And your point is...?"
F: "Huh?"
M: "We are talking about upper corporate management and
politicians. If it doesn't happen this year, then it didn't happen as far as they are concerned. If
next year brings an expensive emergency, then they will deal with it next year."
Note that we are looking at more than one level of politics here.
- The forecaster's prediction is unacceptable to the manager, because he cannot get the budget to address it.
- If
the company were to embrace the forecast, it would be unacceptable to
the government regulators, because they don't want to approve the
higher phone rates to address it.
This is only one example of
how politics, money, or personal bias can affect a forecast. I
have a bunch more. It is not just a hypothetical problem, but a
constant danger in the real world.
Long-range forecasting
I said above that short-horizon forecasts are feasible, and
long-horizon forecasts generally are not. But suppose my job depended
on a useful long-term forecast for computers, and my pointy-haired boss
(see Dilbert
if you don't get the reference) wouldn't listen to reason. How might I go
about trying to get it as right as I can?
I'm not going to actually try that here. But it is very
instructive to look at the 60-year history of computing. I know of a
couple of good 20-year forecasts. I can also think of a few ways that
20- to 30-year forecasts might be attempted if you really had to.
Trend lines and humor
In 1964, Datamation magazine (a computer and software industry trade journal) ran its usual April 1 edition. As is common in many magazines, the April 1 edition had an "April Fools" section with humorous or satirical pieces. One of the articles was a tongue-in-cheek 20-year forecast for the computer industry.
Remember what I said about 20-year
forecasts; the only way you stand a chance is to extrapolate a
long-standing trend and hope it doesn't hit a Bowers Limit. One of the
predictions in the article was an extrapolation of the trend to smaller
computers. At that time, IBM had introduced transistors only a
few
years before to replace the much larger vacuum tubes in their
computers. They were doing some hand-waving about "integrated
circuits", but that didn't become a reality for several years
afterward. That activity raised expectations, and highlighted a trend
in the size of the box you'd need as a computer. (Just for reference,
in 1964, a relatively small IBM computer occupied 150 square feet of
floor space, the size of a room. A typical IBM mainframe took up space
the size of an apartment. Even a minicomputer like the DEC PDP-1 was
the size of a few refrigerators.)
To most sober
short-range technology thinkers, the suggested trend was hopelessly
naive. The article poked fun at the trend, with a rolling-eyeballs
projection that:
"In
twenty years, we will have a computer the size of a postage stamp. The
big problem will be the six-inch-diameter input/output cable."
So
where was the industry in 1984, twenty years later? By then, there were
lots of companies making microchips, a whole computer on a silicon
integrated circuit. There were personal computers around, of many sizes
and shapes. The industry had grown beyond the hobbyist computer; the
IBM PC and the Apple Macintosh were here by then. And the heart of each
of these machines was a microprocessor -- a computer on a chip. In
fact, the chip itself was considerably smaller than a postage stamp, but it was packaged in a ceramic case just about postage-stamp size. The first part of the prediction came true, and in fact was exceeded.
But why such a big package, given the tiny size of the chip itself? Because there had to be enough space to attach the large number (typically, 48-64) of electrical leads necessary to get information on and off the computer chip. So the other part of the prediction -- that the technical bottleneck would be input/output wires -- was also spot-on.
There are not too many examples of
20-year forecasts that are this successful -- and this one was intended
as a joke. This suggests that a good technique for
long-range forecasts might be brainstorming.
Here we have another word that has been corrupted by commercial use;
originally, it referred to a specific
creativity technique,
and that is the sense in which I use it here. The essence of
brainstorming is for a group to generate a lot of ideas without
stopping to "criticize" -- to question an idea's feasibility -- but
just continue generating ideas. Most people with experience in
brainstorming recognize that the best ideas come when the
group tries to come up with more offbeat or even funnier ideas, when a
lot of laughing accompanies the idea generation activity.
So, if I were actually to try to do a long-range forecast:
- I'd try to collect a group of creative but otherwise
diverse people. Expertise in the technology is definitely a
plus, but not an absolute necessity. Knowing how the technology is
used is probably more important.
- We
would look at the trends, and then try to come up with the funniest,
most outlandish possible consequences of continuing those trends into
the future. For the first pass, the goal is funny, not feasible.
- Now we have a list of predictions -- except they're more
like jokes than predictions.
- Next, we have a team of people (perhaps the same, perhaps different) who have subject-matter expertise look at the list and see which ones might possibly be feasible in the future -- or come up with twists or combinations that might make them so.
So one way to do long-range forecasting is to try to come up with the
funniest or most outrageous outcomes that the trends might imply --
sometimes they turn out to be reality. And, without a reach like that,
a long-range forecast will be hopelessly conservative.
Broaden the application -- by a lot!
The key to this approach is to think about things that we need to do
and will continue to need to do. Don't restrict yourself to things that
are the obvious province of the technology under study; instead, look
for things having nothing to do with that technology -- yet. Then ask
how the technology might help with those tasks, assuming the
improvements implied by the trend lines.
Ancient History
The computer market from 1950 to the
mid 1980s was largely shaped by an incorrect technology forecast. (Part
of this story is industry legend. I tell it the way I heard it, having
been in the industry since 1961. It
may not be true in every detail but, like any good legend, it clearly
has elements of truth and teaches us some important lessons.)
Around 1950, one of the few companies that was capable of making
computers (Remington Rand, according to the version I heard) had to
choose a marketing/manufacturing strategy for the new technology. To
that point, the applications of the computer were pretty much
limited to computing tables of artillery trajectories for the military,
and one success predicting the presidential election of 1948. The
company estimated that the next 30 years would see the need for only
six computers of the power that they could build at the time. So they
decided not to begin mass production, but rather only build one when
they
had an order in hand.
IBM had a different take on the situation. Their business had started
with typewriters, but was by 1950 very heavy in "card punch
calculating" equipment. If you were in school about that time, you
remember taking tests that would be graded with "IBM machines". These
were machines that read punched cards and sorted, collated, and counted
them. The exact "process" for each card depended on a wired "program"
-- a patch-board whose wiring contained the instructions for sorting,
collating, and counting. Not real computing as we know it, actually a
lot simpler. But IBM already made a pretty penny renting and supporting
CPC (Card Punch Calculating) machines. Once the IBM people understood
what computers could do, they said, "Even if they do nothing else, they
can do everything our CPC machines do. And they can do it faster and
cheaper. And it is easier to program a computer than a CPC machine;
don't bother wiring a patchboard, just punch the instructions onto
cards, just like the data cards themselves."
So IBM decided to manufacture computers on a large scale, as an improvement
on the CPC machines that were already their revenue stream. By the time
it was clear how much application there was for the new technology (as
computers... not just as CPC upgrades), IBM had the manufacturing
capacity
and know-how (remember the Learning
Curve), had the experienced support and sales staff, and had
a market
ready to buy from them. It was all over; IBM "owned"
the computer market.
Fast-forward 30 years -- those infamous 30 years of the six-computer
prediction. The cost trend for computing meant prices had come way
down. In 1980, IBM was still in control of the computer market, but was
about to lose its grip very quickly. A couple of things were going on
to accelerate that:
- Most new operating systems, led by UNIX®, allowed multiple
users to share the computer via terminals, as if each had the full
computer to himself -- just a little slower. This was time-sharing; it
had been around for 15 years at that point, but was now standard
operating procedure.
- Personal computing -- a small dedicated computer for each user -- was being pushed by companies like Xerox at the high end and Apple near the low end.
Either way, we are looking at computing cheap enough so it can be an individual tool, no longer just a corporate tool. And IBM's grip on the market rested on its customer base in the corporate IT departments. Individual computing broke that grip.
Can you imagine?!
Let's imagine someone way back in 1950 attempting a 30-year
forecast -- but with the benefit of the computing cost trend
curve. Ridiculous hypothesis, I know; there was no trend data
in 1950. But suppose our hypothetical forecaster knew
the trend would drive computing costs as low as they got in
1980. How could he translate that into a picture of computer technology and computer use 30 years in the future? Let's try a "stream of
consciousness" on a very imaginative person working that problem.
(Think of it as a one-person "brainstorm". And remember that I'm
writing with almost 60 years of hindsight.)
"IBM made its decision to mass-produce computers as a more
cost-effective CPC machine. It quickly became a 'computer' and created
brand new uses. Now there's no way I could ever foresee new uses 30
years in the future. But how about cost-effective replacement of
something else?
"The trend line says that companies can afford to deploy, for every
office worker in the country, a thousand times the computing power of
one present-day [1950] computer. With all that power, what could they
do with it? Let's think about what office workers do, and imagine how a computer might effectively replace the way they do it today.
- "Office workers calculate! Hey that's a painfully obvious
replacement; it's what computers do best. Now they won't have to do it
by hand, or those expensive electro-mechanical adding machines (which
are also slow, big, noisy, and require lots of maintenance)." Actual 1980 result:
computers were not just desk-top adding machines, they already provided
office workers with spreadsheets; think VisiCalc.
- "There's a lot of typing that goes on in an office. Would a
computer make any sense as a cost-effective typewriter?" Actual 1980 result: word
processors like WordStar (desktop computers) and troff (UNIX) were in
widespread use in computer-literate organizations.
Secretaries had to become proficient in computer use, because computers
were so much better than simple typewriters.
- "Office workers communicate. They send one another notes.
Nah, we've already covered that with typing." There's no way our
forecaster could have foreseen e-mail. It required other developments,
specifically computer-to-computer networking, that were completely
invisible in 1950. But,
by 1980, my own organization at Bell Labs practically ran on e-mail --
almost all its own internal communications. There were other high-tech
companies doing the same.
How about another 30-year
jump? Well, almost 30 years, to today, 2008. Again, the
things we knew in 1980 were:
- The computing cost and size trend. By 1980, we knew it was
not only getting cheaper but smaller and requiring less power.
- Computer networking. Computers could talk to each other,
exchange data, run programs on other computers. And this capability was
also getting cheaper.
So where does that push the forecast? Let's look at replacement of
functions again. What are things that we know we will be doing in 2008
where computing might actually be cheap and small enough to replace the
way we do it now?
First off, how cheap and small is that? The trend says that a chip with
the capability of a 1980 personal computer would have a cost in pennies
at most. And, because chips are getting smaller as well as cheaper, we
could have some additional memory and input-output functions on the
chip, so we have almost an entire system on a chip. What could we
replace with that sort of cheap computing power? A few examples.
(Again, remember that I live in 2008. I know how the book ends. But
pretend for now that I'm just very prescient.)
- Let's
look at any electronic system. If we can replace it with a
(suitably programmed) computer, then the cost to do so is just pennies,
at least for the electronics themselves. A simple example is the
telephone. Today's telephone (the cell phone is an extreme
example) consists of a few audio processing chips and a computer chip
running the whole show. All that function that you see -- contact
lists, built-in calendar, sophisticated choices of options -- are there
because a computer chip is actually the cheapest way just to keep
track of the state of a phone call. Once you have replaced
that bit of electronics with a computer chip, contact lists and
calendars and configuration menus are practically free.
- In 1980, a computer was tethered to your desk. It required a power cord, and was too heavy to carry around even if it hadn't. (Well, by 1980 the big-heavy argument was already starting to lose.) If we
look at the size and power trends, it could be portable and
battery-operated. Yes, this suggests laptops and even palmtops, but
that's no 30-year forecast for 1980. In fact, that prediction was made
in the mid-1970s, and everybody accepted that the issue would be when, not if. But...
- If computing were portable, what sort of things might we want to do that are too complex to consider with dedicated electronics -- but might be pretty easy with a computer? Here's a partial list:
- TV
remote. We had some primitive remotes in 1980, but the
microchip made the remote powerful enough so every TV had one. In fact,
it eventually became a TV cost reduction, because you didn't need many
buttons on the TV itself. And, by 2008, buttons are more expensive than a computer microchip.
- Automobile
ignition. The distributor had been around for the whole
twentieth century. Its function, controlling the timing and
distribution of the spark current in an automotive engine, is really
pretty simple. And it just cost a buck or two to make in quantity. But
a combination of low chip costs and high gasoline costs said it might
be cost-effective to replace the timing mechanism with a
computer chip -- and it might become very cost-effective if more
efficient timing could be computed to use less fuel. Today, there are
quite a few "computers" in the typical consumer automobile.
- Game
Boy. By 1980, many "personal computers" were used for
games like Pong and Pac-Man, especially if they were home computers.
And there were game consoles being sold (mostly Atari at that time)
that worked in conjunction with a TV set. Any competent forecaster
could put two and two together and suggest that game consoles would
become much more capable, and games more complex and realistic. But
you'd have to follow display panel trends to guess that you could
package the whole game in a portable box and not be tethered to the TV
set. Even so, a good forecaster who noticed that might still be surprised
that the combination sold millions of units.
Remember, these uses depend on computers being so small and cheap that you can replace any electronics with a computer chip. But that is roughly where we are in 2008. The main obstacle to complete replacement is that software development is as hard as circuit development; the finite supply of skilled developers seems to be the biggest thing holding back the computerization of almost everything electronic today.
Science fiction
The writers of the best true science
fiction are playing a game. They take the world and tweak one or two
things about it -- create a premise contrary to fact, a specific
"suspension of disbelief". Then they go with it and see where it leads.
Many of the best science fiction stories (not fantasy or "little green
men" stories, but true science fiction) have this theme: visualize a
slightly different world from the one we live in, and extrapolate that
change to all the resulting changes in society, technology, and the
characters' lives.
So another way to do long-range forecasting
is to take a premise contrary to fact -- specifically, where the trend
lines say a technology is going -- and write a science fiction story
about it. We know we can't predict the details of the technology, but
we do have the trend lines to say where the technology is likely to go.
What would civilization look like if we were there? Write a science
fiction story. Get people involved in the creation of the story -- the
experts you would choose for brainstorming or even a Delphi exercise.
But make it a science fiction story instead.
The measure of how
well you have done is the "stability" of the story. Is that the most
likely outcome, given the premise you assumed? Not the most
conservative outcome, nor the one most like our present lives, but
rather the most likely effect of the changed premise. If so, then that
is your forecast.
From these examples, you can see how trend-based long-range forecasting might be done -- and how much imagination it takes. Challenging as it is, I don't see any other feasible way to do honest long-range technology forecasting.
Acknowledgements
I'd like
to thank Herb Landau for looking this article over and giving me
comments, suggestions, and ideas. Herb is a high school classmate of
mine (Bronx Science 1958), who has spent most of his career doing
forecasting as his "day job".
I'd also like to thank Randy Pilc,
a director of Bell Labs at the time, who challenged me to prepare
forecasts -- and to defend them when he asked hard questions. That is how I learned much of what I know about forecasting.
Last modified -- Nov 16, 2008