Technology Forecast -
Tutelman - July 14, 2016
My brother brought to my attention a
technology forecast apparently based on a Singularity University
seminar. It is very
well done and I agree with much of it. But I
question a number of the predictions, and have some predictions of my
own (including societal implications).
On June 10, 2016, my brother Bob sent me an article which was brought
to his attention by Sam Anzelowitz (an acquaintance of ours from our
Bell Labs days). This article is my assessment
of the forecast. I agree
with much of it, but am skeptical about some of the predictions. That's
OK. Any forecaster who writes it down in a public place is taking
the risk of being wrong. And that is why forecasts are interesting; if
it is a certainty, then everybody must know it anyway, so why write it down?
So let me begin by applauding the author of the forecast for an
insightful and ambitious assessment of technology. Let
me continue with a disclaimer. Almost nowhere in the article am I
saying, "It can't be done." Read it more as, "If you're going to do it,
here are the problems you will have to solve. Some of the problems
are difficult enough that I think the forecast date is
unrealistic." To keep me honest, this article has a second page that will cite relevant news articles about progress toward the predictions.
Who wrote it?
The first thing I did was a quick search to find out who wrote the
article, because the copy Sam sent Bob did not have any attribution. That
turned out to be more interesting than I expected.
Udo Gollub posted it on Facebook, and
prefaced his post with "I
went to the Singularity University summit and here are the key
learnings." (If the link to Facebook doesn't work for you,
I have made a copy of the article.) It is a good
read and not difficult; I commend it to you, preferably before you read
the rest of my article.
So what is the Singularity University Summit? Singularity University
is a think tank headquartered in Silicon Valley. The founder is Ray
Kurzweil, who also popularized the term "the singularity". Here is
Wikipedia's description of what
the singularity means:
The technological singularity is a hypothetical event in which an
upgradable intelligent agent (such as a computer running software-based
artificial general intelligence) enters a 'runaway reaction' of
self-improvement cycles, with each new and more intelligent generation
appearing more and more rapidly, causing an intelligence explosion and
resulting in a powerful superintelligence that would, qualitatively,
far surpass all human intelligence.
Singularity University runs seminars predicting the implications of the
singularity on business, society, and other future concerns. The
technology forecast in the article is completely consistent with the
notion of the technological singularity. Udo Gollub went to one of the
"summit" seminars, and wrote this summary of what the [many] speakers said.
But it doesn't end there. A Google search for a line or two from the
article turns up several other articles, each a word-for-word copy of
Gollub's summary and none of them crediting him nor anybody else (not
even Singularity University). The purported authors include a priest
submitting "his" article to a Catholic newsletter, two different MDs
each signing a copy of the article in a health newsletter, an IT
director for a
food company, and the managing director for a maritime
shipping company. Each claimed the article as his own. Gollub was quite
candid when I contacted him and asked; he attended the seminar, then
posted his takeaways in his words on Facebook. In his own words: "I
wrote the article myself after the summit. It's a summary of what I
learned and also some things that I have learned before." On top of
that, his was
the first posted of all those articles, at least according to the dates
-- and Gollub's date was stamped by Facebook, not Gollub himself. This
is not a court proceeding on a plagiarism charge, so I don't have to
prove "beyond a reasonable doubt", but Gollub has by far the most credible claim to authorship.
What it says
The content is consistent with the story of the singularity.
And I agree with most of it. Indeed, I see where it is coming from, and
it is a skillful piece of technology forecasting. I have done
technology forecasting professionally myself, and have an article on my
site about techniques for forecasters.
However, I take issue with a few of the forecasts here. Perhaps the
trend is correct for the very long term. But the article (either
Gollub's interpretation from the seminar or the information he was
actually given there) is shorter term -- a few years to a few decades.
Forecasts like that require at least some engineering knowledge of the
subject matter, not just a broad philosophy that "the singularity will make it so."
So let me detail the forecasts that I would modify, along with my
reasoning. I'll also flag some I emphatically agree with and want to
emphasize. The rest I either agree with or I'm not enough of a subject
matter expert to take a stand pro or con.
Yet digital cameras were invented in
1975. The first ones only had
10,000 pixels, but followed Moore's law. So as with all exponential
technologies, it was a disappointment for a long time, before it became
way superior and got mainstream in only a few short years.
Moore's law is the king of learning
curve trends. It affects so much of
our technology. This point is emphatically correct. And, as the article
says, the digital camera is not the only example that took maturing. Here
are a couple of additional examples:
A more familiar characterization of Moore's law is to look at actual
computers over the decades. I do that in a very short survey article
citing three interesting examples along the evolutionary curve.
- The Internet was functional
in the mid-1970s. I was using email in 1977 in the normal course of
doing my job. But it wasn't until the World Wide Web came
along in the 1990s that the Internet became bigger than something for
engineers and academics.
- Getting phone access using
your cable TV service has been feasible, both technically and
economically, since at least 1979. Another study I led in 1982 showed
cable TV as a feasible medium for wideband data. But it took until
after 2000 before the cable "triple play" (TV, phone, and Internet
data) became commonplace. The reasons were partly economic,
but mostly financial- and business-related.
Moore's law has hit a slowdown, a plateau. Instead of exponential
progress, today we have distinctly incremental progress.
Computing power per dollar during the six years I've had my current
computer has not even doubled. Is Moore's law dead? Has it hit a Bowers
limit? Or is this just a temporary change
in slope, and over the next decade it will catch up? No way of knowing,
but it will take a significant change in technology to continue on the
Moore's Law slope. The current
technologies (silicon transistors) and current architectures (multiple
von Neumann computer "cores" on a chip) are running into fundamental
physical limits. Will quantum computing succeed? In time to get Moore's
law back on track? I don't know. We'll get back to this question later,
when we talk about artificial intelligence.
Before leaving this topic, it is worth reviewing why Moore's Law might
be flattening out. I mentioned current technology running into
fundamental physical limits. Moore's Law was originally not about
computing power per se, but about the number of transistors that can be
fit on a chip. It was first observed and written in 1965 by Gordon
Moore, the co-founder of Intel. It has become synonymous with computing
power because microprocessor chips and memory chips are the building
blocks of computing power. If progress in those chips is exponential,
then so is progress in computing.
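To make the exponential concrete, here is a small sketch of what a pure doubling law implies. (My own illustration, not from Gollub's article; the Intel 4004 of 1971, with roughly 2,300 transistors, is the usual starting point, and two years the usual doubling period.)

```python
# Moore's law sketch: transistor count doubling every ~2 years.
# Starting point: Intel 4004 (1971), roughly 2,300 transistors.
def projected_transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project transistor count for a given year under a pure doubling law."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# 20 years of doubling multiplies the count by about a thousand;
# 40 years, by about a million (a few billion transistors per chip).
for year in (1971, 1991, 2011):
    print(year, f"{projected_transistors(year):,.0f}")
```

That factor-of-a-million over four decades is exactly why any flattening of the curve matters so much.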
So why might it be slowing down now? Transistors are made of lattices
of silicon with carefully controlled impurities distributed throughout
the lattice. The silicon lattice requires a significant number
of atoms to enable transistor
operation; just a few atoms and one impurity atom are certainly not
enough for reliable transistor action. We are close to that number on
today's microchips. So, in order for the
exponential increase of transistors per chip to continue, one of two
things must happen:
(If you're interested in more on Moore's Law, Wikipedia has an excellent article.)
- Transistors must become smaller, which necessarily means
fewer atoms. The laws of quantum physics would appear to suggest this
would make them less reliable, less predictable. That approach is being
explored by "quantum computing". There have been some successful proofs
of concept, but nothing nearly big enough to do useful computing.
- Chips must become bigger. This is an architectural and a
manufacturing process issue. The process issue is one of defect density
in the creation of the chip, and is a tough nut to crack. Progress is
there, but slower than we need for Moore's Law. I say a bit more about the architectural issue below.
Software will disrupt most traditional industries in the next 5-10 years.
Absolutely! And I agree with many of his examples.
There will be 90% less lawyers in the future, only specialists will remain.
I'm skeptical, but it is possible. (Insert your favorite lawyer joke
here, and applaud the prediction. ☺) My
skepticism is more about the size
of the shrinkage -- the 90% estimate -- than the general trend.
- Lawyers are different! They
not only practice
the law, they make
the law. Unless we can find some
way to get the majority of our politicians to not be lawyers, they will
find a way to preserve the need for lawyers -- whether they are really
needed or not. This is not a technological observation. It is a
fundamental problem in our evolved democracy and human nature. People
study law not just to become lawyers, but also to become career
politicians, and then use the law to perpetuate those careers.
- As the manufacturing economy
diminishes, the service economy increases in importance. We already know that;
it's not new information. Gollub's article
warns that the service economy is also in danger, and I agree 100%. But
lawyers are close to the top of the food chain of the service economy.
I think the routine stuff will go away or get much cheaper; routine
lawyering will be replaced by AI or at
least paralegals. Actually, you don't even need AI for a lot of it;
even today, do-it-yourself legal papers are
just an Internet search away. But 90% disappearing is too high a
number. If the article had said 1/2 or 2/3, I don't think I'd argue.
This is the central theme of the
singularity, so it is natural that a seminar from
Singularity University will make bold predictions here.
In 2030, computers will become more intelligent than humans.
This is the party line for the
singularity. It might actually happen. It might even happen by
2030, but I'm skeptical. Here's why:
- Every time a computer
exhibits AI, there is a large body of people (including experts) who
say, "Well, that's not really intelligence," and move the goalpost. So
we may in 2030 reach the 2016 goalpost, but that won't be the 2030
goalpost, and probably won't be the "runaway point" predicted as the singularity. In
fact, just today I noticed an article in MIT Technology Review
suggesting that we need a new Turing Test (the traditional
measure of AI), because some programs can pass the old Turing test
without really being intelligent. Moving goalpost!
- The AI people don't want to
admit this, but all the significant advances I have seen in AI (since I
first started following it in 1962) have been necessarily in lockstep
with hardware improvement -- Moore's law. That's a strong statement, so
let me explain what I see.
The significance of this assertion: if
Moore's law is actually
slowing down, then the corollary is AI advances will slow
down as well.
But let me hedge my bet a little. In AI's favor,
the most promising AI technique today, deep learning, uses
massively parallel algorithms. So progress in Graphics Processing Unit (GPU) chips
benefits AI, because GPU chips have a much larger number of smaller
"cores" than the traditional CPU chip. GPU is a massively parallel
hardware architecture, well matched to deep learning. But I don't see
much more than an
order of magnitude to be gained there... maybe 6-7 years at the rate of
Moore's law. We'll need something else -- and soon -- if we are to
achieve the singularity by 2030. Note to self:
I could be wrong about that. GPU might be a more likely technology to
enable AI than quantum computing.
- Most of the cleverness of heuristics to do AI was
exhausted by 1970. From then until about 2000, almost all AI
achievements came from the ability to integrate more data or search
deeper "into the tree." (That would be the decision tree for diagnosis,
the look-ahead tree for game playing, etc.) Not generalized techniques
for "pruning the tree"; that was pretty much exhausted in the 1950s and
'60s. (Note that more clever rules for evaluating positions in the tree
constitute human intelligence, not machine intelligence. The computers
are only applying those rules.)
- Handling more data or searching deeper didn't take much
cleverness, it took more, cheaper memory and faster, cheaper computing
cycles. As Moore's law drove computing costs down and computing
capability up, AI naturally followed.
- But what about the new AI paradigm, "deep learning"? It
is very impressive. It is not completely new. For instance, there were
attempts to do AI with neural networks in the 1960s. (Neural networks are one
configuration for deep learning systems.) The hardware capabilities were
not there yet. There are certainly some significant insights involved in
writing deep learning programs. But the technique also depends heavily
on a lot of computing power.
Around 2020, the complete industry will start to be disrupted. You
don't want to own a car anymore.
Maybe in big cities. Maybe in the developing world. In the US, at least,
this is not just a technological nor economic change; it's a major cultural
upheaval. 2020? No! It'll take a generation or two. Not because we can't do it, but because we won't want to.
You will call a car with your phone, it will show up at your location,
and drive you to your destination.
You will not need to park it, you will only pay for the distance
and can be productive while driving.
Our kids will never get a driver's licence, and will never own a car.
Unless, of course, we can't afford not to. Which is entirely possible.
There have been a few technical issues that will further slow the
adoption of autonomously driving cars.
So I disagree with the author's premise, at least as applied to
suburban and rural America and possibly worldwide. But granting it
hypothetically, I agree with his conclusions about what it would do to
the auto business,
the insurance business, etc. We will get over the technical failures at
some point, either by curing them or collectively deciding the benefit
is worth the risk. (I think it will be some of each.)
And suburban and rural America may turn out to be a negligible part of
the world economy. This will probably not happen by 2020, but perhaps eventually.
- Just this week, a Tesla car beta-testing their latest self-driving software crashed
and killed the occupant. There is no doubt it was the fault of the
Tesla, and specifically the control system's failure to recognize a
dangerous situation that a human driver would have easily avoided.
- Even the modest electronically networked capabilities in
cars have a
hacking problem. Hacking automobiles has been postulated for years, and
has actually been demonstrated in the past year. It's one thing for your computer to "get a virus".
But malware in your car can be fatal, to you and others.
Let me also qualify my skepticism and state that my
observation above is not my personal preference.
To me personally, a car is not a symbol, not independence, not freedom,
not status, not a
"chick magnet". It is merely transportation, and I am very pragmatic
about the cost and convenience of my transportation. I would welcome
the future predicted by the article. But I am not a typical American in
that regard. For more on that, see my article about how my New York
City upbringing was affected, perhaps even dominated, by the
availability of public transport.
To my surprise, the article said little about the electric car. I
wonder if that was an oversight or simply an assumption that it will
happen, and sooner rather than later. The part on electric power was so bullish
that it was probably just assumed that everything will
run on electricity. But electric cars have significant
drawbacks as well as advantages.
- One is the range problem. While the overwhelming percentage of cars does an
overwhelming percentage of their driving within range of plug-in power,
the electric car will not solve all of the owner's transport needs.
Some small percentage of current automobile use (1%? 15%? depends on
the individual) is outside the range of an electric car. Moving the
owners of cars to other solutions will not be easy. Oh, the solutions
exist and are not high-tech (public transport, renting long-distance
cars), but are constrained by convenience or cost.
- The electric car is viewed as an environmental plus. But
there will be unintended
consequences if they grow to a significant fraction of the cars on the road.
Solar energy will become incredibly cheap and clean: solar production has
been on an exponential curve for 30 years, but you can only now see the impact.
Last year, more solar energy was installed worldwide than fossil
energy. The price for solar will drop so much that all coal companies
will be out of business by 2025.
I have a real problem with this! Coal may be out of business by
2025, but it won't be because of solar power. And if coal is gone it
will be replaced by other fossil fuels, oil and natural gas. (I almost
added nuclear. But the lead time for nuclear -- technical, logistical,
regulatory -- is outside the proposed 2025 horizon.)
Am I denying the progress in solar? Not at all; it is most impressive.
However... Solar cells can go to zero cost and 40% efficiency (the
theoretical maximum), and it still won't put fossil fuels out of
business by 2025. The reason is
"demand availability". Solar power cannot be "generated" at night. And it
is weaker at some times than others, even during the day. Look at the
diagram; it shows the sun's rays impinging on a round earth. They
deliver maximum brightness (maximum energy that can be turned into
electricity) where they meet the earth face-on. The more oblique the
impact, the more area needed to acquire the same amount of energy --
hence the less output from a solar cell of any given efficiency.
The bottom line is that, to generate 24 kilowatt-hours (KWH, the unit
of electrical energy), you cannot leave a 1KW solar panel out there for
24 hours. Over a 24-hour period, there is only enough sunlight to
generate about 8KWH of electricity. You only get to average about a
third of the cell's capacity over a full day. We'll get back
to this fraction of one-third later. It isn't absolute, but I doubt
that it's more than a half, and probably not less than a quarter. In
any event, one-third is a good
enough starting point to look at the implications.
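The arithmetic behind that one-third figure is simple enough to write down. (A sketch using my own round numbers; the capacity factor of 1/3 is the one argued above.)

```python
# A 1 KW solar panel does not deliver 24 KWH in a 24-hour day.
panel_kw = 1.0
capacity_factor = 1 / 3   # average output as a fraction of peak, per the text
hours = 24

energy_kwh = panel_kw * capacity_factor * hours
print(f"{energy_kwh:.0f} KWH per day")   # about 8 KWH, not 24
```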
So what are the implications? For one, we need three times as much
solar cell capability as the average electric usage would suggest. But
there's an even bigger implication: we need to store
2/3 of the generated power
for later use. And it's not
clear we know how to do that. For now, without big progress in power
storage or a worldwide grid (see below for why these would help),
we are limited to obtaining only a third of our electricity from
solar... At any
price or efficiency.
Still using one third as the ratio of average need to
solar capacity, let's look at ways we might solve this. The solutions I
have seen fall into three general classes:
- Do what we are doing now: live with the fact that
solar cannot provide more than 1/3 of the electricity we use, and
generate the lion's share (the other two thirds) by other means. Today,
that means fossil fuels. We might get help from other renewable sources
(e.g.- wind or hydro), but not enough by 2025 to put a serious dent in
the need. And the people pushing renewable eco-friendly energy seem to
be the same people exhibiting paranoia about nuclear power; I
don't see a nuclear initiative being successful in the 2025 time frame.
Given enough time, non-fossil-fuel energy sources can probably be
tapped, but not even close to the calendar of the forecast.
- Overbuild solar by a factor of three, and store 2/3
of the power generated. Passive electric storage means batteries. If
your household uses 30KWH per day (the US average in 2014), then you
need about 20KWH of battery capacity. Charge it with solar when the sun
is out, and use it when the needs are greater than the power coming
from the sun. Battery R&D is trying to get battery costs down
to $100 per KWH, driven mostly by the electric car industry. They are
not expected to get there before 2020 at the earliest, probably several
years later. At $100 per KWH, we have an implied capital cost of $2000
for the storage for a typical household. That is not overwhelming, but
it isn't something in the "solar power is practically free" liturgy.
There are other ways to store electricity. The most credible
alternative to batteries that I've seen is "pumped hydroelectric". You
use excess power now to run electric pumps to raise water to some
higher-altitude reservoir (i.e.- convert the electric power to
potential energy). When solar power isn't enough to run the household,
much less the water pump, use that elevated water to generate electricity.
- "Follow the sun." That is, have a large enough
shared, solar-powered grid to have solar power anytime. Theoretically, it
can be done. But it requires a round-the-world string of solar farms,
so enough farms are illuminated at any given moment. Yes, you still
need solar cell capacity three times as much as your average use.
That's because only a third of the solar cells are powering the whole
world at any moment.
This sounds technically difficult, but that
difficulty is probably dwarfed by the political difficulties. If you
think about issues like national self-sufficiency, differing
regulations, hacking, sabotage and
terrorism, the technical problems sound easy. Not going to happen
anytime soon. (At least not unless the singularity has occurred, and a
benevolent artificial intelligence rules the whole world. ☺)
There is also the fact that the energy market will have gone global; we're
living with this in most products today. In fact, the oil market is
already global, I suspect coal is as well, and natural gas is getting there.
So I don't see why electricity should be different.
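The battery arithmetic in the "overbuild and store" option above is worth writing down explicitly. (A sketch using the text's own numbers; the 2/3 stored fraction is the one derived earlier.)

```python
# Sizing home battery storage under the "overbuild solar by 3x" option.
daily_usage_kwh = 30       # US average household usage in 2014, per the text
stored_fraction = 2 / 3    # share of daily energy that must pass through storage
target_cost_per_kwh = 100  # battery industry's hoped-for cost target, $/KWH

battery_kwh = daily_usage_kwh * stored_fraction
capital_cost = battery_kwh * target_cost_per_kwh
print(f"{battery_kwh:.0f} KWH of battery, about ${capital_cost:,.0f}")
```

Twenty kilowatt-hours at the hoped-for $100 per KWH is the $2000 capital cost cited above.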
I promised to get back to that factor of three. If you
just look at the geometry, it's a factor of π (3.1416....), not
3. But there are so many other tugs on the number that I've left it as
three for the early discussion. Now let's look at what would make it
bigger or smaller than π.
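For the record, here is where the factor of π comes from. A numerical sketch under idealized assumptions (my own model, not from the article: a fixed flat panel at the equator at equinox, clear sky, output proportional to the cosine of the sun's angle from overhead):

```python
import math

# Average output of a fixed flat solar panel (peak output 1.0) over 24 hours.
# Daylight runs from hour 6 to hour 18; the sun's angle from overhead sweeps
# from -90 deg to +90 deg, and panel output goes as cos(angle). Night is zero.
def panel_output(hour):
    if not 6.0 <= hour <= 18.0:
        return 0.0
    angle = (hour - 12.0) * math.pi / 12.0   # -pi/2 .. +pi/2 over daylight
    return math.cos(angle)

# Midpoint-rule average over the full day.
steps = 100_000
avg = sum(panel_output((i + 0.5) * 24.0 / steps) for i in range(steps)) / steps
print(f"average/peak = {avg:.4f}, multiplier = {1/avg:.2f}")  # ~0.318, ~3.14
```

The integral works out to exactly 1/π of peak output, hence the multiplier of π.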
So the multiplier is probably around 3, but might be higher or
perhaps even lower depending on our circumstances and how well we
engineer around them. Consider:
- Some power needs are naturally correlated with sunlight. For
instance, the need for air conditioning in warm places is highly
correlated with sunlight. If the power needs are dominated by air
conditioning, then the factor could be much lower than 3.
- That argument cuts both ways. The premise of the Singularity
University prediction is that electric
power becomes so cheap you can use it for everything. So that should
include heat. Heat is indeed a significant fraction of energy usage in
many places; we just don't think of the energy as electricity. But if
electricity is what makes energy cheap, then... Just as air
conditioning needs in hot climates is
correlated with sunlight, heating needs in cold climates is correlated
with lack of sunlight. These places will need more electricity at
times of less or no sunlight.
- Let's continue following correlation of energy usage to sunlight, to
see where it leads. This same forecast postulates electric cars. The
simple model would
have those cars charging overnight, when everybody is sleeping. If the
needs are dominated by electric transportation, the factor could
be much higher than 3.
- But let's be clever about transportation. Make your car solar-powered,
so it recharges during the day. In fact, maybe it can be
self-sustaining, except for driving done after dark. Or anyplace you
park may have a charging connection (solar-powered, of course), so only
driving or charging after dark requires power that is not directly from the sun.
- Other renewable sources might be anti-correlated with sunlight. Specifically,
some studies show that wind strength is greatest at night. (Not in my
area, nor any place I've visited! But I have seen a study asserting
that.) If true, then a significant buildup of wind power might
complement solar very nicely.
- Storage is not 100% efficient. (This assumes we use an energy storage
approach to demand availability.) I was surprised to find that it is very
efficient, but it's still not 100%. If the cost and size of batteries
ever render them practical for solar energy storage, current
battery technology is really good -- about 99% efficient for Lithium-Ion
batteries. (Negligible loss.) If we have to go to pumped hydro, the
pump can be built about 90% efficient and the turbine (to convert it
back to electricity) another 90%. So we're talking about 80% efficiency
for storage. Not a huge difference,
but it would increase the factor.
- Smart mounts for solar panels. If we just look at night vs day, the
factor is 2, not 3 or π.
The difference is the oblique incidence of the sun's rays on the cells.
If the cells were mounted on gimbals and controlled to track the sun
during the day,
we could make the factor lower. We couldn't reach 2, but we could
improve things in the direction of 2.
- Statistical variation. We tend to think about the average usage as
the capacity we need. Engineers for public utilities (electric, water,
even telecom) know that you need more capacity to handle statistical
fluctuations. If it's a very hot day, you'll need more solar power. If
it's a very cold night and we have electric heating, you'll need more
storage -- and more solar power to charge it. Clouds reduce the output
of solar cells, so you need more cells to achieve the same power. Etc.
And this gives rise to another problem: what happens if
there is sustained over-usage for more than, say, overnight? We may
need backup sources -- such as fossil fuel generators. The actual fuel
used over the course of years may be small to negligible, but you still
need the capital expenditure for the generator and the labor to keep it
maintained and keep people trained and exercised in its use.
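The storage-efficiency item above can be folded into the multiplier directly. A toy calculation (my own framing, using the text's round numbers: a base multiplier of 3, one third of demand served directly from the sun, two thirds through storage):

```python
# Effect of storage round-trip efficiency on the solar overbuild multiplier.
def overbuild_multiplier(direct_fraction, efficiency, base=3.0):
    """Capacity multiplier when (1 - direct_fraction) of demand passes
    through storage with the given round-trip efficiency."""
    stored_fraction = 1.0 - direct_fraction
    return base * (direct_fraction + stored_fraction / efficiency)

print(overbuild_multiplier(1/3, 0.99))  # Lithium-Ion batteries: barely above 3
print(overbuild_multiplier(1/3, 0.80))  # pumped hydro: about 3.5
```

So batteries leave the factor essentially untouched, while pumped hydro pushes it from 3 toward 3.5.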
You may be jumping up and down now, wanting to ask,
"But what about
those self-sustaining, fully solar-powered homes I read about?" There
may be one or two examples, but I doubt it (as of early 2016, anyway).
What exists, and the popular press describes as self-sustaining, is
really "zero net revenue to the power company." Not the same thing!
The difference is important. These
homes are still connected to the power grid, and buy electric power
from the grid when their solar is not generating enough -- like at night.
Those that are actually self-sustaining have an on-site generator,
usually driven by fossil fuel. (A few may be hydroelectric or wind.) I
have yet to see a home with sufficient power storage to be
self-sustaining on solar alone.
But it gets worse! For a number of years, the government has encouraged
solar power with subsidies and regulations:
- They have subsidized the installation of solar panels with rebates and tax
breaks. This has somewhat distorted the market. But even that is not
the big factor.
- They have required the power companies to buy
excess generated solar power from homes with solar cells, and buy it at
the retail rate. This is big.
Consider this diagram of the solar-powered home model. The
key here is that the solar home with enough excess capacity can toss
enough power into the grid during daylight hours to pay for the
electricity that it uses during the non-solar hours. That's a good
thing, right? Well, maybe while solar power is rare. If solar homes
become the norm, then not so much. Imagine that there are a lot of solar
homes, enough to make a statistical difference, and that they each
understand and use the self-sufficiency criterion to minimize their
electric bill. The way it works is:
- When there is just enough solar power to run the home, it does so and no
electricity flows through the meter. Nobody pays anybody anything. But
this is a rare event.
- When there is not enough solar power to
run the home (say, at night when there is no solar power), then power
flows from the grid into the home through the meter, and the homeowner
pays the utility for it.
- When there is more than enough solar power to run the
home, the excess solar power flows into the grid through the meter, and
the utility company pays the homeowner for it. This last step is important in
understanding the solar self-sufficient home.
People who think power companies are evil are smiling about this. But putting
power companies out of business in the short term would be disastrous,
because we don't have another solution for demand-available power -- which
solar power is not.
Those power companies are going to be necessary until we have enough
alternative power sources or storage to be demand-available on
renewables alone. We're a lot further from that than 2025.
- They are tossing extra electrical power into the grid
that is not needed.
Really! Not needed. Remember that factor of three. They are using all
the power they need, and (in order for the numbers to be
revenue-neutral) dumping twice
as much into the grid. (1/3+2/3=1) If a third of the homes on the local
grid are solar and dumping that power, then no generation is needed at
all during hours of strong sunlight. If it's more solar than that, we
have a power glut. This
is not an academic point!
In Germany, where government commitment and incentive to go solar has
been very strong, power companies are actually having to pay customers to use the excess power.
- The utility company has a lot of infrastructure to
maintain: not only generators but the transmission and distribution networks that
get power to the customers. All this is capital intensive, and requires
labor as well to maintain it. Turns out that the cost of fuel is only
about half the cost of power for the average household; the rest is
paying for infrastructure and labor. If the solar home pays nothing to
the utility company, then is that fair? They are using the grid to get
2/3 of their power, but not paying anything for it. If payment were
"fair", based on usage, then the solar homes would be paying for the
infrastructure as well as for actual generation costs they use, minus
the actual cost of the electricity they provide to the grid. (Of course,
this ignores point #1, which is that enough such homes provide too much
power to the grid and actually create a problem.)
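The "a third of the homes" claim above can be checked with a little arithmetic. A sketch under my own simplifying assumptions (during strong sunlight a solar home generates at three times its own consumption rate, uses one unit itself, and exports two):

```python
# Net generation the utility must supply per home during strong sunlight.
# One "unit" = one home's consumption rate.
def net_grid_load(solar_fraction):
    non_solar_demand = (1.0 - solar_fraction) * 1.0  # ordinary homes draw power
    solar_export = solar_fraction * 2.0              # each solar home exports 2 units
    return non_solar_demand - solar_export

print(round(net_grid_load(0.0), 6))   # 1.0  -> utility supplies everything
print(round(net_grid_load(1/3), 6))   # 0.0  -> no central generation needed
print(round(net_grid_load(0.5), 6))   # -0.5 -> a power glut
```

At exactly one-third penetration the exports cancel the rest of the grid's daytime demand; beyond that, the grid has more power than it can use.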
Bottom line: the only way we can get much more than about a third of our
electricity from solar power is if we either solve the problem of
cheap, massive energy storage or manage to build and maintain a
worldwide electric grid with massive transcontinental power flow.
The price of the cheapest 3D printer came down from $18,000 to $400
within 10 years. At the same time, it became 100 times faster.
All major shoe companies have started printing 3D shoes.
Spare airplane parts are already 3D-printed in remote airports.
The space station now has a printer that eliminates the need for the
large amount of spare parts they used to have in the past.
At the end of this year, new smartphones will have 3D scanning
possibilities. You can then 3D scan your feet, and print your
shoe at home.
In China, they have already 3D-printed a complete 6-story office building.
By 2027, 10% of everything that's being produced will be 3D-printed.
I believe the 10% figure. I believe 2027.
I don't know how much beyond 10% it will ever get.
Anyone who has taken Economics 101 will instantly recognize the graph to the
right. It shows manufacturing cost, the cost of producing N units
assuming some fixed cost and some unit cost. 3D printing
is ideal for certain types of manufacturing, including:
- Custom manufacturing. Shoes are a great example. I was just successfully
treated for a foot malady with custom orthotics. The fitting and
manufacturing were low-tech, but I could certainly see digital scanning
and 3D printing doing it in the future.
- One-off or
few-off. We will see what this means below.
distribution or delivery is a major issue, 3D printing enables digital
distribution and delivery. E.g.- space station repair parts --
already been done. E.g.- circumvent gun control laws by
putting 3D printer source files on the web to print guns -- this has
also been done.
- Product sizes and shapes that cannot be readily
manufactured by conventional means. (Suggested by Ted Noel in Aug 2017)
The primary economic difference between 3D printing and conventional
manufacturing is the
very low -- almost zero -- fixed cost. The entire fixed cost (assuming
you have a 3D printer available) is the engineering required to produce
the numerical control file for the printer. Since conventional
manufacturing requires a similar level of engineering, let's compare
them by ignoring engineering costs altogether. (Hey, I'm an engineer.
If it doesn't bother me, it shouldn't bother you. ☺)
- The fixed
cost is the cost for producing zero units,
and it is where the line on the graph meets the left of the chart (the y-axis, if you
remember high school math).
- The variable
cost is the additional cost of producing
one more unit, and is the slope of the line on the graph.
But most conventionally manufactured parts have a lower variable cost
("unit cost") than
3D printing. The unit costs for 3D printing are the 3D "ink" and the
per-unit amortization of the 3D printer.
I said above that we
would get back to what "few-off" means when comparing conventional
manufacturing to 3D printing. Here we see it quantitatively. The point
where the "3D printing" line crosses the "conventional manufacturing"
line is the break-even point. For fewer units, it pays to do 3D
printing. For a larger number of units, it is worth incurring the
tooling and setup costs of conventional manufacturing -- say, casting
or machining. As 3D printing technology matures and unit costs thus go
down (moving from the red curve to the purple curve), this break-even
point goes up; it pays to manufacture more units with 3D printing.
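The crossing point in the graph can be written down directly: with fixed cost F and unit cost c for each process, total cost is F + c*N, so the lines cross at N* = (F_conv - F_3d) / (c_3d - c_conv). A minimal sketch with invented illustrative costs (none of these numbers come from the article):

```python
def total_cost(fixed, unit, n):
    """Total cost of producing n units: fixed cost plus per-unit cost."""
    return fixed + unit * n

def break_even(fixed_a, unit_a, fixed_b, unit_b):
    """Number of units at which process A and process B cost the same.
    Assumes A has the lower fixed cost and the higher unit cost."""
    return (fixed_b - fixed_a) / (unit_a - unit_b)

# Hypothetical numbers: 3D printing has almost no fixed cost but a high
# per-part cost; conventional casting needs expensive tooling but makes
# cheap parts.
n_star = break_even(fixed_a=100, unit_a=20,      # 3D printing
                    fixed_b=10_000, unit_b=2)    # conventional casting
print(n_star)  # 550.0 -- below ~550 parts, 3D printing wins
```

Lowering the 3D unit cost (the red-to-purple move on the graph) raises the break-even count; lowering the conventional fixed cost, say with 3D-printed tooling, pushes it back down.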
How can conventional manufacturing compete? For most conventional
manufacturing to win, one of two things must happen:
- A very large number of identical parts are required.
There are surprisingly many parts with this requirement.
- Fixed costs can be reduced (moving from the blue
curve to the green curve).
Based on these two factors, I assert that there will always be products
and/or parts that are preferable for conventional manufacturing, and in
fact far more conventional than 3D-printed parts will be made far into
the future. One of the reasons, paradoxically, is the versatility of 3D
printing. Engineers are going to use 3D printers to attack the fixed
costs of conventional manufacturing. For instance, the major fixed cost
of casting parts is making the specialized mold for the part. Why not
make the mold itself on a 3D printer? That could substantially reduce
cost. If you look at the graph again, you can easily imagine the
tradeoff point not moving very much. It only requires that reduced unit
costs for 3D printing are offset by reduced fixed costs for 3D-printed
tooling for conventional manufacturing.
Bitcoin will become mainstream this year and might even become the
default reserve currency.
I don't think so. Bitcoin has severe limits on the size of the economy
it can represent. It needs a major rearchitecture in order to be large
enough to be the monetary system of even a modest-size country, let
alone a large industrial economy.
The technocrats of bitcoin have known that for a long time, but I have
not seen them committed to a new algorithm that can handle enough
transactions to grow the bitcoin economy.
But perhaps the forecast is on the right track, that it really doesn't
mean bitcoin literally but rather "something like bitcoin" will go
mainstream. What is it about bitcoin that would make it attractive
enough to become a major world currency? It has two things that
make it attractive to its adherents:
- It is backed by computation rather than by a government.
Existing currencies are either based on some commodity (often gold), or
are based on the promises of a government. Bitcoin is "backed" by
computer cycles and encryption technology, not by raw
materials nor a government promise. It is easy to see how a commodity
could back a currency. In its simplest form, the coins would be made of
the commodity -- e.g.- gold or silver coins. But many (most) currencies
today are based on a government's promise to pay, even without having
enough of the commodity to redeem all the currency out there. Bitcoin
is "backed" by the work necessary to solve a decryption problem. In
other words, the value behind it is "labor" (in quotes because the
actual work is done by computers, not human hands nor minds). So it
isn't restricted by the supply of gold, and it doesn't depend on the
promise of a government that people might not trust.
- It is decentralized. There is no central bank, no regulating
authority. It could be a government-free world currency.
#1 might be a good idea, if suitably redesigned. The current
architecture of bitcoin limits the total number of transactions per
second that can be accomplished. There are proposals for expanding it,
but none have yet been adopted by the bitcoin economy. Some
rearchitecture must be done for bitcoin transaction volume to rival
even Visa -- much less become the basis of a world currency.
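The transaction-rate ceiling can be sanity-checked from bitcoin's well-known parameters: a 1 MB block-size limit (as of 2016) and one block roughly every ten minutes. The 250-byte average transaction size below is a rough assumption, not a protocol constant:

```python
block_size = 1_000_000   # bytes: bitcoin's consensus block-size limit (circa 2016)
block_interval = 600     # seconds: average time between mined blocks
avg_tx_size = 250        # bytes: rough average transaction size (assumption)

tx_per_second = block_size / avg_tx_size / block_interval
print(round(tx_per_second, 1))  # 6.7
```

A handful of transactions per second for the whole world is the limit the author is talking about; payment networks like Visa routinely process thousands per second.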
#2 is a lot more
questionable. A one-world or a libertarian political philosophy might
want it. But there is a definite value for national reserve banks. They
can set interest rates to stimulate or cool the economy. They can
devalue and revalue currency, which is often necessary to maintain
economic order. A completely decentralized cryptocurrency (a big
bitcoin) would not serve any of those functions. My guess is that
economic anarchy would not be pretty.
The idea that computing
cycles will be the currency of the future is consistent with the belief
system of adherents of the singularity. While bitcoin might indeed
become an important player in the world economy, predicting that it
will, at this point, seems more like confirmation bias than balanced forecasting.
If you think of a niche you want to go
into, ask yourself, "In the
future, do you think we will have that?" and if the answer is yes, how
can you make that happen sooner?
If it doesn't work with your phone, forget the idea.
This is a very perceptive rule of thumb.
It also says something about our investment environment.
Something toxic, in my opinion.
In the first place, it is self-described as a niche strategy. Niches make
money for a while. Occasionally they make money for a decade or two.
But they do not advance civilization. Pokemon GO will be gone in a
year, having been a very successful niche that made someone a lot of
money. It is the sort of thing that venture capitalists love:
relatively little capital investment and a quick return.
But real progress requires longer-term thinking and
as Edison famously said, "1% inspiration and 99% perspiration." (I'm
both an engineer and a resident of New Jersey, so quoting Edison comes
naturally.) The modern example that comes immediately to mind is Elon
Musk. And it is no surprise that he is heavily invested in two of the
major topics of this forecast: electric automobiles and solar power.
(He is the chairman of both Tesla Motors and Solar City.) And, also not
surprisingly, he is investing heavily in battery technology; he knows
that it is necessary for the eventual success of both the electric
automobile and solar power. It is also worth mentioning that he is the
founder of Space-X, another big-risk, big-reward enterprise.
Musk is ignoring the suggestion, "If it doesn't work with your phone, forget
the idea." His investment is orthogonal to working with your phone; it
doesn't work with it and doesn't work against it. (Actually, success
with batteries might help your phone, but that isn't what the author
meant by "work with your phone".)
I'm afraid my attitude on this
topic will not resonate with millennials and venture capitalists. But I
sincerely believe it is
correct. Significant human progress depends on undertaking large,
daunting projects. Doing all the easy and fun things ("works with my
phone") isn't going to advance humanity very far.
70-80% of jobs will disappear in the next 20 years.
Agreed! In fact, it seems to me that there will never be enough jobs
again. And that is going to
create substantial social upheaval.
There will be a lot of new jobs, but it is not clear if there will be
enough new jobs in such a short time.
We may not have reached the technological singularity, the point where our
technological creations take over and run on their own. But we have
indeed reached a point where technology (robots and AI at the top of
that chain) is taking over many of our jobs. Yes, we can blame
offshoring for loss of manufacturing jobs. But if all that
manufacturing came back tomorrow (and some of it is actually returning
to the USA), it is not clear that mass employment would be the result.
The companies manufacturing in the US are making it work with
industrial robots. Some manufacturing is being done with 3D printing, and more will be
in the future.
By the same token, software (including, but not limited to, artificial
intelligence) is displacing a lot of our knowledge jobs. The case was
made in Gollub's article that the need
for human lawyers
is going away. (So many punch lines, so little time. ☺)
But the market for
engineers and software developers is also drying up. (A lot of that is
due to offshoring today. But there is no reason to believe it won't see
the same impact from AI that manufacturing did from robotics.)
Health care is a
large and growing industry, and currently a source of jobs -- even new
jobs. But as it
grows as a fraction of the total economy, we are seeing the imposition
of (sometimes draconian) cost controls. And robots and AI will
certainly vie for currently human jobs in health care. Gollub's article
gives a number of examples where this is already happening. Right now,
technology "helps" doctors and nurses, but enough help can turn into
"replacement" eventually. I'm not an expert on healthcare nor its
technology, so I don't have anything to add to the conversation. But I
can read the handwriting on the wall.
When I point out to well-read friends that machine productivity (both
manual and mental) is creating unemployment and will continue to increase
unemployment, one of them always tells me that Economics 101
teaches that increased productivity always eventually leads to higher
employment, not lower. Not just a historical observation, but provable
in the mathematics of economics. So I looked up the proof. Found
several versions in different places. All the proofs depended on a
fundamental economic assumption: elasticity of demand.
That means that, if the price decreases, people will buy more.
Without elastic demand, it doesn't follow that increased productivity
results in increased employment.
We may no longer live in a
demand-elastic world, or we might not in the future (especially after
the singularity). If all the goods and services the population
needs can be provided by significantly less than the full population,
an increase in productivity (fewer people needed to provide the same
good or service) does not necessarily mean an increase in demand. All
it means is a decrease in employment necessary to provide it.
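That argument can be made concrete with a toy constant-elasticity model. Assume competition passes productivity gains through to price (price proportional to 1/a for productivity a) and demand Q proportional to price^(-eps); then employment L = Q/a scales as a^(eps-1), so it grows with productivity only when demand is elastic (eps > 1). A sketch, with all numbers purely illustrative:

```python
def employment(productivity, elasticity, base_q=1000.0):
    """Workers needed when price falls in proportion to productivity gains
    and demand has constant price elasticity."""
    price = 1.0 / productivity              # pass-through of unit-cost savings
    quantity = base_q * price ** (-elasticity)
    return quantity / productivity          # workers = units sold / units per worker

# What happens to employment when productivity doubles?
for eps in (0.5, 1.0, 2.0):
    ratio = employment(2.0, eps) / employment(1.0, eps)
    print(eps, round(ratio, 2))
# 0.5 0.71  -- inelastic demand: employment falls
# 1.0 1.0   -- unit elasticity: employment unchanged
# 2.0 2.0   -- elastic demand: employment rises
```

The Economics 101 proof quietly assumes the bottom row; the author's point is that a saturated, demand-inelastic world lives in the top row.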
And there's another argument that unemployment will increase permanently.
Historically, automation may have done away with some jobs but created
new ones. Usually the new jobs are better jobs than the old ones they
replace. I hope this happens. But the jobs I see being created are
those of designing, making, and maintaining the instruments of
automation, those instruments being robots and AI. Yes, those are
"better" jobs than the ones the robots and AI displaced; they require
more education and training, and would be [presumably, but not with
certainty] better paying. But they would also be fewer.
Think about it. No company is willing to automate (spend dollars on
capital equipment) if they get the same output with the same payroll.
Spending on automation goes up only if the cost per unit goes down.
And, unless there is considerable elasticity in demand, if unit cost
goes down, then total cost (and total payroll) will also go down.
So the new jobs will have to be in new industries yet to be envisioned.
I hope they are there. I am not confident they will be.
If unemployment is constantly growing and the problem is structural and
not temporary, we need to find a different form of society. We have
been wrestling with the economics and morality of "welfare" for more
than a century. If my interpretation of where technology and industry
are going is correct, then the welfare problem will move from chronic to
acute. There will be a large and growing segment of the population that
is unemployed. It won't be for lack of ambition or discipline. It won't
be for lack of education, knowledge, or training. It will be because we
don't need as many workers to maintain the tangibles of civilization.
Today there are opposing and equally valid prevailing opinions about
welfare. If technology leads to the future I see, we will need to find
a way to reconcile these opposing views. Welfare will need to do both:
- It should be sufficiently generous to provide a life with
dignity and self-esteem.
- It should be sufficiently frugal to incent those who can
work to do so.
We will need fewer workers, but talented ones. Entrepreneurs, idea
people, and those with the needed skills should be able to exercise
those talents to live a better life than most, in exchange for their
contributions. But just minimal subsistence for the unemployed would
result in a tense accommodation at best and class warfare at worst.
For more on a
Right now, the average life span increases by 3 months per year.
Four years ago, the life span used to be 79 years, now it's 80 years.
The increase itself is increasing, and by 2036, there will be more than
a one year increase per year. So, we all might live for a long
time, probably much longer than 100.
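As an arithmetic aside, those quoted numbers imply a specific acceleration: going from a gain of 3 months (0.25 year) per year to a full year per year within 20 years requires the gain itself to grow about 7% per year, assuming (my assumption, not the forecast's) that it grows geometrically:

```python
gain_2016 = 0.25   # years of life expectancy gained per calendar year (quoted)
gain_2036 = 1.0    # the forecast's implied gain 20 years later
years = 20

# Annual growth rate of the gain needed to connect the two endpoints.
required_growth = (gain_2036 / gain_2016) ** (1 / years) - 1
print(round(required_growth * 100, 1))  # 7.2 (percent per year)
```

That is a steep but not absurd compounding rate, which is why the claim is hard to dismiss outright.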
This is presented as a blessing. If you agree with me on the previous topic,
you have to take the blessing with a grain of salt. Longevity means a larger
population, which probably means more unemployed. I'm not saying don't
work on longevity; I'm 75, and I want to push longevity personally. But
I am saying that longevity will exacerbate the unemployment and welfare
problems.
This increased population also means we will need all the
agricultural tech that the forecast postulates. (I'm not at all
knowledgeable about agriculture; I'm not qualified
to comment on that
part of the forecast.) If we can't do that, then Malthus will not have been wrong
in his dire predictions, just early by a few centuries. The fact that
we are a few centuries past his deadline gives me cause for hope.
The cheapest smartphones already cost $10 in Africa and
Asia. In 2020,
70% of all humans will own a smartphone.
That means everyone has the same access to a world-class education.
Every child in the world will be able to use Khan Academy to learn
everything children in First World countries learn.
Show me where this actually works. In the US, the smartphone and
Internet are widespread. My observation is that they provide education
for a small percentage of the users, and dumb down the rest. I'd love
to be shown otherwise, but that is my impression, based on no formal
knowledge nor expertise, just common observation. I don't
know anybody without skin in the game who will argue that the
smartphone and Internet are currently making our
society smarter, on average.
This article is a critique of a technology forecast based on a seminar
by Singularity University. That forecast is predictably consistent with
SU's major tenet that, sooner rather than later, artificial
intelligence will grow without bound and make our lives much better. My
critique agrees with much that is said. But I question the time frame
of some of the predictions;
we aren't nearly ready to move ahead with them. I
also have some skepticism about the generalization of AI that the
singularity requires; I don't see progress at the rate predicted, and
I don't know that AI will ever be sufficiently "aware"
to attack problems it hasn't been asked to solve. Finally, I throw in
some caveats about societal implications of the forecast.
There is a second page to this article, with later thoughts and, more importantly, news stories that do a reality check on the forecast.
Last modified - August 15, 2017