Technology Forecast - 2016: Notes and 'Dotes
Dave Tutelman - July 23, 2016

Notes and Anecdotes:
Some things in the original article call for footnotes that would
interrupt the flow of the article. And some call up related anecdotes,
well worth writing down but not essential to the argument. Here they
are, on their own page. Finally, some of the notes are
added-after-the-fact "progress reports", either denials or
confirmations of the predictions.
Technology forecasting
...And why
would anybody even care what I have to say about a technology forecast?
One of my jobs during my 40 years with Bell Labs was technology
forecasting. In addition to a number of formal forecasts at five-year
intervals, much of my career was spent on questions of the sort "what
if this actually works?" So I have some experience in forecasting.
I have written an article on methods used by
technology forecasters, in case you're interested in how it's
done.
Phone service and wideband data via cable TV
When I started working for Bell Labs, the "Bell System" was in fact the
nation's phone company. My fellow engineers at The Labs had to
understand how the
entire phone system worked. As part of new-employee orientation, we
were even loaned out to a local phone
company somewhere in the USA for six weeks to
experience operations first-hand. (I went to New England
Telephone
in Boston and
New Hampshire.)
One of the things you learn with the phone
company is that most of the costs are in "the last mile" -- the wires
from the phone company's local office to your home. Not the telephone
on your desk, and not the big, computerized switching system, and not
the transcontinental transmission systems. Nope, just copper wires
hanging from poles or buried in the ground. Why? It's the multiplier. A
switching system serves thousands or even tens of thousands of
households. Long-distance transmission may serve even more. But we need
a pair of copper wires -- dedicated -- from the last switch in the
TelCo building to the
telephone that is going to use it. (Remember, this is before the days
of cell phones. If you had a phone, you needed wires.)
The last
mile (or three or ten miles, depending on population density) is called
"local distribution", and it's where the money is. Find a way to save
money there and you're talking big dollars. As a forward-looking
technologist (my job for most of my career), one of my jobs was doing
assessments of novel ways to get phone signals to the end user for less
cost.
In 1979, the question came up of whether the cable TV
industry, growing quickly from just about nothing a few years earlier,
could be part of the local distribution solution. They had plenty of
bandwidth, because TV signals are far more demanding than telephone
voice signals. Is there some way to share a little of that bandwidth to
piggyback telephone signals on the cable? My group was assigned to
answer the question. The lead members of technical staff were Debi
Bennett for technology and Tony Cooper for business issues. The way we
attacked it was a several-month role-playing exercise:
we imagined
ourselves as the Sandy Springs Cable Company. Why Sandy Springs? We had
good technical
relations with Southern Bell Telephone Co. They told us that Sandy
Springs, GA, was a very typical fast-growing suburb, the perfect
example for a cable-TV market. And they were willing to share a lot of
data with us, including street-by-street and manhole-by-manhole utility
maps, so we could do detailed engineering.
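A rough comparison makes the bandwidth point concrete. The channel widths below are standard industry figures (6 MHz for an NTSC cable channel, about 4 kHz for an analog voice channel), not numbers from our study:

```python
# Back-of-envelope: how many telephone voice channels fit in the
# bandwidth of a single cable TV channel. Standard figures, not
# numbers from the Sandy Springs study.
TV_CHANNEL_HZ = 6_000_000    # one NTSC cable TV channel (6 MHz)
VOICE_CHANNEL_HZ = 4_000     # one analog voice channel (~4 kHz)

voice_per_tv_channel = TV_CHANNEL_HZ // VOICE_CHANNEL_HZ
print(voice_per_tv_channel)  # 1500
```

Even giving up a single TV channel would, in principle, carry phone service for a sizable neighborhood.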
We quickly became
experts on the technology available, and invented our own where needed.
(Very little new technology was needed; the market already had niche
products for
adding phone channels to the TV channels.) Here were some key results
of our study:
- In 1979, cable company coverage had grown to the point
that 50% of all homes in the USA were passed by cables. Of
those homes, 50% were cable TV subscribers, and the other 50% could be
easily and cheaply connected to the cable that ran by their house. So
doing local distribution for phones was immediately feasible for about a quarter of homes, and not much more of a stretch for another quarter. We couldn't
replace phone cables with TV cables; the phone cables were already
bought and paid for -- in the ground. But we could defer (perhaps
indefinitely) adding any new phone cables. That is big bucks.
- If I were a cable company wanting to offer local phone
service, my last mile costs would be less than the phone company's
copper wires. I already had cable in the ground; my only costs would be
the incremental cost of adding phone capability to the cable. I would
have to make an investment in some electronics,
basically replacing some amplifier modules in my cable. And I would
have to turn a few of my TV channels into phone channels. (But most
cable TV operators in 1979 weren't using all the TV channels their
systems could carry.)
- Given these findings, why did it take another 20+ years for
telephone service via cable TV to become a large business? The answers
are financial, not technical or economic. (The difference between
"economic" and "financial" is the difference between "it can make some
money" and "it is the best disposition of our resources to make the
most money.") In 1979, and for years after that, cable
companies had
more lucrative things they could do than offer phone service.
- Adding more customers on the routes they already served
(houses they already passed) was remarkably low cost, and had a big
revenue return.
- Pay-TV services (premium channels and pay-per-view)
required almost no capital investment at all, and promised substantial
revenue. That's a huge margin. Not only was it a more lucrative return on investment; it was also a better use of the spare channels they might otherwise have used for phone service.
- So it was feasible, but there were good reasons it didn't
get done for many years.
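The coverage arithmetic behind the first bullet can be sketched as follows (the 50% figures are the ones from the study; the multiplication is mine):

```python
# 1979 coverage estimate: half of US homes were passed by cable,
# and half of those passed were subscribers.
homes_passed = 0.50        # fraction of US homes a cable ran past
subscriber_rate = 0.50     # fraction of passed homes subscribing

already_connected = homes_passed * subscriber_rate        # 0.25
cheap_to_connect = homes_passed * (1 - subscriber_rate)   # 0.25
print(already_connected, cheap_to_connect)  # 0.25 0.25
```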
Three years later,
my group (not the same group nor the same project) was looking for ways
to deliver broadband data service to homes and small businesses. It had
now been ten years since we had figured out how to deliver 56 kb/s data on ordinary phone lines (Dataphone Digital Service), but anything faster
still required special lines -- making the last mile even more
expensive. Perhaps we could use some of the capacity of cable TV for
high-speed data distribution. This is not phone service; the premium
data service might change the financials for the cable company, and the
technology could probably be made to support it.
The cable industry was growing by leaps and bounds, with lots of new technology since our 1979 study. Also, my lead engineer on this
project, Ken Huber, wasn't involved in the earlier study and had none
of the knowledge we had acquired then. So Ken and I went back to
college. George Washington University offered a three-day intensive
course in up-to-date cable TV technology. So we drove down to
Washington DC to get educated.
The answer came out pretty much the same. There were business reasons,
not technical or economic, that limited the approach. For one thing,
there were still investments with better return that looked more like
cable TV. For another, the industry was still mostly fragmented; much
of the nation's coverage was small "mom'n'pop" companies, not today's
consolidated industry of big companies. Getting each small company on
board and technically equipped and trained was daunting.
That brings me to an anecdote that depends on the small-company nature
of the industry, and the fear among
those small companies of big companies and consolidation...
There were about 50 people taking the course. Almost everybody there
was from a small cable operation, the one or two technical people in
the whole company. Only one large company was represented, and it
wasn't a cable company! It was AT&T (of course, including Bell
Labs); there were a dozen attendees from my company. AT&T was a
huge company (about a million employees if you include the subsidiary
local phone companies), and Ken and I didn't know the other
AT&T students, nor did they know each other. None of us even
knew
that other AT&T employees would be attending.
But to the small cable company people, it looked like a huge
conspiracy. They were convinced that this presence from the Bell System
(about a quarter of the students) meant AT&T intended to take
over the cable business -- and throw them out of business. We got the
evil eye from them for the entire course.
Whenever I was in Washington on business (and that was a lot in the
1970s), I would make time to have dinner with my friends David and
Brenda, who lived in Georgetown. So I called them to see which day they
would be available. They couldn't make it. Seems that Brenda had a new
job, general counsel to the Cable Industry Association, a trade association (Washington talk for a lobbying group) funded by all those
small cable companies. She was working overtime to get up to speed on
her new industry, and didn't have time for anything else, not even
dinner. I said I was there to take a cable TV technology course, and
would love to discuss her new job. But no dice.
The middle day of the course, we got out of class to be greeted by huge
news -- interesting to everyone in the telecommunications industry. Divestiture was announced that day!!!
AT&T and the Justice Department announced that they were settling their antitrust lawsuit by breaking up the Bell System. The local telephone
companies would be spun off as independent companies. AT&T
would just be left with long distance, which was a competitive business
already. That meant that AT&T would be free to go into other
related businesses. Like
the cable TV business? That was the first thing on the
minds of our classmates when they heard it. If we were facing paranoia
from them before, it was doubled now.
And, when we got back to our hotel that evening, I had a bunch of
messages from Brenda asking when we could get together. Looks like the
paranoid conclusion was drawn by everybody in the industry -- including their general counsel.
Finding meaning in a non-productive life
My brother Bob pointed out,
You do not discuss – but I see as
a very
significant
issue – the problem of finding meaning in a non-productive life.
Certainly there will be people whose hobbies will give them
satisfaction and a sense of usefulness. But many – perhaps most – of us
now find justification for our existence in our work. The person who
retires to a life of idleness and dies in a year is cliché.
Bob is absolutely correct. But I don't know what technology or the overall economic system can do about that -- not that I'm any kind of
expert. But let me at least give some independent data in support of
Bob's assertion.
During the first half of my Bell Labs career, there was a weekly
company newspaper, the Bell
Labs News, distributed to every desk. The entire back page
was devoted to "milestones": promotions, service anniversaries,
retirements... and deaths of [mostly retired] employees. In the
case of deaths, the year of retirement was mentioned. Over the course
of years, I noticed that there was a strangely bimodal distribution of
the time between retirement and death:
- The largest number of people lived at least 15 years after
retiring.
- A smaller but substantial number of people died within 5
years after retiring.
- Almost nobody died between 5 and 15 years.
I wondered about that. My first guess was retirements due to illness
that soon took the ex-employee's life. But there were too many for that
to be the only reason. Eventually I had been working there long enough
so that I knew the people In Memoriam. It struck me that those who
were dying early were almost always people who had been "married to the
job", people with no hobbies who just lived for their work. The people
who made it 15 years or more had outside interests while they were working, which they could cultivate into full-time activities after they retired.
So I am in complete agreement with Bob about the significance of the
issue. But I don't know how to deal with it, in the context of
technology or the economic system. Education, perhaps. But I don't
want to get into my opinions on education here.
Food to feed the world
Again, from brother Bob,
In fact, we are now producing more
than enough food to feed the world.
The reason that people are starving in some places has to do with
distribution and politics. In the Horn of Africa, for example,
charitable relief food is deliberately blocked from getting to starving
segments of the population by government, insurgent, or paramilitary
forces. Even without considering the lab-grown meat discussed in
Gollub’s review, today we have huge potential to expand the food supply
simply by reducing the amount of meat that we eat. By converting the
land given over to livestock and growing feed for livestock to growing
food for people, we could feed a much larger world
population. The
problem is that people enjoy eating meat, and as the economies of
poorer countries grow, more and more of their people can afford to eat
meat more often – and want their turn at the table. Supply and demand
forces have resulted in the current meat-vegetable balance in the
world’s food supply, and presumably will continue to do so.
I disclaim any expertise in agriculture and food distribution. I have
seen articles supporting what Bob says here, and I have seen [fewer]
articles claiming the opposite. I don't know enough to weigh in on one
side or the other. Some of the controversial issues that will determine
how it will work itself out include:
- Politics (as Bob notes).
- Global climate change.
- Acceptance or demonization of genetically modified foods.
- Acceptance or demonization of chemical aids to agriculture.
- Animal rights activism.
More on electric cars
I read MIT Technology Review almost
daily. On August 16, 2016, there
was an article suggesting that the
range limitation of electric cars is not such a big deal. Of course,
that generated a lot of discussion, some thoughtful and some simply
Luddite or self-interested. But one comment was particularly
interesting. Martya (that's a forum nickname; I don't know who it
actually is) posted a note on possible unintended
consequences of a transportation market with a significant percentage
of electric cars.
martya
Before one can assert that this will have positive, or negative
environmental effects, one must analyze all the globally significant
supply and recycling chains that would feed these vehicles. The big
questions:
1. Where does this much
electricity come from?
At this scale, and with the vehicles running in dense urban areas, the
nation will require a large shift in base load demand to night
time. This has implications for almost all the electricity
infrastructure of the US and the sources of power generation and
transmission.
I'm researching solar and wind systems - on the ground - as
deployed in the US. Land use implications are far from well understood,
and include water use, disruption of "desert crust" which has been
recently shown to sequester carbon, and many other unintended effects
of seemingly benign infrastructure changes.
2. What are the
environmental implications of this much energy storage?
(Batteries).
Travel to the mines. At this scale of vehicle penetration,
the effects on Earth's physical resources are quite large.
Mining uses significant water, in water-scarce places, for
example. At an 80% electric vehicle penetration
rate in a fleet of more than 230 million vehicles, the impact of
batteries required to handle such energy density will be
enormous. Recycling batteries is far from a known phenomenon,
and based upon early data from around the world, may represent a
significant environmental challenge.
3. What are the
environmental implications of the dramatic expansion in
electronics and data infrastructure required to manage this much power
and these many "mobile transportation computing devices"?
Current global e-waste patterns for small items such as the several
billion mobile phones recycled every year reveal a significant
effect. An electric vehicle can generate 1,000 to 7,000 times
more e-waste than a mobile phone, so the global implications of
150,000,000 operating electric vehicles in just the US will be a
significant environmental event.
The old joke about New Yorkers who did not know milk came from cows is
applicable here....except that it may not be a joke.
Every fundamental innovation in transportation, housing, food supply,
water use has affected the global environment in many unintended ways.
This is an interesting study, but does not address the core
environmental issues and tradeoffs.
This mirrors and expands on some of my own concerns. All of those
concerns might be handled eventually. (Might! Some won't be easy, and
some solutions will be controversial.) But the horizon of the
Singularity forecast is much too close to deal effectively with the
issues.
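Martya's e-waste scaling can be sanity-checked with simple arithmetic; the multiplier range and fleet size are taken from the comment itself, and the calculation is mine:

```python
# E-waste scale of a largely electric US fleet, in "phone
# equivalents", using the figures from martya's comment.
us_electric_vehicles = 150_000_000   # operating US EVs, per the comment
mult_low, mult_high = 1_000, 7_000   # EV e-waste vs. one mobile phone

low = us_electric_vehicles * mult_low    # 150 billion phone-equivalents
high = us_electric_vehicles * mult_high  # 1.05 trillion phone-equivalents
print(low, high)
```

Compared with the "several billion mobile phones recycled every year" that the comment cites, that is tens to hundreds of times the annual phone-recycling volume.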
Let me draw an analogy to the settling of the western
United States in the 1800s. Chop down trees to build your houses. As
long as there were only a few people per square mile, it was cheap and
without noticeable side effects. At a thousand people per square mile,
it deforests the area. And a thousand people per square mile is just light suburban population
density. I live in a suburban town of no more than average density,
and that is 3000 people per square mile. It doesn't take that much
density or usage to turn a nearly-free resource into a scarce resource.
The electric car is similar. As long as it is just 1%
or less of the automobile market, the benefits are obvious. So are the
drawbacks, of course. When it starts to exceed 10-20%, it places
non-obvious strains on the economy and the environment (non-obvious today, very obvious when it happens).
Progress reports
I read quite a bit of technology news, including the MIT Technology
Review every morning. Thus I am bombarded with
articles confirming or denying either my predictions or the
Singularity University's. Here is what I have seen so far. I am trying
to sharply limit the number of references; there is a virtual flood of
articles about topics in the forecast. I'm especially avoiding
articles pushing specific products, looking more for dispassionate
overview articles.
Artificial Intelligence
The press has a nearly continuous flow of stories on AI. I'm not going to
list every one I see here, just those that seem to me indicative of a
trend, or a broad observation about AI.
2017 Oct 6
- Rodney Brooks, an MIT researcher in AI, presents a set of common
misconceptions about AI that you'll need to avoid to make sensible
predictions about it. In fact, you should be aware of these if you're
going to be intelligent about even reading AI predictions.
2019 July 14
- Since my 2016 article, there has been one headline after another
hailing yet another AI achievement. I am thinking a little differently
today:
- AI techniques, specifically learning from massive
amounts of data, are solving well-defined problems and solving them
pretty well to very well. Things like face recognition and the ability
to beat humans in increasingly complex games are very
impressive. Just this week, an AI program managed to conclusively
beat some of the world's best poker players, including wagering and
bluffing.
- Each of these successes has been completely isolated.
In fact, one characteristic is that even a small broadening of the
problem is "back to square one" with defining the problem and training
from data. So it is more a great leap of applying computation to a
problem, and less an obvious step towards "the singularity".
Generalized intelligence has not been part of the AI track record.
- AI
is facing a lot of pushback, and not just from Luddites. AIs related to
people, such as face recognition, exhibit racial and gender bias, and
it is not obvious how to change the data to completely eliminate such
bias. AI programs can be (and possibly have been) used by
repressive regimes to further subjugate their citizens. And AI
"algorithms" (I hate that application of the word; they are at best
heuristics) are completely non-transparent; even the
programmers/trainers don't understand the rules under which they
operate. This sort of problem has given rise to a whole field of "AI
ethics", which is increasingly saying, "Hold on there! Just a minute!"
This trend is likely to push "the singularity" back significantly, if it happens at all.
2019 Nov 5
- I have long been skeptical of generalized AI, which is what would
be needed to achieve "the singularity". We have found ways to automate
very specific tasks. Even small changes in the task require retraining, as noted in the previous reference. And retraining is expensive. A modern AI application based on neural networks can require
more than a million dollars worth of cloud computing to train from
data. And the carbon footprint is scary big; that same training uses as
much energy as five automobiles in their lifetimes. So I saw AI as
addressing only specific applications or tasks that had a high value to
automate.
Can generalized AI be evolving, in a place or a way that I'm not seeing? Perhaps so. It appears
someone is working on the problem of making these tasks work together
based on context. This contextual orientation is, IMHO, essential to
generalized AI, probably even key to it. And it is not being driven by
the research community. Rather, it is the future that the world's
third-largest company sees for itself. So funding is not a problem.
What
is this AI breakthrough to watch? It is Amazon's big plan for Alexa,
the voice-operated personal assistant at the core of its smart speaker
product. It may have started as a smart speaker, but Alexa has spread
to more of Amazon's product
infrastructure, and is collecting data on how those products' owners
live. The picture at the right is an illustration of what the Alexa
team is looking to do in the short term. In order to develop this level
of sophistication, it will have to crunch the day-to-day data of
millions of Alexa owners' use of the existing tasks Alexa can do.
That's a huge amount of training. But now somebody has (or can and will
get) the data. And that somebody has the financial wherewithal and
incentive to bear the expense to crunch the data.
That
is simultaneously encouraging and scary. And not just scary for people who fear the singularity rather than welcoming it. Getting there
involves a huge invasion of privacy by a profit-driven company. Amazon
is in business to sell you stuff. This big AI push is not just to sell
Alexa-enabled products. It is to be in a position to know when you will
need something, and try to sell it to you before you start shopping
with one of their competitors in retailing. That's the scary part.
2022 Mar 10 - An article in Nautilus
by Gary Marcus pointed out that "deep learning" was running into a
brick wall in the biggest applications of AI. Deep learning is the
"training" of a simulated (or even real hardware) neural network to
learn to perceive things by very extensive trial and error. It is
pattern recognition rather than what Marcus refers to (correctly IMHO)
as "symbol manipulation".
The article points out some recent high-profile and very critical
failures of deep learning applications, most dramatically some fatal
accidents by self-driving vehicles. He points out the limitations of
deep learning (so far, anyway, though he doesn't concede that) to
pattern recognition, and to the exclusion of logical reasoning. I
happen to agree with that assessment, and I can see no path around some
of the problems if sticking exclusively to deep learning.
Marcus' conclusion is that the only way to real and reliable AI
application is a hybrid approach: deep learning for pattern recognition
and symbol manipulation for abstract combinations of those recognized
patterns. I would agree. Although I think Marcus is biased and perhaps I share his bias, the evidence seems to support him.
Autonomous vehicles
I could be wrong on this one. Time will tell. My specific skepticism was
that by 2020 it would replace the need for owning automobiles. The
Singularity University prediction was that an autonomous car would
be available on call
almost immediately, and could deliver itself autonomously. Think Uber,
but without a driver -- resulting in a cost that is competitive with
owning your own car. I am still skeptical anything like that will be
widespread in 2020. But the autonomous vehicle technology seems to be
coming along much faster than I expected. I see articles all the time about another company committing to it. Many articles do not necessarily
report technical progress, but at least dollars committed to making it
happen. I'll cite some of those articles here -- but they are way too
numerous to mention them all.
2016
Autumn - Not only car companies but also the leading software and computer giants are tossing their hats into the ring. Apple and Google
are on board. And autonomous trucks are being
investigated and even tried.
2016
Dec 5
- I guess we should not be surprised that Uber itself intends to make
use of this. They would not want to be put out of business by
autonomous car sharing. If you can't beat 'em, join 'em. Of course, all
those people who are earning their living as Uber drivers would be
jobless.
2017 Dec 14 - A very level-headed article from Science Magazine.
It doesn't focus so much on the technology, but rather how fast it
could be available and how fast it could be accepted -- two different
questions. It looks at capabilities during six stages of making a car
driverless. The early stages are tagged as "now" and "soon" (with
dates). It rates the last stage as "somewhere over the rainbow".
Unfortunately, the last stage is where Uber and trucking companies need
to be in order to fire all their drivers. Well, maybe the next to last
stage if they don't mind shutting down in adverse weather, at night, or in specific locations -- in other words, exactly when you want a taxi. (As usual,
I looked at the author's bio; he's a staff writer with no obvious axe
to grind.)
2019 July 10
- An article in Fast Company, a generally insightful business
publication, takes a very pessimistic view of both autonomous
vehicles (AVs) and electric vehicles (EVs). It cites many industry
insiders revising outward their projections of when these technologies
will "happen" -- and wonders what could change to make them happen at
all. My skeptical comments are positively rose-colored compared to
their assessment. Not only is the technology way behind, but the
companies pushing them are putting themselves in danger of going broke
in the process. It's a very tough business to be in, and they see
nothing in the near future to remedy that.
2022 Mar 10 - See the Gary Marcus article, "AI is Hitting a Wall". One of the key examples in his argument is autonomous vehicles.
Electric power and electric cars
2017 Mar 16
- One way to deal with the 1/3 duty cycle of solar power (due to the
earth's rotation) is a worldwide grid. I cited the technical
and political difficulties in making it happen. But someone is
trying, or at least working on part of the problem. Europe is
geographically large enough to have diverse non-fossil energy resources
(solar in the south, wind in the north, and geothermal in Iceland). And
to a significant extent, the European Union is one political entity
(though strains are beginning to show). The EU is upgrading its
grid to get closer to completely renewable energy. From the
article:
An international power grid is
gradually developing, using
power interconnectors to trade surplus energy across national
electricity networks, allowing big wind power producers in northern
Europe, for example, to trade electricity with large solar energy
generators in southern Europe.
2017 Mar 29
- The most often touted solution to the 1/3 duty cycle of solar power
is batteries. In 2017, almost all the R&D is going into lithium-ion battery technology. Let's assume that all this is successful, and we
figure out how to make very efficient Li-ion batteries: weight- and volume-efficient, fast-charging, and cheap to manufacture.
That
will put a huge demand on the world's supply of lithium. Even if the
market were only electric cars and electronic gadgets, lithium prices are starting to spike against world supply. If we were to add the much larger market of reservoirs for solar-generated power, Li-ion batteries may not remain cost-effective.
2017 Jun 8
- Apparently not just lithium. Cobalt is also important in the
manufacture of high-energy-density batteries. And the demand for those
batteries is pushing up cobalt prices. From the article:
There’s no way that current
supply is going to keep up. Some of the forecasts for cobalt supply are
pretty dire.
2017 Oct 31
- Nickel, too. It's about to become difficult to get enough of it, if
the battery market keeps growing. The article is about new
opportunities for the nickel producers, but makes it pretty clear that
enough battery growth could disrupt the nickel market.
2018 July 27
- My criticism has been about the daily variations in availability of
solar (or wind) power. Never even thought about annual variation. Turns
out variation over the course of the year would be even more demanding
of batteries. Several studies show that 100% conversion to renewable
power in California would be prohibitively expensive, even if the cost
of batteries came down by a factor of three.
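A toy calculation shows why bridging a seasonal gap demands so much more battery than bridging the nightly one. All numbers here are illustrative assumptions of mine, not figures from the cited studies:

```python
# Battery capacity to bridge the daily solar gap vs. a seasonal
# shortfall. Illustrative numbers only.
avg_demand_gw = 30       # assumed average load (illustrative)
daily_gap_hours = 16     # roughly 2/3 of each day without strong sun
seasonal_gap_days = 30   # a month of winter shortfall to bridge

daily_storage_gwh = avg_demand_gw * daily_gap_hours            # 480
seasonal_storage_gwh = avg_demand_gw * 24 * seasonal_gap_days  # 21600
print(seasonal_storage_gwh / daily_storage_gwh)  # 45.0
```

Even with these generous simplifications, seasonal storage needs dozens of times the capacity of daily storage, which is why a threefold drop in battery cost doesn't rescue the economics.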
2019 July 10 - See my comment in Autonomous Vehicles, about the article in Fast Company. It also addresses electric vehicles.
2020 Jan 23
- Germany has a mandate to close all its nuclear power plants.
That is proceeding, but the cost in CO2 emissions and other pollution
is high. Wired presents an argument that both the cost and the health
and safety implications of shutting down nuclear power represent a big
problem for the US, based on this German experience.
2020 Nov 5
- A common theme from these follow-up items is what happens when you
try to scale a new technology, and run up against straining the supply
of some normally ordinary material. And here it is again -- with a
vengeance. The manufacture of solar panels requires a lot of glass.
Glass doesn't sound like a big deal, but the solar panel market has
created a shortage of glass. This article from Bloomberg Green points
out how competing demands for increasingly scarce glass (because the
world's production facilities are being outstripped by demand) may push
the price of solar up so it is no longer competitive.
Future of Work
The technical press has lots of articles about robots either replacing
or cooperating with human workers. Each tends to look at some specific
new robotic offering, and most believe what the company making the
offering tells the reporters. It may be factual but, self-serving as it
is, I find it hard to put a great deal of credence in it. I'll spare my
readers the time of reading anything not well thought through. That
said, here are articles that address the problem systematically, rather than as a selling point for some robot product.
2017 Oct 4 - This is an MIT
economist's view of the middle-term future. He makes a distinction
between "enabling" and "replacing" kinds of technology. Replacing
technology obviously puts workers out of work. Enabling technology
helps workers do their work better, makes them more productive. He says
that the history of technology shows enabling technology to far
outweigh replacing technology, so the destruction of human work is not
imminent. But bear in mind that the American Enterprise Institute,
which published this article, is a decidedly conservative think tank.
Some of this bias comes through, and you have to read it with an eye
out for political point of view. For instance, he calls for an end to
public policy that subsidizes the replacement of labor with technology.
Reading between the lines, he is proposing weakening labor laws that
make it more expensive to employ human beings. That may or may not be
the right answer, but it is certainly a politically conservative answer.
2017 Dec 12 - A good article, with more input from the legal profession than from software companies. The takeaways I got:
- Paralegals are in trouble first -- document discovery is being successfully automated.
- Temporarily,
it may help lawyers themselves, especially young lawyers learning the
trade. Working with software rather than delegating to a paralegal is
teaching them, making them better lawyers -- and more productive
lawyers -- faster.
- Of course, more productive lawyers means
fewer jobs for lawyers; many of them are thus endangered, too.
Especially true for new law graduates looking for a first job.
- Other
white-collar professions had better be looking over their shoulders. I
don't see this pattern as something that can't be duplicated elsewhere.
2018 Mar 27
- "AI won't replace doctors anytime soon." That is the conclusion reported about a Google paper on AI
diagnosis. It seems to indicate for doctors what the article above says about lawyers. The software reportedly allows
doctors to be more effective and more efficient, suggesting a new
paradigm for the effect of AI on knowledge-worker employment. The old
paradigm was replacement; AI is going to take my job. The paradigm
suggested by this article is twofold:
- If I use AI as a tool, I will be more effective. I will be able to do a better job.
- If
I use AI as a tool, I will be more efficient. But this is replacement
by slow degrees. If I am more efficient, that means I can handle more
patients, which means fewer doctors are needed than would be needed
without the AI.
2018 Apr 11 - Continuing in the vein of replacement by slow degrees,
Ajay Rajadhyaksha of Barclays reports the result of their study:
"Technology frequently ends up lowering the skill-set needed to do a
job, in turn expanding the pool of potential workers, which then acts
as a drag on wage growth."
That's pretty simple supply and
demand. It has been the result of technology for centuries. The trick
to continued job and wage growth is for new jobs, requiring more
skills, to emerge. I find it hard to see those jobs at the moment, but
that may just be my lack of foresight. More disturbing, I don't see a
work force with higher skills developing; our system seems to be
dumbing down the population.
Bitcoin and other cryptocurrencies
2019 Jun 12
- Since I wrote the forecast, much has been made of the "carbon
footprint of bitcoin". Bitcoin, and other cryptocurrencies as well,
require massive amounts of computation to create new coins. In fact this "mining",
as it is called, is essential to the entire notion of a cryptocurrency.
And computation takes energy. Most of our energy currently comes from
burning fossil fuels, so creation of new bitcoin money also
creates greenhouse gases. How big is this carbon footprint?
The
article cited gives a more accurate (and considerably lower) estimate of
bitcoin's carbon footprint than previous ones. It places it at
0.2% of the world's use of electricity. That doesn't sound like a
lot, but it means that creating bitcoin "money" uses as much
electricity as all of Kansas City. Still, bitcoin enthusiasts think
that is worth it. Is it? Not just for the few enthusiasts, but as a
full, functioning economy?
The best estimate I can get
of bitcoin's standing in the overall world of money puts it at about
0.05% of the world's money. Let's look at a few more numbers to give us
some scale.
- Bitcoin = $41 billion
- All cryptocurrencies = $100 billion
- All money = $83.6 trillion = $83,600 billion
So
what does this say about bitcoin? Or even about cryptocurrencies in general,
since we know bitcoin has a limit on the size of the economy it can
handle? Bitcoin already costs more in electricity than its overall size
in the economy: 0.05% of the world's economy, but 0.2% of the
world's electricity. If cryptocurrencies grow as a share of the
economy, so will their power consumption. So if the bitcoin enthusiasts
are right that it is the future of the world economy, we had better
have a far superior way to generate electricity, because just
maintaining the money supply will eat up a significant chunk of the
power we generate. If it approaches a quarter of the world's economy
(without a substantial improvement in power use), it will require as much power as we use for all other things combined.
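The arithmetic above can be checked with a quick back-of-envelope sketch. All figures are the rough estimates quoted in this article, not measurements, and the extrapolation assumes power consumption scales linearly with crypto's share of the money supply:

```python
# Back-of-envelope check of the bitcoin power argument.
# All inputs are the rough estimates quoted in the article above.

bitcoin_value = 41e9       # bitcoin market value, dollars
all_money = 83.6e12        # estimate of all the world's money, dollars
electricity_share = 0.002  # bitcoin's share of world electricity (0.2%)

# Bitcoin's share of the world's money supply.
money_share = bitcoin_value / all_money
print(f"bitcoin's share of world money: {money_share:.2%}")  # ~0.05%

# Bitcoin uses roughly 4x more of the world's electricity than
# its share of the world's money.
overhead = electricity_share / money_share
print(f"electricity share vs money share: {overhead:.1f}x")  # ~4.1x

# Extrapolation: if crypto grew to a quarter of the world's money,
# with no efficiency gains, its electricity demand would equal
# roughly all of today's generation.
projected = 0.25 * overhead
print(f"projected electricity share at 25% of money: {projected:.0%}")
```

The last figure comes out at about 100% of today's electricity use, which is the "as much power as we use for all other things combined" claim in the text.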
Since
I am talking about technology here, let's assume technology can improve
on the power consumption of cryptocurrencies. (That is a big
assumption, because the value of a cryptocurrency is derived from the
work necessary to mine it. Make that more efficient, and you probably
devalue the currency.) Then we must also assume that the world's
electric power needs are going to increase markedly as well, for all
the technological reasons elsewhere in this article -- like electric
transportation rather than gasoline or diesel.
That does not
sound like a ringing endorsement of cryptocurrencies to me. They are
grossly wasteful of resources and, until the electric power problem is
solved, bad for the planet as well.
Online education
2016
Dec 14
- I am still skeptical about the performance of online education. I wrote,
"I don't know anybody without skin in the game who will argue that
the smartphone is currently making our society smarter, on average."
Now someone with
skin in the game seems to agree with me. Sebastian Thrun,
who invented the "massive open online course" (MOOC) and founded
Udacity to deliver it, has apparently changed the concept drastically.
According to the article,
Udacity’s completion rates were as low
as 2 percent, and the people who
made it through were mostly the kind of well-motivated students already
served well by conventional institutions.
He has changed Udacity's modus operandi from college degrees to
vocational "nanodegrees" focused on specific technology skills (mostly
in software development). That seems to be a success so far, confirming
my assertion that it will help a small percentage of people.
Last
modified - Jan 24, 2020