Decisions, Decisions – There’s an Algorithm for That
March 20, 2017. Posted by Peter Varhol in Software development, Strategy, Technology and Culture.
Tags: Kahneman, statistics, technology
I remember shoveling sh** against the tide. Yes, I taught statistics and decision analysis to university business majors for about 15 years. It wasn’t so much that they didn’t care as that they didn’t want to know.
I had more than one student tell me that it was the job of a manager to make decisions, and numbers didn’t make any difference. Others said, “I make decisions the way they are supposed to be made, by my experience and intuition. That’s what I’m paid for.”
Well, maybe not too much longer. After a couple of decades of robots performing “pick-and-place” and other manufacturing processes, now machine learning is in the early stages of transforming management. It will help select job candidates, determine which employees are performing at a high level, and allocate resources between projects, among many other things.
So what’s a manager to do? Well, first, embrace the technology. Simply, you are not going to win if you fight it. It is inevitable.
Second, make a real effort to understand it. While computers and calculators were available, I always made my students “do it by hand” the first time around, so they could follow what the calculations were telling them. You need to know what you are turning your decisions over to.
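To make “doing it by hand” concrete, here is a minimal sketch of the kind of decision-analysis arithmetic I had students work through: an expected-value comparison of two options. The payoffs and probabilities are invented purely for illustration.

```python
# A toy decision-analysis calculation: expected monetary value (EMV)
# of two options under uncertainty. All numbers are hypothetical.

def expected_value(outcomes):
    """Sum of payoff * probability over all possible outcomes."""
    return sum(payoff * prob for payoff, prob in outcomes)

# Option A: launch the project (70% chance of a $100k gain, 30% chance of a $40k loss)
option_a = [(100_000, 0.7), (-40_000, 0.3)]
# Option B: do nothing (a certain $0)
option_b = [(0, 1.0)]

emv_a = expected_value(option_a)  # 0.7 * 100,000 - 0.3 * 40,000 = 58,000
emv_b = expected_value(option_b)  # 0

best = "A" if emv_a > emv_b else "B"
print(f"EMV(A) = {emv_a:,.0f}, EMV(B) = {emv_b:,.0f}; choose {best}")
```

Working through the multiplication and summation once by hand, before letting software do it, is exactly what shows you what the numbers are telling you.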
Third, integrate it into your work processes by using machine learning to complement your own abilities. Don’t ignore it, but don’t treat it as gospel either.
There are many philosophical questions at work here. Which is better, your experience or the numbers? Kahneman says they are about the same, which does not bode well for human decision-making. And the analysis of the numbers will only get better; can we say the same thing about human decision-making?
Of course, this has implications for the future of management. I’ll explore my thoughts there in a future post.
Health Care is Institutionally Resistant to Technology
March 9, 2017. Posted by Peter Varhol in Technology and Culture.
Tags: health, technology
That is an overarching and controversial statement, and is probably not true under all circumstances. I will only touch on a few points, based on this article in WSJ (paywall) and my own recent experiences.
The WSJ article notes a pretty complete failure of the University of Texas MD Anderson Cancer Center to leverage IBM Watson AI technology to help diagnose and treat cancer.
Of course, my own recent experiences include a referral to what is purportedly one of the leading cardio institutes in the country, which asked me to fill out forms using a Number 2 pencil. Like I did when I was in elementary school. When I went to the website, there were obvious misspellings and bad grammar, including in their bragging about being a leading institution.
My doctor objected to my objection. “They don’t do their own website!” My response: “And they can’t even be troubled to read it, either. If you can’t get the easy things right, it leaves a lot of doubt that you can get the hard things right.”
I see a couple of forces at work here. First, health care remains incredibly complex. Every patient is different, and has to be treated as an individual. (To be fair, that is not how many human practitioners treat their patients, but that is a tale for another day.) This approach may not be amenable to current machine learning endeavors.
That being said, however, it is clear that health care practitioners and institutions are rooted in routine and learned practice, and passively or actively resist new approaches. In a sense, it is sad that otherwise highly intelligent and educated people are so deeply rooted in their routines that they cannot adapt to changes for the better.
But the institutions and bureaucracies themselves force this attitude on many. It’s simply less friction to do things the way you always have, as opposed to trying something new. And that, more than anything, is where health care needs to change.
A Brave New World
December 21, 2016. Posted by Peter Varhol in Technology and Culture.
Tags: technology, tragedy of the commons
As more and more sober people call attention to the dichotomy between the winners and losers of the information/technology economy, it’s still not at all clear that this issue concerns, or is even recognized by, those best positioned to do something about it. Many of us are operating under the impression that advances are universally good, and that attempts to slow or stop such advances are universally bad.
I am not at all sure that the future of society will take care of itself. I am reminded of the old Sidney Harris cartoon, below:
We advance technology because it is fun, it is intellectually invigorating, and it will make us money. We acknowledge that in many cases we are disrupting the established order, and that in significant cases we may be the proximate cause of eliminating jobs and even entire industries. We justify that by saying that other jobs will arise to replace them.
Probably true; almost certainly true. But that process could take years, even decades. In the meantime, many lives will be disrupted as jobs and lifestyles disappear without a clear way forward.
We justify that by saying that every adult needs to be a lifetime learner, and become accustomed to multiple career shifts over the course of a lifetime. Again, true. But some are more capable at this than others, for a wide variety of reasons.
Well, those who are left behind deserve to be, right? Here is where the logic starts to break down. In a strict economic sense, that may be correct. But economics only models society at large, and only loosely (yes, I know the difference between macro and micro). Forces other than economics are at work, and economists don’t seem to want to model those forces at all. And the end result seems to be coming as a surprise to many.
In 1968, ecologist Garrett Hardin described “The Tragedy of the Commons.” He noted that when there is a shared interest in a limited resource, it is in each person’s self-interest to use as much of that resource as they can, thus destroying the resource for all.
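Hardin’s dynamic can be sketched as a toy simulation: a shared pasture that regrows in proportion to what remains, grazed by herders each season. This is my own illustrative model, not Hardin’s, and every number in it is invented.

```python
# Toy tragedy-of-the-commons simulation (illustrative numbers only).
# A shared pasture regrows in proportion to its remaining stock each
# season; once it hits zero, it can never recover.

def simulate(total_take, seasons, stock=100.0, growth_rate=0.2, cap=100.0):
    """Return the pasture stock after each season of grazing."""
    history = []
    for _ in range(seasons):
        stock = max(0.0, stock - total_take)            # herders graze
        stock = min(cap, stock + growth_rate * stock)   # pasture regrows
        history.append(stock)
    return history

greedy = simulate(total_take=25.0, seasons=20)      # take exceeds regrowth
restrained = simulate(total_take=8.0, seasons=20)   # take stays within regrowth

print("greedy final stock:", greedy[-1])            # pasture destroyed
print("restrained final stock:", restrained[-1])    # pasture sustained
```

When each herder grazes to the limit of self-interest, the resource collapses within a few seasons; modest restraint by all sustains it indefinitely. That is the tragedy in miniature.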
What we have in society today is possibly approaching a tragedy of the commons. That’s not to say that economic value is a fixed resource, but however we grow it, it is finite. If all use the value to the best of their abilities, some will achieve great wealth. Others will lose out.
I strongly believe in advances in technology, and in capitalism. In general they benefit society, and make it wealthier and more secure in the aggregate. I also strongly believe in democratic processes. Others are welcome to disagree with my beliefs in technology and capitalism.
Beyond the intellectual stimulation and possibilities for profit, we in technology largely believe that we are building a better society. There are those who disagree with us, with some justification. The disagreements between these two forces may be coming to a head. If there is confrontation, I have no doubt that many of us will be among the first up against the wall when the revolution comes.
To be clear, I’m copasetic with being one of the haves (relatively speaking) in a have/have-not society (I also realize that society can turn on me in a heartbeat). I’m not nearly as copasetic with helping to create (in a very minor sense, but still) the have-nots. We can do better, and it is in our personal, economic, and societal interest to do better.
I wonder what Ayn Rand would say.
The Myths Behind Technology and Productivity
February 26, 2016. Posted by Peter Varhol in Strategy, Technology and Culture, Uncategorized.
Tags: productivity, technology
There was a period of about 15 years, from 1980 to 1995, when productivity grew at about half of the growth rate of the US economy. To many of us, this was the Golden Era of computing technology. It was the time when computing emerged from the back office and became a driving force in everyone’s lives.
When I entered the workforce, circa 1980, we typed correspondence (yes, on an IBM Selectric) and sent it through the postal system. For immediate correspondence, we sat for hours in front of the fax machine, dialing away. Business necessarily moved at a slower pace.
So as we moved to immediate edits, email, and spreadsheets, why didn’t our measures of productivity correspondingly increase? Well, we really don’t know, but I will offer two hypotheses. First, our national measures of productivity are lousy. Our government measures productivity as hours in, product out. We don’t produce defined products as much today as we did then (more of our effort goes into services, which national productivity statistics capture even more poorly), and we certainly don’t measure the quality of the product. Computing technology has likely contributed to improving both of these.
Second, it is possible that improvements in productivity tend to lag leaps of technology. That is also a reasonable explanation. It takes time for people to adapt to new technology, and it takes time for business processes to change or streamline in response.
Today, this article in Harvard Business Review discounts both of these hypotheses, focusing instead on the fact that we are communicating with more people, to little purpose: what it calls the dark side of Metcalfe’s Law. Metcalfe’s Law (named after Ethernet inventor and all-around smart guy Bob Metcalfe) states that the value of a telecommunications network is proportional to the square of the number of connected users of the system.
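The quadratic growth behind Metcalfe’s Law is easy to see in code: the number of possible pairwise connections among n users is n(n−1)/2, which grows roughly as n². The proportionality constant here is arbitrary; only the shape of the growth matters.

```python
# Metcalfe's Law: network value is proportional to the square of the
# number of users, i.e. to the number of possible pairwise connections.
# The constant k is arbitrary; only the growth shape matters.

def metcalfe_value(n_users, k=1.0):
    """Value proportional to the count of unique user-to-user links."""
    return k * n_users * (n_users - 1) / 2

for n in (10, 100, 1000):
    print(n, metcalfe_value(n))
```

Ten times the users yields roughly a hundred times the connections, which is the point of the “dark side”: every one of those connections is also a potential interruption.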
The dark side is that we talk to more people, to little productive end. I will acknowledge that technology has contributed to a certain amount of waste. But it has also added an unmeasurable amount of quality to the finished product or service. It has enabled talented people to work where they live, and not have to live where they work. It has let us do things faster and more comprehensively than we were ever able to do in the past.
To say that this is not productive is simply stupid, and does not take into account anything in recent history.
Warning: I am not an economist, by any stretch of the imagination. I am merely a reasonably intelligent technical person with innate curiosity about how things work. However, from reading things like this, it’s not clear that many economists are reasonably intelligent people to begin with.