
I Need a Hero, Or Do I? July 13, 2014

Posted by Peter Varhol in Strategy, Technology and Culture.

I am hardly a paragon of virtue. Still, this article nicely summarizes the various misdeeds and lapses of the multimillionaire entrepreneurial set that comprises the Silicon Valley elite, as they attempt to deal with other people and with society at large.

I think there are a couple of issues here. First, startups such as Uber and AirBNB are attempting to build businesses that fly in the face of established regulations governing the very activities they seek to change. That may in fact be a good thing in general. The regulations exist for a certain public good, but they also exist to protect the status quo from change (disclosure: I use neither service and remain wary of their value). I would venture to say that any attempt to change the regulatory process should be made before, rather than after, the launch of the business. At best, the founders of these companies are tone-deaf to the world in which they live, though long-term success may turn them into something more than they are today.

But there are also larger issues. To publicly practice sexism, ageism, intolerance, and the active encouragement of frat-boy behavior is juvenile, quite possibly illegal, and at the very least stupid and insensitive. These are not stupid people, so I must believe that there is an overarching reason for acting in this manner. I can’t help but think of Shanley Kane; despite her extreme and uncompromising stands, she directly lives much of the darker side that we too often gloss over, or in some cases even embrace.

The question is both broad and deep. Should we have any expectation that the leaders of technology be leaders of character, or even good people in general? Are we demanding that they take on a role they never asked for, and are not fit for?

On the other hand, do wealth and success convey the right to pursue ideas and policies that may fly in the face of reason? “Privacy is no longer a social norm,” says Zuckerberg, almost certainly because that position benefits him financially. I don’t recall him being appointed to define our privacy norms, but this represents more than an opinion, informed or not; it is also an expression of Facebook’s business strategy.

I think what annoys me most is his power to make that simple statement a reality, without a public debate on the matter. It’s not your call, Zuckerberg, and I can’t help but think that you believe it is. I would have thought that the likes of Eric Schmidt served as adult supervision to the baser instincts of Silicon Valley, until he meekly went along with the crowd.

To be fair, I don’t think these questions are new ones. In the 1800s, media moguls single-mindedly pursued extreme political agendas under the guise of freedom of the press. “Robber barons” single-mindedly pursued profits at the expense of respect and even lives.

Still, the Rockefellers and Carnegies of their day attempted, with perhaps limited success, to atone for their baser instincts during their lives. I grew up in a town with a Carnegie Free Library, after all. Perhaps this is what a more mature Bill Gates is doing today. We can hope that Zuckerberg matures, although I am not holding my breath. I think that boat has long since sailed for Schmidt.

But it’s difficult to say that this era is all that much different from the latter 1800s. We as a society like to think we’ve grown, but that’s probably not true. This cycle will end, though, and at its close we will be able to see more clearly just what our tech leaders are made of.

Mindsets and Software Testing May 18, 2014

Posted by Peter Varhol in Software development, Strategy.

All of us approach life with certain attitudes and expectations. These attitudes and expectations can affect how we perform and what we learn, both now and in the future.

According to researcher Carol Dweck, there are two fundamental mindsets – fixed and growth. A person with a fixed mindset feels the need to justify their abilities at every possible opportunity. If they succeed, it reaffirms their status as an intelligent and capable person. If they fail, it is a reflection upon their abilities, and there is nothing to be learned from it.

A person with a growth mindset recognizes that chance, opportunity, and skill all play a role in any activity in which they engage. Success can be due to a number of factors, of which our intelligence and abilities are only a part. More important, we can improve our abilities through failure, by not taking it personally and by delving into the lessons learned.

It’s important to understand that mindset is a continuum; few if any of us are entirely one or the other, and in some circumstances we may lean more toward one than in others. I can personally attest to this through my own experiences.

This has a couple of implications for software development and testing. First, it means that we will almost certainly make mistakes. But how we respond to those mistakes is key to our futures. We can be defensive and protective of our self-perception, or we can learn and move on.

Second, and perhaps more important, failing at a project, creating buggy code, or failing to find bugs isn’t a reflection on our intelligence or abilities. At the very least, it’s nothing that can’t be corrected. If we are willing to grow from it, we might recognize that our work is a marathon rather than a sprint.

It also has implications for technical careers in general. I’ve failed more times than I would like to count. I’ve also succeeded many times. With a fixed mindset, I’m not sure where that leaves me. Charitably, I’m almost certainly a screw-up. With a growth mindset, I’m right where I want to be. I’ll leave the answer to that question as an exercise for the reader.

Why Do Biases Exist in the First Place? April 17, 2014

Posted by Peter Varhol in Software development, Strategy.

If we are biased in our analysis of situations and our decisions in those situations, something must have precipitated that bias. As I mentioned in my last post, it is often because we use Kahneman’s System 1, or “fast” thinking, when we should really use the more deliberate System 2 thinking.

But, of course, System 2 thinking requires conscious engagement, which we are reluctant to do for situations that we think we’ve seen before. It simply requires too much effort, and we think we can comprehend and decide based on other experiences. It should come as no surprise that our cognitive processes favor the simple over the complex. And even when we consciously engage System 2 thinking, we may “overthink” a situation and still make a poor decision.

Bias often occurs when we let our preconceptions influence our decisions, which is the realm of System 1 thought. That’s not to say that System 2 thinking can’t also be biased, but the more we think about things, the better the chance that we make a rational decision. It’s easier to mischaracterize a situation if we don’t think deeply about it first.

As for System 2 thinking, we simply can’t make all, or even very many, of our decisions by engaging our intellect. There isn’t enough time, and it takes too much energy.

There is also another, more insidious reason why we exhibit biases in analyzing situations and making decisions: we believe that we make better decisions than those around us. In other words, we are biased toward believing we have fewer biases than the next person!

Are we stuck with our biases, forever consigned to making less than the best decisions in our lives? Well, we won’t eliminate bias from our lives, but by understanding how and why it happens, we can reduce biased decisions and make better decisions in general. We have to understand how we make decisions (gut choices are usually biased), and recognize situations where we have made poor decisions in the past.

It’s asking a lot of a person to acknowledge poor decisions, and to remember those circumstances in the future. But the starting point to doing so is to understand the origin of biases.

Why We Are Biased in Our Development and Test Practices April 10, 2014

Posted by Peter Varhol in Software development, Strategy.

We all have biases. In general, despite the negative connotation of the word, that’s not a bad thing. Our past experiences cause us to view the world in specific ways, and incline us toward certain decisions and actions over others. They are part of what makes us individuals, and they assist us in our daily lives by letting us make many decisions with little or no conscious thought.

However, biases may also cause us to make many of those decisions poorly, which can hurt us in our lives, our jobs, and our relationships with others. In this series, I’m specifically looking at how biases influence the software development lifecycle – development, testing, agile processes, and other technical team-oriented processes.

Psychology researchers have attempted to categorize biases, and have come up with at least several dozen varieties. Some are overlapping, and some separate biases describe similar cognitive processes. But it is clear that how we think through a situation and make a decision is often influenced by our own experiences and biases.

Much of the foundational research comes from the work of psychologists Daniel Kahneman and Amos Tversky, and is summarized in Kahneman’s Thinking, Fast and Slow. In this book, Kahneman describes a model of thinking composed of two different systems. System 1 thinking is instinctive, automatic, and fast. It is also usually wrong, or at least gullible. Many biases come from our desire to apply this kind of thinking to more complex and varied situations.

System 2 thinking is slower, more deliberate, and more accurate. If we are being creative or solving a complex problem for the first time, we consciously engage our intellect to arrive at a decision. Using System 2 thinking can often reduce the possibility of bias. But this is hard work, and we generally don’t care to do it that often.

It’s interesting to note that Kahneman won a Nobel Prize in Economics for his research. He demonstrated that individuals and markets often don’t act rationally to maximize their utility. Often our biases cause us to make decisions that aren’t in our best interest.
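To make the irrationality concrete, consider a classic gamble from that research: a coin flip that pays $150 on heads and costs $100 on tails. Most people refuse it, even though the arithmetic says to take it. Here is a minimal sketch in Python (the stakes are the standard example from the research, not figures from this post):

    # Expected value of a 50/50 gamble: win $150 on heads, lose $100 on tails.
    # Kahneman and Tversky found that most people decline bets like this one;
    # loss aversion, not expected-value arithmetic, drives the decision.
    p_heads = 0.5
    expected_value = p_heads * 150 + (1 - p_heads) * (-100)
    print(expected_value)  # 25.0 -- positive, yet typically refused

A strict utility maximizer takes that bet every time it is offered; the fact that most of us don’t is precisely the gap Kahneman measured.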

In the case of software development and testing, our biases lead us into making flawed decisions on planning, design, implementation, and evaluation of our data. These factors likely play a significant role in our inability to successfully complete many software projects.

Does Blackberry Stand a Chance? January 30, 2013

Posted by Peter Varhol in Software platforms, Strategy.

I don’t spend a lot of time and money with smartphone technologies.  It’s simply too easy to go down a black hole in expense and time with gadgets like that.  When I finally upgraded to a smartphone, in 2009, I got a Blackberry, mostly because I already had one in my job at the time.

The Blackberry of that time wasn’t all positive.  When the network went down, for hours or in a couple of cases days, I felt the loss of instant email.  Web browsing was possible, and I did it occasionally, but using the Blackberry button as a pointing device was painful.

When I needed a phone upgrade, about two years ago, my requirement was for a world phone due to my occasional travel in Europe, and my carrier offered two choices – a Blackberry or an HTC Android device.  After pondering the question, I decided to go with the Android.

Now we have the Blackberry Z10 (not available in the US until March, and probably longer than that for my second-tier carrier).  It sounds very good at a high level, and most analysts and reviews are giving it good marks.

What Blackberry really had in the past was an app problem.  I went to one of their developer conferences, circa 2009, and talked to some developers.  There were one or more groups within RIM that encouraged developers to build apps for the still-robust platform, and provided tools and interfaces to do so.  And when those apps were completed, they were systematically shot down by RIM’s legal department.  Talk about a bait and switch!

Nevertheless, Blackberry still has a lot going for it today.  It has loyal enterprise users and IT departments (my small employer still runs a Blackberry Enterprise Server).  The independent network, despite occasional well-publicized outages, is a huge advantage, far better than anything available from anyone else.  And it has QNX, the OS vendor RIM acquired to build the Blackberry 10 operating system.  QNX is a POSIX-compliant OS that is small, fast, and, well, inherently multi-tasking.  You can actually run multiple apps on your phone simultaneously, and do so easily and transparently.

The battle for third (and likely last) place in the smartphone platform wars is between Blackberry and Microsoft Windows Phone.  It’s difficult to predict these things, because it comes down to so much more than technology.  Preston Gralla says the Windows Phone will win entirely because Microsoft has the bigger ecosystem.  But Microsoft has had a phone OS years longer than Apple or Google, and look at what that ecosystem has bought it.  Nothing.

But in a partially technical but primarily emotional response, I hope Blackberry wins.

And yes, I am currently interviewing for employment at Microsoft.  Go figure.

SLAs Are A Crock December 29, 2012

Posted by Peter Varhol in Software platforms, Strategy.

We would like to think that automated systems make us predictable.  That assumption leads to a number of desirable characteristics, the foremost of which is that we can predict how long the system will stay fully functional, and how long any failures will last.

The problem is that automated systems are complex.  I realized this when Matt Heusser recently tweeted, “If Amazon can’t fulfill its SLAs, how can we?”  The answer is that we can’t.  We couldn’t do it with relatively simple pre-digital manual systems, and we certainly can’t do it with complex automated ones.  We can look at each of the components of the system – multiple servers and maybe large server farms, storage and other peripherals, network, OS, applications, database, power, environmental, and so on – and assign probabilities of failure to each.

But the problem is that we treat them as independent at best, and unrelated at worst.  We understand that individual parts of the system have a high reliability, but we fail to understand their interrelationships.  We think that the failure rate of each component is somehow analogous to the failure rate of the system as a whole.  Or we think that the individual components are large and independent of each other.  The latter is a more subtle fallacy, but it’s still a fallacy.

SLAs (Service Level Agreements, to the uninitiated) offer contractual guarantees to IT services users on uptime and availability.  They usually promise over 99 percent uptime, and the resolution of any problem within a certain limited amount of time.  But they are based on the reliability of the portion of the system within the control of the provider, rather than the full system.  Or they are based on a misunderstanding of the risk of failure.
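The arithmetic behind that misunderstanding is easy to see.  Here is a minimal sketch, in Python, using the component list above with illustrative (made-up) availability figures; it assumes failures are independent, which is the optimistic case:

    # Availability of a serial system: every component must be up for the
    # system to be up.  Independence is assumed, which is the BEST case;
    # correlated failures (shared power, shared facility) only make it worse.
    # The availability figures below are illustrative, not measured values.
    components = {
        "servers":       0.999,
        "storage":       0.999,
        "network":       0.995,
        "OS":            0.999,
        "applications":  0.999,
        "database":      0.999,
        "power":         0.9995,
        "environmental": 0.9995,
    }

    availability = 1.0
    for name, value in components.items():
        availability *= value

    print(f"System availability: {availability:.2%}")  # ~98.90%
    print(f"Expected downtime: {(1 - availability) * 8760:.0f} hours/year")  # ~96

Every component individually clears the “over 99 percent” bar, yet the system as a whole does not – and because real failures are correlated, even this product overstates what a provider can honestly promise.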

The worst thing is the Black Swan event, as Heusser notes.  The Black Swan event is the purported once-in-a-lifetime event that surprises everyone when it occurs, either because it is considered very rare, or because no one thought of it.  The nuclear meltdown of the Fukushima Nuclear Power Plant in Japan as a result of the undersea earthquake and tsunami is a well-known example of a Black Swan event.  It required a complex series of circumstances that seemed obvious only in retrospect.

We tend to think incorrectly when applying Black Swan events to complex systems.  We believe that the sequence of events that caused the unexpected failure is very rare, when in fact the system itself is complex and has multiple unknown and untested failure points.

In IT, we tend to counter rare events through redundancy.  We connect our systems to separate network segments, for example, and are then surprised when snow collapses the roof of the data center.  The physical plant is part of the system, but we simply treat it as a given, when it’s not.  I’ll discuss how we might test such systems in a later post.

So the next time someone asks you to take their SLA bet, go ahead.  You will win sooner or later.

UPDATE:  Matt Heusser has his own post on this topic here.

Apple – The Homeland Security of Tech Companies August 21, 2012

Posted by Peter Varhol in Software platforms, Strategy.

It’s been a month or so since I’ve darkened these pages, mainly because of an extreme paid workload, but in part because I didn’t have anything truly compelling to say.

That may have changed.

First, I was in the market for a new MP3 player recently (the old one got doused with water at a hotel fitness center).  Everyone has iPods, at around $160 and up.  At the other end, there is a proliferation of $20-$50 devices that play music (and play videos and even take photos), but in almost all cases with a more difficult user interface.

I understand that the simple usability of iPods, the iTunes store, and other Apple products is worth something.  But is it worth $130+ more than a generic product?  Well, I voted with my wallet.  Other people may vote other ways.

Second, I read an op-ed by Wayne Rash in eWeek on Apple’s lawsuit.  He compares Apple’s willingness to go to court with that of Ashton-Tate in the 1980s.  Ashton-Tate, one of the largest and most powerful tech companies at the time, ultimately discovered that its so-called innovations predated its patents, and failed in rather spectacular fashion.

But worse, Ashton-Tate came to rely on litigation, rather than innovation, for its future.

I don’t understand the invective that passes for rational discourse whenever the conversation turns to Apple.  There seem to be “Apple apologists” and “Apple haters”, rather than any mutual respect for opinions, an eventual compromise, or even an agreement to disagree.

I will confess to one bias.  I believe that intellectual property lawsuits are ill-suited to an industry where rapid innovation depends on the cross-fertilization of ideas.  That may color my opinion here, but should not reflect upon Wayne’s more specific points.

Wayne has been doing this as long as, or longer than, I have.  To color him with either label would be grossly unfair and inaccurate.  His comparison may or may not hold water under closer scrutiny, but it deserves consideration on its merits.

Wayne points out the dangers in the course Apple is taking.  I’ll take it a step further, and say that Apple has become a lightning rod for both praise and criticism in this and other strategies.

So what makes Apple the Homeland Security of tech companies?  In our lives, we as a society have largely accepted greater restrictions in personal liberty in the name of safety and security.  CCTVs blanket public areas and commercial establishments, we endure inconveniences in air travel so as not to repeat the events of more than a decade ago, and find that we can never again be as anonymous as we were twenty or more years ago.  We hope that these restrictions make us safer.

Likewise, we have largely accepted the use restrictions put upon us by Apple, in exchange for a more pleasant user experience.

Is this a bad thing?  Many people have decided it is not.

I recall, circa 1991, doing a review of PC video capture cards, an emerging technology at the time.  The PC cards were enormously difficult to set up and use, in large part because they had to take into account a range of different PC I/O standards, displays, and interrupts.  In contrast, the Mac cards I used were ridiculously easy, because there was one approved approach for interface, hardware setup, and display.

Sounds good, and through this approach Apple made technology accessible and even fun for far more people.  But in accepting that approach, I ceded control over my own environment and use.

I do believe that to some extent the tradeoff between usability and flexibility is an either-or proposition.  But I think that Apple can become less of a Homeland Security while still retaining its compelling advantages over alternatives.

Moneyball and the Science of Building Great Teams April 10, 2012

Posted by Peter Varhol in Education, Strategy, Technology and Culture.

I got the germ of the idea when I was speaking at the Better Software Conference last fall, and used Moneyball as an impromptu example of how blind belief influences our decisions about software readiness.

Moneyball, of course, refers to the Michael Lewis book as well as the movie of the same name.  Lewis used the Oakland Athletics major league baseball team as the basis of a story about how experts evaluated talent poorly, and about how Oakland’s Billy Beane took advantage of this fact to build a winning team on a shoestring.

So I developed a presentation proposal around this topic.  I thought the same concepts could be applied to software projects.  The book was more about numbers, while the movie focused on personalities.

Following Lewis’ trail, I ended up combining the two.  I used Daniel Kahneman’s new book, Thinking, Fast and Slow, to describe sources and causes of human error, and Moneyball to look at personalities and how they interact on teams.

I’ll be presenting this at CAST 2012 (check out the splash on the right side of this page), STARWEST, and TestKit, among others.  I feel very good about it, and think it will be a great talk.  I hope to see you at one of these conferences later this year.
