
Pay for Performance, Mathematics Edition November 21, 2017

Posted by Peter Varhol in Education, Technology and Culture.

I’ve always been suspicious of standardized tests that conclude that US students are average or worse in mathematics than their international peers.  My primary issue is that a much larger proportion of US students likely take these comparison tests than do students in other countries, and while the US mean tended to be average, the standard deviation was larger than average, meaning that many students did much worse, but many also did much better.  The popular press tends to find fault with anything that reeks of US influence, and neglects to mention such a basic statistical measure that would allow for a better comparison.
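
A toy illustration of the arithmetic (the scores below are invented for the sake of the example, not taken from any real test):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical score distributions with the same mean but
# different spreads. All numbers are invented for illustration.
us_like = rng.normal(loc=500, scale=120, size=100_000)  # larger std dev
other = rng.normal(loc=500, scale=80, size=100_000)     # smaller std dev

print(f"US-like: mean={us_like.mean():.0f}, sd={us_like.std():.0f}")
print(f"Other:   mean={other.mean():.0f}, sd={other.std():.0f}")

# Same mean, but the wider distribution puts more students in BOTH tails.
print("Share above 650:", (us_like > 650).mean(), "vs", (other > 650).mean())
print("Share below 350:", (us_like < 350).mean(), "vs", (other < 350).mean())
```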

There is, however, a study that offers a different but related conclusion.  It claims that US students are competitively capable, but only when sufficiently motivated.  How do you motivate them?  Well, by paying them, of course.  When students are financially rewarded, their math results improve significantly.

This means that US students aren’t (necessarily) stupid or undereducated, just unmotivated.  It’s an intriguing proposition, and one that I think deserves more study.


Are Engineering and Ethics Orthogonal Concepts? November 18, 2017

Posted by Peter Varhol in Algorithms, Technology and Culture.

Let me explain through example.  Facebook has a “fake news” problem.  Users sign up for a free account, then post, well, just about anything.  If a post violates Facebook’s rules, the platform generally relies on users to report it, although Facebook also has teams of editors and is increasingly using machine learning techniques to try (emphasis on try) to be proactive about flagging content.

(Developing machine learning algorithms is a capital expense, after all, while employing people is an operational one.  But I digress.)

But something can be clearly false while not violating Facebook’s guidelines.  Facebook is in the very early stages of attempting to authenticate the veracity of news (it will take many years, if it can be done at all), but it almost certainly won’t remove such content.  It will be flagged as possibly false, but remain available for those who want to consume it.

It used to be that we as a society confined our fake news to outlets such as The Globe or the National Enquirer, tabloid papers typically sold at supermarket check-out lines.  Content was mostly about entertainment personalities, and consumption was limited to those who bothered to purchase it.

Now, however, anyone can be a publisher*.  And can publish anything.  Even at reputable news sources, copy editors and fact checkers have gone the way of the dodo bird.

It gets worse.  Now entire companies exist to write and publish fake news and outrageous views online.  Thanks to Google’s ad placement strategy, the more successful ones may actually get paid by Google to do so.

By orthogonal, I don’t mean contradictory.  At the fundamental level, orthogonal means “at right angles to.”  Variables that are orthogonal are uncorrelated: changes in one don’t affect the other at all.
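
A minimal sketch of that idea in code: two independently generated variables have a sample correlation near zero, while a variable constructed from the first does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two variables generated independently of one another.
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)

# Their sample correlation is approximately zero: knowing x tells
# you nothing about y. That is orthogonality in the statistical sense.
print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:+.4f}")

# Contrast with a variable that does depend on x.
z = 0.8 * x + 0.2 * rng.normal(size=100_000)
print(f"corr(x, z) = {np.corrcoef(x, z)[0, 1]:+.4f}")
```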

So let’s translate that to my point here.  Facebook, Google, and the others don’t see this as a societal problem, which would be difficult and messy.  Rather, they see it entirely as an engineering problem, solvable with the appropriate application of high technology.

At best, it’s both.  At worst, it is entirely a societal problem, to be solved with an appropriate (and messy) application of understanding, negotiation, and compromise.  That’s not Silicon Valley’s strong suit.

So they try to address it with their strength, rather than acknowledging that their societal skills as they exist today are inadequate to the immense task.  I would be happy to wait, if Silicon Valley showed any inclination to acknowledge this and try to develop those skills, but all I hear is crickets chirping.

These are very smart people, certainly smarter than me.  One can hope that age and wisdom will help them recognize and overcome their blind spots.  One can hope, can’t one?

*(Disclaimer:  I mostly publish my opinions on this blog.  When I use a fact, I try to verify it.  However, as I don’t make any money from this blog, I may occasionally cite something I believe to be a fact but that is actually wrong.  I apologize.)

O Canada, What Have You Become? November 2, 2017

Posted by Peter Varhol in Uncategorized.

I have just spent several hours attempting to enter Canada, specifically Toronto, and to use my phone while in the country.  I can only wonder when Canada became a third-world country, because my experience today was by far the worst of the two dozen or so countries I have visited in the last 10 years.

First and foremost was clearing Immigration, a process that took well over two hours.  I am not exaggerating when I say there were about two thousand people in the various lines at Immigration.  I counted three (count ‘em) people attempting vainly to direct that traffic into the appropriate lines, answer questions about which line travelers belonged in, and explain what they were supposed to do.  I had no idea whether I was in the right line until I finally emerged at the other end, unscathed.

Then there was actually a line to enter the baggage claim area.  I did not have baggage to claim, but had to, um, participate in that line, because there was no way to bypass it.

Then there was the 200-yard-long taxi line.  I’ve experienced those before, in Las Vegas, except that here in Toronto they let the gypsy cabbies troll the line and aggressively (and I mean aggressively) recruit the gullible to join them on a ride to wherever.  I have not seen gypsy cabbies at an airport since the Dominican Republic, back in 1986.

My phone worked at the airport, but not downtown.  I couldn’t even call my provider to find out why.

So I repeat: when did Canada become a third-world country?  Because my first four hours in this country were akin to entering one.  I wanted to turn around and go back.  Fortunately, my stay is on the order of 48 hours, and I really have no desire to come back.

Facebook, Fake News and Accounts, and Where Do We Go From Here? October 31, 2017

Posted by Peter Varhol in Technology and Culture.

Those of you who read me know that I am no fan of Facebook, for a wide variety of reasons.  I am not a member, and never will be, even though that may hurt me professionally.  In short, I believe that Mark Zuckerberg is a megalomaniac who fancies Facebook as a modern religion, and himself as its god, or at least its living prophet.

And regrettably, he may be right.  Because Facebook is far more than the “personal-ad-in-your-face” I took it for when I presented past objections.  Over the past 10 months, it has become pretty clear that Facebook is allowing itself to be used to influence elections and sow strife, sometimes violently.

The fact of the matter is that Zuckerberg and Facebook worship at the altar of the dollar, and everything else be damned.

Worse, from a technology standpoint, Facebook treats its probably-fatal flaws as mere software bugs, an inconvenience it may fix if they rise far enough up the priority queue.

Still worse, the public-facing response is “We can’t be expected to police everything that happens on our site, can we?”

Well, yes, you can.  The answer is not “We can fix this,” or “We don’t think this is a problem.”  The answer is “You are at fault.”

In an earlier era of media (like, 10 years ago), publishers used to examine and vet every single advertisement.  Today it’s too hard?  That’s what Zuckerberg says.  That is the ultimate cop-out.  And that sick attitude is a side effect of worshiping at the altar of the dollar.

On Facebook, we are hearing louder echoes of our own voices, not different opinions.  And Facebook will not change that, because doing so would hurt its revenue.  And that is wrong in the most fundamental way.

So where do we go from here?  I would like to argue for people to stop using Facebook completely, but I know that’s not going to happen.  Maybe we should just use Facebook to keep in touch with friends, as originally intended.  We really don’t have ten thousand friends; I have about 900 connections on LinkedIn, and probably don’t even remember half of them.  And I don’t read news from them.

Can we possibly cool the addiction that millions of people seem to have to Facebook?  I don’t know, but for the sake of our future I think we need to try.

Interlude For, Of All Things, Corn on the Cob October 19, 2017

Posted by Peter Varhol in Uncategorized.

I grew up, well, not quite on a farm, but definitely not in suburbia.  We had large gardens (and cows, chickens, and even a pig), of which I partook little, to my adult regret.  But I devoured corn on the cob, and still do to this day, now in New England.

I have tried broiled and grilled, and my preference is grilled, although you need a grill of course.

But as we move into a world of genetically modified crops, I am okay with that.  Really.  I dislike the non-GMO labels on my food; I think they pander to those who don’t know that our crops have always been modified.  But I have a request.

As I shuck the ears, I cannot get rid of the “hair,” the corn silk.  If there is anything you can do to breed that out of corn on the cob, I would appreciate it.

Bias and Truth and AI, Oh My October 4, 2017

Posted by Peter Varhol in Machine Learning, Software development, Technology and Culture.

I was just accepted to speak at the Toronto Machine Learning Summit next month, a circumstance that I never thought would happen.  I am not an academic researcher, after all, and while I have jumped back into machine learning after a hiatus of two decades, many people are fundamentally better at it than I am.

The topic is Cognitive Bias in AI:  What Can Go Wrong?  It’s rather a follow-on from the presentations I’ve done on bias in software development and testing, but it doesn’t really fit into my usual conferences, so I attempted to cast my net into new waters.  For some reason, the Toronto folks said yes.

But it mostly means that I have to actually write the presentation.  And here is the rub:  we tend to believe that intelligent systems are always correct, and that in those rare circumstances where they are not, it is simply the result of a software bug.

No.  A bug is a one-off error that can be corrected in code.  A bias is a systematic skew toward a particular kind of conclusion, and it cannot be fixed with a code change; at the very least, the training data and the machine learning architecture have to be rethought.
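
A minimal sketch of the distinction, using scikit-learn and deliberately skewed, invented data: the code below has no bug, yet the model systematically scores one group lower, because that is what the lopsided training sample taught it.  No code change fixes this; rebalancing the training data does.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Invented sample: group A is heavily overrepresented, and the
# positive examples we happened to collect for group B are scarce.
n_a, n_b = 9500, 500
X = np.vstack([
    np.column_stack([np.zeros(n_a), rng.normal(1.0, 1.0, n_a)]),
    np.column_stack([np.ones(n_b), rng.normal(1.0, 1.0, n_b)]),
])
y = np.concatenate([
    (rng.random(n_a) < 0.7).astype(int),  # group A: 70% positive labels
    (rng.random(n_b) < 0.2).astype(int),  # group B: 20% positive labels
])

model = LogisticRegression().fit(X, y)

# Identical signal (second feature), different group membership:
# the model reproduces the skew in the data, not an error in the code.
probe = np.array([[0.0, 1.0], [1.0, 1.0]])
print(model.predict_proba(probe)[:, 1])  # group A scores much higher
```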

And we have examples such as these:

If you’re not a white male, artificial intelligence’s use in healthcare could be dangerous.

When artificial intelligence judges a beauty contest, white people win.

But the fundamental question, as we pursue solutions across a wide range of applications, is:  Do we want human decisions, or do we want correct ones?  That’s not to say that all human decisions are incorrect, but only to point out that much of what we decide is colored by our bias.

I’m curious what AI applications will decide about this one.  Do we want to eliminate the bias, or do we want to reflect the values of the data we choose to use?  I hope the former, but the latter may win out, for a variety of reasons.

In the Clutch September 28, 2017

Posted by Peter Varhol in Algorithms, Machine Learning, Software development, Technology and Culture.

I wrote a little while back about how some people are able to recognize the importance of the right decision or action in a given situation, and respond in a positive fashion.  We often call that delivering in the clutch.  This stands in contrast to machine intelligence, which at least right now is not equipped to recognize that a particular event in a sequence is important, let alone respond accordingly.

The question is whether these systems will ever be able to tell that a particular event has outsized importance, and whether they can use this information to, um, try harder.

I have no doubt that we will be able to come up with metrics that can inform a machine learning system of a particularly critical event or events.  Taking the Moneyball example of an at-bat, we can incorporate the inning, the score, the number of hits, and so on.  In other problem domains, such as application monitoring, we may not yet be collecting the data that we need, but given a little thought and creativity, I’m sure we can do so.
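
As a sketch of what such a metric might look like for the baseball case (the features and weights below are my own invention, not an established leverage formula):

```python
def at_bat_criticality(inning: int, score_diff: int, outs: int,
                       runners_on: int) -> float:
    """Toy 'leverage' score for an at-bat, roughly in the range 0..1.

    Later innings, closer scores, more runners, and more outs all push
    the score up. Weights are invented purely for illustration.
    """
    late_game = min(inning, 9) / 9.0                  # later innings count more
    closeness = 1.0 / (1.0 + abs(score_diff))         # blowouts count less
    pressure = (runners_on / 3.0 + outs / 2.0) / 2.0  # base/out state
    return late_game * closeness * (0.5 + 0.5 * pressure)

# Bottom of the ninth, tie game, two outs, bases loaded: maximal leverage.
print(at_bat_criticality(inning=9, score_diff=0, outs=2, runners_on=3))  # 1.0
# Third inning, up by six, nobody on: low leverage.
print(at_bat_criticality(inning=3, score_diff=6, outs=0, runners_on=0))  # ~0.02
```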

But I have difficulty imagining that machine learning systems will be able to rise to the occasion.  There is simply no mechanism in computer programming for that to happen.  You don’t save your best algorithms for important events; you use them all the time.  For a long-running computation, it may help to add servers to the farm so you can finish more quickly or process more data, but most learning systems aren’t equipped to do that on their own.

But code is not intelligence.  Algorithms cannot feel a sense of urgency to perform at a higher level; they are already performing at the highest level of which they are capable.

To be fair, at some indeterminate point in the future, it may be possible for algorithms to detect the need for new code pathways, and call subroutines to make those pathways a reality (or ask for humans to program them).  They may recognize that a particular result is suboptimal, and “ask” for additional data to make it better.  But why would that happen only for critical events?  We would create our systems to do that for any event.

Today, we don’t live in the world of Asimov’s positronic brains and the Three Laws of Robotics.  It will be a while before science is at that point, if ever.

Is this where human achievement can perform better than an algorithm?  Possibly, if we have the requisite human expertise.  There are a number of well-known examples where humans have had to take over when machines failed, some successfully, some unsuccessfully.  But the human has to be there, and has to be equipped professionally and mentally to do so.  That is why I am a strong believer in the human in the loop.

SpamCast on Machine Learning September 20, 2017

Posted by Peter Varhol in Software platforms.

Not really spam, of course, but Software Process and Measurement, the weekly podcast from Tom Cagley, whom I met at the QUEST conference this past spring.  This turned out surprisingly well, and Tom posted it this past weekend.  If you have a few minutes, listen in.  It’s a good introduction to machine learning and the issues of testing machine learning systems, as well as the skills needed to understand and work with these systems.  http://spamcast.libsyn.com/spamcast-460-peter-varhol-machine-learning-ai-testing-careers