Alexa, Delete My Data
December 25, 2016 · Posted by Peter Varhol in Software platforms, Technology and Culture.
Tags: Alexa, data, privacy
As we are inundated this holiday season by Amazon ads for its Echo Dot voice system and Alexa artificially intelligent assistant, I confess I remain conflicted about the potential and reality of AI technology in our lives.
To be sure, the Alexa commercials are wonderful. For those of us who grew up under the influence of George Jetson (were they really only on TV for one season?), Alexa represents the realization of something that we could only dream about for the last 50+ years. Few of us can afford a human assistant, but the intelligent virtual assistant is a reality. The future is now!
It’s only when you think it through that it becomes more problematic. A necessary corollary to an intelligent virtual assistant is that the assistant has enough data about you to recognize what are at times ambiguous instructions. And because it holds that data, and current information about us, we can imagine issues with instructions like these:
“Alexa, I’m just going out for a few minutes; don’t bother setting the burglar alarm.”
“Alexa, turn the temperature down to 55 until January 15; I won’t be home.”
I’m sure that Google already has a lot of information on me. I rarely log into my Google account, but it identifies me anyway, so it knows what I search for. And Google knows my travel photos, through Picasa. Amazon also identifies me without logging in, but I don’t buy a lot through Amazon, so its data is less complete. Your own mileage with these and other data aggregators may vary.
To be fair, the US government has long been in possession of an incredible amount of information on most adults. I have held jobs and am a taxpayer; I have a driver’s license (and a pilot’s license, for that matter); I am a military veteran; and I’ve held government security clearances.
I’d always believed that my best privacy protection was the fact that government databases didn’t talk to one another. The IRS didn’t know, and didn’t care, whether or not my military discharge was honorable (it was). Yeah. That may have been true at one time, but it is changing. Data exchange between government agencies won’t be seamless in my lifetime, but it is heading, slowly but inexorably, in that direction.
And the commercial firms are far more efficient. Google and Facebook today know more about us than anyone might imagine. Third party data brokers can make our data show up in the strangest places.
And lest you mistake me, I’m not saying that this is necessarily a bad thing. There are tradeoffs in every action we take. Rather, it’s something that we let happen without thinking about it. We can come up with all sorts of rationalizations on why we love the convenience and efficiency, but rarely ponder the other side of the coin.
I personally try to think about the implications every time I release data to a computer, and sometimes decline to do so (take that, Facebook). And in some cases, such as my writings and conference talks, I’ve made career decisions that I am well aware make more data available on me. I haven’t yet decided on Alexa, but I am certainly not going to be an early adopter.
Does Anybody Really Know What Time it Is?
October 27, 2016 · Posted by Peter Varhol in Software platforms, Technology and Culture.
Tags: fitness tracker
That iconic line from the song of the same name by the rock group Chicago popped into my head as I pondered my next step with fitness trackers. It seems that Microsoft is indeed discontinuing the Band (bastards), and I have to find another way forward.
(And if you’ve not listened to Chicago, originally known as Chicago Transit Authority, do yourself a favor. There is a reason they have sold so many albums over the decades.)
I have a pretty good idea of what I need in a new device:
- Steps, floors, GPS, running, heart rate. Sleep would be nice, as long as it was automated.
- Integration with Android, and notifications from Android. Phone and texts, with options for others.
The last item turns out to be fascinating. At Mobile Dev and Test this past spring, Caeden COO Skip Orvis presented information on a fascinating device called Sona that looked at heart rate variability, a measure of both health and stress. I would have bought the device when it came out, but I had a question: “Does it tell time?”
I meant it as a joke, but it was no joke. It turns out that it doesn’t have a display, and doesn’t tell time. Virtually everyone in the audience objected, because no one wants to wear multiple devices on their wrist.
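Heart rate variability is usually computed from the beat-to-beat (RR) intervals, and RMSSD is one common time-domain summary of it. I don’t know what the Sona actually computes internally; this is just a minimal sketch of the idea, with made-up interval data:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals.

    Higher RMSSD generally reflects greater parasympathetic (rest-and-digest)
    activity; lower values are associated with stress and fatigue.
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical beat-to-beat intervals in milliseconds (roughly 70 bpm)
rr = [850, 870, 820, 880, 860, 840, 875]
print(f"RMSSD: {rmssd(rr):.1f} ms")
```

A real device would first detect heartbeats optically and clean up artifacts before anything like this calculation.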
This prompted me to look into the history of the wristwatch. For several years it seemed that the wristwatch was dying, but thanks to the Apple Watch and activity trackers in general, it seems to be making a comeback.
Today, I wear my Band at all times when it is not recharging (which, frankly, is often; Microsoft acknowledges that the battery won’t even last long enough to run a half marathon with the GPS engaged). It is sleek and unobtrusive, and passes through airport security just like a wristwatch. It syncs readily with my Android phone.
Today, there are many comparable devices (most of which are more expensive than the Band), but I need to choose one with the features I need. Any ideas?
AI: Neural Nets Win, Functional Programming Loses
October 4, 2016 · Posted by Peter Varhol in Software development, Software platforms, Uncategorized.
Tags: AI, functional programming, Lisp, neural networks
Today, we might be considered to be in the heady early days of AI commercialization. We have pretty decent speech recognition, and pattern recognition in general. We have engines that analyze big data and produce conclusions in real time. We have recommendation engines; while not perfect, they seem to be profitable for ecommerce companies. And we continue to hear the steady drumbeat of self-driving cars, if not today, then tomorrow.
I did graduate work in AI, in the late 1980s and early 1990s. In most universities at the time, this meant that you spent a lot of time writing Lisp code, that amazing language where everything is a function, and you could manipulate functions in strange and wonderful ways. You might also play around a bit with Prolog, a streamlined logic language that made logic statements easy, and everything else hard.
Later, toward the end of my aborted pursuit of a doctorate, I discovered neural networks. These were not taught in most universities at the time. If I were to hazard a guess as to why, I would say that they were both poorly understood and not considered worthy of serious research. I used a commercial neural network package to build an algorithm for an electronic wind sensor, and it was actually not nearly as difficult as writing a program from scratch in Lisp.
I am long out of academia, so I can’t say what is happening there today. But in industry, it is clear that neural networks have become the AI approach of choice. There are tradeoffs of course. You will never understand the underlying logic of a neural network; ultimately, all you really know is that it works.
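That opacity is easy to demonstrate. Here is a toy network written from scratch in Python (not any particular commercial package): it learns XOR through repeated training passes, yet the weights it ends up with are just arrays of numbers, with no readable logic inside.

```python
import numpy as np

def train_xor(epochs=5000, lr=0.5, seed=0):
    """Train a tiny one-hidden-layer sigmoid network on XOR from scratch.

    Returns (initial_loss, final_loss). The learned weights W1/W2 encode
    the solution, but inspecting them reveals no human-readable rules.
    """
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    losses = []
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                  # forward pass
        out = sigmoid(h @ W2 + b2)
        losses.append(float(((out - y) ** 2).mean()))
        d_out = (out - y) * out * (1 - out)       # backprop, squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return losses[0], losses[-1]

first, last = train_xor()
print(f"loss went from {first:.3f} to {last:.3f}")
```

All you can really verify is that the error goes down and the outputs are right, which is precisely the point: it works, but you cannot follow its reasoning.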
As for Lisp, although it is a beautiful language in many ways, I don’t know of anyone using it for commercial applications. Most neural network packages are in C/C++, or they generate C code.
I have a certain distrust of academia. I think it came into full bloom during my doctoral work, in the early 1990s, when a professor stated flatly to the class, “OSI will replace Ethernet in a few years, and when that happens, many of our network problems will be solved.”
Never happened, of course, and the problems were solved anyway, but this tells you what kind of bubble academics live in. We have a specification built by a committee of smart people, almost all academics, and of course it’s going to take over the world. They failed to see the practical roadblocks involved.
And in AI, neural networks have clearly won the day, and while we can’t necessarily follow the exact chain of logic, they generally do a good job.
Update: Rather than functional programming, I should have called the latter (traditional) AI technique rules-based. We used Lisp to create rules that spelled out what to do with combinations of discrete conditions.
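The rules-based style that update describes can be sketched in any language. Here is a hypothetical forward-chaining fragment in Python rather than Lisp, with an invented diagnostic domain for illustration:

```python
# A toy forward-chaining rule engine: each rule maps a set of required
# facts to a new conclusion, and rules fire until nothing new is derived.
RULES = [
    ({"engine_cranks", "no_start"}, "suspect_fuel_or_spark"),
    ({"suspect_fuel_or_spark", "fuel_ok"}, "suspect_spark"),
    ({"suspect_spark"}, "check_ignition_coil"),
]

def infer(initial_facts):
    """Repeatedly apply rules whose conditions are all satisfied."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"engine_cranks", "no_start", "fuel_ok"}))
```

Unlike a neural network, every conclusion here can be traced back to the exact rules that produced it; the tradeoff is that someone has to write all those rules by hand.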
I Bought Another Band; I am not Sure Why
July 31, 2016 · Posted by Peter Varhol in Software platforms, Technology and Culture.
Tags: Microsoft Band
My original Microsoft Band, a nice and relatively inexpensive fitness tracker and GPS, is disintegrating before my eyes. The wristband is peeling and falling apart, and I doubt it will last much longer. It is getting more difficult to charge, as the charging cable seems to have trouble engaging with the device.
My Band is just over a year old, and I would expect any electronic device to last longer, perhaps much longer (I still own a VCR, after all). I do use it almost daily, and wear it constantly except when charging (which it requires almost daily). I would like to tell Microsoft that, for all of its functionality at a reasonable price, it is an inferior product.
But I did so in a strange way: I just bought a new one. It is a Band 2, which by most accounts seems to be on the way out in favor of a possibly compelling Band 3, but I could not wait three months for that product to ship. I think my current model has about another 2-4 weeks of life left in it.
So for a product that is disintegrating after just over a year of use, why have I doubled down? Especially in a market where fitness trackers are a dime a dozen, and I could choose among many alternatives?
Yes, familiarity is one part of that answer. I know how to use it. Don’t discount that as a significant motivator. If I have to spend time learning a new feature set, I may take a while to get up to speed.
It is customizable. Within a fairly wide range, I can set up the type of information I want it to display. And I rather like the Microsoft Health app. While it may not be superior, it is easy to use within a fairly wide range of fitness activities. And it was my first experience with notifications from my phone (texts, incoming calls, voice mail), which I can’t really do without any more.
And while I complain at the rate at which my Band is falling apart, I also realize that fitness and activity technology is changing rapidly. I hope to have more compelling technology in the next purchase, and at a lower price.
Update: I bought my sister a Band 2 for Christmas 2015, and explained to her that I bought a new one because mine was falling apart. Her response: “Mine disintegrated last month, the clasp came completely apart. I did buy a new one, because I like it.” At least mine lasted a year.
The Tyranny of Open Source
July 28, 2016 · Posted by Peter Varhol in Software development, Software platforms, Software tools.
Tags: GNU, open source
If that title sounds strident, it quite possibly is. But hear me out. I’ve been around the block once or twice. I was a functioning adult when Richard Stallman wrote The GNU Manifesto, and have followed the Free Software Foundation, open source software licenses, and open source communities for perhaps longer than you have been alive (yes, I’m an older guy).
I like open source. I think it has radically changed the software industry, mostly for the better.
But. Yes, there is always a “but”. I subscribe to many (too many) community forums, and almost daily I see someone with a query that begins “What is the best open source tool that will let me do <insert just about any technical task here>.”
When I see someone who asks such a question on a forum, I see someone who is flailing about, with no knowledge of the tools of their field, or even how to do a particular activity. That’s okay; we’ve all been in that position. They are trying to get better.
We all have a job to do, and we want to do it as efficiently as possible. For any class of activity in the software development life cycle, there are a plethora of tools that make that task easier/manageable/possible.
If you tell me that it has to be an open source tool, you are telling me one of two things. First, your employer, who is presumably paying you a competitive (in other words, fairly substantial) salary, is unwilling to support you in getting your job done. Second, you are afraid to ask if there is the prospect of paying for a commercial product.
And you need to know the reason before you ask the question in a forum.
There is a lot of great open source software out there that can help you do your job more efficiently. There is also a lot of really good commercial software out there that can help you do your job more efficiently. If you are not casting a broad net across both, you are cheating both yourself and your employer. If you cannot cast that broad net, then your employer is cheating you.
So for those of you who get onto community forums to ask about the best open source tool for a particular activity, I have a question in return. Are you afraid to ask for a budget, or have you been told in no uncertain terms that there is none? You know, you might discover that you need help using your open source software, and have to buy support. If you need help and can’t pay for it, then you have made an extremely poor decision.
So what am I trying to say? You should be looking for the best tool for your purpose. If it is open source, you may have to be prepared to subscribe to support. If it is commercial, you likely have to pay a fee up front. If your sole purpose in asking for an open source product is to avoid payment, you need to run away from your work situation as quickly as possible.
Artificial Intelligence and the Real Kind
July 11, 2016 · Posted by Peter Varhol in Software development, Software platforms, Uncategorized.
Tags: artificial intelligence, Machine Learning
Over the last couple of months, I’ve been giving a lot of thought to robots, artificial intelligence, and the potential for replacing human thought and action. A part of that comes from the announcement by the European Union that it had drafted a “bill of rights” for robots as potential cyber-citizens of a more egalitarian era. A second part comes from my recent article on TechBeacon, which I titled “Testing a Moving Target”.
The computer scientist in me wants to cry “bullshit”. Computer programs do what we instruct them to do, no more and no less. We can’t instruct them to think, because we can’t algorithmically (or in any other way) define thinking. There is no objective or intuitive explanation for human thought.
The distinction is both real and important. Machines aren’t able to look for anything that their programmers don’t tell them to (I wanted to say “will never be able” there, but I have given up the word “never” in informed conversation).
There is, of course, the Turing Test, which purports to offer a way to determine whether you are interacting with a real person or a computer program. In limited ways, a program (Eliza was the first, though it relied on an easy trick) can fool a person.
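Eliza’s trick was little more than pattern matching plus echoing the user’s own words back with pronouns flipped. A minimal sketch of the idea (a tiny invented rule set, not Weizenbaum’s actual script):

```python
import re

# Flip first-person words to second person when echoing input back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few Eliza-style rules: a pattern to match, and a response template
# that reuses the matched fragment.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # catch-all when nothing matches

print(respond("I feel trapped by my job"))
```

There is no understanding anywhere in this code, which is why fooling a person in a narrow setting says so little about machine thought.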
Here is how I think human thought is different than computer programming. I can look at something seemingly unstructured, and build a structure out of it. A computer can’t, unless I as a programmer tell it what to look for. Sure, I can program generic learning algorithms, and have a computer run data through those algorithms to try to match it up as closely as possible. I can run an almost infinite number of training sequences, as long as I have enough data on how the system is supposed to behave.
Of course, as a human I need the imagination and experience to see patterns that may be hidden, and that others can’t see. Is that really any different than algorithm training (yes, I’m attempting to undercut my own argument)?
I would argue yes. Our intelligence is not derived from thousands of interactions with training data. Rather, well, we don’t really know where it comes from. I’ll offer a guess that it comes from a period of time in which we observe and make connections between very disparate bits of information. Sure, the neurons and synapses in our brain may bear a surface resemblance to the algorithms of a neural network, and some talent accrues through repetition, but I don’t think intelligence necessarily works that way.
All that said, I am very hesitant to declare that machine intelligence may not one day equal the human kind. Machines have certain advantages over us, such as incredible and accessible data storage capabilities, as well as almost infinite computing power that doesn’t have to be used on consciousness (or will it?). But at least today and for the foreseeable future, machine intelligence is likely to be distinguishable from the organic kind.