AI: Neural Nets Win, Functional Programming Loses
October 4, 2016. Posted by Peter Varhol in Software development, Software platforms, Uncategorized.
Tags: AI, functional programming, Lisp, neural networks
Today, we might be considered to be in the heady early days of AI commercialization. We have pretty decent speech recognition, and pattern recognition in general. We have engines that analyze big data and produce conclusions in real time. We have recommendation engines; while not perfect, they seem to be profitable for ecommerce companies. And we continue to hear the steady drumbeat of self-driving cars, if not today, then tomorrow.
I did graduate work in AI, in the late 1980s and early 1990s. In most universities at the time, this meant that you spent a lot of time writing Lisp code, that amazing language where everything is a function, and you could manipulate functions in strange and wonderful ways. You might also play around a bit with Prolog, a streamlined logic language that made logic statements easy, and everything else hard.
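The everything-is-a-function quality of Lisp can be hinted at even outside the language. Here is a minimal sketch, in Python rather than Lisp, of manipulating functions as values; the function names are my own invention for illustration.

```python
# Functions are values: they can be passed around and combined
# into new functions, much as Lisp encouraged.

def compose(f, g):
    """Return a new function that applies g, then f."""
    return lambda x: f(g(x))

double = lambda x: x * 2
increment = lambda x: x + 1

# Build a new function out of two existing ones.
double_then_increment = compose(increment, double)
print(double_then_increment(5))  # (5 * 2) + 1 = 11
```

This only gives a flavor of the style; Lisp carried the idea much further, with code itself represented as manipulable data.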
Later, toward the end of my aborted pursuit of a doctorate, I discovered neural networks. These were not taught in most universities at the time. If I were to hazard a guess as to why, I would say that they were poorly understood and not considered worthy of serious research. I used a commercial neural network package to build an algorithm for an electronic wind sensor, and it was actually not nearly as difficult as writing a program from scratch in Lisp.
I am long out of academia, so I can’t say what is happening there today. But in industry, it is clear that neural networks have become the AI approach of choice. There are tradeoffs of course. You will never understand the underlying logic of a neural network; ultimately, all you really know is that it works.
As for Lisp, although it is a beautiful language in many ways, I don’t know of anyone using it for commercial applications. Most neural network packages are in C/C++, or they generate C code.
I have a certain distrust of academia. I think it came into full bloom during my doctoral work, in the early 1990s, when a professor stated flatly to the class, “OSI will replace Ethernet in a few years, and when that happens, many of our network problems will be solved.”
Never happened, of course, and the problems were solved anyway, but this tells you what kind of bubble academics live in. We have a specification built by a committee of smart people, almost all academics, and of course it’s going to take over the world. They failed to see the practical roadblocks involved.
And in AI, neural networks have clearly won the day, and while we can’t necessarily follow the exact chain of logic, they generally do a good job.
Update: Rather than functional programming, I should have called the latter (traditional) AI technique rules-based. We used Lisp to create rules that spelled out what to do with combinations of discrete rules.
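The rules-based approach can be sketched very simply. Below is a toy forward-chaining loop in Python; the rules and fact names are invented for illustration and bear no relation to any real expert system.

```python
# Each rule is (set of required facts, fact to conclude).
# Firing a rule adds its conclusion to working memory, which
# may in turn enable further rules: combinations of discrete rules.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_flightless_bird"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_feathers", "lays_eggs", "cannot_fly"}, rules))
```

Note the contrast with a neural network: every conclusion here can be traced back to the explicit rules that produced it.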
Another Old Line Conglomerate Gets It Wrong
August 4, 2016. Posted by Peter Varhol in Software development, Technology and Culture.
Tags: coding, GE
I seem to be taking my curmudgeon role seriously. Today I read that Jeff Immelt, longtime CEO of industrial conglomerate GE, says that every new (young) person hired has to learn how to code.
So many things to say here. First, I have never been a proponent of the “everyone can code” school. No, let me amend that; everyone can probably learn to code, but is that the best and most productive use of their time? I would guess not.
Second, I’m sure that in saying that Immelt has put his money where his mouth is, and has gotten his own coding skills together. No? Well, he’s the boss, so he should be setting the example.
This is just stupid, and I am willing to bet a dollar that GE won’t follow through on this idle boast. Not even the most Millennial-driven, Silicon Valley-based, we’re so full of ourselves startup tech company would demand that every employee know how to code.
And no company needs all of its employees to be spending time on a single shared skill that only a few will actually use. GE needs to focus on hiring the best people possible for hundreds of different types of professional jobs. It may be an advantage for all of them to have some level of aptitude for understanding how software works, but not knowing how to code shouldn't be a deal-breaker.
I have worked at larger companies where grandiose strategies have been announced and promoted, but rarely if ever followed through. This pronouncement is almost certainly for PR purposes only, and will quietly get shelved sooner rather than later. And making such a statement does no credit whatsoever to Immelt, who should know better.
The Tyranny of Open Source
July 28, 2016. Posted by Peter Varhol in Software development, Software platforms, Software tools.
Tags: GNU, open source
If that title sounds strident, it quite possibly is. But hear me out. I’ve been around the block once or twice. I was a functioning adult when Richard Stallman wrote The GNU Manifesto, and have followed the Free Software Foundation, open source software licenses, and open source communities for perhaps longer than you have been alive (yes, I’m an older guy).
I like open source. I think it has radically changed the software industry, mostly for the better.
But. Yes, there is always a “but”. I subscribe to many (too many) community forums, and almost daily I see someone with a query that begins “What is the best open source tool that will let me do <insert just about any technical task here>.”
When I see someone who asks such a question on a forum, I see someone who is flailing about, with no knowledge of the tools of their field, or even how to do a particular activity. That’s okay; we’ve all been in that position. They are trying to get better.
We all have a job to do, and we want to do it as efficiently as possible. For any class of activity in the software development life cycle, there are a plethora of tools that make that task easier/manageable/possible.
If you tell me that it has to be an open source tool, you are telling me one of two things. First, your employer, who is presumably paying you a competitive (in other words, fairly substantial) salary, is unwilling to support you in getting your job done. Second, you are afraid to ask if there is the prospect of paying for a commercial product.
And you need to know the reason before you ask the question in a forum.
There is a lot of great open source software out there that can help you do your job more efficiently. There is also a lot of really good commercial software out there that can help you do your job more efficiently. If you are not casting a broad net across both, you are cheating both yourself and your employer. If you cannot cast that broad net, then your employer is cheating you.
So for those of you who get onto community forums to ask about the best open source tool for a particular activity, I have a question in return. Are you afraid to ask for a budget, or have you been told in no uncertain terms that there is none? You know, you might discover that you need help using your open source software, and have to buy support. If you need help and can’t pay for it, then you have made an extremely poor decision.
So what am I trying to say? You should be looking for the best tool for your purpose. If it is open source, you may have to be prepared to subscribe to support. If it is commercial, you likely have to pay a fee up front. If your sole purpose in asking for an open source product is to avoid payment, you need to run away from your work situation as quickly as possible.
Artificial Intelligence and the Real Kind
July 11, 2016. Posted by Peter Varhol in Software development, Software platforms, Uncategorized.
Tags: artificial intelligence, Machine Learning
Over the last couple of months, I’ve been giving a lot of thought to robots, artificial intelligence, and the potential for replacing human thought and action. A part of that comes from the announcement by the European Union that it had drafted a “bill of rights” for robots as potential cyber-citizens of a more egalitarian era. A second part comes from my recent article on TechBeacon, which I titled “Testing a Moving Target”.
The computer scientist in me wants to say "bullshit disapproved". Computer programs do what we instruct them to do, no more and no less. We can't instruct them to think, because we can't algorithmically (or in any other way) define thinking. There is no objective or intuitive explanation for human thought.
The distinction is both real and important. Machines aren’t able to look for anything that their programmers don’t tell them to (I wanted to say “will never be able” there, but I have given up the word “never” in informed conversation).
There is, of course, the Turing Test, which purports to offer a way to determine whether you are interacting with a real person or a computer program. In limited ways, a program (Eliza was the first, but it was an easy trick) can fool a person.
Here is how I think human thought is different than computer programming. I can look at something seemingly unstructured, and build a structure out of it. A computer can’t, unless I as a programmer tell it what to look for. Sure, I can program generic learning algorithms, and have a computer run data through those algorithms to try to match it up as closely as possible. I can run an almost infinite number of training sequences, as long as I have enough data on how the system is supposed to behave.
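The generic learning algorithm described above, running data through training passes to get a progressively closer match, can be sketched in a few lines. This is a minimal illustration of the idea; the toy data and learning rate are invented for the example.

```python
# Fit a single parameter w so that w * x approximates y,
# by making many passes over training data and nudging w
# toward smaller error each time.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0
lr = 0.05                          # learning rate
for _ in range(200):               # repeated training passes
    for x, y in data:
        error = w * x - y
        w -= lr * error * x        # adjust toward a better approximation
print(round(w, 2))                 # ends up close to 2, the slope in the toy data
```

The machine "learns" the slope only because I, the programmer, told it exactly what to measure and how to adjust; it finds no structure I did not specify in advance.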
Of course, as a human I need the imagination and experience to see patterns that may be hidden, and that others can’t see. Is that really any different than algorithm training (yes, I’m attempting to undercut my own argument)?
I would argue yes. Our intelligence is not derived from thousands of interactions with training data. Rather, well, we don’t really know where it comes from. I’ll offer a guess that it comes from a period of time in which we observe and make connections between very disparate bits of information. Sure, the neurons and synapses in our brain may bear a surface resemblance to the algorithms of a neural network, and some talent accrues through repetition, but I don’t think intelligence necessarily works that way.
All that said, I am very hesitant to declare that machine intelligence may not one day equal the human kind. Machines have certain advantages over us, such as incredible and accessible data storage capabilities, as well as almost infinite computing power that doesn’t have to be used on consciousness (or will it?). But at least today and for the foreseeable future, machine intelligence is likely to be distinguishable from the organic kind.
Some Hard Questions About Building a Team
April 23, 2016. Posted by Peter Varhol in Software development, Technology and Culture.
Tags: meritocracy, team
I feel a certain kinship with Pieter Hintjens. From his blog, it sounds like his diagnosis was similar to mine, last year. My diagnosis was wrong, and I declined surgery. In the same universe, he had the Whipple procedure, and has had at least several years of life tacked onto the end of his existence. And they seem to have been productive years, in a professional and personal sense, although it sounds like he may have little time left on the mortal plane.
But, reading other posts of his, I would hesitate to place myself firmly in his camp. Among his posts, on the viability of GitHub moving forward:
>> . . . a climate in which political outsiders use the weapons of gender and race against meritocracy.
So what is meritocracy? And what are the weapons of gender and race?
There was a time when I believed in strict meritocracy, like it was something that was easily definable and measurable. Age and experience have cured me of that delusion. In fact, we can't define meritocracy in any way that doesn't include our own biases.
Let me explain. Certainly, we can devise a test to determine who is the best at a particular skill. Or can we?
I spent my formative years studying psychology, which is where I was introduced to the concept of bias. We have these things called IQ tests, which purport to measure innate intelligence. Or something like that. But whatever we are measuring is the end product of our own biases about what comprises intelligence. There is a question on the standard IQ test: What color is a banana? Seems straightforward. But to someone who grew up with spoiled bananas, or no bananas at all, or who is color-blind, the question becomes problematic. Irrespective of intelligence.
I would not bet on a team that had the ten best programmers. I would bet on a team that worked as a team, with strengths and weaknesses. To compensate for the weaknesses, we need different points of view. To get different points of view, we need team members that are different, yet are cohesive. That is harder, and we shy away from harder.
Yes, there are people, who in their ignorance or incompetence, brandish gender and race maxims as teleology. And yes, they are wrong in a fundamental sense. And it is unfortunate that we have to endure them.
But that doesn’t mean that there is not value here. We are lazy. We ascribe success to intelligence, or ability. I say no. Success means having teams with complementary skills, not the best skills necessarily, but skills that offer the best chance of working effectively together.
How can we tell the difference between real value and political one-upmanship? Ah, that is the rub. I won't pretend to be able to do so. But I do know that choosing the ten best pure coders is a recipe for failure.
Testing and the Aftermath of Brussels
March 26, 2016. Posted by Peter Varhol in Software development.
Tags: terror, testing
My collaborator Gerie Owen and I had the honor and privilege of speaking twice at the Belgium Testing Days conference. We are very familiar with the areas that suffered from indiscriminate terrorist bombings earlier this week, and our hearts go out to the victims and survivors.
Much can be said about this. I will limit my comments to its impact on the testing community in Belgium and in Europe in general. While the organizers of Belgium Testing Days have confirmed to us that they and their loved ones are well and safe, it seems like any organization that is trying to bring people together in Belgium, and to perhaps a lesser extent in other parts of Europe, is at risk.
And that sets back the testing community immensely. One of the important things that we do as a profession is to gather people of similar interest and enthusiasm to exchange knowledge and ideas to advance the field. It has been my perception that Belgium Testing Days was an important part of that exchange in the European testing community.
No matter where we work or live, we are influenced by outside events. In this case, these events can stunt the growth of our community, and the professional development of individuals.
Gathering in large groups may not be what we want to do right now, because of the risk of being involved in a terror attack. We all make our own decisions, of course. But I am going to continue to participate in testing and DevOps conferences, in Europe and beyond.
No, AI Cannot Solve the World's Biggest Problems
February 23, 2016. Posted by Peter Varhol in Software development, Software platforms.
Tags: Google Brain, neural networks
‘AI can solve world’s biggest problems’ – Google Brain engineer.
The headline screamed for a response. The type of work being done with Google Brain is impressive. It relies on neural networks. Neural networks are cool things. I was recently reminded by a new LinkedIn connection of the work and writing that I did on neural networks in the 1990s.
Neural networks are exceptionally good algorithms. They use multiple levels of nonlinear equations, running in parallel and serially, that can approximate a given response to a set of data inputs, then adjust the variables so that each successive pass can result in a slightly better approximation (or it may not; that’s just the way nonlinear algorithms work sometimes).
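That successive-approximation behavior can be shown in miniature. Below is a single nonlinear (sigmoid) unit rather than the multiple levels a real network would use, trained over many passes on invented toy data; everything here is a simplified sketch, not a production network.

```python
import math
import random

random.seed(0)

# Toy task: output 1 when the first input is 1, else 0.
data = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 1), ((1.0, 1.0), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

def sigmoid(z):
    """The nonlinear squashing function at the heart of the unit."""
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):                      # each pass refines the approximation
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target
        grad = err * out * (1 - out)       # gradient of squared error
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad
```

After training, the unit answers correctly, but inspecting w and b tells you little about *why*: even at this tiny scale, the logic lives in opaque numeric weights.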
That may sound like just so much gobbledygook, but it’s not. In the 1990s, I used neural networks to design a set of algorithms to power an electronic wind sensor. In my aborted doctoral research, I was using neural networks to help dynamically determine the most efficient routings through large scale networks (such as the nascent Internet at the time).
Let’s be clear. What I did had nothing to do with the world’s biggest problems. The world’s biggest problems don’t already have mountains of data that point to the correct answer. In fact, they rarely, if ever, have correct answers. What they have is a combination of analysis, guesswork, and the many compromises made by dozens of competing interests. And they will never result in even close to a best answer. But sometimes they produce a workable one.
From my own experiences, I came to believe that having an instinctive feel for the nature and behavior of your data gave you a leg up in defining your neural network. And they tended to work best when you could carefully bound the problem domain. That is hardly the stuff of something that can solve the world’s most difficult problems.
Neural networks can seemingly work magic, but only when you have plenty of data, and already know the right answers to different combinations of that data. And even then, you are unlikely to get the best possible solution.
What Are We Doing With AI and Machine Learning?
February 12, 2016. Posted by Peter Varhol in Software development, Uncategorized.
Tags: AI, Machine Learning, testing
When I was in graduate school, I studied artificial intelligence (AI), as a means for enabling computers to make decisions and to identify images using symbolic computers and functional languages. It turned out that there were a number of things wrong with this approach, especially twenty-five years ago. Computers weren’t fast enough, and we were attacking the wrong problems.
But necessity is the mother of invention. Today, AI and machine learning are being used in what is being called predictive analytics. In a nutshell, it’s not enough to react to an application failure. Applications are complex to diagnose and repair, and any downtime on a critical application costs money and could harm people. Simply, we are no longer in a position to allow applications to fail.
Today we have the data and analysis available to measure baseline characteristics of an application, and look for trends in a continual, real-time analysis of that data. We want to be able to predict if an application is beginning to fail. And we can use the data to diagnose just what is failing. That way, the team can work on fixing it before something goes wrong.
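The baseline-and-deviation idea can be sketched simply: learn what "healthy" looks like from past measurements, then flag readings that drift far outside it. The metric values below are invented for illustration; real systems would use far richer data and models.

```python
import statistics

# Baseline: recent samples of a health metric (say, CPU utilization %).
baseline = [41.0, 43.5, 39.8, 42.2, 40.9, 44.1, 42.7]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(reading, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from baseline."""
    return abs(reading - mean) > threshold * stdev

print(is_anomalous(42.0))  # within the normal range
print(is_anomalous(97.5))  # far outside it: likely trouble brewing
```

Real predictive analytics layers trend detection and machine learning on top of this, but the core move is the same: establish the baseline, then watch for departures from it.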
What kind of data am I talking about? Have you ever looked at Perfmon on your computer? In a console window, simply type perfmon at the command prompt. You will find a tool that lets you collect and plot an amazing number of different system and application characteristics. Common ones are CPU utilization, network traffic, disk transfers, and page faults, but there are literally hundreds more.
This is a Big Data sort of thing; a server farm can generate terabytes of log and other health data every day. It is also a DevOps initiative. We need tools to be able to aggregate and analyze the data, and present it in a format understandable by humans (at the top level, usually a dashboard of some sort).
How does testing fit in? Well, we’ve typically been very siloed – dev, test, ops, network, security, etc. A key facet of DevOps is to get these silos working together as one team. And that may mean that testing has responsibilities after deployment as well as before. They may establish the health baseline during the testing process, and also be the ones to monitor that health during production.