It seems that The Machines are much in the news lately. I’ve seen several articles on increasing uses for drones and other UAVs in domestic “security” applications, not to mention the prospects of arming them for bona fide warfare.
On a much less violent front, we have this item from the Wall Street Journal, which describes how a seemingly innocuous (and nominally helpful) screening tool for hiring managers is blocking even qualified candidates from further consideration during job application processing.
I’m very much in favor of having machines help humans; that is, after all, why they are conceived, built, and used. But if you read the article you’ll find that—as is increasingly the case in too many areas—humans are not only required to adjust their own activities to accommodate the machines’ limitations, but also to overcome obstacles brought on by the use of the machines in the first place.
I know that, for the moment at least, the machines are still being programmed by people, and that it is the programmer’s (or systems engineer’s, or manager’s) failure to take into account all possible requirements and outcomes that leads to this situation. But it’s a real problem nonetheless.
Questions: How do you determine whether your decisions will have the results you want? Can you make such a determination with any certainty? How can you tell if a decision is the “right” one? What criteria do you use for such a determination? Do you learn from situations where your decision was the “wrong” one? What made it wrong, anyhow?