
Lowering Defenses to Increase Security

Starting at WhiteHat was a career change for me. I wasn’t sure exactly what to expect, but I knew there was a lot of unfamiliar terminology: “MD5 signature”, “base64”, “cross-site request forgery”, “‘Referer’ header”, to name a few.

When I started testing real websites, I was surprised that a lot of what I was doing looked like this: <script>alert(1)</script>
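That payload is the classic probe for reflected cross-site scripting: if the page echoes your input back unescaped, the script runs in the victim's browser. As a minimal sketch (in Python, with hypothetical function names, purely for illustration):

```python
import html

def search_results(query: str) -> str:
    # Vulnerable: user input is concatenated into the HTML response unescaped,
    # so query = "<script>alert(1)</script>" becomes live markup in the page.
    return "<h1>Results for: " + query + "</h1>"

def search_results_safe(query: str) -> str:
    # Fixed: HTML-escape untrusted input before reflecting it,
    # so the payload renders as inert text instead of executing.
    return "<h1>Results for: " + html.escape(query) + "</h1>"

print(search_results("<script>alert(1)</script>"))
print(search_results_safe("<script>alert(1)</script>"))
```

The fix is one library call, which is part of why it felt so strange that payloads this simple kept working.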

Everything was definitely not that simple… but a lot of things were. How could I be correcting the work of people who knew so much more about computers than I did? I’d talk to customers on the phone, and they already knew how to fix the vulnerabilities. In some cases, they were already aware of the vulnerabilities themselves! Periodically, WhiteHat publishes statistics on how long vulnerabilities take to get fixed in the real world, and how many known vulnerabilities are ever fixed. The most recent report is available here, with an introduction by Jeremiah Grossman here.

SQL injection was first publicly described in 1998, and we’re still seeing it after 17 years. Somehow, the social aspects of the problem are more difficult than the technical aspects. This has been true since the very beginning of modern computing:
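The persistence is striking because the technical fix has been known just as long: keep data out of the query string. A minimal sketch of the vulnerable pattern and the parameterized fix, using Python's built-in sqlite3 module and made-up table data purely for illustration:

```python
import sqlite3

# Toy database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_vulnerable(name: str):
    # String concatenation: input like "x' OR '1'='1" rewrites the WHERE
    # clause and returns every row in the table.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

def lookup_safe(name: str):
    # Parameterized query: the driver treats the input strictly as data,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup_vulnerable("x' OR '1'='1"))  # leaks the row
print(lookup_safe("x' OR '1'='1"))        # returns nothing
```

The gap between how easy this is to fix and how long it has survived in the wild is exactly the point: the hard part is social, not technical.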

Apart from some less-than-ideal inherent characteristics of the Enigma, in practice the system’s greatest weakness was the way that it was used. The basic principle of this sort of enciphering machine is that it should deliver a very long stream of transformations that are difficult for a cryptanalyst to predict. Some of the instructions to operators, however, and their sloppy habits, had the opposite effect. Without these operating shortcomings, Enigma would, almost certainly, not have been broken.

Speaking of the beginning of computing, The Psychology of Computer Programming (1971) has the following passage about John von Neumann:

John von Neumann himself was perhaps the first programmer to recognize his inadequacies with respect to examination of his own work. Those who knew him have said that he was constantly asserting what a lousy programmer he was, and that he incessantly pushed his programs on other people to read for errors and clumsiness. Yet the common image today of von Neumann is of the unparalleled computer genius: flawless in his every action. And indeed, there can be no doubt of von Neumann’s genius. His very ability to realize his human limitations put him head and shoulders above the average programmer today.

Average people can be trained to accept their humanity – their inability to function like a machine – to value it and work with others so as to keep it under the kind of control needed if programming is to be successful.

The passage above is from a section of the book called “Egoless Programming.” It goes on to describe an anecdote in which a programmer named Bill is having a bad day and calls Marilyn over to look at his code. After she finds 17 bugs in 13 statements, he responds by seeing the humor in the situation and telling everyone about it. Marilyn, in turn, reasons that if she could spot 17 bugs there must be more, so others review the code as well and find 3 more. The code was put into production and ran without problems for 9 years.

The author of the book, Gerald Weinberg, made another interesting observation:

Now, what cognitive dissonance has to do with our programming conflict should be vividly clear. A programmer who truly sees his program as an extension of his own ego is not going to be trying to find all the errors in that program. On the contrary, he is going to be trying to prove that the program is correct, even if this means the oversight of errors which are monstrous to another eye. All programmers are familiar with the symptoms of this dissonance resolution — in others, of course…And let there be no mistake about it: the human eye has an almost infinite capacity for not seeing what it does not want to see. People who have specialized in debugging other people’s programs can verify this assertion with literally thousands of cases. Programmers, if left to their own devices, will ignore the most glaring errors in their output—errors that anyone else can see in an instant. Thus, if we are going to attack the problem of making good programs, and if we are going to start at the fundamental level of meeting specifications, we are going to have to do something about the perfectly normal human tendency to believe that one’s “own” program is correct in the face of hard physical evidence to the contrary.

What is to be done about the problem of the ego in programming? A typical text on management would say that the manager should exhort all his programmers to redouble their efforts to find their errors. Perhaps he would go around asking them to show him their errors each day. This method, however, would fail by going precisely in the opposite direction to what our knowledge of psychology would dictate, for the average person is going to view such an investigation as a personal trial. Besides, not all programmers have managers — or managers who would know an error even if they saw one outlined in red.

No, the solution to this problem lies not in a direct attack — for attack can only lead to defense, and defense is what we are trying to eliminate. Instead, the problem of the ego must be overcome by a restructuring of the social environment and, through this means, a restructuring of the value system of the programmers in that environment.

By the nature of what we do, WhiteHat does try to find mistakes in other people’s work. It’s not personal, and those mistakes are rarely unique! In the big picture, what gave us computers in the first place was the scientific method, that is, the willingness to learn from mistakes.

Tags: sql injection