There are many implementations of cryptosystems that defeat the very locks they attempt to erect.  It's not that the cryptographic algorithm itself is weak, but the way it was implemented.  Around the turn of the century, a widely used implementation of Secure Sockets Layer (SSL), from one of the largest software makers in the business, was designed with a serious oversight: it sent back error messages in the clear.  This meant that if a cracker wanted to pick this lock, all he or she had to do was throw carefully tailored strings of gibberish at a site running this software, and occasionally a recognizable error message would come back.  For example, if you sent just exactly the right gibberish, you might get back



giBberiSh-GibbErish file not found:


This tells you that the last part of the gibberish you sent, when decoded by the other side, was some command that requested something be done with a file.  It also tells you that the middle of the gibberish you sent decoded to giBberiSh-GibbErish, and judging by how long it took to return the answer, you might even be able to guess that the operating system on the other end was attempting to run giBberiSh-GibbErish as an executable program.  After a mere million tries, you have a fifty percent likelihood of breaking the password, based solely on the error message dialog.  A security researcher at AT&T discovered the issue and reported it to the company that implemented the SSL in question.  The company first tried to deny that the error messages were sent back in the clear; when the AT&T researcher presented demonstration code to break the key, the implementing company tried to claim that it was a special case and nobody would use it that way.  It was more or less at this point that the demonstration code was leaked to the public, along with the whole sordid story, and then they had to fix it.  Folks in the business call this Cryptographic Snake Oil.
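To make the mechanism concrete, here is a deliberately simplified sketch of how an error message that echoes decoded data in the clear becomes an oracle.  The toy server, its single-byte XOR "cipher", and the function names are all hypothetical inventions for illustration; the real attack against that SSL implementation was far more involved, but the principle is the same: the attacker learns the secret by comparing what was sent against what the error message leaks.

```python
# Toy demonstration: an error message that echoes decoded data in the
# clear acts as an oracle.  The "cipher" here is a single-byte XOR,
# purely for illustration -- real SSL is nothing like this, but the
# information leak works the same way.

SECRET_KEY = 0x5A  # the server's secret; the attacker does not know it


def server_handle(ciphertext: bytes) -> str:
    """Hypothetical server: decrypts, fails to find the 'file', and
    helpfully echoes the decoded request back in the clear."""
    decoded = bytes(b ^ SECRET_KEY for b in ciphertext)
    return f"{decoded.hex()} file not found"


def attacker() -> int:
    """The attacker sends known bytes, reads the echoed decoding out of
    the error message, and recovers the key."""
    probe = bytes([0x00])              # attacker-chosen 'gibberish'
    error = server_handle(probe)       # error comes back in the clear
    leaked = bytes.fromhex(error.split()[0])
    return leaked[0] ^ probe[0]        # leaked byte XOR sent byte == key


if __name__ == "__main__":
    recovered = attacker()
    print(f"recovered key: {recovered:#x}")
    print(f"matches secret: {recovered == SECRET_KEY}")
```

The real flaw needed far more queries than this toy does (hence the "million tries"), but the lesson is identical: any response that reveals why decryption failed, or what the ciphertext decrypted to, hands the attacker a lever.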


A more subtle form is where cryptography is implemented correctly, but weaknesses in the underlying operating system allow attackers to install things like keyloggers, which wait for a system call to the SSL DLL as a trigger to transmit the last 5000 keystrokes to some e-mail address on the Internet, completely undoing any security offered by even a correctly implemented cryptosystem.  It's still Cryptographic Snake Oil: they're still offering for sale something that looks good but won't provide one whit of protection against a determined foe.

What can be done?  One thing you can do is use code issued under either the BSD or GPL license wherever possible; this protects you via the thousand-eyes effect.  Yet many Linux distros today think nothing of tainting the kernel.  There are three levels, or degrees, to which you can do this, listed below.  To some, introducing even the smallest amount of encumbered code that isn't strictly GPL-pure is tantamount to being a little bit pregnant: either you are proprietary or you're GPL, and no consideration is given to anything in between.  (A sketch after the list shows how to ask a running kernel whether, and how, it has been tainted.)

  • Free but encumbered
  • Here you are able to modify, and in many cases even redistribute, as long as you meet certain rules.  The code is open, so the thousand-eyes effect is still working for you, and others on the Internet can alert you to security problems.  This is probably the lightest penalty you are likely to pay for things like factory-written Linux drivers.

  • Nearly Free
  • Personally, I feel that if you and others can see the code, any potential for mischief is visible and corrective action can be taken against it.  Even if a license hostile to the GPL denies you the right to distribute modified code, using it for your own purposes is another matter, but you must be sure you don't distribute such code accidentally.

  • Non Free
  • This one can be DANGEROUS if not handled with extreme care.  This is the case where a blob of binary, often lifted directly from a Windows CD, is placed directly into the kernel; who knows what that code really does.  The NDIS driver is an example of this.  There is one exception.  Sometimes the only code needed from the proprietary, closed-source world is written to run on some embedded microprocessor inside an I/O card, as part of the initialization sequence that wakes that card up.  After that is done, the binary blob is never used again unless the card for some reason needs to be reinitialized.  The point here is that this hunk of unknown binary is firmware for that card, not part of the running kernel executing privileged instructions on the system CPU.  It is no more dangerous to insert this firmware than it would be to run firmware that came in an I/O card's internal ROM.  The absolute worst thing such code could do is tamper with the data stream channeled through it.  It could never, for instance, be used as a keylogger, since the I/O card has no access to system resources such as the keyboard.  This is an important distinction: code that runs on the system CPU has access to all system resources, and could, if it wanted to, reformat the system hard drive; kernel modules have that kind of access.  Fortunately this isn't as dark as it seems.  A carefully written wrapper program acts as a kind of SUPERVISOR program to limit what can be done, and if the hostile Windows driver attempts to do certain things, an exception interrupt is triggered to halt, monitor, or log such activity, at the superuser's option.  The key to making this safe is to be very careful in the design of the wrapper, allowing only those things that really need to happen and blocking all others.  If the wrapper driver is not designed with the utmost care, a single Windows blob of code could defeat your security completely.  You may as well be running Windows.  Or, another way to look at it, you are running Windows, at least a little bit of it.
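As a practical check on all of the above, the sketch below reads the kernel's taint flags.  The file path and the two flag meanings shown come from the mainline kernel's documented interface (/proc/sys/kernel/tainted and Documentation/admin-guide/tainted-kernels.rst); treat the decoding of any further bits, and the exact wording, as something to verify against your own kernel version.

```python
# Minimal sketch: ask a running Linux kernel whether it is tainted,
# and by what.  Reads /proc/sys/kernel/tainted, a bitmask the kernel
# maintains; only two well-known bits are decoded here -- consult
# Documentation/admin-guide/tainted-kernels.rst for the full list
# on your kernel version.

TAINT_BITS = {
    0: "P: a proprietary (non-GPL) module has been loaded",
    1: "F: a module was force-loaded",
}


def read_taint(path: str = "/proc/sys/kernel/tainted") -> int:
    with open(path) as f:
        return int(f.read().strip())


def main() -> None:
    try:
        taint = read_taint()
    except OSError as err:
        print(f"could not read taint status: {err}")
        return

    if taint == 0:
        print("kernel is not tainted")
        return

    print(f"kernel taint value: {taint}")
    for bit, meaning in TAINT_BITS.items():
        if taint & (1 << bit):
            print(f"  {meaning}")


if __name__ == "__main__":
    main()
```

On reasonably recent kernels, each loaded module also exposes its own taint letters under /sys/module/&lt;name&gt;/taint, which is often the quicker way to find exactly which module is responsible.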