There are a whole host of reasons why you -should-. But sometimes, implementing security can in fact be counterproductive. The issue, as with all issues in security, is this: users.
I'm involved in the creation of server software. In a meeting a few years back, while we were brainstorming about a new product that was meant to run on the same system as an existing product and connect to it, one of our architects got caught up in the idea that we should secure that connection.
We argued with him. At length. Everybody else in the room was vehemently opposed to securing the connection.
Okay, you say, that makes no sense! But it did. These two systems were intended to sit on the same box - and while it's technically possible to connect them across different boxes, you'd already need access to the server box (that is, the one running the original product) to do so.
Even so, you might be thinking, why on earth wouldn't we want to secure the connection? So it's on the same box, so what?
Because any security credentials we used would be, by definition, accessible to anybody who could access both the client and server application to begin with.
We have a longstanding policy regarding security: It's the client's job. We don't encrypt database usernames or passwords, we don't password protect server certificates (although the client can do so), we, quite simply, don't do security. (That's not in fact true, we have -lots- of security, but only on outward-facing components.)
If your file system is secure, so is our software. If your file system is compromised, so is our software.
Because there's no way around this. We can encrypt your username and password - but if your file system isn't secure, an attacker can open up our code, decompile it, reverse-engineer our algorithm, and decrypt that username and password. It would take me a couple of hours to do exactly that. Obscurity of the encryption algorithm doesn't help here, and where obscurity doesn't help, nothing does. (Yes, we could encrypt it with a key - but where is the key going to live?)
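To make the key problem concrete, here's a hypothetical sketch (none of these names or values come from our actual product): if the decryption key has to ship inside the application, anyone who can read the application's files can recover the plaintext, no matter how the "encryption" is done.

```python
# Illustrative only: "protecting" a stored password with a key
# that is baked into the application itself.

from itertools import cycle

EMBEDDED_KEY = b"s3cret-key"  # hypothetical key shipped in the binary


def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Simple XOR cipher; encrypting and decrypting are the same operation.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))


# The vendor "encrypts" the password before writing it to disk...
stored = xor_bytes(b"hunter2", EMBEDDED_KEY)

# ...but anyone who can read the application can extract EMBEDDED_KEY
# (by decompiling it, or just reading the code) and reverse it instantly.
recovered = xor_bytes(stored, EMBEDDED_KEY)
assert recovered == b"hunter2"
```

The same reasoning applies to any cipher, not just XOR: moving from XOR to AES changes how long the reverse-engineering takes, not whether it's possible, because the key still has to live somewhere the application (and therefore the attacker) can read.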
Any security we implement would be a band-aid, and would only hide the real security issue from our users: namely, that these boxes can only be as secure as their filesystems. Not only that, but by providing security, security becomes part of our product; if it fails and somebody does Something Bad, we're (more) liable, because a part of our product was defective; our security could have Been Better.
So we just don't do security, not in that way. We -could-, and it could arguably make the product better [normally I despise the word "arguably," but here I think it's appropriate, as there are other reasons I disagree which aren't relevant to this post], but doing so would make us more liable without bringing anything to the table that we can sell. Even if it were free to develop, we wouldn't do it.
So this architect lost that argument. And progress marches on. Without redundant security measures.