For quite some time, we've been at the point in these security wars where we have to surround the truth of our data with multiple layers of protection, and we continually have to increase or improve those layers. One of those layers is that of secure answers to those "security questions" we so often have to fill in.

We've all seen those questions used to confirm our identity:

What high school did you attend?
What was your first car?
What's your dad's middle name?
What's your pet's name?

In the beginning, answering them with factual answers was the convenient, and seemingly only, option. They were easy to remember. As is usual with any cycle of invention, the bad guys found that they could make easy work of their own activity by taking advantage of the forthrightness of the good guys (just think of how door locks, steering wheels, and passwords have had to change).

The current wisdom is to falsify the information; or, as they say, lie. Why should we give incorrect answers to these questions? Because other people - family and friends - easily know the real answers, or can figure them out quickly. Our extended families and friends already know our high school, dad's middle name, and pet chinchilla's name. With a little social engineering, their friends and family will discover the answers, too. (Not that any of us have anything other than saints for friends and family, but just in case...)

One example: A mechanic friend of mine, after sharing some personal information, had his ID stolen by his college "friend," an action which led to years of paperwork and attorney fees in order to recover. Crime, even by a "friend," happens - always has, always will.

We are presented with an apparent dilemma: we either 1) tell the truth, making the answers easy to figure out, or 2) lie to stay secure.

For any person trying to be good, this is an untenable situation. Yet by understanding the situation, it's easy to see that it's not lying. At first blush, this seems to be rationalizing, but it really isn't.

When involved in some kind of intense networking event - e.g., speed dating - the potential is high that others can get lots of personal answers really fast. Additionally, many answers can be gleaned by anyone searching online for information (just check out recent Secjuice posts on OSINT). One example of freely available information is: "What is your mother's maiden name?" Many women include their maiden name on their Facebook page so that high school friends can find them after they're married (e.g., Jane "Smith" Doe). Another example is all of those posts about that favorite pet "Bunny" being at the vet.

When giving false answers to these questions, we're not giving fake information to any governing authority, falsifying financial records, or misrepresenting ourselves in court documents. We're giving answers so that an identity management system can identify us. We're used to passwords, passcodes, and passphrases - you can think of these as passanswers.

We're using what's called a "shibboleth." This isn't the same as the Shibboleth SSO architecture, but the origin of the term is related.

In modern use, a "shibboleth" is a word or phrase that distinguishes one group from another, much like jargon or slang. The term comes from an ancient practice in which one people group used the word "shibboleth" itself to tell insiders from outsiders: outsiders couldn't pronounce the "sh," so anyone who answered with "sibboleth" was denied entry. We all use this kind of test to determine where someone is from, what profession they belong to, or any number of other things. Tech people love to use TLAs, business people like to talk about war rooms, Twitter users talk about "ratio-ing," etc.

It's never been unethical to use a code word or secret phrase that identifies a person as belonging to a group. On the playful side, it's the kids in a secret club who require that potential entrants give the secret codeword before being allowed in. On the technical side of things, it's an aspect of IAM (yes - that's a TLA - you can tell what group I'm in).

From a 2015 study by Google and Stanford, the following are five categories that make those "secret" questions easy to guess:

  1. Questions with common answers
  2. Questions with few plausible answers
  3. Publicly available answers
  4. Social engineering
  5. Social guessing attacks
    (Study found here: https://storage.googleapis.com/pub-tools-public-publication-data/pdf/43783.pdf)

Answering with false, even ludicrous, answers is good security, and it's not a lie - it's part of identifying yourself to the authentication system as being part of its in-group, as an insider. You're electronically setting up a shibboleth between you and a piece of equipment that allows it to distinguish between you and the other 7 billion people on the planet.
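In practice, treating these answers as passanswers means generating them the same way you would a password: random, unique per site, and stored in a password manager rather than your memory. Here is a minimal sketch of that idea in Python; the question list, answer length, and `make_passanswer` helper are illustrative choices, not a prescribed method.

```python
import secrets
import string

def make_passanswer(length: int = 20) -> str:
    """Generate a random, meaningless answer for a security question."""
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One random "passanswer" per question, to be stored in a password
# manager alongside the account's real credentials.
questions = [
    "What high school did you attend?",
    "What was your first car?",
    "What's your pet's name?",
]
vault = {q: make_passanswer() for q in questions}

for question, answer in vault.items():
    print(f"{question} -> {answer}")
```

The `secrets` module is used instead of `random` because it draws from a cryptographically strong source, which matters here for the same reason it matters for passwords: an attacker who knows your generation scheme still shouldn't be able to predict the output.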

A word of caution: no one is supposed to read those answers, but just in case, stay away from answers that could be misconstrued if monitored. If your collective answers add up to a personage on a most-wanted list, and someone (maybe your bank) uses automated monitoring to flag suspicious entries, your answers could attract negative attention even if no human ever reads them directly. Please note: I don't know whether companies actually monitor those things, but stranger things have happened.

Go forward with the confidence that fake answers do not create a moral dilemma. Answer those questions with incorrect or even incoherent answers with a clear conscience, and end up with better security.

The awesome image used in this article is called Little Red and was created by Siv Storøy.