Quite a lurid title, but what gives? Some very smartass infosec people might yell heresy, but let me explain. Today we'll dive into several security-related questions, topics, and even ways of reasoning about security, and I'm going to tell you why I think the security of something shouldn't be measured solely by how hardened a system or piece of software is, how many security features it has, and so on. Or, at the very least, I want you to ask yourself whether it should be measured that way.

Let's get into it, shall we?

Preface, Disclaimer & Warnings

Just a quick heads-up: although the title says security, and security is my main focus in this blogpost, I also mention "projects" that may be security- and privacy-related, or even mainly privacy-related, where security is either a necessity for the application or something of an afterthought. I try my best to be technically accurate in these blogposts while not going into every little detail or overcomplicating things. So please forgive me if it seems like I'm not being entirely accurate at some points.

It may also seem that I tend to drift off from the given topic; usually this is to introduce a concept most people already know in order to explain the "main topic" in that context. I may also do it unconsciously, though I try my best to avoid that. These digressions give my blogposts a personal spin, and since most people reading them (yeah, I mean you) are infosec professionals who are bored to death by all the "regular" blogposts anyway, I thought this style of writing could be a little refreshing. You're reading a blogpost after all, not a manual.

Not Everybody Is Like You

I don't mean that in a negative way. I'm sure you're a great individual, but unfortunately not everybody is as tech-savvy as you. I know you've probably read this many times on every social media platform inside the "infosec bubble", and I'm tired of reading it too, but security shouldn't be in the user's way. Sure, you know this, I know this, and probably many more do. That's not the problem either; the problem is that we tend to forget about these things. They might be somewhere in our memory, but we don't actively think about them when we need to.

If you already know why security shouldn't be in the user's way, you can skip to the next block of text. In case you don't, here's a little explanation of mine. Security has many forms and layers: it ranges from the "deeper" technical side of (for example) sandboxing, pointer authentication, and the cryptographic processes that make your web experience secure, up to your email provider's warnings about a phishing attempt and two-factor authentication.
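To make the two-factor authentication layer a bit more concrete, here's a minimal sketch of how a time-based one-time password (the six-digit code your authenticator app shows) is derived per RFC 6238, using only Python's standard library. The function name and parameters are my own choices for illustration; this is a sketch, not production code.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, period=30, digits=6, now=None):
    """Derive a time-based one-time password (RFC 6238 on top of RFC 4226)."""
    # The shared secret is usually given to the user as a Base32 string.
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many 30-second windows have elapsed since the Unix epoch.
    counter = int((time.time() if now is None else now) // period)
    # HMAC the counter as an 8-byte big-endian integer.
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # RFC 4226 "dynamic truncation": pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # Keep only the requested number of decimal digits, zero-padded.
    return str(code % 10 ** digits).zfill(digits)
```

The point of the example is that the "secure" part (HMAC over a shared secret) is entirely invisible to the user; all they ever see is a short code they copy over, which is exactly the kind of out-of-the-way security this post is about.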

Besides that, there are many other things that don't fit into this little concept of mine. Generally, though, you want to stay out of the user's way with security-related things, and where you do have to step in, you'll have to guide the user towards secure behaviour. It doesn't matter if you use the best and most secure virtualization technique or have the latest and greatest IDS/IPS if one of your employees, out of curiosity, plugs in a USB drive they found lying around, clicks a malicious link, or maybe even enables macros in Excel (shudder).

In short, most security incidents, or at least the worst ones, happen because of people's missteps and not just flaws in systems. Essentially, go in with this mindset: if you give users the power to rule over their own security, they'll do everything in their power to either turn the security features off or circumvent them. It's a shitty reality, but it's reality after all. Obviously it's hard to secure a program, a device, or for that matter a user without interacting with them. For now, though, we'll have to do our best to stay out of the user's way, and where we do have to interact with the user, make it as friendly and intuitive as possible.

"Give a man a 0day and he will exploit some companies once. Teach a man how to phish and he will exploit people and companies for the rest of his life." - Luca Ziesler, 2020

Security = Philosophy?

Security is quite a bit like philosophy, and in some ways it is its own philosophy. So, just like with philosophy, I don't really have answers, only questions. I don't know of a definition everybody agrees on for measuring when something is secure. Some say it's really about the security mechanisms and features themselves, others say security is only as good as it is intuitive for the user, and some people say something different entirely.

We all want easy answers to big questions, but for me it's not that easy. Security is an amalgamation, after all, and not linear in any way. It depends on many factors, not just one; the different factors are in no way equal, and yet they shouldn't be valued unequally. Everything that's supposed to be evaluated from a security perspective has to be, and deserves to be, evaluated individually. You can't simply put UX over security or security over UX; try to get the best out of both for your use case.

I know this is very hard to achieve, and not everybody in infosec has to deal with it, at least not directly, but it's all the more important for the small number of people in infosec who do. Even we as "infosec professionals" make mistakes from time to time, either because we're lazy or because something lacks intuitiveness.

As an endnote, just try to imagine what you would be like if you weren't in infosec. How would you evaluate security? What do you think is most often misunderstood or miscommunicated from your infosec standpoint? What would you want people outside infosec to know about security? Any question you ask yourself, and perhaps others, brings you one step closer to reasoning well about infosec and security in general.

Is Security Incomparable?

So, after all this talk about how you should make security as intuitive as possible and how you can't directly trade off security against UX, we move on to yet another question: is security incomparable? Just another question that is hard to reason about. After all, this is an infosec/OSINT blog, so why do we even bother with all the other stuff besides security? Mostly because security is, in the big picture, useless without everything around it. Security depends on many things; it comes in all shapes and sizes.

You simply can't have security without all the other things. To truly understand security, we have to analyze and understand everything that comes with it and surrounds it. As previously stated, I don't have the answers. People interact differently with different things, and while a smartcard is obviously more secure than most operating systems, it also doesn't have the functionality or complexity of a full operating system.

There are so many things to consider when evaluating security that I can't possibly write about all of them. My blogposts are messy enough and hard to follow anyway, at least that's how I imagine them. So, is security incomparable? From my viewpoint it surely is. You can't really reason about security in comparison to things outside of security, and you can't even really compare different security techniques with each other. Computers, technology, and physical security for that matter are way too diverse and complex for me, or most likely any of us, to truly understand.

A Hands On Comparison

Let's get this done quick and dirty: what is more secure, Windows 10 or OpenBSD? I'm being serious here. No, this time I'm not even considering user-friendliness or anything like that. Of course, OpenBSD has many exploit mitigations and such. Also, more Windows 10 users get pwned than OpenBSD users, mainly because far more people use Windows 10 and the people who use OpenBSD know what they're doing. So now that we've talked about how many users of each system got pwned, let's talk about how many attacks have been prevented by OpenBSD versus Windows 10. I think the answer is quite obvious.

But Luca, many more people use Windows 10 than OpenBSD.

Correct, just let me explain. Windows 10 and OpenBSD are obviously very different operating systems in every way; the only real thing they have in common is that both can be used as a desktop operating system. The thing is, OpenBSD may be more secure than Windows 10 in some technical ways, but something else to consider is that most real-life attacks aren't about exploiting low-level internals.

Mostly, those attacks are either done in a lab or aimed at specific people, infrastructures, and the like. On the other hand, people or companies using OpenBSD usually have a very good reason to do so, and I wouldn't describe them as everyday or common users. The attacks on OpenBSD systems also tend to be far more high-profile than what most Windows 10 users experience; the "common" user usually doesn't have to worry about those kinds of attacks. Of course, both OpenBSD and Windows 10 use exploit mitigation techniques, but most "hacks" are still carried out through phishing, social engineering, or a mix of both, which is a bit redundant to say, since phishing basically is social engineering.

What I want to make clear is that security is not just about protecting the system, but also, and mainly, about protecting the user. In a sense, protecting a system also protects the user, but protecting the user goes far beyond the technical side. Protecting the user, or guiding them to act more securely, can also be more effective than some of the system's security features.

What Did You Just Read

I know my comparison isn't really fair; I said as much beforehand, but I at least had to try. You might even be furious about my comparison, and I understand that. What I hope, though, is that you can put those feelings aside for now and think about what critique you have of it. It's not just a comparison for its own sake, but also something for you to think about. I deliberately made this comparison unfair so that you can see the unfairness and spot it in the future, when people make a similar comparison or try to give an ultimate answer to something that has no answer.

Conclusion

Infosec is, and always was, a diverse and complex subject. It's a bit like a newly discovered science, where many things are still uncertain and unknown. Some views might still change, and new insights will emerge.

Many people in infosec come from different industries and shape infosec both as a community and as a science. Approaches to information security range from mathematics, physics, and immunology to art, creativity, and design techniques. I've virtually met people with all sorts of backgrounds who are inspired and guided by their history to contribute to infosec. The main conclusion, obviously, is that security can't simply be measured like, say, a scenario in physics. Beyond that, though, this blogpost also shows the diversity of infosec.

Diversity in securing systems, in securing users, in approaches, and in the people inside infosec. Diversity isn't just a buzzword; infosec needs diversity in order to survive and make it as an industry. Cheers.
