Security researchers must know how to responsibly disclose the vulnerabilities they find in any organization so they can stay on the right side of the law. This Q&A-style article explains how to practice responsible vulnerability disclosure.

Researchers who do not practice responsible vulnerability disclosure can face legal action even when they disclose vulnerabilities in good faith, with consequences as dire as prison time and financially ruinous lawsuits.

"Many hackers refrain from publicly disclosing privacy and security vulnerabilities they discover for fear of legal retaliation. Consequently, this is creating an increasingly hostile digital frontier for everyone. The only viable remedy is to provide safeguards for hackers conducting good-faith security research." –HINAC

In this Q&A, I focus solely on the legal aspects concerning security researchers and disclosing vulnerabilities they find, exploring the repercussions of not safeguarding yourself as a security researcher from potential legal retaliation.

I interviewed cybersecurity experts Alyssa Miller (@AlyssaM_InfoSec), Bryan McAninch (@bryanmcaninch), Phillip Wylie (@PhillipWylie), Alberto Daniel Hill (@ADanielHill), as well as a data & privacy lawyer, D (@PrivacyLawyerD), to gain their insights and advice on these important matters. This is part two of my series on the Hacking Is NOT A Crime global advocacy group, and if you haven't read it yet, check out part one here.

Now let's begin our Q&A!


Responsible Vulnerability Disclosure

Knowing how to properly disclose vulnerabilities found in a company's network, app, or device is imperative for any security researcher who wants to avoid unnecessary legal action. In this Q&A, I interview five cybersecurity experts to learn how security researchers can protect themselves, responsibly disclose the vulnerabilities they find, and navigate the law, and I speak with a professional hacker from Uruguay who served time in prison to hear his perspective on these matters.

💡Q: What advice would you give to hackers who are afraid of disclosing a vulnerability they have found?

Alyssa Miller:

"Yeah... it is tough. Because even if you have the legal cover of a particular agreement and what not, organizations are still known to go back on it.

Before you go digging into any site, you want to check whether they have a Bug Bounty program and a reporting policy, and make sure you clearly understand and follow their process. Many orgs publish that information on their website, and there are movements to help you do that as well. And sometimes we discover things by accident, and that's when it's far trickier.

The most important thing is to be very detailed in your documentation and very clear about what you did, and honestly, be careful that you don't stumble into private data or anything else. So, as you build a POC (proof of concept), stay within your own data wherever possible. And if you're able to create multiple accounts on a system, then at least you can say, "I was the administrator and wasn't accessing data I didn't have permission to." You can do things like that, but you have to be exceptionally careful.

Another thing you can do is reach out to an organization, tell them you found something, and ask for a safe-harbor agreement before disclosing any details. The safe-harbor agreement allows you to report a vulnerability without any threat of legal action. And if they won't give you one, then you don't give them anything to report. I know we want the company to get better and to work with you, but sometimes, for your own sake, you don't report it.

You have to understand that if they won't give you a safe-harbor agreement, you don't give them additional details.

You can go so far as to say, "Hey, I've discovered something that appears to be a significant vulnerability in your web app. I'd like to disclose it, but I'd like to have a safe-harbor agreement in advance," so that they agree they won't come after you. It's probably a good idea to have a lawyer review the safe-harbor agreement if you can, and make sure there are no loopholes that would still let them go after you."

After speaking to Alyssa Miller, I asked data and privacy lawyer, D (@PrivacyLawyerD) for more input from a legal perspective.

PrivacyLawyerD:

"First things foremost, I do want to say it’s a tricky area. You’re already working in an area that companies are reluctant to make disclosures about. And this can’t be legal advice because you shouldn't take it from a podcast. Consult a lawyer for your own situation.

The safest thing for a security researcher is to make sure that the entity you're researching has a robust Bug Bounty program and robust procedures in place for reporting, and honestly, it may help to see if there's an informal history of how the business has responded to inquiries and notifications in the past.

What really pisses me off is when they claim to have a Bug Bounty program and don't abide by their own rules, because it's a matter of respect between companies and the researchers out there who want to help. Most researchers are ethical, decent people who genuinely want to help these companies. Companies that ignore their own Bug Bounty rules are shooting themselves in the foot.

Ultimately, for a researcher, if you're worried about legal ramifications, have a conversation with a lawyer before you disclose anything; an initial consultation is usually free. And if it's a professional consult, they'll tell you whether you need to talk further about a more complex situation."

Responsible vulnerability disclosure is absolutely critical to safeguard yourself as a security researcher from legal action, and to ensure that a company does not suddenly turn on you. Consider the case of Alberto Daniel Hill, who is currently out on bail. He was the first hacker in Uruguay to be put behind bars after being accused of attacking a medical provider, an organization to which he had already disclosed vulnerabilities a couple of times, years apart, before his arrest.

I asked him what he would have done differently, so that security researchers can learn from his personal experience and take responsible vulnerability disclosure seriously. His answer is below.

💡Q: I read about your story... What would you have done differently if you could go back in time and do it all over again?

Alberto Daniel Hill:

"I used to always say if I had to live my life again and all the outcomes of my decisions, I would make the decisions again. But after my situation that destroyed my life and took everything away from me, I wouldn't have been so naive, and I wouldn't have reported anything, because what I reported to the medical provider was only 1% of the reports I’ve made as a professional security researcher.

It’s not like one day I woke up in a bad mood and decided to attack the provider. I wouldn’t… I love hacking, I love security, to me it’s my life. I can’t imagine my life without that. It’s an obsession. I can't live without that. If I had to live again, I’d be a hacker again, but I wouldn't be so naive to trust people like I did and report those things.

For me it’s complicated because I should want to help and do the right thing. But doing that... and dealing with a legal system... a powerful corrupt legal system…

If I could go back in time, I wouldn't do that, because it was the login to hell. I had to live in hell. My life was destroyed and I'm still recovering, and I don't know if I can ever completely recover from the events of that report.

After everything I lost in my life, I am sharing my story. I am completely open about it. For me, it's completely natural. I have nothing to hide. I'm completely open to talking about it. I am not ashamed, because I haven't done anything wrong. I can walk with my head up because I feel no shame about anything. I should try to help, make a change in the system, and cooperate.

I am convinced the system is going to improve. And the laws of the system and the laws in the world, it's a part of evolution, they are going to improve. It'll take time and effort, but it's something natural. My case was an inflection point; there is a before and an after.

I hope my case will eventually end and change the way computer crimes are prosecuted in my country, and I have a petition for that. It's absurd that we don't have computer crime laws here in Uruguay, as if we were in the 19th century. I don't want people to suffer what I did. I want to be the first and last person to go through that due to the incompetence of the police and the justice system in my country."


To Find or Not to Find A Vulnerability

Because many security researchers are afraid to disclose a vulnerability they have found, especially after reading stories like Alberto Daniel Hill's and those of other cybersecurity professionals who have been charged with computer crimes, I asked cybersecurity experts Phillip Wylie and Alyssa Miller whether it is OK for a researcher to find a vulnerability without permission and disclose it.

Here's what they shared!

💡Q: Do you think it’s OK for a security researcher to find a vulnerability in an organization without their written permission and disclose their findings? Why or why not?

Phillip Wylie:

"If their intention is good, they should have some kind of permission. Or some kind of Bug Bounty. You definitely need that 'OK' to do that. You don’t want to test things without permission because, sometimes, people stumble across things when they shouldn’t do that. Be careful what you’re doing because it makes things more difficult and can affects laws in the future."


Phillip Wylie:

"What it does is, it makes the industry looks bad as a whole. You told me ethical hackers are good, but when you have someone doing criminal activities as an ethical hacker… to distinguish ourselves from the bad guys, it blurs the lines. Our biggest enemy are people doing that sort of thing."

💡Q: What actions would a security researcher or hacker need to do in order for them to be officially deemed as a criminal?

Alyssa Miller:

"That's a very broad interpretation of the CFAA (Computer Fraud and Abuse Act). We’ve had some recent court cases that we’ve pushed back on that a bit. It’s hard to draw a specific line. If what you’re doing isn’t for any gain of any type or to embarrass someone of any type—that’s kind of the high level line. As long as you’re not accessing data you’re not supposed to access, like if it’s a proof of concept be careful, even with ways to validate vulnerabilities instead of exploiting them.

When you are using data you're not supposed to access to gain a benefit or to use it against someone else... then you've done something that is criminal. Even if you have the moral high ground, the legalities don't always match the morality.

You can even see it with whistleblowers today, not necessarily hackers, but people who took things from the government and released them, like Reality Winner. From the morality perspective, you may ask, "Did she release information that is good for society as a whole?" But it's also technically illegal.

So you have to be aware of what you are using the information for. If you're using it for any reason other than helping the organization fix the issue, it is probably illegal."


Advice For Businesses Dealing With Security Researchers Disclosing Vulnerabilities Found

Organizations should treat security researchers with respect rather than F.U.D. (fear, uncertainty, and doubt), because researchers who disclose a vulnerability to an organization are generally acting ethically and genuinely want to help.

Much of the F.U.D. stems from a lack of awareness and understanding of how security researchers work, including the misconception and stereotype that hackers are "bad guys", a label security researchers often get lumped under.

I asked BISO and hacker Alyssa Miller, as well as privacy and data lawyer D, what advice they would offer businesses and organizations in an effort to change their fearful and negative views of security researchers. Alyssa Miller answered first:

"I think you have to start off by giving them the benefit of the doubt and establishing a cooperative relationship. More times than not, they have good intentions, or they won't tell you about it. Now that line starts to get grayed because of Bug Bounty programs.

If your organization doesn't have a Bug Bounty program, you may be contacted by a security researcher requesting or demanding payment… so before you ever get contacted, you need a vulnerability reporting policy published clearly somewhere on your website. That's where you tell researchers exactly what your expectations are, and alongside those expectations, you set the safe-harbor parameters that tell the security researcher, "We will give you safe harbor and will not pursue legal action against you."

You should also look into how you will offer Bug Bounty rewards and what criteria those rewards will be based on.

And even if they're not demanding compensation, still consider compensating them, especially when they are reporting something to you that they didn't have to. They are doing you a favor. The security researcher could easily have disclosed it publicly as a zero-day, but they are reporting it to you instead.

Still today, hackers like the "cred" that comes from finding a high-profile vulnerability in a high-profile product. So in order to avoid that… figure out the compensation and how long it will take to remediate the issue. Negotiate in good faith. Unfortunately, many organizations don't negotiate in good faith; they try to downplay it or act like they knew about it all along. Be genuine and transparent. Those are the most important things."
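To make the published-policy advice concrete, here is a minimal sketch of a machine-readable pointer to a disclosure policy using the security.txt convention (RFC 9116), served at /.well-known/security.txt. The domain, email address, and URLs below are placeholders, and the linked policy page is where an organization would spell out its scope and safe-harbor terms:

```
# Hypothetical example of /.well-known/security.txt (RFC 9116)
# All contact details and URLs below are placeholders.
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:00.000Z
Policy: https://example.com/vulnerability-disclosure-policy
Acknowledgments: https://example.com/security/hall-of-fame
Preferred-Languages: en, es
Canonical: https://example.com/.well-known/security.txt
```

A file like this simply tells researchers where to report and which rules apply; the safe-harbor language itself lives in the linked policy.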

💡 Q: How can these organizations develop a better relationship with security researchers and avoid jumping the gun when a researcher discloses something?



Alyssa Miller:

"Organizations that lack security awareness and have less mature security programs are most likely to take legal action against a security researcher. It comes back to that cooperative attitude. Understanding what hackers are really about.

Cybercriminals are not typically going to come to you and report a vulnerability. In rare cases, a cybercriminal has leveraged a vulnerability for gain and then reported it, but security researchers' intentions are good.

Even when they demand payment, and I don't like the idea of demanding payment when you're not contracted with an organization, their intentions are still good. Even if you're not willing to pay them, their intentions are not evil. Their intentions are ultimately good.

So, using that opportunity to go after somebody just sows distrust, and ultimately it is going to lead researchers to not disclose anything to your company and instead disclose it publicly. Wouldn't you rather encourage them to disclose to you than to the public?"

Next, I asked @PrivacyLawyerD for his insights.

"This is all anectodal, but from what I’ve seen, businesses take action against security researchers because they’re nervous, freaking out, and trying to cover their asses. I think it’s an ego thing too. To a lot of companies that have security issues that are found out, the entire security community would respect you a hell of a lot more if you just came out and admitted what was going on, and you fixed it, or told them what you are working on.

Listen to the advice from security researchers, and pay out on your Bug Bounties to the researchers who want to help you. Otherwise, when these companies sue researchers and take aggressive actions, it makes them look like the "bad guys" in the situation.

It makes them look like they're not willing to learn, to grow, or to improve their own position. Because nobody's security is 100%, nobody's privacy is 100%, and nobody's going to get everything right. If you don't work to learn everything you can about privacy and security, then you're setting yourself up for failure.

I think some PR people and old-school lawyers who are worried about disclosures think that going after the researcher is the right way to go, when that just makes them look bad. Honestly, I haven't seen a case where the researcher acted unethically, but I'm sure they're out there.

But in the end, when a security researcher comes in good faith to let the company know about a security issue, the company should be jumping at the chance to learn and say, "Oh okay, here's something that we can fix." If you're not doing that, someone less ethical will exploit that vulnerability and use that information against your company.

Don't be too proud to take advice and help from security people.


That’s my advice to companies.

And that reflects a larger trend in society: how whistleblowers are treated in general. I think we'd do a lot better if we listened to whistleblowers in every aspect of society. The fact that they have been treated badly in every aspect has absolutely damaged security. Disclosure is not about making the company look bad.

This is their job as researchers. We do better when we work together, and retaliating against people for these things does not make us better. Security researchers are not trying to be whistleblowers, but the whole paradigm of how whistleblowers have been treated has absolutely infected the way these companies treat security researchers."

💡Q: Anything else you'd like to add?


PrivacyLawyerD:

"If you’re a company out there, and somebody discloses a vulnerability to you—treat them seriously with respect. They're doing you a favor. If you get defensive, that just looks bad on you. Nobody wants to admit they have vulnerabilities when a company’s sole job is to provide security protection etc.

Ultimately, by accepting the work of these researchers and abiding by the terms of your Bug Bounty programs, you are sustaining a healthy security ecosystem and infrastructure. Having that healthy ecosystem is what's going to protect more of us in the long run.


If you don't have a Bug Bounty program, please set one up. If you do set one up, please follow its terms ethically, just as your own program promises."

💡Q: What should a security researcher do if a company has a Bug Bounty program and doesn't abide by it?

PrivacyLawyerD:

"It puts the security researcher in a difficult situation so it’s a tough question. I can't speak to what they should do legally, but I will say my opinion. I think that ethically if a security researcher tries to go to an entity with a found vulnerability, and that entity tries to patch on the downlow or doesn't abide by the terms of their own Bug Bounty program... then ethically, I don't see anything wrong with the researcher publishing something (maybe not everything) because you still don't want to release vulnerabilities while you can help it.

Yet I don't see anything wrong with them publishing what they did and what they found if this company, this entity, didn't abide by the terms of its own Bug Bounty program. What they should do legally depends on the situation and the facts at hand.

Ethically, they should disclose, or at least publish their own research, in a way that is ideally minimal and not harmful to the entity's security posture. And PR be damned if the company doesn't abide by its Bug Bounty program, because that will make the company look bad and look foolish. But what they should do legally is a situation-by-situation issue.

There's one more thing that really upsets me. Some Bug Bounty programs include a clause stating that if the researcher receives payment, they don't own the intellectual property from the research they did. I think that is both selfish and shortsighted, because if you limit the ability of security researchers to publish in a way that is respectful and safe for the entity with the vulnerability, then there's little reason to have a Bug Bounty program in the first place.

Don't try to do these things on the cheap. Yes, Bug Bounty programs can be expensive, but breaches are a hell of a lot more expensive.

Treat 👏 security 👏 researchers 👏 with 👏 respect, and you’ll have a strong ecosystem overall."


👉 If you enjoyed this article, please follow me on Twitter @marsgroves_

👉 Please follow the excellent interviewees on Twitter and show them support as well: Alyssa Miller (@AlyssaM_InfoSec), Bryan McAninch (@bryanmcaninch), Phillip Wylie (@PhillipWylie), Alberto Daniel Hill (@ADanielHill), D (@PrivacyLawyerD).

Happy Hacking y'all!