
After discovering a low-level bug in a competitor's code, I was fired by the company and taken to court

Compiled by | Nuclear Coke, Tina

Is this just "a big storm in an otherwise ordinary life"?

Yesterday, a developer shared a recent experience on Hacker News. Out of curiosity, he looked at the code shipped by a company where some friends of his work and found a very low-level error in it. Since security was at stake, he immediately reported it to his supervisor and his own company. Unfortunately, he was fired shortly afterwards over this very security incident, and after the firing he also received a court summons, so now he has to fight a lawsuit as well. He couldn't understand it, insisting that he "hadn't done anything illegal", but he shared the story anonymously as a cautionary tale for others...

In reality, however, things were not as simple as he had imagined.

What happened: the developer's own account

Looking at the code and reporting the vulnerability

Some time ago, I worked as a software engineer at a bank. My job had nothing to do with information security, but I know some of the inside workings of the business and have always been interested in security. At work, I learned that another company was about to start issuing credit cards, a launch that would make them a direct competitor of our bank. Naturally I was curious about their plans, and I happened to have a few acquaintances working over there.

After seeing some of the card-issuing content they had shipped in their production app, I downloaded it and planned to dig out the assets behind the feature (which is actually very simple: just unzip the .ipa file and look for the images and text). Unexpectedly, I found that it contained a large number of server mocks, presumably left over from a debugging build. To figure out how the app used these resources, I set up Charles Proxy and tried accessing the app from my phone. Honestly, I only meant to give it a quick try; after all, who doesn't use SSL pinning these days? (The company issues credit cards and accepts card payments, so you would expect at least a little security awareness.) But it turned out they didn't use any SSL pinning at all, and I could use the mocks I had found to dig further into the app.
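For context, an .ipa file is just a ZIP archive, so pulling out the bundled assets really is that simple. Below is a minimal sketch of the idea in Python; the file name and the asset extensions are hypothetical placeholders, not details from the author's account.

```python
import zipfile
from pathlib import Path

# An .ipa is a ZIP archive; the app bundle lives under Payload/<AppName>.app/.
# "competitor.ipa" and the output directory are hypothetical placeholders.
ipa_path = Path("competitor.ipa")
out_dir = Path("extracted_ipa")

with zipfile.ZipFile(ipa_path) as ipa:
    ipa.extractall(out_dir)
    # List image and text-like assets bundled with the app,
    # e.g. leftover server mocks shipped as JSON files.
    for name in ipa.namelist():
        if name.endswith((".png", ".jpg", ".json", ".strings")):
            print(name)
```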

I signed in to the app with my personal account and watched the requests to work out which endpoints needed to be mapped to which local mocks. Surely I would run into unauthorized endpoints and HTTP 403 errors, right? One of the endpoints returned true/false for whether the credit card module was available, and I mapped it to a local file that always returns true.
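The author used Charles Proxy's Map Local feature for this step. As a rough illustration of the same idea, here is a sketch using a mitmproxy addon instead of Charles; the endpoint path and the JSON field are made-up placeholders, not the real API.

```python
# feature_flag_override.py -- run with: mitmproxy -s feature_flag_override.py
# Sketch of "map local" behaviour: answer a hypothetical feature-flag
# endpoint locally so it always reports the credit card module as enabled.
# "/v1/features/credit-card" and the {"enabled": true} shape are invented.
import json

from mitmproxy import http


def request(flow: http.HTTPFlow) -> None:
    if flow.request.path.startswith("/v1/features/credit-card"):
        # Setting flow.response in the request hook short-circuits the call,
        # so the real server is never asked.
        flow.response = http.Response.make(
            200,
            json.dumps({"enabled": True}).encode(),
            {"Content-Type": "application/json"},
        )
```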

That got me to the card-opening guide section of their credit card feature fairly quickly, and I wondered whether I could go further and reach the credit card feature's home page. I then noticed that many of the mocks in the .ipa used exactly the same endpoints I had already mapped, so I soon found the credit card home page. And there I found something even stranger... a name I had never seen in any of the mocks. Checking in Charles, I realized it came from an API I hadn't mapped at all...

Using a mock, I specified a card ID... the app then requested that numeric ID from the server. Since I was logged in as a user with no card, I should have been denied access. But... there it was, displayed in clear text.

Based on my experience on credit card projects, my first guess was that they were running some kind of test flow in their production environment, because what I was looking at seemed like a test setup that only employees (the other company has a small, specialized team) were supposed to reach. "Maybe if I keep pulling this thread, I'll just end up at a static file." Driven by curiosity, I decided to request a different ID, and this time got back another card number and name. As I kept poking around, I came to realize that these were real card numbers being served in clear text to a logged-in user. It was outrageous... there were so many mistakes that I couldn't imagine such low-level errors happening at my own company. But that was what they had actually shipped.
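The class of bug described here is commonly called an insecure direct object reference (IDOR): the server returns whatever record ID the client asks for without checking that the caller actually owns it. The sketch below shows the pattern in a hypothetical Flask service; none of the names, routes, or data correspond to the real API, and the card numbers are standard test values.

```python
# Hypothetical sketch of the bug class (IDOR), not the real service.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Pretend data store: card_id -> record, each owned by some user.
CARDS = {
    101: {"owner_id": 7, "pan": "4111 1111 1111 1111", "name": "A. Example"},
    102: {"owner_id": 8, "pan": "5500 0000 0000 0004", "name": "B. Example"},
}


def current_user_id() -> int:
    # Stand-in for real session/authentication handling.
    return 7


@app.route("/cards/<int:card_id>")
def get_card(card_id: int):
    card = CARDS.get(card_id)
    if card is None:
        abort(404)
    # The vulnerable version stops here and returns any card to any
    # logged-in user. The missing ownership check is the whole bug:
    #     if card["owner_id"] != current_user_id():
    #         abort(403)
    return jsonify(card)
```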

At this point I decided I absolutely had to report the security issue to them; they might not realize the risk of having deployed all this to production. But after some thought, I decided to give my then-employer a heads-up first. It definitely wasn't about showing off; I was worried that if this competitor happened to run a bug bounty program, it might look a little strange for someone in my position to collect a bounty from them.

Besides, our company might be able to talk directly to the responsible team at the other company; after all, I had no idea who over there I should contact. So the next day I told my manager about it, and she reported it to the CISO. Over the following days she kept following up and said the finding would definitely be disclosed. Some colleagues from the information security department also got involved and warned that disclosure carries its own risks, since many companies don't take this kind of thing well...

Then came the trouble: not only fired, but also taken to court

After that, nothing happened... I began to wonder whether the finding even mattered. The problem itself was serious, but the other company might already have known about it and decided to take the risk while they finished other features first.

A few months later, my manager called me at work one day and asked me to come to an impromptu meeting. Walking into the conference room, I found an HR representative, company executives, and my manager all waiting. The outcome of the meeting was that I was fired, because the other company claimed I had accessed some of their internal APIs. Yes, I had accessed one, and it was the one I disclosed; my manager knew that, and so did the three other executives from our bank. After hearing me out, the people in the room relayed the other company's position: that I had accessed those internal APIs several more times after reporting the issue (which I had not).

Honestly, the other side seems to be doing this to paint my behavior as some kind of corporate espionage, and our bank fired me to show there was no espionage on its part. But I acted purely out of personal curiosity, and now everything has become this complicated...

A few weeks later, a police officer showed up at my door with a court summons. They didn't say why, but given what had happened, it had to be related to the security problem I found. After talking to an independent lawyer who understands the technology, I learned that the case had been filed as suspected credit card fraud: the plaintiff claimed I had made multiple transactions with dozens of credit cards. That instantly explained why I was fired; the allegation is so severe that no bank wants to employ someone accused of fraud.

I never made any transactions with those card numbers, never disclosed the specific method of obtaining them, and never profited from the data in any way. The allegation is obviously untenable; the supposed "transactions on dozens of cards" won't turn up anywhere in their transaction logs. Still, when I first got the summons, I worried for a while that they would instead go after me for unauthorized access... It wasn't intentional, but I did do it.

Fortunately, they chose to allege credit card fraud instead. It sounds scarier, but I am confident this false accusation will be dismissed.

That's basically where things stand. Life goes on: I've found another job and hired a lawyer to help me deal with this farce. Once the dust settles, I thought it would be worth sharing the story anonymously. Honestly, from an outsider's point of view, this really is a big storm in an otherwise ordinary life.

A security expert's take: please put away your curiosity

After the anonymous post went up, it sparked a heated discussion and drew hundreds of comments on Hacker News within a short time.

Judging from the post, the author doesn't see his own behavior as criminal and believes the other company has it wrong: although he probed their API, he never used those card numbers to make transactions, so there was no fraud.

Some commenters thought the dismissal was unreasonable; others felt the author was shirking responsibility. Others argued that modern software is simply in a bad state: since the shift to modern stacks, more and more of these companies' APIs are written in JavaScript (no offense to the language), whether or not they are meant to be exposed externally. So when exploring a partner's or customer's (internal) API, it is normal to stumble on badly broken errors and vulnerabilities. Between companies, and between companies and employees, there is too little honest communication, too little trust, and too little cooperation to make things better.

At this point, a security expert posting under the handle "tptacek" weighed in.

tptacek is a security researcher and software developer with nearly three decades of experience in the field, a reviewer for several security conferences, and one of the three founders of Matasano Security, once the largest software security firm in the United States.

He pointed out: "What the author did was definitely not entirely legal. Some people may think I'm engaging in 'victim blaming', but I just want to make a plain judgment from my own point of view, because the idea that this was innocent can mislead others. Under the CFAA (Computer Fraud and Abuse Act), there is no exception for 'reasonable testing' from the outside. The author's problem is that he went looking for vulnerabilities on someone else's service without permission. You don't have the right to do that. Some companies won't care, but most certainly won't tolerate this behavior, and the law is on the side of the ones that don't. The author will most likely be cleared in the end, but the legal fees alone are probably going to hurt."

This kind of thing happens from time to time; even a clearly published bug bounty program can sometimes put technical people at risk. Such programs generally define a scope, allowing testing only within certain bounds, and many people get into trouble because they aren't alert enough to where those bounds lie. Crossing them without prior authorization can easily turn into a legal dispute, which is exactly what happened to the author.

"So I hope you can learn to be good, and the next time you encounter a clear text credit card number popping up, remember to stop and don't go any further." Put away your damn curiosity and think about where someone else's bottom line is. ”

The longer the discussion went on, the clearer it became that the author had done more than just "look at the source code": he had also taken apart a competitor's application and reverse engineered his way around its private APIs. That planted a liability "landmine", and he should have consulted a lawyer immediately. On top of that, he left no written record when he reported the issue to his company...

Ignorance of the law is no excuse, and software developers who don't work in information security also need to understand where the limits are. Public security research is necessary, but a vulnerability itself is a "hot potato": vulnerability handling depends on cooperation from the people who find the bugs, yet probing for and reporting vulnerabilities carries real legal risk, and researchers have to take care not to cross legal red lines. In China, developers with a weak sense of the law run into trouble this way time and again:

In July 2019, the administrator of a government website reported that the site's mailbox module had repeatedly received abnormal messages and that the site was suspected of having been hacked. A police investigation found that the suspect had used his spare time to run unauthorized penetration tests against the website; his stated goal was to find its vulnerabilities and produce vulnerability reports, "to make some contribution to his hometown".

In May 2019, the internet police in Jieyang found that a suspect surnamed Su was suspected of illegally intruding into computer systems. Further investigation showed that Su had used tools such as the "Royal X" software to scan websites including Nanfang.com for vulnerabilities, then used a weak password to probe the admin backend of the Beijing Hospital of Traditional Chinese Medicine's website, logged in successfully, changed the administrator account's password without authorization, and submitted the site's vulnerabilities to the "Vulnerability Box" platform. By his own account, he did all this only to earn points on the platform, which would be "helpful for his future job search".

The steady stream of such incidents shows that even people who don't work in information security need to understand some basic laws, including the Cybersecurity Law and the Provisions on the Administration of Security Vulnerabilities in Network Products, and to know where the boundaries lie, so they don't end up breaking the law, or even committing a crime, simply because they didn't know any better.

Reference Links:

https://news.ycombinator.com/item?id=30706014&p=1

https://www.secrss.com/articles/13122

http://gd.sina.cn/news/2019-11-12/detail-iihnzhfy8578952.d.html

Regulations on the Security Protection of Computer Information Systems: http://www.gov.cn/flfg/2005-08/06/content_20928.htm

Cybersecurity Law: http://www.cac.gov.cn/2016-11/07/c_1119867116.htm

Provisions on the Administration of Security Vulnerabilities in Network Products: http://www.gov.cn/zhengce/zhengceku/2021-07/14/content_5624965.htm

A Guide for Researchers: Some Legal Risks of Security Research: https://clinic.cyber.harvard.edu/files/2020/10/Security_Researchers_Guide-2.pdf
