So here's something awesome. And by "awesome" I mean "kinda shitty."
Back in August, after seeing me on Hak5 talking about hacking Netgear WiFi routers at Black Hat, someone contacted me through my company's website asking if I'd be willing to help out with some hobbyist WiFi router hacking. (As an aside, the company I work for is Tactical Network Solutions, a boutique computer security firm in the US specializing in vulnerability research and advanced exploitation of network infrastructure and related embedded systems.) The person, whom I'll call Will, told me that he and others were trying to root their ISP-provided modem/routers so they could customize them and even use them with other ISPs. Essentially, they wanted to jailbreak them.
I responded to Will that if he would ship me some actual hardware, I'd take a look in my free time, but no promises. I wrote this off as something that happens when you get a few seconds in the Internet's spotlight, and figured I wouldn't hear from Will again. To my surprise, Will shipped me a couple of routers to hack on, so I signed on to their forum and got caught up on their research to date.
I spent about six weeks, part time, poking at the router's firmware. Expecting to find the sort of low-hanging fruit we typically find in embedded gear, I thought this would be an easy win. I was wrong. I ended up taking an onion-like approach, investigating various aspects of the device in multiple passes and gaining a deeper understanding with each one. I finally found an application that I could crash while controlling the CPU's instruction pointer. This was promising: I was confident I could develop an exploit that would yield a root shell and bring us closer to jailbreaking the device.
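For readers unfamiliar with why a crash that controls the instruction pointer matters, here's a toy model of the classic mechanism, a stack buffer overflow that overwrites a saved return address. Everything in it (the frame layout, sizes, and values) is hypothetical for illustration; it is not the actual bug or binary:

```python
# Toy model of a stack smash (hypothetical layout, not the real vulnerability):
# a 16-byte buffer sits below a 4-byte saved frame pointer and a 4-byte saved
# return address, and an unchecked copy writes straight through all of them.

BUF_SIZE = 16
FRAME_SIZE = BUF_SIZE + 4 + 4  # buffer | saved frame pointer | return address

def vulnerable_copy(frame: bytearray, data: bytes) -> None:
    """Copies input with no bounds check, like an unchecked strcpy()."""
    for i, b in enumerate(data):
        frame[i] = b  # silently writes past the buffer into the saved slots

def saved_return_address(frame: bytearray) -> int:
    """Reads the 4 bytes where this toy frame keeps its return address."""
    return int.from_bytes(frame[BUF_SIZE + 4:BUF_SIZE + 8], "little")

frame = bytearray(FRAME_SIZE)
# 20 bytes of filler reach the return-address slot; the last 4 bytes land in it.
payload = b"A" * (BUF_SIZE + 4) + (0xDEADBEEF).to_bytes(4, "little")
vulnerable_copy(frame, payload)
print(hex(saved_return_address(frame)))  # prints 0xdeadbeef
```

When the real function returns, the CPU jumps to whatever address the attacker placed in that slot, which is exactly the control an exploit developer needs to aim execution at shellcode and work toward a root shell.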
I posted an update about the crash to the forum. What happened next was unexpected. A couple of days later, I received an email directly from the ISP's retail head of security. Mind you, this ISP operates communications infrastructure and sells services on multiple continents. This gentleman, whom I'll call "Mr. Flemming," reports to the corporation's board and CEO. My research had caught the attention of this vendor in no small way.
Mr. Flemming expressed concern that the vulnerability I had discovered would impact their customers' security and asked me not to disclose it publicly. Rather, I should share the details of the crash with his technical team so they could fix it. Reasonable, right?
At this point, I should note that we at TNS have never before been contacted by a vendor regarding our research. I know readers will attribute various motives to this vendor for wanting to prevent this bug's public disclosure. I'll just say that this is a large company whose motivations are complex, and I'm sure several of those possible motives factored into their decision.
"Well, this is interesting," I said to our managing partners. They agreed. While Mr. Flemming's message was amicable in tone, the subtext was clear: "We've been watching you. We know who you are. Mr. McGee, you wouldn't like us when we're angry." One of the partners promptly responded to Mr. Flemming's request, via email. The conversation spanned a couple of weeks and several messages. It went something like the following:
Us: We'll provide you with priority access to our research for no charge, with the following terms:
- We plan to release a proof-of-concept exploit after 30 days.
- We want to publish papers and present our research at conferences such as Black Hat.
Them: We want 90 days before public disclosure, and we want veto power over anything you want to publish.
(At this point we looked at each other and said: "Wait...what? We're offering our research to them for no charge, and they're not happy with the terms?")
Us: How about if you reimburse us for our time--we're not trying to profit--and we'll work with you on a longer release window. We reserve the right to publish our findings, though.
Them: We don't pay for research we don't commission. Sorry. Also we'd really like you to let us review in advance (i.e., veto?) anything you plan to publish.
So that's where we stand, and it's frustrating. Vulnerability research is still a nascent field with untested legal ramifications, but this doesn't seem like a completely original problem. I know we're not the only ones forging this path.