Posts tagged ‘DNS’

Should Uncle Sam Mess with the DNS?

There’s a debate going on right now in governmental and technical circles over how best to combat copyright infringement and counterfeiting online. A bill called the PROTECT IP Act would allow the government to secure a court order and then force an ISP to stop resolving an offending domain name to its corresponding IP address. Here’s Ars Technica with a really good overview article.

Image courtesy of the report "Security and Other Technical Concerns Raised by the DNS Filtering Requirements in the PROTECT IP Bill"

This is commonly referred to as “DNS filtering,” and it represents a fundamental change to how the DNS operates. Ideally, DNS seeks to return the exact same IP address every time for any domain name requested anywhere in the world. To oversimplify just a little, this provision of PROTECT IP is a state-sanctioned “man in the middle” attack. Unlike a criminal attack, where the intent is to deceive, users would be presented with information informing them of why they were being blocked from the content they requested.
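To make the mechanism concrete, here’s a toy sketch of what a filtering resolver would do (this is my own illustration, not anything from the bill; all domain names and addresses below are hypothetical, drawn from reserved example ranges):

```python
# Toy sketch of a filtering resolver. Not a real DNS server; the blocklist,
# domains and addresses are all hypothetical.

BLOCKLIST = {"infringing-example.com"}          # domains named in a court order
AUTHORITATIVE = {
    "infringing-example.com": "203.0.113.10",   # the real answer
    "legit-example.org": "198.51.100.7",
}
NOTICE_PAGE_IP = "192.0.2.1"  # where the "you have been blocked" page lives

def resolve(domain: str) -> str:
    """Return the real answer, unless the domain is on the blocklist."""
    if domain in BLOCKLIST:
        return NOTICE_PAGE_IP   # knowingly return a different answer
    return AUTHORITATIVE.get(domain, "NXDOMAIN")
```

That last branch is exactly what makes this resemble a man-in-the-middle: the resolver knowingly returns an answer that differs from what the authoritative server would say.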

The debate around this approach has been fierce, even if confined to tech and policy arenas. It can get pretty techie, but basically boils down to two related but distinct issues — will this approach work, and is it the right approach?

Will It Work?

Paul Vixie is Chairman of the Internet Systems Consortium and has played a big role in how the Internet operates today. In the mid-1990s he pioneered the publication of “blacklists” that network operators could share and use to refuse email traffic from known spammers. In 2010, he proposed something similar for the DNS, called DNS Response Policy Zones, to perform much the same function for DNS queries. Here’s his full description via a CircleID essay last year.

But Vixie says this won’t work as part of PROTECT IP. As I understand it, he feels his approach will only work when network operators (ISPs) and end users agree on what kind of content needs to be filtered: spam, malware, phishing attacks and the like. If users are blocked from content they truly want, there are many ways to bypass DNS filtering. Here’s another CircleID post where he explains his position.
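Vixie’s bypass point is easy to illustrate with the same kind of toy model. Filtering only happens at the resolver the user chooses to ask, so a single settings change routes around it (again, every name and address here is made up for illustration):

```python
# Continuing the toy model: filtering only works if the user actually asks
# the filtered resolver. All names and addresses are hypothetical.

AUTHORITATIVE = {"infringing-example.com": "203.0.113.10"}
BLOCKLIST = {"infringing-example.com"}
NOTICE_PAGE_IP = "192.0.2.1"

def isp_resolver(domain):
    """The domestic ISP resolver, subject to the court order."""
    if domain in BLOCKLIST:
        return NOTICE_PAGE_IP
    return AUTHORITATIVE.get(domain, "NXDOMAIN")

def offshore_resolver(domain):
    """An unfiltered resolver outside the jurisdiction ignores the order."""
    return AUTHORITATIVE.get(domain, "NXDOMAIN")

def lookup(domain, resolver=isp_resolver):
    return resolver(domain)

# One change to OS network settings defeats the filter:
blocked = lookup("infringing-example.com")                     # notice page
real = lookup("infringing-example.com", offshore_resolver)     # real answer
```

If the user truly wants the content, nothing stops them from pointing their machine at the second resolver, which is the heart of Vixie’s objection.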

Is This the Right Approach?

In May Vixie and four other DNS experts authored a report raising security and technical objections to the filtering provisions in PROTECT IP. Their concerns are numerous, but focus mainly on conflicts with DNSSEC and the fragmentation of the current DNS addressing system, potentially leading (somewhat paradoxically) to a more dangerous and crime-filled cyberspace.

Not so fast, says George Ou of HighTech Forum. Here’s a long post in which he debunks the positions of the report’s authors, even though he acknowledges that users will be able to bypass the filtering. And last week he participated in a debate in which he claims report author Steve Crocker refused to answer questions about how filtering interferes with DNSSEC.

It’s impossible to capture all the points in one blog post — I hope all the links above give any reader who wants more info lots of options. One thing I haven’t read from any expert is the danger of government abuse of DNS filtering. Once the practice is sanctioned, isn’t it possible this could happen without a court order?

You don’t need to dabble in conspiracy theories to think so. Just a few years ago there were massive wiretaps conducted illegally by the government in the aftermath of 9/11. A legal process under FISA was in place for court-ordered surveillance, but the government simply chose to ignore it. Under intense pressure, every ISP and telco except Qwest caved and handed over the information.

Once a technical work-around is established, more uses will be found for it. A very smart guy who has been involved in Internet infrastructure issues for decades warned me about this when Internationalized Domain Names (IDNs) were first implemented back in the early 2000s. Once the redirect genie was out of the bottle, there was no way to put it back in.

I’ve worked on DNS issues for a long time, but I’m not an engineer. Nor am I a fan of copyright and patent violations in cyberspace. I don’t think people have a “right” to whatever they want or can find online.

I do believe that the more visibility this debate has, the better. A public and transparent debate gives us the best chance of finding a compromise that is good public policy and good for the Internet at large.

July 28, 2011 at 8:28 am 1 comment

The Dark Internet

I consult on communication issues for Neustar, an Internet infrastructure company. Neustar works behind the scenes to ensure the smooth operation of many critical systems like DNS, the .us and .biz domain extensions, local number portability and digital rights management.

One of the cool things about working for them is the chance to attend the events they sponsor. Last week Neustar sponsored a security briefing for senior federal IT personnel focused on Cybersecurity and Domain Name System Security Extensions (DNSSEC). The speakers were Rodney Joffe, SVP and Senior Technologist at Neustar; Merike Kaeo, founder of Double Shot security and a prominent security expert; and Edward Lewis, a Director at Neustar and author of numerous RFCs dealing with DNS and DNSSEC.

What they all described was very sobering. Bottom line, there are fundamental protocols of the Internet that were not designed to be secure. And there is only so much anyone can do to protect themselves.

There’s no way I can communicate all the material presented in a single post — I’m just not that good a note-taker. But I can share how they framed the escalating security threats.

Merike led off the presentations. She grouped threats into four categories — Protocol Errors, Software Bugs, Active Attacks and Configuration Mistakes. Here’s how she charted the evolution of online threats:

In the Past – Deliberate malware was rare, bugs were just bugs, mitigation was trial by fire and the regulatory structure did not exist.

Today – Highly organized criminals are designing specific malware, bugs are now avenues for attack, mitigation is understood but deployment issues remain, and regulators struggle to assess the reach and impact of cybercrime, though global coordination is much better.

She also shared some interesting insights into the cyber attacks in Estonia in May of 2007. Merike is Estonian and was in the country at that time. She shared how cyber literate the population is in that country, and how they fended off the attacks far better than media reports indicated.

Rodney titled his presentation “Black Swans and Other Phish,” a reference to the Nassim Taleb theory, not the new Natalie Portman movie. His overall message was that the miscreant of the distant hacking past became the spammer of yesterday, and the spammer became the hardcore online criminal of today, hired by organized crime and nation-states alike.

Some other interesting points for me:

  • DDoS attacks first arose to attack anti-spam efforts
  • Malware specifically designed to steal personal information and credentials appeared around 2005
  • In 2007 nation states got into the dark game

In an effective demonstration, Rodney brought up a false FBI web site by typing in an IP address. The cache had been poisoned, and that morning a fake web site was announcing to the world that it was the real site of the FBI. Many in the room were clearly surprised by how easy it is to poison the cache for such a high-profile government site.

Rodney also talked about the need for better information sharing between government and private networks. (Actually, he said government shares nothing, so anything would be an improvement.) Neustar will be launching a new service soon that will offer agencies full visibility OUTSIDE their networks, and analysis based on actual packet inspection, not just sampling. This gives them a dashboard so they can monitor, understand and then (hopefully) mitigate.

There was no mistaking Ed as the engineer of the group, in his jeans and flannel shirt. He’s also one of the foremost DNSSEC experts in the world, and he feels there is finally consensus around a critical point: people are realizing that the cost of implementing DNSSEC pales in comparison to the cost of not implementing it.

The biggest challenge of DNSSEC is not the signing, it’s the key management. The more or less final version of DNSSEC has been ready since 2004, and got a huge visibility boost with Dan Kaminsky’s revelations on DNS vulnerabilities in the summer of 2008. That same year, OMB mandated DNSSEC for the .gov domain.

Ed sees that as a good first step, although it doesn’t address the security of others caching .gov IPs. There’s still a lot of work to be done, but Ed is a lot more confident than he used to be. First, because of the cost question mentioned above. Second, because the security problem is real. Finally, because there is no better solution to the problem.

He also cautioned the government audience to focus on the right end goal. The goal is a secure DNS, not a deployment to meet a mandate.
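Ed’s point that key management, not signing, is the hard part can be sketched loosely. Real DNSSEC uses public-key signatures (DNSKEY, RRSIG and DS records); the HMAC below is just a stand-in of my own to show why rolling a key without breaking validators is the tricky step:

```python
import hashlib
import hmac

# Loose stand-in for DNSSEC. Real DNSSEC uses public-key cryptography;
# an HMAC here simply illustrates the sign/verify workflow.

def sign_record(record: str, key: bytes) -> str:
    """'Sign' a DNS record with a zone key (stand-in for an RRSIG)."""
    return hmac.new(key, record.encode(), hashlib.sha256).hexdigest()

def verify(record: str, sig: str, trusted_key: bytes) -> bool:
    """A validator checks the signature against the key it trusts."""
    return hmac.compare_digest(sig, sign_record(record, trusted_key))

old_key, new_key = b"zone-key-2010", b"zone-key-2011"  # hypothetical keys
record = "www.example.gov. A 192.0.2.50"               # hypothetical record

sig = sign_record(record, old_key)
ok = verify(record, sig, old_key)          # validator trusts old key: passes

# The hard part: if the zone re-signs with new_key before validators have
# been told to trust it, every lookup fails validation.
new_sig = sign_record(record, new_key)
broken = verify(record, new_sig, old_key)  # stale trust: validation fails
```

Signing is the one-line call; the operational work is scheduling key rollovers so the “broken” case above never happens in production.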

I left the briefing a lot smarter on this topic, and a lot more worried. There seems to be more official recognition of online dangers, and one of the presenters referenced the fact that Janet Napolitano has announced she wants to hire 1,000 cybersecurity professionals over the next three years.

But it was also mentioned the Chinese government is training 10,000-20,000 cybersecurity students per year in their national defense universities. The land where the Internet was invented is starting from behind in this race. We’d better start sprinting!

January 25, 2011 at 8:53 am 3 comments

‘Tis the Season for DNS Innovation

Look out below!

The first two weeks in December haven’t just been the glide path to Christmas this year. 2009 has belatedly turned into a year of innovation and business moves around the Domain Name System (DNS), a vital but under-appreciated protocol essential for the proper functioning of the Internet. All this news may finally propel awareness of DNS beyond strictly technical circles.

First off, let’s start with the big Internet whale. On 12/3 Google announced they were offering Google Public DNS, a free service that allows anyone to use DNS supplied by Google. The company already controls about 65% of online advertising, so why not control the on-ramp millions of consumers use to get online? Here’s TechCrunch’s take, just one story among much coverage.

When Google muscles into a market, they create a lot of waves. As TechCrunch points out, a startup called OpenDNS has been successful in the recursive DNS space. (quick DNS tidbit: recursive DNS servers chase down and return answers for end users, while authoritative DNS servers hold the official records) Both services are free — OpenDNS makes money by presenting ads to users who type incorrect URLs, or domains that don’t exist. Google says it won’t do this, and their “pure” DNS will deliver a “not found” response. (Google proudly saying no ads served — nice irony)

While this was going on, my client Neustar and Infoblox announced a strategic relationship. To oversimplify a bit for clarity, both are leaders in authoritative DNS. Neustar is the number one provider via the cloud, offering DNS as a managed service. Infoblox offers DNS management via an appliance approach, a “box,” if you will, that resides in the customer’s network. In the past these two have been competitors; now, by working together, they have created a potent one-stop shop for ISPs and top Internet brands. Here’s a good piece by Carolyn Marsan of Network World on the partnership.

Now back to OpenDNS. On 12/10 Neustar announced the launch of its Real-Time Directory service, the first fundamental change in how the DNS operates in about 20 years. Basically it allows changes made to DNS records to propagate almost instantaneously, rather than waiting up to a full day for servers to ask for changes (a delay caused by caching). OpenDNS is the first recursive provider to sign up, making their DNS better than Google’s. Take that, you “do no evil” bullies! Here’s Cade Metz of The Register with a good synopsis.
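The caching delay is easy to sketch. A recursive server holds each answer for its TTL (often up to a full day) and won’t re-ask the authoritative server until the TTL expires, which is why record changes propagate so slowly (this is my own illustration, with hypothetical names and addresses):

```python
# Minimal sketch of why DNS changes propagate slowly: recursive servers
# cache each answer for its TTL and won't re-ask until it expires.

class RecursiveCache:
    def __init__(self):
        self._cache = {}  # domain -> (ip, expiry_time)

    def resolve(self, domain, authoritative, ttl=86400, now=0):
        """Serve from cache while fresh; otherwise ask the authoritative side."""
        hit = self._cache.get(domain)
        if hit and hit[1] > now:
            return hit[0]              # cached (possibly stale) answer
        ip = authoritative[domain]     # go ask the authoritative server
        self._cache[domain] = (ip, now + ttl)
        return ip

auth = {"example.com": "198.51.100.7"}
cache = RecursiveCache()
cache.resolve("example.com", auth, now=0)            # answer cached at t=0
auth["example.com"] = "203.0.113.99"                 # owner changes the record
stale = cache.resolve("example.com", auth, now=3600)
# One hour later the cache still hands out the old address; only after the
# 86,400-second TTL expires does the new one appear.
```

As I understand it, the Real-Time Directory lets participating recursive providers learn of changes without waiting out the TTL; the exact mechanism is Neustar’s, and this sketch only shows the problem it solves.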

So it’s been quite a couple of weeks for old DNS. And very exciting for me, and not just because my client is right in the middle of these developments. DNS is a protocol that really needs the attention, as was made plain by the Kaminsky vulnerability in July of 2008. Now it’s getting it, and with Google in the mix more media may pay attention.

So what’s next for DNS? I can’t say, but a great by-product of Google offering DNS is that it makes the big ISPs take a harder look at how they do DNS. Up to now it’s been an afterthought; maybe now they will focus on ways to make the recursive DNS they provide to millions of consumers more reliable and secure. And that’s something to be thankful for as you purchase your gifts online this holiday season.


December 15, 2009 at 10:37 pm Leave a comment

Combining to Confront Conficker

Microsoft reached back to the days of the Old West last Thursday to battle an online worm that has infected millions of computers worldwide. It put out a bounty and assembled a “posse” to catch the bad guys.

Microsoft announced a $250,000 reward for information leading to the arrest and conviction of the author(s) of the Conficker worm, also known as Downadup. The worm first appeared late last year and has multiple ways to infect machines running Windows. Estimates range as high as 12 million computers infected, and the infections have the potential to create a gigantic “botnet” out of those machines. This could be used to distribute malware and spam or to launch Distributed Denial of Service (DDoS) attacks. A patch was released by Microsoft in October, but the worm has continued to spread rapidly.

The company also announced a large group of firms working together to combat Conficker. The group is made up of leading security firms, the Internet Corporation for Assigned Names and Numbers (ICANN), registries and leading operators of the Domain Name System (DNS).

Here’s a roundup of coverage from Computerworld, PC World, InformationWeek and the Washington Post.

The posse was created to head the worm off at the pass, so to speak. The worm seeks to update itself using seemingly random lists of domain names it checks for new code. The algorithm used to generate those domains has been cracked by Finnish cyber security firm F-Secure. Now the companies can pre-register the domain names, preventing the worm from updating itself. And computers infected with the worm can be identified when they check in. This contains the growth of the worm, although it does not eradicate it.
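The pre-registration tactic is easy to sketch. The code below is emphatically not Conficker’s actual algorithm, just an illustrative date-seeded domain-generation algorithm (DGA) of my own: once defenders crack the algorithm, they can compute the same daily list the worm does and register every name first.

```python
import hashlib
from datetime import date

# Illustrative domain-generation algorithm. This is NOT Conficker's real
# algorithm; it just shows the idea that the domain list is deterministic,
# so anyone who knows the algorithm can reproduce it for any date.

def daily_domains(day, count=5):
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        name = hashlib.sha256(seed).hexdigest()[:10]  # pseudo-random label
        domains.append(name + ".example")             # hypothetical TLD
    return domains

# The worm computes today's list to find its update server; defenders who
# cracked the algorithm compute the identical list and pre-register it.
worm_list = daily_domains(date(2009, 2, 16))
defender_list = daily_domains(date(2009, 2, 16))
```

Since both sides derive the list from the same seed, pre-registration starves the worm of update servers, and any machine that calls in to those domains outs itself as infected.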

Here’s a detailed description from Jose Nazario of Arbor Networks.

This is an encouraging example of industry working together to combat a common threat — much like the coordination around the DNS flaw identified by Dan Kaminsky in July of last year. Hopefully this group can remain organized in some form and continue to fight the increasingly sophisticated attacks looking to exploit the distributed nature of Internet infrastructure.

UPDATE – Network World reports that the bad guys have released a new variant of the worm.


February 16, 2009 at 7:32 am 1 comment

Kaminsky “Officially” Reveals DNS Flaw at Black Hat

Dan Kaminsky has had quite a month. Early in July, it was announced that months earlier he had discovered a major security problem with DNS, the addressing system of the Internet. But he didn’t make the news public. Instead he worked for months behind the scenes with major technology providers so patches could be programmed and made available.

He wanted to give companies a full month to implement steps to protect their recursive nameservers. Then he promised to reveal all during an address today at the Black Hat security conference in Las Vegas.

But it didn’t quite work out that way. Details of the vulnerability leaked out on July 22nd, stealing some of Dan’s thunder. But from all reports the presentation was jam packed, and Dan was shown the appreciation he deserved as he detailed the seriousness of the problem. Joe Menn from LA Times:

He called the problem the worst discovered since 1997. The standing-room only crowd gave Kaminsky two ovations, in part for the technical significance of the find and in part for his handling of the crisis. Microsoft, Google, Yahoo, Facebook, MySpace, EBay and many Internet service providers have secured their machines.

“We got lucky with this bug,” Kaminsky said in his talk, saying other profound flaws are lurking that will be just as hard to resolve. “We have to have disaster-recovery planning. The 90-days-to-fix-it thing isn’t going to fly.”

Interestingly, few of the articles on this problem ask: what now? The patches greatly reduce the danger that this flaw could be used for DNS cache poisoning attacks, but they don’t eliminate it entirely. Many are touting DNSSEC as the ultimate answer, but that is years away in a best-case scenario. Even after the final nameserver is patched against this latest threat, the issue of DNS security will remain critical. Too many things — cloud computing, SaaS, ecommerce, wireless NAC, VoIP — depend on reliable DNS for the status quo to continue. “Patched” isn’t good enough — DNS needs to be fixed.

August 6, 2008 at 10:32 pm Leave a comment

It’s Tuesday — Must Be Time to Fix DNS

Tuesday a big story broke that could have impacted millions of web users. A researcher discovered a major security flaw involving the Domain Name System (DNS), and instead of selling the information or using it to market himself, he went to major Internet vendors and discussed the vulnerability with them. Today Microsoft, Cisco, Sun and the Internet Systems Consortium (maintainer of BIND) issued patches for the problem, before the bad guys could exploit it. Good report from Rob Vamosi of CNET:

Dan Kaminsky, director of penetration testing services for IO Active, found the DNS flaw earlier this year. Rather than sell the vulnerability, as some researchers have done, Kaminsky decided instead to gather the affected parties and discuss it with them first. Without disclosing any technical details, he said, “the severity is shown by the number of people who’ve gotten onboard with this patch.”

He declined to name the flaw, as that would give away details.

On March 31, Kaminsky said 16 researchers gathered at Microsoft to see whether they understood what was going on, as well as what would be a fix to affect the greatest number of people worldwide, and when they would issue this fix.

Here’s a description straight from Dan himself off his DoxPara Research blog:

I’m pretty proud of what we accomplished here. We got Windows. We got Cisco IOS. We got Nominum. We got BIND 9, and when we couldn’t get BIND 8, we got Yahoo, the biggest BIND 8 deployment we knew of, to publicly commit to abandoning it entirely.

It was a good day.

For the most technical, here’s the US Computer Emergency Readiness Team (US-CERT) Vulnerability Note, which includes a long list of the vendors affected:

I spoke with a DNS expert I know well for some context around the announcement. He confirmed the magnitude of the potential problem, saying that it puts the majority of web nameservers at risk for DNS cache poisoning. He also noted that the initial reporting portrayed the problem as being with the DNS itself, which is true to some extent.

But BIND and Microsoft nameservers are particularly susceptible to cache poisoning, due to weak randomization of the transaction ID in the queries the recursive server sends out when looking up the proper IP address. Other nameservers, like PowerDNS, are much less at risk.

Here’s how he tried to describe the attack to me in layman’s terms. The attack sends repeated queries for the same resource record (IP address) to the recursive server, leaving multiple queries open at once. Think of these as tickets started but not completed.

Then the attack also sends a flood of answers using spoofed addresses to make it appear they are coming from the legitimate nameserver for that resource record. What the attacker is trying to do is “guess” the source port and transaction ID of the actual, correct response. So the machine asks a server for an IP address, and the attacker floods the server with false answers to that same query, racing to see which answer gets accepted first by the resolver.

Because of weak randomization in many nameservers, the attacker was highly likely to eventually hit on a correct transaction ID, which means the resolver would accept an answer the attacker supplied, not the correct IP address. That false answer would then be cached by the server, and every request for that name would be given the new, fraudulent destination. And users might never know the difference.

This description makes sense, based on this passage from the CNET story that refers to beefed-up randomization:

Kaminsky said he will release details in time for Black Hat 2008, on August 7 and 8 in Las Vegas. However, Microsoft in its security bulletin said its patch “uses strongly random DNS transaction IDs, random sockets for UDP (User Datagram Protocol) queries, and updates the logic used to manage the DNS cache.”

Kaminsky did confirm that the patches released today will increase DNS randomness: “Where we had 16-bit before, we now have 32 bits.”
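That jump from 16 to 32 bits matters enormously in the race described above. Here’s a rough back-of-the-envelope model (my own sketch, not from the article): with only a 16-bit transaction ID, an attacker spoofing thousands of answers has a real chance per attempt; randomizing the source port as well multiplies the guessing space to roughly 2^32.

```python
# Rough odds of winning the cache-poisoning race. Pre-patch, the attacker
# must guess a 16-bit transaction ID (65,536 values); post-patch, the
# randomized UDP source port adds roughly another 16 bits of entropy.

def success_probability(id_bits, spoofed_responses):
    """Chance that at least one spoofed answer matches the open query."""
    space = 2 ** id_bits
    return 1 - (1 - 1 / space) ** spoofed_responses

before = success_probability(16, 10_000)  # roughly a 14% chance per attempt
after = success_probability(32, 10_000)   # vanishingly small per attempt
```

An attacker can repeat attempts, so the patch raises the cost of poisoning rather than eliminating it, which is why the experts quoted here keep pointing at DNSSEC as the real fix.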

Beyond the technology, this is a very heartening story of collaboration and discretion in the name of the greater good. By waiting until Microsoft, BIND and others could issue a patch for this problem before making any public statements, a great deal of online harm was avoided. I’m sure Kaminsky will get the royal treatment at Black Hat, and it sure sounds like he deserves it. Dan, here’s a big thank you from this Internet user.

July 9, 2008 at 1:57 pm Leave a comment

HP and EDS — Hey, You, Come on to My Cloud

Lots of good reporting lately on the $13.9B purchase of EDS by HP. Many are saying it’s the clearest sign yet that cloud computing has fully arrived. Others say the purchase is more about buying market share and becoming the world’s #2 IT outsourcing company, behind IBM. Rob Hof of BusinessWeek has a really good roundup post with some different perspectives:

One question interesting to me is whether a giant company like HP/EDS can make the concept of cloud computing more palatable to the federal market. EDS is the 19th largest contractor to the federal government, with $2.4B worth of business in 2006. The combined company would seem well positioned for even more government work. Here’s Government Executive on the deal:

Security isn’t mentioned in any of the above articles. That’s a good reason the government is cautious about outsourcing infrastructure over the cloud. At the foundation of Internet transport is the DNS, a simple protocol that translates the domain names familiar to us all into the IP addresses machines actually use. It was not originally designed with security in mind, and needs to be “hardened” as more and more critical applications ride along above it.

Here’s an article from yesterday in Government Computer News that makes this point very strongly. What is being described is mandating that the government implement DNSSEC — Domain Name System Security Extensions — although the article doesn’t use the term. DNSSEC allows the digital signing of DNS responses for authenticity, in other words ensuring the reply (IP address) is coming from the right server. This prevents spoofed return addresses and helps defend against DNS cache poisoning and Distributed Denial of Service (DDoS) attacks.
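Here’s a loose sketch of that idea: a validating resolver only accepts answers whose signatures check out, so a spoofed response without a valid signature never makes it into the cache. (Real DNSSEC uses public-key cryptography and a chain of trust; the HMAC below is just an illustrative stand-in of my own, with hypothetical names and addresses.)

```python
import hashlib
import hmac

# Stand-in for DNSSEC validation. Real DNSSEC signs records with the zone's
# private key and validators check them with the published public key; an
# HMAC with a shared key illustrates the same accept/reject decision.

ZONE_KEY = b"example-zone-key"  # hypothetical trusted key for the zone

def sign(record: str) -> str:
    return hmac.new(ZONE_KEY, record.encode(), hashlib.sha256).hexdigest()

def accept_response(record: str, sig: str) -> bool:
    """A validating resolver only caches answers whose signature verifies."""
    return hmac.compare_digest(sig, sign(record))

genuine_record = "www.example.gov. A 192.0.2.50"
genuine = (genuine_record, sign(genuine_record))

# A spoofer can forge the answer but cannot produce a valid signature:
forged = ("www.example.gov. A 203.0.113.66", "deadbeef")
```

The genuine response validates and gets cached; the forged one is dropped, which is exactly how signing blocks the spoofed return addresses described above.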

May 15, 2008 at 1:30 pm Leave a comment


