Web architect Cricket Liu takes stock of his 30-year career in the Domain Name System
Cricket Liu has been working on the internet since it was still known as the ARPANET.
After starting his career in office automation strategy, he has now spent decades working on the Domain Name System (DNS) and its BIND implementation – a field of expertise he fell into somewhat by chance while working for Hewlett-Packard in the late 1980s.
The Daily Swig recently caught up with Liu, the chief DNS architect at Infoblox, to discuss his career in internetworking and the future development of DNS, which remains one of the web’s key building block technologies more than 30 years since it was first developed.
Daily Swig: How would you explain DNS to a layperson?
Cricket Liu: At the absolute beginner level we tend to use a phone book as a metaphor. But kids these days don’t have any direct experience with phone books.
To the computer-savvy folks I tend to say it’s the distributed naming service for the internet. Almost everyone has some familiarity with domain names. You can mention that they appear in things like email addresses and URLs.
This service is responsible for taking these domain names and mapping them to other sorts of information, like IP addresses.
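As a minimal illustration of that mapping, the Python sketch below asks the system's resolver, which in turn uses DNS, to translate a name into IP addresses; the domain name is just a placeholder.

```python
# Minimal sketch: ask the operating system's resolver (which uses DNS)
# to map a domain name to IP addresses. "example.com" is a placeholder.
import socket

def lookup(name: str) -> list[str]:
    """Return the unique IP addresses that the name resolves to."""
    results = socket.getaddrinfo(name, None)
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    for ip in lookup("example.com"):
        print(ip)
```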
DS: How did you come to start working on DNS?
CL: I really just lucked into DNS. I worked for two summers, when I was at college, for Hewlett-Packard (HP). I started working full-time maybe a week after graduation.
I was working for the department within HP that ran the HP Internet, which in those days was the world’s largest private TCP/IP network.
That was pretty cool, but the part of the department I worked for did office automation strategy. At the time, I was living in San Francisco and was commuting down to HP’s office in Palo Alto with a bunch of friends.
One morning I got a call from one of the bunch of guys I carpooled with and he said, ‘John has had a family emergency and he’s not going to be able to work today. He was registered for a class up in San Francisco. It’s too late for us to get our money back but we could send someone in his place. Do you want to go?’
I said, ‘Sure, if it means I don’t have to come all the way down to Palo Alto’. And then after the fact I thought, ‘What’s the class about?’
It turned out to be a class on DNS, taught by Paul Mockapetris, who’s the father of DNS. He’s a great presenter and really entertaining. I loved it and thought it was fantastic.
It still wasn’t my job to do DNS. That didn’t happen until later.
The Loma Prieta earthquake, which collapsed a section of freeway, struck in October 1989. One of its very minor effects was the flooding of one of HP’s data centers: the earthquake cracked the sprinkler main and flooded the entire facility.
One of the computers that was in that data center was the primary HP.com name server, which is just about as important as it sounds. David, a friend of mine, and I realized we were going to have real problems unless we restored that service. I drove down from San Francisco and we spent pretty much all night restoring that server in a different data center.
HP Labs, which ran the name server, had wanted to give the responsibility for running it to the corporate office for a long time. Now that the server had moved physically into a corporate-run data center, HP Labs said, ‘OK, that’s your problem now’.
My friend, who worked for HP Labs, said ‘Cricket can do it’. It wasn’t some grand plan.
DS: What was it about the topic of DNS that captured your imagination or caught your interest?
CL: There was this global technology, a technology that was rolling across the then-ARPANET and it was still easy to understand. It made sense. The other big network that HP was running at the time was an X.25 network. X.25 never made any sense to me at all, but DNS made some intuitive sense.
DS: What kept you in the area, having come into it almost by accident?
CL: As a technology DNS has evolved a lot. It’s about 35 years old, [but] it’s still showing a lot of life for a middle-aged protocol.
DS: How do you view this ‘middle-aged’ technology?
CL: There’s a tremendous amount going on, even over the last 10 years. For example, the application of DNS to security problems.
The idea of DNS servers as a security tool is relatively new, and it turns out that they’re pretty effective for some applications. That’s exciting. That’s new.
DS: You say DNS is effective for some applications. Can you give some examples of where it’s effective, and others where it maybe doesn’t work so well?
CL: One thing that works quite well is to use DNS servers to enforce filtering based on reputational data. A lot of organizations have databases of domain names that they know are being used maliciously.
DNS servers are the things that resolve those domain names to IP addresses and other information, so they’re an ideal vehicle for making sure you don’t accidentally access those domains.
When devices access those domain names, your DNS server is in an ideal place to tell you which device it was.
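As a rough sketch of the idea, not any particular product's implementation, a resolver-side filter can consult a reputation list before resolving and log which client asked; the blocklist entries and client addresses below are invented for illustration.

```python
# Rough sketch of resolver-side reputation filtering: before resolving a name
# for a client, check it against a blocklist and record which device asked.
# The blocklist contents and client addresses are made up for illustration.
import socket

BLOCKLIST = {"malicious.example", "c2.bad.example"}  # hypothetical reputation data

def handle_query(client_ip: str, qname: str) -> list[str]:
    name = qname.rstrip(".").lower()
    if name in BLOCKLIST or any(name.endswith("." + bad) for bad in BLOCKLIST):
        # The resolver sees the original client, so it can attribute the lookup.
        print(f"BLOCKED: {client_ip} asked for {qname}")
        return []  # a real server would answer NXDOMAIN or a walled-garden address
    return sorted({entry[4][0] for entry in socket.getaddrinfo(name, None)})

print(handle_query("10.1.2.3", "c2.bad.example."))  # blocked, logged with client IP
print(handle_query("10.1.2.4", "example.com."))     # resolved normally
```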
DNS servers aren’t the only thing that can do that. Next-generation firewalls can perform that sort of function, but the NG firewalls can’t hold as many domain names.
They can’t have as large a reputational database. They also have a difficult time ‘triangulating’ – determining which device it was that originally sent the query.
Because of the way DNS works, if you have an infected laptop and it looks up a domain name that is known to be malicious, then by the time the firewall sees that query, which it may or may not realize is a query for a malicious domain name, it thinks the query came from the last DNS server to touch it and not the device that originally sent the query. It’s too late at that point.
DS: What applications are DNS servers not really suitable for?
CL: DNS servers are not really a mechanism for protecting against DDoS attacks.
They can be part of a solution. One of the things that some DDoS mitigation firms do is that when they sense there’s a DDoS attack going on, they may change the configuration of your DNS servers so, for example, your web server IP address changes from its real IP address to that of a cloud-based filtering mechanism.
DNS servers themselves are only a part of that solution. I wouldn’t even say they were the main part.
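One way to picture that record swap is the standard DNS dynamic update mechanism (RFC 2136); the dnspython sketch below repoints a web server's A record at a scrubbing service. Real mitigation providers typically drive this through their own APIs, and the zone, server, and addresses here are hypothetical (authentication such as TSIG is omitted).

```python
# Sketch of the record swap described above, using standard DNS dynamic update
# (RFC 2136) via dnspython. Providers usually do this through their own APIs;
# the zone, server address, and IPs below are hypothetical, and TSIG is omitted.
import dns.query
import dns.rcode
import dns.update

ZONE = "example.com"
PRIMARY_SERVER = "203.0.113.53"        # hypothetical primary authoritative server
SCRUBBING_CENTER_IP = "198.51.100.10"  # hypothetical cloud-based filtering service

def divert_to_scrubbing(record_name: str = "www") -> None:
    """Repoint the web server's A record at the DDoS scrubbing service."""
    update = dns.update.Update(ZONE)
    update.replace(record_name, 60, "A", SCRUBBING_CENTER_IP)  # short TTL for fast failback
    response = dns.query.tcp(update, PRIMARY_SERVER, timeout=5)
    print("update rcode:", dns.rcode.to_text(response.rcode()))

if __name__ == "__main__":
    divert_to_scrubbing()
```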
DS: The DDoS attack on DNS service Dyn rendered many high-profile websites unreachable. Do you think the lessons from that have been learned?
CL: One of the lessons that I hoped we learned from that attack on Dyn was that we were sort of placing all our eggs in one basket.
Most of the organizations that were most affected by the attack on Dyn only used Dyn as a DNS-hosting provider. Those organizations weren’t targeted. Dyn was targeted.
We’ve been saying for a long time that organizations should be running a heterogeneous set of authoritative DNS servers. What that means is to run a couple of your own, or use Dyn and another provider. That way, if Dyn is attacked then you still have some capacity to answer queries.
It sounds easy enough to have a mixed set of DNS servers. The challenge really is that DNS hosting providers like Dyn or Neustar tend to offer value-added services, such as load balancing or traffic management. Those services are non-standard.
It’s very easy to synchronize basic DNS configuration because there’s a mechanism built into DNS, called zone transfers, that will synchronize that data. With this value-added stuff there’s no easy way to synchronize it. That’s the challenge.
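The built-in mechanism Liu refers to is the zone transfer (AXFR). Below is a minimal dnspython sketch that pulls a copy of a zone from a primary server, assuming the primary permits the transfer and using placeholder names and addresses.

```python
# Minimal sketch of the built-in sync mechanism: a zone transfer (AXFR) pulls a
# full copy of a zone from a primary server. The server address and zone name
# are placeholders, and the primary must allow transfers from this client.
import dns.query
import dns.zone

PRIMARY_SERVER = "203.0.113.53"   # hypothetical primary authoritative server
ZONE_NAME = "example.com"

zone = dns.zone.from_xfr(dns.query.xfr(PRIMARY_SERVER, ZONE_NAME))
for name, node in zone.nodes.items():
    for rdataset in node.rdatasets:
        print(name, rdataset)
```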
DS: What are the key technologies coming down the line for DNS within the next two to five years?
CL: We have DNS over TLS (DoT) and DNS over HTTPS (DoH), which are both, I think, interesting attempts to solve the last mile problem in DNS: encryption, or just security in general, between the stub resolver that runs on your device and the recursive DNS server that it communicates with.
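A quick way to see DoH in action is the JSON front end that some public resolvers expose over HTTPS; Cloudflare's resolver is used below purely as an example of such a service.

```python
# Quick look at DNS over HTTPS (DoH): the query below goes to a public DoH
# resolver's JSON endpoint over HTTPS, so the "last mile" lookup is encrypted.
# Cloudflare's resolver is used only as an example of such a service.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```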
There’s an effort called DANE, which stands for DNS-based Authentication of Named Entities. It’s an attempt to solve a long-standing problem with X.509 certificates and trust.
Dan Kaminsky did a study which found that 2,500 organizations were ultimately trusted to issue X.509 certs.
A lot of folks who run IT infrastructure probably wouldn’t choose to trust every single one of those. Moreover, because of the way X.509 works any one of those organizations could issue certificates for any domain name.
You wouldn’t expect, for example, a certification authority based in Africa to issue a cert for Google.com. That would be surprising. But your browser would not recognize that as at all out of the ordinary.
There have been lots of situations where CAs have inadvertently given out certificates that they shouldn’t have – certificates for Microsoft properties, or Google properties, or what-have-you. Those certificates can be used for no good.
DANE gives people who manage DNS data the ability to say, in that DNS data, ‘this is the CA that I use’ or even ‘this is my cert’. Effectively, you can bind that information and make it difficult for folks who are trying to impersonate you, even if they manage to get a falsified cert.
You can combat that, which is, I think, really useful.
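The binding Liu describes lives in TLSA records. The sketch below, a simplification that handles only the common ‘full certificate, SHA-256’ case and uses a placeholder hostname, compares a site's published TLSA record against the certificate the server actually presents; a real validator would also insist on DNSSEC-validated answers.

```python
# Sketch of a DANE check: fetch the TLSA record that binds a certificate to a
# name, then compare it with the certificate the server actually presents.
# Only the common "full certificate, SHA-256 hash" case (selector 0, matching
# type 1) is handled, the hostname is a placeholder, and a real validator
# would also require DNSSEC-validated answers.
import hashlib
import ssl
import dns.resolver

HOST = "example.net"  # hypothetical host that publishes a TLSA record

def dane_matches(host: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    cert_sha256 = hashlib.sha256(der).hexdigest()

    try:
        answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False  # the name publishes no TLSA record

    for rdata in answers:
        if rdata.selector == 0 and rdata.mtype == 1:  # full cert, SHA-256
            if rdata.cert.hex() == cert_sha256:
                return True
    return False

print(dane_matches(HOST))
```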
DS: What’s happened with DNS Security Extensions (DNSSec)?
CL: DNSSec’s adoption has been very mixed. We’ve been working on the protocol for 20 years and people like Dan [Kaminsky] and, to a lesser extent, I have been stumping for it – trying to get people to adopt it.
Up at the very top of the name space – the root zone, the top-level zones – they’re almost all signed now using DNSSec. It’s below that where it’s problematic.
Below the very top it depends on [the] country domain you’re looking under. There are country top-level domains that have very high levels of adoption and they’re not the countries that you’d think of.
Belgium, for example, has a high level. Sweden, Czech Republic – solid double digits. If you look at .com or .net then it’s a tiny fraction of one per cent.
If you look at the other side of DNSSec adoption, the number of companies that do DNSSec validation, then there are similarly low numbers. Comcast, for example, does DNSSec validation but most other large American ISPs don’t do DNSSec validation.
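One way to observe the validation side is to send a query with the DNSSEC-OK bit to a validating resolver and check whether it sets the AD (authenticated data) flag; the resolver and domain below are examples, and the AD bit only means the resolver validated the answer, not the client.

```python
# Sketch of checking DNSSEC validation from the client side: send a query with
# DNSSEC requested to a validating resolver and look for the AD (authenticated
# data) flag in the response. The resolver address is just an example of a
# public validating resolver, and isc.org is an example of a signed domain.
import dns.flags
import dns.message
import dns.query

RESOLVER = "1.1.1.1"  # example public resolver that validates DNSSEC
NAME = "isc.org"      # example of a DNSSEC-signed domain

query = dns.message.make_query(NAME, "A", want_dnssec=True)
response = dns.query.udp(query, RESOLVER, timeout=5)
validated = bool(response.flags & dns.flags.AD)
print(f"{NAME}: AD flag set -> {validated}")
```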
DS: Finally, what projects are you currently working on?
CL: I have projects within Infoblox to do with delivering DNS services from the cloud, which I think are pretty exciting. By moving to the cloud we’ll be able not only to scale, but also to apply some really advanced security analytics.
You can identify all kinds of malicious activity with DNS telemetry – what we call passive DNS data. You take this passive DNS data and you can identify malware domain generation algorithms. You can identify tunneling.
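To give a crude flavour of the DGA detection Liu mentions: algorithmically generated labels tend to look random, so even a simple character-entropy score over passive DNS query names flags them. Real detectors use far richer features and models; the threshold and sample names here are made up.

```python
# Crude flavour of DGA spotting over passive DNS logs: algorithmically
# generated labels tend to look random, so score the character entropy of each
# queried name. Real detectors use far richer features; the threshold and the
# sample names below are invented for illustration.
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_dga(qname: str, threshold: float = 3.5) -> bool:
    label = qname.rstrip(".").split(".")[0].lower()
    return len(label) >= 10 and label_entropy(label) >= threshold

passive_dns_log = ["www.example.com.", "xj4kq9w2zr7tbv5m.example.net."]
for qname in passive_dns_log:
    print(qname, "-> suspicious" if looks_like_dga(qname) else "-> ok")
```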
The great thing is you can identify malicious activities on one customer’s network and then protect other customers from it. It’s pretty cool.