Transcript: Life after IPv4 Exhaustion Plenary

Disclaimer

Due to the difficulties capturing a live speaker's words, it is possible this transcript may contain errors and mistranslations. APNIC accepts no liability for any event or action resulting from the transcripts.

Wednesday, 23 February 2011 at 11:00

APRICOT-APAN 2011.
Hong Kong.
Plenary: Life after IPv4 Exhaustion.
11:00-12:30.
Hall B&C.

Srinivas Chendi: Good morning, APNIC Members and ladies and gentlemen. This is the start of the APNIC sessions, from today onwards.

My name is Srinivas Chendi, I'm the program manager for APNIC. I'd like to welcome you all to APNIC 31 and to Hong Kong.

First up, we start our APNIC sessions with a plenary. You have just watched the video of the ceremony that happened in Miami, where APNIC received the last /8. That marks the beginning of this plenary and leads into the discussions our speakers are going to share with you all.

First up, I would like to thank our sponsor, PHCOLO for sponsoring the plenary. Can we all give them a round of applause, please.

APPLAUSE

Srinivas Chendi: Thank you, PHCOLO. Just the rules of engagement: we are broadcasting this live, with video, audio and a live transcript, and there's also a Jabber room; if anyone wants to ask questions remotely, they can, and those will be relayed to the floor.

For those who are in the room, I would like to ask you to keep your mobile phones on vibrate mode, so they don't ring and disturb the speakers while they're speaking.

Also, when you approach the microphone, to make a comment or to ask questions, state your name and affiliation clearly. The affiliation is optional, but we would like to capture your name. So state clearly and then proceed with your question or comment.

Also, there is a social event tonight by APNIC.

I'll give more details about that towards the end of the session, but before that, I would like to invite the moderator of this APNIC plenary Masato Yamanishi, please, and the speakers Sanjaya from APNIC, Xing Li from CERNET, Geoff Huston from APNIC, and Dr Sheng Jiang from Huawei.

I request Masato-san to just briefly introduce the session and the speakers.

Masato Yamanishi: Thank you very much for introducing us. As everybody will already know, the IANA pool was exhausted at the beginning of this month, and this is the first APNIC plenary after this important milestone. I think it is good timing to consider life after exhaustion again. Of course, we have already considered it, but it's very good timing to consider it again.

That is the reason why we choose this topic as the theme of APNIC plenary.

However, since this is the third session this week about the IPv4 to IPv6 transition, you may be bored of this topic, so we have added some entertainment at the end of this session. To keep enough time for it, I need strict time management in this session.

Maybe somebody remembers what happened when I chaired the last APRICOT in Malaysia: the session ran over and needed another hour. If the same thing happens today, everybody will lose lunch. So please cooperate: if you want to make a comment, please make it short. Thank you very much.

Let me introduce the first panellist: Sanjaya, the APNIC Services Director. He will talk about policy, process and current trends.

Sanjaya: Thank you very much. This is going to be a very short presentation, just to give you a background on the state of IPv4 to IPv6 transition, seen from the registry point of view, seen from APNIC.

Quickly, I'll go through the three stages of IPv4 address depletion at APNIC, what has been happening in the last three years, the policy changes coming up, the practice changes, and the conclusion.

In APNIC 30, Gold Coast, APNIC presented the three stages of IPv4 exhaustion.

Stage 1 is before IANA depletes its IPv4 inventory, which has passed; IANA depleted in February. So we are now in Stage 2, in which IANA has depleted and APNIC is going through its inventory down to the last /8, and Stage 3 is when we reach our last /8 inventory.

Delegation practice may change during each of these stages.

Let's look at the statistics for the three years prior to this point in time.

On the IP address delegation count, I'm comparing the number of IPv4 and IPv6 delegations, that is, how many times we hand out IPv4 and IPv6.

It looks very encouraging: IPv6 delegations used to be less than one-sixth of IPv4 delegations, and in 2010 the ratio came up to half.

On the delegated address size, it's even more encouraging, frankly, because the IPv6 address size is so big. In 2010, we actually delegated 212 million /48 networks, which surpassed even the number of individual IPv4 addresses delegated last year, 121 million.

It's all looking good, people are taking up IPv6, size is no problem.

We also looked at where these IP addresses go in this region, and we tried to break it down into data centers (hosting), wireless access and landline access.

I think you probably would intuit this already: most of the IP addresses in this region go to access providers. Wireless is very strong in South Asia; Southeast Asia is busily deploying ADSL, cable and so on; East Asia, which is more mature, as well as Oceania (New Zealand and Australia), has a balanced deployment of wired and wireless connections. So it's all about access networks.

Then, in considering how we should delegate closer to the end of IPv4, we also looked at whether there has been any change in the speed with which APNIC Hostmasters approve IPv4 requests.

As you can see here, there doesn't seem to be any; it has been quite consistent over the last three years that APNIC Hostmasters, on average, approve IPv4 requests within approximately two weeks.

This also shows that large allocations tend to take a bit longer to approve, medium-size allocations are slightly faster, and small multihoming assignments actually take a bit longer too. But the sweet spot is about 15 days before you get an allocation.

With IPv6, it's, of course, much better, because of the policy changes that have been happening in the IPv6 space that allow us to do less evaluation and evaluate things quickly, so the average IPv6 approval time has actually gone down from approximately 10 days to probably under six days nowadays. So that's the approval time by type and size.

We have a bit of a problem with the large IPv6 allocations because of the policy that is in place now, but I think this is being fixed, you know, as time goes by; we need to know how we should evaluate large IPv6 requests.

There's also an important question: has there been a rush after IANA IPv4 depletion? There doesn't seem to be. I think that's probably thanks to the communication effort by everyone: people know that IPv4 is running out and that IPv6 deployment has to go up.

We don't see any rush behaviour here in people trying to land-grab the remaining IPv4, so it's a good sign.

What do the stats say? The stats simply say the Internet in the Asia Pacific region is still growing at a very accelerated rate, particularly in the access networks. That reflects the population.

Even without the rush, we are still growing because of the population demand.

We predict that Stage 2 will end within three to six months after IANA depletion.

So we are now looking at around June timeframe.

Is there any policy change? There's no change in Stage 1 and 2, so address policy remains the same.

In Stage 3, there will be some policy change, when the APNIC last /8 policy is triggered. The policy is still being refined now; it's going to be discussed this week.

Then, in Stage 2, we are going to do the evaluation slightly differently. We are going to set up a queue because, as at a theater or a movie, if there are limited tickets, then you need to set up a queue. That's what we're doing now: we're setting up a queue. IPv4 request processing is serialized, the response time for requests and correspondence is now set to exactly five business days to maintain the queue, and requests are evaluated by the whole APNIC Hostmaster team.

This is, well, a bit of a complicated diagram, but the point is that the green line there is five business days before a requestor will receive an approval, clarification questions or a declined request, or, if that person happens to be the last one before we hit the last /8, then we would apply the last /8 policy to them. A rough sketch of that serialized flow follows.
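As a sketch only, with a made-up request format (the real evaluation workflow and the last /8 trigger are of course richer than this), the queue behaves something like:

```python
from collections import deque

RESPONSE_DAYS = 5                  # business days to a response of some kind
LAST_SLASH8 = 2 ** 24              # addresses in a /8: the Stage 3 trigger

def process(queue: deque, pool_remaining: int):
    """Serve requests strictly first come, first served."""
    while queue:
        request = queue.popleft()
        if pool_remaining <= LAST_SLASH8:
            yield request["id"], "last /8 policy applies"
        else:
            # evaluated by the whole Hostmaster team within RESPONSE_DAYS:
            # approval, clarification questions, or a declined request
            yield request["id"], "approve / clarify / decline"
            pool_remaining -= request["size"]

q = deque([{"id": 1, "size": 2 ** 16}, {"id": 2, "size": 2 ** 20}])
for outcome in process(q, pool_remaining=2 ** 25):
    print(outcome)
```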

So, in conclusion, we don't see any evidence of a rush for IPv4 after IANA depletion. We have changed the IPv4 delegation practice as of this month, to set up a queue.

Policy changes will happen when we reach Stage 3, approximately around June, but we can't really predict this accurately, because it really depends on the requests coming in. And there is no change in IPv6 delegation practice; we are still using a very streamlined one-click IPv6 request for our Members.

So that's it. Thank you.

Masato Yamanishi: Any comment or questions? No?

OK. Randy, go ahead.

Randy Bush: When we were doing these kinds of measurements previously, we had nice curves of the time of allocation and the actual time the addresses were announced in BGP. It would be very nice if APNIC would do that.

Sanjaya: Thanks, Randy. We'll have that --

Randy Bush: Hi, Geoff.

Pavan Duggal: Quick question. You talked about some policy changes in the last phase. Are you referring to the processes that will come in the policy SIG today or are you referring to a subsequent stage thereafter?

Sanjaya: That's going to be discussed this week.

It's not yet applied today. It will be applied some time this year, when we reach the last /8.

Pavan Duggal: Has it already been formulated, or are we still in the process of working it out?

Sanjaya: It's already formulated, but it's being refined.

Pavan Duggal: Is it available on the website?

Sanjaya: Yes, it is on the website, yes.

Masato Yamanishi: Thank you very much, Sanjaya.

APPLAUSE

Masato Yamanishi: OK. The next panellist is Dr Xing Li of CERNET. He will talk about the IPv4 to v6 transition experience in CERNET and also about the importance of stateless translation between IPv4 and IPv6.

Xing Li: Good morning. OK. Probably you were in yesterday's transition workshop; I will talk about things from a different view. So currently we have IPv4 and IPv6. Let's take a look at the scale of v4 and v6 at the AS level: compared with IPv4, IPv6 is still quite small.

Also, we can see the trend: on the left is the IPv4 AS count, which is 35K currently and increasing linearly. For IPv6, well, it's only 3K; however, it shows some kind of exponential increase, and that's good news for transition.

However, if we take a look at the regional registry counters, APNIC will be the first one to run out of its pool. That's in the next 172 days.

So let's think: can we increase the number of ASes in the real IPv6 routing table from 3K to 35K?

That's a big jump.

OK. Second, can we make 99 per cent of content available over IPv6? It seems very difficult. At least from what I learned yesterday, Google spent 18 months to make its IPv6 service. Well, that's 18 months, but here we have 172 days.

So it seems our task is mission impossible.

It's very difficult.

Let's take another view of IPv6. By the way, I'm from CERNET.

Yesterday, I mentioned that Google spent 18 months to get IPv6 ready. CERNET is an academic network; we started our IPv6 project back in 1998, so we have spent 12 years.

Still, we have not yet reached full IPv6 service.

How about ISPs? Even worse. OK. It seems that with current technology, the core network or backbone is IPv6 ready, no problem. And the DNS is getting there, so it's OK.

How about the content? Not ready. OK. Even on the coming World IPv6 Day, Google, Facebook, Yahoo, all those major, I mean, giants, will try IPv6. Yesterday it was also asked: can we make it not just a single IPv6 day, and after that day just turn off IPv4 and give only IPv6 service? Nobody agrees with that. Right? So content is not ready.

And how about applications? Some are ready.

However, a lot of them, like gaming and others, are not ready; even Skype is not IPv6 ready.

And how about billing systems for ISPs? You need to bill. They're definitely not ready.

And access? It's not ready. People are still debating the CPE, because of the features and requirements for CPE home gateways, and that seems another very difficult, mission impossible task.

And for the hosts, the PCs, like Windows and Mac, are OK. However, how about mobile devices? You know, the iPhone, the recent ones, are some kind of IPv6 ready, but the iPad and all those kinds of things are not ready.

So you can see there is still a long way to go, and we have to make all of them IPv6 ready in 172 days.

Can we do that? That's our job, right?

Another view. OK. We always say that for a new technology, there is an S curve. Currently, we are still at the bottom: many IPv4 servers and a few IPv6 servers. However, when we finish the transition, that will be the other, top side of the S curve: we should have many, many IPv6 servers and a few IPv4 servers.

Well, can we move from the bottom of the S curve to the top in 172 days? Can we do that?

OK. Another view. Who will be most affected by IPv4 address depletion? Actually, the existing IPv4-only users are happy now, until there is some IPv6-only content. That's the day they suffer; today they are happy. So upgrading to dual stack is not urgent for the existing IPv4 users.

On the other hand, actually, when they upgrade to IPv6 or dual stack, that will degrade their experience. That's very, very bad. Based on what I have learned from IETF work, if you run dual stack and use IPv6 in dual stack, the latency or RTT is at least two times higher, because of the DNS and all those kinds of things.

So if I were an existing IPv4 user, I would hesitate to upgrade to dual stack, because my performance would be degraded.

OK. How about the new users? They want Internet connectivity. However, they definitely will not accept the service if they cannot access the global IPv4 Internet. So we need to provide IPv4 service for the new users, even after the depletion of the IPv4 addresses.

Very difficult.

And currently, there are several competing transition technologies, and ISPs, at least in the Asia Pacific region, should make a decision within 172 days. For example, NAT44 or NAT444, which is actually IPv4 plus NAT44 using RFC 1918 addresses, or something called dual stack lite. Actually, you cannot avoid NAT44, because these rely on CGN; in dual stack lite, IPv6 is just some kind of tunnel, so still, actually, it's CGN. This is being worked on in an IETF working group, which I'm working in.

How about IPv6 plus translation? There are two technologies. One is stateless translation; actually, CERNET takes the lead on that, with what we call IVI. By the way, IV means 4 and VI means 6 in Roman numerals, so IVI means interaction between v4 and v6.

Also, there is a stateful version called NAT64. In this venue, you can access the stateful version; you can try it. We provided the stateless version at IETF 79 in Beijing.

OK, how about the standards? There are competing technologies. For translation, the stateless one is what we call IVI, and we have three or four RFCs there already: the first one, RFC 6052, is already published, and RFC 6144, 6145, 6147 and 6146 are in AUTH48 status.

How about dual stack lite? It's still an IETF draft. 6RD, that's another way to provide IPv6 service through IPv4, and NAT444 is actually not even a working group draft. So you can see that translation between IPv4 and IPv6 is actually the most mature standard in the IETF.

Again, OK, let's get back to the discussion. This slide I actually borrowed about 10 years ago from some scientist's presentation: if IPv6 is so great, how come it is not there yet?

That's the chicken-and-egg problem. Applications need up-front investment in the stack, et cetera, and the network needs to ramp up.

So the point here is that IPv4 exhaustion does not really change this. Even without the IPv4 global pool, we still have the chicken-and-egg problem for IPv6, because people can use NAT44 to provide IPv4 Internet connectivity.

So IPv4 NAT versus IPv6, who will win? I can show you our experience.

OK. We are running two networks. One is called CERNET, the China Education and Research Network; it's IPv4, and we have 2,000 universities and 25 million users.

The other is actually an IPv6-only network. We started the CERNET 2 project back in 2004, and we have 200 universities connected and 2 million users.

And our strategy is actually to encourage transition: on CERNET, IPv4 is congested and charged for. That's the promotion strategy we tried back in 2004.

And CERNET 2 is IPv6 only and lightly loaded; it's a 10G native IPv6 backbone, free of charge.

So what we tell our customers is: if you want to use the high quality and free network, port your application to IPv6.

Makes sense, right? And you can see the traffic. IPv4 CERNET is about 170Gbps, and CERNET 2 is about 27Gbps of native IPv6, so it's about 20 per cent of the IPv4 traffic. We have reached the highest ratio of IPv6 to IPv4.

So can we achieve the transition? Well, what is the IPv6 traffic? Mostly video. And I can tell you the real story: those videos cannot be accessed using IPv4.

OK.

If it's available over IPv4, then people will use IPv4.

Anything which cannot be accessed via v4, they will use IPv6 for. If both are available, users prefer to use IPv4, because of the better experience.

The exception is EE and CS students; they like to try, right?

OK. So we will ask: when will the X day be? We have asked our customers: can we turn off CERNET IPv4 and only provide CERNET 2 services? The answer is absolutely no. If there is a single piece of IPv4-only content, it stays a dual stack model. We have almost reached the X day of IPv4 address depletion, but when will the X day of turning off IPv4 be? We don't know.

Another view: killer applications. When we built CERNET 2, we said video would be the killer application.

Later, it seemed YouTube could provide video over IPv4 through NAT; then P2P, however, BitTorrent can do that too.

The Internet of Things? Maybe five years later.

So the current understanding is that intercommunication with the IPv4 Internet should be the killer application of IPv6, right? Translation. So that's the Tower of Babel.

So we invented IVI. We have the IPv4 CERNET and the IPv6-only CERNET 2, and we built a stateless translator in between, so an IPv6-only server can be accessed by global IPv4 users. Because of time, I will not go into the details, but I encourage you to study this if you are interested in translation technology; I'm a co-author of the framework draft.

So there are eight scenarios. You cannot connect the v4 and v6 Internets through a translator without some constraints. It's quite important.

Please study the scenarios.

Then the idea we actually have for stateless translation is this: because the IPv6 address space is huge, if the IPv6 addresses are used randomly, we cannot do stateless translation. So how about choosing a subset of the IPv6 addresses, within your organization's /48 for example, and using that subset? That subset can have a 1:1 mapping relationship with the global IPv4 Internet, so hosts in that subset of IPv6 addresses can communicate with the IPv4 Internet. If you remember what Dr Vint Cerf said in the opening ceremony: when IPv4 was designed, there were only class A addresses, and later class B and class C, and then the introduction of CIDR. So in IPv4, at the beginning, we used a subset of the space and later extended it, and we can try the same approach in IPv6. A sketch of the mapping follows.
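As a minimal sketch of this 1:1 stateless mapping, here is the generic RFC 6052 address format with the well-known /96 prefix; CERNET's IVI deployment embeds the IPv4 address inside its own provider prefix, so the prefix below is purely illustrative:

```python
import ipaddress

# Well-known translation prefix from RFC 6052 (illustrative; an IVI
# operator would use a subset of its own provider prefix instead).
PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def v4_to_v6(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the translation prefix (stateless)."""
    return ipaddress.IPv6Address(
        int(PREFIX.network_address) | int(ipaddress.IPv4Address(v4))
    )

def v6_to_v4(v6: str) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address: the mapping is reversible."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

print(v4_to_v6("192.0.2.33"))          # 64:ff9b::c000:221
print(v6_to_v4("64:ff9b::c000:221"))   # 192.0.2.33
```

Because the mapping is algorithmic in both directions, the translator needs no per-flow state; that is what makes the scheme stateless.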

OK. We actually publish the open source code for the initial implementation IVI stateless translation in this website, back in 2007. You still can try that.

We also have the technology for 1:N mapping, so one public IPv4 address can be used by many IPv6 computers. For example, if the multiplexing ratio is 256, then a /24 of public IPv4 addresses is actually equivalent to a /16, a lot of savings. So if you are a new ISP or provider, you can get a little IPv4 through address trading and open your IPv6 service; the arithmetic is sketched below.
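The sharing arithmetic behind that claim, as a sketch: with a 1:N stateless mapping, each public IPv4 address is shared by N IPv6 hosts, each host getting a disjoint slice of the 65,536 transport ports.

```python
ratio = 256                                # multiplexing ratio in the example
slash24 = 2 ** (32 - 24)                   # 256 addresses in a /24
shared_endpoints = slash24 * ratio         # 65,536 effective endpoints
assert shared_endpoints == 2 ** (32 - 16)  # i.e. equivalent to a /16
ports_per_host = 65536 // ratio            # 256 ports left for each host
print(shared_endpoints, ports_per_host)
```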

If you are an existing APNIC Member, you can rearrange your public IPv4 addresses and use them more effectively.

Also, we have double translation, so we can avoid ALGs in v6. I will not go through the details.

OK. So let's make things easy and simple. For an ISP, we need service continuity: only upgrade the core network to dual stack and keep the existing IPv4 access networks running as usual. Also, minimum customer impact: deploy an IPv6-only data center with 1:1 stateless translation and move content to IPv6 without losing the IPv4 users. And deploy new IPv6-only access networks with 1:N double stateless translation for new customers using shared IPv4 addresses, with incremental investment.

OK. You can get direct returns.

And another remark: dual stack and tunnelling are co-existence technologies. Ten years' experience indicates that we have not achieved the transition via dual stack and tunnelling. OK, let's try translation now. We need a single Internet, not two Internets. And due to the long tail, the transition cannot be achieved in a short time.

Actually, the competition at this moment, we believe, is over what type of translation technology we will use. On the left, actually, is NAT44, or NAT444: that's translation between public IPv4 addresses and RFC 1918 addresses, and at the same time, you can try to build dual stack and connect to the IPv6 Internet.

However, actually, there is no such global IPv6 Internet yet, right? And the performance is bad.

Or we can try translation between IPv4 and IPv6 through stateless and stateful translators; then we can build IPv6-only networks. That's the bottom-up approach. As more and more of those IPv6-only networks grow up, there will be a global IPv6 Internet connection, and in that case you do not have the limitations of IPv4 address sharing, for example the port limitations: traffic goes through the global IPv6 Internet and you have the full advantages of IPv6.

So bottom up approach through translation.

And the conclusion: IPv6 is the right direction and it works, with a lot of addresses and end-to-end address transparency. But IPv6 is not easy, and the rest of the users and content may still use IPv4. For service continuity, minimal customer impact and incremental investment, we need translation between v4 and v6.

Also, universal connectivity is the fundamental requirement for using the Internet. We need translation.

So that's, again, our translation way.

Actually, again, IPv4 and IPv6 are not compatible.

When we build the IPv6 Internet, as long as we want to communicate with v4, we need v4 addresses.

However, do not use them as IPv4. For example, when you get the remaining IPv4 from APNIC, don't use those IPv4 addresses directly; translate them to IPv6. You are there: you are already in IPv6. And eventually, with the bottom-up approach, more and more IPv6-only access networks, which can also communicate with IPv4, will link up, and eventually we will reach the X day.

Thank you very much.

Masato Yamanishi: Thank you very much. Any comment?

Randy Bush: Two points. Number 1, the idea of using a small bit of IPv4 space in front for translation, whether that's IVI or NAT64 or whatever, is indeed the plan underneath the APNIC final /8 policy: that for the next 10 years or whatever, you can get a tiny little bit of IPv4 space to co-exist. So I think the policy and APNIC are in place on that.

Xing Li: OK. That's great.

Randy Bush: The other thing is, the 172 days, 172 days from now, I hope there's nobody here who plans to come and take away everybody's IPv4 addresses.

Right? There's no cliff at 172 days, any more than there was a cliff last month when IANA ran out. It's going to be slow, and there are ISPs who have enough IPv4 space to last them another two months or another two years.

So the point is: get going on the plan. It's not that you are dead in 172 days, so don't let somebody from Cisco or Juniper scare you into buying an enormous device that will fail two months out.

Spend that money instead on transition plans.

Xing Li: Thanks, yeah.

Masato Yamanishi: I have a related comment. I think the most important message is that we need to keep IPv4 connectivity for the next couple of years, especially for newcomers; otherwise they will be impacted.

Xing Li: Yes.

Masato Yamanishi: Any other comment or question?

Thank you very much.

APPLAUSE

Masato Yamanishi: I forgot about one more housekeeping thing. If you haven't yet received this small silver box, please --

Srinivas Chendi: Everyone should receive one when they walk in the door.

Masato Yamanishi: Has somebody not yet received this box? Nobody? OK. Good.

OK, so let me introduce our third speaker, Geoff Huston. As you know, he's Chief Scientist of APNIC, and he will talk about various statistics and observations among transition solutions. I hope he is more optimistic than yesterday.

Geoff Huston: We'll see.

Good morning. My name is Geoff Huston. I work with APNIC.

I would like to report on some data that we have been gathering on the past few months about the functionality of dual stack.

Because, realistically, what we're about to head into is looking at the end-to-end network, when servers are offering their services on both v4 and v6, and clients are negotiating what protocol they might want to use to access it.

So I'm not measuring infrastructure per se. I'm not looking at routes. I'm not looking at ASes. I'm not looking at even the DNS per se.

I'm looking at the end to end service. So this is looking at it from the perspective of the server rather than the client.

So from one server, we're seeing what's going on now. From the server's perspective, they will log a transaction saying: I delivered that object in v6.

When everything else is working, when the route exists, when the DNS does all the necessary AAAAs and so on, when the client is able to actually ask the DNS for AAAAs, when end-to-end service actually works.

Today, in a dual-stack world, clients who are configured with both v4 and v6, if they try v6 first, are always able, in most browser environments, because we're talking about a web service here, to fall back to v4 if the v6 effort fails at any point.

So what I would like to do is look at this and look at the various metrics we have found.

So what we're doing is using a JavaScript that sits on a web page, quite invisibly. I'll give you a big clue: one of the pages is www.potaroo.net. You can go there now, I need the visitors. There are the ones on APNIC as well, because they need the visitors too.

When you go there, the browser you're using right now will actually undertake a test, once every 24 hours, because it sets a cookie, and it will try to deliver to you five 1x1 pixel objects. So they have a slight size and they are all white, just so the browser really does pick them up.

These objects are constructed quite deliberately.

One of them only exists as a v6 object. One of them exists as both v4 and v6 and one of them exists only in v4.

The fourth one is going to test you a bit, because you only get the DNS if the resolver chain ultimately is able to do a resolution using v6 DNS transport. That might present you with some challenges.

Last but not least, just to flush out any incipient stuff out there that tries to avoid the DNS, there's a URL that contains a v6 literal; the address is right in the URL. So you should be able to fetch that using v6, without even invoking the DNS.
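As a sketch, the five fetches might be declared like this; the hostnames and the 2001:db8 documentation address are placeholders, since the experiment's real URLs are not given here:

```python
# Each value is served as a 1x1 white pixel; the comment says what the
# corresponding DNS setup is testing.
TEST_OBJECTS = {
    "v6-only":    "http://v6.example.net/1x1.png",    # AAAA record only
    "dual-stack": "http://dual.example.net/1x1.png",  # both A and AAAA
    "v4-only":    "http://v4.example.net/1x1.png",    # A record only
    "v6-dns":     "http://v6ns.example.net/1x1.png",  # resolvable only over
                                                      # v6 DNS transport
    "v6-literal": "http://[2001:db8::1]/1x1.png",     # no DNS lookup at all
}
```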

So I'll test each of you only once in a day and I'll only use one test, so even if you visit the page lots of times, it won't make any difference, because I'm only looking at each source address once.

What I'm trying to look at is the retrieval rates and the failure behaviour and the transaction times.

So a bit of taxonomy, because I'm basically testing you in v4, v6 and dual stack. If you only retrieve the v4 object and the dual stack object, I think you're a v4-only person, and you're doing what I would expect a v4-only person to do. If you retrieve the v6-only object and you use v6 to retrieve the dual stack object, and I see no evidence of v4, then you're strange: you're running v6 only. Good luck to you.

There are a few of you out there, actually, every day, which I find rather weird.

If you prefer v6 in dual stack, I'll call you v6 preferred. A lot of recent operating systems, even though they're capable of doing v6 and they retrieve the v6-only object, will actually use v4 in dual stack mode. So they're v6 capable, but v4 preferred.

Then a few of you, which we're really worried about, will retrieve the v4 object using v4. That's fine. But when you come to the dual stack object, you won't get it. Even though you could have got it in v4, somehow you wedge and do nothing.

You're a problem and we're worried about you.

So here are the folk who go to apnic.net. What I have done, since May, is separate out the folk who prefer to use v6 in dual stack versus the folk who are capable of running v6 but don't use it by default in dual stack: they will retrieve the v6-only object, but in dual stack they use v4. Of the folk who visit apnic.net, since May, around 2 per cent of all those visitors every day prefer to use v6 in dual stack, but between 4 and 6 per cent will actually use v6 if they're cornered into v6 only. Encouragingly, since January of this year, you can see a slight upward curve. If you're looking for optimistic signs of folk running v6, this is an obvious cause for optimism.

But I'll share a secret with you. The folk who visit apnic.net are weird. They're completely abnormal.

This is normality. This is a much larger site, whose visitor count is significant.

I'm not sure if the site involved wants to be named, but I'll simply say, anonymously, that I'm very grateful to them for running this, and they can tell you themselves if they'll admit who they are. This is more run-of-the-mill mainstream networking on the Internet as we know it.

Dual stack, v6 preferred, .2 per cent since November. Curve, flat. Nothing has changed.

V6 capable: between 3.5 and 4.5 per cent. Those anomalies, by the way, are me; I had a few bad hair days when I changed things, so the two dips in the capable curve are my fault. But the curve: flat. Lots of noise between 3.5 and 4 per cent, but flat. This is relatively higher volume, we're up there at, you know, more than tens of thousands, so a larger site.

Not optimistic. The folk who would really prefer to use v6 right now in a dual stack environment are .2 per cent of the Internet hosts out there. If you want to get some grain of hope, 4 per cent could, but they are going to prefer to use v4 in dual stack; they use auto-tunnelling.

OK. So why do 3.8 per cent of the world's hosts hide their v6 goodness? Why do they say look, you're dual stack, no, not going to use v6, even though I could.

So 20 times greater than the number who prefer it. Why that massive difference?

So the next thing is that when you use v6, your address betrays a little bit about what you're doing. Because if your v6 address starts with 2002, you're using 6to4. Lucky you. If it starts with 2001:0, you're using Teredo. Lucky, lucky you. And the rest of you are using unicast.
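That classification is mechanical; a sketch, using the 6to4 prefix from RFC 3056 and the Teredo prefix from RFC 4380 (the sample addresses are made up):

```python
import ipaddress

SIX_TO_FOUR = ipaddress.IPv6Network("2002::/16")   # RFC 3056
TEREDO = ipaddress.IPv6Network("2001::/32")        # RFC 4380

def classify(source: str) -> str:
    addr = ipaddress.IPv6Address(source)
    if addr in SIX_TO_FOUR:
        return "6to4"       # the client's IPv4 address sits in bits 16-47
    if addr in TEREDO:
        return "teredo"
    return "unicast"        # native, provider-assigned v6

print(classify("2002:c000:221::1"))     # 6to4 (embeds 192.0.2.33)
print(classify("2001:0:53aa:64c::1"))   # teredo
print(classify("2001:db8::1"))          # unicast (documentation address)
```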

When I look at the breakdown on this site of those who prefer to use v6, the vast majority of them are not using Teredo or 6to4. They're not. They're actually using normal, if you will, unicast addresses, and this actually mirrors recent operating system behaviour.

In a dual-stack environment, even if the host has 6to4 or Teredo, the host will prefer not to use it.

As you see, Teredo is down there in the oh my god it's almost nothing. 6to4 is sort of really low and most of it is Unicast.

When I look at who prefers to use v4, but will use v6 if cornered -- in other words, the folk who are hiding behind v4 -- they're all 6to4. They're all 6to4.

And the folk who actually, if you will, could do v6 but don't, the Teredo and Unicast are right down there low. Interesting.

So these folk who, if you will, don't do v6 but could, are the auto-tunnelling folk. They don't seem to have Unicast v6 locally.

So, you know, most hosts with Unicast prefer v6, most hosts with auto tunnel these days prefer v4. Older versions preferred v6 in all cases. Why did they change?

Well, failure modes, when it doesn't work, are bloody awful. Anyone still running XP? Good luck.

20 seconds.

Because when it fails, it sends a 6to4 TCP SYN and then it goes: I wonder if it made it. I'm going to wait for 3 seconds. 3 seconds elapse. I'm going to send another. The screen is still white, by the way. I'm going to send a third. Let's wait another 12 seconds. Your screen is still white. The wheel is still rotating. I'm going to flip to v4 and give you a page. Was that fun?
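The arithmetic of that white screen, as a sketch: the 3-second and 12-second waits are quoted above, and the middle wait is an assumption, filled in from the usual doubling of the SYN retransmission timer.

```python
waits = [3, 6, 12]   # seconds between SYN retries; the 6 is assumed
print(sum(waits))    # roughly 21 seconds of white screen before v4 fallback
```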

That's why they de-prefed it, because when you had it up there and it was failing, the user experience was crap. In a million dollar per millisecond world, you just blew your budget like crazy. So Windows changed. Mac OS, eventually, in October, changed. All those recent OSs basically de-pref auto-tunnelling. So can we look at this and what can we see about performance?

I can measure retrieval times, because I'm actually looking at the log files of all of this.

And if you regard v4 as zero -- so whatever time it takes you to start the request, do the DNS, fire off the SYNs, do the HTTP, and then I deliver this gift to you and stop the clock -- that's, in v4, zero.

How long does it take with v6 in comparison?

Because I can measure this.

Interestingly, if you're coming in on Unicast addresses, within statistical variance, v6 is as fast as v4. So all you network guys out there, 10 out of 10. You've built a v6 network that on the whole, when it works, works as well as v4.

So if users are saying my v6 experience sucks on Unicast, I think they're lying, on the whole. That on the whole, the v6 network is actually bloody good. This is terrific.

But you guys out there in auto-tunnelling land are suffering badly. And that between Teredo and 6to4, the penalty that I see in Australia for auto-tunnelling on the entire object retrieval, is at least 1 to 2 seconds on average - on average, sometimes a lot worse. So these auto-tunnelling mechanisms create a user experience from hell.

So for auto-tunnelling: Teredo varies from 1 to 4 seconds, and 6to4 averages about 1.5.

Two causes of this. When you set up a tunnel, you have to set it up; it's not instant, there is a delay. In particular, if you're using Teredo, there is an initial delay of at least, at least, 1 RTT. Also, when you tunnel, you can go the long way around. In 6to4, you can go the long way around, around and around, because not only do the packets that come to the server have to go through a relay, but in Asia, the ISPs are lazy: the closest relays I observe are on the west coast of the US.

In Asia, that round trip is plus 140 milliseconds per packet. Well done.

On the way back, you have to go through another relay. Guess where that is? Plus 140 milliseconds.

So all of a sudden, you have just added a third of a second per packet. Congratulations.

I'm nice to you. Geez, I'm nice. I put in a 6to4 relay on the server, so it now actually gets rid of the second path, so there should only be about 140 milliseconds.

So what do I see in set-up performance? Because I'm also TCP dumping everything, I'm seeing every packet and putting a timestamp on it. For 6to4, I can actually trace the time to get the SYN up, and I'm seeing basically zero penalty in setting up that tunnel. There's a certain outlier, which I think is actually just variance inside the network, but the set-up time is instant.

What about the RTT? Well, 12 per cent of the folk who come to this page don't see much penalty at all. But the rest see a penalty.

And as you see, there at 150 milliseconds is a sort of peak point, which is basically a trans-Pacific hop. So in 6to4 there's a cost: the page gets slower. We add about 1.2 seconds in retrieval time, and that's a long, long time.

There's a little bit of congestion load delay, as far as I can see, that's what's going on.

Teredo, the tunnelling protocol from hell. This one runs through NATs, and there are a couple of things that are really anomalous about it. It's on every Windows box, since Vista it has been on by default, and everyone runs NATs. So why am I not seeing gazillions of Teredo packets? Because I don't see them.

Teredo is extremely complicated. That little set-up diagram shows six ICMP packets just to set the tunnel up before the first SYN can actually work and it involves not just one relay server but two and, guess what, ICMP has to go in both directions end to end. Good luck.

Set-up time. It's not instant any more. There's a really big peak at two-thirds of a second to actually get that first SYN to the server. So the client might say: I'm going to start Teredo and I'm going to do v4, and launch packets to Geoff at the same time. I don't see the first SYN packet until two-thirds of a second later in Teredo compared to v4. Tunnel set-up times take as long as 9 -- count them, nine -- seconds for the first packet to come through.

What about RTT? 300 milliseconds. That's across the Pacific and back and across the Pacific and back. In 300 milliseconds, you could probably get all the way around the world -- per packet. This stuff sucks. That adds around 1 to 3 seconds of retrieval time and takes around 3 RTTs to complete.

Not good.

So, why is there so little Teredo? Because it sucks, and because Windows these days also believes it sucks. It sucks so badly that if your only interface is Teredo, gethostbyname won't even try v6.

So who am I seeing down there? I'm seeing Linux.

I'm not seeing Windows.

So can I trap these poor unsuspecting users into actually giving me Teredo? Yes, I can. I bypass the DNS: I give them that literal URL. And now, 30 per cent of all the folk who visited this site send me a Teredo packet. That's really amazing. 30 per cent of clients, if they're trapped and offered absolutely no option, will go through this hell: you know, two-thirds of a second of set-up time and an RTT that extends out to hell. One-third of all clients on the net will actually do this, which I reckon is great.

But -- and there's always a but -- OK, there are performance overheads in context. The problem is that if you go dual stack, you're going to get auto-tunnelling folk, and they're not doing well.

And if you are running a server and you want to make life slightly easier for those dual stack auto-tunnel victims out there that you call your clients, put a relay up close; minimally, put a 6to4 relay right in the server, so at least the reverse path is as fast as you can make it for those auto-tunnelling folk.

But there's more and it's worse.

Because I'm also able to look at a certain type of failure. I'm interested in this question about all of those folk who retrieved the v4 object but didn't even retrieve the dual stack object, though they could have. How many of them sort of failed and got jammed?

In my experiments, that dual stack server had a failure rate of .6 per cent: almost 1 in 100 folk didn't get anything.

The original site is v4 only. The script that I deliver to start the tests: v4 only. The first time the client is asked to do something in dual stack -- not even 6 -- is this point. At this point, almost 1 in 100 fail: .6 per cent of folk don't get it at all. That's a really high number.

Now, I'm worried about that number, because it is really high, and it's higher than what others who have done this work have seen. So I'm not sure what I'm seeing. It's JavaScript, and some of you folk don't like JavaScript, so maybe part of that .6 per cent is people saying: JavaScript, script from hell, I'm not going to do that. Or there's some kind of user reset; you're just impatient.

So can I sharpen that number and actually see what I can see in failure? Oh, yes, I can, I'm TCP dumping.

So when you actually come to retrieve the object, you're going to actually do the DNS -- lucky you -- and then you're going to send me a SYN. I'm going to send you a SYN ACK and then you're going to send me an ACK. So what I should see is a SYN and then an ACK. How often don't I see the ACK? How often does that SYN ACK fail? Wow. This is the connection failure rate, between 8 and 10 per cent of folk who attempt to make a v6 connection on this experiment don't complete.

That's amazingly high. In v4, it's .1 per cent or lower. It's low. 8 per cent to 10 per cent don't make it in 6.
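The measurement itself is simple to sketch: from a capture, count the client SYNs that were never followed by the client's completing ACK. The tuple format here is hypothetical, not the actual tooling.

```python
def connection_failure_rate(packets):
    """packets: iterable of (client_addr, tcp_flags) pairs, client side only."""
    saw_syn, saw_ack = set(), set()
    for client, flags in packets:
        if flags == "S":             # the client's opening SYN
            saw_syn.add(client)
        elif "A" in flags:           # any later ACK from that client
            saw_ack.add(client)
    failed = saw_syn - saw_ack       # handshakes that never completed
    return len(failed) / len(saw_syn) if saw_syn else 0.0

sample = [("2002:c000:221::1", "S"),           # a 6to4 client that never ACKs
          ("2001:db8::1", "S"), ("2001:db8::1", "A")]
print(connection_failure_rate(sample))         # 0.5
```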

Who are they? Well, again, the source address of the SYN will tell me who they are. Teredo? That's OK. That's the Linux folk. Unicast? Yeah, it's OK. That's variance. 6to4? You're broken.

Between 10 and 14 per cent of all 6to4 connections don't get past that first SYN. The SYN ACK is failing.

But hang on, I'm the relay. You're talking to me in v4. You sent me a SYN, I created the SYN ACK in 6, wrapped it in v4 and sent it back to you in v4.

What's wrong with you?

It's the same path that worked, but this packet fails. 10 to 15 per cent of them fail.

You guys -- no, I don't think it's you guys, because you're part of the folk who visit apnic.net, so you're weird. It's your customers: they have protocol 41 filters up there. You don't like protocol 41; you don't like auto-tunnelling. So even though a large number of your machines auto-tunnel, you have filters that say: no, that's bad.

You're just frustrating your own users, you know.

What's going on is they're trying something and then failing at it and spending a lot of time. Remember that 20 second time-out? That's what they're doing.

Too much middleware doing too much crap or auto-tunnelling surprising the middleware. So that's a protocol 41 failure.

Remember when I trapped those users with a Teredo literal? I haven't got the graph, but I've got the number. Of those 30 per cent of clients -- one-third of all the Internet's clients out there, is my extrapolation -- 12 per cent fail in Teredo.

12 per cent. It's not a protocol 41 filter; this is UDP in v4. They're failing badly. That's a really high failure rate. I have no idea precisely what's going on, other than someone being paranoid about filters and UDP ports, because Teredo was meant to be NAT-friendly. It actually does all the right stuff and sets up the NAT state appropriately, yet 12 per cent of these connections fail.

That's bad. That's so bad that I would seriously say if you're running a service, don't use literals.

Because all you're going to do is piss people off.

Because 12 per cent of those connections fail.

Unicast v6, 2 per cent. That's a whole lot better.

Yes, it is a whole lot better. That's only 20 times the v4 failure rate. 20 times the v4 failure rate.

So there's a problem out here and I think it's actually quite large. When you go dual stack, you're going to tickle problems. I would certainly say that it's viable to go dual stack, but -- and there's a very big but -- a small fraction of the clients you see today will do a v6 timeout that might take up to 20 seconds. They will experience a very, very much slower service. But don't forget those 6to4 folk and the Teredo folk who just didn't do it at all. A small fraction of your existing clients, when you go dual stack and everything else is working, will fail to connect to you at all.

So dual stack is viable. Would I go v6 only if I were a web server? No. Because if I went v6-only today, only 4 per cent of my customers would get to me.

The other 96 per cent wouldn't. There is too much IPv4-only infrastructure out there and those 4 per cent are using auto-tunnelling and a larger number of folk are trying to get to me and timing out and failing miserably. Auto tunnelling, despite every good intention in the world, was a mistake.

You can't jump over these kinds of problems and create a robust experience that mirrors what customers are used to in v4. They will see the difference and not like it.

There's a lesson here, and it is a lesson for this room, not for your customers. There's no lazy way out.

You can't expect Microsoft and Apple and Linux and BSD and all the others to produce bandaids that hop over your laziness. You can't expect that to work, because it doesn't.

All it will do is piss people off. There's only one way to do v6 and you're the problem. You have to deliver your customers Unicast. There's no other way out. Any other approach just annoys people and that's not why they pay you money.

Thank you.

APPLAUSE.

Masato Yamanishi: Any comments or questions?

Lorenzo Colitti: Lorenzo Colitti, from Google. We wholeheartedly agree with this message; it mirrors a lot of the data that we have been collecting. I would like to point out that if you have a Mac, your 20 seconds becomes 75 seconds, and Macs are much more susceptible to this kind of thing, so if you take the Mac users out of the picture, things get about 10 times better. Sadly, that's not possible.

Geoff Huston: 10 times better than particularly bad is still bad. I don't think there's any easy answer out there.

Lorenzo Colitti: Yeah. I'm also wondering, since you're TCP dumping -- something that would be hard for us -- can you find out how many of these SYNs that don't complete are from private source addresses, you know, 2002: with private IPv4 addresses embedded?

Geoff Huston: I have looked at that, Lorenzo, and interestingly, it's vanishingly small. I was trying to find the Apple AirPort bug of being the NAT and the 6to4 router at the same time, and I found almost no RFC 1918 private addresses. I haven't done the longer piece of work, and I need to remember, or look at this recording again, to check whether the source addresses are being advertised, which is a subtly different question. But it's a great question and I will look at that. Thank you.

Masato Yamanishi: We are already 15 minutes behind, so I have two requests. Please make your comments as short as possible, and second, let me close the microphones after these three guys.

Randy Bush: Geoff, would you be a little clearer?

At the end you said provide Unicast. What you mean is provide dual stack and stop all these hacks.

Geoff Huston: Correct.

Randy Bush: There are a bunch of people in this room who failed to stop Teredo in the IETF, failed to stop 6to4 in the IETF, and are now failing to stop a bunch of other massive hacks that will also fail in these fashions. OK?

The sin is not the missing ACK. The sin is letting this crap through.

Geoff Huston: Thank you. Yes. I agree.

Mark Newton: We have done a fair bit of work with ADSL2+ CPE and have found some remarkably crappy firewalls, where when you check the tick box that says that you want firewalling enabled, god knows what it actually does.

But among other things, we have found some that do actually block protocol 41. Have you done any investigation into the causes of why 6to4 is failing in that way, whether by CPE or any other things on the Internet that might be inhibiting it?

Geoff Huston: As I said, your suspicion matches mine: the suspicion is that protocol 41 is seen as alien and is being blocked. I'm reluctant to go into extended testing with the clients, because it might take time and they might notice me performing these little invisible tests. So at some point, I have to move out from broad experimentation into pinpoint investigation. We will continue this work, because I think it's important to understand just how badly this stuff fails and why. So thank you, yes.

Eric Lan: Eric Lan from Google. Two quick things. Are you coming to Prague? Can you talk to the IETF?

Can you give this presentation to those who think that 6to4 needs to be extended and its lifetime is useful, or who want to put a NAT in front of 6to4 and do IPv6 NAT after 6to4?

Geoff Huston: Yes, I intend to go there. If you want to get me to speak somewhere, I'll happily work with you to do that, yes.

Eric Lan: Secondly, I would be curious, offline, whether you're seeing interesting source addresses, like link locals and 2001:db8 and other sorts of things and --

Geoff Huston: We'll need to talk about that, because this is an ongoing thing. The experiment keeps on running. If there are things to look for, I'll happily talk to you about them, or anyone else.

The other thing is if you're running a very large website with a large number of customers who are normal -- people who come to my website aren't. I'm sorry. You're just all weird. You all do v6.

If anyone else wants to do this kind of work, please talk to me. If you go to the APNIC website, tell me if you can see me doing the tests; you shouldn't see it, it is in the background. If there are other websites or web servers who wish to help, I will happily work with you to install it. The larger the pool of experimentation, the better we can extrapolate these results to the world as we know it.

Thank you.

Masato Yamanishi: Thank you very much. The last speaker is Dr Sheng Jiang of Huawei. He will look at the interworking of v4 and v6 and also multicast interworking.

Sheng Jiang: Hello, everyone. Before I launch into my presentation, I have to say Geoff just managed to make me very depressed.

Actually, IPv6 has many problems, and so does dual stack. But I guess Geoff's intention here is not to make people stop deploying IPv6; it's to encourage people to move to IPv6 and move fast. OK.

Here is my presentation.

I will first introduce our observations of IPv4/IPv6 transition trends, and then introduce some IPv4/IPv6 multicast interoperation mechanisms.

Multicast interoperation is much further behind than unicast, but it will be there one day. We need this.

We all know the public IPv4 address space is being exhausted; Geoff said so a couple of years ago already.

On the other side, more devices will be connected to the Internet, so we need more and more addresses.

IPv6 is the only right answer to the address exhaustion issue. But as we all know, and as everybody has been saying over the last three days, IPv4 will be here for a very long time. We think IPv4 and IPv6 will co-exist for a very long time, at least 15 years, maybe more.

In October 2009, my partner and I realized it was time to investigate IPv6 deployment and plans from ISPs. So we designed a questionnaire which covers most of the major IPv6 transition issues and distributed it globally.

We received 31 answers from various ISPs.

Given the fact that many questions touch on commercially private information, 31 is a pretty good result.

Then an IETF working group found the results very useful.

In October 2010, the results were published as RFC 6036, Emerging Service Provider Scenarios for IPv6 Deployment.

From this RFC, we can see 93 per cent choose a dual stack routing backbone. I guess that's the very aggressive answer.

30 per cent of ISPs run a 6to4 relay or plan to, although there are a lot of failures, just as Geoff Huston said. And 17 per cent of them run a Teredo server.

77 per cent of them have no equipment dedicated to IPv6, because most people do not think IPv6-only is the right answer for now. It will be, one day.

A dual stack backbone plus supplementary transition mechanisms is the majority choice, as we know.

For IPv6 and IPv4 interworking, 57 per cent of ISPs don't expect IPv6-only customers, but they do think interworking is very important. Mobile operators, though, are certain they will have millions of IPv6-only customers, mainly because some current devices can have only one IP address, either IPv4 or IPv6.

And most people think IPv4-only applications will still be running for more than 10 years. So it comes to this: IPv4/IPv6 interworking at the IP layer is very much needed. However, only 30 per cent of ISPs think they will run NAT-PT or NAT64 themselves, and only 23 per cent rely on dual stack.

So that means most people have not prepared for interworking. That's very serious, because the time is coming. If you are not prepared, your customers will be disconnected; they cannot reach their content. Then your customers will complain to you and you lose your money.

OK. This is an observation not in the RFC; it's a recent observation from Huawei marketing, from talking to various customers globally. The global transport backhaul network must be dual stack; we all know that. We found that European ISPs prefer to jump to IPv6 and use DS-Lite for IPv6 access services.

The reason is probably that most European ISPs have an MPLS backbone. MPLS is IP independent, so it's easy for them to switch from an IPv4 backbone to an IPv6 backbone.

North America prefers IPv4-based mechanisms, like 6RD or incremental CGN, because they have a lot of devices still running on IPv4. These may be able to upgrade to dual stack, but nobody knows the performance, nobody has a guarantee, so they like to stay with IPv4 for a while. By the way, North America has a lot of IPv4 addresses, so they are not in a hurry.

Who is in a hurry? Asia. Asia is the quickest growing market, and countries like China and Japan are building new networks, so ISPs may prefer dual stack, because they have a lot of new customers and would have to replace their current devices very quickly anyway.

When they buy new devices, they say: we want everything. We want dual stack. We want IPv4, IPv6 at the same time. That's fine.

This is the observation only for now. Still, new transition technologies are being proposed, like host-based 6a44, 4RD or Teredo extensions.

However, there is a lot of debate in the IETF. When I was at the IPv6 event in Paris, I heard a lot of IETF experts say there are already too many transition mechanisms. From the Softwire working group, we apologize that the IETF has produced so many transition mechanisms, but we're not sure whether this many is enough or not. We have thousands of ISPs in the world, and each may face a different situation, so every ISP should understand their own issues and be able to choose the right transition mechanisms for themselves.

OK. Multicast uses network infrastructure efficiently, even when content needs to be delivered to a large number of receivers.

So when we talk about IPv4 and IPv6 multicast, routers use multicast routing protocols to construct the multicast routing table and forward multicast datagrams.

They use multicast group management protocols, such as IGMP for IPv4 and MLDv2 for IPv6, to manage multicast members and to set up and maintain membership relationships. Although the protocols have very similar designs, IPv4 multicast and IPv6 multicast are not able to talk to each other. So we need transition mechanisms for multicast.

Since I am running short of time -- Geoff Huston ate into my time, 10 minutes -- I will be quick.

So there are a lot of issues to consider for IPv4 and IPv6 multicast interoperation.

I'm not going to read through the question marks.

However, you know it's very difficult.

Here we have IPv4/IPv6 packet-based multicast translation. It maps between IPv4 and IPv6: it has to translate every packet from v4 form to v6 form, or the other way around.

But the issue is that RFC 2766, NAT-PT, has been moved to historic status by RFC 4966.

So far, the IETF has not yet produced a new translation standard for multicast, but the working group is starting to look at that.

I heard it may be discussed this March in Prague, but it takes time, maybe another two years, to get the packet-based multicast mechanism matured.

It still has technical issues. Part of the multicast tree is invisible, which means management difficulties.

We also have the v4/v6 multicast proxy. It's mainly based on the content cache concept: with the proxy, there are two independent multicast trees, and the management is separate. The advantage is that it doesn't need an IETF standard; it's an implementation or deployment level mechanism, so as long as you can get your vendor to support it, you can get the transition.

A multicast tunnel is the way you get your multicast packets through a transport network of the other protocol, like putting a v6 source and v6 tunnels through IPv4 networks.

OK. I will skip the transition scenarios, because that would be too much detail.

Last slide. I would like to remind everyone that v6 is coming; it's now. The later IPv6 deployment starts, the higher the overall network transition cost could be. ISPs actually face both the IPv6 transition issue and the IPv4 address shortage problem at the same time, so a combination of mechanisms that solves both issues is needed.

That's it.

Masato Yamanishi: Thank you very much. Do we have any comments or questions? No? OK. Thank you very much.

APPLAUSE

Masato Yamanishi: As I said at the beginning, we have some entertainment, or a ceremony, but actually I don't know what will happen; only Sunny knows.

Srinivas Chendi: No, I don't know either. Sorry.

I got my notes here.

Well, first of all, I want to thank you for moderating the session, and the speakers for contributing your experiences and your time, and I would like to thank my colleague George Michaelson for suggesting this topic for the plenary.

We have a little activity. I don't know how many of you saw the video we circulated at the end of 2010, but we decided to play it here in this room, to inspire you and encourage you to deploy IPv6.

So can the guys please play the video. Thank you.

VIDEO PLAYED.

Srinivas Chendi: Thank you. A round of applause for these kids.

APPLAUSE

Srinivas Chendi: Now, I hope you all received a gift from the APNIC staff. Anyone not receive one when you walked into the room? It's a torch.

Can someone distribute the gifts? Please raise your hand high so you can get one.

This one as well, please, on the right side of the room.

I have a couple of APNIC staff up on the stage here to assist me with this. It's all written here.

I really don't know what's going on, I'm just following the script given to me.

OK, has everyone got one? There are some more hands here on the right side.

I think we're out of them. I think we ran out of the gifts. I do apologise for that. Maybe we have some back in the room or in the hotel. Another exhaustion. That's right. Exhaustion of the torch lights.

OK. Can I ask the APNIC staff to show what you got there, please. I would like the AV guys to dim the lights. IPv6. Awesome view from up here on the stage.

Thank you for your participation.

Can I have the lights, please?

Wonderful. Hope we inspired you, hope the speakers up on the stage inspired you. If you haven't thought about IPv6, at least you will go home now after this meeting and think about IPv6.

Not for us, for the future of our kids. Thank you so much.

APPLAUSE.
