Google Invests in $300M Submarine Cable to Improve Connection Between Japan, US (techcrunch.com)
122 points by swohns on Aug 11, 2014 | hide | past | favorite | 72 comments


Obligatory link to "Mother Earth, Motherboard" whenever submarine cables are mentioned:

http://archive.wired.com/wired/archive/4.12/ffglass.html

Best 50-page article you'll read this year.


This really makes me nostalgic.

I miss the days when Wired actually had decent content.


Well, journalism was also a good middle-class job back then. Magazines don't pay like they used to, with upfront budgets to write a high-quality long piece.

Reading Hunter S. Thompson's letters from the '60s, when he was a freelance journalist, you can tell he was able to spend time writing quality articles, with the odd hack job thrown in for extra cash. I highly doubt there are many full-time journos left like that.


> by Neal Stephenson

Well ok then


'Nuff said.



Nice read, thank you. See also Tubes: A Journey to the Center of the Internet. http://www.amazon.com/dp/0061994952/


I had never read this. Amazing article. Thank you.


I would love to see Google do this for Australia and New Zealand as well. Both countries are at the end of the line (so to speak) and would benefit greatly from something like this. Our Internet is pretty rubbish and expensive, and while that might be in part due to limited choice of ISPs, archaic government policies and equipment, bandwidth is a HUGE problem.


I always understood that the major problem with our internet is that >90% of our content comes via these cables, which makes the infrastructure costs of supplying bandwidth far higher than in, say, the continental US, where you don't have to cross literal oceans to reach all of the servers storing people's internet data.

If we consumed most of our internet from AU servers, our costs would decrease, as we wouldn't need such fat intercontinental pipes.


Australia yes, but please don't include New Zealand in that! Our internet is roaring along thanks to enlightened regulation and significant government investment. We're halfway through rolling out a GPON fibre network which will cover 75% of the population by 2019. I'm on 100/50Mbit fibre right now, looking forward to a 1000/500 upgrade before the end of the year - totally uncapped and unshaped.


There's one company already investing/building on this called SubPartners - http://www.subpartners.net

They are planning 3 submarine cables:

1. APX Central - connecting Sydney, Melbourne, Hobart, Adelaide and Perth. http://www.subpartners.net/cables/apx-central.html

2. APX East - Sydney, NZ, California. http://www.subpartners.net/cables/apx-east.html

3. APX West - Perth, Jakarta, Singapore. http://www.subpartners.net/cables/apx-west.html

All 3 are planned to be completed by Q3, 2016. Obviously lots of factors and risks involved.

If Google (and others with investments in local clouds, e.g. Amazon, Microsoft, etc.) wanted to invest, those 3 would be the ones you'd want to invest in.


I wonder what the difference is in the technology of the 3.3TB/s cable and the 60TB/s cable - just strands? (approx 20 more?)


Nope - the new fiber will be high-dispersion compared to the old fiber, and also will not have dispersion compensation along its length. The old cable was designed for OOK signaling: previously, optical signaling was 1 bit per symbol OOK (on-off keying) with direct detection, meaning you did not have any optical phase information to make dispersion corrections, so the dispersion compensation was done in a specially designed fiber with the opposite-sign dispersion slope (DCF).

The old fiber was also relatively high-loss and has a narrower core, which leads to higher nonlinearities in the link. The old cable tries to keep the dispersion within the range that OOK technologies can operate error-free (post-FEC), so there's typically a lot of it at each repeater (EDFA). The newer coherent optical technology can transmit multiple bits per symbol (BPSK, QPSK) by encoding the bits in the optical phase. Since the phase is recovered at the receiver, the dispersion accumulated in the fiber can be undone in DSP with a long enough FIR filter, so the need for dispersion compensation is gone with coherent optical. Taking out the DCF also reduces loss along the link, reducing EDFA (amplifier) count and increasing spacing. Nonlinear penalties on the newer high-dispersion fiber are lower too, which reduces cycle slips - events that can punch through the FEC and cause post-FEC errors.

The net result is that you should be able to transmit QPSK at 32GBd in 2 polarizations in maybe 80 waves in each direction.

2 bits x 2 polarizations x 32G ~ 128Gb/s per wave, or nearly 11Tb/s for 1 fiber. If this cable has 6 strands, it could easily meet the target transmission capacity.
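For anyone checking the arithmetic, the estimate works out like this (the 80-wave channel count and 6-strand figure are the assumptions from the comment above, not published specs):

```python
# Sketch: coherent QPSK capacity estimate from the figures above.
BITS_PER_SYMBOL = 2      # QPSK: 2 bits per symbol
POLARIZATIONS = 2        # dual-polarization multiplexing
BAUD = 32e9              # 32 GBd symbol rate
WAVES = 80               # estimated DWDM channels per fiber (assumption)
FIBER_PAIRS = 6          # assumed strand count (assumption)

per_wave = BITS_PER_SYMBOL * POLARIZATIONS * BAUD   # 128 Gb/s
per_fiber = per_wave * WAVES                        # 10.24 Tb/s ("nearly 11")
total = per_fiber * FIBER_PAIRS                     # 61.44 Tb/s

print(f"{per_wave / 1e9:.0f} Gb/s per wave, "
      f"{per_fiber / 1e12:.2f} Tb/s per fiber, "
      f"{total / 1e12:.2f} Tb/s total")
```

which lands comfortably above the 60Tb/s design target.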


That jumped out at me too. My wild guess is that the older fiber is a 10G system, the new one is 40G.

From [1] > Unity cable system consists of eight fiber pairs, has design capacity up to 7.68 Tbps, with each fiber pair operating at 96x10G DWDM system.

From [2] > the SJC cable system consists of 6 fiber pairs with the initial design capacity of 28 terabits per second,

Taking a guess that 96 x 40Gb/s x 6 fibers gets you to ~23 Tb/s, so in the right ballpark. (Wavelength spacing on a fiber differs between 40G and 10G, so this is a bit of a shot in the dark.)
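The ballpark can be sanity-checked directly (the channel and fiber counts are the guess above, not confirmed figures):

```python
# Ballpark: 96 DWDM channels x 40 Gb/s x 6 fiber pairs (guessed figures).
total_bps = 96 * 40e9 * 6
print(f"~{total_bps / 1e12:.1f} Tb/s")
```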

Caveat: 40G used to be near and dear to my heart (Big Bear Networks), so everything pretty much looks like that nail to my hammer.

[1] http://submarinenetworks.com/systems/trans-pacific/unity [2] http://www.globe.com.ph/press-room/globe-regional-connectivi...


Nope this will be 100G per wave not 40G. 40G was a stop-gap technology that never really shipped in large volume. With the advent of coherent optical, everyone just went to 100G (e.g. Infinera, Ciena, Alcatel-Lucent).

EDIT: One caveat - depending on the particular link, many of these systems will run at half-rate. A lot of legacy cables today are running BPSK at 50G in 2 waves (25G/wave) due to nonlinearities.


> Nope this will be 100G per wave not 40G.

Do you have a link for this? Interesting news if true. Also: things running at 50G used to be OC768 with error correction, i.e. 40G of data + 10G of overhead. Has this changed? At some point the framers have to deal with standardized bitstreams, so is the 50G one part of an inverse mux, or combined up from 10G?

Edit: It's been a while. Sorry for the bazillion questions, but curiosity is getting the better of me. Are folks really running 100G coherent undersea currently?


See for example: http://newswire.telecomramblings.com/2013/01/telstra-global-...

So you have to separate the "wet" plant from the terminal gear. The speed of the terminal gear is completely disconnected from the wet plant these days. Nobody replaces wet plant to upgrade capacity. They run Ciena, Infinera, Alcatel gear over Tyco's old line system.

Essentially the issue with upgrading over the wet plant is basically the presence of nonlinearities on the fiber. The links are not noise-limited. Some of these fibers are still running 10G OOK in half the band, and on the NZ-DSF used for submarine cables that causes huge nonlinear penalties. The new subsea fiber is 22ps/nm-km, with a larger effective area to reduce the nonlinear penalty.

http://www.corning.com/opticalfiber/products/vascade_fibers....
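To put rough numbers on why the coherent receiver's FIR filter has to be "long enough": accumulated dispersion on 22ps/nm-km fiber over a trans-Pacific span is enormous, but it's a linear effect the DSP can invert. A back-of-the-envelope sketch (the 9,000 km span length and 32GBd rate are illustrative assumptions, not figures from this thread):

```python
# Sketch: accumulated chromatic dispersion on a trans-Pacific link and the
# rough length (in symbols) of the receiver FIR needed to undo it.
C = 299_792_458          # speed of light in vacuum, m/s
WAVELENGTH = 1550e-9     # C-band carrier, m
D = 22.0                 # fiber dispersion, ps/(nm*km), per the fiber above
SPAN_KM = 9_000          # assumed Japan-US span length
BAUD = 32e9              # symbols/s

accumulated = D * SPAN_KM                          # ps/nm
signal_bw_nm = WAVELENGTH**2 * BAUD / C * 1e9      # signal width, ~0.26 nm
spread_ps = accumulated * signal_bw_nm             # total pulse spread, ps
spread_symbols = spread_ps / (1e12 / BAUD)         # spread in symbol periods

print(f"{accumulated:.0f} ps/nm accumulated, "
      f"spread over ~{spread_symbols:.0f} symbol periods")
```

So the equalizer has to span on the order of a thousand-plus symbols - feasible in modern DSP silicon, impossible with direct detection.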

BTW, I also worked at BBN


> See for example:

Thanks for the link. I confess I'm a little amazed that Infinera is the basis for running 100G coherent single wavelengths. That's great progress. (Edit- See below)

> Essentially the issue with upgrading over the wet plant is basically the presence of nonlinearities on the fiber

Yes, and there's great incentive to utilize legacy fiber if possible.

(BTW I managed to screw up my comment above when I edited. I had written: Usually the undersea guys are a generation behind, partly because of the need to send a destroyer-looking ship out for any repairs.)

> BTW, I also worked at BBN

Hello! and hope all is well, whoever you are. :)

Edit: The infinera 500G PIC in the PR from Telstra is running its basic bitstreams at 25G and muxing them up - http://www.lightreading.com/optical/dwdm/infinera-unleashes-...


Yes. It's a 100G service though. Infinera runs on 25G spacing so at 25G dual wave dual polarization is 100G in the same spectral efficiency as single wave.



Your capitalization is incorrect. You wrote TB (terabyte) when you meant Tb (terabit). That's an 8x difference.


That seems really cheap for such a crucial piece of infrastructure that has to wrap a quarter of the way around the earth


If you read the article you would see that they are one of a number of partners. The project costs several billion US$. By 'buying in' like this you are given a dedicated portion of the bandwidth to use.


The NEC press release says the total amount of investment across six different companies is approximately $300mm


Fascinating - so now I'm completely confused. If they can pull this cable off for US$300M total, that is a huge improvement in costs from previous efforts. Now I feel compelled to track down what changed to make this an order of magnitude less costly to do.

EDIT: this link http://submarinenetworks.com/systems/trans-pacific/unity informs me that I am completely wrong on the costs. Apparently it really is "only" US$300M to lay a cable from here to there.


You're right. The press release [0] does say "The total amount of investment for the FASTER system is estimated to be approximately USD $300 million". That would, however, make the title of this article wrong.

I'm surprised TechCrunch made that big a mistake in their reporting.

[0] http://www.nec.com/en/press/201408/global_20140811_01.html


no, the title is accurate. it says "Google invests in $300mm cable", not "Google invests $300mm in cable"


While manufacturing the cable is expensive, actually laying it (in that particular part of the ocean) is not that challenging. They essentially just drive a ship along that path while slowly spooling out the cable from the rear. They have the advantage that that stretch of ocean doesn't contain much of anything to plan around.


Anyone interested in the process (or just loves captivating writing about technical subjects), read http://archive.wired.com/wired/archive/4.12/ffglass.html. Can't recommend it enough.


It does, especially in comparison to the other number referenced in the article. Running a cable from Singapore to Japan cost $400mm, but running one from America to Japan costs $300mm?


The Southeast Asia-Japan cable links 8 countries, not just Singapore to Japan, and is nearly as long, with options to extend the total span to longer than the Japan-US cable.


Shallow waters, shipping channels, and landings cost way more than deep sea sections.


A sensor array along the cable line monitoring marine health would yield a data goldmine.


Totally. But who's willing to pay for that?


This might be an absurd question, but I can't think of a better place to ask it.

In my mind there's a huge mental disconnect between computers (servers/personal computers) and infrastructure like this.

Could someone provide insight on when/how these types of high-throughput cables are used? How the process is managed, by who, and how on earth all those bits are lined up at such a high speed.

I understand they're core to the structure of the internet, but I couldn't explain how information ends up in them to my grandmother.

edit: looks like the linked wired article is a good place to start


Routers. Routers everywhere...

The internet is broken up into different networks, each one being an 'Autonomous System' (AS). These networks are all connected to other ASes at various interconnection points, such as an Internet Exchange (IX) or another point of presence (PoP). At these points, a router on the edge of one network is connected to a router on the edge of another.

These 'edge' or 'border' routers talk to each other with a protocol called BGP (Border Gateway Protocol). This lets them 'advertise' all the routes that you can reach through that router to other routers (like, "hey, you can get to 54.24.0.0/16 at cost x through me").

Internally, each AS will also use an internal routing protocol, such as Open Shortest Path First (OSPF) or iBGP (the internal version of Border Gateway Protocol) to internally advertise this information along with information about how to get between internal routers to work out where to go. So if you have a packet at your grandmother's house, it will hit the first router that her cable or DSL is connected to, use something like OSPF to work out the best (fastest) path through the network to a border router, and then from there the best (probably cheapest!) path to the destination based on the information it got from neighboring routers with BGP.

This is because there are two ways that ASes will interconnect - either peering, where you say "we'll let you send traffic into our network for free if we can send data into yours", or transit, where you actually pay. There may be two paths to your destination, and one might be shorter but more expensive in transit, so the cheaper path might get chosen, depending on the priorities of the ISP.

An undersea cable is usually internal to an AS. Typically, though, it's not actually the whole cable but one or more wavelengths through it - for example, some cables have up to ~128 different wavelengths (colours) of light going through them, each at 1, 10, 40 or 100Gbps. So a cable operator usually doesn't handle any data transfer itself but just sells wavelengths to different providers. Each one usually has a separate laser, and then they are all multiplexed by a piece of optical equipment onto a fibre strand. This method of sending multiple wavelengths is called dense wavelength division multiplexing (DWDM).
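The "advertise routes, then pick one" behavior described above boils down to longest-prefix matching with a cost tie-break. A toy sketch using Python's stdlib `ipaddress` module (the prefixes, next hops and costs are invented for illustration):

```python
import ipaddress

# Toy routing table: (prefix, next hop, cost). In a real router these
# entries come from BGP/OSPF advertisements; all values here are invented.
ROUTES = [
    (ipaddress.ip_network("54.24.0.0/16"),   "border-router-a", 10),
    (ipaddress.ip_network("54.24.128.0/17"), "border-router-b", 20),
    (ipaddress.ip_network("0.0.0.0/0"),      "default-gateway", 100),
]

def next_hop(destination: str) -> str:
    """Pick the most specific matching prefix; break ties on lower cost."""
    addr = ipaddress.ip_address(destination)
    matches = [r for r in ROUTES if addr in r[0]]
    best = max(matches, key=lambda r: (r[0].prefixlen, -r[2]))
    return best[1]

print(next_hop("54.24.1.1"))     # only the /16 (and default) match
print(next_hop("54.24.200.1"))   # the /17 is more specific and wins
print(next_hop("8.8.8.8"))       # nothing specific matches -> default
```

Real forwarding tables do the same lookup in hardware (TCAM or tries) at line rate, which is how those bits get "lined up" onto the right wavelength.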


This is fascinating. Thank you.


You'd probably find the book Tubes interesting.


The Chinese mainland will not see significant international speed improvements until the government decides people should see significant international speed improvements. Keeping things slow and unreliable, especially in peak times, is a form of subsidy for local internet-related business and therefore for government control. Subtle, but hugely effective when push comes to shove: more so than firewalling.


Following the tip to Stephenson's article, here's also a really good book, 'The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century's On-line Pioneers'.

http://www.amazon.com/The-Victorian-Internet-Remarkable-Nine...


Why only 6 strands? Doesn't it make more sense to put more fiber in there?


I say this without any knowledge of the profitability of such a venture, and as someone who does not live in Australia but who is familiar with the running joke that they have poor Internet:

How about run a new cable, or two, to Australia?


From what I understand, Australia's poor internet service is more due to poor ISPs and government meddling than the quality or quantity of the backbone.


Though if they could run it to New Zealand, it would certainly help to have competition with the SCC here...


The cables are pretty limited in both number and capacity. Just compare Australia with Japan for example: http://www.cablemap.info/


Poor intercontinental connections don't explain data caps INSIDE their uber-fiber national network, do they?


No - those are explained by the average consumer not having a clue whether website X is national or international. Separate billing has been tried in some places; AFAIK it was always abandoned because it confused customers and made them feel like they were getting screwed over.


Australia has 20 million people and is at the 'end of the line'. Japan has 100 million people, and just behind it (from the US side) are the twin internet powerhouses of South Korea and China.


Why would they? The less bandwidth they've got, the more expensive it is - ergo more gouging and more profit.

Building infrastructure knowing that every single user has a transfer cap inside YOUR OWN NETWORK is a dream come true for every ISP.


On a slightly offtopic note, I wonder why Japanese websites are still stuck in a mid-'90s to early-2000s style. One example of this is the imageboard type of website, which interestingly enough has caught on here in the US.

I wonder if this better connectivity will bring more cross-cultural web designs or applications to both places.


In some ways it's cultural, but in many ways it's political. That sort of overhaul would have to come from the top. No one wants to be the person who suggests it, since if any income drops as a result you'd be in a hard position. So, many lower sections jostle for space, and they all get in. (I heard this from a Rakuten frontend performance engineer. Other companies may vary.)

Edit: it's also extremely unlikely for a designer with a strong vision and skills to rise in a large company. Most will turn freelance long before they get there. Most upper management seems to come from eigyo - sales. I don't think engineers (basically just troops to implement the sales guy's vision and take the blame when something goes wrong) are in a position to easily climb the ranks either.

Edit 2: I don't think the feature-phones argument still holds water. Most sites will redirect feature phones at the user-agent level - they have to. Many models and carriers have their own quirks (e.g. only tables allowed, only inline styles allowed, different emoji codes), around which a large but dwindling infrastructure exists, catering for normalizing across models using template-generated HTML. The browser versions are separate.


There was an interesting discussion about Japanese web design on HN late last year: https://news.ycombinator.com/item?id=6718067


Here is my (limited) understanding:

A significant amount of internet use in Japan is via feature phones, with smartphones only catching on in the last few years. This limits the complexity of what can be displayed quite a bit. Similarly, the Japanese market tends to be significantly older than the rest of the world (http://www.statista.com/statistics/276045/age-distribution-o...) and resistant to change.


Same reason they still love fax machines:

http://www.nytimes.com/2013/02/14/world/asia/in-japan-the-fa...

It's a cultural preference


I can't find the article I read about this last year, but it's largely cultural. There is a greater expectation of information density in Japan - look at their advertising or TV news, for example. Also, there is a severe paucity of Japanese (and East Asian in general) web fonts, so they're stuck with a lot of dated fonts from 25 years ago. Additionally, there's just a different aesthetic there.


I came back from Japan last month. It's cultural.

A lot of stuff the Japanese do is from "back in the day", and there's a lot of groupthink that goes on there.

There's also not as great a sense of entrepreneurship or critical thinking going on there. Most people are like zombies - wake up, get dressed, work until 7 or 8pm, take the subway or JR home, smoke or drink, socialize, repeat.


I've been told that the main devices used to browse the internet in Japan are mobiles (smartphones), which results in websites being built for mobile users instead of desktops.

I'm not sure if this is true though.


Japanese are very egocentric and insular.

There was a story on Hackaday about Japanese hackers simply ignoring the English-speaking part of the internet. These are hackers hacking on something in a hackerspace - people at the forefront of open-minded thinking.


>Japanese Hackers simply ignoring English speaking part of the internet.

Internationalization seems to be something that hackers rarely invest time in. It's not like hackers in Silicon Valley build their products while worrying about supporting users who speak Mandarin/Cantonese/Japanese/Hindi etc. I really don't think it's close-mindedness at play here.


Am I to understand, then, that there is a parallel online hacking community populated by Japanese speakers? I have never really run across this, just randomly dispersed blogs.


Of course there is! After all, the majority of Japanese developers don't speak English.

There's also a lot of offline activity: meetups, study groups and (less commonly) hackathons.

For example, look at JAWS-UG (Japan AWS User Group)'s schedule:

http://jaws-ug.jp/


I don't see why that's "egocentric". They just don't speak English...


I wonder if this will improve connection to South Korea as well. Browsing .de websites from here involves a trip around the world, Seoul - San Jose - New York - London - Frankfurt for an average ping of 300ms.


Increases in bandwidth aren't going to lower latency. This path may be slightly more direct than an alternate cable system, but as long as you're going through the US, the connection will never be able to go below ~200ms between South Korea and Germany.
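That ~200ms floor is mostly just the speed of light in fiber over the route. A rough check (the 21,000 km one-way path length via the US is a guessed figure, not a measured route):

```python
# Minimum RTT Seoul -> US -> Frankfurt, limited by light speed in fiber.
# The one-way route length is a guess at the cable path, not a measurement.
C_KM_S = 299_792.458       # speed of light in vacuum, km/s
FIBER_INDEX = 1.47         # typical refractive index of silica fiber
ROUTE_KM = 21_000          # assumed one-way fiber path via the US

fiber_speed = C_KM_S / FIBER_INDEX         # ~204,000 km/s in glass
rtt_ms = 2 * ROUTE_KM / fiber_speed * 1000
print(f"~{rtt_ms:.0f} ms minimum round trip")
```

Only a geographically shorter route (e.g. overland via Russia) could beat that floor; more cable capacity on the same path can't.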


60Tb/s is such a huge amount of bandwidth. That's the equivalent of over a million 50Mbps WiFi connections. Imagine a single WiFi network with a million people all within 25ms latency of one another.


Anyone want to take a guess at Google's motive (other than goodwill)? Better access to East Asian customers?


> This will get it pretty close to the company’s data center in The Dalles, Ore., and give that facility a better connection to Japan.

That seems like enough to me.


I've been living in Japan, and access across the pond is a bit flaky at times. Most of the time I can blame my local service provider (a sharehouse) for their piece of shit router, but there are countless times when a stream will suddenly cut out or a connection will just time out for no good reason (while other simultaneous connections, even across the ocean, are fine).


Ah, thanks for the background info.


(USA) Just wanted to point out that "BTW" is most commonly expected to be an acronym for "by the way" whereas the shorthand for "between" is typically written "b/w"

edit: title has been fixed! :)


Ugh. I don't know how we missed that. (Edit: fixed.)



