ICCCN'17 trip notes, days 2 and 3

Keynote 2 (Day 2)

Bruce Maggs gave the keynote on Day 2 of ICCCN. He is a professor at Duke University and Vice President of Research at Akamai Technologies. He talked about the cyber attacks they have seen at Akamai, a content delivery network (CDN) and cloud services provider.

Akamai has 230K servers across 1300+ networks and 3300 physical locations, in 750 cities and 120 countries. It slipped out of him that Akamai is so big it could bring down the internet if it went evil, but it would never go evil :-) Hmm, should we say "too big to go evil?" This, of course, came up as a question at the end of the talk: how prepared is the Internet for one of the biggest players, such as Google, Microsoft, Netflix, Yahoo, or Akamai, going rogue? Bruce said the Internet is not prepared at all. If one of these companies turned bad, it could melt the internet. I followed up that question with a rogue employee and insider threat question. He said that the Akamai system is so big that it doesn't/can't work with manual instruction. They have a very big operations room, but that is mainly to impress investors, because at that scale human monitoring/supervision does not work. They have autonomous systems in place, so the risk of a screw-up due to manual instruction is very low. (This still doesn't say much about the risk of a computerized screw-up.)

OK, back to his talk. Akamai has customers in eCommerce, media and entertainment, banks (16 of the top 20 global banks), and almost all major antivirus software vendors. He gave some daily statistics: 30+ TB/s of traffic served, 600 million IPv4 addresses, 3 trillion HTTP requests, and 260 TB of compressed logs.

Then he started talking about DDoS attacks, where the attackers aim to damage the provider by overwhelming its resources. The attackers often recruit an army of compromised drone/bot machines, and they look for amplification of the requests sent by this drone army.

Bruce showed a graph of the largest DDoS attacks by year. The attacks have been growing exponentially in size (in Gbps). 2017 saw the largest attack yet, bigger by a factor of two, reaching 600 Gbps at some point during the attack. WOW!

In 2016, 19 attacks exceeded 100 Gbps. The March 12, 2016, DNS reflection attack reached 200 Gbps. The most popular attacks are the ones with the largest amplification, defined as the ratio of response size to request size. DNS reflection attacks have an amplification factor of 28 to 54. The mechanism used for blocking this attack was built on Prolexic's IP anycast scrubbing centers. In this setup the origin server has dozens of scrubbing centers/servers that filter the requests first and allow only good ones to go to the origin server.
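
To make the amplification factor concrete, here is a quick back-of-the-envelope calculation; the byte sizes below are my own illustrative assumptions, not numbers from the talk.

# Amplification = response size / request size. The byte sizes are
# illustrative assumptions, not figures from Bruce's talk.
request_bytes = 64       # a small spoofed DNS query
response_bytes = 3000    # a large DNS response (e.g., with many records)

amplification = response_bytes / request_bytes
print(f"amplification factor: ~{amplification:.0f}x")  # ~47x, within the 28-54 range cited

# A botnet pushing 5 Gbps of spoofed queries would then hit the victim with:
print(f"traffic at the victim: ~{5 * amplification:.0f} Gbps")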

It looks like these CDN guys are waging wars with the attackers on the Internet on a daily basis. It turns out attackers generally perform pre-attack reconnaissance using short bursts of attacks/probes, and the CDN companies also monitor/account for these tactics.

Bruce gave some statistics about DDoS attack frequency. The surprising thing is that the gaming industry is the target of the majority of attacks, at 55%. It is followed by software/technology at 25%, media at 5%, and finance at 4%. Why target the gaming houses? A DDoS slows the online game and upsets the gamers, so the attackers do this to extort the gaming companies.

Bruce also talked about the attack on krebsonsecurity.com, the blog of the security researcher Brian Krebs. Akamai hosted this page pro bono. But this site got hit by a huge attack stemming from IoT bots. This was more than twice the volume of any attack they had seen. Akamai held up, but after a couple of days of strong attacks, this started costing Akamai dear money for a pro bono service. After September 26, Google took over hosting the Krebs site pro bono.

Bruce talked about many other attacks, including the SQL injection attack: select * from employees where lname = '' or '1'='1'. The lesson is that you should sanitize your SQL input! Akamai scrubs bad-looking SQL.
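
Here is a minimal sketch of the fix in Python, using sqlite3 with a parameterized query; the table and data are placeholders I made up.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (lname TEXT, salary INTEGER)")
conn.execute("INSERT INTO employees VALUES ('smith', 100)")

user_input = "' or '1'='1"  # attacker-controlled string

# Vulnerable: string concatenation lets the quote break out of the literal,
# and the always-true '1'='1' clause returns every row.
vulnerable = f"SELECT * FROM employees WHERE lname = '{user_input}'"
print(conn.execute(vulnerable).fetchall())  # leaks all rows

# Safe: a parameterized query treats the input as data, not as SQL.
safe = "SELECT * FROM employees WHERE lname = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing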

Another attack type is bot-based account takeover. The attackers first attack other services and obtain password dumps. Then they exploit the fact that many people use the same username and password across services. The attackers take the big password file, break it into pieces, send the pieces to compromised routers, and these routers try the combinations against bank accounts, looking for lucky matches. This is not a DDoS attack; in fact, they try to do this inconspicuously, at rates as slow as a couple of attempts per hour.
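
This "low and slow" pattern is hard to catch with per-account rate limits, since each account sees only a handful of attempts. One common defense idea is to watch how many distinct accounts a single source tries over a long window. Here is a rough sketch of that heuristic; the thresholds and data structures are my own illustrative assumptions, not Akamai's actual detection logic.

from collections import defaultdict
import time

WINDOW_SECONDS = 24 * 3600   # look at a full day, since attempts trickle in slowly
DISTINCT_ACCOUNT_LIMIT = 20  # one IP probing this many different accounts is suspect

# source_ip -> list of (timestamp, username) failed logins
failures = defaultdict(list)

def record_failed_login(source_ip, username, now=None):
    """Record a failed login; return True if the source looks like credential stuffing."""
    now = now if now is not None else time.time()
    attempts = failures[source_ip]
    attempts.append((now, username))
    # Keep only the attempts that are still inside the window.
    failures[source_ip] = [(t, u) for (t, u) in attempts if now - t <= WINDOW_SECONDS]
    distinct_accounts = {u for (_, u) in failures[source_ip]}
    return len(distinct_accounts) >= DISTINCT_ACCOUNT_LIMIT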

My takeaway from the presentation is that whatever these CDNs are charging their customer companies is still a good deal. The way things are set up currently, it is hard for a smaller company like a bank or media site to withstand these attacks alone. On the other hand, I hope these CDN companies stay on top of their game all the time. They carry a huge responsibility; they are too big/dangerous to fail. It is scary to think that it is Akamai who is serving the HTTPS, not the banks. In other words, Akamai has the private keys for the banks and serves HTTPS on their behalf.

Panel 2 (Day 2)

Panel 2 was on "Cloud Scale Big Data Analytics". The panelists were: Pei Zhang (CMU); Vanish Talwar (Nutanix); Indranil Gupta (UIUC); Christopher Stewart (Ohio State University); and Jeff Kephart (IBM).

Indy Gupta talked about intent-based distributed systems, harking back to the "intent-based networking" term coined by Cisco. He cautioned that we are not catering to our real users. We invented the internet, but missed the web. We developed p2p, but missed its applications. He cautioned that we are dangerously close to missing the boat for big data analytics. The typical users of big data analytics are not CS graduates, but rather physics, biology, etc. domain experts. They don't understand "scheduling", "containers/VMs", or "network and traffic virtualization", and in an ideal world they shouldn't be forced to learn these. They know what performance they need, such as latency, throughput, and deadlines, and we should design our big data systems to serve them based on these end goals/metrics.
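
As a toy illustration of what "declaring intent" could look like, here is a sketch I put together (my own hypothetical API, not Indy's): the domain expert states only the end goals, and the system translates them into scheduling and placement decisions.

from dataclasses import dataclass

@dataclass
class AnalyticsIntent:
    """What a domain scientist cares about; no mention of VMs, containers, or schedulers."""
    job_name: str
    max_latency_ms: int = None       # for interactive queries
    min_throughput_mbps: int = None  # for streaming pipelines
    deadline_hours: int = None       # for batch jobs

def plan(intent):
    """Stand-in for the system's job: translate intent into infrastructure choices."""
    if intent.deadline_hours is not None:
        return {"engine": "batch", "preemptible_nodes": True, "parallelism": 64}
    if intent.max_latency_ms is not None:
        return {"engine": "interactive", "preemptible_nodes": False, "parallelism": 8}
    return {"engine": "default", "parallelism": 16}

print(plan(AnalyticsIntent(job_name="genome-alignment", deadline_hours=12)))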

Jeff Kephart from IBM TJ Watson talked about embodied cognition and symbiotic cognitive computing, but in a twist of fate had to attend the panel as a disembodied Skype participant.

Yunqiao Zhang from Facebook talked about disaggregated storage and MapReduce at Facebook. The idea here is to separate the compute and storage resources so that they can evolve, grow, and get utilized independently. The two disaggregated systems, i.e., the compute and storage systems, are tethered together by very fast Ethernet. Network speed and capacity today are so good that it is possible and economical to do this without worrying about traffic. This was very interesting indeed. I found a talk on this, which I will watch to learn more about the disaggregated MapReduce at Facebook.
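
My rough mental model of the idea, as a sketch under my own assumptions (not Facebook's actual architecture): the compute workers hold no local data and pull their inputs from a separate storage tier over the network, so the two pools can be sized and upgraded independently.

from concurrent.futures import ThreadPoolExecutor

# Toy model of disaggregation: the "storage tier" is a dict standing in for a
# remote blob store reached over the network; the map workers own no local data.
storage_tier = {f"part-{i}": list(range(i * 100, (i + 1) * 100)) for i in range(8)}

def fetch(block_id):
    """Stand-in for a network read from the storage cluster."""
    return storage_tier[block_id]

def map_task(block_id):
    data = fetch(block_id)  # every input byte crosses the network
    return sum(data)        # the actual compute

# The compute pool can be scaled independently of how much data is stored.
with ThreadPoolExecutor(max_workers=4) as compute_pool:
    partials = list(compute_pool.map(map_task, storage_tier.keys()))

print(sum(partials))  # reduce step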

Pei Zhang from CMU at Silicon Valley talked about collecting IoT data from the physical world.

Chris Stewart from The Ohio State University talked about the need for transparency in big data systems, from the data collection, management, and algorithm design layers all the way to the data translation/visualization layers.

The question and answer session included a question on the gap between the data mining and cloud systems communities. The panel members said that more collaboration is needed, while it is inevitable and even useful to look at the same problems from different perspectives. A couple of panel members remarked that today the best place these communities collaborate is inside companies like Facebook and Google.

Keynote 3 (Day 3)

Henning Schulzrinne talked about "Telecom policy: competition, spectrum, access and technology transitions". He has been working in the government on and off for the last 7 years, and so was able to give a different perspective than the usual academic one. He talked about opportunities for research that go beyond classical conference topics.

He listed the key challenges as:
+ competition & investment poorly understood
+ spectrum is no longer just bookkeeping
+ rural broadband is about finding the right levers
+ emergency services still stuck in pre-internet

He talked at length about network economics. What we as CS guys have been optimizing turns out to be a very small sliver of the network economics: equipment 4%, construction 11%, operations 85%. We CS researchers have been optimizing only equipment and have been ignoring economics! We should focus more on facilitating operations. Operations is not efficient; if we can figure out how to make networks more easily operable and require fewer human resources, we will have a larger impact than by tweaking protocols.
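
To see why this ordering matters, here is a quick back-of-the-envelope comparison using the 4/11/85 split from the talk (the improvement percentages are hypothetical):

# Share of total network cost, from the talk: equipment 4%, construction 11%, operations 85%.
equipment, construction, operations = 0.04, 0.11, 0.85

# Even a heroic 50% improvement in equipment efficiency...
print(f"50% better equipment saves {0.5 * equipment:.1%} of total cost")    # 2.0%

# ...is beaten by a modest 10% reduction in operational effort.
print(f"10% leaner operations saves {0.1 * operations:.1%} of total cost")  # 8.5%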

He also talked about rural broadband, and mentioned that drones/balloons are inapplicable because their capacity is not enough. The cost of deployment in rural areas is high, and the incentive for companies to deploy is low. But pretty much everyone has wired telephone service; how did that happen? There was an unspoken bargain: the government said to AT&T, we'll give you a monopoly, and you'll give us universal service. This was never stated but understood. He said that to solve the rural broadband problem, policy levers need to be pulled:
+ decrease cost of serving: dig once: bury cable during street repair & construction
+ provide funding: universal service fund (US $8 billion from tax money).

He talked about recycling the TV broadcast spectrum and how this is a very active issue now. He also talked about serving the disabled via subtitle requirements, text-to-911, VoIP emergency, and wireless 911 services.

To conclude, he asked us to think about the network problem holistically, including economics and policy in the equation. Many of the problems are incentive problems. And there is a need to think in decades, not conference cycles! Network performance is rarely the key problem; academics work on things that can be measured, even when they are not that important.

Panel 3 (Day 3)

Panel 3 was on "Federal Funding for Research in Networking and Beyond". The panelists were Vipin Chaudhary (US NSF); Richard Brown (US NSF); and Reginald Hobbs (US Army Research Lab).

Rick Brown talked about the NSF CISE divisions:
+ CNS: Computer and Network Systems
+ CCF: Computing and Communication Foundations
+ IIS: Information & Intelligent Systems
+ OAC: Office of Advanced Cyberinfrastructure

He mentioned that NSF was established by Congress in 1950, with a yearly budget of $3.5 million, out of the post-WW2 understanding of the importance of science to the country. NSF promotes a bottom-up basic research culture, in contrast to NIH, NASA, and DARPA, which tell you what to work on and build.

The total NSF 2017 budget is $7.8 billion. NSF gets around 50K proposals a year and funds about 10K of them. 95% of the budget goes to grants; only 5% goes to operational costs.

Reginald Hobbs talked about the funding & research collaboration opportunities at the Army Research Laboratory.

Vipin Chaudhary talked first about NSF broadly and then specifically about the Office of Advanced Cyberinfrastructure at NSF. He said that the CISE budget is approximately $840M, and that 83% of academic research in CS is funded by NSF. (I didn't expect this ratio to be this high.)

He described the NSF I-Corps program at the end of his talk, which was very interesting for its support for entrepreneurial activities. This program helps you figure out if you are facing a valley of death or a black hole in your research commercialization process. Most academic spinouts fail because they develop something no one cares about. I-Corps provides support for you to meet with customers and test your hypotheses about what your product should be, based on their feedback.
