
Re: [802.3_100GNGOPTX] Warehouse Scale Computing, impact on reach objective



Andy,

Thanks for the reference. 

 

Your remarks, restated here, are my driving motivations:

 

By this I mean reducing cost in one area is not beneficial if it results in an increase in overall costs, since cost is just shifted from one area to the other.   

Such considerations should play a part in the determination of a reach objective.  As we increase the reach to include an ever higher percentage of the links, as described in kolesar_02_0911_NG100GOPTX, we should be cognizant of the increase in relative cost needed to achieve this increased reach, and evaluate whether, when considered at a network level with a distribution of link lengths as per Paul's presentation, we are decreasing overall cost or not.

That is why I've been developing the solution set analyzer, a tool for comparing physical-layer metrics across the total data center or subsets of it.  The tool is posted to the SG web site, and I will be presenting a user's guide to the MMF ad hoc, chaired by Jonathan King, via teleconference this Tuesday.  The contribution has been distributed to the membership of the ad hoc, but because of the 500 kB size limit its distribution to the SG reflector was blocked.  I am told it will be posted to the site soon.

 

Regards,

Paul

 


From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Sent: Saturday, November 26, 2011 5:21 PM
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_100GNGOPTX] Warehouse Scale Computing, impact on reach objective

 

... oops

 

anyway, I was just scanning emails over a delicious lunch of BBQ pork, and felt a refreshing breeze of common sense and rigour from the direction of my mobile. I look forward to reading next week what looks to be an excellent reference.

 

Well done 

 

Chris 


On Nov 26, 2011, at 10:55 AM, "Andy Moorwood" <amoorwood@xxxxxxxxxxx> wrote:

Study Group Members,

I share the regret, expressed in several posts to this list, that large internet data center operators are unwilling to make their requirements known in an open, non-confidential manner.  I would like to forward to the group a paper, recommended by a colleague, that may help close this information gap.

The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines

Luiz André Barroso and Urs Hölzle

www.morganclaypool.com

ISBN: 9781598295566 paperback

ISBN: 9781598295573 ebook

The paper may be viewed, without charge, at the Morgan & Claypool site, but please be aware of the restrictive notices on page (iv).

 http://www.morganclaypool.com/doi/pdf/10.2200/S00193ED1V01Y200905CAC006

Copies may also be purchased at internet book sites.

The paper, 120 pages in all, describes the specific challenges faced when applications implemented by internet content providers, such as Google and Microsoft, require many thousands, even tens of thousands, of servers.  Indeed, the usage model of the data center changes from "a place to house servers" to "a building to host an application".

The introduction, pages 1 to 11, gives insight into why these warehouse computers differ from traditional data centers and how this impacts the need for communication bandwidth within the data center.

The ideal system, as described by Barroso and Hölzle, would be one where the cross-sectional communication bandwidth of the data center equals the aggregate bandwidth of the servers, i.e. a network without oversubscription.  In such a system the application developer can freely locate functions throughout the network, optimally distributing load and minimizing computational and HVAC hotspots.  The authors concede that economic considerations cannot support such a model, and that oversubscription levels of 5:1 are evident between racks of servers (80 servers per rack) arranged 10 racks to a group (800 servers).  Using the terminology of kolesar_02_0911_NG100GOPTX, page 4, citing barbieri_01_0107.pdf, this oversubscription occurs between the "access" and "distribution" network layers.

http://www.ieee802.org/3/100GNGOPTX/public/sept11/kolesar_02_0911_NG100GOPTX.pdf
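The oversubscription arithmetic above can be sketched numerically.  This is purely illustrative: the per-server bandwidth figure is an assumption, not a number from the paper; only the rack count, group size, and 5:1 ratio come from the discussion above.

```python
# Illustrative sketch of the 5:1 oversubscription arithmetic discussed above.
# The per-server bandwidth is an assumed value for illustration only.

servers_per_rack = 80
racks_per_group = 10
server_bw_gbps = 1  # assumed bandwidth per server, in Gb/s

# Aggregate bandwidth the servers in one group could offer the network
offered_bw = servers_per_rack * racks_per_group * server_bw_gbps  # 800 Gb/s

# With 5:1 oversubscription at the access-to-distribution boundary,
# the uplink capacity is one fifth of the offered bandwidth.
oversubscription = 5
uplink_bw = offered_bw / oversubscription  # 160 Gb/s

print(f"offered: {offered_bw} Gb/s, uplink: {uplink_bw:.0f} Gb/s")
```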

 

Decreasing the relative cost of these access-to-distribution links would enable warehouse-scale computer builders to reduce the level of oversubscription and get closer to their ideal system.  Throughout the paper the authors use a system-wide approach to find the lowest cost.  By this I mean reducing cost in one area is not beneficial if it results in an increase in overall costs, since cost is just shifted from one area to the other.

 

Such considerations should play a part in the determination of a reach objective.  As we increase the reach to include an ever higher percentage of the links, as described in kolesar_02_0911_NG100GOPTX, we should be cognizant of the increase in relative cost needed to achieve this increased reach, and evaluate whether, when considered at a network level with a distribution of link lengths as per Paul's presentation, we are decreasing overall cost or not.
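The network-level evaluation described above amounts to a weighted-cost calculation.  A minimal sketch, with entirely hypothetical coverage fractions and relative cost figures (not data from any presentation), might look like this:

```python
# Hypothetical sketch of the network-level cost comparison described above:
# weigh each candidate reach objective's relative link cost by the fraction
# of links it covers, with the remainder falling back to a costlier
# longer-reach solution. All numbers below are illustrative assumptions.

coverage = {100: 0.85, 150: 0.95}     # reach (m) -> assumed fraction of links covered
relative_cost = {100: 1.0, 150: 1.3}  # assumed relative cost of the MMF link
long_reach_cost = 2.5                 # assumed relative cost of links beyond MMF reach

def network_cost(reach_m):
    """Average relative cost per link if reach_m is the reach objective."""
    covered = coverage[reach_m]
    return covered * relative_cost[reach_m] + (1 - covered) * long_reach_cost

for reach_m in (100, 150):
    print(f"{reach_m} m objective -> average relative cost {network_cost(reach_m):.3f}")
```

With these made-up numbers the longer reach actually raises the average cost per link, which illustrates the point of the paragraph above: higher coverage does not by itself guarantee lower overall network cost.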

 

Best Regards

Andy