Why Storage for Big Data is Hard


Session Information
 

The data tsunami is upon us, with data "volume, velocity, and variety" all exploding. As data collections grow, finding affordable mechanisms to preserve them becomes increasingly crucial, especially because existing business models for large-scale, long-term storage fit poorly with current research funding models: typically, (a) storage costs are impractically high, and/or (b) file owners must keep paying recurring charges even after the relevant research funding has expired. The key issues include (i) the cost of storing large datasets, (ii) over the long term, while keeping the datasets both (iii) accessible to the owner and (iv) discoverable and accessible by others, (v) using shorter-term funding such as a 2-5 year grant, with (vi) minimal recurring costs, providing (vii) multiple copies for resiliency at (viii) minimal cost per TB per copy per year. In this talk, we'll discuss a way to address all of these issues by combining an established technology with an innovative business model, providing the lowest cost to researchers, over the longest period of time, with the greatest reliability.
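To make metric (viii) concrete, here is a minimal sketch in Python of how cost per TB per copy per year might be computed. All dollar figures, capacities, and durations below are hypothetical illustrations, not numbers from the talk:

def cost_per_tb_copy_year(upfront_cost, recurring_cost_per_year,
                          capacity_tb, copies, years):
    # Total cost of ownership over the funded lifetime, normalized
    # per TB of data, per stored copy, per year.
    total_cost = upfront_cost + recurring_cost_per_year * years
    return total_cost / (capacity_tb * copies * years)

# Hypothetical example: a $50,000 system holding 500 TB of data in
# 2 copies, with $2,000/year in recurring costs, over a 5-year grant.
print(cost_per_tb_copy_year(upfront_cost=50_000,
                            recurring_cost_per_year=2_000,
                            capacity_tb=500,
                            copies=2,
                            years=5))   # -> 12.0 dollars per TB per copy per year

Even this simplified model makes issue (b) visible: the recurring term keeps accruing every year, whether or not the grant that paid for the data is still active.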

 

Presenter(s)

Henry Neeman