Why webscale innovators should look beyond their bubble

Reblogged from Gigaom

The heavy hitters of the internet might feel pretty good about themselves for figuring out how to handle big data sets, but national laboratories have been managing exabyte-scale workloads for years.

It would have been wise for developers at webscale properties to check with government supercomputing experts. Before the Hadoop Distributed File System and the Google File System hit the scene, there were things like Lustre and the Parallel Virtual File System, said Gary Grider, high-performance computing division leader at Los Alamos National Laboratory, at GigaOM’s Structure conference in San Francisco on Thursday.

“Really, if you reduce the semantics, … they would do the same thing, roughly,” he said. “It’s fascinating how we don’t work together as much as we should. If we worked together we probably would be further down the road than we are.”
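To make Grider's point concrete, here is a minimal sketch (not from the article) of the same logical write hitting two of these systems. Lustre is POSIX-mounted, so ordinary file I/O works against it, while HDFS is reached through a client library; the mount point, namenode host, and port below are made-up placeholders.

```python
# Hypothetical sketch: one logical operation, two distributed file systems.
# Lustre (like PVFS) is kernel-mounted, so plain POSIX I/O suffices;
# HDFS needs a client API (here pyarrow's wrapper around libhdfs).

import pyarrow.fs as pafs

data = b"checkpoint block 0\n"

# Lustre: the mount makes it look like any local path (path is assumed).
with open("/lustre/scratch/checkpoint.dat", "wb") as f:
    f.write(data)

# HDFS: same operation through a client; host and port are assumed.
hdfs = pafs.HadoopFileSystem(host="namenode.example.org", port=8020)
with hdfs.open_output_stream("/scratch/checkpoint.dat") as f:
    f.write(data)
```

Reduced to these semantics, the two code paths do the same thing, which is roughly the overlap Grider describes.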

Sure, there are differences in goals and culture. Los Alamos…

View original post (3,365 more words)
