The heavy hitters of the internet might feel pretty good about themselves for figuring out how to handle big data sets, but national laboratories have been managing exabyte-scale workloads for years.
It would have been wise for developers at webscale properties to check with government supercomputing experts. Before the Hadoop Distributed File System and the Google File System hit the scene, there were systems like Lustre and the Parallel Virtual File System, said Gary Grider, high-performance computing division leader at Los Alamos National Laboratory, at GigaOM’s Structure conference in San Francisco on Thursday.
“Really, if you reduce the semantics, … they would do the same thing, roughly,” he said. “It’s fascinating how we don’t work together as much as we should. If we worked together we probably would be further down the road than we are.”
Sure, there are differences in goals and culture. Los Alamos…