Been there, forked that: What the Unix-Linux schism can teach us about Hadoop’s future

ciberelmaster:


Originally published on Gigaom:

Hadoop is fast becoming the preferred way to store and process big data. By T-Systems' estimates, in five years 80 percent of all new data will first land in the Hadoop Distributed File System (HDFS) or in alternative object storage architectures.

Yet amid the excitement around this open source framework, enterprise users risk overlooking that not all Hadoop flavors are created equal. Choosing one implementation over another can mean veering off the path of genuine open source software and heading down the dead-end street of expensive vendor lock-in and stunted innovation.

A little history lesson

The enterprise tech world has been there before. Remember the Unix vs. Linux schism? Unix began as a project at Bell Labs in the early 1970s, with UC Berkeley's BSD variants following later that decade. Unix was acclaimed for its performance, stability and scalability. It was cutting-edge at the time in its multi-user and multitasking capabilities, its support for IP networking, its tools…

View original (832 more words)
