Continuing on the theme of "so easy that Hulk could do it", I recently wrote this piece for the Pivotal blog, but I seem to have outrun the launch of the new Pivotal Technical Blog. Since the blog is still in a "coming soon" state, it was decided to add the content to the product documentation instead. Nothing earth-shattering here, just some best practices for taking slave nodes out of a Hadoop cluster in the proper manner.

Decommissioning, Repairing, or Replacing Hadoop Slave Nodes

Decommissioning Hadoop Slave Nodes

The Hadoop distributed scale-out cluster-computing framework was inherently designed to run on commodity hardware in a typical JBOD configuration (just a bunch of disks; a disk configuration where individual disks are accessed directly by the operating system, without RAID). The idea behind it relates not only to cost, but also to fault tolerance, where nodes (machines) or disks are expected to fail occasionally without bringing...
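As a quick sketch of what "taking a slave node out in the proper manner" typically looks like on the HDFS side, the standard flow is exclude-file plus refresh. The excludes file path below is an assumption for illustration; it must match whatever `dfs.hosts.exclude` points to in your cluster's hdfs-site.xml.

```shell
# Sketch of the usual HDFS decommissioning steps (run on/against the NameNode).
# Assumption: /etc/hadoop/conf/dfs.exclude is the file configured as
# dfs.hosts.exclude in hdfs-site.xml -- adjust to your environment.

# 1. Add the hostname of the node to decommission to the excludes file.
echo "slave03.example.com" >> /etc/hadoop/conf/dfs.exclude

# 2. Tell the NameNode to re-read its include/exclude lists; this starts
#    re-replicating the node's blocks onto the remaining DataNodes.
hdfs dfsadmin -refreshNodes

# 3. Monitor until the node's state changes from "Decommission in progress"
#    to "Decommissioned", after which it is safe to shut down.
hdfs dfsadmin -report
```

Because the NameNode copies every block off the node before marking it decommissioned, this is the safe alternative to simply powering a DataNode off and letting replication catch up after the fact.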