

Showing posts from 2017

Must See TV

In my earlier years, NBC Television had a slogan for its primetime line-up: Must See TV. It really was must-see, because there wasn't a great way to watch a show later if you missed it, and most people in the office were talking about what happened the next day. So, if you missed it, you either got a verbal recount of the show or you waited six months. I have been to way too many conferences in my career, and very few of the main stage presentations or keynotes would fall into what I would consider the Must See category. SpringOne Platform 2017 provided us with LOTS of those moments, and thanks to recordings and streaming video, this Must-See TV can be posted for easy consumption. If you missed the event, I encourage you to at least take a look at the replays of the keynotes from S1P. The Pivotal speakers all kept their presentations light yet informative, with a good dose of humor. I will be the first to admit that vendor presentations at these events are typically so

Is Hadoop Dead or Just Much Less Important?

I recently read a blog post discussing the fever to declare Hadoop dead. While I agreed with the premise of the post, I didn't agree with some of its conclusions. In summary, its conclusion was that if Hadoop is too complex, you are using the wrong interface. I agree with that at face value, but in my opinion the user interface only addresses part of the complexity; managing a Hadoop deployment is still a complex undertaking. Time to value is important for enterprise customers, which is why the tooling above Hadoop was such an early pain point. The core Hadoop vendors wanted to focus on how processes executed and on programming paradigms, and seemed to ignore the interface to Hadoop. Much of that stems from the desire for Hadoop to be the operating system for Big Data. There was even a push to make it the compute cluster manager for all things in the Enterprise. This effort, and others like it, tried to expand the footprint of commercial distributions of H

Adding New Machine Types to Pivotal Cloud Foundry via Ops Manager API

Most of my career has been spent on infrastructure and data products, but recently I was asked to refocus slightly and work a bit more with data architectures in the cloud. That's a pretty broad set of topics, but who doesn't love a challenge? One of the first things I like to do when working with a new set of technologies is to set them up, break them, set them up again, and then break them in a new and novel way. I am actually pretty talented at breaking things, so this part comes really easy. My first adventure was setting up Pivotal Cloud Foundry on Google Compute, and then using the Google Compute Service Broker. The first step was getting the IaaS set up and configured. I looked around a bit and located a very helpful Terraform repo that was exactly what was needed to jumpstart the process. With it, setting up Google Compute for PCF was as simple as setting a couple of variables and then running terraform apply. These Terraform scripts are very flexi
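As a rough sketch of that workflow (the variable names in the tfvars file below are hypothetical, for illustration only; the real ones are defined in the repo's variables.tf):

    # terraform.tfvars -- illustrative values; check the repo's variables.tf for the actual names
    project  = "my-gcp-project"
    region   = "us-central1"
    env_name = "pcf"

    # Run from the root of the cloned repo; terraform.tfvars is loaded automatically
    terraform init     # download the Google provider and any referenced modules
    terraform plan     # preview the resources that will be created
    terraform apply    # create the infrastructure in the GCP project

Credentials for the Google provider can be supplied through the GOOGLE_CREDENTIALS environment variable (a service account key), which keeps secrets out of the tfvars file and out of version control.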