
Posts

Data Pro? SpringOne Platform has you Covered!

In a funny coincidence, SpringOne Platform, coming up on September 24th, has lots of talks about stateless apps and stateless functions, and it just happens to be located in Washington, DC…a stateless district. Those workloads are interesting, but I originally worked with the data products from Pivotal, so how we can manage and maintain state is a lot more interesting to me. Once “Platform” was added to the name of the conference, it began a metamorphosis into one that covered not only Spring development, but also application/business transformation, DevOps, cloud, and data. So, if you consider yourself a data or database person, the conference has a lot of interesting sessions to offer. I put on my green Greenplum Chucks and took a walk through the conference agenda to see what would interest me as a Data Professional, and thought I could provide some highlights. The conference officially begins on Tuesday, but if you are interested in an in-memory data grid, super-fast transa...
Recent posts

CF Summit 2018

I just returned from CF Summit 2018 in Boston. It was a great event this year, made even more exciting for Pivotal employees by our IPO, which happened while we were there. I had every intention of writing a technology-focused post, but after having some time to reflect on the week I decided to take a different route. After all the sessions were complete, I kept coming back to the large number of end users I had seen present, so I went through the schedule and picked out the names of companies that are leveraging Cloud Foundry in some way and are so passionate about it that they spoke about it at this event. I might have missed a couple when compiling this list, so if you know of one not on here, it was not intentional. Allstate, Humana, T-Mobile, ZipCar, Comcast, United States Air Force, Scotiabank, National Geospatial-Int...

Trying out Project Riff on Docker for Mac with Kubernetes

Today, I decided to play around a bit with the local Kubernetes support in the new beta release of Docker for Mac. I took the Project Riff setup work done by Brian McClain from my Tech Marketing team and got it working on this new environment as a quick test of its functionality and usability. It’s actually REALLY easy to get started with the support. Once you download and install the beta, you need to enable Kubernetes within Docker; you can find the setting inside the Docker Preferences panel. Once installed and up and running, it pretty much acts like minikube in that you get a tiny K8S implementation to play around with. In order to test with Brian's stuff (https://github.com/dbbaskette/riff-demos), I needed to make a few tweaks: 1) Obviously, the minikube install and start steps can be removed. 2) You have to change your kubectl context to "docker-for-desktop" (see the sketch below). 3) minikube has a nice comman...
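Here is a minimal sketch of that context switch, assuming kubectl is already installed and the Kubernetes checkbox has been ticked in the Docker for Mac preferences:

    kubectl config use-context docker-for-desktop   # point kubectl at Docker's built-in cluster
    kubectl config current-context                  # confirm the switch took effect
    kubectl get nodes                                # a single docker-for-desktop node should report Ready

From there, the riff-demos scripts should be able to target the local cluster the same way they would a minikube one.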

Must See TV

In my earlier years, NBC Television had a slogan for its primetime line-up: Must See TV. It really was must-see, because there wasn’t a great way to watch it later if you missed it, and most people in the office were talking about what happened the next day. So, if you missed it, you either got a verbal recap of the show or you waited 6 months. I have been to way too many conferences in my career, and very few of the main-stage presentations or keynotes would fall into what I would consider the Must See category. SpringOne Platform 2017 provided us with LOTS of those moments, and thanks to recordings and streaming video, this Must-See TV can be posted for easy consumption. If you missed the event, I encourage you to at least take a look at the replays of the keynotes from S1P. The Pivotal speakers all kept their presentations light yet informative, with a good dose of humor. I will be the first to admit that vend...

Is Hadoop Dead or Just Much Less Important?

I recently read a blog discussing the fever to declare Hadoop dead. While I agreed with the premise of the blog, I didn't agree with some of its conclusions. In summary, the conclusion was that if Hadoop is too complex, you are using the wrong interface. I agree with that conclusion at face value, but in my opinion, the user interface only addresses part of the complexity; managing a Hadoop deployment is still a complex undertaking. Time to value is important for enterprise customers, which is why the tooling above Hadoop was such an early pain point. The core Hadoop vendors wanted to focus on how processes executed and on programming paradigms, and seemed to ignore the interface to Hadoop. Much of that stems from the desire for Hadoop to be the operating system for Big Data. There was even a push to make it the compute cluster manager for all things in the enterprise. This effort, and others like it, tried to expand the footprint of commercial distributions...

Adding New Machine Types to Pivotal Cloud Foundry via Ops Manager API

Most of my career has been spent on infrastructure and data products, but recently I was asked to refocus slightly and work a bit more with data architectures in the cloud. That's a pretty broad list of topics, but who doesn't love a challenge? One of the first things I like to do when working with a new set of technologies is to set them up, break them, set them up again, and break them in a new and novel way. I am actually pretty talented at breaking things, so this part comes really easily. My first adventure was setting up Pivotal Cloud Foundry with Google Compute, and then using the Google Compute Service Broker. The first step was getting the IaaS set up and configured. I looked around a bit and located a very helpful Terraform repo that was exactly what was needed to jumpstart the process. Now, the process of setting up Google Compute for PCF was as simple as setting a couple of variables and then running terraform a...
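For context, a rough sketch of that Terraform loop looks something like the following; the variable names and file path here are illustrative guesses, not the actual inputs the repo defines, and it assumes the repo uses the standard Terraform Google provider:

    # authenticate the Google provider with a service account key (path is illustrative)
    export GOOGLE_CREDENTIALS="$(cat ~/gcp/service-account.json)"

    terraform init                                                             # download the provider plugins
    terraform plan  -var "project=my-gcp-project" -var "region=us-central1"   # preview the changes
    terraform apply -var "project=my-gcp-project" -var "region=us-central1"   # stand up the GCP resources for PCF

Once that completes, Ops Manager and the rest of the PCF install can be pointed at the infrastructure Terraform created.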

Pivotal HAWQ flies into the Hortonworks Sandbox

I have been working with Hadoop for quite a few years now and frequently find myself needing to try bits of code out on multiple distributions. During these times, the single-node virtual editions of the various Hadoop distributions have always been my go-to resource. Of all the VMs available, I believe the most seamless and well-done version is the Hortonworks Sandbox. In fact, in the work I am starting now to build a new PHD 3.0 and HAWQ virtual playground, I view the Hortonworks Sandbox as the bar that needs to be exceeded. When we at Pivotal first announced that HAWQ would be available on HDP, some of my first thoughts were about how nice it would be to give customers the ability to install HAWQ directly onto the Hortonworks Sandbox so they would have a place to take the software for a spin. Earlier this week, I had a request to do a live customer demonstration of installing HAWQ on HDP 2.2.4 leveraging Ambari. This activity kicked off those Sandbox thoughts agai...