
Trying out Project Riff on Docker for Mac with Kubernetes

Today, I decided to play around a bit with the local Kubernetes support in the new beta release of Docker for Mac. I took the Project Riff setup work done by Brian McClain from my Tech Marketing team and got it working in this new environment as a quick test of its functionality and usability. It's actually REALLY easy to get started with the support. Once you download and install the beta, you just need to enable Kubernetes within Docker; the setting is inside the Docker Preferences panel.
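Once Docker finishes pulling down the Kubernetes components, a couple of standard kubectl commands make for a quick sanity check that the cluster is actually up. You should see a single local node and a new "docker-for-desktop" context:

kubectl get nodes
kubectl config get-contexts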



Once installed and up and running, it pretty much acts like minikube in that you get a tiny K8s implementation to play around with. In order to test with Brian's demos (https://github.com/dbbaskette/riff-demos), I needed to make a few tweaks.

1) Obviously, the minikube install and start steps can be removed.

2) You have to change your kubectl context to "docker-for-desktop" (the exact command is in the sketch after this list).

3) minikube has a nice command to output a formatted URL for accessing a service you have created (minikube service --url demo-riff-http-gateway). I haven't found an equivalent command in the Docker Kubernetes solution, so I just dropped back to the kubectl CLI to get the info I needed. The hostname/IP is easy... it can be localhost, but we still need the port.

kubectl get svc demo-riff-http-gateway -o jsonpath='{.spec.ports[0].nodePort}'

will get that info for the service.
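Putting steps 2 and 3 together, here is a minimal shell sketch of the whole swap (the GATEWAY_PORT variable name is just mine; the context and service names come straight from the demo scripts):

# Point kubectl at the Docker for Mac cluster instead of minikube
kubectl config use-context docker-for-desktop

# Grab the NodePort for the riff HTTP gateway and build the URL by hand
GATEWAY_PORT=$(kubectl get svc demo-riff-http-gateway -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://localhost:${GATEWAY_PORT}"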

That's all you need to adjust to get these Project Riff demos up and running on Docker for Mac's Kubernetes.

