
Adding New Machine Types to Pivotal Cloud Foundry via Ops Manager API

Most of my career has been spent on infrastructure and data products, but recently I was asked to refocus slightly and work a bit more with data architectures in the cloud. That's a pretty broad set of topics, but who doesn't love a challenge? One of the first things I like to do with a new set of technologies is to set it up, break it, set it up again, and break it in a new and novel way. I'm actually pretty talented at breaking things, so that part comes easily.

My first adventure was setting up Pivotal Cloud Foundry on Google Compute Engine and then using the Google Compute Service Broker. The first step was getting the IaaS set up and configured. After looking around a bit, I found a very helpful Terraform repo that was exactly what I needed to jumpstart the process. With it, setting up Google Compute Engine for PCF was as simple as setting a couple of variables and running terraform apply. The Terraform scripts are very flexible: they let you use Google Cloud SQL for the internal Pivotal Cloud Foundry databases and Google Cloud Storage for all of Cloud Foundry's object storage requirements.
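For reference, the workflow looked roughly like this. It's a sketch: the variable names are placeholders, so check the repo's variables.tf for the real ones (and depending on your Terraform version you may need terraform get first).
# "project" and "region" are placeholder variable names; see the repo's variables.tf
$ terraform init
$ terraform plan -var 'project=my-gcp-project' -var 'region=us-east1'
$ terraform apply -var 'project=my-gcp-project' -var 'region=us-east1'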

Once the infrastructure build-out was complete, I configured Ops Manager and began installing Pivotal Elastic Runtime. While configuring Elastic Runtime, I noticed that the machine types offered for the resources didn't align with the standard Google Compute Engine machine sizes. Often this isn't a big issue; for instance, when your requirements fall between two GCE standard machine types, a custom machine type is often cheaper. Sometimes, though, the standard Google machine types are cheaper, so I began looking into how to make them an option within the Elastic Runtime configuration screens.

My first attempt was a failure, but being new to these technologies, it was a great learning experience. I dropped to the command line, changed the cloud config to include the machine types I wanted, then edited the Elastic Runtime manifest and redeployed... SUCCESS. But that success was short-lived. Next, I installed the Google Compute Service Broker, and it was taking a REALLY long time. I popped over to the GCE console and noticed spinning icons on many of the machines associated with ERT: the deployment was reverting to its GUI-configured state. DOH! I asked a couple of people how to solve the problem with very little luck; finally, while I was in Pivotal's San Francisco office, I was pointed to the Ops Manager product manager. He told me that I could indeed accomplish what I wanted via the Ops Manager API.

So the next step was to learn to use the Ops Manager API. The documentation was very helpful, but there was still a bit of trial and error involved:

https://docs.pivotal.io/pivotalcf/1-9/customizing/ops-man-api.html
https://opsman-dev-api-docs.cfapps.io/#viewing-product-properties

1) Authenticate with the API Endpoint.
Since this was a test install rather than production, I'm not using signed certificates, so I skip SSL validation when targeting UAA after SSH-ing into the Ops Manager machine itself. Once logged in, you need to get the admin token: enter the client ID, press <Enter> for the secret, and then the admin username/password combo used to log into Ops Manager. This is the method for Internal Authentication; if an external identity provider is used, the documentation shows the alternative method.
$ ssh -i ubuntu.priv ubuntu@104.196.58.151
$ uaac target https://pcf.tech.demo.net/uaa --skip-ssl-validation
$ uaac token owner get
Client ID: opsman
Client secret: <ENTER>
User name: admin
Password: *******
Once authenticated, you can retrieve the access token that is used to access the Ops Manager API:
$ uaac contexts
[0]*[https://pcf.tech.demo.net/uaa]
skip_ssl_validation: true
[0]*[admin]
user_id: aa2f7f87-e309-4236-ab0a-88c9e14cbbb9
client_id: opsman
access_token: eyJhbGciOiJSUzI1N … <TRUNCATED>
token_type: bearer
refresh_token: eyJhbGciOiJSUFSE … <TRUNCATED>
expires_in: 43199
scope: opsman.admin scim.me opsman.user uaa.admin clients.admin
jti: f08d5c9bc162aa74
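Rather than copying the token by hand into each request, you can capture it in a shell variable for the curl calls that follow. A minimal sketch, assuming the uaac output format shown above:
# pull the access_token line out of the current uaac context (output format may vary by uaac version)
$ TOKEN=$(uaac context | grep access_token | awk '{print $2}')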
Armed with the access token, you can now GET and PUT information via the API. Using curl from the command line keeps things easy. To query the available VM types, simply issue a GET to the vm_types endpoint; this returns the configuration of all the VM types.
$ curl https://pcf.tech.demo.net/api/v0/vm_types -X GET -H "Authorization: Bearer eyJhbGciOiJSUzI1N … <TRUNCATED>" -k

{"vm_types":[{"name":"micro","ram":1024,"cpu":1,"ephemeral_disk":8192,"builtin":true},{"name":"micro.cpu","ram":2048,"cpu":2,"ephemeral_disk":8192,"builtin":true},{"name":"small","ram":2048,"cpu":1,"ephemeral_disk":8192,"builtin":true},{"name":"small.disk","ram":2048,"cpu":1,"ephemeral_disk":16384,"builtin":true},{"name":"medium","ram":4096,"cpu":2,"ephemeral_disk":8192,"builtin":true},{"name":"medium.mem","ram":6144,"cpu":1,"ephemeral_disk":8192,"builtin":true},{"name":"medium.disk","ram":4096,"cpu":2,"ephemeral_disk":32768,"builtin":true},{"name":"medium.cpu","ram":4096,"cpu":4,"ephemeral_disk":8192,"builtin":true},{"name":"large","ram":8192,"cpu":2,"ephemeral_disk":16384,"builtin":true},{"name":"large.mem","ram":12288,"cpu":2,"ephemeral_disk":16384,"builtin":true},{"name":"large.disk","ram":8192,"cpu":2,"ephemeral_disk":65536,"builtin":true},{"name":"large.cpu","ram":4096,"cpu":4,"ephemeral_disk":16384,"builtin":true},{"name":"xlarge","ram":16384,"cpu":4,"ephemeral_disk":32768,"builtin":true},{"name":"xlarge.mem","ram":24576,"cpu":4,"ephemeral_disk":32768,"builtin":true},{"name":"xlarge.disk","ram":16384,"cpu":4,"ephemeral_disk":131072,"builtin":true},{"name":"xlarge.cpu","ram":8192,"cpu":8,"ephemeral_disk":32768,"builtin":true},{"name":"2xlarge","ram":32768,"cpu":8,"ephemeral_disk":65536,"builtin":true},{"name":"2xlarge.mem","ram":49152,"cpu":8,"ephemeral_disk":65536,"builtin":true},{"name":"2xlarge.disk","ram":32768,"cpu":8,"ephemeral_disk":262144,"builtin":true},{"name":"2xlarge.cpu","ram":16384,"cpu":16,"ephemeral_disk":65536,"builtin":true}]}
One thing that's very important to remember: when adding new machine types, you are replacing the entire vm_types collection, so you should copy the original entries and add any new machines to that list before submitting it to the API. This ensures you are appending machine types rather than replacing them.

For this example, I added the following two machine types to the list: a g1-small and an n1-standard-4, both standard Google machine types.
{
  "name": "gce-small",
  "machine_type": "g1-small",
  "ephemeral_disk": 8192,
  "ram": 1792,
  "cpu": 1,
  "root_disk_type": "pd-standard",
  "builtin": true
},
{
  "name": "gce-standard-4",
  "machine_type": "n1-standard-4",
  "ephemeral_disk": 131072,
  "ram": 15360,
  "cpu": 4,
  "root_disk_type": "pd-standard",
  "builtin": true
}

To add the machines, issue a PUT to the same vm_types endpoint as above. Notice that the new machine list is just the previous GET response with the new types appended to it.


$ curl https://pcf.tech.demo.net/api/v0/vm_types -X PUT -H "Authorization: Bearer eyJhbGciOiJSUzI1N … <TRUNCATED>" -k -H "Content-Type: application/json" -d '{ "vm_types": [ { "name": "gce-small", "machine_type": "g1-small", "ephemeral_disk": 8192, "ram": 1792, "cpu": 1, "root_disk_type": "pd-standard", "builtin": true },{ "name": "gce-standard-4", "machine_type": "n1-standard-4", "ephemeral_disk": 131072, "ram": 15360, "cpu": 4, "root_disk_type": "pd-standard", "builtin": true }, { "name": "micro", "ram": 1024, "cpu": 1, "ephemeral_disk": 8192, "builtin": true }, { "name": "micro.cpu", "ram": 2048, "cpu": 2, "ephemeral_disk": 8192, "builtin": true }, { "name": "small", "ram": 2048, "cpu": 1, "ephemeral_disk": 8192, "builtin": true }, { "name": "small.disk", "ram": 2048, "cpu": 1, "ephemeral_disk": 16384, "builtin": true }, {"name": "medium", "ram": 4096, "cpu": 2, "ephemeral_disk": 8192, "builtin": true }, { "name": "medium.mem", "ram": 6144, "cpu": 1, "ephemeral_disk": 8192, "builtin": true }, { "name": "medium.disk", "ram": 4096, "cpu": 2, "ephemeral_disk": 32768, "builtin": true }, { "name": "medium.cpu", "ram": 4096, "cpu": 4, "ephemeral_disk": 8192, "builtin": true }, { "name": "large", "ram": 8192, "cpu": 2, "ephemeral_disk": 16384, "builtin": true }, { "name": "large.mem", "ram": 12288, "cpu": 2, "ephemeral_disk": 16384,"builtin": true }, { "name": "large.disk", "ram": 8192, "cpu": 2, "ephemeral_disk": 65536, "builtin": true }, { "name": "large.cpu", "ram": 4096, "cpu": 4, "ephemeral_disk": 16384, "builtin": true }, { "name": "xlarge", "ram": 16384, "cpu": 4, "ephemeral_disk": 32768, "builtin": true }, { "name": "xlarge.mem", "ram": 24576, "cpu": 4, "ephemeral_disk": 32768, "builtin": true }, { "name": "xlarge.disk", "ram": 16384, "cpu": 4, "ephemeral_disk": 131072, "builtin": true }, { "name": "xlarge.cpu", "ram": 8192,"cpu": 8, "ephemeral_disk": 32768, "builtin": true }, { "name": "2xlarge", "ram": 32768, "cpu": 8, "ephemeral_disk": 65536, "builtin": true }, { "name": "2xlarge.mem", "ram": 49152, "cpu": 8, "ephemeral_disk": 65536, "builtin": true }, { "name": "2xlarge.disk", "ram": 32768, "cpu": 8, "ephemeral_disk": 262144, "builtin": true }, { "name": "2xlarge.cpu", "ram": 16384, "cpu": 16, "ephemeral_disk": 65536, "builtin": true } ]}'
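Hand-assembling that JSON payload is error-prone, so one way to script the fetch-append-PUT cycle is with jq. This is a sketch that assumes jq is installed and that $TOKEN holds the access token captured earlier:
# GET the current list, append the new types with jq, and PUT the merged list back
$ curl https://pcf.tech.demo.net/api/v0/vm_types -X GET -H "Authorization: Bearer $TOKEN" -k > vm_types.json
$ jq '.vm_types += [{"name":"gce-small","machine_type":"g1-small","ephemeral_disk":8192,"ram":1792,"cpu":1,"root_disk_type":"pd-standard","builtin":true},{"name":"gce-standard-4","machine_type":"n1-standard-4","ephemeral_disk":131072,"ram":15360,"cpu":4,"root_disk_type":"pd-standard","builtin":true}]' vm_types.json > vm_types_new.json
$ curl https://pcf.tech.demo.net/api/v0/vm_types -X PUT -H "Authorization: Bearer $TOKEN" -k -H "Content-Type: application/json" -d @vm_types_new.json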
Now, reissue the GET query to verify the machines were added to the configuration (output truncated):

$ curl https://pcf.tech.demo.net/api/v0/vm_types -X GET -H "Authorization: Bearer eyJhbGciOiJSUzI1N … <TRUNCATED>" -k


{"vm_types":[{"name":"gce-small","ram":1792,"cpu":1,"ephemeral_disk":8192,"created_at":"2017-03-13T20:32:11.675Z","updated_at":"2017-03-13T20:32:11.675Z","builtin":false},{"name":"gce-standard-4","ram":15360,"cpu":4,"ephemeral_disk":131072,"created_at":"2017-03-13T20:32:11.694Z","updated_at":"2017-03-13T20:32:11.694Z","builtin":false},{"name":"micro","ram":1024,"cpu":1,"ephemeral_disk":8192,"created_at":"2017-03-13T20:32:11.695Z","updated_at":"2017-03-13T20:32:11.695Z","builtin":false},{"name":"micro.cpu","ram":2048,"cpu":2,"ephemeral_disk":8192,"created_at":"2017-03-13T20:32:11.697Z","updated_at":"2017-03-13T20:32:11.697Z","builtin":false},{"name":"small","ram":2048,"cpu":1,"ephemeral_disk":8192,"created_at":"2017-03-13T20:32:11.699Z","updated_at":"2017-03-13T20:32:11.699Z","builtin":false},{"name":"small.disk","ram":2048,"cpu":1,"ephemeral_disk":16384,"created_at":"2017-03-13T20:32:11.700Z","updated_at":"meral_disk":65536,"created_at":"2017-03-13T20:32:11.724Z","updated_at":"2017-03-13T20:32:11.724Z","builtin":false}]}
You can also open Ops Manager and click Elastic Runtime to view the Resource Config settings. The dropdown in the VM Type column should now show the two new VM types. If something goes wrong and you want to remove the new VM types, you can make a DELETE call and the system will reset itself. Once the VM types show up in the system, you can use them in your Elastic Runtime configuration.
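A sketch of that reset call, assuming the same vm_types endpoint accepts DELETE (verify against the API docs linked above before relying on it):
# reset the VM type list back to the built-in defaults
$ curl https://pcf.tech.demo.net/api/v0/vm_types -X DELETE -H "Authorization: Bearer $TOKEN" -k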

And that's all there is to it. A successful step down the Pivotal Cloud Foundry path for me, and hopefully a simple but useful tutorial for someone else.
