Most of my career has been spent on infrastructure and data products, but recently I was asked to refocus slightly and work more with data architectures in the cloud. That's a pretty broad set of topics, but who doesn't love a challenge? One of the first things I like to do when working with a new set of technologies is to set them up, break them, set them up again, and break them in a new and novel way. I am actually pretty talented at breaking things, so this part comes really easily.
My first adventure was setting up Pivotal Cloud Foundry with Google Compute, and then using the Google Compute Service Broker. The first step was getting the IaaS set up and configured. I looked around a bit and located a very helpful Terraform repo that was exactly what was needed to jumpstart the process. With it, setting up Google Compute for PCF was as simple as setting a couple of variables and then running terraform apply. These Terraform scripts are very flexible and allow you to leverage Google SQL for the internal Pivotal Cloud Foundry databases, and Google storage for all the Cloud Foundry object storage requirements.
Once the infrastructure build-out was complete, I configured Ops Manager and began the install of Pivotal Elastic Runtime. While configuring the Elastic Runtime, I noticed that the machine types offered for the resources were not aligned to standard Google Compute machine sizes. Often, this is not a big issue. For instance, when the requirements fall between two standard GCE machine types, it is often cheaper to use a custom machine type. Sometimes, though, it's cheaper to use the standard Google machines, so I began to look into how to make that an option within the Elastic Runtime configuration screens.
My first attempt at this was a failure, but being new to these technologies, it was a great learning experience. I dropped to the command line, changed the cloud config to include the machine types I wanted, then changed the manifest of Elastic Runtime and redeployed...SUCCESS. But that success was short-lived. Next, I installed the Google Compute Service Broker, and it was taking a REALLY long time. I popped over to the GCE Console and noticed the spinning icons on many of the machines associated with ERT. It was defaulting back to its GUI-configured state. DOH! I asked a couple of people how to solve the issue, with very little luck. Finally, while I was in the San Francisco Pivotal office, I was pointed to the Ops Manager product manager. He told me that I could indeed accomplish what I wanted via the Ops Manager APIs.
So, the next step was to learn to use the Ops Manager API. The documentation was very helpful, but there was still a bit of trial and error involved in the process.
https://docs.pivotal.io/pivotalcf/1-9/customizing/ops-man-api.html
https://opsman-dev-api-docs.cfapps.io/#viewing-product-properties
1) Authenticate with the API Endpoint.
Since this was a test install and not production, I am not using signed certs, so I can skip SSL validation when SSHing into the Ops Manager machine itself. Once logged in, you need to get the admin token: enter the client ID, press Enter for the secret, and then the admin username/password combination used to log into Ops Manager. This is the method when using internal authentication; if an external identity provider is used, the documentation shows the alternative method.
$ ssh -i ubuntu.priv ubuntu@104.196.58.151
Once authenticated, you can retrieve the access token that is used to access the Ops Manager API.
$ uaac target https://pcf.tech.demo.net/uaa --skip-ssl-validation
$ uaac token owner get
Client ID: opsman
Client secret: <ENTER>
User name: admin
Password: *******
$ uaac contexts
Armed with the access token, you can now GET/PUT information via the API. You can use curl to access the API from the command line to make things easy. To query the VM types available, you simply issue a GET to the vm_types endpoint. This will return the configuration of all the VM types.
[0]*[https://pcf.tech.demo.net/uaa]
skip_ssl_validation: true
[0]*[admin]
user_id: aa2f7f87-e309-4236-ab0a-88c9e14cbbb9
client_id: opsman
access_token: eyJhbGciOiJSUzI1N … <TRUNCATED>
token_type: bearer
refresh_token: eyJhbGciOiJSUFSE … <TRUNCATED>
expires_in: 43199
scope: opsman.admin scim.me opsman.user uaa.admin clients.admin
jti: f08d5c9bc162aa74
$ curl https://pcf.tech.demo.net/api/v0/vm_types -X GET -H "Authorization: Bearer eyJhbGciOiJSUzI1N … <TRUNCATED>" -k
One thing that's very important to remember is that when adding new machines, you are replacing the entire VM types section. You should copy the original entries and append any new machines to them before submitting to the API. This ensures you are adding machines instead of replacing them.
{"vm_types":[{"name":"micro","ram":1024,"cpu":1,"ephemeral_disk":8192,"builtin":true},{"name":"micro.cpu","ram":2048,"cpu":2,"ephemeral_disk":8192,"builtin":true},{"name":"small","ram":2048,"cpu":1,"ephemeral_disk":8192,"builtin":true},{"name":"small.disk","ram":2048,"cpu":1,"ephemeral_disk":16384,"builtin":true},{"name":"medium","ram":4096,"cpu":2,"ephemeral_disk":8192,"builtin":true},{"name":"medium.mem","ram":6144,"cpu":1,"ephemeral_disk":8192,"builtin":true},{"name":"medium.disk","ram":4096,"cpu":2,"ephemeral_disk":32768,"builtin":true},{"name":"medium.cpu","ram":4096,"cpu":4,"ephemeral_disk":8192,"builtin":true},{"name":"large","ram":8192,"cpu":2,"ephemeral_disk":16384,"builtin":true},{"name":"large.mem","ram":12288,"cpu":2,"ephemeral_disk":16384,"builtin":true},{"name":"large.disk","ram":8192,"cpu":2,"ephemeral_disk":65536,"builtin":true},{"name":"large.cpu","ram":4096,"cpu":4,"ephemeral_disk":16384,"builtin":true},{"name":"xlarge","ram":16384,"cpu":4,"ephemeral_disk":32768,"builtin":true},{"name":"xlarge.mem","ram":24576,"cpu":4,"ephemeral_disk":32768,"builtin":true},{"name":"xlarge.disk","ram":16384,"cpu":4,"ephemeral_disk":131072,"builtin":true},{"name":"xlarge.cpu","ram":8192,"cpu":8,"ephemeral_disk":32768,"builtin":true},{"name":"2xlarge","ram":32768,"cpu":8,"ephemeral_disk":65536,"builtin":true},{"name":"2xlarge.mem","ram":49152,"cpu":8,"ephemeral_disk":65536,"builtin":true},{"name":"2xlarge.disk","ram":32768,"cpu":8,"ephemeral_disk":262144,"builtin":true},{"name":"2xlarge.cpu","ram":16384,"cpu":16,"ephemeral_disk":65536,"builtin":true}]}
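Building the PUT body by hand from the GET output is tedious and error-prone. As a sketch, the append step can be automated with jq (assuming jq is installed; the "existing" variable below is a small literal stand-in for the real GET response):

```shell
# Sketch: append new VM types to the existing list before the PUT,
# so the submitted body replaces the section with a superset.
# jq is assumed to be installed; "existing" stands in for the GET response.
existing='{"vm_types":[{"name":"micro","ram":1024,"cpu":1,"ephemeral_disk":8192,"builtin":true}]}'

# Append the two new GCE machine definitions to the vm_types array.
merged=$(echo "$existing" | jq -c '.vm_types += [
  {"name":"gce-small","machine_type":"g1-small","ephemeral_disk":8192,
   "ram":1792,"cpu":1,"root_disk_type":"pd-standard"},
  {"name":"gce-standard-4","machine_type":"n1-standard-4","ephemeral_disk":131072,
   "ram":15360,"cpu":4,"root_disk_type":"pd-standard"}
]')
echo "$merged"
```

In practice, you would set "existing" from the output of the curl GET above and pass the merged document to the PUT request with -d "$merged".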
For this example, I added the following two machine types to the list: a g1-small and an n1-standard-4, both standard Google machine types.
{
"name":"gce-small",
"machine_type":"g1-small",
"ephemeral_disk":8192,
"ram":1792,
"cpu":1,
"root_disk_type":"pd-standard",
"builtin":true
},
{
"name":"gce-standard-4",
"machine_type":"n1-standard-4",
"ephemeral_disk":131072,
"ram":15360,
"cpu":4,
"root_disk_type":"pd-standard",
"builtin":true
}
To add the machines, you issue a PUT to the same endpoint as above (vm_types). Notice, the new machine list is just the previous results of the GET with the new types appended to it.
$ curl https://pcf.tech.demo.net/api/v0/vm_types -X PUT -H "Authorization: Bearer eyJhbGciOiJSUzI1N … <TRUNCATED>" -k -H "Content-Type: application/json" -d '{ "vm_types": [ { "name": "gce-small", "machine_type": "g1-small", "ephemeral_disk": 8192, "ram": 1792, "cpu": 1, "root_disk_type": "pd-standard", "builtin": true },{ "name": "gce-standard-4", "machine_type": "n1-standard-4", "ephemeral_disk": 131072, "ram": 15360, "cpu": 4, "root_disk_type": "pd-standard", "builtin": true }, { "name": "micro", "ram": 1024, "cpu": 1, "ephemeral_disk": 8192, "builtin": true }, { "name": "micro.cpu", "ram": 2048, "cpu": 2, "ephemeral_disk": 8192, "builtin": true }, { "name": "small", "ram": 2048, "cpu": 1, "ephemeral_disk": 8192, "builtin": true }, { "name": "small.disk", "ram": 2048, "cpu": 1, "ephemeral_disk": 16384, "builtin": true }, {"name": "medium", "ram": 4096, "cpu": 2, "ephemeral_disk": 8192, "builtin": true }, { "name": "medium.mem", "ram": 6144, "cpu": 1, "ephemeral_disk": 8192, "builtin": true }, { "name": "medium.disk", "ram": 4096, "cpu": 2, "ephemeral_disk": 32768, "builtin": true }, { "name": "medium.cpu", "ram": 4096, "cpu": 4, "ephemeral_disk": 8192, "builtin": true }, { "name": "large", "ram": 8192, "cpu": 2, "ephemeral_disk": 16384, "builtin": true }, { "name": "large.mem", "ram": 12288, "cpu": 2, "ephemeral_disk": 16384,"builtin": true }, { "name": "large.disk", "ram": 8192, "cpu": 2, "ephemeral_disk": 65536, "builtin": true }, { "name": "large.cpu", "ram": 4096, "cpu": 4, "ephemeral_disk": 16384, "builtin": true }, { "name": "xlarge", "ram": 16384, "cpu": 4, "ephemeral_disk": 32768, "builtin": true }, { "name": "xlarge.mem", "ram": 24576, "cpu": 4, "ephemeral_disk": 32768, "builtin": true }, { "name": "xlarge.disk", "ram": 16384, "cpu": 4, "ephemeral_disk": 131072, "builtin": true }, { "name": "xlarge.cpu", "ram": 8192,"cpu": 8, "ephemeral_disk": 32768, "builtin": true }, { "name": "2xlarge", "ram": 32768, "cpu": 8, "ephemeral_disk": 65536, "builtin": true 
}, { "name": "2xlarge.mem", "ram": 49152, "cpu": 8, "ephemeral_disk": 65536, "builtin": true }, { "name": "2xlarge.disk", "ram": 32768, "cpu": 8, "ephemeral_disk": 262144, "builtin": true }, { "name": "2xlarge.cpu", "ram": 16384, "cpu": 16, "ephemeral_disk": 65536, "builtin": true } ]}'
Now, reissue the GET query to verify the machines were added to the configuration. (Output is truncated.)
$ curl https://pcf.tech.demo.net/api/v0/vm_types -X GET -H "Authorization: Bearer eyJhbGciOiJSUzI1N … <TRUNCATED>" -k
{"vm_types":[{"name":"gce-small","ram":1792,"cpu":1,"ephemeral_disk":8192,"created_at":"2017-03-13T20:32:11.675Z","updated_at":"2017-03-13T20:32:11.675Z","builtin":false},{"name":"gce-standard-4","ram":15360,"cpu":4,"ephemeral_disk":131072,"created_at":"2017-03-13T20:32:11.694Z","updated_at":"2017-03-13T20:32:11.694Z","builtin":false},{"name":"micro","ram":1024,"cpu":1,"ephemeral_disk":8192,"created_at":"2017-03-13T20:32:11.695Z","updated_at":"2017-03-13T20:32:11.695Z","builtin":false},{"name":"micro.cpu","ram":2048,"cpu":2,"ephemeral_disk":8192,"created_at":"2017-03-13T20:32:11.697Z","updated_at":"2017-03-13T20:32:11.697Z","builtin":false},{"name":"small","ram":2048,"cpu":1,"ephemeral_disk":8192,"created_at":"2017-03-13T20:32:11.699Z","updated_at":"2017-03-13T20:32:11.699Z","builtin":false},{"name":"small.disk","ram":2048,"cpu":1,"ephemeral_disk":16384,"created_at":"2017-03-13T20:32:11.700Z", … <TRUNCATED> … "ephemeral_disk":65536,"created_at":"2017-03-13T20:32:11.724Z","updated_at":"2017-03-13T20:32:11.724Z","builtin":false}]}
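Reading the raw JSON to confirm the new types is painful. As a quick sketch, jq (assumed installed) can filter the response down to just the entries you added; the "response" variable below is a small stand-in for the real curl output:

```shell
# Sketch: verify the new GCE types made it into the configuration.
# jq is assumed installed; "response" stands in for the curl GET output.
response='{"vm_types":[{"name":"gce-small","ram":1792},{"name":"gce-standard-4","ram":15360},{"name":"micro","ram":1024}]}'

# Keep only entries whose name starts with the "gce-" prefix used above.
echo "$response" | jq '[.vm_types[] | select(.name | startswith("gce-"))]'
```

Against the live endpoint, you would pipe the curl GET from above straight into the same jq filter.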
And that’s all there is to it. A successful step down the Pivotal Cloud Foundry path for me, and hopefully a simple but useful tutorial for someone else.