Thursday, February 23, 2017

Oracle Cloud - create storage volumes with JSON orchestration

Creating storage volumes in the Oracle Compute Cloud can be done in multiple ways. The simplest way is to use the web console and follow the guided steps for creating a new storage volume. However, when you intend to automate this and integrate it into a continuous delivery model, the manual way of doing things does not really work. In that case you will have to look into how you can create storage volumes based upon orchestrations. Orchestrations are JSON based building instructions to create objects in the Oracle Compute Cloud.

You can manually upload and start orchestrations or you can use the REST API to create an orchestration and start it. In both cases you will need to understand how to craft a correct JSON file that will create your storage volume for you.

Storage volume JSON
The below JSON message shows the entire orchestration file used to create a storage volume named oplantest1boot.

{
    "name": "/Compute-demoname/",
    "description": "oplan test 1",
    "relationships": [],
    "oplans": [{
        "label": "My storage volumes",
        "obj_type": "storage/volume",
        "objects": [{
            "name": "/Compute-demoname/",
            "bootable": true,
            "imagelist": "/oracle/public/OL_6.4_UEKR3_x86_64",
            "properties": ["/oracle/public/storage/default"],
            "size": "12884901888",
            "description": "boot device for oplan test 1"
        }]
    }]
}
The JSON file shown above can be broken down into two parts. The first part consists of the top-level attributes, which contain the name and description of an orchestration, along with other information such as the relationships between objects defined in the orchestration, start and stop times for the orchestration, and the list of objects in the orchestration.

The top-level attributes construction envelops one or more oplans (object plans). The oplan(s) describe the actual object or objects that will be created when the orchestration is started.

Orchestration top level attributes
The top-level attributes part of the above example orchestration is shown below. As you can see we have removed the oplan for the storage creation to make it more readable.

{
    "name": "/Compute-demoname/",
    "description": "oplan test 1",
    "relationships": [],
    "oplans": [ ... ]
}

Orchestration attributes for storage volumes
The below shows the oplan which will create the actual storage volume. For readability we have shown this as a separate part outside of the context of the top-level attributes.

{
    "label": "My storage volumes",
    "obj_type": "storage/volume",
    "objects": [{
        "name": "/Compute-demoname/",
        "bootable": true,
        "imagelist": "/oracle/public/OL_6.4_UEKR3_x86_64",
        "properties": ["/oracle/public/storage/default"],
        "size": "12884901888",
        "description": "boot device for oplan test 1"
    }]
}
As you can see we have a number of attributes that are specified. The main attributes you can specify for every oplan (not only for storage) are:

  • label
    • A text string describing your object plan. This can be any text as long as it does not exceed 256 characters. 
  • obj_type
    • The obj_type attribute lets you define what type of objects will be created as part of this specific oplan. In our case we will create a storage volume, which means we have to use the "storage/volume" object type. For other object types you can refer to the Oracle documentation on this subject.
  • objects
    • Objects is the placeholder for an array of objects of the type specified in the obj_type attribute. This means that if you need to create multiple storage objects you can define them all within the objects placeholder.
  • ha_policy 
    • The ha_policy attribute is optional and not shown in the example above. You can state monitor as a value for this or leave it out. When the HA policy for an object is set to monitor, if the object goes to an error state or stops unexpectedly, the orchestration changes to the Error state. However, the object isn’t re-created automatically.
As you can see in the above example, the actual object in this specific object plan is of the object type storage/volume. The descriptive information about the specific volume is in the first object instance in the objects array. Here we describe the actual object. For readability we have shown this separately below:

{
    "name": "/Compute-demoname/",
    "bootable": true,
    "imagelist": "/oracle/public/OL_6.4_UEKR3_x86_64",
    "properties": ["/oracle/public/storage/default"],
    "size": "12884901888",
    "description": "boot device for oplan test 1"
}

As you can see, a number of attributes are specified in the above section of the JSON message. These are the primary attributes used when creating a storage volume.

  • name
    • name is used to state the name of your storage volume. It needs to be constructed in the following manner /Compute-identity_domain/user/name to ensure it is fully compatible and will be placed in the right location. 
  • size
    • Size can be given in bytes, kilobytes, megabytes, gigabytes or terabytes. The default is bytes; however, every unit of measure can be used by adding an (uppercase or lowercase) identifier like B, K, M, G or T. For example, to create a volume of 10 gigabytes, you can specify 10G, or 10240M, or 10485760K, and so on. The size needs to be in the allowed range between 1 GB and 2 TB, in increments of 1 GB.
  • properties
    • The properties section lets you select the type of storage that you require. Currently the options available are standard and low latency storage. In case standard storage suffices you can use /oracle/public/storage/default as a string value. In case you need low latency and high IOPS you can use /oracle/public/storage/latency as a string value.
  • description
    • A descriptive text string describing your storage volume. 
  • bootable
    • bootable is optional and indicates whether this storage volume should be considered the boot volume of a machine. The default is false, and false will be used if the attribute is not specified. If bootable is set to true you have to provide the attributes imagelist and imagelist_entry. 
  • imagelist
    • Required when bootable is set to true. Name of the machine image to extract onto this volume when it is created. In our example case this is a publicly available image from the images created by Oracle: /oracle/public/OL_6.4_UEKR3_x86_64
  • imagelist_entry
    • The imagelist_entry attribute is used to specify the version of the image from the imagelist you want to use. The default value when not provided is 1. Do note, some Oracle documentation states imagelistentry without the underscore; this is the wrong notation and you should use imagelist_entry (with the underscore). 
  • tags
    • tags is used to provide tags to the storage volume which can be used for administrative purposes.
Additionally you have the option to use a snapshot to create a volume. This can be used in a process where you want to clone machines using a storage snapshot. In this case you will have to use an existing snapshot and provide the following attributes to ensure the snapshot is restored in the new storage volume. 
  • snapshot
    • Multipart name of the storage snapshot from which you want to restore or clone the storage volume.
  • snapshot_id
    • ID of the parent snapshot from which you want to restore a storage volume.
  • snapshot_account
    • Account of the parent snapshot from which you want to restore a storage volume.
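The size rules described above can be sketched in a small helper. The below bash function is illustrative only (the function name and values are our own, not part of any Oracle tooling): it converts a size string with a unit identifier into the byte value you would place in the size attribute.

```shell
#!/bin/bash
# Hypothetical helper: convert a size string such as 10G or 10240M into the
# byte count expected by the "size" attribute of a storage volume oplan.
to_bytes() {
  local size=$1 value unit
  value=${size%[BbKkMmGgTt]}   # strip a trailing unit identifier, if any
  unit=${size#"$value"}        # whatever was stripped is the unit
  case ${unit^^} in
    T) echo $(( value * 1024 ** 4 )) ;;
    G) echo $(( value * 1024 ** 3 )) ;;
    M) echo $(( value * 1024 ** 2 )) ;;
    K) echo $(( value * 1024 )) ;;
    *) echo "$value" ;;        # no unit given: the default is bytes
  esac
}

# The example orchestration uses "12884901888", which is exactly 12G:
bytes=$(to_bytes 12G)
echo "$bytes"   # prints: 12884901888
```

This also makes it easy to see that 10G, 10240M and 10485760K from the documentation example all resolve to the same byte value.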
Using an orchestration
The above examples show how you can create an orchestration for creating a storage volume. In most real-world cases you will mix the creation of a storage volume with the creation of other objects, for example an instance to which you will attach the storage. 

However, as soon as you have a valid JSON payload you can upload it to the Oracle Public Cloud via the web interface or using an API. Orchestrations that have been uploaded to the cloud can be started, and when completed they will result in the objects (a storage volume in this case) being created. 

Having the option to quickly create a JSON file as payload and send this to the Oracle Public Cloud greatly supports the integration with existing automation tooling and helps in building automatic deployment and scaling solutions. 
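Before uploading, it is worth checking locally that the orchestration file is well-formed JSON. A quick sketch using the complete example from this post (the file name is illustrative):

```shell
# Write the example orchestration to a file and validate it locally before
# uploading it to the cloud. The path is illustrative.
cat > /tmp/storage_volume.json << 'EOF'
{
    "name": "/Compute-demoname/",
    "description": "oplan test 1",
    "relationships": [],
    "oplans": [{
        "label": "My storage volumes",
        "obj_type": "storage/volume",
        "objects": [{
            "name": "/Compute-demoname/",
            "bootable": true,
            "imagelist": "/oracle/public/OL_6.4_UEKR3_x86_64",
            "properties": ["/oracle/public/storage/default"],
            "size": "12884901888",
            "description": "boot device for oplan test 1"
        }]
    }]
}
EOF

# json.tool exits non-zero on malformed JSON, so this catches typos early.
python3 -m json.tool /tmp/storage_volume.json > /dev/null && echo "valid JSON"
```

A check like this is cheap to add to an automated pipeline and prevents a round trip to the cloud just to discover a missing comma.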

Tuesday, February 21, 2017

Oracle Linux - Integrate Oracle Compute Cloud and Slack

In a recent post on this blog we already outlined how you can integrate Oracle Developer Cloud with Slack and receive messages on events that happen in the Oracle Developer Cloud in your channel. Having this integration will ensure that your DevOps teams are always aware of what is going on and have the ability to receive mobile updates on events and directly discuss them with other team members. Even though the integration with Slack is great it is only a part of the full DevOps chain you might deploy in the Oracle Public Cloud.

One of the other places you might want to have integration with Slack is the Oracle Compute Cloud Service. If we look at the below high level representation of a continuous delivery flow in the Oracle Cloud we also see a "deployment automation" step. In this step new instances are created and a build can be deployed for unit testing, integration testing or production purposes.

In case you want to ensure your DevOps team is always aware of what is happening, and you like to use Slack as one of the tools to enable your team to keep tabs on what is happening, you should ensure integration with Slack in every step of the flow. This means the creation of a new compute instance in the Oracle Compute Cloud should also report back to the Slack channel that the instance is created.

Creating a slack webHook
One of the things that you have to ensure if you want to have integration with Slack is that you have a slack webHook. A webHook is essentially an API endpoint where you can send your messages to and Slack will ensure that the message is promoted to the slack channel you are using as a DevOps team.

In the post where we described how to create a Slack webHook we already outlined how you can create a webHook that can be used. We will be using the same webHook in this example.

What is especially important when creating the integration between the Oracle Compute Cloud and Slack is the part which is obfuscated in the above screenshot. This is the part of the webHook URL that is specific to your webHook and should look something like xxxxx/xxxxxx/xxxxxx. We refer to this as the slack_code in the scripting and the JSON payload when we start building our integration.

High level integration
The main intent of this action is that we want to receive a message on Slack informing us when a new Oracle Linux compute instance has come online on the Oracle Compute Cloud Service. For this we will use a custom bash script. In this example we host the script in my personal github repository; however, in a real-life situation you most likely want to place this in a private location which you control and where you will not be depending on the github repository of someone else.

What in effect will happen is that we provide the creation process of a new instance with a set of user attributes in the JSON payload which is used for the orchestration process of creating a new instance. This part of the payload will be interpreted by the opc-init package which is shipped with all the standard Oracle images that are part of the Oracle Compute Cloud Service.

We will use some custom attributes to provide the script with the slack_code and the channel_name. We will also use the pre-bootstrap attributes to state the location of the script that will communicate with Slack as soon as the new instance is online.

Create an integrated instance
When creating a new instance on the Oracle Compute Cloud you can use the GUI or you can use the REST API to do so. In this example we will use the GUI, however, the same can be achieved by using the REST API.

When you create a new instance on the Oracle Compute Cloud you have the option to provide custom attributes to the creation process. The information provided needs to be in a JSON format and will be included in the overall orchestration JSON files. This is what we will use as the way to ensure the integration between the creation process of the new instance and slack.

What we will provide to the "custom attributes" field is the following JSON payload:

{
    "slack_code": "xxxxx/xxxxxx/xxxxxx",
    "slack_channel": "general",
    "pre-bootstrap": {
        "scriptURL": "",
        "failonerror": true
    }
}
As you can see we have a slack_code and a slack_channel. The slack_code will be used to place the code we got when we created the webHook on slack and the slack_channel will represent the channel in which we want to post the message.

The pre-bootstrap part holds the scriptURL, which tells opc-init on Oracle Linux where it needs to download the script that will be executed. The failonerror is currently set to true; however, in most cases you do want to have this on false.

If we start the instance creation with this additional JSON payload, the script will be executed as soon as opc-init downloads it during the boot procedure of the new Oracle Linux instance on the Oracle Compute Cloud. The script will take the input provided in the custom attributes by doing a call to the internal Oracle Cloud REST API. Next to this, some additional information about the instance is collected by calling the meta-data REST API in the Oracle Cloud.

This in effect will make sure that a message is posted to the Slack channel you defined in the custom attributes. If we review the message we receive on Slack we should see something like the example message below:

The bash scripting part
As already stated, the central part of this integration is based upon bash scripting currently hosted on github. In a real-world situation you would like to ensure you place this on a private server. However, it can very well be used for testing the solution. The bash script is available and released as open-source.

It will be downloaded and started on your Oracle Linux instance by opc-init based upon the information provided by the pre-bootstrap part in the custom attributes of your JSON payload and it will use some of the information provided in the same JSON payload.

Additionally it will retrieve meta-data about the instance from the internal REST-API for meta-data in the Oracle Compute Cloud. The combined information will be used to craft the message and send it to your slack channel. The code below is a version of the script which can be found in this location at github. Do note, the below version is not maintained and the latest version is only available on github.

#  To be used in combination with opc-init. The script will report
#  when a newly created instance is up on the Oracle Compute Cloud
#  into a slack channel. The information to be able to connect to
#  the right slack channel needs to be included in the userdata
#  part of the orchestration JSON file when creating a new instance
#  This script is tested for Oracle Linux in combination with the 
#  Oracle public cloud / compute cloud.
# LOG:
# VERSION---DATE--------NAME-------------COMMENT
# 0.1       20FEB17     Johan Louwers    Initial creation
# Copyright (C) 2017  Johan Louwers
# This code is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
# This code is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this code; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
# Retrieve meta-data from the internal cloud API used to populate the slack message.
# Includes the instance type, local IP and the local FQDN as registered in the OPC.
# NOTE: the exact endpoint URLs were lost from this listing; the paths below are
# best-effort reconstructions, refer to the github version for the exact URLs.
 metaDataInstanceType="$(curl -m 5 --fail -s"
 metaDataLocalIp="$(curl -m 5 --fail -s"
 metaDataLocalHost="$(curl -m 5 --fail -s"

# Retrieve the information needed to connect to slack. This includes the name of your
# channel on slack as well as the code required to access the incoming webHook at the
# slack website.
# NOTE: as above, the exact user-data endpoint URLs were lost from this listing and
# are best-effort reconstructions; refer to the github version for the exact URLs.
 channelName="$(curl -m 5 --fail -s"
 slackCode="$(curl -m 5 --fail -s"

# set the slack message title
 msgTitle="Compute Cloud Service"

# Generate the slack message body, this is partially based upon the information which
# is retrieved from the meta-data api of the Oracle Public Cloud.
 msgBody="$(uname -s) instance $metaDataLocalHost is online with kernel $(uname -r). Sizing is : $metaDataInstanceType. Instance local cloud IP is $metaDataLocalIp"

# set the slack webhook url based upon a pre-defined first part and the slack code
# which we received from the user-data api from the Oracle Public Cloud. The info
# in the user-data is what you have to provide in the orchestration JSON file
# when provisioning a new instance on the Compute Cloud Service.
 slackUrl="$slackCode"

# Generate the JSON payload which will be send to the slack webhook. This will
# contain the message we will post to the slack channel.
read -d '' payLoad << EOF
{
        "channel": "#$channelName",
        "username": "Compute Cloud Service",
        "icon_url": "https:\/\/\/louwersj\/Oracle-Linux-Scripting\/raw\/master\/oracle_cloud\/compute_cloud\/postToSlack\/compute_cloud_icon.png",
        "attachments": [{
                "fallback": "$msgTitle",
                "color": "good",
                "title": "Instance $(hostname) is created",
                "fields": [{
                    "value": "$msgBody",
                    "short": false
                }]
        }]
}
EOF

# send the payload to the Slack webhook to ensure the message is posted to slack.
statusCode=$(curl \
        --write-out %{http_code} \
        --silent \
        --output /dev/null \
        -X POST \
        -H 'Content-type: application/json' \
        --data "${payLoad}" ${slackUrl})

echo ${statusCode}

In conclusion
The above example showcases another point of integration between the Oracle Cloud and Slack. As DevOps teams more and more start to adopt interactive ways to communicate with each other it is a good practice to support them in this. Most likely your DevOps teams are already using WhatsApp, Slack or other tools to communicate.

Helping them and giving them the option to also include automated messaging supports the overall goal, makes them more productive and makes life more fun. 

Sunday, February 19, 2017

Oracle Linux - Download code from github with the command line

Developers use Github and Git more and more, and it is becoming one of the standard ways to store code in a repository. Where developers have the need to interact with the code, write new code and refactor code, other people "just" need to download the code and use it as part of, for example, a deployment on a server. Downloading code from Github works in exactly the same way as downloading code from your own Git repository you might have as a company.

When we have the need to get code from a Github repository on Oracle Linux we can use the git command. The git command is not installed by default on your Oracle Linux instance when you do a basic installation; however, it is available in the standard Oracle Linux YUM repository, so you can install it using the yum command.

The below command will ensure that git is installed without prompting you to confirm that you really want to install git:

yum -y install git

Now that we have ensured git is installed we can use it to download code from github. For example, we want to have the Open Oracle Public Cloud API library which is hosted on github. In case you just want to download the master branch from the project hosted at github you have to check the main URL of the project; appending .git to that URL gives you the URL with which you can download (clone) the repository. The below example shows the effect of a full git clone command.

[root@a0d544 test]# git clone
Initialized empty Git repository in /tmp/test/OPC_API_LIB/.git/
remote: Counting objects: 68, done.
remote: Compressing objects: 100% (52/52), done.
remote: Total 68 (delta 30), reused 0 (delta 0), pack-reused 11
Unpacking objects: 100% (68/68), done.
[root@a0d544 test]# 

The above example shows how to download (clone) a repository master branch, which should contain the latest stable release (in most cases). However, the concept of branches is used within Git and Github, which provides the option to have working versions (branches) of a project which might differ from the stable (master) branch.

The below image shows a simple example where the master branch is forked and a "feature" branch is created which is developed upon while keeping the master branch clean.

There are a lot of good reasons why one would like to clone a specific branch and not the master branch. For example, people might want to work with an unreleased version of a project, or it might be part of your (automated) testing where you need a specific branch.

In this case the git clone command is somewhat different from the example shown above. For example, if we would have a feature branch in the Open Oracle Public Cloud API library (which it has not) the command would look like the one shown below:

git clone -b feature --single-branch

This will ensure that you get the feature branch of the Open Oracle Public Cloud API library and not the master branch which is the default branch that will be downloaded when invoking the git clone command. 

Friday, February 17, 2017

Oracle Cloud - Integrate Oracle Developer Cloud Service with Slack

Slack is a cloud-based team collaboration tool founded by Stewart Butterfield. Slack is growing in popularity with development, maintenance and DevOps teams due to the fact that it is easy to integrate with all kinds of tooling via the simple webhook methods provided by the Slack team. This gives the power to develop simple applications that will send messages to a Slack channel where humans are also discussing daily business.

As an example, Oracle provides a standard integration from the Oracle Developer Cloud into the slack webhooks. This provides the option to push messages to the Slack channel from your DevOps team which contain information about, for example, builds, deployments, Git pushes, Merge Requests and others.

Having the option to integrate the Oracle Developer Cloud Service with Slack provides a great opportunity to engage your DevOps team in an always-on manner. As members will be able to see on the Slack website and in the Slack app on their mobile phones what is happening, and directly discuss it with each other, it makes life much easier and work much more interactive.

Create a slack webhook
To be able to make use of the integration functionality in the Oracle Developer Cloud Service towards Slack you will have to create a webhook in Slack. For this you will have to go to the Slack website and under "Channel Settings" select "Add an app or integration".

This will bring you to the Slack app store, where you can search for "Incoming WebHooks". Incoming Webhooks are a simple way to post messages from external sources into Slack. They make use of normal HTTP requests with a JSON payload, which includes the message and a few other optional details.

Selecting this will create a Webhook and allow you to set up and configure your Webhook. The most important part of the setup is the Webhook URL, which you will need in the Oracle Developer Cloud Service to set up the integration with Slack. A large number of other settings can be configured in the Webhook configuration on the Slack site.

In effect this is all that needs to be done to create the Slack webhook.
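To see the mechanism at work outside the Oracle Developer Cloud Service, you can post to the webhook yourself with curl. The URL below is only a placeholder in the shape Slack generates; substitute the one created for your channel. To keep the sketch runnable without a real webhook, it prints the request instead of sending it:

```shell
# Placeholder webhook URL in the same shape as the one Slack generates.
webhookUrl=""
payload='{"text": "Test message for the DevOps channel"}'

# Print the request we would send; remove the leading echo to actually post it.
echo curl -X POST -H "Content-type: application/json" \
     --data "$payload" "$webhookUrl"
```

This is the same POST-with-JSON-payload pattern the Oracle Developer Cloud Service webhook integration uses behind the scenes.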

Configure Slack in the Oracle Cloud
The next step of the integration is going to your project page in the Oracle Developer Cloud Service, navigating to the Administration section and selecting Webhooks. Here you will have the option to create new webhooks (from the Oracle side). When creating a new Webhook you will have the option to select Slack as a type, which will show you the below set of options.

As you can see you can subscribe to a number of things. For this example we are interested only in a number of specific events. To be precise, we want to see a message on our Slack channel for all Git push events and all Merge Requests on git in the Oracle Developer Cloud Service.

In effect this is all the configuration that is needed to set up integration between the Oracle Developer Cloud Service and Slack, so that your DevOps team members can use Slack as an additional information and discussion channel.

See the result in slack
As soon as we have configured the above we can test this with a test message in the Oracle Developer Cloud Service by sending a test message to Slack.

The real test comes when we start to push new code to the Git repository. As you can see in the below image, we are now receiving the required information in the Slack channel for the entire DevOps team to ensure everyone is aware of new Git pushes and Merge Requests.

In conclusion
Ensuring your DevOps teams are able to use all the tools they need to do the day to day job is important. It is also important to remember that this day to day job nowadays is done always and everywhere, on any device. This means that your team members want to be kept up to date on what is happening on the systems and discuss this directly with each other.

Most likely they are already using Slack or a Slack-like communication channel on their mobile phones. Most likely they already have a Slack channel, a WhatsApp group or they communicate on Facebook Messenger. Supporting your organisation in this and providing them even more integration in a controlled manner adds to the overall team binding and the productivity.... and the fun. 

Oracle Cloud - Microsoft Visual Studio Code & Oracle Developer Cloud Service

Oracle Developer Cloud Service is a SaaS based solution for developers to make use of a fully integrated development engine in the Oracle Cloud. The Oracle Developer Cloud Service ties into a multitude of other cloud services in the Oracle Cloud. One of the central pieces for most developers is a source repository, within the Oracle Developer Cloud Service this is Git (for all good reasons)

Even though Microsoft might not be the first vendor that comes to mind when talking about the Oracle Developer Cloud Service, it actually has great integration with Git and, through Git, with the Oracle Developer Cloud Service. If we take Microsoft Visual Studio Code as an example of a tool used by developers, we can ensure it is directly connected with the Git repository within the Oracle Developer Cloud Service.

When using Microsoft Visual Studio Code for the first time in combination with the Oracle Developer Cloud Service you most likely need to install the Git client. As you can see in the below screenshot, you will get a message stating this and directing you to the website where you can download the required client software to be able to interact with a Git repository.

Installing the Git client
As you encounter this message, the first thing you will have to do is install the Git client before you can continue. After you have downloaded the Windows Git client installer it will take you through some standard steps of installing Windows software.

Step 1
Agree with the license agreement

Step 2
Select a location to install the Git client

Step 3
Select the options you want the installer to install and the configuration you want it to make to your system

Step 4
Select a name for the shortcut to be used after the installation

Step 5
Select the way you want to use Git from the command line. As I am using Linux for the most part when developing and Windows only on occasion, I am OK with the Windows command prompt only; however, this is a personal preference.

Step 6
When connecting to Git via SSH you will need an SSH client. Git comes with an SSH client; however, to ensure integration with other SSH tools and Tortoise which you might already have running on your Windows system, you can select not to use the SSH client that is shipped with Git and select another one that is more integrated in your day to day work already.

Step 7
Everyone who has been working cross platform between Windows and UNIX / Linux systems knows why the below question is asked. To ensure you do not have to go through a hell of dos2unix and unix2dos commands to make what you just developed usable, you have to select the right option for your situation in this step.

Step 8
Select your terminal emulator. I have a preference for MinTTY. In case you are developing code that works perfectly fine with the Windows console you can also select the second option; however, I would advise in most cases to use the first option.

Step 9
Configure the extra options as shown below. This is valid for most situations and should provide you the best result

Step 10
In case you feel experimental you can select this option. It will give you a built-in difftool to find diffs between versions. However, as the screen states, it is not that well tested.

Step 11
Click the install button and wait for a short period of time.

Checking Git installation in Microsoft Visual Studio Code
After you have ensured that the Git client is installed you have to restart Microsoft Visual Studio Code and open a folder (file -> Open Folder). You will be able to see that you opened a folder as you now have the “explorer” section opened for you and it shows the folder you selected.

If we now click the Git button we notice that we get a different result and we have the option to initialize a Git repository, as shown below:

Connecting to Oracle Developer Cloud
The initial connection to a project hosted on the Oracle Developer Cloud Service is, however, more easily done via the Windows file explorer. If we go to a location where we want to have our project code from the Oracle Developer Cloud, we can use a right mouse click to open the context menu, where we will notice a "Git GUI Here" option which, when clicked, will result in a menu with 3 options (or more if you already have created some projects). The options presented are:

  • Create New Repository
  • Clone Existing Repository
  • Open Existing Repository

In effect we always start a project on the Oracle Developer Cloud Service so we do not want to create a new repository, we want to clone an existing repository and start working on it. When you select the “Clone Existing Repository” you will be presented with the below screen.

In this screen we have to enter the "source location" and the "target directory". The target directory will be the directory on your local workstation you want to use to host a local working copy of the repository. The source location should be the http location of the Git repository in your Oracle Developer Cloud Service project. You can find the URL needed when you navigate to the code section in the Oracle Developer Cloud Service, as shown below, and copy and paste it into the "source location" field of the Git GUI on your local workstation.

As soon as you click the clone button the process of cloning (downloading) the project to your local machine will start. In case this is the first time you will have to authenticate yourself using the username and password you use to access the Oracle Developer Cloud.

If you look into the folder that is created you will notice that all code (in our case only the file) is now present locally.

Using it in Microsoft Visual Studio Code
Now that we have established a local copy of the Git repository from the Oracle Developer Cloud Service on our local workstation, we can also start using it in Microsoft Visual Studio Code. If we open Microsoft Visual Studio Code and open the folder that was created in the previous step, we will see that we do not have to create a repository; it already picks up from the Git client that this folder is under Git control.

Now, if we add a file in the directory by creating a new file in Microsoft Visual Studio Code, you will notice that this is detected and shown as a blue “1” icon on the Git button, indicating we have one change that is not committed to Git.

If we go to the Git screen we can add a comment and “save” this. However, it is good to remember that this is only a local commit; it does not actually send the change to the Oracle Developer Cloud Service Git repository.

If we want to ensure that the file is actually pushed to the Git repository on the Oracle Developer Cloud Service, we have to use the “Push to” option, which will make sure your change is sent to the Oracle Developer Cloud Service.
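The clone, commit and push cycle described above can also be sketched on the command line; here a local bare repository stands in for the Oracle Developer Cloud Service Git URL, and all paths and the file name are illustrative.

```shell
#!/bin/bash
# a local bare repository standing in for the remote Developer Cloud repo
git init --bare /tmp/devcloud-stub.git

# clone it into a local working copy
git clone /tmp/devcloud-stub.git /tmp/workcopy
cd /tmp/workcopy

# create a file and commit it; at this point the change is local only
echo "hello" > newfile.txt
git add newfile.txt
git -c user.name=demo -c user.email=demo@example.com commit -m "add newfile"

# only the push actually sends the change to the remote repository
git push origin HEAD
```

With the Oracle Developer Cloud Service you would simply replace the stub path by the HTTP URL of your project's Git repository.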

In conclusion
In effect every developer tool, including Microsoft Visual Studio Code, has the ability to work with the Oracle Developer Cloud Service. And in cases where your developer tool does not natively support Git, you can always use the Git GUI to ensure you have this integration.

Thursday, February 16, 2017

Oracle Linux – forked logging to prevent script execution delay in high speed environments

Logging has always been important when developing code, and with the changing way companies operate their IT footprint and move to a more DevOps oriented way of working, logging becomes even more important. In a more DevOps oriented IT footprint, logging is used to constantly monitor the behavior of the running services and feed that information back to the DevOps team, both to act on issues and to continuously improve the code.

As DevOps teams tend to consume much more logging than traditionally oriented IT organizations, the direct effect is that many more logging routines are developed and included in the code. While traditionally the number of lines written to log files is relatively limited, the number of lines written to log files in a DevOps footprint can grow exponentially.

Even though this might not look like a challenge at first, it can cause issues in environments with many deployed services combined with a high level of logging “steps” and a high rate of execution. When writing logging to a log file you have to realize that every line sent to the logfile costs CPU cycles, memory interaction, and I/O operations to the file system, and potentially means waiting for a lock on the file before being able to write to it. All of this results in a delay during the execution of your code.

Traditional logging implementation
In “traditional” code, and especially in scripting such as bash, you will see a lot of implementations like the one shown below. Between start and finish we have two “real” functions that do something; after each function, the script flow writes some logging to a file. At first sight this is a perfectly working solution; however, if you execute the script hundreds of times per minute, the end-to-end execution of the script might slow down because the log writing is implemented inline.

Forked logging 
When developing code, even a small script in bash, it is good practice to ensure that your main code flow does not have to wait for the lines of logging to be written to the file system. In the example below you will see an implementation where the main flow of your code forks a secondary process which takes care of writing the logging to the logfile.

With this approach, the two functions that represent actual steps execute as one process on the operating system in a sequential manner. Every time there is a need to write logging to the logfile, a new process is forked. The benefit is that your main flow of code does not wait until the “write to logfile” process is completed; it directly continues to the next step in the script.

In cases where you have congestion in writing to the file, this implementation ensures your main process will not experience any delay in execution. The forked process that takes care of writing to the file will experience the delay; however, the main execution times will improve.

Code example of forked bash processes
If we take the below code as the starting point of the example, we have three main steps which all basically do the same. In a real-world situation steps 1 and 3 would be “real” code execution while step 2 writes to the log. The sleep commands are included to simulate delay on the system.


#!/bin/bash
# inline implementation: every step, including the log write, blocks the flow

sleep 1
echo "step 1 : $(date)" >> ./result.txt

# step 2 represents the (slow) log writing
sleep 5
echo "step 2 : $(date)" >> ./result.txt

sleep 1
echo "step 3 : $(date)" >> ./result.txt

As you can see, steps 1 and 3 take 1 second to complete while step 2 (writing to the log file) takes 5 seconds. If we run the script and read the content of ./result.txt we see the following:

[opc@a0d544 test]$
[opc@a0d544 test]$ ./
[opc@a0d544 test]$ cat result.txt
step 1 : Thu Feb 16 17:19:49 EST 2017
step 2 : Thu Feb 16 17:19:54 EST 2017
step 3 : Thu Feb 16 17:19:55 EST 2017 
[opc@a0d544 test]$

The above is perfectly explainable and as expected. However, we want the code to run faster; in other words, we want the main sequential flow to finish faster while we do not really mind that the logging lags a bit behind. If the logging is experiencing I/O performance issues you do not want to wait for it. The same holds if your logging is done via a curl command to a central REST-based logging server.

The below example shows one form of forking in bash; there are other ways of doing it, but this works for example purposes. The code is shown below:


#!/bin/bash
# step 2 is wrapped in a function so it can run in a forked sub-shell

function step2 () {
 sleep 5
 echo "step 2 : $(date)" >> ./result.txt
}

sleep 1
echo "step 1 : $(date)" >> ./result.txt

# fork step2; the main flow continues without waiting for it
( step2 ) &

sleep 1
echo "step 3 : $(date)" >> ./result.txt

As you can see, we have placed step two in a function and we call the function in a somewhat different manner than we normally would. As a result, the function is executed in a sub-shell and the main sequential code flow continues without waiting for the sub-shell to finish executing the function.

If we execute this and look at the results we see a somewhat different outcome than in the first test. The first line is for step one at :08 and the second line is for step three at :09, which is also the moment the script finished. However, at :13 we have step two reporting to the file, while the initial script had already finished.

[opc@a0d544 test]$ ./
[opc@a0d544 test]$ cat result.txt
step 1 : Thu Feb 16 17:20:08 EST 2017
step 3 : Thu Feb 16 17:20:09 EST 2017
step 2 : Thu Feb 16 17:20:13 EST 2017
[opc@a0d544 test]$

By implementing the call to step two in a different manner we ensured that the execution of the main sequential flow improved by 5 seconds. Even though you might not expect this kind of delay in practice, improvements like this can be made to script execution speed by adopting such solutions.

Some considerations
In cases where timestamping of your logs is extremely critical and/or where you expect congestion in writing to files, it is good practice to take the timestamp at the moment the process is forked, not at the moment the line is written to the file. More precisely, the main flow should take the timestamp and hand it over to the forked process together with the information that needs to be written to the file.

This way you are sure that the timestamp of the actual occurrence is written to the log, rather than the timestamp of when the information was written to the logfile.
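This consideration can be sketched as a small bash example, where the timestamp is captured in the main flow and handed over to the forked logger; the function name and the result.txt file are illustrative, not taken from the scripts above.

```shell
#!/bin/bash
# Capture the timestamp in the main flow, then hand it to the forked
# logger so congestion in writing does not skew the logged time.

logline () {
  local stamp="$1" message="$2"
  echo "${message} : ${stamp}" >> ./result.txt
}

# fork the logger with the timestamp taken NOW; the main flow continues
( logline "$(date)" "step 2" ) &

# optionally wait before exiting so no log lines are lost
wait
```

The key point is that $(date) is evaluated in the main flow, before the fork, so the logged time reflects the actual occurrence even if the write itself is delayed.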

Wednesday, February 15, 2017

Oracle Linux – Working with Memory Mapped Files

When working with Oracle Linux and developing your own solutions which make more direct use of the underlying operating system, you will, at one point in time, encounter the need to have multiple processes interact with the same file. As an example, you might have a process that writes actions to an action-queue file while another process reads this file and updates it when the action is completed.

When you encounter such a situation you can work around it with fseek() and similar calls; however, there are more elegant ways of doing it. You could map the file to memory and use a pointer to the memory map to interact with it.

Using mmap()
To do so you can use mmap() to map a file to memory. mmap() creates a new mapping in the virtual address space of the calling process. The mmap() system call is declared as shown below:

void *mmap(void *addr, size_t len, int prot, int flags, int fildes, off_t off);

addr : This is the address we want the file mapped into. The best way to use this is to set it to (caddr_t)0 and let the OS choose it for you. If you tell it to use an address the OS doesn't like (for instance, if it's not a multiple of the virtual memory page size), it'll give you an error.

len : This parameter is the length of the data we want to map into memory. This can be any length you want. (Aside: if len is not a multiple of the virtual memory page size, the mapping will be rounded up to a multiple of that size. The extra bytes will be 0, and any changes you make to them will not modify the file.)

prot : The "protection" argument allows you to specify what kind of access this process has to the memory mapped region. This can be a bitwise-ORd mixture of the following values: PROT_READ, PROT_WRITE, and PROT_EXEC, for read, write, and execute permissions, respectively. The value specified here must be compatible with the mode specified in the open() system call that is used to get the file descriptor.

flags : These are just miscellaneous flags that can be set for the system call. You'll want to set it to MAP_SHARED if you're planning to share your changes to the file with other processes, or MAP_PRIVATE otherwise. If you set it to the latter, your process will get a copy of the mapped region, so any changes you make to it will not be reflected in the original file, and thus other processes will not be able to see them. We won't talk about MAP_PRIVATE here at all, since it doesn't have much to do with IPC.

fildes : This is where you put that file descriptor you opened earlier.

off : This is the offset in the file that you want to start mapping from. A restriction: this must be a multiple of the virtual memory page size. This page size can be obtained with a call to getpagesize().

Example code
As an example you can review the code below. Note that we need to include sys/mman.h explicitly (and fcntl.h for open()). In the example we use the file “somefile”.

#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>

int main(void)
{
    int filedesc, pagesize;
    char *data;

    filedesc = open("somefile", O_RDONLY);
    pagesize = getpagesize();

    /* map one page of the file, starting at offset pagesize */
    data = mmap((caddr_t)0, pagesize, PROT_READ, MAP_SHARED, filedesc, pagesize);
    if (data == MAP_FAILED)
        return 1;
    return 0;
}

By making use of such an approach you will have much more control over files, and much more ease of development in cases where you need to interact with a file from multiple processes at the same time.

For users who use Oracle Linux purely as the operating system and do not develop custom code, or only code at a higher level, this might not directly be of interest. However, for everyone building custom code on Oracle Linux that needs to interact more directly with the operating system, this approach can be very beneficial in some cases.

Oracle Cloud - Capgemini Experience Test Drive

In collaboration with Oracle, a team of Capgemini and Oracle cloud leaders hosted customers and Capgemini employees for a full evening of experiencing the Oracle Cloud.

Sessions on subjects such as Mobile & Chatbots, Oracle Process Cloud, Oracle Integration Cloud and Oracle Compute Cloud have provided all attendees with a broad set of experiences of the Oracle Cloud. For a short impression of the event, see the YouTube video below;

The hands-on experience workshops were provided by the following people;

Monday, February 13, 2017

Oracle Cloud - introducing the Open Oracle Public Cloud API library

Today we are introducing the Open Oracle Public Cloud API library. The Open Oracle Public Cloud API library, or OPC_API_LIB, is an open source library of functions written to make it easier for developers to code against the Oracle Public Cloud APIs and ensure integration. The open source project is licensed under the GNU General Public License and is available free for everyone. The project is not an Oracle project; it is a community project.

Developers, DevOps teams, continuous delivery teams, system integrators and everyone with the need to interact with the Oracle Public Cloud in a programmatic manner can make use of this API library.

The main intent is to take away the burden of fully diving into the details of the inner workings of the Oracle Public Cloud APIs and to provide an abstraction layer in the form of a library which can be used to code against.

The current release is an extremely small subset of functions, and the intention is to grow the number of functions in the library over time. Bash has currently been selected as the primary language for the library.
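As an illustration of the kind of abstraction such a library can offer, the sketch below shows a hypothetical bash helper that composes a REST call against a Compute Cloud style endpoint; the function name, hostname and header values are invented for this example and are not part of OPC_API_LIB.

```shell
#!/bin/bash
# Hypothetical helper: build (but do not execute) the curl command for a
# GET request against an Oracle Compute Cloud REST endpoint.
# All names and values below are illustrative.

opc_build_get_request () {
  local endpoint="$1" rest_host="$2" auth_cookie="$3"
  echo "curl -s -X GET" \
       "-H 'Cookie: ${auth_cookie}'" \
       "-H 'Accept: application/oracle-compute-v3+json'" \
       "https://${rest_host}${endpoint}"
}

# Example: compose a request that would list instances
opc_build_get_request "/instance/" "api.compute.example.oraclecloud.com" "nimbula=example"
```

A caller then only supplies the endpoint and credentials, instead of having to remember the header and URL conventions of the API on every call.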

All the main code for the Open Oracle Public Cloud API library will be available on GitHub.

Friday, February 10, 2017

Oracle Linux - install Jenkins on Oracle Linux

Jenkins is becoming more and more the tool of choice in most continuous integration and DevOps environments.

Jenkins is an open source automation server written in Java. Jenkins helps to automate the non-human part of the software development process, with now-common practices like continuous integration, and further empowers teams to implement the technical part of continuous delivery.

It is a server-based system running in a servlet container such as Apache Tomcat. It supports SCM tools including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, Clearcase and RTC, and can execute Apache Ant, Apache Maven and sbt based projects as well as arbitrary shell scripts and Windows batch commands. The creator of Jenkins is Kohsuke Kawaguchi. Released under the MIT License, Jenkins is free software.

Installing Jenkins on Oracle Linux is relatively easy and only involves a small number of steps, as outlined below.

The first step is to ensure you have the Jenkins YUM repository available on your Oracle Linux instance so you can do the installation. This involves the following 3 steps:

sudo wget -O /etc/yum.repos.d/jenkins.repo
sudo rpm --import
sudo yum install jenkins

As soon as you have completed those steps, Jenkins is installed. Jenkins also requires Java; if it is not yet present on your Oracle Linux instance, you can install it with a simple yum command

yum install java

When the installation is done you will have to ensure that Jenkins is started and that it starts every time you reboot your system. The following two commands make sure that Jenkins is started and that it is included in the startup routine of your Oracle Linux instance

service jenkins start
chkconfig jenkins on

If all completed without any issues you should now have a running Jenkins server on your Oracle Linux instance. This means that you should be able to access the server with a browser on port 8080, provided that your local firewall, if you have one installed, allows this.
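If a local firewall is blocking port 8080 you can open it up; the commands below are a sketch assuming an Oracle Linux 7 style setup with firewalld, while on Oracle Linux 6 you would configure iptables instead.

```shell
# open TCP port 8080 for the Jenkins web interface (assumes firewalld)
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
```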

If you have ensured you can access Jenkins on port 8080 you will see the below screen the first time you access it.

This means that your Jenkins server is running and you have to follow the instructions on the screen to unlock Jenkins. This ensures that you are the only one who can perform the initial setup steps.

You should now be ready to start enjoying Jenkins on Oracle Linux.

Monday, February 06, 2017

Oracle Cloud - Adding swap space to your Oracle Linux compute instance

Whenever you deploy an Oracle Linux instance on the Oracle Compute Cloud you will notice that, at this moment, the deployment is bare minimal. In essence I do agree with the line of thinking that things you do not explicitly need do not belong on your system. Everything you need for a specific reason you are free to add at that moment in time, while keeping the template as small as possible.

The same applies to the idea that systems should be sized for what they really need; one should not oversize systems. For this reason I am a personal fan of just enough operating system (JEOS) deployments and just enough hardware resources.

One of the downsides of this line of thinking, with bare minimal operating system deployments and a limited set of compute resources, is that you sometimes miss things you actually would like to have. One of the things you might run into first when applying this reasoning on the Oracle Public Cloud is swap space.

No swap space
When deploying a templated bare minimal system in the Oracle Public Cloud using the Compute Cloud Service, you will notice that you do not have swap space. Depending on the goal you have for a specific instance this might be an issue, or you might not even notice. Some applications are perfectly fine without swap space while others even demand it during installation.

By default you will not have swap space; this means you will have to add swap space at run time, or you have to make sure that your automated deployment takes care of adding swap space for those instances where it is required.

Give me swap space
In cases where you need swap space you can simply add it. In effect there are two main ways of adding swap space: you can use a swap file created with the dd command, or you can add an entire disk to your machine and use that for swap. It used to be the case that adding additional disks to your machine was not as simple as executing a few commands and would involve actual hardware.

In the era of virtualization and cloud, and in all reality since we started using SAN solutions for storage, adding more diskspace to a machine is not that hard anymore. Claiming and adding more disk space in the cloud era is simply requesting more space from your cloud provider.
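The swap-file alternative mentioned above can be sketched as follows; the size and the /swapfile path are illustrative, and the commands assume root privileges.

```shell
# create a 1 GiB file, restrict access, format it as swap and enable it
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```

The remainder of this post uses the other option, a dedicated disk.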

Creating a disk in the Oracle Cloud
When we decide to use a disk for swap space, the first thing we need to do is to ensure we have a disk to add. To create a disk we navigate to the storage tab in the Compute Cloud Service console. Here we can create a new disk as a storage volume; an example of this is shown below.

After the disk is created you can attach the disk to an instance in the Oracle Compute Cloud Service. This will result in a screen like the one below. You have to select the instance name from a list of values and select under which number you add the disk to the instance.

Selecting the number under which you add the disk is important; it determines the device name under which the new disk is known in the Oracle Linux instance. By default the first disk will be known as device /dev/xvdb, the second as /dev/xvdc, the third as /dev/xvdd, etc.

Creating the swap space
As soon as you have attached the disk to the instance it will be known as a new device on the instance. This means you will have to tell the instance how you would like to use it. You could use it for storage, which would require you to mount it as a filesystem. However, in this case we want to use it as swap space, which requires a somewhat different approach.

First we check the current amount of swap space available on the instance. As can be seen below, we currently do not have any swap space added.

[root@pocapp2 ~]#
[root@pocapp2 ~]# free
             total       used       free     shared    buffers     cached
Mem:       7657252    5632492    2024760          0      72088    5244236
-/+ buffers/cache:     316168    7341084
Swap:            0          0          0
[root@pocapp2 ~]#

Now we have to see where the disk is that we just created and attached to the instance. As we selected that the disk should be added as the secondary disk, we should be able to find the new disk as device /dev/xvdc

[root@pocapp2 ~]#
[root@pocapp2 ~]# ls /dev/xvd*
/dev/xvdb  /dev/xvdb1  /dev/xvdb2  /dev/xvdc
[root@pocapp2 ~]#

As you can see from the above example we now have a device /dev/xvdc available on the system which we can use for swap space. Now we have to turn the device into a swap device by using mkswap

[root@pocapp2 ~]#
[root@pocapp2 ~]# mkswap /dev/xvdc
mkswap: /dev/xvdc: warning: don't erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 10485756 KiB
no label, UUID=8ac3eacf-42e6-43a4-8a53-d33f29767dee
[root@pocapp2 ~]#

Now that we have ensured that we can use the new disk as swap space, we have to enable it in the system. A simple swapon command on the device ensures that the swap is used.

[root@pocapp2 ~]#
[root@pocapp2 ~]# swapon /dev/xvdc
[root@pocapp2 ~]#

After executing the swapon command the device should now be providing swap space to the system. You can check this by again executing the free command; you will notice that the additional swap space is now active.

[root@pocapp2 ~]#
[root@pocapp2 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7477       5508       1969          0         70       5121
-/+ buffers/cache:        316       7160
Swap:        10239          0      10239
[root@pocapp2 ~]#

Even though we now have the swap space available on the system, we have not yet made it persistent. Meaning, the next time we reboot the machine we will lose the swap space again. To make the swap space persistent we have to add a line to /etc/fstab like the example shown below.

/dev/xvdc               swap                    swap    defaults        0 0

Now we have ensured that our system is equipped with additional swap space and that it is done so in a persistent manner to ensure that swap space is available every time we reboot the machine.

Sunday, February 05, 2017

Oracle Cloud - Build secure hybrid cloud connections with Oracle Corente Gateway

When you start using the Oracle Cloud, one of the things you will most likely want to understand is how to connect users to systems deployed in the Oracle Cloud, and how to connect servers in your own datacenter, or in another cloud, to them. For some time the primary answer was: use Oracle Fast Connect.

However, another solution is available, one that finds its origin in this press release dating back to the beginning of 2014:

On January 7, 2014, Oracle announced that it has agreed to acquire Corente, a leading provider of software-defined networking (SDN) technology for wide area networks (WAN).

The transaction has closed.

Corente's software-defined WAN virtualization platform accelerates deployment of distributed and cloud-based applications and services by allowing customers to provision and manage global private networks connecting to any site, over any IP network, in a secure, centralized, and simple manner. Proven deployments at leading enterprises and cloud service providers have dramatically decreased time to deployment of cloud-based applications and services, and increased security and manageability across the enterprise ecosystem.

The combination of Oracle and Corente is expected to deliver software-defined networking offerings that create cost-effective, secure networks, spanning global deployments, delivering a complete technology portfolio for cloud deployments with SDN offerings that virtualize both the enterprise data center LAN and the WAN.

Oracle Cloud acquisition strategy
As it often goes with Oracle and acquisitions, for some time you do not hear about the acquired product, and then suddenly it starts to be included in the wider portfolio. Ever since Oracle started the journey to the cloud you see that companies are often acquired to strengthen the service portfolio of the Oracle Public Cloud in some way or form.

In some cases this is not a full new product line; it is the small additions that make the Oracle Public Cloud much more attractive and easier to use and incorporate in your enterprise deployments.

Connecting the Hybrid Cloud
The Oracle Corente Gateway provides a solution to a known problem when developing a hybrid cloud strategy. The issue revolves around the question: how do we connect the different clouds and locations? By default, cloud solutions open up to the public internet, a model which you do not want in all situations. The recent issues with compromised MongoDB servers that had been configured to be accessible from the public internet made this painfully clear once again.

The ideal model is that nothing is connected to the public internet directly unless there is a functional reason for it. Meaning, webservers providing services to users on the public internet can very well be exposed to the public internet. However, all other services running on those specific servers, and all other servers, should be shielded from people trying to access them.

Ideally a model is created where the different clouds, cloud locations and traditional datacenter locations are connected together via a secured network. This secured network can be a site-2-site VPN tunnel over the public internet, or a secured network via the dark fiber backbone of the major network providers. The latter is, for example, a service provided by Equinix in the form of the Equinix Cloud Exchange.

Oracle Corente Services Gateway
Oracle provides an easy to use and easy to implement solution for a site-2-site VPN model in the form of the Corente service. The Corente service can be seen as a virtual VPN endpoint which you can connect to an on-premise solution in your datacenter. As an example, you would be able to create a secure site-2-site VPN connection where you have Corente running in the Oracle Cloud while in your local datacenter you have a Juniper vSRX solution in place.

By binding the cloud and your local datacenter together with a site-2-site VPN connection you can extend your datacenter into the cloud. By ensuring the correct network routing, services can be shared and administration can be done with a single network experience. This limits the need to have direct and open connections between the two sites. The level of integration and the level of security are raised by binding the two locations together.

As can be seen in the diagram above, the Corente instance is provisioned in the Oracle Cloud. For this, an Oracle Compute Service instance running Oracle Linux is used to ensure the software-defined VPN endpoint provides the needed services. From the Corente gateway you can route network traffic to the Oracle Compute Service instances; however, connections to other Oracle Public Cloud services can also be established. As an example, you can use this model to secure the connections to Oracle databases running in the Oracle Database Cloud Service.

Oracle Cloud - Deploying Microservice Containers

Whenever you embark on a more microservice oriented way of developing your application, it will become clear that this architecture is more suitable for a DevOps way of working than the traditional way applications are developed and maintained.

One of the things we see a lot in enterprises that start transforming from traditional IT to a more modern way of architecture, development and maintenance is the adoption of extremely fast and flexible building blocks. Those building blocks are often selected to provide optimal support for a DevOps kind of operation and to ensure high flexibility for agile development as well as for scaling compute resources up and down.

One of the technologies we see rapidly gaining adoption in the enterprise is the use of containers in combination with microservices. Instead of provisioning "virtual machines" in a cloud environment to host a monolithic application, the trend is shifting toward deploying containers to host webservices.

An example of such a deployment is running Flask in a Docker container. Flask is a microframework for Python based on Werkzeug and Jinja 2, released under the BSD license, and it is ideal for developing microservices with Python. Dockerizing a simple Flask app is relatively easy, and a large number of tutorials and examples are available.
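As a hedged illustration of such a setup, a minimal Dockerfile for a Flask microservice could look like the sketch below; the base image, file names and port are assumptions for this example, not taken from a specific tutorial.

```dockerfile
# Minimal sketch: container image for a Flask microservice
FROM python:2.7
RUN pip install flask
COPY app.py /app/app.py
WORKDIR /app
EXPOSE 5000
CMD ["python", "app.py"]
```

Here app.py would contain the Flask application, started with app.run(host="0.0.0.0", port=5000) so it is reachable from outside the container.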

Deploying with Oracle Cloud
When you are building an enterprise class microservice architecture based footprint and start adopting a container based infrastructure in a DevOps fashion, the Oracle Cloud provides some ideal components to get started.

Oracle Cloud for docker based deployments

As you can see from the above high-level representation, Oracle provides some of the key components for building such a landscape. In this example your developers make use of the Oracle Developer Cloud Service to develop and store code. This is, however, also the foundation for an automated deployment of containers which contain both the needed technical components, such as Flask, and the developed microservice.

In the above example you can see that "application consumers" have a "person" as icon. In reality the consumers of a microservice will in most cases be applications instead of real-life persons. Those applications can be real applications or another set of microservices.

The Oracle Developer Cloud Service provides the basic components to facilitate a fully automated continuous integration strategy. In case you desire more than what is provided out of the box, you can deploy and integrate whatever you need by leveraging the Oracle Compute Cloud Service.

Integrate with other services
Even though you can in theory build everything based upon a container strategy, the question architects have to ask is: which parts do I want to develop and which parts do I want to consume? In many cases it is much more beneficial to consume a service rather than develop it. Take messaging, for example: you can build a messaging service yourself, or you can make use of the Oracle Messaging Cloud Service and consume it rather than develop it.

The same is applicable, for example, to handling documents or storing data in a database. For this you could leverage some of the other Oracle Public Cloud services.

In conclusion
When transforming your legacy applications or building a new solution, it is advisable to look into how you can leverage more modern architecture principles such as microservices. It is also advisable to make sure you can leverage the flexibility and scalability of the cloud and adopt lightweight solutions such as containers.

In addition to the above, you should take into consideration how you can create your solution with DevOps and continuous integration in mind, to ensure an agile development method which provides flexibility and speed for adopting new strategies.

Saturday, February 04, 2017

Functional Decomposition for Microservices Architecture and Application Refactoring

When you start considering building a new product or a new application, or refactoring and retrofitting an existing application to make it future proof, at some point you will most likely consider the use of microservices.

Microservices is a specialization of an implementation approach for service-oriented architectures used to build flexible, independently deployable software systems. Services in a microservice architecture are processes that communicate with each other over a network in order to fulfill a goal. These services use technology-agnostic protocols. The microservices approach is a first realization of SOA that followed the introduction of DevOps and is becoming more popular for building continuously deployed systems.

In a microservices architecture, services should have a small granularity and the protocols should be lightweight. A central microservices property that appears in multiple definitions is that services should be independently deployable. The benefit of distributing different responsibilities of the system into different smaller services is that it enhances the cohesion and decreases the coupling. This makes it easier to change and add functions and qualities to the system at any time. It also allows the architecture of an individual service to emerge through continuous refactoring, and hence reduces the need for a big up-front design and allows for releasing software early and continuously.

Knowing your functionality
Regardless of whether you are building a new application from scratch or building upon an existing application and retrofitting it for the future, you will need to understand its functionality. In traditional architecture methods it is vitally important to understand the functionality of your application; when moving to a microservices architecture it is even more important.

The primary reason it is important to understand the application functionality and the business use of the application is that it is good practice to break the application up into microservices based upon its functional components.

Traditionally all functionality was captured in an overall monolithic application architecture. With microservices, each piece of functionality can in theory be a separate microservice.

Functional decomposition
Whenever you start to architect an application based upon microservices you will have to make a functional decomposition to break down the complexity and map it onto functional areas and functional components, and later onto services and microservices.

Functional decomposition is the process of taking a complex process and breaking it down into smaller, simpler parts. This might result in the below high-level functional decomposition, represented as a flow.

In a real-world application this will be a sub-decomposition of a much larger and more complex set of processes. The main pitfall is that this is primarily done by developers, who tend to think in solutions. The best way to do a pure functional decomposition is to do it without thinking about how it should be implemented in code (or in a system, for that matter). This works best when you think about how the process would be done with pen and paper.

After you have defined the functional steps you can break down each step into the functions you need. In this model you define which technical functions are needed to complete a step. As an example, if one of your steps is checking the transportation costs for sending a parcel to a customer's shipping location, you might need the following technical functions:

  • Get combined weight for all products in the order
  • Determine shipping box based upon product sizes
  • Get shipping destination 
  • Get shipping costs based upon weight, box dimensions and destination
  • Get customer discount
  • Apply customer discount on shipping costs
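The functions listed above can be sketched as a composition of small, single-purpose functions. The sketch below is purely illustrative; all function names, box sizes, rates and data structures are hypothetical assumptions, not part of any real shipping API.

```python
# Hypothetical sketch: composing the technical functions identified above
# into one "check transportation costs" step. All names and values are
# illustrative assumptions.

def get_combined_weight(order):
    # combined weight of all products in the order
    return sum(p["weight"] for p in order["products"])

def determine_shipping_box(order):
    # pick a shipping box based upon product sizes (simplified to volume)
    volume = sum(p["volume"] for p in order["products"])
    return "small" if volume <= 10 else "large"

def get_shipping_costs(weight, box, destination):
    # shipping costs based upon weight, box dimensions and destination
    base = {"small": 5.0, "large": 9.0}[box]
    zone_factor = 2.0 if destination["country"] != "NL" else 1.0
    return (base + 0.5 * weight) * zone_factor

def apply_customer_discount(costs, discount):
    return costs * (1 - discount)

def shipping_cost_step(order, customer):
    # the functional step, built from the technical functions above
    weight = get_combined_weight(order)
    box = determine_shipping_box(order)
    costs = get_shipping_costs(weight, box, customer["destination"])
    return apply_customer_discount(costs, customer["discount"])

order = {"products": [{"weight": 2.0, "volume": 4}, {"weight": 1.0, "volume": 3}]}
customer = {"destination": {"country": "NL"}, "discount": 0.10}
print(shipping_cost_step(order, customer))
```

Each of these small functions is a candidate to become (part of) a microservice later in the mapping exercise.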

This will provide you with a mapping like the one shown below as a visual example. In reality you would provide the actual function descriptions.

Mapping functional decomposition to services
When you have created a decomposition of your application into both functional steps and the functions needed to support each step, you can start mapping the overlap of technical functions across all the functional steps in your flow. If we take the example of the steps needed to determine the costs for sending a parcel, you have the required technical function "get customer discount". This technical function will be required in multiple steps of an order intake flow.

The below representation shows the mapping of the functional decomposition and finding the "double" functions in the overall flow.

If you have been able to find the "double" functions you can map them to future microservices. The idea of a microservice is that you can call it from every functional step where it is needed. This means you can have it in a central location in your microservices deployment and call it from any functional step when needed.
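The pattern can be sketched as follows: one "get customer discount" service is consumed by several functional steps. The sketch is a hypothetical stand-in; in a real deployment the plain function would be an HTTP call to the discount microservice, and all names, endpoints and values are illustrative assumptions.

```python
# Hypothetical sketch: one shared "get customer discount" microservice
# consumed by multiple functional steps. A plain function stands in for
# the remote call; names and values are illustrative.

def get_customer_discount(customer_id):
    # in a real deployment this would be a call to the central service,
    # e.g. GET http://discount-service/customers/{customer_id}/discount
    discounts = {"c-100": 0.10, "c-200": 0.05}
    return discounts.get(customer_id, 0.0)

def shipping_cost_step(customer_id, base_cost):
    # functional step 1 reuses the shared discount service
    return base_cost * (1 - get_customer_discount(customer_id))

def order_total_step(customer_id, subtotal):
    # functional step 2 reuses the exact same service
    return subtotal * (1 - get_customer_discount(customer_id))

print(shipping_cost_step("c-100", 10.0))
print(order_total_step("c-100", 200.0))
```

Because the discount logic lives in one central service, a change to the discount rules only requires redeploying that one microservice, not every functional step that uses it.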

Sizing your microservice deployment
In essence microservices are, due to the way they communicate, ideal for building a highly available and highly scalable architecture. As a rule of thumb it is considered good practice to always deploy a microservice in a highly available mode and always have a minimal deployment of two instances of each microservice.

Based upon the number of functional steps that use a specific microservice you can take a first guess at the number of instances you need for each microservice. As stated, as a default rule every microservice needs to be deployed in at least a dual-instance fashion. If you see that certain microservices are used by multiple functional steps it might be wise to deploy more than two instances of that specific microservice.
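This first-guess sizing rule can be sketched as a small calculation. The minimum of two instances comes from the rule of thumb above; the scaling step (one extra instance per few consuming functional steps) and all service names and thresholds are illustrative assumptions, not a formal capacity-planning method.

```python
# Hypothetical sketch: a first-guess instance count per microservice.
# Minimum of two instances for high availability (rule of thumb above),
# plus one extra instance per few consuming functional steps. The
# threshold and service names are illustrative assumptions.

def instance_count(steps_using_service, steps_per_extra_instance=3):
    extra = (steps_using_service - 1) // steps_per_extra_instance
    return max(2, 2 + extra)

# number of functional steps consuming each microservice
usage = {
    "get-customer-discount": 5,
    "get-shipping-costs": 1,
}

plan = {service: instance_count(n) for service, n in usage.items()}
print(plan)
```

A heavily shared service such as "get-customer-discount" ends up with more instances than a service consumed by a single functional step, while no service ever drops below the two-instance minimum.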

In conclusion
In every case, regardless of whether you will use microservices, it is a vital step in the thinking process to break down your application into functional parts. When developing a microservices-based solution, the functional decomposition can be used to start mapping out the different microservices you will need and align them with the functional steps in your application.