Thursday, March 23, 2017

Oracle Cloud - Architecture Blueprint - microservices transport protocol encryption

The default way microservices communicate with each other is based upon the HTTP protocol. When one microservice needs to call another microservice it will initiate a service call based upon an HTTP request. The HTTP request can use any of the standard methods defined in the HTTP standard, such as GET, POST and PUT. In effect this is a good mechanism and enables you to use all of the standards defined within HTTP. The main issue with HTTP is that it is clear text and by default will not have encryption enabled.

The reality one has to deal with is that the number of instances of microservices can be enormous, and in a complex landscape the number of possible connections can be enormous as well. This also means that each possible path, each network connection, can potentially be intercepted. Having no HTTPS/SSL encryption implemented makes intercepting network traffic much easier.



It is a best practice to ensure all of your connections are encrypted by default; to do so you will need to make use of HTTPS instead of HTTP. Building your microservices deployment to only work with HTTPS and not with HTTP brings in a couple of additional challenges.

The challenge of scaling environments
In a microservices oriented deployment, containers or virtual machines that provide instances of a microservice will be provisioned and de-provisioned in a matter of seconds. The issue that comes with this in relation to using HTTPS instead of HTTP is that you want to ensure that all HTTPS connections between the systems are based upon valid certificates which are created and controlled by a central certificate authority.

Even though it is possible to have each service that is provisioned generate and sign its own certificate, this is not advisable. Using self-signed certificates is generally considered insecure. Most standard implementations of negotiating encryption between two parties do not see a self-signed certificate as a valid level of security. Even though you can force your code to accept a self-signed certificate and make it work, so that encryption on the protocol level is negotiated and used, you will not be able to fully assure that the other party is not a malicious node owned by an intruder.

To ensure that all instances can verify that the other instance they call is indeed a trusted party, and to ensure that encryption is used in the manner it is intended, you will have to make use of a certificate authority. A certificate authority is a central "bookkeeper" that provides certificates to parties needing one, and it provides the means to verify that a certificate offered during encryption negotiation is indeed valid and belongs to the instance presenting it.

The main issue with using a certificate authority to provide signed certificates is that you will have to ensure that you have a certificate authority service in your landscape capable of generating and providing new certificates on demand.

In general, looking at the common way certificates are signed and handed out, it is a tiresome process which might involve third parties and/or manual processing. Within an environment where signed certificates are needed directly and on the fly this is not a real option. This means that requesting signed certificates from the certificate authority needs to be direct and preferably based upon a REST API.

Certificate authority as a service
When designing your microservices deployment to make use of HTTPS and certificates signed by a certificate authority, you will need to have the certificate authority as a service. The certificate authority as a service should enable services to request a new certificate when they are initialized. A slight alternative is that your orchestration tooling requests the certificate on behalf of the service that needs to be provisioned and provides the certificate during the provisioning phase.

In both cases you will need to have the option to request a new certificate, or request a certificate revocation when the service is terminated, via a REST API.

The below diagram shows on a high level the implementation of a certificate authority as a service which enables (in this example) a service instance to request a signed certificate to be used to ensure the proper way of initiating HTTPS connections with assured protocol level encryption.


To ensure a decoupling between the microservices and the certificate authority we do not allow direct interaction between the microservice instances and the certificate authority. From a security point of view and a decoupling and compartmentalizing point of view this is a good practice and adds additional layers of security within the overall footprint.

When a new instance of a microservice is being initialized, whether as a Docker container in the Oracle Container Cloud Service or as a virtual machine instance in the Oracle Compute Cloud Service, the initialization will request a new signed certificate from the certificate microservice.

The certificate microservice will request a new certificate by calling the certificate authority server REST API on behalf of the initiating microservice. The answer provided back by the certificate authority is passed through by the certificate microservice towards the requesting party. In addition to just being a proxy, it is good practice to ensure your certificate microservice performs a number of additional verifications to see if the requesting party is authorized to request a certificate, and to ensure the right level of auditing and logging is done to provide an audit trail.

Giving the CA a REST API
When exploring certificate authority implementations and solutions it will become apparent that they have, in general, been developed without a REST API in mind. As the concept of the certificate authority was already in place long before microservice concepts came into play, you will find that the integration options are often limited.

An exception to this is the CFSSL (CloudFlare's SSL toolkit) project on GitHub. The CFSSL project provides a free and open-source PKI toolkit with a full set of rich REST APIs to undertake all required actions in a controlled manner.

As an example, the creation of a new certificate can be done by sending a JSON payload to the CFSSL REST API; the return message will consist of a JSON document which contains the cryptographic material needed to ensure the requesting party can enable HTTPS. Below you will notice the JSON payload you can send to the REST API. This is a specific request for a certificate for the ms001253 instance located in the Oracle Compute Cloud Service.

{
 "request": {
  "CN": "ms001253.compute-acme.oraclecloud.internal",
  "hosts": ["ms001253.compute-acme.oraclecloud.internal"],
  "key": {
   "algo": "rsa",
   "size": 2048
  },
  "names": [{
   "C": "NL",
   "ST": "North-Holland",
   "L": "Amsterdam",
   "O": "ACME Inc."
  }]
 }
}
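
As a sketch of how such a request could be sent: assuming the payload above is saved as newcert-request.json and a CFSSL server is listening on its default port 8888 on a hypothetical host named cfssl.internal, a plain curl call is enough.

# cfssl.internal is a placeholder host; 8888 is the CFSSL default port
curl -s -X POST -H "Content-Type: application/json" \
     -d @newcert-request.json \
     http://cfssl.internal:8888/api/v1/cfssl/newcert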

As a result you will be given back a JSON payload containing all the required information. Due to the way CFSSL is built you will have the response almost instantly. The combination of having the option to request a certificate via a call to a REST API and getting the result back directly makes it very usable for cloud implementations where you scale the number of instances (VMs, containers, ..) up or down all the time.

{
 "errors": [],
 "messages": [],
 "result": {
  "certificate": "-----BEGIN CERTIFICATE-----\nMIIDRzCCAjGgAwIBAg2 --SNIP-- 74m1d6\n-----END CERTIFICATE-----\n",
  "certificate_request": "-----BEGIN CERTIFICATE REQUEST-----\nMIj --SNIP-- BqMtkb\n-----END CERTIFICATE REQUEST-----\n",
  "private_key": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIJfVVIvN --SNIP-- hYYg==\n-----END EC PRIVATE KEY-----\n",
  "sums": {
   "certificate": {
    "md5": "E9308D1892F1B77E6721EA2F79C026BE",
    "sha-1": "4640E6DEC2C40B74F46C409C1D31928EE0073D25"
   },
   "certificate_request": {
    "md5": "AA924136405006E36CEE39FED9CBA5D7",
    "sha-1": "DF955A43DF669D38E07BF0479789D13881DC9024"
   }
  }
 },
 "success": true
}

The API endpoint for creating a new certificate is /api/v1/cfssl/newcert; however, CFSSL provides a lot more API calls to undertake a number of actions. One of the reasons for implementing the intermediate microservice is that it can ensure that clients cannot initiate some of those API calls, without the need to change the way CFSSL is built.

The below overview shows the main API endpoints that are provided by CFSSL. A full set of documentation on the endpoints can be found in the CFSSL documentation on Github.

  • /api/v1/cfssl/authsign
  • /api/v1/cfssl/bundle
  • /api/v1/cfssl/certinfo
  • /api/v1/cfssl/crl
  • /api/v1/cfssl/info
  • /api/v1/cfssl/init_ca
  • /api/v1/cfssl/newcert
  • /api/v1/cfssl/newkey
  • /api/v1/cfssl/revoke
  • /api/v1/cfssl/scan
  • /api/v1/cfssl/scaninfo
  • /api/v1/cfssl/sign


Certificate verification
One of the main reasons we stated that one should not use self-signed certificates, and should use certificates from a certificate authority instead, is that you want to have the option of verification.

When conducting a verification of a certificate, checking if the certificate is indeed valid and by doing so getting an additional level of trust, you will have to verify the certificate received from the other party with the certificate authority. This is done based upon OCSP, the Online Certificate Status Protocol. A simple high level example of this is shown in the below diagram:

Within the high level diagram as shown above you can see that:

  • A service will request a certificate from the certificate microservice during the initialization phase
  • The certificate microservice requests a certificate on behalf of the service at the certificate authority
  • The certificate authority sends the certificate back to the certificate microservice, after which it is sent to the requesting party
  • The requesting party uses the response to include the certificate in the configuration to allow HTTPS traffic


As soon as the instance is up and running it is eligible to receive requests from other services. As an example: if example service 0 would call example service 2, the first response during encryption negotiation would be that example service 2 sends back a certificate. If you have an OCSP responder in your network, example service 0 can contact the OCSP responder to check the validity of the certificate received from example service 2. If the response indicates that the certificate is valid, one can assume that a secured connection can be made and the other party can be trusted.
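
To make this concrete, the check an OCSP-aware client performs can be approximated manually with openssl. A minimal sketch, assuming the CA certificate is stored in ca.pem, the certificate received from example service 2 in service2.pem, and a hypothetical OCSP responder at http://ocsp.acme.internal:

# ca.pem, service2.pem and the responder URL are illustrative placeholders
openssl ocsp -issuer ca.pem \
             -cert service2.pem \
             -url http://ocsp.acme.internal \
             -CAfile ca.pem

A response of "service2.pem: good" indicates the responder does not consider the certificate revoked.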

Conclusion
Implementing and enforcing that only encrypted connections are used between services is a good practice and should be at the top of your list when designing your microservices based solution. One should include this in the first stage and within the core of the architecture. Trying to implement a core security functionality at a later stage is commonly a cumbersome task.

Ensuring you have all the right tools and services in place so you can easily scale up and down while using certificates is vital to being successful.

Even though it might sound relatively easy to ensure HTTPS is used everywhere and in the right manner, it will require effort to do it in the right way so that it becomes an asset and not a liability.

When done right it is an ideal addition to a set of design decisions for ensuring a higher level of security in microservice based deployments.

Wednesday, March 22, 2017

Oracle Linux - Short Tip 6 - find memory usage per process

Everyone operating an Oracle Linux machine, or any other operating system for that matter, will at a certain point have to look at memory consumption. The first question during a memory optimization project is: which process is currently using how much memory? Linux provides a wide range of tools and options to gain insight in all facets of system resource usage.

For those who "just" need to have a quick insight in the current memory consumption per process on Oracle Linux the below command can be extremely handy:

ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'

It will provide a quick overview of the current memory consumption in MB per process.

[root@devopsdemo ~]# ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'
         0.00 Mb COMMAND
       524.63 Mb /usr/sbin/console-kit-daemon --no-daemon
       337.95 Mb automount --pid-file /var/run/autofs.pid
       216.54 Mb /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
         8.81 Mb hald
         8.46 Mb dbus-daemon --system
         8.36 Mb auditd
         2.14 Mb /sbin/udevd -d
         2.14 Mb /sbin/udevd -d
         1.38 Mb crond
         1.11 Mb /sbin/udevd -d
         1.04 Mb ps -eo size,pid,user,command --sort -size
         0.83 Mb sshd: root@pts/0
         0.74 Mb cupsd -C /etc/cups/cupsd.conf
         0.73 Mb qmgr -l -t fifo -u
         0.73 Mb login -- root
         0.65 Mb /usr/sbin/abrtd

The overview is extremely useful when you need to quickly find the processes that consume the most memory, or memory consuming processes which are not expected to use (this much) memory.

Tuesday, March 21, 2017

Oracle Cloud - architecture blueprint - Central logging for microservices

When you engage in developing a microservices architecture based application landscape, at one point in time the question about logging will surface. When starting to develop with microservices you will see that there are some differences with monolithic architectures that will drive you to rethink your logging strategy. Where in a monolithic architecture you have one central server, or a cluster of servers, where the application is running, in a microservices architecture you will have n nodes, containers, instances and services.

In a monolithic architecture you will see that most business flows run within a single server and end-to-end logging will be relatively simple to implement and later to correlate and analyze. If we look at the below diagram you will see that a call to the API gateway can result in calls to all available services as well as to the service registry. This also means that the end-to-end flow will be distributed over all the different services, and logging will partly be done on each individual node, not in one central node (server) as is the case in a monolithic application architecture.



When deploying microservices in, for example, the Oracle Public Cloud Container Cloud Service it is a good practice to ensure that each individual Docker container as well as the microservice pushes its logging to a central API which stores the log data in a central location.

Implement central logging in the Container Cloud Service
The difference between the logging from the microservice and the logging from the Docker container deployed in the Oracle Public Cloud Container Cloud Service is that the microservice sends specific logging of the service, developed specifically during the development of the service, to a central logging API. This can include technical logging as well as functional business flow logging which can be used for auditing.

In some applications the technical logging is strictly separated from the business logging. This is to ensure that business information is not available to technical teams and can only be accessed by business users who need to undertake an audit.

Technical logging on the container level is the lower level logging generated by Docker and the daemon providing the services needed to run the microservice.


The above diagram shows the implementation of an additional microservice for logging. This microservice will provide a REST API capable of receiving JSON based logging. This will ensure that all microservices will push the logging to this microservice.

When developing the mechanism which will push the log information, or audit information, to the logging microservice it is good to ensure that this is a forked logging implementation. More information on forked logging and how to implement it while preventing execution delay in high speed environments can be found in this blogpost, where we illustrate this with a bash example.
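
A minimal bash sketch of such a forked log push could look like the following; the endpoint URL and JSON structure are assumptions for illustration, and the trailing & forks the HTTP call so the calling service is not delayed by the logging microservice:

# hypothetical logging endpoint; adjust to your own logging microservice
LOG_ENDPOINT="http://logging-ms.internal/api/v1/log"

log_event () {
  # fork the curl call into the background so execution is not delayed
  curl -s -X POST -H "Content-Type: application/json" \
       -d "{\"service\":\"ms001253\",\"level\":\"INFO\",\"message\":\"$1\"}" \
       "$LOG_ENDPOINT" > /dev/null 2>&1 &
}

log_event "order accepted"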

Centralize logging with Oracle Management Cloud
Oracle provides, as part of the Public Cloud portfolio, the Oracle Management Cloud, which includes Log Analytics. When developing a strategy for centralized logging of your microservices you can have the receiving logging microservice push all logs to a central consolidation server in the Oracle Compute Cloud. You can then have the Oracle Management Cloud Log Analytics service collect the logs and include them in the service provided by Oracle.

An example of this architecture is shown on a high level in the below diagram.


The benefit of the Oracle Management Cloud is that it provides an integrated solution which can be combined with other systems and services running in the Oracle Cloud, any other cloud or your traditional datacenter.


An example of the interface which is provided by default by the Oracle Management Cloud is shown above. This framework can be used to collect logging and analyze it for your Docker containers and microservices as well as for other services deployed as part of the overall IT footprint.

The downside for some architects and developers is that you have to comply with a number of standards and methods defined in the solution by Oracle. The upside is that a large set of analysis tooling and intelligence is pre-defined and available out of the box.

Centralize logging with the ELK stack
Another option to consolidate logging is making use of non-Oracle solutions. Splunk comes to mind, however, for this situation the ELK stack might be more appropriate. The ELK stack consists of Elasticsearch, Logstash and Kibana, complemented with Elastic Beats and the standard REST APIs.

The ELK stack provides a lot more flexibility to developers and administrators, however it requires more understanding of how to work with ELK. The below image shows a high level representation of the ELK stack in combination with Beats.


As you can see in the above image there is a reservation for a {Future}beat. This is the place where you can deploy your own developed Beat; you can also use this method to do a direct REST API call to Logstash or directly to Elasticsearch. When developing logging for microservices it might be advisable to store the log data directly into Elasticsearch from within the code of the microservice. This might result in a deployment as shown below, where the ELK stack components, including Kibana for reporting and visualization, are deployed in the Oracle Compute Cloud Service.

This will result in a solution where all log data is consolidated in Elasticsearch and you can use Kibana for analysis and visualization. You can see a screenshot from Kibana below.


The upside of using the ELK stack is that you will have full freedom and possibly more ease in developing direct integrations. The downside is that you will need to do more yourself and need a deeper knowledge of your end-to-end technology (not sure if that is a real bad thing).
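
As a simple illustration of the direct approach mentioned above, a single log line can be pushed straight into Elasticsearch over its REST API; the host, index name and document fields below are assumptions for illustration:

# hypothetical Elasticsearch host and index
curl -s -X POST -H "Content-Type: application/json" \
     "http://elastic.internal:9200/microservice-logs/log" \
     -d '{"timestamp":"2017-03-21T09:24:00Z","service":"ms001253","level":"INFO","message":"order accepted"}'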

Conclusion
When you start developing an architecture for microservices you will need to take a fresh look at how you will do logging. You will have to understand the needs of both your business as well as your DevOps teams. Implementing logging should be done in a centralized fashion to ensure you have a good insight in the end-to-end business flow as well as all technical components.

The platform you select for this will depend on a number of factors. Both solutions outlined in the above post show you some of the benefits and some of the downsides. Selecting the right solution will require some serious investigation; taking the time to make this decision will pay back over time and should not be taken lightly.

Friday, March 17, 2017

Oracle Linux - short tip #5 - check last logins

Need to quickly check who logged into a specific Oracle Linux machine and from where they logged into the system? You can use the last command to make that visible. In effect last will read the file /var/log/wtmp and display it in a human readable manner. If you would do a cat on /var/log/wtmp you might notice that this is not the most "easy" way of getting your information.

As an example, if you execute last with the -a flag you might see something like the below:
[root@temmpnode ~]# last -a
opc      pts/3        Fri Mar 17 08:42   still logged in    61.113.181.37
opc      pts/3        Fri Mar 17 07:45 - 07:45  (00:00)     61.113.181.37
opc      pts/2        Fri Mar 17 07:14 - 09:24  (02:10)     61.113.181.37
opc      pts/1        Fri Mar 17 07:09   still logged in    61.113.181.37
opc      pts/0        Fri Mar 17 07:03   still logged in    61.113.181.37


The last command has a number of parameters that can make your life easier when trying to find out who logged into the system.

-f file
Tells last to use a specific file instead of /var/log/wtmp.

-num   
This is a count telling last how many lines to show.

-n num 
The same as -num

-t YYYYMMDDHHMMSS
Display the state of logins as of the specified time. This is useful, e.g., to determine easily who was logged in at a particular time -- specify that time with -t and look for "still logged in".

-R    
Suppresses the display of the hostname field.

-a    
Display the hostname in the last column. Useful in combination with the next flag.

-d    
For non-local logins, Linux stores not only the host name of the remote host but its IP number as well. This option translates the IP number back into a hostname.

-F    
Print full login and logout times and dates.

-i    
This option is like -d in that it displays the IP number of the remote host, but it displays the IP number in numbers-and-dots notation.

-o    
Read an old-type wtmp file (written by linux-libc5 applications).

-w    
Display full user and domain names in the output.

-x    
Display the system shutdown entries and run level changes.

Thursday, March 09, 2017

Oracle Cloud - Backup Jenkins to the Oracle Cloud

If you are using Jenkins as the automation server in your build and DevOps processes it most likely is becoming an extremely valuable asset. It is very likely that you have a large number of processes automated and people have been spending a large amount of time to develop scripting, plugins and automation to ensure that your entire end-2-end process works in the most optimal manner.

In case Jenkins forms a critical role in your IT footprint you will most likely have a number of Jenkins servers working together to execute all the jobs you require. This means that if one node fails you will not have an issue. However, if you would lose a site or a storage appliance you do want to have a backup.

Making a backup of Jenkins is relatively easy. In effect all artifacts needed to rebuild a Jenkins server to a running solution are stored in the Jenkins home. This makes it extremely easy from a backup point of view. However, keeping backups in the same datacenter is never a good idea. For this reason you would like to back up Jenkins to another location.

Making the assumption you run Jenkins within your own datacenter, a backup target can be the Oracle Cloud. If you run your Jenkins server already in the Oracle Cloud, you can backup Jenkins to another cloud datacenter.

Backup Jenkins to the Oracle Storage Cloud Service
As stated, the Jenkins objects are stored as files, which makes it very simple to create a backup. If you want to back up to the Oracle Storage Cloud this takes in effect two steps, which both can be scripted and periodically executed.

Ensure you package all the content of your Jenkins directory. We assume you have all your information stored in the default location when installing Jenkins on Oracle Linux, which is /var/lib/jenkins . This means that we should package the content of this location and after that transport it to the Oracle Storage Cloud Service.

The backup can be done by using the below example command, which will create a .tar.gz file in the /tmp directory. The file name will contain the epoch time stamp to ensure it is unique.

tar -zcvf /tmp/jenkins_backup_$(date +%s)_timestamp.tar.gz /var/lib/jenkins

After we have created the .tar.gz file we will have to move it to the Oracle Storage Cloud. To interact with the Oracle Storage Cloud and push a file to it you can use the Oracle Storage Cloud File Transfer Manager command-line interface (FTM CLI). For more background information and more advanced features (like for example retention) you can refer to the FTM CLI documentation.

As a simple example we will upload the file we just created to a container in the Oracle Storage Cloud named JenkinsBackup.

java -jar ftmcli.jar upload -N jenkins_backup_1489089140_timestamp.tar.gz JenkinsBackup /tmp/jenkins_backup_1489089140_timestamp.tar.gz

Now we should have the file securely stored in the Oracle Storage Cloud and ready to be retrieved when needed. As you can see, the above command will need a number of additional actions when you want to create a fully scripted version of this. You will also have to make sure that you have the right configuration for ftmcli stored in an ftmcli.properties file, and that you define whether you want to make use of the backup option and retention times in the backup cloud.
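
A minimal sketch of such a scripted version, combining the two commands shown above, could look like the below; it assumes ftmcli.jar and a valid ftmcli.properties in the working directory and leaves out error handling and retention:

#!/bin/bash
# create the archive, upload it to the JenkinsBackup container, clean up
BACKUP_FILE="/tmp/jenkins_backup_$(date +%s)_timestamp.tar.gz"

tar -zcvf "$BACKUP_FILE" /var/lib/jenkins
java -jar ftmcli.jar upload -N "$(basename $BACKUP_FILE)" JenkinsBackup "$BACKUP_FILE"
rm -f "$BACKUP_FILE"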

However, when done, you have the assurance that your backups are written to the Oracle Cloud and will be available in case of a disaster.

Backup Jenkins to the Oracle Developer Cloud Service
As we know.... Jenkins and GIT are friends... so without a doubt it will not come as a surprise that you can also backup Jenkins to a GIT repository. The beauty of this is that Oracle will provide you a GIT repository as part of the Oracle Developer Cloud Service.

This means that you can backup Jenkins directly into the Oracle Developer Cloud Service if you want. Even though the solution is elegant, I do have a personal preference for the backup in the file based manner.

However, for those wanting to explore the options to backup to a GIT repository in the Oracle Developer Cloud Service, a plugin is available which can be used to undertake this task. You can find the plugin on this page on the Jenkins website.

Oracle Linux – Install Gitlab on Oracle Linux

Even though Oracle is providing the option to use GIT from within the Oracle Developer Cloud Service, there are situations where you do want to use your own GIT installation. For example, situations where you need a local on premise installation for storing information in GIT because you are not allowed to store the information outside of the organization's datacenter. Or situations where you need the additional level of freedom to undertake specific actions which are not always allowed by the Oracle Developer Cloud Service.

In effect GIT will be just GIT, without a graphical user interface and the additional functionality which makes life much easier for developers and administrators. One of the solutions fitting for deploying your own GIT repository on Oracle Linux, with a full and rich set of options and a graphical user interface in the form of a web interface, is GitLab.

GitLab functionality
When adopting GitLab you will get a lot more functionality as opposed to “just” running git on your server. To name a couple of the features that will be introduced by GitLab, see the below examples:

  • Organize your repositories into private, internal or public projects
  • Manage access and permissions with different user roles and settings for internal and external users
  • Create Websites for your GitLab projects, groups and users
  • Unlimited public and private repos, create a new repo for even the smallest projects
  • Import existing projects from GitHub, BitBucket, Google Code, Fogbugz, or any git repo with a URL.
  • Protected branches, control read/write permissions to specific branches.
  • Keep your documentation within the project using GitLab’s built-in wiki system.
  • Collect and share reusable code with code Snippets
  • Control GitLab with a set of powerful APIs.


As you can see from the image above, GitLab will provide you a full web GUI that can be used by administrators as well as end-users in your organization.

Install GitLab on Oracle Linux
Installation of GitLab on Oracle Linux is relatively easy. Assuming you have a standard Oracle Linux 6 installation available for deploying GitLab, the below steps should be undertaken to ensure you have a fully working GitLab environment.

Make sure you have the dependencies installed on your system. This can be done with the below commands:

sudo yum install curl openssh-server openssh-clients postfix cronie
sudo service postfix start
sudo chkconfig postfix on
sudo lokkit -s http -s ssh

Ensure that you have the GitLab YUM repository available so we can install GitLab with YUM.

curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | sudo bash
sudo yum install gitlab-ce

Now we can issue the reconfigure command to ensure that GitLab is configured fully for your specific host.

sudo gitlab-ctl reconfigure

If all the steps are completed without any issue you will be able to navigate with a browser to your machine and access GitLab on the default port, which is port 80.
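
As a quick sanity check (not part of the official installation steps) you can verify that all GitLab components are running and that the web interface answers on port 80:

sudo gitlab-ctl status
curl -sI http://localhost | head -1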

Wednesday, March 08, 2017

Oracle Cloud – moving to a software defined cloud

When companies move from a traditional on premise IT footprint to a cloud based footprint this introduces a major change for the IT department. Where traditional IT departments are used to owning all assets and hosting them in the company’s datacenter, the physical assets are now owned by the cloud provider and the physical datacenter is largely off limits for customers. This means that all assets should be seen as virtual assets.

Traditional view 
Where processes and procedures in a traditional on premise IT footprint are still largely based upon the more physical principles and not the virtual principles, you see that a large part of processes and procedures include manual work. This includes manually changing firewalls, manually plugging in network cables and in part manually installing operating systems and applications.

Even though increased adoption of solutions like Puppet and Chef has been seen in traditional IT footprints over the years, a large part of the IT footprint is not based upon the principle of software defined infrastructure, also referred to as infrastructure as code.

Over the years a large number of companies have moved from bare-metal systems to a more virtualized environment; VMWare, Oracle VM and other virtualization platforms have been introduced. By being adopted into the footprint they have brought a level of software defined networking and software defined storage with them.

While visiting a large number of customers and supporting them with their IT footprints, from both an infrastructure point of view as well as an application point of view, I have seen that a large number of companies adopt those solutions as silo solutions. Solutions like Oracle Enterprise Manager, Oracle VM Manager and vCenter from VMWare are used. In some cases customers have included Puppet and/or Chef. However, only a fraction of the companies make use of the real advantages that are available and couple all the silo based solutions into an end-2-end chain.

The end-2-end chain
The end-2-end chain in a software defined IT footprint is the principle where you couple all the silo based solutions, management tooling, assets, applications and configuration into one automated solution. This holds that everything you do, everything you build, deploy or configure, is described in machine readable formats and used to automatically deploy the changes or new builds.

This also holds that everything is under version control, from your firewall settings to the virtual machines you deploy and applications and application configuration. Everything is stored under version control and is repeatable.

This also holds that in effect your IT staff has no direct need to be in the datacenter or to execute changes manually. Instead they change configuration and push it into the full end-2-end automation stack, which will take the needed actions based upon the infrastructure as code principle.

The difficulty with on premise infrastructure as code
One of the main challenges while implementing infrastructure as code in an existing on premise IT footprint is that the landscape has grown organically over the years. Due to the model in which IT footprints organically grow in the majority of companies, you will see that a large number of solutions have been implemented over time, all doing their part in the total picture and deployed the moment they were needed.

The issue this is causing is that in most cases the components are selected only based upon the functionality they provide while not taking into account how they can be integrated in an end-2-end chain.

This means that, in comparison to a deployment in a cloud, the implementation of a full end-2-end software defined model can become relatively hard and will require an increasing number of custom written scripts and integration models which do not always provide the optimal result one would like to achieve.

Building the software defined cloud model
When moving to a cloud based solution such as the Oracle Public Cloud a couple of advantages are directly present.

  • Companies are forced to rethink their strategies
  • Cloud will in most cases be a green field in comparison to the brown field of existing on premise IT footprints
  • Cloud, the Oracle Public Cloud, provides as standard all the components and interfaces required to adopt a full software defined model.

In cases where a company starts to adopt the Oracle Public Cloud as the new default location to position new systems and solutions this means that the adoption of a software defined model becomes much easier.

All components that are used as the building blocks for the cloud are by default accessible by making use of APIs. Everything is developed and driven in a way that it will be able to hook into automation tooling, providing the options to do full end-2-end software defined orchestration, deployment and maintenance of all assets.

While adopting a software defined model and while adopting automation and orchestration to a new level the same ground rule applies as for DevOps. For both software defined cloud automation and orchestration, just as for DevOps, there is no single recipe. Selecting the right tools for the job will be depending on what a company intends to achieve, what integrates the best with specific other tooling that is needed in the overall IT landscape.

Having stated that, for everyone who starts looking into adopting a full software defined cloud model and adopting automation and orchestration in an end-2-end fashion, the following toolsets are very much of interest and should be evaluated and selected based upon their use and level of integration:

  • TerraForm & Oracle Terraform provider
    • Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. The Oracle Terraform provider provides the connection between Terraform and the Oracle Public Cloud APIs
  • Jenkins
    • Jenkins is an open source automation server written in Java. Originally developed as a build server it currently is one of the main building blocks for companies who intend to build automation pipelines (chains). Providing a large set of plugins and the option to develop your own plugins and custom scripting it is currently becoming a tool of choice for a lot of companies.
  • Ansible / Puppet/ Chef
    • Ansible: Ansible is an open-source automation engine that automates cloud provisioning, configuration management, and application deployment.
    • Puppet: Puppet is, among other things, an open-source software configuration management tool for central configuration management of large IT deployments.
    • Chef: Chef is, among other things, a configuration management tool written in Ruby and Erlang for central configuration management of large IT deployments.
    • Without doing honor to the individual solutions we name them as one item in this blogpost. Each solution has specific additional use cases and benefits; however, in general the main use for all these solutions is to support the automatic deployment (installation) of operating systems and applications, as well as to manage configuration over large numbers of systems in a centralized manner.
  • Oracle PaaS Service Manager Command Line Interface
    • The full CLI interface to the Oracle Cloud PaaS offerings, which provides the option to fully automate the Oracle PaaS services.
  • Bash / Python
    • Even with all the products and plugins, in many cases a number of things desired in an end-2-end automation are so specific that they need to be scripted. For this a wide range of programming languages is available, where Python and the Linux shell scripting language Bash have a strong foothold compared to a lot of other popular languages.

Defining your goal, selecting the tools and ensuring that you are able to make the best possible use of the cloud by adopting a full end-2-end software defined cloud will ensure you can benefit optimally from the options current technology provides you.

Sunday, March 05, 2017

Oracle Linux - perf - error while loading shared libraries: libdw.so.1

When using a standard Oracle Linux template based installation on the Oracle Public Cloud and you try to start the perf command you will be hit by an error. The reason for this is that the perf command is part of the deployment, however in a broken form. The libdw.so.1 library is missing, which is needed to start perf. For this reason we have to ensure that libdw.so.1 is available on the system.

libdw.so.1 is part of the elfutils lib, meaning you will have to install elfutils with yum. elfutils is a collection of utilities and libraries to read, create and modify ELF binary files, find and handle DWARF debug data, symbols, thread state and stacktraces for processes and core files on GNU/Linux.

Executable and Linkable Format (ELF, formerly called Extensible Linking Format) is a common standard file format for executables, object code, shared libraries, and core dumps. First published in the System V Release 4 (SVR4) Application Binary Interface (ABI) specification, and later in the Tool Interface Standard, it was quickly accepted among different vendors of Unix systems. In 1999 it was chosen as the standard binary file format for Unix and Unix-like systems on x86 by the 86open project.

In effect, prior to fixing it, the issue you will see is the following:

[opc@jenkins-dev 1]$ perf
/usr/libexec/perf.3.8.13-118.14.2.el6uek.x86_64: error while loading shared libraries: libdw.so.1: cannot open shared object file: No such file or directory
[opc@jenkins-dev 1]$

To install the needed package you can make use of the standard Oracle Linux YUM repository and execute the below command:

yum -y install elfutils

Now you can check that the needed file is present on the system as shown below:

[root@jenkins-dev ~]# ls -la /usr/lib64/libdw.so.1
lrwxrwxrwx 1 root root 14 Mar  5 11:04 /usr/lib64/libdw.so.1 -> libdw-0.164.so
[root@jenkins-dev ~]#

This also means that if you want to start perf you will no longer be facing an issue and you will have the full capability of perf when needed:

[root@jenkins-dev ~]# perf

 usage: perf [--version] [--help] COMMAND [ARGS]

 The most commonly used perf commands are:
   annotate        Read perf.data (created by perf record) and display annotated code
   archive         Create archive with object files with build-ids found in perf.data file
   bench           General framework for benchmark suites
   buildid-cache   Manage build-id cache.
   buildid-list    List the buildids in a perf.data file
   diff            Read two perf.data files and display the differential profile
   evlist          List the event names in a perf.data file
   inject          Filter to augment the events stream with additional information
   kmem            Tool to trace/measure kernel memory(slab) properties
   kvm             Tool to trace/measure kvm guest os
   list            List all symbolic event types
   lock            Analyze lock events
   record          Run a command and record its profile into perf.data
   report          Read perf.data (created by perf record) and display the profile
   sched           Tool to trace/measure scheduler properties (latencies)
   script          Read perf.data (created by perf record) and display trace output
   stat            Run a command and gather performance counter statistics
   test            Runs sanity tests.
   timechart       Tool to visualize total system behavior during a workload
   top             System profiling tool.
   trace           strace inspired tool
   probe           Define new dynamic tracepoints

 See 'perf help COMMAND' for more information on a specific command.

[root@jenkins-dev ~]#

Oracle Linux - prevent sed errors when replacing URL strings

When scripting in bash under Oracle Linux and in need to search and replace strings in a text file, the sed command is where most people turn to. The reason for this is that sed is a stream editor for filtering and transforming text, which makes it ideal for this purpose.

I recently started developing a full end to end automation and integration for supporting projects within our company to work with the Oracle public cloud. One of the options to do automation with the Oracle cloud is using Terraform. Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. Which means, Terraform helps you to make full use of infrastructure as code when working with the Oracle Cloud.

One of the simple bugs I encountered in my code was a "strange" error somewhere in the functions developed to create a Terraform plan. In effect the Terraform base plan we developed was a plan without any specifics. One of the specifics needed was the API endpoint of the Oracle Public Cloud, which needed to be changed from a placeholder into the real value provided by a Jenkins build job.

The initial unit testing was without any issue while using random values, however, every time a valid URL format was used the code would break and the Jenkins build responsible for building the Terraform plan for the Oracle cloud would end up as a broken build.

The error message received was the following:

sed: -e expression #1, char 26: unknown option to `s'

The reason for this was the original construction of the sed command used in the code. Originally we used the below sed command to replace the ##OPC_ENDPOINT## placeholder with the actual API endpoint for the Oracle Public Cloud.

sed -i -e "s/##OPC_ENDPOINT##/$endpoint/g" terraformplan.tf

Due to the way we use / as the delimiter in this command, we have an issue if we populate $endpoint with a URL which also contains a / character. The fix is rather simple, once you know it: if you use sed to work with URLs you should use another delimiter, such as a comma, instead of /. Meaning, your code should look like the one below to do a valid replace with sed.

sed -i -e "s,##OPC_ENDPOINT##,$endpoint,g" terraformplan.tf
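
You can verify the difference quickly on the command line; the endpoint value below is a dummy URL used only for illustration:

endpoint="https://compute.acme.oraclecloud.com/rest/endpoint"

# breaks: the / characters in $endpoint collide with the s/../../ delimiter
echo "##OPC_ENDPOINT##" | sed -e "s/##OPC_ENDPOINT##/$endpoint/g"

# works: the comma delimiter does not occur in the URL
echo "##OPC_ENDPOINT##" | sed -e "s,##OPC_ENDPOINT##,$endpoint,g"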

Wednesday, March 01, 2017

Oracle Linux - Install Google golang

Go (often referred to as golang) is a free and open source programming language created at Google. Even though Go is not the most popular programming language around at this moment (sorry for all the golang people) there are still a lot of opensource projects that depend on Go. The installation of Go is relatively simple, however different from what the average Oracle Linux user, used to doing everything with the yum command, might expect.

If you want to install golang you will have to download the .tar.gz file and "install" it manually. The following steps are needed to get golang on your Oracle Linux machine:

Step 1
Download the file from the golang website

[root@jenkins-dev tmp]# curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 85.6M  100 85.6M    0     0  7974k      0  0:00:10  0:00:10 --:--:-- 10.1M
[root@jenkins-dev tmp]#

Step 2
Execute a checksum and verify the result with what is mentioned on the golang download site.

[root@jenkins-dev tmp]# sha256sum go1.8.linux-amd64.tar.gz
53ab94104ee3923e228a2cb2116e5e462ad3ebaeea06ff04463479d7f12d27ca  go1.8.linux-amd64.tar.gz
[root@jenkins-dev tmp]#

Step 3
Unpack the file into /usr/local

[root@jenkins-dev tmp]# tar -C /usr/local/ -xzf go1.8.linux-amd64.tar.gz

Step 4
Verify that go is in the right location

[root@jenkins-dev tmp]# ls -la /usr/local/go
total 168
drwxr-xr-x  11 root root  4096 Feb 16 14:29 .
drwxr-xr-x. 13 root root  4096 Mar  1 14:47 ..
drwxr-xr-x   2 root root  4096 Feb 16 14:27 api
-rw-r--r--   1 root root 33243 Feb 16 14:27 AUTHORS
drwxr-xr-x   2 root root  4096 Feb 16 14:29 bin
drwxr-xr-x   4 root root  4096 Feb 16 14:29 blog
-rw-r--r--   1 root root  1366 Feb 16 14:27 CONTRIBUTING.md
-rw-r--r--   1 root root 45710 Feb 16 14:27 CONTRIBUTORS
drwxr-xr-x   8 root root  4096 Feb 16 14:27 doc
-rw-r--r--   1 root root  5686 Feb 16 14:27 favicon.ico
drwxr-xr-x   3 root root  4096 Feb 16 14:27 lib
-rw-r--r--   1 root root  1479 Feb 16 14:27 LICENSE
drwxr-xr-x  14 root root  4096 Feb 16 14:29 misc
-rw-r--r--   1 root root  1303 Feb 16 14:27 PATENTS
drwxr-xr-x   7 root root  4096 Feb 16 14:29 pkg
-rw-r--r--   1 root root  1399 Feb 16 14:27 README.md
-rw-r--r--   1 root root    26 Feb 16 14:27 robots.txt
drwxr-xr-x  46 root root  4096 Feb 16 14:27 src
drwxr-xr-x  17 root root 12288 Feb 16 14:27 test
-rw-r--r--   1 root root     5 Feb 16 14:27 VERSION
[root@jenkins-dev tmp]#

Step 5
Add golang to your $PATH variable to make it available system wide and check if you can use go

[root@jenkins-dev tmp]#
[root@jenkins-dev tmp]# go --version
-bash: go: command not found
[root@jenkins-dev tmp]#
[root@jenkins-dev tmp]# PATH=$PATH:/usr/local/go/bin
[root@jenkins-dev tmp]#
[root@jenkins-dev tmp]# go version
go version go1.8 linux/amd64
[root@jenkins-dev tmp]#
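
The PATH change above only lasts for the current session. To make it permanent for all users you could drop the export into a profile script, for example (one common approach, not the only option):

echo 'export PATH=$PATH:/usr/local/go/bin' > /etc/profile.d/golang.sh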

This in effect would ensure that you now have the option to use Golang on your Oracle Linux system.