Monday, July 24, 2017

Oracle code - Jenkins check if file is present in workspace

When using Jenkins to automate parts of your build and deployment work in a CI/CD manner, you want to include certain failsafes. A common requirement is to check whether a certain file is present in your Jenkins workspace. In our example, we pull code from a GitLab repository to build a Maven based project. One of the first things we want to ensure is that the pom.xml file is present. If the pom.xml file is missing we know that the build will fail and we will never get to the point where we can build the required .jar file for our project.

To check if a file is present you can use the below example:

if (fileExists('pom.xml')) {
    echo 'Yes'
} else {
    echo 'No'
}

As you can see this is a fairly straightforward check that verifies whether pom.xml is present. If it is not present it will print "No", if it is present it will print "Yes". In a real-world scenario you want to take some action on this instead of just printing that the file is not present; for example, you may want to abort the build. The below example could be used to do so:

    currentBuild.result = 'ABORTED'
    error('pom.xml file has NOT been located')

The above example code will abort the Jenkins job and will give the error that the pom.xml file has not been found. The more complete example is shown below:

if (fileExists('pom.xml')) {
    echo 'Yes'
} else {
    currentBuild.result = 'ABORTED'
    error('pom.xml file has NOT been located')
}
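
If your job runs its build steps through a plain shell script instead of pipeline code, a similar guard can be expressed directly in bash; the below is a minimal sketch and is not tied to any specific Jenkins setup:

#!/bin/bash
# fail the build early when pom.xml is missing from the current workspace directory
if [ ! -f pom.xml ]; then
  echo "pom.xml file has NOT been located"
  exit 1
fi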

Ensuring that you have checks like this in place will make the outcome of Jenkins more predictable and can save you a lot of issues at a later stage. In reality, a large part of our code in Jenkins is often there to make sure everything is in place and is doing what it is expected to do. Checking and error handling is a big part of automation.

Sunday, July 23, 2017

Oracle Code - Jenkins failed to build maven project

The first time I tried to build an Oracle Java project with Maven it resulted in an error. Which is not surprising; whenever you try something for the first time, the chances that it will not work are relatively high. In my case I intended to build a REST API written with Spring and compile it with Maven in Jenkins. The steps Jenkins should undertake were: get the code from my local GitLab repository and build the code as I would do in a normal situation. The code I used is exactly the same code as I have shared on GitHub for your reference.

The main error I received when starting the actual build with Maven was the one shown below:

[ERROR] No goals have been specified for this build. You must specify a valid
lifecycle phase or a goal in the format <plugin-prefix>:<goal> or
<plugin-group-id>:<plugin-artifact-id>[:<plugin-version>]:<goal>. Available lifecycle phases are:
validate, initialize, generate-sources, process-sources, generate-resources,
process-resources, compile, process-classes, generate-test-sources,
process-test-sources, generate-test-resources, process-test-resources, test-compile,
process-test-classes, test, prepare-package, package, pre-integration-test,
integration-test, post-integration-test, verify, install, deploy, pre-clean, clean,
post-clean, pre-site, site, post-site, site-deploy. -> [Help 1]

If you look at my GitHub page you can already see a hint towards the solution. In the documentation I stated the following command for creating the actual .jar file (the result I wanted from Jenkins):

mvn clean package

If we look at how the project was defined in Jenkins, I had left the "goals" section empty. Adding package to the goals section resolved the issue, and the next time I started the job I was presented with a successfully completed job and a fully compiled .jar file capable of being executed and serving me the needed REST API.

As you can see from the error message, a lot of other goals can also be specified.




Oracle Linux - Configure Jenkins for Maven

When you work a lot with Oracle Java and you have the ambition to develop your Java applications with Maven in a manner where you can automate a lot of the steps by leveraging Jenkins, you will have to configure Jenkins accordingly. The use of Jenkins in combination with Maven can speed up your continuous integration and continuous deployment models enormously.

I already posted an article on how to install Jenkins on Oracle Linux on this weblog; you can find the original post here. Originally that post came from a project where we did not use Maven, we only used Jenkins for some other tasks. However, now the need arises to use Maven as well.

Configuring Maven under Jenkins is relatively easy: you can use the "Global Tool Configuration" menu in Jenkins to make the needed configuration. It is advisable not to have Jenkins perform the installation, but to install Maven manually and after that configure it in Jenkins.

The common error
The common error when configuring Maven is that you tend to define the location of the mvn binary as the Maven home the first time you look at this. In our case mvn was located in /usr/bin on our Oracle Linux instance. However, stating /usr/bin as the Maven home resulted in the error: /usr/bin doesn’t look like a Maven directory

Finding the Maven home
As we just found out that /usr/bin is not the Maven home, we have to find the correct Maven home. It can be found in the output of the mvn --version command as shown below:

[root@jenkins /]#
[root@jenkins /]# mvn --version
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00)
Maven home: /usr/share/apache-maven
Java version: 1.8.0_141, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-2.b16.el6_9.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.1.12-61.1.33.el6uek.x86_64", arch: "amd64", family: "unix"
[root@jenkins /]#
[root@jenkins /]#

As you can see the Maven home is stated in the output. Providing the Maven home /usr/share/apache-maven to Jenkins will ensure you have configured Maven correctly.
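
If you want to script this lookup, for example when automating the Jenkins configuration, a simple grep over the same output gives you just that line; a minimal sketch:

# print only the line that states the Maven home
mvn --version | grep "Maven home"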

Saturday, July 22, 2017

Oracle Linux - changing the amount of memory of your Vagrant box

Vagrant is an open-source software product built by HashiCorp for building and maintaining portable virtual development environments. The core idea behind its creation lies in the fact that environment maintenance becomes increasingly difficult in a large project with multiple technical stacks. Vagrant manages all the necessary configurations for the developers in order to avoid unnecessary maintenance and setup time, and increases development productivity. Vagrant is written in the Ruby language, but its ecosystem supports development in almost all major languages.

I use Vagrant a lot, really a lot, and especially in combination with Oracle Linux. Oracle ships a number of default vagrant boxes from within oracle.com which speeds up the development, test and experimental way of working a lot. Without having the need to manually maintain local clones of Oracle virtualbox images you can now use vagrant to extremely fast run Oracle Linux instances.  A short guide on how to get started with vagrant can be found in this specific blogpost on my blog.

When you run a box for a short time you might not be that interested in memory tuning as long as it works. However, if you need to run multiple boxes for a longer period of time as part of a wider development ecosystem, you do want to ensure that all the boxes fit in your development system and that you still have some free memory left to do actual work.

A default box takes a relatively large part of the memory of your host. Tuning this memory to what it actually should be is relatively easy. In our example the Oracle Linux 6.9 box starts by default with 2048MB of memory. We wanted to trim this down to 1024MB. To state the exact amount of memory you need to configure some parts of your Vagrantfile config file.

The below example, which we added to the Vagrantfile, defines the amount of memory that will be given to the box:

config.vm.provider "virtualbox" do |vb|
  vb.memory = "1024"
end

This ensures that the box will be given only 1024MB. Additionally you can pass other configuration; for example, if you want to provide only one CPU you could add the below line right after the vb.memory line to do so.

vb.cpus = 1
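
Note that changes to the Vagrantfile do not affect a box that is already running; restarting the box through Vagrant picks them up. A minimal sketch, run from the directory that holds the Vagrantfile:

# restart the box so the new memory (and cpu) settings are applied
vagrant reload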

Understanding and using the Vagrantfile configuration options will help you in building and tuning your boxes in the most ideal way to have the best development environment you can imagine on your local machine.

Friday, July 21, 2017

Oracle Linux - Change hostname for Vagrant host

Vagrant is an open-source software product built by HashiCorp for building and maintaining portable virtual development environments. The core idea behind its creation lies in the fact that environment maintenance becomes increasingly difficult in a large project with multiple technical stacks. Vagrant manages all the necessary configurations for the developers in order to avoid unnecessary maintenance and setup time, and increases development productivity. Vagrant is written in the Ruby language, but its ecosystem supports development in almost all major languages.

I use Vagrant a lot, really a lot, and especially in combination with Oracle Linux. Oracle ships a number of default vagrant boxes from within oracle.com which speeds up the development, test and experimental way of working a lot. Without having the need to manually maintain local clones of Oracle virtualbox images you can now use vagrant to extremely fast run Oracle Linux instances.  A short guide on how to get started with vagrant can be found in this specific blogpost on my blog.

When you do a default start of a Vagrant box, in our example an Oracle Linux 6.9 instance, you will see that the hostname is not explicitly set. In most cases this is not an issue; however, in some cases the hostname is a vital part of how your software works. The most common approach is to change the hostname directly within the Oracle Linux operating system. However, a better way of doing things when working with Vagrant is to edit the Vagrantfile config file, which can be found in the directory where you did a "vagrant init".

Change hostname in Vagrantfile
When using Vagrant you should use the power of Vagrant. This means, if you want your machine to have a specific hostname, you can do so by changing the Vagrantfile instead of doing it on the Oracle Linux operating system within the box when it is running. If you read the Vagrant documentation you will find the following on this subject:

"config.vm.hostname - The hostname the machine should have. Defaults to nil. If nil, Vagrant will not manage the hostname. If set to a string, the hostname will be set on boot. "

If we take, for example, a running box which we initiated without setting a hostname in the Vagrantfile, you will notice the hostname is localhost.

[vagrant@localhost ~]$ 
[vagrant@localhost ~]$ uname -a
Linux localhost 4.1.12-61.1.33.el6uek.x86_64 #2 SMP Thu Mar 30 18:39:45 PDT 2017 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@localhost ~]$ 
[vagrant@localhost ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

[vagrant@localhost ~]$ 

If we want to have a box named somehost.example.com we could ensure we have the below line in our Vagrantfile config file when we start it:

config.vm.hostname = "somehost.example.com"

When you log in to the Oracle Linux operating system within the box and check the same things as in the above example, you will be able to see the difference:

[vagrant@somehost ~]$ 
[vagrant@somehost ~]$ uname -a
Linux somehost.example.com 4.1.12-61.1.33.el6uek.x86_64 #2 SMP Thu Mar 30 18:39:45 PDT 2017 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@somehost ~]$ 
[vagrant@somehost ~]$ cat /etc/hosts
127.0.0.1 somehost.example.com somehost
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
[vagrant@somehost ~]$ 

As you can see, changing the Vagrantfile will change the hostname within the box. Instead of changing it manually you should use the power of Vagrant to state the correct hostname in your Oracle Linux instance when using Vagrant.

Oracle Linux - using vagrant boxes with a static IP

Vagrant is an open-source software product built by HashiCorp for building and maintaining portable virtual development environments. The core idea behind its creation lies in the fact that environment maintenance becomes increasingly difficult in a large project with multiple technical stacks. Vagrant manages all the necessary configurations for the developers in order to avoid unnecessary maintenance and setup time, and increases development productivity. Vagrant is written in the Ruby language, but its ecosystem supports development in almost all major languages.

I use Vagrant a lot, really a lot, and especially in combination with Oracle Linux. Oracle ships a number of default vagrant boxes from within oracle.com which speeds up the development, test and experimental way of working a lot. Without having the need to manually maintain local clones of Oracle virtualbox images you can now use vagrant to extremely fast run Oracle Linux instances.  A short guide on how to get started with vagrant can be found in this specific blogpost on my blog.

The main confusion on ports and IP addresses
When I talk to people about Vagrant and running Oracle Linux, or any other box, the main confusion comes from the networking side of things. In general the first confusion is how to access ports running in the box from your local machine. In effect Vagrant maps ports available on the operating system in your box to a specified port on localhost; that is, when you configure this in your Vagrantfile configuration file (which I will dedicate another post to).

The second confusion comes when people need to communicate between boxes. For example, if you have one box running an Oracle database while a second box runs your application server, you would like to be able to establish connectivity between the two of them.

Giving each box an external IP
The solution to this is to provide each Vagrant box running your Oracle Linux instance with its own IP address. A hint is already given in the Vagrantfile configuration file which resides in the directory where you issued the "vagrant init" command. If you read the file you will find a comment above a commented configuration line stating: "Create a private network, which allows host-only access to the machine using a specific IP."

In my example I wanted to give a specific box a specific IP address in a static manner; in this case the address needed to be 192.168.56.3 to be precise. This IP would become part of a private network which is only accessible on my MacBook and can be accessed from my MacBook directly or from any other Vagrant box running on it. While you can choose any IP you would like, you should use an IP from the reserved private address space. These IPs are guaranteed to never be publicly routable, and most routers actually block traffic going to them from the outside world.

To ensure my specific box would always run on 192.168.56.3 I had to uncomment the line and ensure that it would read as the line below:

 config.vm.network "private_network", ip: "192.168.56.3"

This binds the box via config.vm.network to a private network with the specific IP we needed. If we now ping the box on this address from the host it will respond. Also, if I go into another box, for example a box with 192.168.56.2, and try to ping 192.168.56.3, it will respond. Meaning, issue resolved: I now have two boxes that can freely communicate with each other without any issue.
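
For example, a quick ping from the host, or from the box running on 192.168.56.2, confirms the connectivity; a minimal sketch:

# send three echo requests to the box on the private network
ping -c 3 192.168.56.3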

Showing it in Oracle Linux
Now, if we have a look at the Oracle Linux operating system within the running box we can see we have a new interface for this specific address, as shown below:

eth1      Link encap:Ethernet  HWaddr 08:00:27:3D:A5:49  
          inet addr:192.168.56.3  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe3d:a549/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:86 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:7328 (7.1 KiB)  TX bytes:1482 (1.4 KiB)

If you want to know how the Oracle Linux operating system gets this IP address, and whether this is done with some "hidden" DHCP server that binds to a specific virtual MAC address, you can check the configuration by looking at the /etc/sysconfig/network-scripts/ifcfg-eth1 config file within the Oracle Linux operating system that runs in the Vagrant box. The content of the file is shown below:

#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.56.3
NETMASK=255.255.255.0
DEVICE=eth1
PEERDNS=no
#VAGRANT-END

As you can see the file is generated by vagrant itself and no "hidden" DHCP trick is required. To push the generated file Vagrant is using parts of its own provisioning solution, which can be used for a lot more interesting things. 

Sunday, July 16, 2017

Oracle Linux - private build your docker images

In our examples we are running a Docker engine on an Oracle Linux host which we use to explain how you can work with Docker and containers. In this example post we have the need to privately build our Docker images and containers. This request is not that uncommon; as Docker is used widely in enterprises, the need for a safe way of building your internal Docker images loaded with your own code deployments is seen often. In our example case we will use github.com as the source repository as well as a local file, and we will depend on certain images available on hub.docker.com; however, when you deploy a fully private environment you should use your own Git implementation and your own Docker registry.

Building with GitHub
When we build based upon a Dockerfile in GitHub we have to provide "docker build" with the location of the Dockerfile. Alternatively, if your Dockerfile is in the root of your project you can call it without an explicit reference to the Dockerfile. In the below example we explicitly reference the Dockerfile, which is not always the best way of doing it.

We use the below command in our example:

docker build --no-cache=true --rm -t databases/mongodb:latest https://raw.githubusercontent.com/louwersj/docker_mongodb_ol6/master/mongodb_3.4/OL6.9/Dockerfile

This will result in the download of the Dockerfile from GitHub and the start of the build; as you can see we use the raw URL to ensure we get the raw file. Additionally we use a couple of flags for the build:

--no-cache=true
This is used to ensure we do not use any cache. In some cases it can be useful to use the cache of previous builds; in cases where you want to be a hundred percent sure you use the latest of everything, you should prevent the use of the cache with this flag.

--rm
This flag will ensure that all temporary data is removed after the build. If not you will find a lot of directories under /tmp which hold old build data. To ensure the system stays clean you should include this flag during the build operation.

-t databases/mongodb:latest
This is used to provide the right tag to the newly built image. As you can see we indicate that this is part of the databases set and is a MongoDB image tagged as latest.

As soon as the build has completed you can check the list of images within Docker to see if it is available, this is shown in the example below:

[root@localhost tmp]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
databases/mongodb   latest              185f6f594f9e        About a minute ago   251.4 MB
[root@localhost tmp]# 

Testing the new build 
Now that we have built a new image we would like to test it. This can be done in the same way that you would normally create a container from an image.

[root@localhost tmp]# docker run --name mongodb_node_0 -d -p 27017 databases/mongodb:latest
154b9b82e43186411c614ebdc45cdd1c7cc98ec8c6b7af525474f880a8356d52
[root@localhost tmp]# 

If we now check the running containers we will find the newly created container with the name mongodb_node_0 up and running:

[root@localhost tmp]# docker ps
CONTAINER ID        IMAGE                      COMMAND             CREATED             STATUS              PORTS                      NAMES
154b9b82e431        databases/mongodb:latest   "/usr/bin/mongod"   34 seconds ago      Up 34 seconds       0.0.0.0:32771->27017/tcp   mongodb_node_0
[root@localhost tmp]# 

As you can see from the above example, we now have the container running. To be extra sure we can take a look inside by using the exec command to start a bash session:

[root@localhost tmp]#
[root@localhost tmp]# docker exec -it 154b9b82e431 /bin/bash
[root@154b9b82e431 /]# ps -ef|grep mongo
root         1     0  0 19:17 ?        00:00:01 /usr/bin/mongod
root        35    24  0 19:19 ?        00:00:00 grep mongo
[root@154b9b82e431 /]# 
[root@154b9b82e431 /]# exit
exit
[root@localhost tmp]#
[root@localhost tmp]# 

Building with a local file
For a local file based build we have placed the Dockerfile in /tmp/build_test/. We can build a Docker image in the same manner as we did for the GitHub example; however, now we have to state the location of the Dockerfile on the local file system. Make sure you state the directory and not the file itself, to prevent an error as shown below:

[root@localhost /]# docker build --no-cache=true --rm -t localbuild/mongodb:latest /tmp/build_test/Dockerfile
unable to prepare context: context must be a directory: /tmp/build_test/Dockerfile
[root@localhost /]#

As you can see, pointing at the file will give an error; if we point at the directory the build will happen without any issues:

[root@localhost /]# docker build --no-cache=true --rm -t localbuild/mongodb:latest /tmp/build_test/
Sending build context to Docker daemon 4.096 kB
Step 1 : FROM oraclelinux:6.9
 ---> 7a4a8c404142
Step 2 : MAINTAINER Johan Louwers 
 ---> Running in e7df0ce9533b
 ---> 6bd403a6a188
Removing intermediate container e7df0ce9533b
Step 3 : LABEL maintainer "louwersj@gmail.com"
 ---> Running in 5dbe161c94c3
 ---> c1ccf03f5aaa
Removing intermediate container 5dbe161c94c3
Step 4 : ARG VERSION
 ---> Running in 70f75e234ec3
 ---> 8789acea412c
Removing intermediate container 70f75e234ec3
Step 5 : ARG VCS_URL
 ---> Running in a6fcb917dab0
 ---> 5ec17fc93bd5
Removing intermediate container a6fcb917dab0
Step 6 : ARG VCS_REF
 ---> Running in 8581b2273afb
 ---> f38bd895e43e
Removing intermediate container 8581b2273afb
Step 7 : ARG BUILD_DATE
 ---> Running in 3b10331e2f96
.......................ETC ETC ETC...........

A check on the images available right now will show we now have a new image named localbuild/mongodb:latest as shown below:

[root@localhost /]# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED              SIZE
localbuild/mongodb   latest              ac7816da045f        About a minute ago   251.4 MB
[root@localhost /]# 

Using a local file (which can be pulled from a local Git repository) can be very valuable, especially if you need to mix the build of your image with artifacts from other builds, for example if you want to include the .war files from a Maven build to provide a microservice from within a container. In case you want to build very specific containers that contain specific business functionality, the local file option is a possible route.

Saturday, July 15, 2017

Oracle Linux - Docker unable to delete image is referenced in one or more repositories

In our examples we are running a Docker engine on an Oracle Linux host which we use to explain how you can work with Docker and containers. In this example post we have the need to remove a number of unused images, however we are confronted with a reference dependency preventing the deletion of the unused image. The reason this issue occurred in this case is that we have two images present which are actually the same image, only tagged in a different manner.

The reason this happens is the way the Oracle Linux images are tagged when they are created and placed on the Docker Hub. We have one image which is tagged as 6.9 (the explicit version number) and one tagged as 6, which is a general reference to the highest version in the 6 series (which is 6.9). In effect the images 6.9 and 6 are exactly the same and are treated in the same manner.

Handling the version numbers as both 6 and 6.9 is convenient, especially in cases where a 6.10 version could be created (which is not the case for Oracle Linux 6). People know that if they pull 6 they will always have the latest version, and if they want a specific version they can pull 6.x (in our case 6.9).

Now, we have pulled both 6.9 and 6 to our Docker engine. During a cleanup we would like to remove both of them and we are faced with the below issue:

[root@localhost tmp]#
[root@localhost tmp]# docker images
REPOSITORY          TAG         IMAGE ID            CREATED             SIZE
oraclelinux         6           7a4a8c404142        3 weeks ago         170.9 MB
oraclelinux         6.9         7a4a8c404142        3 weeks ago         170.9 MB
[root@localhost tmp]#
[root@localhost tmp]#
[root@localhost tmp]# docker rmi 7a4a8c404142
Error response from daemon: conflict: unable to delete 7a4a8c404142 (must be forced) - image is referenced in one or more repositories
[root@localhost tmp]#
[root@localhost tmp]# 

As you can see the Docker image IDs are the same; this is what is causing the issue, as both tags reference the same underlying image. One way to resolve the issue is to force the removal of the image by using the -f flag in the command:

[root@localhost tmp]#
[root@localhost tmp]# docker rmi -f 7a4a8c404142
Untagged: oraclelinux:6
Untagged: oraclelinux:6.9
Untagged: oraclelinux@sha256:3501cce71958dab7f0486cd42753780cc2ff987e3f92bd084c95a53d52f4f1dc
Deleted: sha256:7a4a8c40414201cb671618dd99e8d327d4da4eba9d7991a86b191f4823925969
Deleted: sha256:d14f39f83be01eacab2aea7400a816a42ef7b8cdaa01beb8ff7102850248956d
[root@localhost tmp]#
[root@localhost tmp]# 

If you now check the list of available images you will notice that 7a4a8c404142 is gone; in fact, both the 6 and 6.9 tags that referenced 7a4a8c404142 are gone.
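
As an alternative to forcing the removal, you can also remove the image by naming each tag explicitly; removing by repository:tag only untags that reference, and the underlying image is deleted once the last tag pointing to it is removed. A minimal sketch:

# untag both references; the image data itself goes with the last tag
docker rmi oraclelinux:6 oraclelinux:6.9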

Tuesday, July 11, 2017

Oracle Linux - remove containers from Docker

In our examples we are running a Docker engine on an Oracle Linux host which we use to explain how you can work with Docker and containers. In this example post we have the need to remove a number of stopped containers which are still present on our Docker engine. One way of doing application updates in a container manner is to start containers with the newer version of your application, add them to the load balancing mechanism of your footprint once they are up, and exclude the old version. As soon as the new version is receiving the requests you can stop the old containers. To ensure you can do a quick rollback it can be useful to keep the old containers around for some time. Other options are rolling upgrades, in which you do a partial update and only 50% of your containers are updated at a time, as well as other update strategies which become extremely easy when working with containers.

In this example we want to remove a number of containers that are no longer running. We identify the containers by using the "docker ps" command in combination with a grep, as shown below:

[root@localhost log]# docker ps -a|grep Exited
21600ca72b4e        oracle/nosql        "java -jar lib/kvstor"   12 hours ago        Exited (143) 12 hours ago                                  kvlite3
388910430cee        oracle/nosql        "java -jar lib/kvstor"   12 hours ago        Exited (143) 12 hours ago                                  kvlite2
d6c2e4d9431a        oracle/nosql        "java -jar lib/kvstor"   12 hours ago        Exited (143) 12 hours ago                                  kvlite
[root@localhost log]# 

Now we know which containers we want to remove; not only stop them, but also remove them. This can be done with docker rm. In the below example we remove a single container with the "docker rm" command:

[root@localhost log]# docker rm 21600ca72b4e
21600ca72b4e
[root@localhost log]#

If we now check the number of containers with the state Exited we will notice that only two are left and we have removed one from our Docker engine.

[root@localhost log]# docker ps -a|grep Exited
388910430cee        oracle/nosql        "java -jar lib/kvstor"   12 hours ago        Exited (143) 12 hours ago                                  kvlite2
d6c2e4d9431a        oracle/nosql        "java -jar lib/kvstor"   12 hours ago        Exited (143) 12 hours ago                                  kvlite
[root@localhost log]#

As shown in other examples, we can provide the docker command with multiple container IDs to take the same action on each of them. The same is the case for the rm command: we can provide a number of container IDs and they will all be removed in a single action. This is shown in the example below:

[root@localhost log]# docker rm 388910430cee d6c2e4d9431a
388910430cee
d6c2e4d9431a
[root@localhost log]#

In our case this results in no output when we check for containers with the state Exited, as shown in the example below:

[root@localhost log]# docker ps -a|grep Exited
[root@localhost log]#
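
If there are many stopped containers you do not have to type every ID by hand; "docker ps" can emit only the IDs of exited containers and feed them straight into "docker rm". A minimal sketch, to be run only when you are sure every exited container can go:

# remove every container whose status is Exited (-q prints only the IDs)
docker rm $(docker ps -a -q -f status=exited)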

Keeping containers around on your Docker engine for some time when you do an application upgrade can be very good practice in case you need to do an extremely fast rollback when things go wrong. However, keeping them around during an upgrade is no excuse for not doing housekeeping and keeping your IT footprint clean. This means that at some point you need a cleanup step in your rollout and deployment plan. The above example shows how to remove a container which is stopped. Other posts on this blog explain how to stop and start containers when needed. Keeping options like this in mind when creating a deployment and upgrade strategy can be vital to ensure a safe application upgrade with the option to perform a rollback extremely fast.

Oracle Linux - start a stopped docker container

In our examples we are running a Docker engine on an Oracle Linux host which we use to explain how you can work with Docker and containers. In this example post we have the need to start a stopped container again. Stopping a container will not remove the container, which means we can start it again if we need to do so. Having the option to stop containers and start them again is great, especially when you do a rollout of a new version of an application landscape. Building a rollout strategy with a very fast way of rolling back to the original version can be supported by exactly this: the option to stop and start containers. In case your rollout is done correctly you can decide to remove the containers completely; keeping them around until you decide your rollout is fully complete can be a good practice.

When you execute a standard "docker ps" command you will only get the running containers; in our case we want to see all containers regardless of their state. For this we need to include the -a flag with the docker ps command, as shown in the example below:

[root@localhost log]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                        PORTS               NAMES
c1db637d5612        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Exited (143) 30 minutes ago                       nosql_node_3
06fc415798e3        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Exited (143) 33 minutes ago                       nosql_node_2
bf2d698ebcb3        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Exited (143) 30 minutes ago                       nosql_node_1
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Exited (130) 6 minutes ago                        nosql_node_0
[root@localhost log]# 

In case we want to start a container again, in our example node 0, we have to use the start command in combination with the container ID. This is shown in the example below:

[root@localhost log]# docker start 0a52831c65e8
0a52831c65e8
[root@localhost log]#

If we now execute a docker ps command (without the -a flag) we will see a list of running containers and we will notice that node 0 of our Oracle NoSQL cluster is back online again and ready to serve requests.

[root@localhost log]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up About a minute   5000-5001/tcp, 5010-5020/tcp   nosql_node_0
[root@localhost log]#

As with most commands you can provide multiple container IDs. This means that if we want to start the remaining nodes we can do that with a single command, as shown below:

[root@localhost log]# docker start c1db637d5612 06fc415798e3 bf2d698ebcb3
c1db637d5612
06fc415798e3
bf2d698ebcb3
[root@localhost log]#

Checking what is running will show that all four nodes of our Oracle NoSQL cluster are running on our docker engine.

[root@localhost log]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
c1db637d5612        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 2 seconds        5000-5001/tcp, 5010-5020/tcp   nosql_node_3
06fc415798e3        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 2 seconds        5000-5001/tcp, 5010-5020/tcp   nosql_node_2
bf2d698ebcb3        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 2 seconds        5000-5001/tcp, 5010-5020/tcp   nosql_node_1
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up About a minute   5000-5001/tcp, 5010-5020/tcp   nosql_node_0
[root@localhost log]# 
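
The same filter trick shown earlier for removing exited containers works here as well; should you want to bring every stopped container back up in one go, something along the lines of the below sketch would do it:

# start every container currently in the Exited state
docker start $(docker ps -a -q -f status=exited)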

Oracle Linux - finding your docker container IP

In our examples we are running a Docker engine on an Oracle Linux host which we use to explain how you can work with Docker and containers. In this example post we have a single Oracle NoSQL container running on our Docker engine. When a Docker container is started in a default manner it will get an internal IP which is accessible within Docker by other containers. As part of your deployment model and scripting it is very likely that you want to know the IP address assigned to a newly started container without the need to go into the container. To get this kind of information about a container you can use the inspect command of the docker CLI. The inspect command provides you a JSON response containing a large set of information about a specific container.

The inspect command is used in combination with the container ID. This means we first have to get the container ID; one way of getting it is using the "docker ps" command as shown below:

[root@localhost etc]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 11 hours         5000-5001/tcp, 5010-5020/tcp   nosql_node_0
[root@localhost etc]# 

If we want to know more about the running container, identified with container ID 0a52831c65e8, we can execute the "docker inspect" command to retrieve the JSON response as shown in the example below:

[root@localhost etc]# docker inspect 0a52831c65e8
[
    {
        "Id": "0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f",
        "Created": "2017-07-10T19:43:49.205922395Z",
        "Path": "java",
        "Args": [
            "-jar",
            "lib/kvstore.jar",
            "kvlite",
            "-secure-config",
            "disable",
            "-root",
            "/kvroot"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 5364,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2017-07-10T19:43:49.636183576Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:247be918b211e6690ad33463336e502c260b1a35010102d93967bd49dc061e46",
        "ResolvConfPath": "/var/lib/docker/containers/0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f/hostname",
        "HostsPath": "/var/lib/docker/containers/0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f/hosts",
        "LogPath": "/var/lib/docker/containers/0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f/0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f-json.log",
        "Name": "/nosql_node_0",
        "RestartCount": 0,
        "Driver": "devicemapper",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Name": "devicemapper",
            "Data": {
                "DeviceId": "34",
                "DeviceName": "docker-251:1-1835143-986c026ad2d7e69cae96df9d46f1d15c23c88103f7dc7d7756a8be2f50f474ea",
                "DeviceSize": "10737418240"
            }
        },
        "Mounts": [
            {
                "Name": "a37d7c33e0d78922160c4b411f13350e9692dddf26cdea794a5dd6f266723175",
                "Source": "/var/lib/docker/volumes/a37d7c33e0d78922160c4b411f13350e9692dddf26cdea794a5dd6f266723175/_data",
                "Destination": "/kvroot",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "0a52831c65e8",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "5000/tcp": {},
                "5001/tcp": {},
                "5010/tcp": {},
                "5011/tcp": {},
                "5012/tcp": {},
                "5013/tcp": {},
                "5014/tcp": {},
                "5015/tcp": {},
                "5016/tcp": {},
                "5017/tcp": {},
                "5018/tcp": {},
                "5019/tcp": {},
                "5020/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "JAVA_HOME=/usr/lib/jvm/java-openjdk",
                "VERSION=4.3.11",
                "KVHOME=/kv-4.3.11",
                "PACKAGE=kv-ce",
                "EXTENSION=zip",
                "BASE_URL=http://download.oracle.com/otn-pub/otn_software/nosql-database/",
                "_JAVA_OPTIONS=-Djava.security.egd=file:/dev/./urandom"
            ],
            "Cmd": [
                "java",
                "-jar",
                "lib/kvstore.jar",
                "kvlite",
                "-secure-config",
                "disable",
                "-root",
                "/kvroot"
            ],
            "Image": "oracle/nosql",
            "Volumes": {
                "/kvroot": {}
            },
            "WorkingDir": "/kv-4.3.11",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "3f3505304a821da97a668ae622a09738cf8c88768e77f5c1995154f431461700",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "5000/tcp": null,
                "5001/tcp": null,
                "5010/tcp": null,
                "5011/tcp": null,
                "5012/tcp": null,
                "5013/tcp": null,
                "5014/tcp": null,
                "5015/tcp": null,
                "5016/tcp": null,
                "5017/tcp": null,
                "5018/tcp": null,
                "5019/tcp": null,
                "5020/tcp": null
            },
            "SandboxKey": "/var/run/docker/netns/3f3505304a82",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "28368b7c9058e300d08cc3a3453568cb903504ed33b4c61c6349963ad190160f",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "fc7e96b764adee7ae2a6f061b766c61ea82ec7754faa64f2483889fa0a5a3a5f",
                    "EndpointID": "28368b7c9058e300d08cc3a3453568cb903504ed33b4c61c6349963ad190160f",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02"
                }
            }
        }
    }
]
[root@localhost etc]#

As you can see above, the "docker inspect" command provides a very rich set of information which can be used for a variety of things. However, in our example case we only wanted to know the IP address which has been assigned to the container. This means we have to extract the IP from the JSON response. The below command will help you extract just that from the larger set of information:

[root@localhost etc]# docker inspect 0a52831c65e8 | grep IPAddress | cut -d '"' -f 4| sort -r | head -1
172.17.0.2
[root@localhost etc]#

The above example could be the starting point of a bash function in a wider script which allows you to simply call a function with the container ID as a variable and return the IP information you need. Knowing and understanding how to quickly extract the IP information from the inspect command can help you in building scripting around Docker.
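
A minimal sketch of such a helper function; it uses the --format flag of docker inspect, which renders a Go template and avoids the grep/cut parsing altogether (the function name is purely illustrative):

#!/bin/bash
# print the bridge network IP of the container whose ID or name is passed as the first argument
container_ip() {
  docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$1"
}

# example usage with the container ID from the example above
container_ip 0a52831c65e8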

Oracle Linux - stopping a docker container

In our examples we run the Docker engine on Oracle Linux and in this specific example we run Oracle NoSQL in a four node cluster setup. All four nodes are running on one single Docker engine with an Oracle Linux base image. Running Docker containers is great and a lot of benefits can be found in building a strategy which involves the use of Docker as the foundation of your IT footprint. Even though we are happy to see that we have started our four nodes and they are all running and performing the tasks they need to perform, at some point in time we might have the need to stop them. In a normal production situation you will use tooling to control what is running and what is not. However, in some cases you might need to stop a container manually.

Stopping containers manually is done based upon the container ID in combination with the docker CLI. The first step is to identify the container you want to stop and get its container ID. The easiest way to do so is to use the "docker ps" command, which will provide you a list of all containers active on your Docker engine. An example of our Docker engine in combination with the ps command is shown below:

[root@localhost etc]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
c1db637d5612        oracle/nosql        "java -jar lib/kvstor"   10 hours ago        Up 10 hours         5000-5001/tcp, 5010-5020/tcp   nosql_node_3
06fc415798e3        oracle/nosql        "java -jar lib/kvstor"   10 hours ago        Up 10 hours         5000-5001/tcp, 5010-5020/tcp   nosql_node_2
bf2d698ebcb3        oracle/nosql        "java -jar lib/kvstor"   10 hours ago        Up 10 hours         5000-5001/tcp, 5010-5020/tcp   nosql_node_1
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   10 hours ago        Up 10 hours         5000-5001/tcp, 5010-5020/tcp   nosql_node_0
[root@localhost etc]# 

In this example case we would like to stop one of the Oracle NoSQL nodes; to be more precise we want to stop nosql_node_2 using the Docker command line interface. For this we can use the "stop" command as shown in the example below.

[root@localhost etc]# docker stop 06fc415798e3
06fc415798e3
[root@localhost etc]#

If we now check the running containers again we will see that the container with ID 06fc415798e3 is no longer active.

[root@localhost etc]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
c1db637d5612        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 11 hours         5000-5001/tcp, 5010-5020/tcp   nosql_node_3
bf2d698ebcb3        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 11 hours         5000-5001/tcp, 5010-5020/tcp   nosql_node_1
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 11 hours         5000-5001/tcp, 5010-5020/tcp   nosql_node_0
[root@localhost etc]#

You can use the stop command on one single container, as shown in the example above, but you can also stop multiple containers at once by providing multiple container IDs. This means you do not need to execute a separate command per container, which can make your life easier when scripting a solution to stop multiple containers at once. In the below example we stop node 3 and node 1 of our Oracle NoSQL cluster with a single command:

[root@localhost etc]# docker stop bf2d698ebcb3 c1db637d5612
bf2d698ebcb3
c1db637d5612
[root@localhost etc]#

If we now check the running containers we will notice that only node 0 of the Oracle NoSQL cluster is active as a Docker container and the other containers have stopped.

[root@localhost etc]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 11 hours         5000-5001/tcp, 5010-5020/tcp   nosql_node_0
[root@localhost etc]#
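
Should you ever need to stop everything that is running on the engine in one go, the IDs reported by "docker ps" can be passed to "docker stop" in the same way; a sketch, to be used with care:

# stop every running container on this engine
docker stop $(docker ps -q)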

As stated, in a normal, more production-like setting you will most likely not use the docker CLI manually; you will use tooling around Docker for stopping, starting and managing your containers. Having stated that, knowing how to do things manually is important and should be known by everyone. When resolving issues it is vital to understand how to do things by hand.

Monday, July 10, 2017

Oracle Linux - removing image from Docker

When using Docker you want to have a basic set of images available to deploy application containers on your Docker engine. In some cases, due to lifecycle management, you come to a point where you no longer want to have certain images locally. For development and test environments it can be very useful to have some older versions available to do some testing on older images. However, most people tend to keep their production machines as clean as possible. This means that you have to clean some old things up, and cleaning up is a task that needs to be done with care.

One way of doing this is using the dangling filter, which provides a way to list the images you have that are dangling (untagged). An example is shown below:

$ docker images --filter "dangling=true"

REPOSITORY          TAG     IMAGE ID            CREATED             SIZE
<none>              <none>  8abc22fbb042        4 weeks ago         0 B
<none>              <none>  48e5f45168b9        4 weeks ago         2.489 MB
<none>              <none>  bf747efa0e2f        4 weeks ago         0 B
<none>              <none>  980fe10e5736        12 weeks ago        101.4 MB
<none>              <none>  dea752e4e117        12 weeks ago        101.4 MB
<none>              <none>  511136ea3c5a        8 months ago        0 B

You can use this in combination with the rmi command (remove image) as shown in the example below:

$ docker rmi $(docker images -f "dangling=true" -q)

8abc22fbb042
48e5f45168b9
bf747efa0e2f
980fe10e5736
dea752e4e117
511136ea3c5a

Even though the dangling option provides a good way of doing things, it can still be error prone. Using it to find dangling images is a good idea; using it to automatically remove the images might cause issues and is considered not the best option by a lot of people. The advice is to use the dangling option in combination with simply knowing what is on your Docker engine, and to initiate the remove command in a more controlled fashion.

If you want to remove a specific image you can use the rmi command. In the example below we remove an Oracle Linux image, in this case oraclelinux:6-slim, using rmi. First we check which images we have available:

[root@localhost etc]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
oraclelinux         6.8                 6214272b9f34        24 minutes ago      170.4 MB
oraclelinux         6                   7a4a8c404142        2 weeks ago         170.9 MB
oraclelinux         6-slim              aa531a50e156        2 weeks ago         120.6 MB
[root@localhost etc]#

After this we run rmi to remove the image we no longer want to be present, in our example case oraclelinux:6-slim:

[root@localhost etc]# docker rmi oraclelinux:6-slim
Untagged: oraclelinux:6-slim
Untagged: oraclelinux@sha256:0ff2303ddec4d664097768b840b6c76af9bfd6f3b49e7be82e09cfad49939c3c
Deleted: sha256:aa531a50e1565c032d1822d361b7510b55cb1be553d3eb2c3e89c928aa9ff5bd
Deleted: sha256:15ee397aafe48f04935592a0c9fd7a0948b83eac1f43c2cf9f27264a41345e88
[root@localhost etc]#

If we now check, we can see that the 6-slim image has been removed from local storage:

[root@localhost etc]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
oraclelinux         6.8                 6214272b9f34        48 minutes ago      170.4 MB
oraclelinux         6                   7a4a8c404142        2 weeks ago         170.9 MB
[root@localhost etc]#

Keeping your local Docker engine clean and making sure you remove unused old images is good practice. You can partially rely on the dangling option, but building a more controlled process might be a better and safer way of doing your housekeeping.

Oracle Linux - Pulling OL6 on Docker

When you are planning to build a number of Docker containers based upon Oracle Linux 6, you will have to ensure that you have Oracle Linux 6 available on your Docker host. The good news is that Oracle provides a lot of Docker images on hub.docker.com/u/oracle and hub.docker.com/_/oraclelinux/, and in case you need Oracle Linux 6 you can use the Docker Hub to pull it into your local Docker engine. At this moment a number of Oracle Linux 6 and Oracle Linux 7 images are available publicly. To pull a basic Oracle Linux 6 image you can use the docker pull command as shown in the example below:

[root@localhost etc]# docker pull oraclelinux:6
6: Pulling from library/oraclelinux

9bf12f7628ee: Pull complete 
Digest: sha256:3501cce71958dab7f0486cd42753780cc2ff987e3f92bd084c95a53d52f4f1dc
Status: Downloaded newer image for oraclelinux:6
[root@localhost etc]# 

If we now check if we indeed have the image we need to support our containers we can see that it is available for use:

[root@localhost etc]# docker images
REPOSITORY          TAG               IMAGE ID            CREATED             SIZE
oraclelinux         6                 7a4a8c404142        2 weeks ago         170.9 MB
[root@localhost etc]#

If required we can pull other images into our local Docker engine. As an example we pulled oraclelinux:6-slim and oraclelinux:6.8 in the same manner as we pulled oraclelinux:6, which results in the below output of the docker images command:

[root@localhost etc]# docker images
REPOSITORY          TAG               IMAGE ID            CREATED             SIZE
oraclelinux         6.8               6214272b9f34        24 minutes ago      170.4 MB
oraclelinux         6                 7a4a8c404142        2 weeks ago         170.9 MB
oraclelinux         6-slim            aa531a50e156        2 weeks ago         120.6 MB
[root@localhost etc]#

The Oracle Linux images are intended for use in the FROM field of an application's Dockerfile. For example, to use Oracle Linux 6 as the base of an image, specify FROM oraclelinux:6. If you now deploy an application that needs oraclelinux:6, oraclelinux:6.8 or oraclelinux:6-slim the image will already be available for you. 

Oracle Linux - Install Docker on OL6

Docker is a software technology providing containers, promoted by the company Docker, Inc. Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.

The Linux kernel's support for namespaces mostly isolates an application's view of the operating environment, including process trees, network, user IDs and mounted file systems, while the kernel's cgroups provide resource limiting, including the CPU, memory, block I/O, and network. Since version 0.9, Docker includes the libcontainer library as its own way to directly use virtualization facilities provided by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC (Linux Containers) and systemd-nspawn.

This blogpost will go into the details of installing a very basic Docker engine on Oracle Linux for testing purposes. Oracle Linux 6 is installed using the official Vagrant distribution for Oracle Linux.

Enable addons
To be able to install Docker using yum you will have to ensure that the yum addons repository is enabled. This can be done by setting enabled to 1 for this channel in the /etc/yum.repos.d/public-yum-ol6.repo file. An example of this change is shown below:

[public_ol6_addons]
name=Oracle Linux $releasever Add ons ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/addons/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
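
If you prefer to script this change instead of editing the file by hand, a hedged example (the section name and file layout are taken from the snippet above; verify the result on your own system, the exact layout of public-yum-ol6.repo may differ per release) is:

# set enabled=1 only within the [public_ol6_addons] section of the repo file
sed -i '/\[public_ol6_addons\]/,/^\[/ s/enabled=0/enabled=1/' /etc/yum.repos.d/public-yum-ol6.repo
# confirm the addons channel is now listed as enabled
yum repolist enabled | grep -i addons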

Install with yum
To install Docker on Oracle Linux 6 you can use yum; Docker is located in the addons channel, hence the reason why we enabled it in the previous step. Installing Docker is simply a matter of executing the below command:

yum install docker-engine

This will take care of resolving the dependencies and installing the Docker engine on Oracle Linux.
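
Before starting the service you can, if you want, verify what yum actually installed; a simple check (the output will differ depending on the version pulled from the addons channel) is:

# show the installed docker-engine package and its version
rpm -q docker-engine
# list every docker related package that was pulled in as a dependency
rpm -qa | grep -i docker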

Change docker config:
As part of the best practices described by Oracle you need to make some changes to the init script used to start the Docker engine. In version 1.5 and later of Docker, the docker service unshares its mount namespace to resolve device busy issues with the device mapper storage driver. However, this configuration breaks autofs on the host system and prevents you from accessing subsequently mounted volumes in Docker containers. The workaround is to stop the Docker service from unsharing its mount namespace.

Edit /etc/init.d/docker and remove the $unshare -m -- parameters from the line that starts the daemon. For example, change the line that reads similar to the following:

"$unshare" -m -- $exec $other_args >> $logfile 2>&1 &

This is part of the start() function in the init script, the more complete example of this part of the script is shown below:

start() {
     if [ ! -x $exec ]; then
       if [ ! -e $exec ]; then
         echo "Docker executable $exec not found"
       else
         echo "You do not have permission to execute the Docker executable $exec"
       fi
       exit 5
     fi

     check_for_cleanup

     if ! [ -f $pidfile ]; then
         prestart
         printf "Starting $prog:\t"
         echo "\n$(date)\n" >> $logfile
        "$unshare" -m -- $exec $other_args >> $logfile 2>&1 &

         pid=$!
         touch $lockfile
         # wait up to 10 seconds for the pidfile to exist.  see
         # https://github.com/docker/docker/issues/5359
         tries=0
         while [ ! -f $pidfile -a $tries -lt 10 ]; do
             sleep 1
             tries=$((tries + 1))
             echo -n '.'
         done
         if [ ! -f $pidfile ]; then
           failure
           echo
           exit 1
         fi
         success
         echo
     else
         failure
         echo
         printf "$pidfile still exists...\n"
         exit 7
     fi
 }

The mentioned line should be removed (or commented out) and replaced with the line below.

$exec $other_args &>> $logfile &

A word of caution: you might want to check this part of the script after you update the Docker engine. As the init script is part of the Docker installation it might be changed when you install a newer version of Docker on your system. A good practice is to keep a known-good version of the init script in your local repository and use something like Chef InSpec after an update on your system to ensure the daemon is still started the right way and you prevent breaking autofs.
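
A minimal sketch of such a post-update check, assuming the standard init script location used in this post, could be as simple as the below; anything more structured, for example a Chef InSpec control, would test the same condition:

#!/bin/bash
# warn when the docker init script starts the daemon via unshare again,
# which would re-introduce the autofs issue described above
if grep -qF '"$unshare" -m --' /etc/init.d/docker; then
  echo "WARNING: /etc/init.d/docker still starts the daemon with unshare -m"
else
  echo "OK: docker init script starts the daemon without unsharing the mount namespace"
fi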

Starting docker:
Now the installation is complete, which means we should be able to start Docker on our Oracle Linux instance. You can start Docker with the below command:

 service docker start

To ensure that the Docker engine starts every time we boot the machine we have to register it in the right manner. This can be done with the below command:

 chkconfig docker on

To check if this is done correctly you can use the chkconfig command, which on our test machine results in the below output. You can find docker in the list and notice that it will start automatically; a more targeted check is shown after the listing.

[root@localhost ~]# chkconfig
acpid           0:off 1:off 2:on 3:on 4:on 5:on 6:off
blk-availability 0:off 1:on 2:on 3:on 4:on 5:on 6:off
cgconfig        0:off 1:off 2:off 3:off 4:off 5:off 6:off
cgred           0:off 1:off 2:off 3:off 4:off 5:off 6:off
crond           0:off 1:off 2:on 3:on 4:on 5:on 6:off
docker          0:off 1:off 2:on 3:on 4:on 5:on 6:off
ip6tables       0:off 1:off 2:on 3:on 4:on 5:on 6:off
iptables        0:off 1:off 2:on 3:on 4:on 5:on 6:off
lvm2-monitor    0:off 1:on 2:on 3:on 4:on 5:on 6:off
netconsole      0:off 1:off 2:off 3:off 4:off 5:off 6:off
netfs           0:off 1:off 2:off 3:on 4:on 5:on 6:off
network         0:off 1:off 2:on 3:on 4:on 5:on 6:off
ntpd            0:off 1:off 2:on 3:on 4:on 5:on 6:off
ntpdate         0:off 1:off 2:off 3:off 4:off 5:off 6:off
rdisc           0:off 1:off 2:off 3:off 4:off 5:off 6:off
restorecond     0:off 1:off 2:off 3:off 4:off 5:off 6:off
rsyslog         0:off 1:off 2:on 3:on 4:on 5:on 6:off
saslauthd       0:off 1:off 2:off 3:off 4:off 5:off 6:off
sendmail        0:off 1:off 2:on 3:on 4:on 5:on 6:off
sshd            0:off 1:off 2:on 3:on 4:on 5:on 6:off
udev-post       0:off 1:on 2:on 3:on 4:on 5:on 6:off
vboxadd         0:off 1:off 2:on 3:on 4:on 5:on 6:off
vboxadd-service 0:off 1:off 2:on 3:on 4:on 5:on 6:off
vboxadd-x11     0:off 1:off 2:off 3:on 4:off 5:on 6:off
[root@localhost ~]#
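
If you are only interested in the docker entry you can restrict the output to that single service, for example:

# show only the runlevel configuration for the docker service
chkconfig --list docker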

To ensure Docker is running you can execute the docker info command. An example of this is shown below and, as expected, we have nothing running on our Docker engine yet:

[root@localhost ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.6
Storage Driver: devicemapper
 Pool Name: docker-251:1-1835143-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: ext4
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 305.7 MB
 Data Space Total: 107.4 GB
 Data Space Available: 31.62 GB
 Metadata Space Used: 729.1 kB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.147 GB
 Thin Pool Minimum Free Space: 10.74 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.117-RHEL6 (2016-12-13)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge overlay null host
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options:
Kernel Version: 4.1.12-61.1.28.el6uek.x86_64
Operating System: Oracle Linux Server 6.9
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.953 GiB
Name: localhost
ID: GU6G:JV6O:7Y6R:5F5R:OGEI:5AZG:SOVP:BBFF:4DME:YKDU:24MC:54MK
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8
[root@localhost ~]#

In addition to docker info you can also run docker version to find out the exact version of Docker which is running on the Oracle Linux instance. An example is shown below:

[root@localhost ~]# docker version
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   1512168
 Built:        Wed Jan 11 09:49:56 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   1512168
 Built:        Wed Jan 11 09:49:56 2017
 OS/Arch:      linux/amd64
[root@localhost ~]#

This gives you a standard, basic installation of Docker which enables you to get started experimenting with Docker on Oracle Linux.
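
As a final smoke test you can run a throwaway container; the example below uses the oraclelinux:6 image (docker run will pull it automatically if it is not yet present locally) and simply prints the release file, which confirms that pulling, creating and running containers all work:

# run a disposable container and print the Oracle Linux release it is based on
docker run --rm oraclelinux:6 cat /etc/oracle-release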

Saturday, July 08, 2017

Oracle Jet - understanding the MVVM application design pattern

Oracle JavaScript Extension Toolkit (Oracle JET) is a complete yet modular JavaScript development toolkit helping developers build engaging user interfaces. Based on industry standards and popular open-source frameworks, Oracle JET further adds advanced functionality and services to help developers build better applications faster.

Oracle JET revolves around the MVVM model; MVVM is short for the Model-View-ViewModel application design pattern. The application design pattern is shown in the diagram below:

[Diagram: the Model-View-ViewModel (MVVM) application design pattern]

The MVVM application design pattern is not something that has been developed by Oracle, it is rather a generic application design pattern adopted by many vendors and applications when developing modern day web and mobile applications.

Some of the benefits of the MVVM application design pattern are that the application can be developed in a very component-based manner with a strong REST API based backend, and that it allows for fewer data transactions between the server side and the client side. By doing so the applications become, in general, more flexible and faster while requiring fewer server resources.

Breaking it into layers:
Within the application design pattern we see the following main parts:

View: as in the MVC and MVP patterns, the view is the structure, layout, and appearance of what a user sees on the screen. With Oracle Jet this will be mainly the Alta UI.

ViewModel: the view model is an abstraction of the view exposing public properties and commands. Instead of the controller of the MVC pattern, or the presenter of the MVP pattern, MVVM has a binder, which mediates communication between the view and the bound properties in the view model. The view model has been described as a state of the data in the model.

Model: model refers either to a domain model, which represents real state content (an object-oriented approach), or to the data access layer, which represents content (a data-centric approach).

The Oracle JET framework supports two-way data binding between the View and Model layers in the Model-View-ViewModel (MVVM) design. Data changes in the ViewModel are sent to the UI components, and user input from the UI components is written back into the ViewModel. For this Oracle JET uses Knockout, an open-source project that can be found on the Knockout website.

Friday, July 07, 2017

Oracle Linux - reading CPU flags from /proc/cpuinfo

Most people using Linux take the CPU they have for granted. Commonly the only question is, how fast is that CPU handling my code. However, in some cases, especially when doing lower level development work or doing more advanced Oracle Linux system tuning it is good to have a bit more insight into your physical CPU. The first place to look for more information on the CPU is /proc/cpuinfo which might give a wealth of additional information.

Below is an example of the content of my /proc/cpuinfo (running Oracle Linux in a Virtualbox image on a MacBook Pro):

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model  : 78
model name : Intel(R) Core(TM) i5-6267U CPU @ 2.90GHz
stepping : 3
cpu MHz  : 2903.998
cache size : 4096 KB
physical id : 0
siblings : 2
core id  : 0
cpu cores : 2
apicid  : 0
initial apicid : 0
fpu  : yes
fpu_exception : yes
cpuid level : 22
wp  : yes
flags  : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch rdseed clflushopt
bugs  :
bogomips : 5807.99
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model  : 78
model name : Intel(R) Core(TM) i5-6267U CPU @ 2.90GHz
stepping : 3
cpu MHz  : 2903.998
cache size : 4096 KB
physical id : 0
siblings : 2
core id  : 1
cpu cores : 2
apicid  : 1
initial apicid : 1
fpu  : yes
fpu_exception : yes
cpuid level : 22
wp  : yes
flags  : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch rdseed clflushopt
bugs  :
bogomips : 5807.99
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:

As you can see, a lot of flags are mentioned; these are interesting for understanding the capabilities of your processor in more detail. In our case the following flags are listed: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch rdseed clflushopt.
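
If you only want to know whether a specific capability is present you do not have to read the full listing; a hedged example (aes is used here, substitute any flag from the list above) is:

# show the flags of the first logical CPU, one per line
grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sed '/^$/d'
# test for one specific flag, hardware AES support in this example
grep -wq aes /proc/cpuinfo && echo "aes supported" || echo "aes not supported"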

fpu
Onboard FPU (floating point support). The FPU is designed to carry out operations on floating point numbers. Typical operations are addition, subtraction, multiplication, division, square root, and bit shifting.

vme
Virtual 8086 mode enhancements. In the 80386 microprocessor and later, virtual 8086 mode (also called virtual real mode, V86-mode or VM86) allows the execution of real mode applications that are incapable of running directly in protected mode while the processor is running a protected mode operating system. It is a hardware virtualization technique that allowed multiple 8086 processors to be emulated by the 386 chip; it emerged from the painful experiences with the 80286 protected mode, which by itself was not suitable to run concurrent real mode applications well.

de
Debugging Extensions (CR4.DE). A control register CR is a processor register which changes or controls the general behavior of a CPU or other digital device. Common tasks performed by control registers include interrupt control, switching the addressing mode, paging control, and coprocessor control. CR4 is used in protected mode to control operations such as virtual-8086 support, enabling I/O breakpoints, page size extension and machine check exceptions. CR4 bit 3 controls the DE (Debugging Extension), if set, enables debug register based breaks on I/O space access.

pse
Page Size Extensions (4MB memory pages). PSE is a feature of x86 processors that allows for pages larger than the traditional 4 KiB size. It was introduced in the original Pentium processor, but it was only publicly documented by Intel with the release of the Pentium Pro.

tsc
Time Stamp Counter (RDTSC). is a 64-bit register present on all x86 processors since the Pentium. It counts the number of cycles since reset. The instruction RDTSC returns the TSC in EDX:EAX. In x86-64 mode, RDTSC also clears the higher 32 bits of RAX and RDX. Its opcode is 0F 31.

msr
Model-Specific Registers (RDMSR, WRMSR). MSR is any of various control registers in the x86 instruction set used for debugging, program execution tracing, computer performance monitoring, and toggling certain CPU features.

pae
Physical Address Extensions (support for more than 4GB of RAM). Physical Address Extension (PAE), sometimes referred to as Page Address Extension, is a memory management feature for the x86 architecture. PAE was first introduced by Intel in the Pentium Pro, and later by AMD in the Athlon processor.[2] It defines a page table hierarchy of three levels, with table entries of 64 bits each instead of 32, allowing these CPUs to directly access a physical address space larger than 4 gigabytes

mce
Machine Check Exception. A Machine Check Exception (MCE) is a type of computer hardware error that occurs when a computer's central processing unit detects a hardware problem. Modern versions of Microsoft Windows handle machine check exceptions through the Windows Hardware Error Architecture. On Linux, a process (such as klogd[2]) writes a message to the kernel log and/or the console screen (usually only to the console when the error is non-recoverable and the machine crashes as a result)

cx8
CMPXCHG8 instruction (64-bit compare-and-swap). Compares the 64-bit value in EDX:EAX (or 128-bit value in RDX:RAX if operand size is 128 bits) with the operand (destination operand). If the values are equal, the 64-bit value in ECX:EBX (or 128-bit value in RCX:RBX) is stored in the destination operand. Otherwise, the value in the destination operand is loaded into EDX:EAX (or RDX:RAX). The destination operand is an 8-byte memory location (or 16-byte memory location if operand size is 128 bits). For the EDX:EAX and ECX:EBX register pairs, EDX and ECX contain the high-order 32 bits and EAX and EBX contain the low-order 32 bits of a 64-bit value. For the RDX:RAX and RCX:RBX register pairs, RDX and RCX contain the high-order 64 bits and RAX and RBX contain the low-order 64bits of a 128-bit value. This instruction encoding is not supported on Intel processors earlier than the Pentium processors.

apic
Onboard APIC. Advanced Programmable Interrupt Controller (APIC) is a family of interrupt controllers. As its name suggests, the APIC is more advanced than Intel's 8259 Programmable Interrupt Controller (PIC), particularly enabling the construction of multiprocessor systems. It is one of several architectural designs intended to solve interrupt routing efficiency issues in multiprocessor computer systems.

sep
SYSENTER/SYSEXIT. Executes a fast call to a level 0 system procedure or routine. SYSENTER is a companion instruction to SYSEXIT. The instruction is optimized to provide the maximum performance for system calls from user code running at privilege level 3 to operating system or executive procedures running at privilege level 0.

mtrr
Memory Type Range Registers. Memory type range registers (MTRRs) are a set of processor supplementary capabilities control registers that provide system software with control of how accesses to memory ranges by the CPU are cached. It uses a set of programmable model-specific registers (MSRs) which are special registers provided by most modern CPUs. Possible access modes to memory ranges can be uncached, write-through, write-combining, write-protect, and write-back. In write-back mode, writes are written to the CPU's cache and the cache is marked dirty, so that its contents are written to memory later.

pge
Page Global Enable (global bit in PDEs and PTEs). the page global enable (PGE) flag in the register CR4 and the global (G) flag of a page-directory or page-table entry can be used to prevent frequently used pages from being automatically invalidated in the TLBs on a task switch or a load of register CR3. Bit 7 in CR4 is used to set PGE, if set, address translations (PDE or PTE records) may be shared between address spaces.

mca
Machine Check Architecture. Machine Check Architecture (MCA) is an Intel mechanism in which the CPU reports hardware errors to the operating system. Intel's Pentium 4, Intel Xeon, P6 family processors as well as the Itanium architecture implement a machine check architecture that provides a mechanism for detecting and reporting hardware (machine) errors, such as: system bus errors, ECC errors, parity errors, cache errors, and translation lookaside buffer errors. It consists of a set of model-specific registers (MSRs) that are used to set up machine checking and additional banks of MSRs used for recording errors that are detected.

cmov
CMOV instructions (conditional move) (also FCMOV). FCMOV is a floating point conditional move opcode of the Intel x86 architecture, first introduced in Pentium Pro processors. It copies the contents of one of the floating point stack register, depending on the contents of EFLAGS integer flag register, to the ST(0) (top of stack) register. There are 8 variants of the instruction selected by the condition codes that need be set for the instruction to perform the move. Similar to the CMOV instruction, FCMOV allows some conditional operations to be performed without the usual branching overhead. However, it has a higher latency than conditional branch instructions.[2] Therefore, it is most useful for simple yet unpredictable comparison or conditional operations, where it can provide substantial performance gains. The instruction is usually used with the FCOMI instruction or the FCOM-FSTSW-SAHF idiom to set the relevant condition codes based on the result of a floating point comparison.

pat
Page Attribute Table. The page attribute table (PAT) is a processor supplementary capability extension to the page table format of certain x86 and x86-64 microprocessors. Like memory type range registers (MTRRs), they allow for fine-grained control over how areas of memory are cached, and are a companion feature to the MTRRs.

pse36
36-bit PSEs (huge pages). PSE-36 (36-bit Page Size Extension) refers to a feature of x86 processors that extends the physical memory addressing capabilities from 32 bits to 36 bits, allowing addressing to up to 64 GB of memory. Compared to the Physical Address Extension (PAE) method, PSE-36 is a simpler alternative to addressing more than 4 GB of memory. It uses the Page Size Extension (PSE) mode and a modified page directory table to map 4 MB pages into a 64 GB physical address space. PSE-36's downside is that, unlike PAE, it doesn't have 4-KB page granularity above the 4 GB mark.

clflush
Cache Line Flush instruction. Invalidates the cache line that contains the linear address specified with the source operand from all levels of the processor cache hierarchy (data and instruction). The invalidation is broadcast throughout the cache coherence domain. If, at any level of the cache hierarchy, the line is inconsistent with memory (dirty) it is written to memory before invalidation. The source operand is a byte memory location. The availability of CLFLUSH is indicated by the presence of the CPUID feature flag CLFSH (bit 19 of the EDX register, see Section , CPUID-CPU Identification). The aligned cache line size affected is also indicated with the CPUID instruction (bits 8 through 15 of the EBX register when the initial value in the EAX register is 1).

mmx
Multimedia Extensions. MMX is a single instruction, multiple data (SIMD) instruction set designed by Intel, introduced in 1997 with its P5-based Pentium line of microprocessors, designated as "Pentium with MMX Technology". It developed out of a similar unit introduced on the Intel i860, and earlier the Intel i750 video pixel processor. MMX is a processor supplementary capability that is supported on recent IA-32 processors by Intel and other vendors.

fxsr
FXSAVE/FXRSTOR, CR4.OSFXSR. Operating system support for FXSAVE and FXRSTOR instructions. If set, enables SSE instructions and fast FPU save & restore. bit 9 in CR4 is used to control fxsr.

sse
Intel SSE vector instructions. Streaming SIMD Extensions (SSE) is an SIMD instruction set extension to the x86 architecture, designed by Intel and introduced in 1999 in their Pentium III series of processors shortly after the appearance of AMD's 3DNow!. SSE contains 70 new instructions, most of which work on single precision floating point data. SIMD instructions can greatly increase performance when exactly the same operations are to be performed on multiple data objects. Typical applications are digital signal processing and graphics processing.

sse2
SSE2 (Streaming SIMD Extensions 2), is one of the Intel SIMD (Single Instruction, Multiple Data) processor supplementary instruction sets first introduced by Intel with the initial version of the Pentium 4 in 2001. It extends the earlier SSE instruction set, and is intended to fully replace MMX. Intel extended SSE2 to create SSE3 in 2004. SSE2 added 144 new instructions to SSE, which has 70 instructions. Competing chip-maker AMD added support for SSE2 with the introduction of their Opteron and Athlon 64 ranges of AMD64 64-bit CPUs in 2003.

ht
Hyper-threading (officially called Hyper-Threading Technology or HT Technology, and abbreviated as HTT or HT) is Intel's proprietary simultaneous multithreading (SMT) implementation used to improve parallelization of computations (doing multiple tasks at once) performed on x86 microprocessors. It first appeared in February 2002 on Xeon server processors and in November 2002 on Pentium 4 desktop CPUs.[4] Later, Intel included this technology in Itanium, Atom, and Core 'i' Series CPUs, among others.

syscall
SYSCALL (Fast System Call) and SYSRET (Return From Fast System Call). SYSCALL invokes an OS system-call handler at privilege level 0. It does so by loading RIP from the IA32_LSTAR MSR (after saving the address of the instruction following SYSCALL into RCX). (The WRMSR instruction ensures that the IA32_LSTAR MSR always contain a canonical address.)

nx
Execute Disable. The NX bit, which stands for No-eXecute, is a technology used in CPUs to segregate areas of memory for use by either storage of processor instructions (code) or for storage of data, a feature normally only found in Harvard architecture processors. However, the NX bit is being increasingly used in conventional von Neumann architecture processors, for security reasons.

rdtscp
Read Time-Stamp Counter and Processor ID. Loads the current value of the processor’s time-stamp counter (a 64-bit MSR) into the EDX:EAX registers and also loads the IA32_TSC_AUX MSR (address C000_0103H) into the ECX register. The EDX register is loaded with the high-order 32 bits of the IA32_TSC MSR; the EAX register is loaded with the low-order 32 bits of the IA32_TSC MSR; and the ECX register is loaded with the low-order 32-bits of IA32_TSC_AUX MSR. On processors that support the Intel 64 architecture, the high-order 32 bits of each of RAX, RDX, and RCX are cleared.

lm
Long Mode (x86-64: amd64, also known as Intel 64, i.e. 64-bit capable). Long mode is the mode where a 64-bit operating system can access 64-bit instructions and registers. 64-bit programs are run in a sub-mode called 64-bit mode, while 32-bit programs and 16-bit protected mode programs are executed in a sub-mode called compatibility mode. Real mode or virtual 8086 mode programs cannot be natively run in long mode.

constant_tsc
TSC ticks at a constant rate. Recent Intel processors include a constant rate TSC (identified by the kern.timecounter.invariant_tsc sysctl on FreeBSD or by the "constant_tsc" flag in Linux's /proc/cpuinfo). With these processors, the TSC ticks at the processor's nominal frequency, regardless of the actual CPU clock frequency due to turbo or power saving states. Hence TSC ticks are counting the passage of time, not the number of CPU clock cycles elapsed.

rep_good
rep microcode works well.

nopl
The NOPL (0F 1F) instructions

xtopology
CPU topology enumeration extensions.

nonstop_tsc
TSC does not stop in C states. NONSTOP_TSC acts in conjunction with CONSTANT_TSC. CONSTANT_TSC indicates that the TSC runs at constant frequency irrespective of P/T- states, and NONSTOP_TSC indicates that TSC does not stop in deep C-states.

pni
SSE-3 (“Prescott New Instructions”). SSE3, Streaming SIMD Extensions 3, also known by its Intel code name Prescott New Instructions (PNI), is the third iteration of the SSE instruction set for the IA-32 (x86) architecture. Intel introduced SSE3 in early 2004 with the Prescott revision of their Pentium 4 CPU. In April 2005, AMD introduced a subset of SSE3 in revision E (Venice and San Diego) of their Athlon 64 CPUs. The earlier SIMD instruction sets on the x86 platform, from oldest to newest, are MMX, 3DNow! (developed by AMD, but not supported by Intel processors), SSE, and SSE2.

pclmulqdq
Perform a Carry-Less Multiplication of Quadword instruction (an accelerator for GCM). Carry-less Multiplication (CLMUL) is an extension to the x86 instruction set used by microprocessors from Intel and AMD which was proposed by Intel in March 2008 and made available in the Intel Westmere processors announced in early 2010. One use of these instructions is to improve the speed of applications doing block cipher encryption in Galois/Counter Mode, which depends on finite field GF(2^k) multiplication, which can be implemented more efficiently with the new CLMUL instructions than with the traditional instruction set. Another application is the fast calculation of CRC values, including those used to implement the LZ77 sliding window DEFLATE algorithm in zlib and pngcrush.

ssse3
Supplemental SSE-3. SSSE3 was first introduced with Intel processors based on the Core microarchitecture on 26 June 2006 with the "Woodcrest" Xeons. SSSE3 has been referred to by the codenames Tejas New Instructions (TNI) or Merom New Instructions (MNI) for the first processor designs intended to support it. SSSE3 contains 16 new discrete instructions as a supplement on SSE-3

cx16
CMPXCHG16B instruction, a 16-byte (128-bit) compare-and-exchange on double quadwords. This is useful for parallel algorithms that use compare and swap on data larger than the size of a pointer, common in lock-free and wait-free algorithms. Without CMPXCHG16B one must use workarounds, such as a critical section or alternative lock-free approaches.

sse4_1
SSE4.1 instruction set. These instructions were introduced with Penryn microarchitecture, the 45 nm shrink of Intel's Core microarchitecture. Support is indicated via the CPUID.01H:ECX.SSE41[Bit 19] flag.

sse4_2
SSE4.2 instruction set. SSE4.2 added STTNI (String and Text New Instructions), several new instructions that perform character searches and comparison on two operands of 16 bytes at a time. These were designed (among other things) to speed up the parsing of XML documents. It also added a CRC32 instruction to compute cyclic redundancy checks as used in certain data transfer protocols. These instructions were first implemented in the Nehalem-based Intel Core i7 product line and complete the SSE4 instruction set. Support is indicated via the CPUID.01H:ECX.SSE42[Bit 20] flag.

x2apic
The xAPIC was introduced with the Pentium 4, while the x2APIC is the most recent generation of Intel's programmable interrupt controller, introduced with the Nehalem microarchitecture. The major improvements of the x2APIC address the number of supported CPUs and performance of the interface. The x2APIC now uses 32 bits to address CPUs, allowing it to address up to 2^32 − 1 CPUs using the physical destination mode. The logical destination mode now works differently and introduces clusters; using this mode, one can address up to 2^20 − 16 processors. The x2APIC architecture also provides backward compatibility modes to the original Intel APIC Architecture (introduced with the Pentium/P6) and with the xAPIC architecture (introduced with the Pentium 4).

movbe
Move Data After Swapping Bytes instruction. Performs a byte swap operation on the data copied from the second operand (source operand) and store the result in the first operand (destination operand). The source operand can be a general-purpose register, or memory location; the destination register can be a general-purpose register, or a memory location; however, both operands can not be registers, and only one operand can be a memory location. Both operands must be the same size, which can be a word, a doubleword or quadword. The MOVBE instruction is provided for swapping the bytes on a read from memory or on a write to memory; thus providing support for converting little-endian values to big-endian format and vice versa. In 64-bit mode, the instruction's default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

popcnt
Return the Count of Number of Bits Set to 1 instruction (Hamming weight, i.e. bit count). These instructions operate on integer rather than SSE registers, because they are not SIMD instructions, but appear at the same time and although introduced by AMD with the SSE4a instruction set, they are counted as separate extensions with their own dedicated CPUID bits to indicate support. Intel implements POPCNT beginning with the Nehalem microarchitecture and LZCNT beginning with the Haswell microarchitecture. AMD implements both beginning with the Barcelona microarchitecture. Population count (count number of bits set to 1). Support is indicated via the CPUID.01H:ECX.POPCNT[Bit 23] flag.

aes
Advanced Encryption Standard Instruction Set (or the Intel Advanced Encryption Standard New Instructions; AES-NI) is an extension to the x86 instruction set architecture for microprocessors from Intel and AMD proposed by Intel in March 2008. The purpose of the instruction set is to improve the speed of applications performing encryption and decryption using the Advanced Encryption Standard (AES).

xsave
Save Processor Extended States: also provides XGETBY,XRSTOR,XSETBY. Performs a full or partial save of processor state components to the XSAVE area located at the memory address specified by the destination operand. The implicit EDX:EAX register pair specifies a 64-bit instruction mask. The specific state components saved correspond to the bits set in the requested-feature bitmap (RFBM), which is the logical-AND of EDX:EAX and XCR0.

avx
Advanced Vector Extensions. Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge processor shipping in Q1 2011 and later on by AMD with the Bulldozer processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme.

rdrand
Read Random Number from hardware random number generator instruction. RDRAND (previously known as Bull Mountain) is an instruction for returning random numbers from an Intel on-chip hardware random number generator which has been seeded by an on-chip entropy source. RDRAND is available in Ivy Bridge processors and is part of the Intel 64 and IA-32 instruction set architectures. AMD added support for the instruction in June 2015. The random number generator is compliant with security and cryptographic standards such as NIST SP 800-90A, FIPS 140-2, and ANSI X9.82. Intel also requested Cryptography Research Inc. to review the random number generator in 1999 and 2012, which resulted in two published papers: The Intel Random Number Generator in 1999, and Analysis of Intel's Ivy Bridge Digital Random Number Generator in 2012.

hypervisor
This flag is set by the hypervisor to indicate that your machine is running as a virtual machine on a hypervisor. Even though the presence of the flag is a good indicator that your machine is in fact a virtual machine running on a hypervisor you should be careful when building logic upon the presence of this flag. It is the decision of the hypervisor to push the flag or not. This means that if the flag is not pushed by the hypervisor, and is not present, the machine can still be a virtual machine instead of a bare-metal machine.
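
A short example of such a check is shown below; as explained, a negative result does not prove you are on bare metal, it only means the flag is not advertised:

# check whether the hypervisor flag is advertised in /proc/cpuinfo
if grep -wq hypervisor /proc/cpuinfo; then
  echo "hypervisor flag present: this is most likely a virtual machine"
else
  echo "hypervisor flag absent: possibly bare metal, but the hypervisor may simply not advertise it"
fi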

lahf_lm 
Load AH from Flags (LAHF) and Store AH into Flags (SAHF) in long mode.

abm
Advanced Bit Manipulation. ABM is only implemented as a single instruction set by AMD; all AMD processors support both instructions or neither. Intel considers POPCNT as part of SSE4.2, and LZCNT as part of BMI1. POPCNT has a separate CPUID flag; however, Intel uses AMD's ABM flag to indicate LZCNT support (since LZCNT completes the ABM).

3dnowprefetch
3DNow! prefetch instructions. 3DNow! is an extension to the x86 instruction set developed by Advanced Micro Devices (AMD). It adds single instruction multiple data (SIMD) instructions to the base x86 instruction set, enabling it to perform vector processing, which improves the performance of many graphic-intensive applications. The first microprocessor to implement 3DNow! was the AMD K6-2, which was introduced in 1998. When the application was suitable this raised the speed by about 2 to 4 times. However, the instruction set never gained much popularity, and AMD announced in August 2010 that support for 3DNow! would be dropped in future AMD processors, except for two instructions (the PREFETCH and PREFETCHW instructions). The two instructions are also available in Bay Trail Intel processors.

rdseed
The RDSEED instruction. Non-deterministic random bit generator compatible with NIST SP 800-90B & C (drafts)

clflushopt
CLFLUSHOPT instruction. With the CLFLUSHOPT instruction, a store buffer only needs to be flushed if it holds data from the same cache line that the CLFLUSHOPT is accessing.   Store buffers holding any other addresses can continue to "cache" their stored data until some other mechanism forces them to push that data to the L1 Data Cache (where it becomes visible to all agents in the coherence fabric).

Reading and understanding the above flags will give you good insight into the capabilities of your processor and how Oracle Linux sees the processor. Additionally, when you are writing low level code for processes that need to interact relatively directly with the hardware (or virtual hardware), it is of vital importance to understand what options are at your disposal and what the machine is capable of doing.