Friday, December 02, 2016

Oracle Linux - installing Consul as server

Consul, developed by HashiCorp, is a solution for service discovery and configuration. Consul is completely distributed, highly available, and scales to thousands of nodes and services across multiple datacenters. Some concrete problems Consul solves: finding the services applications need (database, queue, mail server, etc.), configuring services with key/value information such as enabling maintenance mode for a web application, and health checking services so that unhealthy services aren't used. These are just a handful of the important problems Consul addresses.

Consul solves the problem of service discovery and configuration. Built on top of a foundation of rigorous academic research, Consul keeps your data safe and works with the largest of infrastructures. Consul embraces modern practices, is friendly to existing DevOps tooling, and is already deployed in very large infrastructures across multiple datacenters.

Installing Consul on Oracle Linux is relatively easy. You can download Consul from the consul.io website and unpack it; after this you already have a working Consul deployment. In essence Consul does not require an installation to be able to function. However, to use Consul on a production system and have it start as a service you will have to do some more things.

First, make sure your consul binary is in a good location where it is accessible to everyone. For example you can decide to move it to /usr/bin, where it is widely accessible throughout the system.
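
A minimal sketch of those steps, assuming you downloaded the Linux amd64 zip of Consul 0.7.1 into /tmp (adjust the file name to the release you actually fetched from consul.io):

cd /tmp
unzip consul_0.7.1_linux_amd64.zip   # the zip contains a single binary named consul
mv consul /usr/bin/consul            # place it in a location that is in everyone's PATH
chmod 755 /usr/bin/consul
consul version                       # quick check that the binary is found and runs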

Next we have to make sure we can start it relatively easily. You can start Consul with all configuration passed as command line options, however you can also put the configuration in a JSON file, which makes a lot more sense. The below example is the content of the file /etc/consul.d/consul.json which I created on my test server to make Consul work with a configuration file. The data_dir specified is not the best location to store persistent data, so you might want to select a different data_dir location.

{
  "datacenter": "private_dc",
  "data_dir": "/tmp/consul3",
  "log_level": "INFO",
  "node_name": "consul_0",
  "server": true,
  "bind_addr": "127.0.0.1",
  "bootstrap_expect": 1
}

Now that the configuration is located in /etc/consul.d/consul.json we would like to ensure that the Consul server is started as a service every time the machine boots. I used the below code as the init script in /etc/init.d:

#!/bin/sh
#
# consul - this script manages the consul agent
#
# chkconfig:   345 95 05
# processname: consul

### BEGIN INIT INFO
# Provides:       consul
# Required-Start: $local_fs $network
# Required-Stop:  $local_fs $network
# Default-Start: 3 4 5
# Default-Stop:  0 1 2 6
# Short-Description: Manage the consul agent
### END INIT INFO

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

exec="/usr/bin/consul"
prog=${exec##*/}

lockfile="/var/lock/subsys/$prog"
pidfile="/var/run/${prog}.pid"
logfile="/var/log/${prog}.log"
sysconfig="/etc/sysconfig/$prog"
confdir="/etc/${prog}.d"

[ -f $sysconfig ] && . $sysconfig

export GOMAXPROCS=${GOMAXPROCS:-2}

start() {
    [ -x $exec ] || exit 5
    [ -d $confdir ] || exit 6

    echo -n $"Starting $prog: "
    touch $logfile $pidfile
    daemon "{ $exec agent $OPTIONS -config-dir=$confdir &>> $logfile & }; echo \$! >| $pidfile"

    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch $lockfile
    echo
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $exec -INT 2>> $logfile
    RETVAL=$?
    [ $RETVAL -eq 0 ] && rm -f $pidfile $lockfile
    echo
    return $RETVAL
}

restart() {
    stop
    while :
    do
        ss -pl | fgrep "((\"$prog\"," > /dev/null
        [ $? -ne 0 ] && break
        sleep 0.1
    done
    start
}

reload() {
    echo -n $"Reloading $prog: "
    killproc -p $pidfile $exec -HUP
    echo
}

force_reload() {
    restart
}

configtest() {
    $exec configtest -config-dir=$confdir
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart)
        $1
        ;;
    reload|force-reload)
        rh_status_q || exit 7
        $1
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 7
        restart
        ;;
    configtest)
        $1
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

exit $?

As soon as you have the above code in the /etc/init.d/consul file and have made the file executable, you can use chkconfig to add it as a system service; this will ensure Consul is stopped and started in the right way whenever you stop or start your server. This makes your Consul server a lot more resilient and you do not have to undertake any manual actions when you restart your Oracle Linux machine.
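
A short sketch of those steps, assuming the init script has been saved as /etc/init.d/consul as shown above:

chmod 755 /etc/init.d/consul   # make the init script executable
chkconfig --add consul         # register consul as a system service
chkconfig consul on            # enable it for the default runlevels
service consul start           # start the agent right away
service consul status          # verify that it is running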

You are able to find the latest version of the script and the configuration file in my GitHub repository. This has been tested on Oracle Linux 6; it will most likely also work on other Linux distributions, however that has not been tested.

Thursday, December 01, 2016

Oracle Linux – short tip #2 – reuse a command from history

Whenever using Linux from the command line you will be typing a large number of commands throughout the day in your terminal. Every now and then you want to review which command you used to achieve something, or you might even want to reuse a command from the history. As your terminal will only show a limited number of lines and you cannot scroll back endlessly, you can make use of the history command, which will show you a long list of the commands you executed.

As an example of the history command you can see the output of one of my machines:

[root@localhost ~]# history
    1  ./filebeat.sh -configtest -e
    2  service filebeat.sh status
    3  service filebeat status
    4  chkconfig --list
    5  chkconfig --list | grep file
    6  chkconfig --add filebeat
    7  service filebeat status
    8  service filebeat start
    9  cd /var/log/
   10  ls
   11  cat messages
   12  cd /etc/filebeat/
   13  ls
   14  vi filebeat.yml
   15  service filebeat stop
   16  service filebeat start
   17  date
   18  tail -20 /var/log/messages
   19  date
   20  tail -f /var/log/messages
   21  clear

Having the option to travel back in time and review which commands you used is great, especially if you are trying to figure something out and have tried a command a number of times in different ways and you are no longer sure what some of the previous “versions” of your attempt were.

An additional trick you can do with history is to reuse a command by simply calling it back from history without the need to type it again. In the above output you can notice that line 17 is date. If we want to reuse it we can simply type !17 on the command line. As an example we execute command 17 again:

[root@localhost ~]# !17
date
Sun Nov 27 13:41:55 CET 2016
[root@localhost ~]#
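
When the history list grows long you can combine it with grep to find the entry you are after and then recall it by number; bash also lets you repeat the most recent command that started with a given string by prefixing that string with an exclamation mark. A short sketch based on the history shown above:

history | grep tail   # find the history numbers of earlier tail commands
!20                   # re-run entry 20 (tail -f /var/log/messages)
!tail                 # re-run the most recent command starting with "tail"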

Oracle Linux – short tip #1 – using apropos

It happens to everyone, especially on Monday mornings: you suddenly cannot remember a command that is normally at the top of your head and that you have used a thousand times. The way to find the command you are looking for while using Linux is to make use of the apropos command. apropos searches a set of database files containing short descriptions of system commands for keywords and displays the result on the standard output.

As an example, I want to do something with a service, however I am not sure which command to use or where to start researching. We can use apropos to get a first hint as shown below:

[root@localhost ~]# apropos "system service"
chkconfig            (8)  - updates and queries runlevel information for system services
[root@localhost ~]#

As another example, I want to do something with utmp and I want to know which commands would provide me with functionality to work with utmp. I can use the below apropos command to find out.

[root@localhost ~]# apropos utmp
dump-utmp            (8)  - print a utmp file in human-readable format
endutent [getutent]  (3)  - access utmp file entries
getutent             (3)  - access utmp file entries
getutid [getutent]   (3)  - access utmp file entries
getutline [getutent] (3)  - access utmp file entries
getutmp              (3)  - copy utmp structure to utmpx, and vice versa
getutmpx [getutmp]   (3)  - copy utmp structure to utmpx, and vice versa
login                (3)  - write utmp and wtmp entries
logout [login]       (3)  - write utmp and wtmp entries
pututline [getutent] (3)  - access utmp file entries
setutent [getutent]  (3)  - access utmp file entries
utmp                 (5)  - login records
utmpname [getutent]  (3)  - access utmp file entries
utmpx.h [utmpx]      (0p)  - user accounting database definitions
wtmp [utmp]          (5)  - login records
[root@localhost ~]#

It is not a perfect solution and you have to be a bit creative in guessing how the short description apropos searches might be worded; however, in general it can be a good starting point when looking for a command while using Linux.
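
As a side note, apropos performs essentially the same lookup as man with the -k option, so the two commands below should give comparable results (the exact output depends on the man page database on your system):

apropos utmp
man -k utmp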

Oracle - Profitability and Cost Management Cloud Service

Often it is hard to find the true costs of a product and ensure you make a true calculation of the profitability of a product. As an example, a retail organization might figure that the sales price minus the purchase price is the profitability of a single product. The hidden overall costs for overhead, transportation, IT services and others are often deducted from the overall company revenue. Even though this will ensure you have the correct overall company revenue, it does not give you the correct profitability figures per product or service.

Not being able to see on a product or service level what the exact profitability is might result in having a sub-optimal set of products. Being able to see on a product or service level which products and services are profitable and which are not can help companies to create a clean portfolio and become more profitable overall.


With the launch of the Profitability and Cost Management Cloud Service, Oracle tries to provide a solution for this.

It takes information streams like production cost figures, facility costs figures and human resource cost figures and combines those with the information from your core general ledger. The combined set of figures is loaded into the performance ledger to enable the profitability and cost management cloud to analyze and calculate the true costs of a product.

The core of the product is a web-based interface which allows analysts to combine, include and exclude sets of data to create a calculation model which will enable them to see the true costs, including the hidden costs.

As Profitability and Cost Management Cloud makes use of Hyperion in the background, you are also able to use the Smart View for Office options and include the data and the data model results you create in Profitability and Cost Management Cloud in your local Excel sheets. As a lot of people still like to do additional analysis in Microsoft Excel, the inclusion of the Smart View for Office options makes a lot of sense to business users.

In the above video you can see an introduction to the new cloud service from Oracle. 

Sunday, November 27, 2016

Oracle Linux - Consul failed to sync remote state: No cluster leader

Whenever you are installing and running Consul from HashiCorp on Oracle Linux you might run into some strange errors. Even though your configuration JSON file passes the configuration validation, the log file contains a long repetitive list of the same errors complaining about "failed to sync remote state: No cluster leader" and "coordinate update error: No cluster leader".

Consul is a tool for service discovery and configuration. It provides high level features such as service discovery, health checking and key/value storage. It makes use of a group of strongly consistent servers to manage the datacenter. Consul is developed by HashiCorp and is available from its own website.

It might be that you see the below output when you start Consul:

    2016/11/25 21:03:50 [INFO] raft: Initial configuration (index=0): []
    2016/11/25 21:03:50 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
    2016/11/25 21:03:50 [INFO] serf: EventMemberJoin: consul_1 127.0.0.1
    2016/11/25 21:03:50 [INFO] serf: EventMemberJoin: consul_1.private_dc 127.0.0.1
    2016/11/25 21:03:50 [INFO] consul: Adding LAN server consul_1 (Addr: tcp/127.0.0.1:8300) (DC: private_dc)
    2016/11/25 21:03:50 [INFO] consul: Adding WAN server consul_1.private_dc (Addr: tcp/127.0.0.1:8300) (DC: private_dc)
    2016/11/25 21:03:55 [WARN] raft: no known peers, aborting election
    2016/11/25 21:03:57 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:04:14 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:04:30 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:04:50 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:05:01 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:05:26 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:05:34 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:06:02 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:06:10 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:06:35 [ERR] agent: coordinate update error: No cluster leader

The main reason for the above is that you are trying to start Consul in an environment where no cluster is available, or where this is the first node of the cluster. In case you start it as the first, or only, node of the cluster you have to ensure that you include -bootstrap-expect 1 as a command line option when starting (assuming you will only have one node).

You can also include "bootstrap_expect": 1 in the JSON configuration file if you use a configuration file to start Consul.
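
As a sketch, a minimal single-node server configuration file could look like the one below; the datacenter, node_name and data_dir values are just placeholders you would adjust to your own environment:

{
  "datacenter": "private_dc",
  "node_name": "consul_1",
  "server": true,
  "bootstrap_expect": 1,
  "data_dir": "/tmp/consul"
}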

As an example, the below start of Consul will prevent the above errors:

consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul

Friday, November 25, 2016

Oracle Linux - build Elasticsearch network.host configuration

With the latest version of Elasticsearch the directives used to ensure your Elasticsearch daemon listens on the correct interfaces of your Linux machine have changed. By default Elasticsearch will listen on your local interface only, which is a bit useless in most cases.

When deploying Elasticsearch manually it is not a problem to configure this by hand, however, we are moving more and more to a world where deployments are done fully automatically. In case you use fully automated deployment and depend on bash scripting for some of the tasks, the below scripts will be handy to use.

In my case I used the below scripts to automatically configure Elasticsearch on Oracle Linux 6 instances to listen on all available interfaces, to ensure that Elasticsearch is directly usable for external servers and users.

To ensure your Elasticsearch daemon is listening on all interfaces you will have to ensure the below line is present, at least in my case, as I have two external interfaces and one local loopback interface in my instance.

network.host: _eth0_,_eth1_,_local_

When you are sure your machine will always have two external network interfaces and one local loopback interface you want Elasticsearch to listen on, you could hardcode this. However, if you want a more generic and stable solution you should read the interface names and build this configuration line.

The ifconfig command will give you the interfaces in a human readable format which is not very usable in a programmatic manner. However, ifconfig will provide the required output, which means we can use it in combination with sed to get a list of the interface names only. The below example shows this:

[root@localhost tmp]# ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d'
eth0
eth1
[root@localhost tmp]#

However, this is not yet in the format we want, so we have to create a small script to bring it closer to that format. The below code example can be used for this:

#!/bin/bash

  for OUTPUT in $(ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d')
  do
   echo "_"$OUTPUT"_"
  done

If we execute this we will have the following result:

[root@localhost tmp]# ./test.sh
_eth0_
_eth1_
[root@localhost tmp]#

As you can see this looks more like the input we want for the Elasticsearch configuration file, however we are not fully done. First of all the _local_ is missing and we still have a multi-line representation. The below code example shows the full script you can use to build the configuration line. We have added the _local_ and we use awk to turn it into one comma separated line.

#!/bin/bash
 {
  for OUTPUT in $(ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d')
  do
   echo "_"$OUTPUT"_"
  done
echo "_local_"
 } | awk -vORS=, '{ print $1 }' | sed 's/,$/\n/'

If we run the above code we will get the below result:

[root@localhost tmp]# ./test.sh
_eth0_,_eth1_,_local_
[root@localhost tmp]#

You can use this in a wider script to ensure the line is written (including network.host:) to the /etc/elasticsearch/elasticsearch.yml file, which is used by Elasticsearch as its main configuration file. As stated, I used this script and tested it while deploying Elasticsearch on Oracle Linux 6. It is expected to work on other Linux distributions, however it has not been tested.
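
As a minimal sketch of such a wider script, assuming the interface-listing script shown above has been saved as /tmp/build_host_line.sh (a hypothetical path) and that Elasticsearch was installed from the RPM packages:

HOSTLINE=$(/tmp/build_host_line.sh)                            # e.g. _eth0_,_eth1_,_local_
echo "network.host: ${HOSTLINE}" >> /etc/elasticsearch/elasticsearch.yml
service elasticsearch restart                                  # restart so the new setting is picked up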

Monday, November 21, 2016

Using Oracle cloud to integrate Salesforce and Amazon hosted SAP

Oracle Integration Cloud Service (ICS) delivers “Hybrid” Integration. Oracle Integration Cloud Service is a simple and powerful integration platform in the cloud to maximize the value of your investments in SaaS and on-premises applications. It includes an intuitive web based integration designer for point and click integration between applications and a rich monitoring dashboard that provides real-time insight into the transactions, all running on Oracle Public Cloud. Oracle Integration Cloud Service will help accelerate integration projects and significantly shorten the time-to-market through its intuitive and simplified designer, an intelligent data mapper, and a library of adapters to connect to various applications.

Oracle Integration Cloud Service can also be leveraged during a transition from on premise to cloud or by building a multi-cloud strategy. As an example, Oracle provides a standardized connection between Salesforce and SAP as shown above.


An outline of how to achieve this integration is shown in the below video, which outlines the options and the ease of developing an integration between Salesforce and SAP and ensuring the two solutions work as an integrated and hybrid solution.



As enterprises move more and more to a full cloud strategy, having a central integration point already positioned in the cloud is ideal. As an example, SAP can run on Amazon. During a test and migration path to the cloud you most likely want to ensure you can test the integration between Salesforce and SAP without the need to re-code and re-develop the integration.



By ensuring you use Oracle Integration Cloud Service as your central integration solution, the move to a cloud strategy for your non-cloud native applications becomes much easier. You can add a second integration during your test and migration phase, and when your migration to, for example, Amazon has been completed you can discontinue the integration to your old on premise SAP instances.


This will finally result in an all-cloud deployment where you have certain business functions running in Salesforce and your SAP systems running in Amazon, while you leverage Oracle Integration Cloud Service to bind all systems together and make it a true hybrid multi-cloud solution.

Monday, November 14, 2016

Oracle Cloud API - authenticate user cookie

Whenever interacting with the APIs of the Oracle Compute cloud service, the first thing that needs to be done is to authenticate yourself against the API. This is done by providing the required authentication details; in return you will receive a cookie. This cookie is used for the subsequent API calls you make until the cookie lifetime expires.

The Oracle documentation shows an example as shown below:

curl -i -X POST -H "Content-Type: application/oracle-compute-v3+json" -d "@requestbody.json" https://api-z999.compute.us0.oraclecloud.com/authenticate/

A couple of things to keep in mind when looking at this example: it will execute the curl command against the REST API endpoint URL for the US0 cloud datacenter and it expects a file named requestbody.json to be available with the “payload” data.

In the example the payload file has the following content:
{
 "password": "acme2passwrd123",
 "user": "/Compute-acme/jack.jones@example.com"
}

The thing to keep in mind when constructing your own payload JSON is that the Compute- prefix in Compute-acme has to stay in place. Meaning, if your identity domain is “someiddomain” the value should look like “Compute-someiddomain” and not just “someiddomain”.

When executing the curl command you will receive a response like the one shown below:

HTTP/1.1 204 No Content
Date: Tue, 12 Apr 2016 15:34:52 GMT
Server: nginx
Content-Type: text/plain; charset=UTF-8
X-Oracle-Compute-Call-Id: 16041248df2d44217683a6a67f76a517a59df3
Expires: Tue, 12 Apr 2016 15:34:52 GMT
Cache-Control: no-cache
Vary: Accept
Content-Length: 0
Set-Cookie: nimbula=eyJpZGVudGl0eSI6ICJ7XC...fSJ9; Path=/; Max-Age=1800
Content-Language: en

The part that you need, the actual cookie data which needs to be used in later API calls, is only a subpart of the response received. The part you need is:

nimbula=eyJpZGVudGl0eSI6ICJ7XC...fSJ9; Path=/; Max-Age=1800

The rest of the response is not directly needed.
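
As a hedged example of how the cookie is then used, a follow-up call could look like the one below; the /instance/ path is only an illustration and the cookie value is the shortened one from the response above:

curl -i -X GET -H "Cookie: nimbula=eyJpZGVudGl0eSI6ICJ7XC...fSJ9" -H "Accept: application/oracle-compute-v3+json" https://api-z999.compute.us0.oraclecloud.com/instance/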

Monday, October 31, 2016

Oracle Linux - inspect hardware for configuration management database

In many cases the ideal world and the real world are miles apart. In an ideal world every system ever put into the datacenter is entered into a configuration management database and you will be able to find out with the click of a button which specific configuration is applied to a system, what its use is and which hardware components it is using. A second part of the ideal world is that your compute farm is built from exactly the same hardware everywhere. However, reality is grim and in general configuration management databases and asset management databases are not always as up to date as one would like.

When using Oracle Enterprise Manager and placing all operating systems under the management umbrella of Oracle Enterprise Manager you will already start to get the needed input for a unified and central database where you can look up a lot of the specifications of a system. However, Oracle Enterprise Manager is built around the database; management of (Oracle) applications was added at a later stage, just like the management of operating systems. For non-Oracle hardware the hardware inspection is also not always as deep as one would like.

However, it can be vital to have a more in-depth insight into the hardware that is used in a system, for example if you want to understand how your landscape is built up from a hardware point of view. A Linux tool that might be able to help you with that is lshw, which will give you, with a single command, an overview of the hardware present in your system.

The Oracle YUM repository has the needed packages for lshw, which makes the installation of lshw extremely easy as you can use the yum command for the installation as shown below:

yum install lshw

When using lshw in standard mode you will get a user friendly view of the hardware, as shown below. Interesting to note: the below is running on an Oracle Linux instance in the Oracle Compute cloud, so you will get some interesting insights into the inner workings of the Oracle Compute cloud while reading through the output. When running this on physical hardware the output will look a bit different and more realistic.

[root@testbox09 ~]# lshw
testbox09
    description: Computer
    product: HVM domU
    vendor: Xen
    version: 4.3.1OVM
    serial: ffc59abb-f496-4819-8d0c-a6fad4334391
    width: 64 bits
    capabilities: smbios-2.4 dmi-2.4 vsyscall32
    configuration: boot=normal uuid=FFC59ABB-F496-4819-8D0C-A6FAD4334391
  *-core
       description: Motherboard
       physical id: 0
     *-firmware:0
          description: BIOS
          vendor: Xen
          physical id: 0
          version: 4.3.1OVM
          date: 11/05/2015
          size: 96KiB
          capabilities: pci edd
     *-cpu:0
          description: CPU
          product: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
          vendor: Intel Corp.
          vendor_id: GenuineIntel
          physical id: 1
          bus info: cpu@0
          slot: CPU 1
          size: 2993MHz
          capacity: 2993MHz
          width: 64 bits
          capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp x86-64 constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
     *-cpu:1
          description: CPU
          vendor: Intel
          physical id: 2
          bus info: cpu@1
          slot: CPU 2
          size: 2993MHz
          capacity: 2993MHz
     *-memory:0
          description: System Memory
          physical id: 3
          capacity: 3584MiB
        *-bank:0
             description: DIMM RAM
             physical id: 0
             slot: DIMM 0
             size: 7680MiB
             width: 64 bits
        *-bank:1
             description: DIMM RAM
             physical id: 1
             slot: DIMM 0
             size: 7680MiB
             width: 64 bits
     *-firmware:1
          description: BIOS
          vendor: Xen
          physical id: 4
          version: 4.3.1OVM
          date: 11/05/2015
          size: 96KiB
          capabilities: pci edd
     *-cpu:2
          description: CPU
          vendor: Intel
          physical id: 5
          bus info: cpu@2
          slot: CPU 1
          size: 2993MHz
          capacity: 2993MHz
     *-cpu:3
          description: CPU
          vendor: Intel
          physical id: 6
          bus info: cpu@3
          slot: CPU 2
          size: 2993MHz
          capacity: 2993MHz
     *-memory:1
          description: System Memory
          physical id: 7
          capacity: 3584MiB
     *-memory:2 UNCLAIMED
          physical id: 8
     *-memory:3 UNCLAIMED
          physical id: 9
     *-pci
          description: Host bridge
          product: 440FX - 82441FX PMC [Natoma]
          vendor: Intel Corporation
          physical id: 100
          bus info: pci@0000:00:00.0
          version: 02
          width: 32 bits
          clock: 33MHz
        *-isa
             description: ISA bridge
             product: 82371SB PIIX3 ISA [Natoma/Triton II]
             vendor: Intel Corporation
             physical id: 1
             bus info: pci@0000:00:01.0
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: isa bus_master
             configuration: latency=0
        *-ide
             description: IDE interface
             product: 82371SB PIIX3 IDE [Natoma/Triton II]
             vendor: Intel Corporation
             physical id: 1.1
             bus info: pci@0000:00:01.1
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: ide bus_master
             configuration: driver=ata_piix latency=64
             resources: irq:0 ioport:1f0(size=8) ioport:3f6 ioport:170(size=8) ioport:376 ioport:c140(size=16)
        *-bridge UNCLAIMED
             description: Bridge
             product: 82371AB/EB/MB PIIX4 ACPI
             vendor: Intel Corporation
             physical id: 1.3
             bus info: pci@0000:00:01.3
             version: 01
             width: 32 bits
             clock: 33MHz
             capabilities: bridge bus_master
             configuration: latency=0
        *-display UNCLAIMED
             description: VGA compatible controller
             product: GD 5446
             vendor: Cirrus Logic
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: vga_controller bus_master
             configuration: latency=0
             resources: memory:f0000000-f1ffffff memory:f3020000-f3020fff
        *-generic
             description: Unassigned class
             product: Xen Platform Device
             vendor: XenSource, Inc.
             physical id: 3
             bus info: pci@0000:00:03.0
             version: 01
             width: 32 bits
             clock: 33MHz
             capabilities: bus_master
             configuration: driver=xen-platform-pci latency=0
             resources: irq:28 ioport:c000(size=256) memory:f2000000-f2ffffff
  *-network
       description: Ethernet interface
       physical id: 1
       logical name: eth0
       serial: c6:b0:ed:00:52:16
       capabilities: ethernet physical
       configuration: broadcast=yes driver=vif ip=10.196.73.178 link=yes multicast=yes
[root@testbox09 ~]#

Even though the above is interesting, it does not yet help in building a unified database containing the physical hardware of your servers. However, lshw has some more options that can be used, as shown below:

[root@testbox09 ~]# lshw --help
Hardware Lister (lshw) - B.02.17
usage: lshw [-format] [-options ...]
       lshw -version

        -version        print program version (B.02.17)

format can be
        -html           output hardware tree as HTML
        -xml            output hardware tree as XML
        -short          output hardware paths
        -businfo        output bus information

options can be
        -dump OUTFILE   save hardware tree to a file
        -class CLASS    only show a certain class of hardware
        -C CLASS        same as '-class CLASS'
        -c CLASS        same as '-class CLASS'
        -disable TEST   disable a test (like pci, isapnp, cpuid, etc. )
        -enable TEST    enable a test (like pci, isapnp, cpuid, etc. )
        -quiet          don't display status
        -sanitize       sanitize output (remove sensitive information like serial numbers, etc.)
        -numeric        output numeric IDs (for PCI, USB, etc.)

[root@testbox09 ~]#

The most interesting option to note from the above is -xml. This means you can get the above output in XML format. We can use the XML format option in a custom check within Oracle Enterprise Manager and instruct the agent deployed on Oracle Linux to use the XML output from lshw as input for Oracle Enterprise Manager, and so automatically maintain a hardware configuration management database in Oracle Enterprise Manager without the need to undertake manual actions.

For those who want to check the XML output, you can print it to screen or save it to a file using the below command:

[root@testbox09 ~]#
[root@testbox09 ~]# lshw -xml >> /tmp/lshw.xml
[root@testbox09 ~]# ls -la /tmp/lshw.xml
-rw-r--r-- 1 root root 12151 Oct 31 14:29 /tmp/lshw.xml
[root@testbox09 ~]#
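
For consuming this XML programmatically, a small hedged sketch is shown below. It assumes an xmllint build that supports the --xpath option (older libxml2 releases, such as the one shipped with some Oracle Linux 6 versions, may not have it, in which case a simple grep is a crude fallback) and it assumes the node layout produced by lshw B.02.17 as shown above:

xmllint --xpath 'string(//node[@class="system"]/product)' /tmp/lshw.xml   # e.g. "HVM domU"
xmllint --xpath 'string(//node[@class="system"]/serial)' /tmp/lshw.xml    # the system serial number
grep -m1 "<serial>" /tmp/lshw.xml                                         # crude fallback without xmllint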

Oracle Linux - resolve dependency hell

Whenever you have worked with a system that is not connected to a YUM repository you will know that installing software sometimes results in something known as dependency hell. You want to install a single package, however when you try to install the RPM file manually it tells you that you are missing dependencies. As soon as you have downloaded those, they will tell you that they have dependencies as well. Anyone ever attempting to install software like this will be able to tell you it is not a fun job to do and it can take a lot of time. However, having insight into dependencies upfront can save a lot of time.

A way to ensure that you know more upfront is to use some simple commands. When you have an Oracle Linux machine already installed which can access the public internet you can for example run the yum command with the deplist attribute.

The below example uses the deplist attribute where we filter on lines containing "dependency"; the reason for that is that by default it will also show you the "provider" result, which makes for a very long list.

[root@testbox09 lynis]# yum deplist man | grep dependency
  dependency: coreutils
  dependency: rpm
  dependency: nroff-i18n
  dependency: libc.so.6(GLIBC_2.3.4)(64bit)
  dependency: /bin/bash
  dependency: libc.so.6()(64bit)
  dependency: libc.so.6(GLIBC_2.4)(64bit)
  dependency: less
  dependency: config(man) = 1.6f-29.el6
  dependency: lzma
  dependency: libc.so.6(GLIBC_2.3)(64bit)
  dependency: rtld(GNU_HASH)
  dependency: bzip2
  dependency: findutils
  dependency: gzip
  dependency: /bin/sh
  dependency: mktemp >= 1.5-2.1.5x
  dependency: libc.so.6(GLIBC_2.2.5)(64bit)
  dependency: groff >= 1.18
  dependency: coreutils
  dependency: rpm
  dependency: nroff-i18n
  dependency: libc.so.6(GLIBC_2.3.4)(64bit)
  dependency: /bin/bash
  dependency: libc.so.6()(64bit)
  dependency: libc.so.6(GLIBC_2.4)(64bit)
  dependency: less
  dependency: config(man) = 1.6f-30.el6
  dependency: mktemp >= 1.5-2.1.5x
  dependency: libc.so.6(GLIBC_2.3)(64bit)
  dependency: rtld(GNU_HASH)
  dependency: bzip2
  dependency: findutils
  dependency: gzip
  dependency: /bin/sh
  dependency: lzma
  dependency: libc.so.6(GLIBC_2.2.5)(64bit)
  dependency: groff >= 1.18
  dependency: coreutils
  dependency: rpm
  dependency: nroff-i18n
  dependency: libc.so.6(GLIBC_2.3)(64bit)
  dependency: /bin/bash
  dependency: libc.so.6()(64bit)
  dependency: libc.so.6(GLIBC_2.4)(64bit)
  dependency: less
  dependency: libc.so.6(GLIBC_2.3.4)(64bit)
  dependency: lzma
  dependency: config(man) = 1.6f-32.el6
  dependency: rtld(GNU_HASH)
  dependency: bzip2
  dependency: findutils
  dependency: gzip
  dependency: /bin/sh
  dependency: mktemp >= 1.5-2.1.5x
  dependency: libc.so.6(GLIBC_2.2.5)(64bit)
  dependency: groff >= 1.18
[root@testbox09 lynis]#

Here you see that even a simple package such as man has a lot of dependencies. Without the filtering you will have lines like the one below:

 dependency: libc.so.6(GLIBC_2.2.5)(64bit)
   provider: glibc.x86_64 2.12-1.7.el6_0.3

This shows that libc.so.6 is provided by glibc.x86_64. Having this information up front can save a lot of time when preparing an installation on a disconnected machine. You can also use some rpm command attributes as shown below to get more insight into the dependencies an RPM file might have during installation:

  • rpm -Uvh --test *.rpm
  • rpm -qpR *.rpm
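
When a connected Oracle Linux machine is available, a hedged way to collect a package together with its resolved dependencies for transfer to the disconnected system is the yumdownloader tool from the yum-utils package; the destination directory below is just an example:

yum install yum-utils                             # provides yumdownloader
yumdownloader --resolve --destdir=/tmp/rpms man   # download man plus its resolved dependencies
ls /tmp/rpms                                      # copy these RPM files to the disconnected machine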

Security auditing Oracle Linux with Lynis

When it comes to security it is good practice to undertake auditing yourself. A large set of tools is available to do auditing on Linux systems. When running Oracle Linux and you have an Oracle oriented IT footprint, you most likely have Oracle Enterprise Manager running within the overall IT footprint. It is good practice to ensure that the security compliance framework is activated for all your Oracle Linux systems. This will ensure that the security checks are done constantly and Oracle Enterprise Manager will inform you when something is configured incorrectly. However, sometimes you want a second opinion and a second check on security.

One of the tools that is available as open source is Lynis, provided by a company called CISOfy. Lynis is an open source security auditing tool used by system administrators, security professionals, and auditors to evaluate the security defenses of their Linux and UNIX-based systems. It runs on the host itself, so it can perform more extensive security scans than remote vulnerability scanners.

Installing Lynis:
The installation of Lynis is extremely easy, the code is available on github and can be retrieved with a git clone command as shown below:

[root@testbox09 tmp]#
[root@testbox09 tmp]# git clone https://github.com/CISOfy/lynis
Initialized empty Git repository in /tmp/lynis/.git/
remote: Counting objects: 7092, done.
remote: Compressing objects: 100% (125/125), done.
remote: Total 7092 (delta 75), reused 0 (delta 0), pack-reused 6967
Receiving objects: 100% (7092/7092), 3.26 MiB | 1.99 MiB/s, done.
Resolving deltas: 100% (5159/5159), done.
[root@testbox09 tmp]#
[root@testbox09 tmp]#

As soon as you have the Lynis code on your Oracle Linux instance it can be used.

Running Lynis:
To start the standard Lynis auditing run you can run the below command in the location you have downloaded the Lynis code from Github:

./lynis audit system -Q

This will show the results on screen, however, the results are also stored in /var/log in the following files (a short sketch for querying the report follows after this list):

  • Test and debug information stored in /var/log/lynis.log
  • Report data stored in /var/log/lynis-report.dat
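
A small sketch for pulling the interesting lines out of the report data; the key names are based on the key=value format of lynis-report.dat and may differ slightly between Lynis versions:

grep "^warning" /var/log/lynis-report.dat      # warnings raised during the audit
grep "^suggestion" /var/log/lynis-report.dat   # hardening suggestions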

Below is an example of a Lynis run:

Conclusion:
If you need a fast additional security check, Lynis, next to some other available tools, is a great starting point to see what best fits your needs.

Application clusters in the Oracle cloud

Traditionally, applications have commonly been deployed in a single-instance manner: one application server running a specific application for a specific business purpose. When that application server encountered a disruption, this automatically resulted in downtime for the business.

As this is not the ideal situation, systems have been built more and more in a clustered fashion: multiple machines (nodes) all running an instance of the application and balancing load between the nodes. When one node fails the other nodes take over the load. This is a great model in which your end-users are protected against the failure of one of the nodes. Commonly an engineer would take the malfunctioning node, repair the issue and introduce it back to the cluster when fixed.

With the cloud (private cloud and public cloud) and the move to a more cattle-like model, the use of clustered solutions makes even more sense. In this model the engineer who traditionally fixed an issue and re-introduced the node back to the cluster will now be instructed to only spend a very limited time on fixing the issue. If he is unable to fix the issue on a node in a given number of minutes, the action will be to “destroy” the node and re-deploy a fresh node.

Due to this model engineers will not spend hours and hours on fixing an issue on an individual node; they will only spend a couple of minutes trying to fix the issue. Because of this the number of nodes an engineer can maintain will be much higher, resulting in a lower maintenance cost per node.

To be able to adopt a model where nodes are considered replaceable cattle and no longer pets, a couple of things need to be in place and taken care of. The conceptual prerequisites are the same for a private cloud as they are for a public cloud, even though the technical implementation might differ.

  1. Nodes should be stateless. 
  2. Nodes should be automatically deployable.
  3. Nodes should join the cluster automatically.
  4. The cluster needs to be auto-aware.


Nodes should be stateless.
This means that a node, an application node, is not allowed to have a state. Meaning, it cannot hold transactions or application data. The application node is, simply put, there to execute application tasks. Whenever a node is destroyed no data will be lost and whenever a node is deployed it can directly take its role in the cluster.

Nodes should be automatically deployable
A node should be deployable automatically. This means fully automatically, without any human interaction after the moment the node is deployed. Oracle provides a mechanism to deploy new compute nodes in the Oracle Public Cloud based upon templates in combination with customer definable parameters. This will give you, in essence, only a virtual machine running Oracle Linux (or another operating system if so defined). The node will have to be configured automatically after the initial deployment step. You can use custom scripting to achieve this or you can use Puppet or Chef like mechanisms. In general a combination of both custom scripting within the VM and Puppet or Chef is the most ideal solution for fully automated deployment of a new node in the cluster.

Nodes should join the cluster automatically 
In many cases the automatic deployment of a new node, deploying Oracle Linux and configuring the application node within the Oracle Linux virtual machine, is something that is achieved. What is lacking in many cases in the fully automated way of working is that the node actually joins the cluster. Depending on your type of application, application server and node-distribution (load balancing for example) mechanism, the technical implementation will differ. However, it is important to ensure that a newly provisioned node is able to directly become part of the cluster and take its role in the cluster.

The cluster needs to be auto-aware
The automatic awareness of the cluster goes partially into the previous section, where we mention the fact that a new node needs to join the cluster fully automatically and take the requested role in the cluster. This means that the cluster needs to be auto-aware and aware of the fact that a new node has joined. The cluster also needs to be automatically aware of the fact that a node malfunctions. In case one of the nodes becomes unresponsive the cluster should automatically ensure that the node is no longer served new workloads. For example, in case of an application server cluster which makes use of load-balancing, the malfunctioning node should be taken out of the balancing algorithm until the moment it is repaired or replaced. When using a product which is developed to be cluster aware, for example Oracle WebLogic, this might not be that hard to achieve and the cluster will handle this internally. When you use a custom built cluster, for example a micro-services based application running with NGINX and Flask and depending on load-balancing, you will have to take your own precautions and ensure that this auto-aware mechanism is in place.

Oracle Public cloud conceptual deployment
When we use the above model in the Oracle Public Cloud, a conceptual deployment could look like the one below, where we deploy a web-based application.


In this model, the API server will create a new instance for one of the applications in the application cluster it is part of. As soon as this is done the new server will report back to the API server. Based upon this, the machine will self-register with Puppet and all required configuration will be done on the node. The latest version of the application software will be downloaded from the Git repository and, as a last step, the new node will be added to the load balancer cluster to become a full member of the application cluster.

The above example uses a number of standard components from the Oracle cloud, however, when deploying a full working solution you will have to ensure you have some components configured specifically for your situation. For example, the API server needs to be built to undertake some basic tasks and you will have to ensure the correct Puppet plans are available on the Puppet server to make sure everything will be automatically configured in the right manner.

As soon as you have done so, however, you will have a fully automatically scaling cluster environment running in the Oracle Public Cloud. Once you have done this for one environment it is relatively easy to adapt it to other types of deployments on the same cloud.

API based architecture for web applications in the Oracle Cloud

When it comes down to developing websites and web applications, a lot has changed since the time I developed my first web based applications. Changing ways of developing, changing platforms and frameworks, changing programming languages and changing architectures. Almost every company today considers having a website a given. Where websites used to be a relatively static brochure showing what a company was about, we are already well on track to making websites a customer portal and application. Websites and web based applications are becoming more and more a part of the overall customer experience, and customers expect that they can do everything and find everything they might want on a corporate website.

This makes a website or web based application more and more critical to success. A failing website, a slow website or a website unable to deliver the experience expected by the customer will have a direct negative effect on customer satisfaction and, as a result, a decreasing willingness to do business with the company.

To cope with the growing importance of web based applications and with the requirement to be scalable, the architectural principles used to develop web based applications are changing rapidly. One of the examples currently seen is the shift to API centric development.

Traditional architecture
The traditional architecture, as shown in figure 1, is based upon a direct connection between the web application and the Oracle database instance (or any other database for that matter). A customer would interact with the web application using HTTP or HTTPS (A) and the web application would interact with the database (B) using SQL*Net whenever needed.

figure 1 - traditional architecture 

Simple API based architecture
When looking at applications that are currently being built, a number of web based applications are still developed using the more traditional architecture as shown above. Companies who require their applications to be more fault tolerant, scalable and secure are adopting new architecture principles. Commonly a more API based architecture is used where the web application does not communicate directly with the database. In those cases the customer facing application communicates with an API and the API service communicates with the database on behalf of the web application.

Another observation is the trend that more and more open source frameworks and solutions are used. In the below example you will, for example, see NGINX and Flask being deployed on Oracle Linux for serving the APIs to the web applications.

In the below model the web application is not directly communicating with the Oracle Database. In this model a customer would interact with the web application using HTTP or HTTPS (A) and the web application would interact with the APIs (B) using HTTPS whenever needed. The API server running NGINX and Flask will interact with the database server (C) using SQL*Net whenever needed.

figure 2 - API based architecture

As shown in figure 2, an additional layer is introduced in the form of NGINX and Flask deployed on Oracle Linux. The use of NGINX and Flask is only an example, chosen in this post as it is becoming a more and more popular deployment in this type of scenario. Other solutions can also be used in this place and play this role.

Added value
Companies who do require their applications to be more fault tolerant, scalable and secure are adopting new architecture principles.

The added options for scalability and fault tolerance are provided in this “layer” by the option to create a cluster of servers providing the API functions to the web application layer. As shown in figure 3, you can add a cluster of NGINX and Flask nodes running on Oracle Linux to your overall deployment architecture. When running a deployment as shown in this example on a public or private cloud you can quickly scale your API layer up and down when needed.

What you have to take into account when deploying automatic or semi-automatic scaling of your API layer is the way you will distribute the load over the different nodes and how your routing / load balancing solution will be made aware of nodes that are added or removed.

By having a clustered API layer you will at the same time add resilience against the failure of a node. In the deployment model shown in figure 3 the loss of a single node will not stop your web application from functioning. This provides additional assurance that your application will be available to end users.

Figure 3 - API cluster

The added security of this deployment model, regardless of whether a clustered layer is used or not, comes from the fact that your customer facing web application is no longer directly connected to your database. In a traditional web configuration a malicious user could try to exploit, for example, SQL injection and, if that succeeded, would directly interact with the database. In the API based model an injection attempt would result in sending injected code to the API and not to the database directly.

In this model you can add security measures in the web application itself, the API layer and the database, as opposed to having security measures only in the web application and the database. By adding this additional layer you can build the API to function as an additional security tollgate.

Enabling the next step
An added benefit of this model is that it enables enterprises to prepare for, or adopt, the next step in architecting applications. The model described above is ideally suited to developing microservices oriented architecture applications. In a microservices oriented architecture you can use the API layer to develop and run your microservices.

Applications built upon microservices are currently seen as the way forward and the next step to build robust, flexible and scalable applications.

Moving it all to the cloud
The above example is drawn in a way that hints at a traditional deployment model. As stated, when deployed in a private or public cloud you will have the added benefits of options to quickly scale up and scale down the number of nodes in your API layer.

When selecting a public cloud solution the option for Oracle Public Cloud might be quite obvious. As Oracle provides both the Oracle Database and Java, an Oracle database and Java oriented application might find its best public cloud partner with Oracle.

Figure 4 - cloud deployment

Database cloud deployment
The deployment of your database in the Oracle public cloud can be done by making use of the Oracle Public Cloud database service.

Java application deployment
For your Java application you can make a selection out of a couple of products. The Oracle Java Cloud Service might make the most sense. However, you might also be able to make use of the Oracle Application Container Cloud, or you can deploy everything yourself by making use of the Oracle Compute service, which provides you raw computing power in the form of an Oracle Linux VM.

API layer deployment
Depending on the language you want to develop your application in and the level of “outside of the box” development you might want to do, you can select the Oracle Integration Cloud; you can also make use of the Oracle Application Container Cloud, which enables you to deploy Docker containers and run your custom API code in a container. When using the combination of NGINX and Flask with a focus on developing Python code (which is a very good choice for the API layer in my personal opinion), you might want to make use of the Oracle Compute service and the Oracle Linux VMs provided by this public cloud service.

Bridging the hybrid database gap for Oracle with memcached

Enterprises are starting to adopt the hybrid cloud model. This means that in some cases applications that are normally hosted on premise are moved into the cloud in full. It also means that in some cases only parts of an application are moved to the cloud and that some systems on which an application depends will stay on premise and will not move to the cloud.

A common question asked when discussing moving parts of an IT estate to the cloud is how to bridge the gap between the systems in the cloud and the systems that remain on premise. The below diagram shows a common deployment in enterprises where one application is depending on the database of another application.

Figure 1 - shared database deployment

In this deployment shown in figure 1 the following applies:
  • Application 0 makes use of database instance A
  • Application 1 makes use of database instance B and makes use of database instance A

When deployed in a single datacenter the connections between the applications and the databases will all be equal (in general). No significant delay or latency is to be expected and the user experience is the same for the users of application 0 and application 1.

Moving to the cloud
In case a requirement is stated that application 1 is moved to the cloud and application 0 (for a specific reason) will stay on premise, including its directly associated database, the deployment model will start to look as shown in figure 2.

Figure 2 - Crossing the line

In this case the worry of many companies is with connection A shown in the above figure: they worry about potential latency over connection A, and they worry what might happen to the availability of application 1 in case the connection becomes unavailable for a moment.

A possible solution is making use of a caching mechanism, a solution often used to take workload off a database server and speed up application performance. A caching solution like this can also be used to bridge the gap between cloud based applications and on premise data stores. Do note: data stores, as this can also be something other than the Oracle database used in this example.

Using cache as a bridge
A good open source solution for this is memcached, which is used by a large number of enterprises. It is good to realize what memcached is: an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls or page rendering. If your application is able to function based upon this principle, memcached is a very good solution to implement to mitigate the risk of a broken or limited connection between application 1 and database A. This would result in a deployment as shown in figure 3.

figure 3 - using memcached

Understanding memcached
To fully grasp the possibilities it is important to take a closer look at memcached and how it can be used. As stated, memcached is an open source in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls or page rendering. You will be able to run memcached on an Oracle Linux instance; this can be an instance in the Oracle public cloud as part of the compute service (as shown in this example) or an Oracle Linux instance deployed in a private cloud / traditional bare-metal deployment.
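
As a minimal sketch of getting memcached running on such an Oracle Linux 6 instance (package and service names as provided by the standard yum repositories; tune the cache size and listen address in /etc/sysconfig/memcached to your own situation):

yum install memcached                               # install memcached from the yum repository
service memcached start                             # start the daemon, listening on port 11211 by default
chkconfig memcached on                              # start it automatically at boot
printf "stats\nquit\n" | nc 127.0.0.1 11211 | head  # quick check that the daemon responds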

Optimize the database footprint with AWR Warehouse

In general Oracle databases play a role in wider critical production systems used to provide vital services to the business. In cases where databases are deployed and used to almost their maximum capabilities it is important to ensure you have a strategy for monitoring and tuning performance.

In those cases the majority of the companies will invest in implementing the correct monitoring and management solutions. For Oracle databases this is commonly, and for good reasons, Oracle Enterprise Manager. Oracle Enterprise Manager will provide a lot of options for monitoring performance out of the box and as part of the free (gratis) base installation.

When dealing with critical database systems, as described above, that are in need of more thorough performance monitoring and tuning, companies will make use of AWR. A lesser known option is AWRW, or in full, the Automatic Workload Repository Warehouse.

AWR Warehouse
The AWR Warehouse is part of Oracle Enterprise Manager, provides a solution to one of the shortcomings of “standard” AWR and offers a lot more options to DBAs and performance tuning specialists. With “standard” AWR you will be able to keep an 8 day set of data on your local database server. The advantage of AWR Warehouse is that all AWR data is collected and stored in one central warehouse as part of Oracle Enterprise Manager.



This provides a number of direct advantages as listed below:
  • The ability to store a long(er) period of AWR data
  • The ability to easily compare AWR data from different databases in one single location
  • Use out of the box diagnostics features from OEM on the historical AWR snapshots

Query, analyze and compare
One of the things AWR Warehouse supports you in is making your performance tuning team more efficient. With AWR Warehouse you have the option to query the AWR snapshots directly; you can analyze the data and run the same query for another database instance, for all database instances on your engineered system, or for the entire IT footprint.

As an example, if you find a sub-optimal implementation of SQL code in an isolated database you might be interested if this same implementation is used in other databases across your estate. By making use of AWR Warehouse you will have the ability to check with one single query on which systems this also might be an issue and where you might need to do code refactoring or performance tuning. 

The business benefit
The benefits to the business are obvious. By enabling performance tuning teams, development teams and DBAs to analyze all databases at once using a centralized AWR Warehouse, the time to find possible performance issues is shortened.

The ability to analyze AWR reports and find possible performance issues, and the effectiveness of doing so, are drastically improved, while the time needed for the analysis is shortened.

AWR Warehouse gives you the ability to move away from case-by-case tuning and towards a more overall tuning strategy. In general tuning teams and DBAs work on a case by case basis where they take on an isolated issue in a single database. Tracking down whether the same type of performance issue exists in another database somewhere in the vast IT footprint is often a tedious task which is not performed. AWR Warehouse provides the option to run the same diagnostics you run for a single isolated database on all databases in your IT footprint. This moves a company to a wider, and better, tuning strategy which directly benefits the business.

By optimizing your database and finding issues in your SQL code you will be able to make your database instances more effective, ensuring you remove bottlenecks and sub-optimal implementations that use far more resources than required. Essentially this frees compute resources which can be used for other purposes. It provides the ability to run more database instances on existing hardware or to grow the load on your systems without the need to purchase additional hardware.

The business case 
The business case for purchasing the licenses required to use AWR Warehouse needs to involve a couple of data points to make a fair case for investing in this.
  • The number of critical databases in need of tuning
  • The amount of FTE spending time on tuning
  • The (potential) loss in revenue due to slow performance
  • The (potential) gain in freeing compute resources due to tuning
  • The (potential) not needed investment in hardware expansion
Those pointers should be incorporated in the business case, next to the data points that you would include in any standard business case. Failing to include the above will result in a sub-optimal business case.

General advice
In general the advice is to look into using AWR Warehouse when: 
  • You have an Oracle database footprint which is significant. Significant in this case is open for discussion; we use a threshold of 15 production databases.
  • You have the need to tune your databases in an optimized manner without having a significant number of people invested in optimizing.
  • You have a system sizing which is “tight” and you need to ensure your databases are optimized in the most optimal way
  • You have a system sizing which is “generous” and you like to limit the number of resources per database to free resources for other use (other / more database instances on the same hardware)
  • You foresee that the load on your systems will grow in the near future and you need to ensure you are prepared for this and that database response times will stay acceptable.