Sunday, December 31, 2017

Oracle Linux - start Apache Kafka as service

In a previous post I showcased how to run Apache Kafka on Oracle Linux. That was intended as an example for testing purposes; the downside was that you needed to start ZooKeeper and Kafka manually. Adding them to the startup scripting of your Oracle Linux system makes sense. Not only in a test environment, but especially in a production environment, you want Kafka started when you boot your machine. The below code snippet can be used for this.
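A minimal sketch of such an init script is shown below. It assumes Kafka is installed under /opt/kafka, as in the install post, and uses the start and stop scripts shipped with Kafka; adjust paths, runlevels and sleep times to your own situation.

#!/bin/bash
# chkconfig: 2345 80 20
# description: Apache Kafka (and ZooKeeper) startup script

KAFKA_HOME=/opt/kafka

case "$1" in
  start)
    echo "Starting ZooKeeper and Kafka"
    $KAFKA_HOME/bin/zookeeper-server-start.sh -daemon $KAFKA_HOME/config/zookeeper.properties
    sleep 5
    $KAFKA_HOME/bin/kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
    ;;
  stop)
    echo "Stopping Kafka and ZooKeeper"
    $KAFKA_HOME/bin/kafka-server-stop.sh
    $KAFKA_HOME/bin/zookeeper-server-stop.sh
    ;;
  restart)
    $0 stop
    sleep 5
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac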

Remember that, after you place this script in /etc/init.d, you have to use chkconfig to add it as a service and use the service command to start it for testing, as shown below.
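Assuming the script is saved as /etc/init.d/kafka, registering and starting it could look like this:

chkconfig --add kafka
chkconfig kafka on
service kafka start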

Saturday, December 30, 2017

Oracle Linux - Install Apache Kafka

Apache Kafka is an open-source stream processing platform developed by the Apache Software Foundation written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. Its storage layer is essentially a "massively scalable pub/sub message queue architected as a distributed transaction log," making it highly valuable for enterprise infrastructures to process streaming data. Additionally, Kafka connects to external systems (for data import/export) via Kafka Connect and provides Kafka Streams, a Java stream processing library.



In this blogpost we will install Apache Kafka on Oracle Linux. The installation is a test setup that is not ready for production environments, but it can very well be used to explore Apache Kafka running on Oracle Linux.

Apache Kafka is also provided as a service in the Oracle Cloud in the form of the Oracle Cloud Event Hub. This provides you with a running Kafka installation that can be used directly from the start. The below video shows the highlights of this service in the Oracle Cloud.


In this example we will not use the Event Hub service from the Oracle Cloud; we will install Kafka from the ground up. This can be done on a local Oracle Linux installation or on an Oracle Linux installation in the Oracle Cloud, making use of the Oracle IaaS components.

Prepare the system for installation.
In essence, the most important step you need to undertake is to ensure you have Java installed on your machine. The below steps outline how this should be done on Oracle Linux.

You can install the Java OpenJDK using YUM and the standard Oracle Linux repositories.

yum install java-1.8.0-openjdk.x86_64

You should now be able to verify that Java is installed, as shown in the example below.

[root@localhost /]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
[root@localhost /]#

This does not, however, ensure that JAVA_HOME and JRE_HOME are set as environment variables. To make sure they are, add the following two lines to /etc/profile.

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre

After you have made the changes to this file, reload the profile by issuing a source /etc/profile command. This ensures that the JRE_HOME and JAVA_HOME environment variables are loaded correctly.

[root@localhost ~]# source /etc/profile
[root@localhost ~]#
[root@localhost ~]# env | grep jre
JRE_HOME=/usr/lib/jvm/jre
JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
[root@localhost ~]#

Downloading Kafka for installation
Before following the below instructions, it is good practice to check what the latest stable Apache Kafka version is and download that. In our case we download the file kafka_2.11-1.0.0.tgz, the version we want to install in our example installation.

[root@localhost /]# cd /tmp
[root@localhost tmp]# wget http://www-us.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz
--2017-12-27 13:35:51--  http://www-us.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz
Resolving www-us.apache.org (www-us.apache.org)... 140.211.11.105
Connecting to www-us.apache.org (www-us.apache.org)|140.211.11.105|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 49475271 (47M) [application/x-gzip]
Saving to: ‘kafka_2.11-1.0.0.tgz’

100%[======================================>] 49,475,271  2.89MB/s   in 16s 

2017-12-27 13:36:09 (3.01 MB/s) - ‘kafka_2.11-1.0.0.tgz’ saved [49475271/49475271]

[root@localhost tmp]# ls -la *.tgz
-rw-r--r--. 1 root root 49475271 Nov  1 05:39 kafka_2.11-1.0.0.tgz
[root@localhost tmp]#

You can untar the downloaded file with tar -xvf kafka_2.11-1.0.0.tgz and then move it to the location where you want to place Apache Kafka. In our case we want to place Kafka in /opt/kafka, so we undertake the below actions:

[root@localhost tmp]# mkdir /opt/kafka
[root@localhost tmp]#
[root@localhost tmp]# cd /tmp/kafka_2.11-1.0.0
[root@localhost kafka_2.11-1.0.0]# cp -r * /opt/kafka
[root@localhost kafka_2.11-1.0.0]# ls -la /opt/kafka/
total 48
drwxr-xr-x. 6 root root    83 Dec 27 13:39 .
drwxr-xr-x. 4 root root    50 Dec 27 13:39 ..
drwxr-xr-x. 3 root root  4096 Dec 27 13:39 bin
drwxr-xr-x. 2 root root  4096 Dec 27 13:39 config
drwxr-xr-x. 2 root root  4096 Dec 27 13:39 libs
-rw-r--r--. 1 root root 28824 Dec 27 13:39 LICENSE
-rw-r--r--. 1 root root   336 Dec 27 13:39 NOTICE
drwxr-xr-x. 2 root root    43 Dec 27 13:39 site-docs
[root@localhost kafka_2.11-1.0.0]#

Start Apache Kafka
The above steps should have placed Apache Kafka on your Oracle Linux system; now we will have to start it and test that it is working. Before we can start Kafka we first have to ensure ZooKeeper is up and running. To do so, execute the below command in the /opt/kafka directory.

bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

Depending on the sizing of your machine you might want to change some things in the startup script for Apache Kafka. When, as in my case, you deploy Apache Kafka on an Oracle Linux test machine, you might not have as much memory allocated to it as you would have on a "real" server. The below line in the bin/kafka-server-start.sh file sets the memory heap size that will be used.

    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"

In our case we changed the minimum heap size to 128 MB, which is more than adequate for testing purposes but might be far too little for a production system. The below is the setting as used for this test:

    export KAFKA_HEAP_OPTS="-Xmx1G -Xms128M"


This should enable you to start Apache Kafka for the first time as a test on Oracle Linux. You can start Apache Kafka from the /opt/kafka directory using the below command:

bin/kafka-server-start.sh config/server.properties

You should see a trail of messages from the startup routine and, if all has gone right, the last message should be the one shown below. This is an indication that Kafka is up and running.

INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

Testing Kafka
Now that we have Apache Kafka up and running, we should test whether it is working as expected. Kafka comes with a number of scripts that make testing easier. The below scripts come in useful when starting to test (or debug) Apache Kafka:

bin/kafka-topics.sh
For taking actions on topics, for example creating a new topic

bin/kafka-console-producer.sh
Used in the role of producer for sending event messages

bin/kafka-console-consumer.sh
Used in the role of consumer for receiving event messages.

The first step in testing is to ensure we have a topic in Apache Kafka to publish event messages to. For this we can use the kafka-topics.sh script. We will create the topic "test" as shown below:

[vagrant@localhost kafka]$
[vagrant@localhost kafka]$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
[vagrant@localhost kafka]$

To ensure the topic is really available in Apache Kafka we list the available topics with the below command:

[vagrant@localhost kafka]$ bin/kafka-topics.sh --list --zookeeper localhost:2181
test
[vagrant@localhost kafka]$

Having a topic in Apache Kafka enables you to start producing messages as a producer. The below example showcases starting the kafka-console-producer.sh script, which gives you an interactive command line where you can type messages.

[vagrant@localhost kafka]$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
hello 1
hello 2
thisisatest
this is a test
{test 123}

[vagrant@localhost kafka]$

As the whole idea is to produce messages on one side and receive them on the other, we also have to test the subscriber side of Apache Kafka to see if we can receive the messages on the topic "test". The below command subscribes to the topic "test"; the --from-beginning option indicates that we want to receive not only event messages created from this moment on, but all messages from the beginning (creation) of the topic, as far as they are still available in Apache Kafka.

[vagrant@localhost kafka]$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
hello 1
hello 2
thisisatest
this is a test
{test 123}

As you can see the event messages we created as the producer are received by the consumer (subscriber) on the test topic.

Wednesday, December 27, 2017

Oracle Cloud - The future of retail

Retail in general is changing; customers are more and more getting into the driver's seat and voicing what they expect from a retailer. The days when retailers were assured of business simply because of where their store was located are over. Customers increasingly use online retailers in combination with brick and mortar stores. Ensuring you provide exactly the services customers desire is the right survival tactic for most retailers. Those who are unable to make this change not uncommonly find themselves going bankrupt, simply because customers overlook them and do business with other retailers.

Next to providing customers with what they know they want, there is the tactic of understanding the customer in such a way that you can provide them with services they did not know they wanted. Surprising customers with unexpected services and options is vital for survival.

Being able to deliver goods the same day to any location the customer might be at or might visit, changing advertising screens based upon the customers close to them, dynamically changing pricing throughout the day, analyzing customers' eye movements to optimize shelving, or deploying robots in stores to help customers... all are examples of retail innovations that are more than real and are already being used and tested by retailers.



In this presentation a number of innovations are showcased and aligned with the Oracle Cloud technology that supports retailers in implementing them. Leveraging the Oracle Cloud for retail innovation is something retailers should look into to ensure they stay ahead of the competition.

Oracle Cloud - Enterprise Security

Security is more and more becoming a topic that gets the attention it needs. Enterprises and C-level executives are starting to grasp the importance of security, and legislation forcing enterprises to have the right level of security in place to safeguard customer information and to ensure the safety of vital infrastructure services is pushing the topic in the right direction. Even though the topic is starting to get the correct level of attention, the actual state of security at most enterprises is still lagging behind the reality of the threats. The majority of enterprises are still relying on old and outdated mechanisms to secure their vital IT assets and data.



This presentation outlines the brutal truth about the state of security at the majority of enterprises. Additionally, it gives insight into the solutions Oracle and the Oracle Cloud provide to help enterprises face the security challenges of today.

Tuesday, December 19, 2017

Oracle Management Cloud - Manage multiple clouds like OPC, AWS & AZURE

Almost no enterprise will have all of its systems in a single datacenter or with a single cloud provider. With the rise of hybrid cloud strategies and of cloud native and cloud born applications, we see more and more that application footprints become hybrid and span multiple cloud providers, and in some cases multiple clouds and customer datacenters. There are very good reasons for this, and technically there are no issues in achieving it.

However, management and monitoring of systems and services in a distributed environment spanning multiple cloud vendors and private datacenters can become extremely difficult if you do not plan for it and have a strategy for it. That strategy can, and this is very advisable, include the use of one single and central management and monitoring application.

When you use Oracle Management Cloud you can, for example, monitor an application as a single application even though it might span multiple cloud vendors.



The screenshot above shows the automatically generated and updated application topology of an application spanning both the Oracle Public Cloud (OPC) and AWS. In this example we have servers and services running with both cloud providers; however, monitoring is done from a single solution.

Having the option to manage all your applications, regardless of the location where they reside, is a big benefit for every company and can improve the stability of your IT footprint and lower the time needed to resolve incidents.

Oracle Linux - Base64 encoding

Base64 is a group of similar binary-to-text encoding schemes that represent binary data in an ASCII string format by translating it into a radix-64 representation. The term Base64 originates from a specific MIME content transfer encoding. For all clarity, Base64 encoding is NOT encryption and it will not make your message or data secure. However, a lot of good use cases for using a Base64 encoding exist and you will come across them frequently.

Working with Base64 encoded data (files, strings, etc.) while developing scripts in Bash requires you to understand the basics of how to handle it. Under Oracle Linux the most common way is using the base64 command.

As an example, we have a string on which we want to apply base64 encoding:

[vagrant@consul-dc2 tmp]$ echo "this is a clear text" | base64
dGhpcyBpcyBhIGNsZWFyIHRleHQK
[vagrant@consul-dc2 tmp]$ 

In the above example you will see that the clear text string is transformed into a base64 encoded string.

As an example, we have the base64 encoded string from the previous example and we want to decode this to clear text:

[vagrant@consul-dc2 tmp]$ echo "dGhpcyBpcyBhIGNsZWFyIHRleHQK" | base64 --decode
this is a clear text
[vagrant@consul-dc2 tmp]$

This, in effect, is all you have to do to handle base64 encoding under Oracle Linux.
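The same command also works on files; a minimal sketch encoding a file and decoding it again (the file names are just examples):

base64 /tmp/input.txt > /tmp/input.b64
base64 --decode /tmp/input.b64 > /tmp/output.txt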

Monday, December 18, 2017

Oracle Linux | Consul ServiceAddress from multiple datacenters

Consul, developed by HashiCorp, is a tool commonly used for service discovery and service registration in microservice based landscapes. When running a distributed landscape spanning multiple datacenters or multiple Oracle Cloud zones you will, most likely, be running multiple Consul datacenter clusters. A Consul datacenter cluster will take care of all service registration requests and all service discovery requests for that specific datacenter. Service discovery can be done based upon a REST API interface or based upon a DNS resolve mechanism.

The statement that a Consul datacenter cluster takes care of only the services registered in that datacenter is not fully true. You can connect the Consul clusters in the different datacenters or Oracle Cloud zones together with a concept called WAN gossip. Using WAN gossip the clusters will be aware of each other and you can use your local cluster to query services in the other datacenter.

An example of the construct of a FQDN is shown below. In this example we query a service named testservice in datacenter dc1. A DNS resolve on this FQDN will give you all the IP addresses for servers (or Docker containers) running this service.

testservice.service.dc1.consul

If we query the same Consul cluster without dc1 (testservice.service.consul) we will get exactly the same IPs, as the DNS service from Consul defaults the datacenter name to the one it is responsible for. However, if we query testservice.service.dc2.consul we will get a list of all IPs for instances of the testservice registered in the other datacenter.

In a lot of cases this is a very workable solution that covers most situations, and it also gives a level of safeguarding against cross-DC requests. However, in some cases you would like to have a full list of all IPs for a service without taking into consideration in which DC they reside.

In effect, Consul does not support this out of the box at this moment. If you use the API and not DNS, the below command is the quickest way to get a list of all IP addresses for a service without taking into account in which datacenter they reside.

[vagrant@consul-dc2 testjson]$ { curl -s "http://localhost:8500/v1/catalog/service/testservice?dc=dc1" ; curl -s "http://localhost:8500/v1/catalog/service/testservice?dc=dc2" ; } | jq '.[].ServiceAddress'
"10.0.1.16"
"10.0.1.17"
"10.0.1.15"
"10.0.1.16"
"10.0.2.20"
"10.0.2.15"
"10.0.2.16"
[vagrant@consul-dc2 testjson]$ 

A similar command could be made using dig and leveraging the DNS way of doing things. However, for some reason it feels like this is a missing option in Consul.
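A sketch of such a dig based variant, assuming the Consul DNS interface is listening on its default port 8600 on localhost, could look like this:

{ dig +short -p 8600 @127.0.0.1 testservice.service.dc1.consul ; dig +short -p 8600 @127.0.0.1 testservice.service.dc2.consul ; }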

Do note that we also use jq. jq is available from the Oracle Linux YUM repository; however, you will have to enable an additional channel as it is not available in the ones enabled by default.

Saturday, December 09, 2017

Oracle Linux - use tee to write to stdout and file

When writing Bash scripts for Oracle Linux, a very common scenario is that you want to write something both to the screen and to a file. In effect there is nothing that prevents you from writing one line that writes to the screen and another that writes to a file. However, it is not that practical. You could decide to write a function that takes the line as input and handles both tasks, which already limits the number of lines of code you need. However, a more elegant solution can be found in the tee command.

The tee command will read from standard input and write to standard output and files and can take the following flags:

-a, --append  : append to the given FILEs, do not overwrite

-i, --ignore-interrupts : ignore interrupt signals

So, in effect, if you want to write to the screen and a file in one single line of code, you can use the tee command to achieve this. An example is shown below where we write "hello world" to both the screen and a file.

[root@docker consul]# echo "hello world" | tee /tmp/world.txt
hello world
[root@docker consul]# cat /tmp/world.txt 
hello world
[root@docker consul]# 
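
When you want to add to the file instead of overwriting it, the -a flag mentioned above can be used, for example:

echo "hello again" | tee -a /tmp/world.txt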

The nice thing about the tee command is that this works for all output that is sent to stdout. Meaning, you can spool almost everything to a file and show it on the screen at the same time.

Wednesday, November 22, 2017

Oracle Linux - import / load a docker image

When running Docker on Oracle Linux (or any other distro) you can pull images from a registry. The pull command will contact the registry and download the needed image. You can use a public registry such as hub.docker.com or you can have a private registry within your secured environment. This mechanism works fine and you should always strive to have the option to use a registry, be it private or public. However, in some cases you do not have the luxury of a registry and you will have to import / load an image directly into Docker to be able to start a container based upon that image.

When you want to export an image and import it on another Docker node the following commands should be used.

To save an image use the following command:
docker save -o {save image to path} {image name}

To load an image use the following command:
docker load -i {path to image tar file}
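
As a concrete sketch, using the Oracle Linux 7 slim image as a (hypothetical) example, saving it on one node and loading it on another could look like this:

docker save -o /tmp/oraclelinux-7-slim.tar oraclelinux:7-slim
docker load -i /tmp/oraclelinux-7-slim.tar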

It is important to understand you need to use the save and load commands and not the export and import commands as they do very different things:

export - Export a container's filesystem as a tar archive
import - Import the contents from a tarball to create a filesystem image
save - Save one or more images to a tar archive (streamed to STDOUT by default)
load - Load an image from a tar archive or STDIN

Monday, November 20, 2017

Oracle Linux - Set line numbers in VI by default

Without fail almost all Linux distributions come with an editor, vi, that can be used from the command line. Most people who spend time on the Linux command line will be able to use vi. One of the things that can be done in vi is showing line numbers, which is extremely handy when you are working on code from the command line. When you want to turn line numbers on you can do so directly from within vi.

Setting line numbers on can be done using:

:set nu

However, when you need to do this often, turning line numbers on every time you open vi can become annoying. You would like to have line numbers on by default. The easy way to do so is by ensuring you have a local settings file for vi in your home directory; the file should be named .vimrc

Adding the below content to .vimrc in your home directory will ensure that line numbers are always on when you start vi under Oracle Linux.

[root@localhost ~]# cat ~/.vimrc 
set number
[root@localhost ~]# 

This is, however, only one of many settings you can use to change the default behaviour of vi. When working a lot with vi you might want to dive into the options that influence the default behaviour, as it can save you a lot of time.
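As a purely illustrative example, a slightly extended ~/.vimrc could look like this:

set number      " always show line numbers
syntax on       " enable syntax highlighting
set tabstop=4   " display a tab as 4 spaces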

Oracle Linux - Redis & transparent hugepage settings

When deploying Redis on Oracle Linux you might run into a warning about transparent hugepage settings. The statement made by Redis is that it might have performance issues when running on a system that is configured to use transparent hugepages (THP). THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages in memory. If you want to be compliant with the best practices from Redis you will have to take notice of this warning during startup.

In general, the following is the statement from Redis that you will see:

Transparent huge pages must be disabled from your kernel. Use echo never > /sys/kernel/mm/transparent_hugepage/enabled to disable them, and restart your Redis process.

As you can see, the statement also includes a clear message on how to resolve this. To be able to see if things went as expected you will have to understand how the content of this file is represented. In general there are a number of valid values:

always
Transparent hugepages will always be used by Oracle Linux

madvise
Oracle Linux will ask madvise for advice on whether transparent hugepages should be used or not. The madvise() system call is used to give advice or directions to the kernel about the address range beginning at address addr and with size length bytes. In most cases, the goal of such advice is to improve system or application performance.

never
Transparent hugepages will never be used by Oracle Linux

If you check the file in question you will see that the active choice is represented by the value between brackets. In our case we have selected never and, as you can see in the below example, never is between brackets.

[root@localhost tmp]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@localhost tmp]# 

Understanding which value is active, in our case never, is vital for understanding the current setting of your system. Do note that this means you might want to build a check that is a bit more intelligent than simply doing a cat on the file.
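A minimal sketch of such a check, extracting only the active value from between the brackets, could look like this (it should print never when THP is disabled):

grep -o '\[.*\]' /sys/kernel/mm/transparent_hugepage/enabled | tr -d '[]'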

Friday, November 03, 2017

Oracle linux - building small Docker containers while using yum

Docker images, and more precisely Docker containers, are in general small. The rule for Docker containers is: keep them as small as possible. Building containers is a process where you have to look, every step of the way, at what you really need and what you can remove or clean. When building a container most people will depend on a mainstream base image; in our case we base most of our containers on the Oracle Linux 7 slim image, which is the smallest Docker base image built upon Oracle Linux 7.

One of the rules when you build your own containers on top of the Oracle Linux 7 slim base image (the rule applies to all others as well) is that you have to clean up after you build. In our example we install a number of things using yum which are needed as part of the build process. The rule is that you have to clean up after you are done, to ensure the resulting container is small and is not carrying unneeded things.

In our example we have, at the very beginning, a RUN command, as shown below, that installs a number of packages during the build phase:

# Install dependencies
RUN yum -y install make gcc cc tar gzip

Right after this we initiate a number of compile and build steps. However, after those are done we can remove the installed dependencies again.

# Cleanup dependencies
RUN yum -y erase gcc cc tar gzip

Even though this is good housekeeping, it is not enough to ensure your image is small. Yum keeps some files in its cache, which you also have to ensure are cleaned. If this is not done you will have a lot of unneeded things in your local yum cache, which will make your Docker image much bigger than needed.

To clean the cache you will also have to ensure you have the following RUN command in your Dockerfile:

# clean the YUM cache.
RUN yum clean all
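
Putting the pieces together, a minimal Dockerfile sketch following this pattern could look like the one below (the base image tag and the actual build steps are placeholders):

FROM oraclelinux:7-slim

# Install build-time dependencies
RUN yum -y install make gcc tar gzip

# ... compile and build steps go here ...

# Cleanup dependencies and the YUM cache
RUN yum -y erase gcc tar gzip && \
    yum clean all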

Making sure you remove the unneeded dependencies you installed at the end of your Dockerfile, and also ensuring that you clean your yum cache, will make your image shrink and will ensure you have small Docker images and Docker containers with Oracle Linux 7 in your environment.

Thursday, November 02, 2017

Oracle Linux - getting newer versions of Docker via yum

When using Oracle Linux to run Docker you might by default depend on the Oracle YUM repository. While this is perfectly fine, the Oracle repository might be running behind the mainstream Docker versions, and you will not get the latest version of Docker by default. As Docker introduces a lot of new features per version, and you want to be using the latest stable version, the way to go is to use a Docker owned repository.

For Oracle Linux 6 there is a specific repository provided by Docker. You can use this as an addition to the standard Oracle Linux repositories. The example below showcases a docker.repo file located in /etc/yum.repos.d/

[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/oraclelinux/6/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

If you have this file available, YUM will check this repository when you want to install (or update) Docker to a newer version, which in some cases is a newer version than would be available via the public-yum.oracle.com repository.
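With the repository file in place, installing or updating Docker comes down to a single yum command. Note that the package name docker-engine is an assumption based on what this Docker repository provided at the time of writing; verify it against the repository contents.

yum install docker-engine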

Monday, October 30, 2017

Oracle Linux - monitor file changes with auditd

As part of a security and audit strategy it is very common to ensure that certain files are monitored for access, changes and execution. This is especially useful for systems that require a higher level of security and where you need to ensure that every change to critical files is monitored. Also, some auditors will require that you are able to provide proof of who has had access to a file. When you have those requirements, auditd is the solution you want to implement under Oracle Linux.

The auditd solution is the userspace component to the Linux Auditing System. It's responsible for writing audit records to the disk. Viewing the logs is done with the ausearch or aureport utilities.

Installing auditd
Installing auditd under Oracle Linux can be done using YUM by executing the below command:

yum install audit

If you now do a check with the which command you will find that you have auditd under /sbin/auditd, which we now have to ensure will start when your system boots. This will ensure that all configuration you make for auditd will be active every time you boot.

To ensure it will start at boot, execute the below command.

chkconfig auditd on

To check if auditd is configured to start at boot, use the chkconfig command. As you can see it is stated as "on" for runlevels 2, 3, 4 and 5.

[root@docker ~]# chkconfig --list auditd
auditd          0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@docker ~]# 

Now you will have to start auditd manually for the first time. You can follow the below example where we check the status of auditd, find out it is not running, start it, check again and see it is running. At the end of this we are sure we have auditd up and running.

[root@docker ~]# 
[root@docker ~]# service auditd status
auditd is stopped
[root@docker ~]# 
[root@docker ~]# service auditd start
Starting auditd:                                           [  OK  ]
[root@docker ~]# 
[root@docker ~]# service auditd status
auditd (pid  17993) is running...
[root@docker ~]# 

Configure auditd 
As an example we will create a rule to watch changes on a file. Based upon this rule the auditd daemon will monitor it and as soon as someone changes the file the audit data will be written to disk.

In this example we will place the repository file for the Oracle Linux repository under audit and we want to be informed when someone reads the file, changes the content or appends to the file. This is done with the below command:

[root@docker yum.repos.d]# auditctl -w /etc/yum.repos.d/public-yum-ol6.repo -p war -k main-repo-file
[root@docker yum.repos.d]#

In this example the following flags are used:

-w /etc/yum.repos.d/public-yum-ol6.repo is used to insert a watch on the file.
-p war is used to state the watch applies on write, append and read.
-k main-repo-file is used to attach a simple name (key) to the watch rule.

Do note that if you want to have your auditd rules persist you have to ensure the rules are in the audit.rules file. An (almost) empty example of this file is shown below:

[root@docker yum.repos.d]# cat /etc/audit/audit.rules 
# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.

# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 320

# Feel free to add below this line. See auditctl man page

[root@docker yum.repos.d]# 
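
For the watch used in this example, making it persistent would mean adding the auditctl parameters as a line below that comment, for example:

-w /etc/yum.repos.d/public-yum-ol6.repo -p war -k main-repo-file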

Watching auditd in action
With the rule in place you can see that changes (or views) are registered. An example is shown below where we (as root) made a change to the file:

----
time->Mon Oct 30 19:16:13 2017
type=PROCTITLE msg=audit(1509390973.068:30): proctitle=7669002F6574632F79756D2E7265706F732E642F7075626C69632D79756D2D6F6C362E7265706F
type=PATH msg=audit(1509390973.068:30): item=0 name="/etc/yum.repos.d/public-yum-ol6.repo" inode=138650 dev=fb:01 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
type=CWD msg=audit(1509390973.068:30):  cwd="/etc/yum.repos.d"
type=SYSCALL msg=audit(1509390973.068:30): arch=c000003e syscall=89 success=no exit=-22 a0=7ffd6bb1ed80 a1=7ffd6bb1fdd0 a2=fff a3=7ffd6bb1eb00 items=1 ppid=17847 pid=18206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts3 ses=16 comm="vi" exe="/bin/vi" key="main-repo-file"
----

You can use a number of tools such as aureport or ausearch to find the changes that have happened on your system. Having auditd up and running and ensuring you have the right configuration in place is just the beginning. You will have to ensure that you have the right reporting, alerting and triggering in place. Just logging is not providing security; (automatically) reviewing and taking action upon events is what will help you get a higher level of security on your Oracle Linux system.
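For example, the events recorded for the watch rule defined earlier can be retrieved by their key:

ausearch -k main-repo-file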

Friday, October 27, 2017

Oracle Linux - digging into the bioset Kernel working

When you are looking into the processes that are running on your Oracle Linux machine you might notice a process, or processes, named bioset. In our example we are running an Oracle Linux 6 machine in a Vagrant box and, for informational purposes, we are using it to run dockerd to serve Docker containers for development purposes.

As you can see in the below example, we find a number of bioset processes running on our Oracle Linux machine:

[root@docker ~]# ps -ef|grep bioset
root        30     2  0 12:04 ?        00:00:00 [bioset]
root       299     2  0 12:04 ?        00:00:00 [bioset]
root       302     2  0 12:04 ?        00:00:00 [bioset]
root      4136     2  0 14:20 ?        00:00:00 [bioset]
root      4138     2  0 14:20 ?        00:00:00 [bioset]
root      4140     2  0 14:20 ?        00:00:00 [bioset]
root      4379  3470  0 14:35 pts/0    00:00:00 grep bioset
[root@docker ~]# 

Finding bioset
To find out what bioset is and where it is being spawned from you can look at your processes in a number of ways. You can use the below command, which provides a nice tree view of all the processes based upon the ps command, or you can use the standard pstree command.

ps axfo pid,euser,egroup,args

Using the above command we get the below output (cut down to the relevant part only):

As you can see, bioset is mentioned a number of times, and always as a "child" of kthreadd. kthreadd is, within Oracle Linux (and Linux in general), the kernel thread daemon, and as such one of the backbone components of the Linux kernel which makes sure your kernel threads are handled in the right manner.

[root@docker ~]# ps axfo pid,euser,egroup,args
  PID EUSER    EGROUP   COMMAND
    2 root     root     [kthreadd]
    3 root     root      \_ [ksoftirqd/0]
    5 root     root      \_ [kworker/0:0H]
    6 root     root      \_ [kworker/u4:0]
    7 root     root      \_ [rcu_sched]
    8 root     root      \_ [rcu_bh]
    9 root     root      \_ [rcuos/0]
   10 root     root      \_ [rcuob/0]
   11 root     root      \_ [migration/0]
   12 root     root      \_ [watchdog/0]
   13 root     root      \_ [watchdog/1]
   14 root     root      \_ [migration/1]
   15 root     root      \_ [ksoftirqd/1]
   16 root     root      \_ [kworker/1:0]
   17 root     root      \_ [kworker/1:0H]
   18 root     root      \_ [rcuos/1]
   19 root     root      \_ [rcuob/1]
   20 root     root      \_ [khelper]
   21 root     root      \_ [kdevtmpfs]
   22 root     root      \_ [netns]
   23 root     root      \_ [perf]
   24 root     root      \_ [khungtaskd]
   25 root     root      \_ [writeback]
   26 root     root      \_ [ksmd]
   27 root     root      \_ [khugepaged]
   28 root     root      \_ [crypto]
   29 root     root      \_ [kintegrityd]
   30 root     root      \_ [bioset]
   31 root     root      \_ [kblockd]
   32 root     root      \_ [ata_sff]
   33 root     root      \_ [md]
   34 root     root      \_ [kworker/0:1]
   36 root     root      \_ [kswapd0]
   37 root     root      \_ [fsnotify_mark]
   49 root     root      \_ [kthrotld]
   50 root     root      \_ [acpi_thermal_pm]
   51 root     root      \_ [kworker/u4:1]
   52 root     root      \_ [kworker/1:1]
   53 root     root      \_ [kpsmoused]
   54 root     root      \_ [bcache]
   55 root     root      \_ [deferwq]
   56 root     root      \_ [kworker/0:2]
  200 root     root      \_ [scsi_eh_0]
  201 root     root      \_ [scsi_tmf_0]
  202 root     root      \_ [scsi_eh_1]
  203 root     root      \_ [scsi_tmf_1]
  233 root     root      \_ [scsi_eh_2]
  234 root     root      \_ [scsi_tmf_2]
  298 root     root      \_ [kdmflush]
  299 root     root      \_ [bioset]
  301 root     root      \_ [kdmflush]
  302 root     root      \_ [bioset]
  368 root     root      \_ [kworker/0:1H]
  369 root     root      \_ [jbd2/dm-1-8]
  370 root     root      \_ [ext4-rsv-conver]
  626 root     root      \_ [iprt-VBoxWQueue]
  627 root     root      \_ [ttm_swap]
  738 root     root      \_ [jbd2/sda1-8]
  739 root     root      \_ [ext4-rsv-conver]
  827 root     root      \_ [kauditd]
  889 root     root      \_ [ipv6_addrconf]
 3744 root     root      \_ [kworker/1:1H]
 4122 root     root      \_ [loop0]
 4123 root     root      \_ [loop1]
 4126 root     root      \_ [kdmflush]
 4133 root     root      \_ [dm_bufio_cache]
 4136 root     root      \_ [bioset]
 4137 root     root      \_ [kcopyd]
 4138 root     root      \_ [bioset]
 4139 root     root      \_ [dm-thin]
 4140 root     root      \_ [bioset]

However, now that we know where bioset is coming from and that its parent is the kernel thread daemon, we still do not know what bioset is doing.

What is bioset
As we now know, bioset is part of the Linux kernel; it is used for Block Input Output (BIO) handling. This means it is an integral part of the inner workings of the kernel when operations on a block device need to be done. bioset finds its origin in the struct bio_set located in bio.h within the Linux kernel code. The bio_set struct is shown below and can also be found in the kernel GitHub code repository.

/*
 * bio_set is used to allow other portions of the IO system to
 * allocate their own private memory pools for bio and iovec structures.
 * These memory pools in turn all allocate from the bio_slab
 * and the bvec_slabs[].
 */
#define BIO_POOL_SIZE 2

struct bio_set {
 struct kmem_cache *bio_slab;
 unsigned int front_pad;

 mempool_t *bio_pool;
 mempool_t *bvec_pool;
#if defined(CONFIG_BLK_DEV_INTEGRITY)
 mempool_t *bio_integrity_pool;
 mempool_t *bvec_integrity_pool;
#endif

 /*
  * Deadlock avoidance for stacking block drivers: see comments in
  * bio_alloc_bioset() for details
  */
 spinlock_t  rescue_lock;
 struct bio_list  rescue_list;
 struct work_struct rescue_work;
 struct workqueue_struct *rescue_workqueue;
};

If you explore the Linux kernel code you will find that /block/bio.c includes /linux/bio.h. The Linux block I/O subsystem includes bioset, and this is a vital part of how BIOs are handled within the kernel.

The below image shows the Linux storage stack for kernel version 4.0, where you can see the flow of BIOs. BIOs are sent from LIO (Linux I/O) or the VFS to the block I/O layer, which in turn communicates with the I/O scheduler, which schedules the I/O operation to be sent to the actual block device.



In conclusion
A lot of posts have been written indicating that bioset would be something that could be removed or would be malicious for the system. As outlined in the above post, bioset is, however, a vital part of the Linux kernel and an integral part of the I/O subsystem for handling I/O requests in combination with block devices.

Wednesday, October 18, 2017

Oracle Linux - Check your kernel modules

Knowing and understanding what is running on your Oracle Linux system is vital for proper maintenance and tuning. As operating systems are seen more and more as something that is just there and should not be a hindrance for development, and as we see the rise of container based solutions and serverless computing, it might look like the operating system becomes less and less important. However, the opposite is true: the operating system becomes more and more important, as it needs to be able to facilitate all the requirements of the containers and functions running on top of it without human intervention, or at least with as little human intervention as possible.

This means that, if you operate a large deployment of servers and have to ensure everything is automated and operating at its best performance at any moment in time without having to touch the systems, or at least as little as possible, you need to optimize and automate it. To be able to do so you need to understand every component and be able to check whether you need it or can drop it. Whatever you do not need, drop it; it can be a security risk or it can consume resources without there being a need for it.

Oracle Linux Kernel modules
Kernel modules are an important part of the Oracle Linux operating system; understanding them and being able to check what is loaded and what is not is something you need to master. Kernel modules are pieces of code that can be loaded into and unloaded from the kernel on demand. They extend the functionality of the kernel without the need to reboot the system.

udev
Today, all necessary module loading is handled automatically by udev, so if you do not need to use any out-of-tree kernel modules, there is no need to put modules that should be loaded at boot in any configuration file. However, there are cases where you might want to load an extra module during the boot process, or blacklist another one for your computer to function properly.

Kernel modules can be explicitly loaded during boot and are configured as a static list in files under /etc/modules-load.d/. Each configuration file is named in the style of /etc/modules-load.d/<name>.conf. Configuration files simply contain a list of kernel module names to load, separated by newlines. Empty lines and lines whose first non-whitespace character is # or ; are ignored.
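As a purely illustrative example (the file names are hypothetical), a file under /etc/modules-load.d/ to load a module at boot and a file under /etc/modprobe.d/ to blacklist one could look like this:

# /etc/modules-load.d/example.conf - load the dummy network module at boot
dummy

# /etc/modprobe.d/blacklist-example.conf - prevent the PC speaker module from loading
blacklist pcspkr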

lsmod
Checking which kernel modules are loaded in the kernel can be done using the lsmod command. lsmod will list all the modules. Basically it is a representation of everything you will find in the /proc/modules file, however in a somewhat more understandable way. An example of the lsmod command on an Oracle Linux system running in a Vagrant box is shown below:

[root@localhost ~]# lsmod
Module                  Size  Used by
vboxsf                 38491  1 
ipv6                  391530  20 [permanent]
ppdev                   8323  0 
parport_pc             21178  0 
parport                37780  2 ppdev,parport_pc
sg                     31734  0 
pcspkr                  2094  0 
i2c_piix4              12269  0 
snd_intel8x0           33895  0 
snd_ac97_codec        127589  1 snd_intel8x0
ac97_bus                1498  1 snd_ac97_codec
snd_seq                61406  0 
snd_seq_device          4604  1 snd_seq
snd_pcm               113293  2 snd_intel8x0,snd_ac97_codec
snd_timer              26196  2 snd_seq,snd_pcm
snd                    79940  6 snd_intel8x0,snd_ac97_codec,snd_seq,snd_seq_device,snd_pcm,snd_timer
soundcore               7412  1 snd
e1000                 134545  0 
vboxvideo              42469  1 
ttm                    88927  1 vboxvideo
drm_kms_helper        120123  1 vboxvideo
drm                   343055  4 vboxvideo,ttm,drm_kms_helper
i2c_core               53097  3 i2c_piix4,drm_kms_helper,drm
vboxguest             306752  3 vboxsf,vboxvideo
sysimgblt               2595  1 vboxvideo
sysfillrect             4093  1 vboxvideo
syscopyarea             3619  1 vboxvideo
acpi_cpufreq           12697  0 
ext4                  604127  2 
jbd2                  108826  1 ext4
mbcache                 9265  1 ext4
sd_mod                 36186  3 
ahci                   26684  2 
libahci                27932  1 ahci
pata_acpi               3869  0 
ata_generic             3811  0 
ata_piix               27059  0 
video                  15828  0 
dm_mirror              14787  0 
dm_region_hash         11613  1 dm_mirror
dm_log                  9657  2 dm_mirror,dm_region_hash
dm_mod                106591  8 dm_mirror,dm_log
[root@localhost ~]# 

This could be the starting point of investigating and finding out what is loaded and what is really needed, what is not needed and what might be a good addition in some cases.

modinfo
As you might not be checking your kernel modules on a daily basis, you might not know which module is used for what purpose. In this case modinfo comes to your rescue. If you want to know, for example, what the module snd_seq is used for, you can check the details with modinfo as shown in the example below.

[root@localhost ~]# modinfo snd_seq
filename:       /lib/modules/4.1.12-61.1.28.el6uek.x86_64/kernel/sound/core/seq/snd-seq.ko
alias:          devname:snd/seq
alias:          char-major-116-1
license:        GPL
description:    Advanced Linux Sound Architecture sequencer.
author:         Frank van de Pol , Jaroslav Kysela 
srcversion:     88DDA62432337CC735684EE
depends:        snd,snd-seq-device,snd-timer
intree:         Y
vermagic:       4.1.12-61.1.28.el6uek.x86_64 SMP mod_unload modversions 
parm:           seq_client_load:The numbers of global (system) clients to load through kmod. (array of int)
parm:           seq_default_timer_class:The default timer class. (int)
parm:           seq_default_timer_sclass:The default timer slave class. (int)
parm:           seq_default_timer_card:The default timer card number. (int)
parm:           seq_default_timer_device:The default timer device number. (int)
parm:           seq_default_timer_subdevice:The default timer subdevice number. (int)
parm:           seq_default_timer_resolution:The default timer resolution in Hz. (int)
[root@localhost ~]#

As you can see in the example above, the snd_seq module is the Advanced Linux Sound Architecture sequencer developed by Frank van de Pol and Jaroslav Kysela. Taking this as an example, you can argue: do I need the snd_seq module if I run a server where I have no need for any sound?

Unloading "stuff" you do not need will ensure you have a faster boot sequence timing of your system, less resource consumption and as every component carries a risk of having an issue.... with less components you have theoretically less possible bugs.

In conclusion
You can optimize your system by checking which kernel modules should be loaded and which could be left out on your Oracle Linux system. When you just use the system for common tasks you might not want to spend too much time on this. However, if you are building your own image, or investing time in building a fully automated way of deploying servers fast in a CI/CD manner, you might want to spend time on making sure only the components you really need are in the system and nothing else.


Tuesday, October 10, 2017

Oracle Linux - Yum security plugin

Ensuring your Oracle Linux system is up to date with patches, and especially security patches, can be a challenging task. Updating your system from a pure operating system point of view is not the main issue; a simple yum command will make sure that the latest versions are applied to your system.

The main challenge a lot of enterprises face is identifying which patches and updates are applicable and how they might affect applications running on the systems. For Oracle Linux you will have an additional level of assurance that Oracle software will be working when applying updates from the official Oracle Linux repositories.

For software not developed by Oracle this assurance will not be that strict and you will face the same possible issues as you will have with other Linux distributions, like for example, RedHat.

A formal process of identifying what needs to be updated, and after that ensuring the update will not break functionality, should be in place. The first step in such a process is finding the candidates. Finding out which updates, security specific in our example, are available and could be applied is something that can be facilitated by yum itself.
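The functionality shown below is provided by the yum security plugin. On Oracle Linux 6 this plugin ships in the yum-plugin-security package; assuming it is not present yet, it can be installed with a single yum command:

yum install yum-plugin-security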

Some of the options the security plugin provides are shown below:

Plugin Options:
    --security          Include security relevant packages
    --bugfixes          Include bugfix relevant packages
    --cve=CVE           Include packages needed to fix the given CVE
    --bz=BZ             Include packages needed to fix the given BZ
    --sec-severity=SEVERITY
                        Include security relevant packages, of this severity
    --advisory=ADVISORY
                        Include packages needed to fix the given advisory

As an example you can use the below command which will show information on available updates.

[vagrant@localhost ~]$ yum updateinfo list
Loaded plugins: security
ELBA-2017-0891 bugfix         binutils-2.20.51.0.2-5.47.el6_9.1.x86_64
ELEA-2017-1432 enhancement    ca-certificates-2017.2.14-65.0.1.el6_9.noarch
ELSA-2017-0847 Moderate/Sec.  curl-7.19.7-53.el6_9.x86_64
ELBA-2017-2506 bugfix         dhclient-12:4.1.1-53.P1.0.1.el6_9.1.x86_64
ELBA-2017-2506 bugfix         dhcp-common-12:4.1.1-53.P1.0.1.el6_9.1.x86_64
ELBA-2017-1373 bugfix         initscripts-9.03.58-1.0.1.el6_9.1.x86_64
ELBA-2017-2852 bugfix         initscripts-9.03.58-1.0.1.el6_9.2.x86_64
ELSA-2017-0892 Important/Sec. kernel-2.6.32-696.1.1.el6.x86_64
ELSA-2017-1372 Moderate/Sec.  kernel-2.6.32-696.3.1.el6.x86_64
ELSA-2017-1486 Important/Sec. kernel-2.6.32-696.3.2.el6.x86_64
ELSA-2017-1723 Important/Sec. kernel-2.6.32-696.6.3.el6.x86_64
ELBA-2017-2504 bugfix         kernel-2.6.32-696.10.1.el6.x86_64
ELSA-2017-2681 Important/Sec. kernel-2.6.32-696.10.2.el6.x86_64
ELSA-2017-2795 Important/Sec. kernel-2.6.32-696.10.3.el6.x86_64

In case you want to see only the security related updates with a severity Moderate you can use the below command to generate this list:

[vagrant@localhost ~]$ yum updateinfo list --sec-severity=Moderate
Loaded plugins: security
ELSA-2017-0847 Moderate/Sec. curl-7.19.7-53.el6_9.x86_64
ELSA-2017-1372 Moderate/Sec. kernel-2.6.32-696.3.1.el6.x86_64
ELSA-2017-2863 Moderate/Sec. kernel-2.6.32-696.13.2.el6.x86_64
ELSA-2017-2863 Moderate/Sec. kernel-headers-2.6.32-696.13.2.el6.x86_64
ELSA-2017-0847 Moderate/Sec. libcurl-7.19.7-53.el6_9.x86_64
ELSA-2017-2563 Moderate/Sec. openssh-5.3p1-123.el6_9.x86_64
ELSA-2017-2563 Moderate/Sec. openssh-clients-5.3p1-123.el6_9.x86_64
ELSA-2017-2563 Moderate/Sec. openssh-server-5.3p1-123.el6_9.x86_64
ELSA-2017-1574 Moderate/Sec. sudo-1.8.6p3-29.el6_9.x86_64
updateinfo list done
[vagrant@localhost ~]$ 

To list the security errata by their Common Vulnerabilities and Exposures (CVE) IDs instead of their errata IDs, specify the keyword cves as an argument:

[vagrant@localhost ~]$ yum updateinfo list cves
Loaded plugins: security
 CVE-2017-2628    Moderate/Sec.  curl-7.19.7-53.el6_9.x86_64
 CVE-2017-2636    Important/Sec. kernel-2.6.32-696.1.1.el6.x86_64
 CVE-2016-7910    Important/Sec. kernel-2.6.32-696.1.1.el6.x86_64
 CVE-2017-6214    Moderate/Sec.  kernel-2.6.32-696.3.1.el6.x86_64
 CVE-2017-1000364 Important/Sec. kernel-2.6.32-696.3.2.el6.x86_64
 CVE-2017-7895    Important/Sec. kernel-2.6.32-696.6.3.el6.x86_64
 CVE-2017-1000251 Important/Sec. kernel-2.6.32-696.10.2.el6.x86_64
 CVE-2017-1000253 Important/Sec. kernel-2.6.32-696.10.3.el6.x86_64
 CVE-2017-7541    Moderate/Sec.  kernel-2.6.32-696.13.2.el6.x86_64

When checking (in an automated manner) which patches are applicable, the question why is very reasonable. Meaning, you would like to have some more information on the background of the patches. For this you can do a "yum updateinfo info" command or you can specifically query for a CVE ID. A CVE ID query is shown in the example below:

[vagrant@localhost ~]$ yum updateinfo info --cve CVE-2017-1000251
Loaded plugins: security

=====================================================
   kernel security and bug fix update
=====================================================
  Update ID : ELSA-2017-2681
    Release : Oracle Linux 6
       Type : security
     Status : final
     Issued : 2017-09-13
       CVEs : CVE-2017-1000251
Description : [2.6.32-696.10.2.OL6]
            : - Update genkey [bug 25599697]
            : 
            : [2.6.32-696.10.2]
            : - [net] l2cap: prevent stack overflow on incoming
            :   bluetooth packet (Neil Horman) [1490060 1490062]
            :   {CVE-2017-1000251}
   Severity : Important

=====================================================
   Unbreakable Enterprise kernel security update
=====================================================
  Update ID : ELSA-2017-3620
    Release : Oracle Linux 6
       Type : security
     Status : final
     Issued : 2017-09-19
       CVEs : CVE-2017-1000251
Description : kernel-uek
            : [4.1.12-103.3.8.1]
            : - Bluetooth: Properly check L2CAP config option
            :   output buffer length (Ben Seri)  [Orabug:
            :   26796363]  {CVE-2017-1000251}
   Severity : Important
updateinfo info done
[vagrant@localhost ~]$
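
Once the relevant updates have been identified, the same plugin options can also be passed to the update command to apply, for example, only the security related updates:

yum --security update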

By using the yum security plugin in the correct way and automating against it, you can leverage the power of this plugin and implement an (automated) process that will inform you about candidates for installation on your production systems.

Sunday, October 01, 2017

Oracle Cloud - IOT Enterprise Connectivity

Within the Oracle Cloud portfolio you will find the Oracle Internet Of Things (IOT) Cloud service. The IOT cloud service from Oracle provides a starting point for developing an IOT strategy within your company. Or, as Oracle likes to state: Oracle Internet of Things (IoT) Cloud Service is a managed Platform as a Service (PaaS) offering that helps you make critical business decisions and strategies by allowing you to connect your devices to the cloud, analyze data and alert messages from those devices in real time, and integrate your data with enterprise applications, web services, or with other Oracle Cloud Services, such as Oracle Business Intelligence Cloud Service.

One of the main pillars within the Oracle IOT strategy, and a right one in my opinion, is that you will have to connect your IOT strategy to your enterprise solutions. Connecting them to your enterprise solutions can be done for many reasons; for example, integrating with preventive maintenance and/or customer satisfaction programs... just to name two options.



If you look at the above diagram you will notice that Enterprise Connectivity is placed as a central part of the Oracle IOT Cloud Service.

Oracle IoT Cloud Service provides a secure communication channel for pushing messages to your enterprise applications, and for your enterprise applications to push or pull messages from Oracle IoT Cloud Service. The Oracle IoT Cloud Service Client Software Enterprise Library and REST APIs enable your enterprise applications to send commands to your devices. You can further analyze the device data and alerts sent to Oracle IoT Cloud Service by integrating your IoT application to your enterprise applications, Oracle Business Intelligence Cloud Service, Oracle Mobile Cloud Service, or JD Edwards EnterpriseOne with Internet of Things Orchestrator instances. 

REST based API connections
The beauty of connecting an enterprise application with the Oracle IOT Cloud service is that this can be done fully based upon REST APIs exchanging JSON based messages with each other. This means that you can leverage all the API best practices as well as the microservice best practices. Communication will be based upon APIs supported by workflows within the Oracle IOT Cloud Service.

Using a combination of stream processing and REST based APIs, you can make sure that certain events you receive from connected devices result in a JSON message being sent to your enterprise application (or, for example, to a user's mobile device that has a mobile app installed).

Communicating back to the IOT cloud works in the same way; you can have your applications interact with the Oracle IOT Cloud service and, for example, query device data and metadata, or send commands to devices.

Building a new model
Having the option to connect from and to the Oracle IOT Cloud service in a loosely coupled way using REST APIs means that completely new models become possible. You will be able to read data coming from connected devices, but you are also able to directly connect this to downstream processes and send instructions back to devices from the backend systems.

Whenever you are working on a solution which will involve IOT components it might be worth it to have a good look at the Oracle IOT solution as this could potentially bring you a lot of value from day one. 

Oracle Cloud Access Security Broker

"A cloud access security broker (CASB) is a software tool or service that sits between an organization's on-premises infrastructure and a cloud provider's infrastructure. A CASB acts as a gatekeeper, allowing the organization to extend the reach of their security policies beyond their own infrastructure."

Oracle Cloud Access Security Broker is used for exactly that. The Oracle CASB Cloud Service is the only Cloud Access Security Broker (CASB) that gives you both visibility into your entire cloud stack and the security automation tool your IT team needs.

Read more about the Capgemini view on Oracle CASB via this link, or view the presentation below; an overview created by Adriaan van Zetten and Johan Louwers.

Wednesday, September 20, 2017

Oracle Jet - preparing Oracle Linux for Oracle Jet Development

Oracle JavaScript Extension Toolkit (JET) empowers developers by providing a modular open source toolkit based on modern JavaScript, CSS3 and HTML5 design and development principles. Oracle JET is targeted at intermediate to advanced JavaScript developers working on client-side applications. It's a collection of open source JavaScript libraries along with a set of Oracle contributed JavaScript libraries that make it as simple and efficient as possible to build applications that consume and interact with Oracle products and services, especially Oracle Cloud services.

When developing Oracle Jet based solutions you can decide to use your local workstation for the development work, or you could opt to use a virtual machine on your laptop. In this case we will be using a virtual Oracle Linux system which we created using Vagrant and the Vagrant boxes provided by Oracle. For a more detailed description on how to get Oracle Linux started with Vagrant you can refer to this specific blogpost.

Preparing your Oracle Jet Development system
To get started with Oracle Jet on a fresh Oracle Linux installation you will need to undertake a couple of steps outlined below. The steps include:
  • Install Linux development tools
  • Install Node.JS
  • Install Yeoman
  • Install Grunt
  • Install Oracle JET Yeoman Generator

Install Linux development tools
For some of the Node.JS and Yeoman modules it is required to have a set of Linux development tools present on your machine. You can install them by using a simple YUM command as shown below:

yum install gcc-c++ make

Install Node.JS
The installation of Node.JS starts with ensuring you have the proper repositories in place. This can be done with a single command as shown below:

curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -

After this you can do the actual installation of Node.JS using yum as shown below:

yum -y install nodejs
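
After the installation completes you can quickly confirm that both Node.JS and NPM are available; a minimal check, the exact version numbers on your system will differ.

node --version
npm --version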

Install Yeoman
After the installation of Node.JS you should have NPM on your system and you should be able to install Yeoman. Yeoman is a generic scaffolding system allowing the creation of any kind of app. It allows for rapidly getting started on new projects and streamlines the maintenance of existing projects. Yeoman is language agnostic; it can generate projects in any language (Web, Java, Python, C#, etc.). Yeoman by itself doesn't make any decisions. Every decision is made by generators, which are basically plugins in the Yeoman environment.

You can install Yeoman with a single NPM command as shown below:
npm install -g yo

Install Grunt
After the installation of Node.JS you should have NPM on your system and you should be able to install Grunt. Grunt is a JavaScript task runner, a tool used to automatically perform frequently used tasks such as minification, compilation, unit testing, linting, etc. It uses a command-line interface to run custom tasks defined in a file (known as a Gruntfile). Grunt was created by Ben Alman and is written in Node.js.

You can install Grunt with a single NPM command as shown below:
npm install -g grunt-cli

Install Oracle JET Yeoman Generator
After the installation of Node.JS you should have NPM on your system and you should be able to install the Oracle JET Generator for Yeoman.

You can install the Oracle JET Yeoman Generator with a single NPM command as shown below:
npm install -g generator-oraclejet

Verify the installation
To verify the installation you can use the below command to see what is installed by NPM and you can try and run Yeoman.

To check what is installed you can use the NPM command in the way shown below:
[root@localhost ~]# npm list -g --depth=0
/usr/lib
├── generator-oraclejet@3.2.0
├── grunt-cli@1.2.0
├── npm@5.3.0
└── yo@2.0.0

[root@localhost ~]# 

After this you can try to start Yeoman in the way shown below (do not run yo as root).

[vagrant@localhost ~]$ yo
? 'Allo! What would you like to do? 
  Get me out of here! 
  ──────────────
  Run a generator
❯ Oraclejet 
  ──────────────
  Update your generators 
  Install a generator 
(Move up and down to reveal more choices)

If both give the expected result you should be ready to get started with your first Oracle Jet project.
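
As a first step you could scaffold and serve a basic project with the Oracle JET Yeoman generator; a minimal sketch, assuming the application name myFirstJetApp (the exact generator options and Grunt targets can differ per Oracle JET version).

yo oraclejet myFirstJetApp
cd myFirstJetApp
grunt build
grunt serve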

Thursday, August 31, 2017

Oracle Linux - ClusterShell

When operating large clusters consisting of large numbers of nodes, the desire to execute a command on all nodes, or a subset of them, comes quickly. You might, for example, want to run certain commands on all nodes without having to log in to each of them. For configuration management, solutions like Ansible or Puppet are very good options. However, for day to day operations they might not be sufficient and you would like to have the option of a distributed shell.

A solution for this is building your own tooling, or you can adopt a solution such as ClusterShell. ClusterShell is a scalable Python framework, however it is a lot more than that. In its simplest form of usage it is a way to execute commands on groups of nodes in your cluster with a single command. It also leaves open the option to do far more interesting things once you start hooking into the Python APIs and build your own distributed solutions with ClusterShell as a foundation.

Installing ClusterShell on Oracle Linux is relatively easy and can be done by using the EPEL repository for YUM. Just ensure you have the EPEL repository available. If you have the EPEL repository for Oracle Linux installed you should have the file /etc/yum.repos.d/epel.repo, which (in our case) contains the following repository configuration:

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

If you do not have this you will have to locate and download the appropriate epel-release-x-x.noarch.rpm file from http://download.fedoraproject.org/pub/epel/ . As an example, you could download the file and install it as shown below:

# wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
# rpm -ivh epel-release-6-5.noarch.rpm

Now you should be able to use YUM to install ClusterShell on Oracle Linux; this can be done by executing the below yum command:

yum install clustershell

To test the installation you can, as an example, execute the below command to verify if clush is installed. Clush is a part of the full ClusterShell installation and being able to interact with it is a good indication of a successful installation.

[root@localhost /]# clush --version
clush 1.7.3
[root@localhost /]# 

To make full use of ClusterShell you will have to start defining your configuration and the nodes you want to be able to control with ClusterShell. The main configuration is done in the configuration files located in /etc/clustershell . A basic installation should give you the below files in this location:

[root@localhost clustershell]# tree /etc/clustershell/
/etc/clustershell/
├── clush.conf
├── groups.conf
├── groups.conf.d
│   ├── genders.conf.example
│   ├── README
│   └── slurm.conf.example
├── groups.d
│   ├── cluster.yaml.example
│   ├── local.cfg
│   └── README
└── topology.conf.example

2 directories, 9 files
[root@localhost clustershell]# 
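As a minimal sketch of how this configuration is used, you could define a node group in /etc/clustershell/groups.d/local.cfg and then run a command on that group with clush. The group name and hostnames below are hypothetical and assume passwordless SSH access to the nodes.

# /etc/clustershell/groups.d/local.cfg (illustrative group definition)
web: node[01-04].example.local

# Run a command on all nodes in the group and gather identical output.
clush -b -g web uptime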

Friday, August 25, 2017

Oracle Linux - Install Ansible

Ansible is an open-source automation engine that automates software provisioning, configuration management, and application deployment. Ansible is based upon a push mechanism where you push configurations to the servers rather than having them pulled, as is done by, for example, Puppet. When you want to start using Ansible the first step is configuring the central location from which you will push the Ansible configurations. Installing Ansible on an Oracle Linux machine is rather straightforward and can be achieved by following the below steps.

Step 1
To be able to install Ansible via the YUM command you will have to ensure that you have the EPEL release RPM installed, which will take care of putting the Fedora EPEL YUM repository in place. This is needed because the RPMs for Ansible are located in the EPEL repository.

You can do so by first executing a wget to download the file and then install it with the RPM command:

wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm


rpm -ivh epel-release-6-8.noarch.rpm 

If done correctly you will now have something like the below in your YUM repository directory:

[root@localhost ~]# ls -la /etc/yum.repos.d/
total 24
drwxr-xr-x.  2 root root 4096 Aug 25 09:22 .
drwxr-xr-x. 63 root root 4096 Aug 25 08:36 ..
-rw-r--r--   1 root root  957 Nov  5  2012 epel.repo
-rw-r--r--   1 root root 1056 Nov  5  2012 epel-testing.repo
-rw-r--r--.  1 root root 7533 Mar 28 10:13 public-yum-ol6.repo
[root@localhost ~]# 

If you check the epel.repo file you should have at least the "Extra Packages for Enterprise Linux 6" channel enabled. You can see this in the example below:

[root@localhost ~]# cat /etc/yum.repos.d/epel.repo 
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
[root@localhost ~]# 

Step 2
As soon as you have completed the needed steps in step 1 you should be able to do an installation of Ansible on Oracle Linux by executing a simple yum install command.

yum install ansible

Step 3
In essence your installation is now done and Ansible should be available and ready to be configured. To ensure the installation succeeded you can conduct the below test to verify it.

[root@localhost init.d]#  ansible localhost -m ping
 [WARNING]: provided hosts list is empty, only localhost is available

localhost | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
[root@localhost init.d]# 
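
As a next step you would typically add your managed servers to the Ansible inventory and push a first ad-hoc command to them. A minimal sketch, assuming hypothetical hostnames and passwordless SSH access from the Ansible control machine.

# Add a group of hosts to /etc/ansible/hosts (illustrative hostnames).
cat >> /etc/ansible/hosts << EOF
[webservers]
web01.example.local
web02.example.local
EOF

# Ping all hosts in the group and run an ad-hoc command on them.
ansible webservers -m ping
ansible webservers -m command -a "uptime"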

Oracle Linux - inspect memory fragments with buddyinfo

The file /proc/buddyinfo is used primarily for diagnosing memory fragmentation issues. Using the buddy algorithm, each column represents the number of free chunks of a certain order (a certain size) that are available at any given time. You get to view the free fragments for each available order, for the different zones of each NUMA node.

The content of /proc/buddyinfo, as shown below, shows you the number of free memory chunks per size. You have to read the numbers from left to right, where the first column counts free chunks of 2^0 pages (1 * PAGE_SIZE), the second column chunks of 2^1 pages (2 * PAGE_SIZE), and so on.

An example of the content of the buddyinfo file on Oracle Linux 6 can be seen below:

[root@jenkins proc]# cat buddyinfo 
Node 0, zone      DMA     15     32     84     24      6      5      2      0      0      0      0 
Node 0, zone    DMA32    604    342    165     64     28     10     15      2      1      0      0 
[root@jenkins proc]#
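
To make these numbers a bit easier to interpret you could convert each order to its chunk size; a minimal sketch, assuming the common 4 KiB page size.

# Print, per zone, the number of free chunks for each chunk size in KiB (assumes 4 KiB pages).
awk '{ printf "%s %s zone %-7s:", $1, $2, $4;
       for (i = 5; i <= NF; i++) printf " %d x %dKiB", $i, 4 * 2^(i-5);
       print "" }' /proc/buddyinfo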

Friday, August 04, 2017

Oracle Linux - Intuition Engineering and Site Reliability Engineering with Elastic and Vizceral

IT operations are vital to organisations; in daily business operations a massive system disruption can halt an entire enterprise. Running and operating massive scale IT deployments that are too big to fail takes more than the traditional way of working. Next to DevOps we see the rise of Site Reliability Engineering, originally pioneered by Google, complemented with Intuition Engineering, pioneered by Netflix. More and more companies whose IT is too big to fail turn to these new concepts of operation. By developing new ways of operating, proven ways are adopted and improved.

Site Reliability Engineering
According to Ben Treynor, VP of engineering at Google, Site Reliability Engineering is the following:
"Fundamentally, it's what happens when you ask a software engineer to design an operations function. When I came to Google, I was fortunate enough to be part of a team that was partially composed of folks who were software engineers, and who were inclined to use software as a way of solving problems that had historically been solved by hand. So when it was time to create a formal team to do this operational work, it was natural to take the "everything can be treated as a software problem" approach and run with it.

So SRE is fundamentally doing work that has historically been done by an operations team, but using engineers with software expertise, and banking on the fact that these engineers are inherently both predisposed to, and have the ability to, substitute automation for human labor.

On top of that, in Google, we have a bunch of rules of engagement, and principles for how SRE teams interact with their environment -- not only the production environment, but also the development teams, the testing teams, the users, and so on. Those rules and work practices help us to keep doing primarily engineering work and not operations work."

Intuition Engineering
An addition to Site Reliability Engineering can be Intuition Engineering. Intuition Engineering provides a Site Reliability Engineer with information in a way that appeals to the brain's capacity to process massive amounts of visual data in parallel, to give users an experience -- a sense, an intuition -- of the state of a holistic system, rather than objective facts. An example of an Intuition Engineering tool is Vizceral, developed by Netflix and discussed by Casey Rosenthal, Engineering Manager at Netflix, Justin Reynolds and others in numerous talks. In the below video you can see Justin Reynolds give an introduction to Vizceral.


Implementing Vizceral
For small system footprints using Vizceral might be interesting, however not that important for day to day operations. When operating a relatively small number of servers and services it is relatively easy to locate an issue and make a decision. In cases where you have a massive number of servers and services it will be hard for a site reliability engineer to take in the vast amount of data, spot possible issues and take split second decisions. In deployments like this it can be very beneficial to implement Vizceral.

Even though Vizceral might look complicated at first glance, it is in reality a relatively simple yet extremely well crafted solution which has been donated to the open source community by Netflix. The more complex task is getting the right data into Vizceral to provide the needed view of the now.

The below image shows a common implementation where we are running a large number of Oracle Linux nodes. All nodes have a local Elastic Beat to collect logs and data and ship this to Elasticsearch, where Site Reliability Engineers can use Kibana to get insight into all data from all servers.



Even though Elasticsearch and Kibana in combination with Logstash and Elastic Beats provide an enormous benefit to Site Reliability Engineers, they can still be overwhelmed by the massive amount of data available and it can take time to find the root cause of an issue. As we are already collecting all data from all servers and services we would like to also feed this to Vizceral. The below image shows a reference implementation where we pull data from Elasticsearch and provide it to Vizceral.



As you can see from the above image we have introduced two new components, the "Vizceral Feeder API" and "Netflix Vizceral". Both components run as Docker containers.

The Vizceral Feeder API
To extract the data we collected inside Elasticsearch and feed this to Vizceral we use the Vizceral Feeder API. The Vizceral Feeder API is an internal product which we hope to provide to the Open Source community at one point in the near future. In effect the API is a bridge between Elasticsearch and Vizceral.

The Vizceral Feeder API will query Elasticsearch for all the required information. Based upon the returned dataset a JSON file is created in the format Vizceral expects.

Depending on your appetite to modify Vizceral you can have Vizceral pull the JSON file from the Feeder API every x seconds or you can have a secondary process pull the file from the Feeder and place it locally in the Docker container hosting Vizceral.

If you are not into developing your own addition to Vizceral and would like to be up and running relatively fast you should go for the local file replacement strategy.
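
A minimal sketch of that local file replacement strategy could look as follows; the feeder URL, the container name and the target path inside the Vizceral container are all assumptions for illustration purposes.

# Periodically pull the generated JSON from the feeder and copy it into the Vizceral container.
while true; do
  curl -s http://feeder.example.local:8080/vizceral.json -o /tmp/vizceral.json
  docker cp /tmp/vizceral.json vizceral:/usr/src/app/dist/sample_data.json
  sleep 10
done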

If you go for the solution in which Vizceral pulls the JSON from the feeder you will have to make sure that you take the following into account (a quick check for the header requirements is sketched after the list):

  • The Vizceral Feeder API needs to be accessible by the workstations used by the Site Reliability Engineers 
  • The JSON file needs to be presented with the Content-type: application/json header to ensure the data is seen as true JSON
  • The JSON file needs to be presented with the Access-Control-Allow-Origin: * header to ensure it is CORS compatible
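
To verify the last two points you could inspect the response headers of the feeder endpoint; a minimal sketch, assuming a hypothetical feeder URL.

# Show only the headers Vizceral cares about (URL is illustrative).
curl -s -D - -o /dev/null http://feeder.example.local:8080/vizceral.json | grep -iE 'content-type|access-control-allow-origin'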