
Tuesday, January 29, 2019

[SOLVED] OSError: [Errno 2] "dot" not found in path.

Python Data Visualization
When trying to visualize data using pydot in Python you might run into an error stating that “dot” is not found in the path. This happens even after you have installed pydot and imported it in your Python code. The main reason is that your Python code is unable to find the dot executable, which comes from the Graphviz project. This means that even though you installed pydot you are still missing a critical component.

If we look at the pydot PyPI page you can already see a hint of this, as it tells you the following: pydot is an interface to Graphviz and can parse and dump into the DOT language used by Graphviz. Pydot is written in pure Python.

To resolve this we can use yum to install Graphviz on Linux; in our case we use Oracle Linux.

yum -y install graphviz

This command will ensure that Graphviz is installed on your local Oracle Linux operating system. To check if the installation has completed as expected you can use the below command to check the version;

[vagrant@localhost vagrant]$ dot -V
dot - graphviz version 2.30.1 (20180223.0356)

Now, if you run your Python code and use something like pydot.graph_from_dot_data to work with dot data and visualize it at a later stage, you will see that you no longer run into the OSError: [Errno 2] "dot" not found in path error you faced before.
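As a quick smoke test, a minimal sketch such as the below one-liner should now run without the error. Note that this assumes a recent pydot version, in which graph_from_dot_data returns a list of graphs; older versions returned a single graph object.

python -c 'import pydot; g = pydot.graph_from_dot_data("digraph { a -> b; }")[0]; g.write_png("/tmp/test.png")'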

Monday, October 30, 2017

Oracle Linux - monitor file changes with auditd

As part of a security and audit strategy it is very common to ensure certain files are monitored for access, changes and execution. This is especially useful for systems that require a certain level of security and where you need to ensure that every change to critical files is monitored. Also, some auditors will require that you are able to provide proof of who has had access to a file. When you have those requirements, auditd is the solution you want to implement under Oracle Linux.

The auditd solution is the userspace component to the Linux Auditing System. It's responsible for writing audit records to the disk. Viewing the logs is done with the ausearch or aureport utilities.

Installing auditd
Installing auditd under Oracle Linux can be done by using YUM by executing the below command;

yum install audit

If you now do a check with which, you will find auditd under /sbin/auditd. Next we have to ensure it starts when your system boots, so that all configuration you make for auditd is active every time you boot.

To ensure it starts at boot, execute the below command.

chkconfig auditd on

To check if auditd is configured to start at boot use the chkconfig command. As you can see it is stated as "on" for runlevels 2, 3, 4 and 5.

[root@docker ~]# chkconfig --list auditd
auditd          0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@docker ~]# 

Now you will have to start auditd manually the first time. You can use the below example where we check the status of auditd, find out it is not running, start it, check again and see it is running. At the end of this we are sure we have auditd up and running.

[root@docker ~]# 
[root@docker ~]# service auditd status
auditd is stopped
[root@docker ~]# 
[root@docker ~]# service auditd start
Starting auditd:                                           [  OK  ]
[root@docker ~]# 
[root@docker ~]# service auditd status
auditd (pid  17993) is running...
[root@docker ~]# 

Configure auditd 
As an example we will create a rule to watch changes on a file. Based upon this rule the auditd daemon will monitor it and as soon as someone changes the file the audit data will be written to disk.

In this example we will place the repository file for the Oracle Linux repository under audit; we want to be informed when someone reads the file, changes the content or appends to the file. This is done with the below command:

[root@docker yum.repos.d]# auditctl -w /etc/yum.repos.d/public-yum-ol6.repo -p war -k main-repo-file
[root@docker yum.repos.d]#

In this example the following flags are used:

-w /etc/yum.repos.d/public-yum-ol6.repo is used to insert a watch on the file.
-p war is used to state the watch applies to write, append and read.
-k main-repo-file is used to attach a simple name (key) to the watch rule.

Do note that if you want your auditd rules to be persistent, you have to ensure the rules are in the audit.rules file. An empty example is shown below:

[root@docker yum.repos.d]# cat /etc/audit/audit.rules 
# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.

# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 320

# Feel free to add below this line. See auditctl man page

[root@docker yum.repos.d]# 
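To make the watch rule from this example persistent, a minimal sketch (assuming the default /etc/audit/audit.rules file shown above) would be to append the rule below that marker line:

# append the watch rule so it is loaded every time auditd starts
echo '-w /etc/yum.repos.d/public-yum-ol6.repo -p war -k main-repo-file' >> /etc/audit/audit.rules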

Watching auditd in action
With the rule in place you can see that changes (or views) are registered. An example is shown below where we (as root) made a change to the file:

----
time->Mon Oct 30 19:16:13 2017
type=PROCTITLE msg=audit(1509390973.068:30): proctitle=7669002F6574632F79756D2E7265706F732E642F7075626C69632D79756D2D6F6C362E7265706F
type=PATH msg=audit(1509390973.068:30): item=0 name="/etc/yum.repos.d/public-yum-ol6.repo" inode=138650 dev=fb:01 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
type=CWD msg=audit(1509390973.068:30):  cwd="/etc/yum.repos.d"
type=SYSCALL msg=audit(1509390973.068:30): arch=c000003e syscall=89 success=no exit=-22 a0=7ffd6bb1ed80 a1=7ffd6bb1fdd0 a2=fff a3=7ffd6bb1eb00 items=1 ppid=17847 pid=18206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts3 ses=16 comm="vi" exe="/bin/vi" key="main-repo-file"
----

You can use a number of tools such as aureport or ausearch to find the changes that have happened on your system; an example follows below. Having auditd up and running and ensuring you have the right configuration in place is just the beginning. You will have to ensure that you have the right reporting, alerting and triggering in place. Just logging is not providing security; (automatically) reviewing and taking action upon events is what will help you to get a higher level of security on your Oracle Linux system.
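For example, to pull the events recorded for the watch rule we created, you can search on the key we attached to it:

ausearch -k main-repo-file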

Wednesday, October 18, 2017

Oracle Linux - Check your kernel modules

Knowing and understanding what is running on your Oracle Linux system is vital for proper maintenance and proper tuning. As operating systems are seen more and more as something that is just there and should not be a hindrance for development, and as we see the rise of container based solutions and serverless computing, it might look like the operating system becomes less and less important. However, the opposite is true: the operating system becomes more and more important, as it needs to be able to facilitate all the requirements from the containers and functions running on top of it, with as little human intervention as possible.

This means that, if you operate a large deployment of servers and have to ensure everything is automated and operating at peak performance at any moment in time, without having to touch the systems, or at least as little as possible, you need to optimize and automate. To be able to do so you need to understand every component and be able to check whether you need it or can drop it. Whatever you do not need, drop it; it can be a security risk or a consumer of resources you have no need for.

Oracle Linux Kernel modules
Kernel modules are an important part of the Oracle Linux operating system, and being able to check what is loaded and what is not is something you need to understand. Kernel modules are pieces of code that can be loaded and unloaded into the kernel upon demand. They extend the functionality of the kernel without the need to reboot the system.

udev
Today, loading of all necessary modules is handled automatically by udev, so if you do not need any out-of-tree kernel modules, there is no need to put modules that should be loaded at boot in any configuration file. However, there are cases where you might want to load an extra module during the boot process, or blacklist another one for your computer to function properly.

Kernel modules can be explicitly loaded during boot and are configured as a static list in files under /etc/modules-load.d/. Each configuration file is named in the style of /etc/modules-load.d/program.conf. Configuration files simply contain a list of kernel module names to load, separated by newlines. Empty lines and lines whose first non-whitespace character is # or ; are ignored.
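As an illustration, a hypothetical file that loads the loop module at every boot could look like this:

# /etc/modules-load.d/loop.conf (hypothetical example)
# load the loop device module at boot
loop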

lsmod
Checking which kernel modules are loaded in the kernel can be done by using the lsmod command. lsmod will list all the modules. Basically it is a representation of everything you will find in the /proc/modules file, however in a somewhat more understandable way. An example of the lsmod command on an Oracle Linux system running in a Vagrant box is shown below:

[root@localhost ~]# lsmod
Module                  Size  Used by
vboxsf                 38491  1 
ipv6                  391530  20 [permanent]
ppdev                   8323  0 
parport_pc             21178  0 
parport                37780  2 ppdev,parport_pc
sg                     31734  0 
pcspkr                  2094  0 
i2c_piix4              12269  0 
snd_intel8x0           33895  0 
snd_ac97_codec        127589  1 snd_intel8x0
ac97_bus                1498  1 snd_ac97_codec
snd_seq                61406  0 
snd_seq_device          4604  1 snd_seq
snd_pcm               113293  2 snd_intel8x0,snd_ac97_codec
snd_timer              26196  2 snd_seq,snd_pcm
snd                    79940  6 snd_intel8x0,snd_ac97_codec,snd_seq,snd_seq_device,snd_pcm,snd_timer
soundcore               7412  1 snd
e1000                 134545  0 
vboxvideo              42469  1 
ttm                    88927  1 vboxvideo
drm_kms_helper        120123  1 vboxvideo
drm                   343055  4 vboxvideo,ttm,drm_kms_helper
i2c_core               53097  3 i2c_piix4,drm_kms_helper,drm
vboxguest             306752  3 vboxsf,vboxvideo
sysimgblt               2595  1 vboxvideo
sysfillrect             4093  1 vboxvideo
syscopyarea             3619  1 vboxvideo
acpi_cpufreq           12697  0 
ext4                  604127  2 
jbd2                  108826  1 ext4
mbcache                 9265  1 ext4
sd_mod                 36186  3 
ahci                   26684  2 
libahci                27932  1 ahci
pata_acpi               3869  0 
ata_generic             3811  0 
ata_piix               27059  0 
video                  15828  0 
dm_mirror              14787  0 
dm_region_hash         11613  1 dm_mirror
dm_log                  9657  2 dm_mirror,dm_region_hash
dm_mod                106591  8 dm_mirror,dm_log
[root@localhost ~]# 

This could be the starting point of investigating and finding out what is loaded and what is really needed, what is not needed and what might be a good addition in some cases.

modinfo
As you might not be checking your kernel modules on a daily basis, you might not know which module is used for what purpose. In this case modinfo comes to your rescue. If you want to know, for example, what the module snd_seq is used for, you can check the details with modinfo as shown in the example below.

[root@localhost ~]# modinfo snd_seq
filename:       /lib/modules/4.1.12-61.1.28.el6uek.x86_64/kernel/sound/core/seq/snd-seq.ko
alias:          devname:snd/seq
alias:          char-major-116-1
license:        GPL
description:    Advanced Linux Sound Architecture sequencer.
author:         Frank van de Pol , Jaroslav Kysela 
srcversion:     88DDA62432337CC735684EE
depends:        snd,snd-seq-device,snd-timer
intree:         Y
vermagic:       4.1.12-61.1.28.el6uek.x86_64 SMP mod_unload modversions 
parm:           seq_client_load:The numbers of global (system) clients to load through kmod. (array of int)
parm:           seq_default_timer_class:The default timer class. (int)
parm:           seq_default_timer_sclass:The default timer slave class. (int)
parm:           seq_default_timer_card:The default timer card number. (int)
parm:           seq_default_timer_device:The default timer device number. (int)
parm:           seq_default_timer_subdevice:The default timer subdevice number. (int)
parm:           seq_default_timer_resolution:The default timer resolution in Hz. (int)
[root@localhost ~]#

As you can see in the example above, the snd_seq module is the Advanced Linux Sound Architecture sequencer developed by Frank van de Pol and Jaroslav Kysela. Taking this as an example, you can ask yourself: do I need the snd_seq module if I run a server where I have no need for any sound?

Unloading "stuff" you do not need will ensure you have a faster boot sequence timing of your system, less resource consumption and as every component carries a risk of having an issue.... with less components you have theoretically less possible bugs.

In conclusion
Optimizing your system means checking which kernel modules should be loaded and which could be left out on your Oracle Linux system. When you just use the system for common tasks you might not want to spend too much time on this. However, if you are building your own image or investing time in a fully automated way of deploying servers fast in a CI/CD manner, you might want to spend time on making sure only the components you really need are in the system and nothing else.


Tuesday, December 27, 2016

Oracle Linux - Peer cert cannot be verified or peer cert invalid

Whenever trying to update or install a package on Oracle Linux using YUM you will connect to a local or a remote YUM server which will serve you a list of available packages. By default, and based upon good practice, this connection will be encrypted. In some cases, however, a secure connection cannot be made. An example of such a case is when you need to rely on a proxy to the outside world and the proxy is not configured in the right manner to allow you to set up a correct certificate based connection.

In those cases you might end with an error as shown below:
: [Errno 14] Peer cert cannot be verified or peer cert invalid

A couple of options are available to resolve this issue. The simplest way is to tell YUM not to verify the SSL connection between the server and the YUM repository. To set this as a global setting and resolve error number 14 (as shown above) you have to edit the configuration file /etc/yum.conf.

In /etc/yum.conf you have to ensure that sslverify is set to false. This means the below setting should be changed from true to false;

sslverify=false
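As a sketch, you could make this change from a script as well, assuming the sslverify line is already present in /etc/yum.conf:

sed -i 's/^sslverify=true/sslverify=false/' /etc/yum.conf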

Sunday, December 11, 2016

Oracle Linux - transform CSV into JSON with bash

Like it or not, a large number of outputs created by systems is still in CSV, the comma separated value file format. A large amount of information that is created, and needs processing, is represented in CSV format, so it is good to understand how you can script against CSV files in your Oracle Linux bash scripts, to use the information or to transform it into other formats. As an example we use the below, which is a section of an output file generated by an application that logs the use of doors within a building.

[root@localhost tmp]# cat example.csv
100231,AUTHORIZED,11-DEC-2016,13:12:15,IN,USED,F2D001
100231,AUTHORIZED,11-DEC-2016,13:14:01,IN,USED,F2D023
100231,AUTHORIZED,11-DEC-2016,13:15:23,IN,TIMEOUT,F2D024
100231,AUTHORIZED,11-DEC-2016,13:15:59,IN,USED,F2D024
100562,AUTHORIZED,11-DEC-2016,13:16:01,IN,USED,F1D001
100562,AUTHORIZED,11-DEC-2016,13:16:56,IN,USED,F1D003
100562,AUTHORIZED,11-DEC-2016,13:20:12,OUT,USED,F1D003
100562,AUTHORIZED,11-DEC-2016,13:20:58,IN,USED,F1D004
100231,AUTHORIZED,11-DEC-2016,13:20:59,OUT,USED,F2D024
[root@localhost tmp]#

As you can see from the above data we have some information in a per-line format, using CSV to separate the data. In this example the fields have the following meaning:

  • The ID of the access card used
  • The status of the authorization request by the card for a certain door
  • The date the authorization request was made
  • The time the authorization request was made
  • The direction of the revolving door the request was made for
  • The usage status, this can be USED or can be TIMEOUT in case the door was not used
  • The ID for the specific revolving door

The number of things you might want to do from a data mining or security point of view is endless; however, having a CSV file on the file system of your Oracle Linux server is not directly useful. You will have to do something with it. The below example script shows how you can use bash scripting to read the CSV file itself;

#!/bin/bash
INPUT=/tmp/example.csv
OLDIFS=$IFS
IFS=,
[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; }
while read cardid checkstatus checkdate checktime doordirection doorstatus doorid
do
        echo "card used : $cardid"
        echo "check outcome : $checkstatus"
        echo "date : $checkdate"
        echo "time : $checktime"
        echo "direction : $doordirection"
        echo "usage : $doorstatus"
        echo "door used : $doorid"
        echo "----------------------"
done < $INPUT
IFS=$OLDIFS

As you can see in the above example we use the IFS variable to read and separate the values in the CSV file and place them in their own respective variables. The $IFS variable is a special shell variable and stands for Internal Field Separator. The Internal Field Separator (IFS) is used for word splitting after expansion and to split lines into words with the read builtin command. Whenever trying to split lines into words you will have to look into the $IFS variable and how to use it.

The above example is quite simple and just prints the CSV file in a different way to the screen. More interesting is how you could transform a CSV file into something else, for example a JSON file. In the below example we will transform the CSV file into a correctly formatted JSON file.

#!/bin/bash
# NAME:
#   CSV2JSON.sh
#
# DESC:
#  Example script on how to convert a .CSV file to a .JSON file. The
#  code has been tested on Oracle Linux, expected to run on other
#  Linux distributions as well.
#
# LOG:
# VERSION---DATE--------NAME-------------COMMENT
# 0.1       11DEC2016   Johan Louwers    Initial upload to github.com
#
# LICENSE:
# Copyright (C) 2016  Johan Louwers
#
# This code is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This code is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this code; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA.
# *
# */

 inputFile=/tmp/example.csv
 OLDIFS=$IFS


 IFS=,
  [ ! -f $inputFile ] && { echo "$inputFile file not found"; exit 99; }

# writing the "header" section of the JSON file to ensure we have a
# good start and the JSON file will be able to work with multiple
# lines from the CSV file in a JSON array.
 echo "
  {
   \"checklog\":
   ["

# ensuring we have the number of lines from the input file as we
# have to ensure that the last part of the array is closed in a
# manner that no more information will follow. (not closing the
# the section with "}," however closing with "}" instead to
# prevent incorrect JSON formats. We will use a if check in the
# loop later to ensure this is written correctly.
 csvLength=`cat $inputFile | wc -l`
 csvDepth=1

 while read cardid checkstatus checkdate checktime doordirection doorstatus doorid
  do
     echo -e "   {
      \"CARDCHECK\" :
       {
        \"CARDID\" : \"$cardid\",
        \"CHECKSTATUS\" : \"$checkstatus\",
        \"CHECKDATE\" : \"$checkdate\",
        \"CHECKTIME\" : \"$checktime\",
        \"DIRECTION\" : \"$doordirection\",
        \"DOORSTATUS\" : \"$doorstatus\",
        \"DOORID\" : \"$doorid\"
       }"
     if [ "$csvDepth" -lt "$csvLength" ];
      then
        echo -e "     },"
      else
        echo -e "     }"
     fi
   csvDepth=$(($csvDepth+1))
  done < $inputFile

# writing the "footer" section of the JSON file to ensure we do
# close the JSON file properly and in accordance to the required
# JSON formating.
 echo "   ]
  }"

 IFS=$OLDIFS

As you can see, and test, you will now have a valid JSON output format. As JSON is much more of a standard at this moment than CSV, you can more easily use this JSON format in the next step of the process. You could for example send it to a REST API endpoint for storing in a database, or use it in a direct manner to upload it to Elasticsearch.
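As a sketch, assuming the script is saved as csv2json.sh and assuming a hypothetical REST endpoint that accepts JSON documents, the hand-over could look like this:

# generate the JSON file and post it to a (hypothetical) REST endpoint
./csv2json.sh > /tmp/example.json
curl -X POST -H 'Content-Type: application/json' -d @/tmp/example.json http://example.com/api/checklog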

However, as stated, the entire way of working uses the $IFS variable available to you as an integrated part of the Linux shell. The above csv2json.sh code is uploaded to github and in case of any changes they will be made only to github.com; feel free to fork it and use it for your own projects.

Oracle Linux – short tip #4 - grep invert-match

When working and scripting with Oracle Linux you will use grep at one point in time. Grep searches the input for lines containing a match to the given pattern. By default, grep prints the matching lines. Whenever trying to find something in a file you might want to use a combination of cat and pipe the output of cat to grep to show you only the matching lines. And, as stated, grep will show you only the lines that match the given pattern.

Let's assume we have the below file with some random data in it as an example;

[root@localhost tmp]# cat example.txt
#this is a file with some example content

line 1 - abc
line 2 - def

line 3 - ghi
jkl
mno

pqr

#and this is the end of the file
[root@localhost tmp]#

A simple case would be to show some specific content, for example "abc", and print it to the screen, which can be done using the below command;

[root@localhost tmp]# cat example.txt | grep abc
line 1 - abc
[root@localhost tmp]#

Or we could print all lines containing "line" as shown below;

[root@localhost tmp]# cat example.txt | grep line
line 1 - abc
line 2 - def
line 3 - ghi
[root@localhost tmp]#

However, this is only an example showing you how to show lines that match. Something less commonly used is an invert match, showing you all the lines that do NOT match the defined pattern. The invert match can be done using the -v option in grep.

As an example we might want to remove all the lines starting with "#" from the output. This is very useful when, for example, trying to quickly read a configuration file which contains a lot of comments. If you try to read the configuration file from the Apache webserver, most of the httpd.conf file consists of examples and comments you might not be interested in, and you would like to remove those from the output to quickly see what the actual active configuration is. Below is an example where we use the invert match option from grep to remove those lines from the output;

[root@localhost tmp]# cat example.txt | grep -v '^#'

line 1 - abc
line 2 - def

line 3 - ghi
jkl
mno

pqr

[root@localhost tmp]#

Even though this is already helping a bit, we might also want to remove the empty lines. To make the example more readable we show it in a couple of steps. The first step is the cat, the second step is the invert match on lines starting with "#" and the third step is the invert match on empty lines;

[root@localhost tmp]# cat example.txt | grep -v '^#'|  grep -v '^$'
line 1 - abc
line 2 - def
line 3 - ghi
jkl
mno
pqr
[root@localhost tmp]#
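Applied to the Apache example mentioned earlier, the same pattern gives you only the active configuration (assuming the default httpd.conf location):

grep -v '^#' /etc/httpd/conf/httpd.conf | grep -v '^$'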

As you might already be well up to speed in using grep to match all kinds of output, the learning curve for using invert match is nearly zero. It is just a matter of using the -v option in grep to exclude things instead of using the include match which is the common behavior of grep. The grep command is available by default in Oracle Linux and in almost every other Linux distribution.

Oracle Linux – short tip #3 – showing a directory tree

When navigating the Oracle Linux file system it is sometimes more comfortable as a human to see a directory tree structure in one view as opposed to a "flat" view. People using a graphical user interface under Linux or Windows are commonly used to seeing a tree view of the directories they are navigating. Having the tree view makes sense and is easier to read in some cases. By default Oracle Linux does not provide a feature for this on the command line, because the basic installation is based upon a minimal installation for all good reasons. However, the tree option is available and can be installed using yum and the standard Oracle Linux YUM repository.

Installation can be done with;
yum install tree

As soon as tree is installed you can use this option to show you a tree representation of the file system. For example, if you would use ls on a specific directory you would see something like the example below;

[root@localhost input]# ls
by-id  by-path  event0  event1  event2  event3  event4  event5  event6  mice  mouse0  mouse1
[root@localhost input]#

If we execute the tree command in the same location we will see a tree representation of the same directory and all underlying directories, which makes it much quicker to understand the layout of the directory structure and navigate it. In the example below you see the representation made available via the tree command.

[root@localhost input]# tree
.
├── by-id
│   ├── usb-VirtualBox_USB_Tablet-event-joystick -> ../event5
│   └── usb-VirtualBox_USB_Tablet-joystick -> ../mouse1
├── by-path
│   ├── pci-0000:00:06.0-usb-0:1:1.0-event-joystick -> ../event5
│   ├── pci-0000:00:06.0-usb-0:1:1.0-joystick -> ../mouse1
│   ├── platform-i8042-serio-0-event-kbd -> ../event2
│   ├── platform-i8042-serio-1-event-mouse -> ../event3
│   ├── platform-i8042-serio-1-mouse -> ../mouse0
│   └── platform-pcspkr-event-spkr -> ../event6
├── event0
├── event1
├── event2
├── event3
├── event4
├── event5
├── event6
├── mice
├── mouse0
└── mouse1

As you can see, using tree makes it much faster to understand the layout of the directory structure than, for example, using ls and diving manually into the different directories while using Oracle Linux.

Saturday, December 10, 2016

Oracle Linux – finding the executable of a process

One of the important things when using and administering an Oracle Linux instance (or any other distribution for that matter) is to understand what is going on within your system. One of the things to understand is what is running on the Linux instance. Even more important is to ensure that you are constantly aware of, and in full control of, what is running, so you can detect things that are running but should not be, and spot anomalies. However, before you can look into detecting anomalies in running processes you need to understand how to look at what is running on your system.

Commonly, when people like to know what is running on an Oracle Linux system they use the top command or the ps command. The ps command reports a snapshot of the current processes on your system. An example of the ps command is shown below, taken from one of my temporary test servers;

[root@localhost 2853]# ps -ef | grep root
root         1     0  0 Nov26 ?        00:00:01 /sbin/init
root         2     0  0 Nov26 ?        00:00:00 [kthreadd]
root         3     2  0 Nov26 ?        00:00:02 [ksoftirqd/0]
root         5     2  0 Nov26 ?        00:00:00 [kworker/0:0H]
root         6     2  0 Nov26 ?        00:00:00 [kworker/u:0]
root         7     2  0 Nov26 ?        00:00:00 [kworker/u:0H]
root         8     2  0 Nov26 ?        00:00:00 [migration/0]
root         9     2  0 Nov26 ?        00:00:00 [rcu_bh]
root        10     2  0 Nov26 ?        00:00:35 [rcu_sched]
root        11     2  0 Nov26 ?        00:00:05 [watchdog/0]
root        12     2  0 Nov26 ?        00:00:00 [cpuset]
root        13     2  0 Nov26 ?        00:00:00 [khelper]
root        14     2  0 Nov26 ?        00:00:00 [kdevtmpfs]
root        15     2  0 Nov26 ?        00:00:00 [netns]
root        16     2  0 Nov26 ?        00:00:00 [bdi-default]
root        17     2  0 Nov26 ?        00:00:00 [kintegrityd]
root        18     2  0 Nov26 ?        00:00:00 [crypto]
root        19     2  0 Nov26 ?        00:00:00 [kblockd]
root        20     2  0 Nov26 ?        00:00:00 [ata_sff]
root        21     2  0 Nov26 ?        00:00:00 [khubd]
root        22     2  0 Nov26 ?        00:00:00 [md]
root        24     2  0 Nov26 ?        00:00:00 [khungtaskd]
root        25     2  0 Nov26 ?        00:00:05 [kswapd0]
root        26     2  0 Nov26 ?        00:00:00 [ksmd]
root        27     2  0 Nov26 ?        00:00:00 [fsnotify_mark]
root        38     2  0 Nov26 ?        00:00:00 [kthrotld]
root        39     2  0 Nov26 ?        00:00:00 [kworker/u:1]
root        40     2  0 Nov26 ?        00:00:00 [kpsmoused]
root        41     2  0 Nov26 ?        00:00:00 [deferwq]
root       187     2  0 Nov26 ?        00:00:00 [scsi_eh_0]
root       190     2  0 Nov26 ?        00:00:00 [scsi_eh_1]
root       252     2  0 Nov26 ?        00:00:06 [kworker/0:1H]
root       305     2  0 Nov26 ?        00:00:00 [kdmflush]
root       307     2  0 Nov26 ?        00:00:00 [kdmflush]
root       373     2  0 Nov26 ?        00:00:46 [jbd2/dm-0-8]
root       374     2  0 Nov26 ?        00:00:00 [ext4-dio-unwrit]
root       474     1  0 Nov26 ?        00:00:00 /sbin/udevd -d
root      1845     2  0 Nov26 ?        00:00:00 [jbd2/sda1-8]
root      1846     2  0 Nov26 ?        00:00:00 [ext4-dio-unwrit]
root      1893     2  0 Nov26 ?        00:00:00 [kauditd]
root      2100     2  0 Nov26 ?        00:01:08 [flush-252:0]
root      2282     1  0 Nov26 ?        00:00:00 auditd
root      2316     1  0 Nov26 ?        00:00:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root      2445     1  0 Nov26 ?        00:00:00 cupsd -C /etc/cups/cupsd.conf
root      2477     1  0 Nov26 ?        00:00:00 /usr/sbin/acpid
root      2490  2489  0 Nov26 ?        00:00:00 hald-runner
root      2556     1  0 Nov26 ?        00:00:08 automount --pid-file /var/run/autofs.pid
root      2664     1  0 Nov26 ?        00:00:00 /usr/sbin/mcelog --daemon
root      2686     1  0 Nov26 ?        00:00:00 /usr/sbin/sshd
root      2812     1  0 Nov26 ?        00:00:02 /usr/libexec/postfix/master
root      2841     1  0 Nov26 ?        00:00:00 /usr/sbin/abrtd
root      2853     1  0 Nov26 ?        00:00:04 crond
root      2868     1  0 Nov26 ?        00:00:00 /usr/sbin/atd
root      2935     1  0 Nov26 ?        00:00:00 /usr/sbin/certmonger -S -p /var/run/certmonger.pid
root      2981     1  0 Nov26 tty1     00:00:00 /sbin/mingetty /dev/tty1
root      2983     1  0 Nov26 tty2     00:00:00 /sbin/mingetty /dev/tty2
root      2985     1  0 Nov26 tty3     00:00:00 /sbin/mingetty /dev/tty3
root      2987     1  0 Nov26 tty4     00:00:00 /sbin/mingetty /dev/tty4
root      2989     1  0 Nov26 tty5     00:00:00 /sbin/mingetty /dev/tty5
root      2996     1  0 Nov26 tty6     00:00:00 /sbin/mingetty /dev/tty6
root      2999   474  0 Nov26 ?        00:00:00 /sbin/udevd -d
root      3000   474  0 Nov26 ?        00:00:00 /sbin/udevd -d
root      5615  2686  0 Nov30 ?        00:00:07 sshd: root@pts/0
root      5620  5615  0 Nov30 pts/0    00:00:01 -bash
root      9739  5620  0 09:59 pts/0    00:00:00 ps -ef
root      9740  5620  0 09:59 pts/0    00:00:00 grep root
root     16808     2  0 Nov28 ?        00:00:00 [kworker/0:0]
root     17683     1  0 Nov30 ?        00:00:06 /usr/sbin/httpd
root     19810     2  0 Nov28 ?        00:04:08 [kworker/0:2]
root     20820     1  0 Nov28 ?        00:16:47 /usr/bin/consul agent -config-dir=/etc/consul.d
root     21102     1  0 Nov28 ?        00:05:02 /usr/bin/vault server -config=/etc/vault.d
[root@localhost 2853]#

As you can see this provides quite a good insight into what is running and what is not. However, it is not fully showing you all the details you might want to see. For example, we see that some of the lines show the exact path of the executable that is running under this process, for example mingetty (minimal getty for consoles). We can zoom in on mingetty with a grep as shown below;

[root@localhost 2489]# ps -ef |grep mingetty
root      2981     1  0 Nov26 tty1     00:00:00 /sbin/mingetty /dev/tty1
root      2983     1  0 Nov26 tty2     00:00:00 /sbin/mingetty /dev/tty2
root      2985     1  0 Nov26 tty3     00:00:00 /sbin/mingetty /dev/tty3
root      2987     1  0 Nov26 tty4     00:00:00 /sbin/mingetty /dev/tty4
root      2989     1  0 Nov26 tty5     00:00:00 /sbin/mingetty /dev/tty5
root      2996     1  0 Nov26 tty6     00:00:00 /sbin/mingetty /dev/tty6
root      9815  5620  0 10:04 pts/0    00:00:00 grep mingetty
[root@localhost 2489]#

If we look at the above we can be fairly sure that the executable for mingetty is located at /sbin/mingetty. However, if we start looking at the results of other lines this is not always that clear. Take as an example the HAL daemon hald (which is just a good example in this case). hald is a daemon that maintains a database of the devices connected to the system in real time. The daemon connects to the D-Bus system message bus to provide an API that applications can use to discover, monitor and invoke operations on devices.

[root@localhost 2489]# ps -ef|grep hald
68        2489     1  0 Nov26 ?        00:00:18 hald
root      2490  2489  0 Nov26 ?        00:00:00 hald-runner
68        2532  2490  0 Nov26 ?        00:00:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
root      9864  5620  0 10:09 pts/0    00:00:00 grep hald
[root@localhost 2489]#

If we look closely at the above we can learn a number of things. For one, the hald-addon-acpi is a child process of hald-runner and hald-runner is a child process of hald. We can also see that both hald and hald-addon-acpi are running under UID 68, which is the default UID for hald. However, what we are not able to see is the actual executable that is running behind hald.

To find out the exact executable of hald we can go to the /proc directory and then go to the subdirectory which is in line with the pid of the process. In our case this is /proc/2489, which is the directory that holds all the information about process 2489, our hald process. In this directory we will find a lot of interesting information;

[root@localhost /]# cd /proc/2489
[root@localhost 2489]# ls
attr        coredump_filter  fdinfo    mem         numa_maps      root       stat
auxv        cpuset           io        mountinfo   oom_adj        sched      statm
cgroup      cwd              latency   mounts      oom_score      schedstat  status
clear_refs  environ          limits    mountstats  oom_score_adj  sessionid  syscall
cmdline     exe              loginuid  net         pagemap        smaps      task
comm        fd               maps      ns          personality    stack      wchan
[root@localhost 2489]#

Even though all the files and directories within a process /proc/pid directory are interesting, our goal was to find out what the actual running process behind pid 2489 from UID 68 was. To find out we have to look at exe, which is a symbolic link. We can do a ls -la command or, in case we want this to be part of a bash script, we can use the readlink command.

The simple ls command will be able to tell us in a human readable manner what the executable is for this pid.

[root@localhost 2489]# ls -la exe
lrwxrwxrwx. 1 root root 0 Dec  3 10:03 exe -> /usr/sbin/hald
[root@localhost 2489]#

Even though this is great, and we have just been able to find out what the executable file of a pid is in case it is not listed in the output of ps, we might want to include this in some bash script. The easiest way is using the readlink command, which will provide the below;

[root@localhost 2489]# readlink exe
/usr/sbin/hald
[root@localhost 2489]#
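If you need this often, a small helper function (hypothetical name exe_of) could wrap this in a script:

# print the executable behind a given pid (hypothetical helper)
exe_of() {
  readlink "/proc/$1/exe"
}

exe_of 2489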

Making sure you understand a bit more about how to drill into the information of what is running on your system will help you debug issues quicker and make sure you can implement stricter security and monitoring rules on your Oracle Linux systems.

Friday, December 02, 2016

Oracle Linux - installing Consul as server

Consul, developed by HashiCorp, is a solution for service discovery and configuration. Consul is completely distributed, highly available, and scales to thousands of nodes and services across multiple datacenters. Some concrete problems Consul solves: finding the services applications need (database, queue, mail server, etc.), configuring services with key/value information such as enabling maintenance mode for a web application, and health checking services so that unhealthy services aren't used. These are just a handful of important problems Consul addresses.

Consul solves the problem of service discovery and configuration. Built on top of a foundation of rigorous academic research, Consul keeps your data safe and works with the largest of infrastructures. Consul embraces modern practices, is friendly to existing DevOps tooling, and is already deployed in very large infrastructures across multiple datacenters.

Installing Consul on Oracle Linux is relatively easy. You can download Consul from the consul.io website and unpack it. After this you already have a working Consul deployment; in essence it does not require an installation to be able to function. However, to ensure you can use Consul in a production system and it starts as a service, you will have to do a few more things.

First, make sure your consul binary is in a good location where it is accessible for everyone. For example you can decide to move it to /usr/bin where it is widely accessible throughout the system.

Next we have to make sure we can start it relatively easily. You can start consul with all configuration as command-line options; however, you can also put all configuration in a JSON file, which makes a lot more sense. The below example is the content of a file /etc/consul.d/consul.json which I created on my test server to make consul work with a configuration file. The data_dir specified is not the best location to store persistent data, so you might want to select a different data_dir location.

{
  "datacenter": "private_dc",
  "data_dir": "/tmp/consul3",
  "log_level": "INFO",
  "node_name": "consul_0",
  "server": true,
  "bind_addr": "127.0.0.1",
  "bootstrap_expect": 1
}

Now we have ensured the configuration is located in /etc/consul.d/consul.json, we would like to ensure that the consul server starts as a service every time the machine boots. I used the below code as the init script in /etc/init.d.

#!/bin/sh
#
# consul - this script manages the consul agent
#
# chkconfig:   345 95 05
# processname: consul

### BEGIN INIT INFO
# Provides:       consul
# Required-Start: $local_fs $network
# Required-Stop:  $local_fs $network
# Default-Start: 3 4 5
# Default-Stop:  0 1 2 6
# Short-Description: Manage the consul agent
### END INIT INFO

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

exec="/usr/bin/consul"
prog=${exec##*/}

lockfile="/var/lock/subsys/$prog"
pidfile="/var/run/${prog}.pid"
logfile="/var/log/${prog}.log"
sysconfig="/etc/sysconfig/$prog"
confdir="/etc/${prog}.d"

[ -f $sysconfig ] && . $sysconfig

export GOMAXPROCS=${GOMAXPROCS:-2}

start() {
    [ -x $exec ] || exit 5
    [ -d $confdir ] || exit 6

    echo -n $"Starting $prog: "
    touch $logfile $pidfile
    daemon "{ $exec agent $OPTIONS -config-dir=$confdir &>> $logfile & }; echo \$! >| $pidfile"

    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch $lockfile
    echo
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $exec -INT 2>> $logfile
    RETVAL=$?
    [ $RETVAL -eq 0 ] && rm -f $pidfile $lockfile
    echo
    return $RETVAL
}

restart() {
    stop
    while :
    do
        ss -pl | fgrep "((\"$prog\"," > /dev/null
        [ $? -ne 0 ] && break
        sleep 0.1
    done
    start
}

reload() {
    echo -n $"Reloading $prog: "
    killproc -p $pidfile $exec -HUP
    echo
}

force_reload() {
    restart
}

configtest() {
    $exec configtest -config-dir=$confdir
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart)
        $1
        ;;
    reload|force-reload)
        rh_status_q || exit 7
        $1
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 7
        restart
        ;;
    configtest)
        $1
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

exit $?

As soon as you have the above code in the /etc/init.d/consul file and have made the file executable, you can use chkconfig to add it as a system service; it will then ensure consul is stopped and started in the right way whenever you stop or start your server. This makes your consul server a lot more resilient and you do not have to undertake any manual actions when you restart your Oracle Linux machine.
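As a sketch, registering and starting the service would then look like this:

chmod +x /etc/init.d/consul
chkconfig --add consul
chkconfig consul on
service consul start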

You are able to find the latest version of the script and the configuration file on my github repository. This is tested on Oracle Linux 6. It will most likely also work on other Linux distributions however it is not tested.

Thursday, December 01, 2016

Oracle Linux – short tip #2 – reuse a command from history

Whenever using Linux from a command line you will be typing a large number of commands throughout the day in your terminal. Every now and then you want to review what command you used to achieve something, or you might even want to reuse a command from the history. As your terminal will only show a limited number of lines and you cannot scroll backwards endlessly, you can make use of history. The history command will show you a long list of commands you executed.

As an example of the history command you can see the output of one of my machines:

[root@localhost ~]# history
    1  ./filebeat.sh -configtest -e
    2  service filebeat.sh status
    3  service filebeat status
    4  chkconfig --list
    5  chkconfig --list | grep file
    6  chkconfig --add filebeat
    7  service filebeat status
    8  service filebeat start
    9  cd /var/log/
   10  ls
   11  cat messages
   12  cd /etc/filebeat/
   13  ls
   14  vi filebeat.yml
   15  service filebeat stop
   16  service filebeat start
   17  date
   18  tail -20 /var/log/messages
   19  date
   20  tail -f /var/log/messages
   21  clear

Having the option to travel back in time and review which commands you used is great, especially if you are trying to figure something out and have tried a command a number of times in different ways and you are no longer sure what some of the previous “versions” of your attempt were.

An additional trick you can do with history is to reuse a command by simply calling it back from history without the need to type it again. As an example, in the above output you can notice that line 17 is date. If we want to reuse it we can simply type !17 on the command line. As an example we execute line 17 again.

[root@localhost ~]# !17
date
Sun Nov 27 13:41:55 CET 2016
[root@localhost ~]#

Oracle Linux – short tip #1 – using apropos

It happens to everyone, especially on Monday mornings: you suddenly cannot remember a command which normally is at the top of your head and which you have used a thousand times. The way to find the command you are looking for while using Linux is making use of the apropos command. apropos searches a set of database files containing short descriptions of system commands for keywords and displays the result on the standard output.

As an example, I want to do something with a service; however, I am not sure which command to use or where to start researching. We can use apropos for a first hint as shown below:

[root@localhost ~]# apropos "system service"
chkconfig            (8)  - updates and queries runlevel information for system services
[root@localhost ~]#

As another example, I want to do something with utmp and I want to know which commands would be providing me functionality to work with utmp. I can use the below apropos command to find out.

[root@localhost ~]# apropos utmp
dump-utmp            (8)  - print a utmp file in human-readable format
endutent [getutent]  (3)  - access utmp file entries
getutent             (3)  - access utmp file entries
getutid [getutent]   (3)  - access utmp file entries
getutline [getutent] (3)  - access utmp file entries
getutmp              (3)  - copy utmp structure to utmpx, and vice versa
getutmpx [getutmp]   (3)  - copy utmp structure to utmpx, and vice versa
login                (3)  - write utmp and wtmp entries
logout [login]       (3)  - write utmp and wtmp entries
pututline [getutent] (3)  - access utmp file entries
setutent [getutent]  (3)  - access utmp file entries
utmp                 (5)  - login records
utmpname [getutent]  (3)  - access utmp file entries
utmpx.h [utmpx]      (0p)  - user accounting database definitions
wtmp [utmp]          (5)  - login records
[root@localhost ~]#

It is not the best solution, and you have to be a bit creative in guessing how the description string apropos searches might be worded; however, in general it is a good starting point when looking for a command while using Linux.

Sunday, November 27, 2016

Oracle Linux - Consul failed to sync remote state: No cluster leader

Whenever you are installing and running Consul from HashiCorp on Oracle Linux you might run into some strange errors. Even though your configuration JSON file passes the configuration validation, the log file contains a long repetitive list of the same errors complaining about "failed to sync remote state: No cluster leader" and "coordinate update error: No cluster leader".

Consul is a tool for service discovery and configuration. It provides high level features such as service discovery, health checking and key/value storage. It makes use of a group of strongly consistent servers to manage the datacenter. Consul is developed by HashiCorp and is available from its own website.

It might be that you have the below output when you start consul:

    2016/11/25 21:03:50 [INFO] raft: Initial configuration (index=0): []
    2016/11/25 21:03:50 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
    2016/11/25 21:03:50 [INFO] serf: EventMemberJoin: consul_1 127.0.0.1
    2016/11/25 21:03:50 [INFO] serf: EventMemberJoin: consul_1.private_dc 127.0.0.1
    2016/11/25 21:03:50 [INFO] consul: Adding LAN server consul_1 (Addr: tcp/127.0.0.1:8300) (DC: private_dc)
    2016/11/25 21:03:50 [INFO] consul: Adding WAN server consul_1.private_dc (Addr: tcp/127.0.0.1:8300) (DC: private_dc)
    2016/11/25 21:03:55 [WARN] raft: no known peers, aborting election
    2016/11/25 21:03:57 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:04:14 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:04:30 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:04:50 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:05:01 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:05:26 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:05:34 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:06:02 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:06:10 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:06:35 [ERR] agent: coordinate update error: No cluster leader

The main reason for the above is that you try to start consul in an environment where there is no cluster available, or where it is the first node of the cluster. In case you start it as the first (and only) node of the cluster, you have to ensure that you include -bootstrap-expect 1 as a command-line option when starting.

You can also include "bootstrap_expect": 1 in the json configuration file if you use a configuration file to start Consul.

As an example, the below start of Consul will prevent the above errors:

consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul

Friday, November 25, 2016

Oracle Linux - build Elasticsearch network.host configuration

With the latest version of Elasticsearch the directives used to ensure your Elasticsearch daemon is listening on the correct interfaces on your Linux machine have changed. By default Elasticsearch will listen on your local interface only, which is a bit useless in most cases.

Whenever deploying Elasticsearch manually it will not be a problem to configure it by hand; however, we are moving more and more to a world where deployments are done fully automatically. In case you use fully automatic deployment and depend on bash scripting to do some of the tasks for you, the below scripts will be handy.

In my case I used the below scripts to automatically configure Elasticsearch on Oracle Linux 6 instances to listen on all available interfaces to ensure that Elasticsearch is directly useable for external servers and users.

To ensure your Elasticsearch daemon is listening on all interfaces you will have to ensure the below line is present; at least in my case, as I have two external interfaces and one local loopback interface in my instance.

network.host: _eth0_,_eth1_,_local_

When you are sure your machine will always have 2 external network interfaces and one local loopback interface you want Elasticsearch to listen on you could hardcode this. However, if you want to make a more generic and stable solution you should read the interface names and build this configuration line.

The ifconfig command will give you the interfaces in a human readable format, which is not very usable in a programmatic manner. However, ifconfig will provide the required output, which means we can use it in combination with sed to get a list of the interface names only. The below example shows this:

[root@localhost tmp]# ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d'
eth0
eth1
[root@localhost tmp]#

However, this is not yet in the format we want, so we have to create a small script to get it closer to that format. The below code example can be used for this:

#!/bin/bash

  for OUTPUT in $(ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d')
  do
   echo "_"$OUTPUT"_"
  done

If we execute this we will have the following result:

[root@localhost tmp]# ./test.sh
_eth0_
_eth1_
[root@localhost tmp]#

As you can see it looks more like the input we want for the Elasticsearch configuration file; however, we are not fully done. First of all, the _local_ entry is missing and we still have a multi-line representation. The below code example shows the full script you can use to build the configuration line. We have added the _local_ and we use awk to make sure it is one comma separated line you can use.

#!/bin/bash
 {
  for OUTPUT in $(ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d')
  do
   echo "_"$OUTPUT"_"
  done
echo "_local_"
 } | awk -vORS=, '{ print $1 }' | sed 's/,$/\n/'

If we run the above code we will get the below result:

[root@localhost tmp]# ./test.sh
_eth0_,_eth1_,_local_
[root@localhost tmp]#

You can use this in a wider script to ensure the line (including network.host:) is written to the /etc/elasticsearch/elasticsearch.yml file, which is used by Elasticsearch as the main configuration file. As stated, I used this script and tested it while deploying Elasticsearch on Oracle Linux 6. It is expected to work on other Linux distributions, however it has not been tested.
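Putting it together, a minimal sketch (assuming the default configuration file location and the script saved as test.sh, as above) could append the generated line like this:

# write the generated line to the Elasticsearch configuration file
echo "network.host: $(./test.sh)" >> /etc/elasticsearch/elasticsearch.yml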

Friday, October 21, 2016

Oracle Linux : sending mail with Sendmail

There can be many reasons why you need to send mail from your Linux host to some mail account. For example, you have an application that needs to send out mail to end users; in those cases you will use a central SMTP mail relay server within your corporate IT footprint. However, in some cases you want scripting that makes use of a local SMTP instance that will send the mail for you. This can be directly to the end user or via an SMTP relay server.

In case you want your local Linux machine to send out the messages directly to the recipient, you will have to ensure that (A) your machine is allowed to make the connection outside of your firewall to the recipient mail server and (B) you have a local MTA (Mail Transfer Agent) in place. The best known MTAs are Sendmail and Postfix. We will use Sendmail as an example while showing how to send mail from an Oracle Linux machine to a gmail account (or whatever account you require) by using simple bash commands and scripting.

Install Sendmail on Oracle Linux
Installing Sendmail is most likely the easiest step in the entire blogpost. You can install Sendmail by making use of the default Oracle Linux YUM repositories, with the below command. You will notice we install sendmail and sendmail-cf; sendmail-cf is used to make your life much easier when configuring and reconfiguring Sendmail.

yum install sendmail sendmail-cf

For some reason Sendmail might give you some strange errors every now and then right after you install it and start using it. A good practice to ensure everything is ready to go is to stop and start the sendmail service again, as shown in the example below.

[root@testbox08 log]#
[root@testbox08 log]# service sendmail status
sendmail (pid  968) is running...
sm-client (pid  977) is running...
[root@testbox08 log]#
[root@testbox08 log]# service sendmail stop
Shutting down sm-client:                                   [  OK  ]
Shutting down sendmail:                                    [  OK  ]
[root@testbox08 log]#
[root@testbox08 log]# service sendmail start
Starting sendmail:                                         [  OK  ]
Starting sm-client:                                        [  OK  ]
[root@testbox08 log]# service sendmail status
sendmail (pid  1139) is running...
sm-client (pid  1148) is running...
[root@testbox08 log]#
[root@testbox08 log]#

After this your Sendmail installation on Oracle Linux should be ready to go and you should be able to send out mail. We can easily test this by sending a test message.

Sending your first mail with sendmail
Sending mail with Sendmail is relatively easy and you can make it even easier by ensuring your entire message is within a single file. As an example, I created the file /tmp/mailtest.txt with the following content:

To: xxx@gmail.com
Subject: this is a test mail

this is the content of the test mail

This would mean the mail is sent to my gmail account, the subject should be “this is a test mail” and the body of the mail will show “this is the content of the test mail”. Sending this specific mail (file) can be done by executing the below command:

[root@testbox08 tmp]# sendmail -t < /tmp/mailtest.txt

However, a quicker way of ensuring your message is processed is removing the “To: xxx@gmail.com” part and using a command like the one shown below:

[root@testbox08 log]# sendmail xxx@gmail.com < /tmp/mailtest.txt

The below screenshot shows that the mail has arrived in the mailbox, as expected. You can also see it has gotten the name of the account and the fully qualified hostname of the box I used to send the mail from; in this case a Linux host located in the Oracle Public Cloud.


Making your reply address look better
The above mail looks a bit crude and unpolished; not the mail you would expect to receive as an end user, and especially not as a customer. Meaning, we have to make sure the mail that is received by the recipient is formatted in a better way.

The first thing we would like to fix is the name of the sending party. As an example, we want the name shown as "customer service" and the reply address to become cs@mycompany.com. To do so we add a "From" line to the /tmp/mailtest.txt file, which looks like:

From: customer service <cs@mycompany.com>

Due to this formatting the sender is not shown as a bare cs@mycompany.com; it is rather shown the way we commonly see it, as in the screenshot below:

Giving the mail priority
Now, as this is a mail from customer service informing your customer that his flight has been cancelled, it might be appropriate to give this mail a priority flag, as sketched below.
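A minimal sketch of the mail file with priority headers added (X-Priority and Importance are commonly honored headers, though how they are displayed varies per mail client):

From: customer service <cs@mycompany.com>
To: xxx@gmail.com
Subject: this is a test mail
X-Priority: 1
Importance: high

this is the content of the test mail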

Doing more with headers
In essence you can define every mail header that is allowed and understood by the receiving side. To get an understanding of the common headers you can use, have a look at RFC 2076 "Common Internet Message Headers".

Sending HTML formatted mail
It is quite common to use HTML to format emails. Sending your email in an HTML formatted manner requires that you have the right headers in your email and that you format your message body in the appropriate HTML code (please review the example on github).

An important thing to remember is that not every mail client is able to render HTML. For this it is good to use the "Content-Type: multipart/alternative;" header in combination with "Content-Type: text/html; charset=UTF-8" per part. This allows you to offer both an HTML formatted and a plain-text version of the mail, so the client can pick the one it can display; a minimal sketch of such a message is shown below.
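A hedged sketch of a multipart message file (the boundary string is an arbitrary example; both a plain-text and an HTML part are offered):

From: customer service <cs@mycompany.com>
To: xxx@gmail.com
Subject: this is a test mail
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="boundary42"

--boundary42
Content-Type: text/plain; charset=UTF-8

this is the content of the test mail

--boundary42
Content-Type: text/html; charset=UTF-8

<html><body><p>this is the <b>content</b> of the test mail</p></body></html>

--boundary42--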


All the examples above can be found in the example mail file "/tmp/mailtest.txt" which is available on github.

Oracle Linux - Checking installed packages

In some cases you want to verify from within a bash script whether a package is installed on your Oracle Linux instance. You can query what is installed by using the "list installed" option of the yum command. However, this gives you a human readable result and not something that works easily in the flow of a script. In essence you would like a boolean value returned that tells you whether or not a package is installed on your Oracle Linux instance.

The below code example is a bash script that does exactly that. Within the example you see the packageInstalled function, which takes the package name you are looking for as an argument. The result will be true or false.

#!/bin/bash

# Echoes "true" when at least one installed package matches the name in $1,
# "false" otherwise. Note that grep matches substrings, so "wget" would also
# match any other installed package containing that string.
function packageInstalled () {
     numberOfPackages=$(yum list installed | grep -c "$1")
     if [ "$numberOfPackages" -gt "0" ]; then
           echo "true"
     else
           echo "false"
     fi
}

packageInstalled wget

In the example we check for the wget package; change wget to whatever package you need to verify. Using this building block function can help you write a more complex script that installs packages only when needed. A stricter alternative is sketched below.
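If you need an exact match instead of the substring match grep performs, a minimal sketch using rpm -q (which exits with status 0 only when the exact package name is installed) looks like this:

#!/bin/bash

# rpm -q returns exit code 0 only if the exact package name is installed.
if rpm -q wget > /dev/null 2>&1; then
    echo "true"
else
    echo "false"
fi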

Using SQLite on Oracle Linux

Most people who work with Oracle technology and need a database to store information will almost by default think of an Oracle Database. However, even though the Oracle database is amazing, it is not a fit for all situations. If you just need to store some information locally, or for a very small application, and you do not worry too much about things like performance, you might want to turn to other solutions.

In cases where you need something a bit smarter and easier to use than flat file storage or parsing JSON/XML files, and a full Oracle database is overkill, you might want to look at SQLite. SQLite is an open source software library that implements a self-contained (single file), zero-configuration, transactional SQL database engine. SQLite supports multi-user access, but only a single writer can update the database at a time. It is largely "untyped": columns have type affinities rather than strict types, so you can store almost any value in any column.

SQLite is shipped by default with Oracle Linux 7 and is widely used in scripting whenever semi-smart storage of data is needed. Investing some time in understanding SQLite is well worth it if you regularly develop code and scripts for your Oracle Linux systems or for other purposes.

Interacting with SQLite
The easiest way to explore SQLite is the SQLite command line. From your Linux shell you can use the sqlite3 command to open it. The example below shows how we open a new database, create a table, write some data to the table, query it and then exit. As soon as you open a database that does not yet exist and write something to it, the file is created on the filesystem.

[root@testbox08 tmp]#
[root@testbox08 tmp]# ls showcase.db
ls: cannot access showcase.db: No such file or directory
[root@testbox08 tmp]#
[root@testbox08 tmp]# sqlite3 showcase.db
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table objecttracker (object, version, object_id INTEGER);
sqlite> insert into objecttracker values ('api/getNewProduct','1.3',10);
sqlite> insert into objecttracker values ('api/getProductPrice','1.3',20);
sqlite> select * from objecttracker;
api/getNewProduct|1.3|10
api/getProductPrice|1.3|20
sqlite> .exit
[root@testbox08 tmp]#
[root@testbox08 tmp]# ls showcase.db
showcase.db
[root@testbox08 tmp]#

As you can see from the above example, we never explicitly create the file showcase.db; it is created the moment we first write something to the database. In our case the first write is the creation of the table objecttracker.

Knowing your way around the SQLite command line is something you need, however, the more interesting part is using SQLite in a programmatic manner.

Coding against SQLite
There are many ways to code against SQLite; a large number of languages provide a standard way of interacting with it. However, if you simply want to use a bash script on your Oracle Linux instance, you can very well do so.

Working with SQLite from bash is fairly simple if you understand the SQLite command line: you simply pass the SQL statement as an argument to the sqlite3 command. As an example, if we want to query the table we just created, we can use the below:

[root@testbox08 tmp]#
[root@testbox08 tmp]# sqlite3 showcase.db "select * from objecttracker;"
api/getNewProduct|1.3|10
api/getProductPrice|1.3|20
[root@testbox08 tmp]#
[root@testbox08 tmp]#

As you can see, we now get the exact same output as when we executed the select statement in the SQLite command line.

This means you can execute a SQLite command this way in a bash script and parse the results in bash for further use (a small parsing sketch is shown below). In general SQLite provides a great way to store data in a database without the need to install a full-fledged database server. In a lot of (small) cases a full database such as the Oracle database is overkill, as you only want to store some small sets of data and retrieve it using SQL statements.
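As a minimal sketch (assuming the showcase.db database and objecttracker table created above), the default pipe-separated output can be split into bash variables per row:

#!/bin/bash

# Query the objecttracker table and split each pipe-separated row
# into separate variables for further use in the script.
sqlite3 showcase.db "select * from objecttracker;" | \
while IFS='|' read -r object version object_id
do
    echo "object ${object} (id ${object_id}) is at version ${version}"
done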

Ensuring ILOM power up on Exadata with IPMI

Like it or not, power interruptions are still a threat to servers. Even though servers ship with dual power supplies which, if done correctly, are plugged into different power feeds within the datacenter, a power outage can still happen. And even though datacenters should have backup power and provide uninterrupted power to your machines, it still might happen. To ensure all your systems behave in the right way when power comes back on, you can make use of some settings within the IPMI configuration.

The Intelligent Platform Management Interface (IPMI) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host system's CPU, firmware (BIOS or UEFI) and operating system. IPMI defines a set of interfaces used by system administrators for out-of-band management of computer systems and monitoring of their operation. For example, IPMI provides a way to manage a computer that may be powered off or otherwise unresponsive by using a network connection to the hardware rather than to an operating system or login shell.

Oracle servers have IPMI on board, and it is good practice to make use of the HOST_LAST_POWER_STATE policy: it ensures your server boots directly when power comes back online, but stays down if the server was already powered off before the outage.

To verify the ILOM power-up configuration, enter the following commands as the root user on each database and storage server:

if [ -x /usr/bin/ipmitool ]
then
#Linux
ipmitool sunoem cli force "show /SP/policy" | grep -i power
else
#Solaris
/opt/ipmitool/bin/ipmitool sunoem cli force "show /SP/policy" | grep -i power
fi;

When running this on an Exadata the output varies by Exadata software version and should be similar to:

Exadata software version 11.2.3.2.1 or higher:
HOST_AUTO_POWER_ON=disabled
HOST_LAST_POWER_STATE=enabled

Exadata software version 11.2.3.2.0 or lower:
HOST_AUTO_POWER_ON=enabled
HOST_LAST_POWER_STATE=disabled

If the output is not as expected, you will have to correct the settings so your Exadata machine boots directly after power is restored; a hedged sketch of correcting them through ipmitool is shown below.
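As a sketch, assuming the same ipmitool sunoem pass-through used in the check above (verify the exact policy names against your ILOM version before applying):

# Set the ILOM power policies so the host follows its last power state.
ipmitool sunoem cli force "set /SP/policy HOST_AUTO_POWER_ON=disabled"
ipmitool sunoem cli force "set /SP/policy HOST_LAST_POWER_STATE=enabled"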

Friday, October 14, 2016

Using osquery in Oracle Linux

Recently the guys at Facebook released an internal project as open source code. Now you can make use of some of the internal solutions Facebook uses to keep track of and analyse the compute nodes in their datacenters. Osquery allows you to easily ask questions about your Linux, Windows, and OS X infrastructure. Whether your goal is intrusion detection, infrastructure reliability, or compliance, osquery gives you the ability to empower and inform a broad set of organizations within your company.

What osquery provides is a collector that analyses your operating system on a scheduled basis and stores this information in a SQLite database locally on your system. In essence osquery is an easily configurable and extensible framework that will do the majority of collection tasks for you. What makes it a great product is that everything is exposed through SQLite, which enables you to use standard SQL to ask questions about your system. The queries the daemon runs on a schedule are defined in its configuration file; a minimal sketch of such a scheduled query is shown below.
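A minimal osquery.conf sketch (the query name uptime_snapshot and the 300-second interval are arbitrary examples) that records the system uptime every five minutes:

{
  "schedule": {
    "uptime_snapshot": {
      "query": "select * from uptime;",
      "interval": 300
    }
  }
}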

After a heads-up from the Oracle Linux product team that Facebook had released this as open source, I installed it on an Oracle Linux instance to investigate the usability of osquery.

Installing osquery
Installation is quite straightforward. An RPM is provided which installs without any issue on Oracle Linux 6. Below is an example of downloading and installing osquery on an Oracle Linux 6 instance.

[root@testbox08 ~]#
[root@testbox08 ~]# wget "https://osquery-packages.s3.amazonaws.com/centos6/osquery-2.0.0.rpm" -b
Continuing in background, pid 28491.
Output will be written to “wget-log”.
[root@testbox08 ~]#
[root@testbox08 ~]# ls -rtl osq*.rpm
-rw-r--r-- 1 root root 13671146 Oct  4 17:13 osquery-2.0.0.rpm
[root@testbox08 ~]# rpm -ivh osquery-2.0.0.rpm
warning: osquery-2.0.0.rpm: Header V4 RSA/SHA256 Signature, key ID c9d8b80b: NOKEY
Preparing...                ########################################### [100%]
   1:osquery                ########################################### [100%]
[root@testbox08 ~]#
[root@testbox08 ~]#

When you check, you will notice that osquery does not start by default and that some manual actions are required to get it started. This is because no default configuration is provided during the installation: the daemon expects the configuration file /etc/osquery/osquery.conf to be available, and this file is not part of the RPM installation. This results in the below warning when you try to start the osquery daemon:

[root@testbox08 init.d]#
[root@testbox08 init.d]# ./osqueryd start
No config file found at /etc/osquery/osquery.conf
Additionally, no flags file or config override found at /etc/osquery/osquery.flags
See '/usr/share/osquery/osquery.example.conf' for an example config.
[root@testbox08 init.d]#

Without going into the details of how to configure osquery and tune it for your specific installation, you can start testing osquery by simply using the default example configuration file.

[root@testbox08 osquery]#
[root@testbox08 osquery]# cp /usr/share/osquery/osquery.example.conf /etc/osquery/osquery.conf
[root@testbox08 osquery]# cd /etc/init.d
[root@testbox08 init.d]# ./osqueryd start
[root@testbox08 init.d]# ./osqueryd status
osqueryd is already running: 28514
[root@testbox08 init.d]#
[root@testbox08 osquery]#

As you can see, we now have the osquery daemon osqueryd running under PID 28514. As it is a collector, it is good to wait a couple of seconds so the collector can make its first collection and store it in the SQLite database. As soon as it has done so, the first results are stored in your database and you can query them.

To make life easier, you can use the below script to install osquery in a single go:

#!/bin/sh
# Download and install osquery, apply the example configuration and start it.
wget "https://osquery-packages.s3.amazonaws.com/centos6/osquery-2.0.0.rpm" -O /tmp/osquery.rpm
rpm -ivh /tmp/osquery.rpm
rm -f /tmp/osquery.rpm
cp /usr/share/osquery/osquery.example.conf /etc/osquery/osquery.conf
/etc/init.d/osqueryd start

Using osqueryi
The main way to interact with the osquery data is osqueryi, which is located at /usr/bin/osqueryi. When you execute osqueryi you are presented with a command line interface you can use to query the data collected by the osqueryd collector.

[root@testbox08 /]#
[root@testbox08 /]# osqueryi
osquery - being built, with love, at Facebook
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using a virtual database. Need help, type '.help'
osquery>

As an example you can query which pci devices are present with a single SQL query as shown below:

osquery>
select * from pci_devices;
+--------------+-----------+------------------+--------+-----------+-------+----------+
| pci_slot     | pci_class | driver           | vendor | vendor_id | model | model_id |
+--------------+-----------+------------------+--------+-----------+-------+----------+
| 0000:00:00.0 |           |                  |        | 8086      |       | 1237     |
| 0000:00:01.0 |           |                  |        | 8086      |       | 7000     |
| 0000:00:01.1 |           | ata_piix         |        | 8086      |       | 7010     |
| 0000:00:01.3 |           |                  |        | 8086      |       | 7113     |
| 0000:00:02.0 |           |                  |        | 1013      |       | 00B8     |
| 0000:00:03.0 |           | xen-platform-pci |        | 5853      |       | 0001     |
+--------------+-----------+------------------+--------+-----------+-------+----------+
osquery>

As osqueryi uses a SQLite backend, we can use the standard options and SQL provided by SQLite and, for example, get a full overview of all tables that are present by using the .tables command in the command line interface. This provides the below output, which is a good starting point to investigate what type of information is collected by default and can be used:

  acpi_tables
  apt_sources
  arp_cache
  authorized_keys
  block_devices
  carbon_black_info
  chrome_extensions
  cpu_time
  cpuid
  crontab
  deb_packages
  device_file
  device_hash
  device_partitions
  disk_encryption
  dns_resolvers
  etc_hosts
  etc_protocols
  etc_services
  file
  file_events
  firefox_addons
  groups
  hardware_events
  hash
  interface_addresses
  interface_details
  iptables
  kernel_info
  kernel_integrity
  kernel_modules
  known_hosts
  last
  listening_ports
  logged_in_users
  magic
  memory_info
  memory_map
  mounts
  msr
  opera_extensions
  os_version
  osquery_events
  osquery_extensions
  osquery_flags
  osquery_info
  osquery_packs
  osquery_registry
  osquery_schedule
  pci_devices
  platform_info
  process_envs
  process_events
  process_memory_map
  process_open_files
  process_open_sockets
  processes
  routes
  rpm_package_files
  rpm_packages
  shared_memory
  shell_history
  smbios_tables
  socket_events
  suid_bin
  syslog
  system_controls
  system_info
  time
  uptime
  usb_devices
  user_events
  user_groups
  user_ssh_keys
  users
  yara
  yara_events

The example shown above is an extremely simple one; anyone with a bit of SQL experience will be able to write much more extensive and interesting queries that can make life as a Linux administrator much easier, as illustrated below.
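As a hedged illustration (using table and column names from the list above; the exact columns may differ per osquery version), a query that joins processes to their listening ports could look like:

osquery> select p.name, p.pid, lp.port, lp.address from listening_ports lp join processes p using (pid) where lp.port > 0;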

Script against osquery
Even though the command line interface is nice for ad-hoc queries against a single Oracle Linux instance, it is more interesting to see how you can use osquery in a scripted manner. As it is based upon SQLite, you can use the same solutions you would use when coding against a standard SQLite database. This means you can use bash scripting, but also most other scripting and programming languages popular on the Linux platform, as most languages have options to interact with a SQLite database. A small sketch of scripting against osquery from bash is shown below.
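A minimal bash sketch (assuming this osquery version supports passing a query as an argument and the --json output flag, as the osquery documentation describes):

#!/bin/bash

# Run a one-off osquery query from a script; --json makes the output
# easy to parse with any JSON-aware tool further down the pipeline.
osqueryi --json "select name, pid, uid from processes limit 5;"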

Obtaining OPCinit for Oracle Linux

When deploying an Oracle Linux instance on the Oracle Public Cloud you will most likely use the default Oracle Linux templates. That is, up until the moment you need more than what is provided by the template.

It might very well be that at some point you feel that scripting additional configuration after deployment is no longer satisfactory and you would like to have your own private template. Oracle provides some good documentation on how to do this; see the "Using Oracle Compute Cloud Service" documentation under the "Building Your Own Machine Images" section.

The documentation however lacks one very important point: it references using OPCinit when creating your template, yet until recently OPCinit itself was missing online and you were not able to download it. You could reverse engineer OPCinit from an existing template, however, the vanilla download was not available, nor was it available on the Oracle Linux YUM repository.

Now Oracle has solved this by providing a download link to a zip file containing two RPMs you can install in your template to ensure it makes use of OPCinit.

You can download OPCinit from the Oracle website at this location. Unfortunately it is not available on the public Oracle Linux YUM repository, so you have to download it manually.

Friday, September 09, 2016

Automate Oracle Linux security hardening

Security is more and more receiving the level of attention it deserves. For a long time security has commonly been seen as something that the other department is surely handling. Increasingly, security is becoming ingrained at every level of IT departments and business organizations.

Companies start to realize that security should be the foundation of the enterprise architecture and not something you add on top at a later stage in a half-hearted way. Architects, developers and administrators, as well as business and IT management, become aware that ensuring the right level of security is vital for survival and crucial to ensure day-to-day operations are not hindered in any way or form.

When thinking about security most people think about firewalls, antivirus solutions and passwords. Commonly overlooked are the security aspects of the code you develop, how to secure a database, and ensuring the operating system used on your servers is configured in the most optimal and secure manner.

Operating system security
Operating systems, and Linux is no different in this, are still too often installed in a next, next, finish manner, without looking at how to secure the installation correctly. Ensuring you have the correct level of hardening on your Oracle Linux systems is vital for full end-to-end security of your IT footprint.

CIS Benchmark
Good documentation and guidance on how to properly harden your Oracle Linux operating system is provided by both Oracle and CIS. CIS provides a benchmark guideline on what needs to be in place to ensure a properly secured Oracle Linux installation: the CIS Oracle Linux 7 Benchmark provides prescriptive guidance for establishing a secure configuration posture for Oracle Linux version 7.0.

By default the installation Oracle provides is already secured up to a certain level, without the need to undertake any specific actions. Implementing the CIS benchmark guidelines will bring the security of your Oracle Linux system to an even higher level.

Additionally, the CIS benchmark is a well-respected and accepted hardening guideline. Implementing it and scoring the level of implementation will give you good insight into how your hardening compares against industry standards; a hedged sketch of the kind of check such a benchmark prescribes is shown below.
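As a hedged illustration (the actual checks and expected values come from the CIS benchmark document itself), a single benchmark-style check in bash could look like:

#!/bin/bash

# Example check in the spirit of a CIS benchmark item: IP forwarding
# should be disabled on a host that does not route traffic.
if [ "$(sysctl -n net.ipv4.ip_forward)" -eq 0 ]; then
    echo "PASS: net.ipv4.ip_forward is disabled"
else
    echo "FAIL: net.ipv4.ip_forward is enabled"
fi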

Automate it all
As the IT world changes to a model where automation is implemented as much as possible and end users are provided with self-service options to request and provision new systems, automation can also support creating a more secure implementation of the Oracle Linux operating system.

In the flow shown below you see how a new environment is requested, built and delivered to the requester for use. Requesting can be done, for example, via Oracle Enterprise Manager self service, but also via the Oracle Cloud portal or any other form of self-service portal.

In the above diagram the following steps are undertaken:

  1. The user requests a new system via a self service portal
  2. A standard golden machine image is used to build an Oracle Linux instance. 
  3. The resulting virtual machine is hardened by default and includes all standard hardening rules and settings as defined in the golden machine image.
  4. The new virtual machine registers itself with Puppet and Puppet will implement all additional security measures needed for a deployment to make it truly safe.
  5. The new virtual machine registers itself with Oracle Enterprise Manager. Oracle Enterprise Manager will use the compliance framework options to monitor and report on the level of compliance against the defined security baseline. 
  6. The automation layer reports back to the user that the machine is ready for use. 


As you can see from the above steps, you can have a fully automated deployment which ensures a fully secured and hardened Oracle Linux implementation is provided to the end users. At the same time the newly created machine is registered with Oracle Enterprise Manager. Next to the well-known use of Oracle Enterprise Manager to monitor the performance and availability of a machine and perform maintenance tasks, it provides the option to use the compliance framework. The compliance framework within Oracle Enterprise Manager can monitor the settings of the machine in real time and benchmark them against a defined security standard. The benefit is real-time insight into the level of implemented security over your entire IT footprint, and the ability to produce compliance and exception reports at the push of a button.

The use of Puppet is seen more and more in environments with a high level of automation. Puppet can push configurations to all environments without human administrators carrying the burden of manual tasks in the final configuration of machines. When building an automated deployment flow you do not want to include every setting in your golden machine image; the final configuration is something you would rather do with a solution like Puppet. Puppet is not alone in the market, other solutions are available such as Chef, Ansible and Salt, however Puppet is currently the most commonly used solution.

Implementing a solution as shown above provides full end-to-end automation for new environments and at the same time makes the outcome of the process more predictable and more secure.