Sunday, January 31, 2016

Oracle Linux - configure firewalld local firewall

When using the latest version of Oracle Linux (at this moment 7), a local firewall is activated by default. The local Linux firewall is no longer implemented with iptables; it is now handled by firewalld. Using a local firewall on your Oracle Linux machine is good practice. On a local test system you might not always see the direct need for one; however, in all production or semi-production situations a firewall should be the default.

Instead of disabling the firewall when it blocks something, it is better to configure the firewall so things work. If you run, for example, an nginx webserver and try to reach it from the outside, you will be blocked by default. The steps below show how to find the current firewall state and gain access to the webserver the right way: opening port 80 while the rest remains secured by your firewall.

First you want to check whether the firewall is running and whether it is indeed the reason you cannot access your nginx webserver. You can check the current state of the firewall with the --state option of the firewall-cmd command, as shown below. As you can see, the firewall is running.

[root@localhost ~]# firewall-cmd --state
running

Now that we know the firewall is up and running, we need to check which zones are currently active and on which interfaces. The firewalld implementation uses zones; the default zone is the public zone. A number of zones are available by default, each with its own use. The following zones are available by default in the firewalld implementation:

block
Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6. Only network connections initiated within this system are possible.

public
For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.

external
For use on external networks with masquerading enabled especially for routers. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.

dmz
For computers in your demilitarized zone that are publicly-accessible with limited access to your internal network. Only selected incoming connections are accepted.


work
For use in work areas. You mostly trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.

home
For use in home areas. You mostly trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.

internal
For use on internal networks. You mostly trust the other computers on the networks to not harm your computer. Only selected incoming connections are accepted.

trusted
All network connections are accepted.

You can check which zones are active with the --get-active-zones option. In the example below you will notice that the public zone is active on the enp0s3 interface. The public zone is the default zone that is loaded. For good reasons, only a limited number of services are activated (open) in the public zone.

[root@localhost ~]# firewall-cmd --get-active-zones
public
  interfaces: enp0s3
[root@localhost ~]#

Now that we know the firewall is active and that a zone is loaded (in our case the public zone on interface enp0s3), we need to check which services and ports are currently active (open). The difference between a service and a port needs to be understood to keep your security tight: a service can contain one or more ports, while a port is just a single port. It is important to note that --list-ports will not list the ports that are covered by a service. In the example below we check the services and ports with the associated commands. As you can see, no ports are explicitly open and only two services are active (open).

[root@localhost ~]# firewall-cmd --list-services
dhcpv6-client ssh
[root@localhost ~]# firewall-cmd --list-ports
[root@localhost ~]#
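To see which ports a given service actually covers, you can look at the service definitions firewalld ships in /usr/lib/firewalld/services. As a sketch, the http service definition is roughly the following XML; check the file on your own system for the exact content:

```
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>WWW (HTTP)</short>
  <description>HTTP is the protocol used to serve Web pages.</description>
  <port protocol="tcp" port="80"/>
</service>
```

This is why --list-ports can be empty while a port is still open: the port is part of an active service definition rather than an explicit port rule.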

We stated that the example case was to open port 80 for HTTP traffic to the nginx webserver running on the box. We can do this by adding port 80. As we want this setting to be permanent and still be active after a reboot of the server, we add the --permanent option to the command. Below you can see the example of adding the rule and reloading the configuration to ensure that port 80 is open in the public zone.

[root@localhost ~]# firewall-cmd --zone=public --add-port=80/tcp --permanent
success
[root@localhost ~]# firewall-cmd --reload
success
[root@localhost ~]#

When we check the services and ports again we will see that the services are still the same (only two services); we do, however, now have an explicit mention of port 80 as an open port.

[root@localhost ~]# firewall-cmd --list-services
dhcpv6-client ssh
[root@localhost ~]# firewall-cmd --list-ports
80/tcp

As stated before, remember that a service is also an option. In our case we opened port 80 to allow access to the HTTP server, which runs in the form of nginx on our machine. So, instead of adding port 80 explicitly we could also have added the http service. Adding the http service is done with the --add-service option instead of --add-port. An example of this is shown below.

[root@localhost ~]# firewall-cmd --zone=public --add-service=http --permanent
success
[root@localhost ~]# firewall-cmd --reload
success
[root@localhost ~]#

If you now check the status of the services and the ports with the commands we used previously, you will notice that no ports are mentioned explicitly and that http has been added as a service (do note, I have removed port 80 again; that step is not part of this blogpost).

[root@localhost ~]# firewall-cmd --list-services
dhcpv6-client http ssh
[root@localhost ~]# firewall-cmd --list-ports
[root@localhost ~]#
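Since the http service now covers port 80, the explicit port rule is redundant. Below is a minimal sketch of that clean-up, using a hypothetical run wrapper that only echoes the commands so the sketch is safe to try anywhere; drop the echo to execute for real:

```shell
# 'run' only prints each command here; on a real firewalld system,
# change the body to execute "$@" instead of echoing it.
run() { echo "$@"; }

run firewall-cmd --zone=public --remove-port=80/tcp --permanent
run firewall-cmd --reload
run firewall-cmd --zone=public --list-ports
```

As with adding, --remove-port with --permanent only takes effect in the running configuration after a reload.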

Even though this is only a simple example of how to configure firewalld, it provides an insight and a starting point for creating much more complex configurations when and where needed.

Friday, January 29, 2016

Install nginx on Oracle Linux

If you would like to run nginx as a webserver, which makes a lot of sense if you want a high-performance webserver that at the same time does not take up many resources, you will have to do some additional tasks. As you will quickly find out, Oracle does not include nginx in the mainstream repository for Oracle Linux.

However, you can add the nginx repository to your local yum configuration as an additional repository. You simply need to create a new file in /etc/yum.repos.d and call it (in our case) nginx.repo.

You can use a simple touch command to do so:
touch /etc/yum.repos.d/nginx.repo

Now you will have to add the content below to the file. This ensures that the nginx repository becomes part of the repositories that are available when you use yum.

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/rhel/7/$basearch/
gpgcheck=0
enabled=1
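Instead of touch plus a manual edit, the repo file can also be written in one go with a heredoc. A sketch, writing to a temporary directory so it is safe to run as-is; on a real system set REPO_DIR=/etc/yum.repos.d. The single-quoted EOF keeps $basearch literal so yum can expand it later:

```shell
# Default to a temp dir for safety; override with REPO_DIR=/etc/yum.repos.d
REPO_DIR="${REPO_DIR:-$(mktemp -d)}"

cat > "$REPO_DIR/nginx.repo" <<'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/rhel/7/$basearch/
gpgcheck=0
enabled=1
EOF

# show the result
cat "$REPO_DIR/nginx.repo"
```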

After saving the file, the simple yum command shown below will install nginx on your Oracle Linux system.
yum install nginx

A number of things to remember: when you use a standard Oracle Linux 7 installation, the standard firewall implementation will block port 80 for external traffic. You will have to open the port (or disable the firewall, which is not recommended). Next to this, you have to be aware that nginx will not be configured to start automatically when the system boots and that nginx will not be started directly after installation.

If you check the status of nginx (as shown below) you will see it is down:

[root@localhost ~]# systemctl status nginx
nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled)
   Active: inactive (dead)
     Docs: http://nginx.org/en/docs/

[root@localhost ~]# 

You are able to start nginx by executing:
systemctl start nginx.service

This will ensure that the service is now up and running. If you now check the status of nginx via systemctl you will note the following:

[root@localhost ~]# systemctl status nginx
nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled)
   Active: active (running) since Tue 2015-12-29 01:32:14 CET; 11s ago
     Docs: http://nginx.org/en/docs/
  Process: 2285 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)
  Process: 2284 ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)
 Main PID: 2287 (nginx)
   CGroup: /system.slice/nginx.service
           ├─2287 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
           └─2289 nginx: worker process

Dec 29 01:32:14 localhost.localdomain systemd[1]: Starting nginx - high performance web server...
Dec 29 01:32:14 localhost.localdomain nginx[2284]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Dec 29 01:32:14 localhost.localdomain nginx[2284]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Dec 29 01:32:14 localhost.localdomain systemd[1]: Failed to read PID from file /run/nginx.pid: Invalid argument
Dec 29 01:32:14 localhost.localdomain systemd[1]: Started nginx - high performance web server.
[root@localhost ~]# 

Even though you now have a running nginx service on your Oracle Linux 7 machine, you can see from the output above that it is still marked as disabled; note the last part of the Loaded line in the status output, which states disabled. This indicates that nginx will not be started automatically when the system is (re)booted. To make sure nginx is started each time the server boots, you will have to enable it. You can do this by executing the systemctl enable command for nginx.service shown below, which is in essence nothing more than creating a symlink.

[root@localhost ~]# systemctl enable nginx.service
ln -s '/usr/lib/systemd/system/nginx.service' '/etc/systemd/system/multi-user.target.wants/nginx.service'
[root@localhost ~]# 

Now you have a running nginx HTTP server that will start each time you reboot your machine, and you are ready to configure nginx for use. A basic configuration can be found in /etc/nginx/nginx.conf; editing this file directly is however not best practice. nginx.conf makes use of an include /etc/nginx/conf.d/*.conf statement, which means it is better to edit the existing *.conf files, or add new *.conf files, in that directory rather than editing the main nginx.conf file.




Tuesday, January 05, 2016

Full stack monitoring for Oracle SOA suite.

Oracle SOA suite is a well used SOA server implementation in the industry. Oracle SOA Suite is a comprehensive, standards-based software suite to build, deploy and manage integration following the concepts of service-oriented architecture (SOA). The components of the suite benefit from consistent tooling, a single deployment and management model, end-to-end security and unified metadata management.

Oracle SOA Suite helps businesses lower costs by allowing maximum re-use of existing IT investments and assets, regardless of the environment (OS, application server, etc.) they run in, or the technology they were built upon. It’s easy-to-use, re-use focused, unified application development tooling and end-to-end lifecycle management support further reduces development and maintenance cost and complexity.

One of the 2 Minute Tech Tips, an Oracle OTN show by Bob Rhubart, features Matt Brasier, who makes an excellent point on monitoring Oracle SOA Suite. You can view the show below:


Full Stack Monitoring
A good practice is not to monitor only SOA Suite itself; monitoring the full stack will bring you the full value. Monitoring only the workings and performance of SOA Suite will give you merely a hint of what might be wrong and where an issue might reside. By monitoring only Oracle SOA Suite you will also be less able to predict issues before they happen.

Instead of "only" monitoring SOA Suite you should monitor the full stack, this includes (among other things):

  • Network
  • Server hardware
  • Storage
  • Oracle Linux Operating System
  • WebLogic middleware
  • JVM (Java Virtual Machine)
  • Oracle SOA Suite (health)
  • Oracle SOA Suite (response times)
  • Database connections / database pooling

Oracle Enterprise Manager
When you want to monitor your solution full-stack and ensure you can provide the best service and uptime to your users, you should consider how to implement monitoring in your organisation. As you are using Oracle SOA Suite you will most likely have more Oracle components in your IT footprint; using Oracle Enterprise Manager base functionality, plus additional packs where needed, is then a logical decision.

Oracle Enterprise Manager (OEM or EM) is a set of web-based tools aimed at managing software and hardware produced by Oracle Corporation as well as by some non-Oracle entities.


Deploying Oracle Enterprise Manager will bring you exactly what is needed for a full-stack, end-to-end monitoring, alerting and maintenance solution. As it is developed by Oracle, it will by nature be able to communicate optimally with the Oracle products; this includes not only SOA Suite and the Oracle Database, but also the operating system (Oracle Linux, for example) and extends to Oracle hardware and the network.

Conclusion
When you want to provide optimal monitoring and maintenance for your Oracle SOA Suite implementation, it is best practice to monitor the full technology stack. The solution provided by Oracle is Oracle Enterprise Manager, which provides a lot of free-to-use features and can be extended with custom checks or additional packs when needed.

Thursday, December 31, 2015

Oracle Linux - Install beanstalkd

When you have a requirement for the installation of beanstalkd you might find out that beanstalkd is not available in the mainstream repository for Oracle Linux.

Beanstalkd is a big to-do list for your distributed application. If there is a unit of work that you want to defer to later (say, sending an email, pushing some data to a slow external service, pulling data from a slow external service, generating high-quality image thumbnails) you put a description of that work, a “job”, into beanstalkd. Some processes (such as web request handlers), “producers”, put jobs into the queue. Other processes, “workers”, take jobs out of the queue and run them.

Looking at the beanstalkd website you will also learn that no RPM is available for a quick download and installation. This means you will have to download the source code and compile it yourself. Below is a quick instruction on how to download and compile beanstalkd on an Oracle Linux system. This will most likely not be very different from how you would do this on, for example, a Red Hat system.

First, we need to make sure we have a location to store the source. We will create a temporary directory for this in /tmp

  mkdir /tmp/build_beanstalkd

Download (clone) the source code from GitHub by executing a git clone command and make sure to put it in the temp directory we just created:

  git clone git://github.com/kr/beanstalkd.git /tmp/build_beanstalkd/

We now have the source code, so we can go to the directory and compile and install beanstalkd:

  cd /tmp/build_beanstalkd/
  make
  make install

By now you should have a compiled and installed version of beanstalkd. It is good practice to clean up after yourself, so we will remove the "junk" we just created:

  rm -rf /tmp/build_beanstalkd/
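The steps above can be put together in one small script. A sketch, assuming git, make and a C toolchain are installed; the network and install commands are echoed rather than executed so the sketch is safe to run as-is (remove the echo prefixes to do a real build):

```shell
set -eu

# mktemp avoids clobbering a fixed /tmp path if the script runs twice
BUILD_DIR="$(mktemp -d)"

# echoed for safety; remove 'echo' to really clone, build and install
echo git clone https://github.com/kr/beanstalkd.git "$BUILD_DIR"
echo make -C "$BUILD_DIR"
echo make -C "$BUILD_DIR" install   # install needs root on a real system

# clean up after ourselves, as in the steps above
rm -rf "$BUILD_DIR"
```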

Wednesday, December 30, 2015

Oracle Linux and scaleable microservice architecture based applications

For a long time the answer to the question of how to handle an increase of transactions on a system was vertical scaling: simply putting in more memory or buying new servers with more processing power. This answer was mainly driven by the fact that most applications have been built on a monolithic architecture principle.

In a monolithic architecture, applications are developed as one single application where all the components (libraries / functions / procedures) live on a single server and cannot be separated onto different servers. In some cases these applications are developed in a way that two instances can work together in a clustered manner; however, in many cases the primary way of deploying them is on a single server. If the use of the application grows and the application consequently gets slower, the common answer was: add more compute resources. Adding resources to a single server to cope with the growth in demand for compute power is known as vertical scaling.

Databases, Oracle databases in particular, have already adopted the model of horizontal scaling for some time by using Oracle RAC. When the demand for more compute power grows, additional nodes can be added to the RAC cluster. Adding additional nodes, rather than putting more compute power in a single node, is referred to as horizontal scaling.

Also in the area of application servers and web servers the horizontal scaling model is not something new. Creating a cluster of, for example, WebLogic application servers and load balancing over the members of this cluster has already been done for quite some time.

However, even though WebLogic clustering and Oracle RAC clustered databases are available, it is often seen that the foundation of an architecture does not take the horizontal scaling model into account. When you intend to build an application that makes optimal use of horizontal scaling, you will have to incorporate this directly into your architecture. It is good practice to ensure that your application (landscape) can scale horizontally even in cases where you do not see the direct use. This ensures that if you ever need to scale in the future, you do not have to redesign the application from scratch or recode large parts of it.

One example of an architecture principle that provides an optimal horizontal scaling model is the use of microservices. Microservices is a software architecture style in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled and focused on doing a small task, facilitating a modular approach to system-building.

Commonly microservices communicate with each other over the HTTP protocol. Calling (REST) APIs via HTTP has the advantage that you can make use of all the scaling and load-balancing options that are commonly available and have been proven technology for years in deploying large web-enabled applications.

Monolithic application design
By implementing microservices you avoid building an application based upon a monolithic application design, as shown in the diagram below.


Some of the characteristics of an application built upon a monolithic application design are:

  • Application has a single code base with multiple modules
  • Large internal dependencies between components and functions
  • Components invoke one another via language-level method or function calls
  • In general difficult to scale up and down when the situation demands this
  • Poor resilience towards component failure; small disturbances often result in complete application unavailability
  • Complex to maintain and difficult to change or integrate with new systems
  • Poor re-use of developed functions and components outside of the application
  • Commonly depends primarily on vertical scaling of the hardware and is not capable of horizontal scaling
  • Serial development process, which in general takes longer.


Microservice application design 
Microservices allow you to build multiple smaller components, each capable of running on one or multiple dedicated machines, instead of running the entire application on one single server that cannot be scaled easily. An example of the same application as above, but built upon a microservice architecture, is shown in the diagram below:

Some of the characteristics of building an application based upon a microservice architecture principle are:

  • Application is broken down into multiple (micro)services, each with its own code base / programming language
  • Components are loosely coupled and communicate with each other via APIs and HTTP calls
  • Easy to scale up and down, both vertically and horizontally; each service can run on one or multiple machines
  • Highly resilient towards component failure due to distributed system principles and the use of multiple small machines
  • Ease of changing components and integrating with new functions
  • Provides an easy way of re-using functions outside of the original application realm
  • Parallel development process, which in general takes less time.


Implementing this model provides advantages both in scaling and in ensuring your application is resilient against component failure. In the diagram below you can see that some components are scaled to a four-node implementation and have a presence in two datacenters. This provides benefits both for load balancing and for improved resilience.

Building a node
A node, when used in a microservice architecture, is a machine or virtual machine running one or more microservices. In general this is a Linux machine running an application capable of providing an API. One way of building nodes and microservices is making use of Flask. Flask is a micro web application framework written in Python, based on the Werkzeug toolkit and Jinja2 template engine, and is BSD licensed.

Flask is called a micro framework because it does not presume or force a developer to use a particular tool or library. It has no database abstraction layer, form validation, or any other components where pre-existing third-party libraries provide common functions. However, Flask supports extensions that can add application features as if they were implemented in Flask itself. Extensions exist for object-relational mappers, form validation, upload handling, various open authentication technologies and several common framework related tools. Flask is ideally suited to build microservices on.

In the diagram below we show a node built with Flask in combination with NGINX (a free, open-source, high-performance HTTP server and reverse proxy), running both Flask and NGINX on the Oracle Linux operating system. In this example we have specifically selected Oracle Linux, as microservice nodes are commonly used in deployments that demand high availability, extreme performance and extreme scaling. Oracle Linux can provide this in combination with Flask and NGINX.
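As a sketch of how NGINX and Flask typically meet on such a node: NGINX listens on port 80 and proxies requests to the Flask application listening on a local port. The port and upstream address below are assumptions for illustration (5000 is the Flask development server default). A minimal server block, placed as a *.conf file under /etc/nginx/conf.d/, could look like this:

```
server {
    listen 80;

    location / {
        # Flask app assumed to listen on localhost:5000
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

This keeps NGINX as the hardened, scalable front door of the node while the Flask service itself only listens on the loopback interface.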

Making it virtual
As one of the advantages of microservices is that they scale easily, it is commonly good practice to base your nodes on a virtualization strategy. As we use Oracle Linux in our example, it also makes sense to use Oracle VM as the virtualization layer. When needed, a new node with the associated microservice can be created and added to the load balancing.


When monitoring your entire footprint with Oracle Enterprise Manager you can create a solution in which Oracle VM automatically creates new nodes for you when performance drops due to an increase in the number of requests. Oracle Enterprise Manager will then be able to scale down again as soon as the number of requests goes down. By implementing such a self-scaling and self-healing solution you are assured of an always performing and always available solution for your users. This requires good thought about what kind of hardware to use; in this case a good solution would be, for example, the Oracle Private Cloud Appliance: an engineered system developed by Oracle specifically to run elastic and changing landscapes based upon Oracle Linux and Oracle VM.

Oracle Database Security

It might be interesting to know that the estimated value of business lost every year due to cybercrime is around a trillion dollars. Even though IT departments have been trying to ensure their systems are safe and secure, and even though IT security budgets have doubled in the past couple of years, we still see a lot of data breaches over and over again.

In the past year some notable data breaches have occurred and made the news. It is estimated that the known breaches are only a fraction of the real number of security breaches and leaks of confidential data.

As it currently stands, companies focus a lot on the external perimeter and defend it quite well. Security in the lower levels, the core, of an IT footprint is however commonly not implemented; securing the main target of many attacks is often not done. Oracle provides an extensive set of solutions and products which help companies secure their database. In a recent blogpost on capgemini.com I go into the details of how the Oracle Maximum Availability Architecture can be used to secure data where it should be secured, namely where it is created, accessed and stored: the database.



The deck above also provides an insight into the options for securing your Oracle database in a more advanced manner, to help you protect the data it holds and prevent data breaches.

Creating video content - keep it short

For some strange reason I have agreed with a couple of people to think about how we could create content in the form of YouTube videos. Seeing this as an addition to blogging, I agreed that it would be nice to not only create content in the form of posts on my personal blog and the company blog, as well as slide decks on SlideShare, but to also start adding video content to this mix.

In all honesty, I have done some video content previously for internal and external use, and I have to admit that I do not always feel comfortable with it. Looking back at some of those videos, I feel I am not a video type. Having stated that, it turns out a lot of people experience the same feelings and emotions while watching themselves on video: a mix of awkward self-consciousness and vicarious embarrassment with the person on the video, combined with the realization that you are that person. As it turns out, a lot of people feel uncomfortable watching themselves on video, including a long list of prominent TV and film personalities. Meaning, I am apparently not alone in this.

However, having promised to support and engage in the creation of a couple of video messages, the question came to me of how long a video should be. The initial thought was that a video on a purely technical subject should be around 20 to 30 minutes. A 20 to 30 minute video is considered a long video; unless you are Google TechTalk or TED, this might be a bit long to keep the audience engaged until the end.

Based upon some research done by TheNextWeb, it turns out that the length of a video is really important. Almost regardless of the type of content, short videos keep people engaged, and a short video is more likely to be watched in full. The diagram below from TheNextWeb shows the percentage of people that watch the full video against the length of the video.

As you can see, the average percentage of the video watched declines rapidly with the length of the video. This means a couple of things: (A) if you want to state something important in your video you should do it right at the beginning, and (B) if you want to ensure people watch the full length of the video (or a large part of it), you have to keep it short.

Depending on the research you read, the ideal length of an online video changes slightly. Based upon what AdWeek states about this, a YouTube video should be around 3 minutes; if you do a TED talk, the ideal length is 18 minutes.

Other sources state that a YouTube video should be between 0 and 10 minutes. All in all, the general consensus across all research is that a video should be short, in fact as short as possible, and that you should avoid long video content: people will be less attracted to start a long video and less motivated to watch it to the end. It turns out that the concentration span of people watching online video content is extremely short.

Given this information, and after some internal discussion, it has been decided that an Ignite way of building video content might be the best way to create video content and get the message to the audience. The Ignite concept is that presenters get 20 slides, which automatically advance every 15 seconds; the result is a fast and fun presentation which lasts just 5 minutes. More information on Ignite can be found at the ignitetalks.io website.

Saturday, December 26, 2015

Clean removal of Virtualbox on a Mac

Some questions came in on how to remove Oracle VirtualBox from a Mac. Not that the person who asked was unhappy with VirtualBox; however, due to some issues and some manual fiddling they had damaged the installation. Removing software from a Mac is quite easy, as an application is (or should be) a self-contained package. You can remove VirtualBox this way; however, to be completely sure it is done in the right way you can also use the official removal tool that is shipped with the installer.


When you run the uninstaller script you will be shown what will be removed, something I personally always like: knowing what is going to happen.


Welcome to the VirtualBox uninstaller script.

The following files and directories (bundles) will be removed:
    /Users/parmakat/Library/LaunchAgents/org.virtualbox.vboxwebsrv.plist
    /usr/bin/VirtualBox
    /usr/bin/VBoxManage
    /usr/bin/VBoxVRDP
    /usr/bin/VBoxHeadless
    /usr/bin/vboxwebsrv
    /usr/bin/VBoxBalloonCtrl
    /usr/bin/VBoxAutostart
    /Library/StartupItems/VirtualBox/
    /Library/Extensions/VBoxDrv.kext/
    /Library/Extensions/VBoxUSB.kext/
    /Library/Extensions/VBoxNetFlt.kext/
    /Library/Extensions/VBoxNetAdp.kext/
    /Applications/VirtualBox.app/

And the following KEXTs will be unloaded:
    org.virtualbox.kext.VBoxUSB
    org.virtualbox.kext.VBoxDrv

And the traces of following packages will be removed:
    org.virtualbox.pkg.vboxkexts
    org.virtualbox.pkg.vboxstartupitems
    org.virtualbox.pkg.virtualbox
    org.virtualbox.pkg.virtualboxcli

Do you wish to uninstall VirtualBox (Yes/No)?
Yes

The uninstallation processes requires administrative privileges
because some of the installed files cannot be removed by a normal
user. You may be prompted for your password now...

Please enter parmakat's password:
unloading org.virtualbox.kext.VBoxUSB
unloading org.virtualbox.kext.VBoxDrv
Successfully unloaded VirtualBox kernel extensions.
Forgot package 'org.virtualbox.pkg.vboxkexts' on '/'.
Forgot package 'org.virtualbox.pkg.vboxstartupitems' on '/'.
Forgot package 'org.virtualbox.pkg.virtualbox' on '/'.
Forgot package 'org.virtualbox.pkg.virtualboxcli' on '/'.
Done.
logout


[Process completed]


Removing VirtualBox using the uninstaller script makes sure you have a cleaner environment for doing the installation again. Also, it is known that in previous versions of VirtualBox some strange issues occurred when you upgraded to the next version on a Mac; some internal network cards and internal network settings were not always picked up correctly. Meaning, doing a clean removal and a clean installation of VirtualBox on your Mac might in some cases save you a lot of time.

Wednesday, December 23, 2015

Oracle Linux - Enforce password complexity

When requesting a new Linux system it used to be the case that someone with a Linux background would “build” the system based upon the specifications given by the requester and upon a number of pre-defined settings by the Linux team. With the introduction of virtualization, and especially with the introduction of self service models, the way new systems are “created” has changed enormously.

A Linux system is now more often considered a necessity for running applications or databases and less of a tangible “thing”. It is often regarded as an almost stateless asset which can be requested, provisioned and decommissioned with a couple of clicks in a self service portal. Especially for teams that work on short development projects, or in environments that require quick scaling up and down to handle a workload, this makes a lot of sense.

One of the downsides of this, however, is that systems are “created” and managed by people who might be less security aware than your average Linux system operator. This calls for more strict security management, a more template based implementation of security. And security starts partially with having good authentication in place, ensuring you have a strong password and certainly not a default welcome1 password.

When requesting an Oracle Linux system based upon the Self Service functionality in Oracle Enterprise Manager the requester will have the ability to state the desired password for the root account. It is however good practice to ensure that the user is forced to change the root password on the newly created Oracle Linux system and that this password is in line with the standards that are set for it.

Commonly, newly created Oracle Linux systems provisioned in a self service manner will be accessed by the requester seconds after the automated message that the system is available has been received. Typically the requester will login via an SSH session and provide the password given during the request phase. (That is, if you allow root to login via SSH and if you allow password based authentication.)

A good practice is to run a post-installation script on every newly created machine to change any number of settings you like. This post-installation / first boot script can contain the required actions to enforce a password reset on first login and to implement the enforcement of a password complexity policy. As an example of how to script this you can have a look at the below snippet of a wider post-installation / first boot script. You can also find this snippet at Github.

#!/bin/bash

# function used to enforce a root password reset on next login
function forceRootPwd {
    logger "Setting enforced root password change"
    chage -d 0 root
}

# function used to ensure the password policy is set
function forcePolicy {
    logger "setting the password policy"
    echo "password    required    pam_pwquality.so retry=3" >> /etc/pam.d/passwd
    echo "minlen = 8" >> /etc/security/pwquality.conf
    echo "minclass = 4" >> /etc/security/pwquality.conf
    echo "maxsequence = 3" >> /etc/security/pwquality.conf
    echo "maxrepeat = 3" >> /etc/security/pwquality.conf
}

# call the forceRootPwd and forcePolicy functions
forceRootPwd
forcePolicy


As you can see the script enforces that root will have to reset the password at the next login by executing a chage command against the root account.
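Under the hood, chage -d 0 sets the “last password change” field (the third field) in /etc/shadow to 0, which the login process interprets as an expired password. A minimal sketch of that check, using a hypothetical shadow-style line (not the real /etc/shadow):

```shell
# Hypothetical /etc/shadow-style entry; the third field is the day of the
# last password change (days since 1970-01-01). chage -d 0 sets it to 0,
# which forces a password change at the next login.
shadow_line='root:$6$examplehash:0:0:99999:7:::'

must_change_password() {
    local last_change
    last_change=$(echo "$1" | cut -d: -f3)
    [ "$last_change" = "0" ]
}

must_change_password "$shadow_line" && echo "password change forced at next login"
```

This is an illustration of the mechanism only; on a real system you would simply inspect the account with chage -l root.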

Next we push a number of settings to both /etc/pam.d/passwd and /etc/security/pwquality.conf to enforce the quality of a password. As you can see, in the example we set minlen, minclass, maxsequence and maxrepeat. However, you are able to configure more specific settings in /etc/security/pwquality.conf; the settings you can set are listed below:

difok
Number of characters in the new password that must not be present in the old password.

minlen
Minimum acceptable size for the new password (plus one if credits are not disabled which is the default)

dcredit
The maximum credit for having digits in the new password. If less than 0 it is the minimum number of digits in the new password.

ucredit
The maximum credit for having uppercase characters in the new password. If less than 0 it is the minimum number of uppercase characters in the new password.

lcredit
The maximum credit for having lowercase characters in the new password. If less than 0 it is the minimum number of lowercase characters in the new password.

ocredit
The maximum credit for having other characters in the new password. If less than 0 it is the minimum number of other characters in the new password.

minclass
The minimum number of required classes of characters for the new password (digits, uppercase, lowercase, others).

maxrepeat
The maximum number of allowed consecutive same characters in the new password. The check is disabled if the value is 0.

maxclassrepeat
The maximum number of allowed consecutive characters of the same class in the new password. The check is disabled if the value is 0.

gecoscheck
Whether to check for the words from the passwd entry GECOS string of the user. The check is enabled if the value is not 0.

dictpath
Path to the cracklib dictionaries. Default is to use the cracklib default.
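As an illustration, some of the settings above could be combined in /etc/security/pwquality.conf like this (example values only; tune them to your own policy):

```
# /etc/security/pwquality.conf (illustrative values)
difok = 5          # at least 5 characters not present in the old password
minlen = 12        # minimum password length
dcredit = -1       # at least one digit
ucredit = -1       # at least one uppercase character
minclass = 3       # at least three character classes
maxrepeat = 3      # at most three identical characters in a row
gecoscheck = 1     # reject passwords containing words from the GECOS field
```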

Monday, December 07, 2015

IOT infrastructure with Oracle Linux and Mosquitto

MQTT stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimize network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also turn out to make the protocol ideal for the emerging “machine-to-machine” (M2M) or “Internet of Things” world of connected devices, and for mobile applications where bandwidth and battery power are at a premium.

We see that a large number of IOT and intelligent sensor manufacturers are making use of MQTT to send information from the edge of the network to a centralized location within an infrastructure. For example, sensors are sending out readings using MQTT to a centralized broker, a number of other services have subscribed to the broker to receive readings (payloads) for one or more topics. When building an IOT infrastructure based upon MQTT the role of the message broker is vital. Understanding of the MQTT protocol is also vital for making the correct decisions.

The above architecture blueprint shows how you can position an MQTT broker in your landscape. The implementation is based upon Mosquitto, an open source MQTT v3.1/v3.1.1 broker, which we deploy on an Oracle Linux 7.1 distribution. The decision for Oracle Linux is driven by the fact that it is a stable enterprise grade Linux distribution and that we interact a lot with other products from Oracle in this example. Having a lot of Oracle products (which is not necessary) in this landscape gives you the ability to monitor and manage the entire landscape with Oracle Enterprise Manager. Oracle Enterprise Manager can play the role of central monitoring solution over all components, for example the Oracle Big Data Appliance, the Oracle Exadata database machine and the other Oracle components. And in the same single monitoring solution the Oracle Linux Mosquitto server is integrated. This provides an easy way of maintaining all components without the need to develop custom scripting, lowers the overall TCO and improves the ROI.

When deploying a solution which is, or might become, an important part of your business, it is advisable to select enterprise grade components. Selecting Oracle Linux in combination with Oracle Enterprise Manager for monitoring, and possibly other Oracle hardware and software components, is a good practice.

Positioning Mosquitto
Mosquitto is an open source (BSD licensed) message broker that implements the MQ Telemetry Transport protocol versions 3.1 and 3.1.1. MQTT provides a lightweight method of carrying out messaging using a publish/subscribe model. This makes it suitable for "machine to machine" messaging such as with low power sensors or mobile devices such as phones, sensors, embedded computers or microcontrollers. In essence the components that will form a large part of the internet of things (IOT) are making use of the MQTT protocol and Mosquitto is a vital part as the broker.

Understanding the MQTT protocol
The MQTT protocol is based upon a publish and subscribe model. Within this model messages are published to a broker (in our case Mosquitto), subscribers receive messages with a topic they have subscribed to. A “topic” is an important and central piece within the MQTT protocol, a topic can be seen as the descriptor of a message payload.
For example, if your message payload contains the temperature coming from a sensor you want to ensure everybody knows what it is, where it is coming from, and so on. As an example, if you have an office building where every room contains a temperature sensor, your “topic” could be: /building01/floor04/room25/tempsensor. If you subscribe to this topic as a subscriber, the broker will send you all the message payloads for this topic.

However, unless this is your specific office space this is not very useful. You might, for example, want to have all temperature readings of the fourth floor; in this case you can subscribe to the topic /building01/floor04/+/tempsensor by using the single-level wildcard +. (The multi-level wildcard # may only be used as the last element of a topic filter.) Adding another type of sensor in every room, you can reuse the topic setup and replace tempsensor with humsensor for those sensors. This enables you to subscribe, for example, to all sensors on the fourth floor by using /building01/floor04/#

Understanding topics is vital to be able to architect a correct topic model which can be used in an optimal way and can be easily extended. If the topic architecture is created in a suboptimal way it will be an enormous task (depending on the number of sensors) to correct this at a later stage. Taking the time to develop a proper topic model is worth the investment.
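The wildcard matching rules described above can be sketched in a few lines of bash. This is an illustration of the matching rules only, not Mosquitto's actual implementation:

```shell
#!/bin/bash
# Sketch of MQTT topic filter matching: '+' matches exactly one topic level,
# '#' matches this level and everything below it (and must be the last level).
topic_matches() {
    local filter="$1" topic="$2"
    local -a f t
    IFS='/' read -r -a f <<< "$filter"
    IFS='/' read -r -a t <<< "$topic"
    local i
    for ((i = 0; i < ${#f[@]}; i++)); do
        if [ "${f[i]}" = "#" ]; then
            return 0                              # matches all remaining levels
        elif [ "${f[i]}" = "+" ]; then
            [ "$i" -lt "${#t[@]}" ] || return 1   # topic must have this level
        elif [ "${f[i]}" != "${t[i]:-}" ]; then
            return 1                              # literal level mismatch
        fi
    done
    # all filter levels matched; topic must not have extra levels left over
    [ "${#f[@]}" -eq "${#t[@]}" ]
}

topic_matches "/building01/floor04/+/tempsensor" "/building01/floor04/room25/tempsensor" && echo match
topic_matches "/building01/floor04/#" "/building01/floor04/room25/humsensor" && echo match
```

Note that real brokers add a few extra rules on top of this (for example, topics starting with $ are not matched by wildcards at the first level).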

The MQTT protocol has a number of message packets that form the backbone of the protocol. The prime messages are:

  • Publish
    • Message payload sent by a publisher to the broker
  • Subscribe
    • Message from a client (subscriber) to subscribe to a specific topic
  • Suback
    • Message from the broker as a response (confirmation) to a subscription request
  • Unsubscribe
    • Message from a client (subscriber) to unsubscribe from a specific topic
  • Unsuback
    • Message from the broker as a response (confirmation) to an unsubscribe request

Building an Oracle Linux & Mosquitto test server
For reasons of stability, and due to the fact that Mosquitto will often be used in enterprise grade deployments, we will use Oracle Linux as the Linux distribution of choice. To be able to install Mosquitto on Oracle Linux (release 7.1) you can do a number of things. You can download the source code and compile it yourself, or you can use yum to install everything you need. When you want to go down the easy path and install Mosquitto on Oracle Linux by making use of yum, you have to realize that Oracle is not providing an RPM for it, meaning it will not be in the standard yum repository that is provided by Oracle. You can however use a CentOS 7 repository hosted on the openSUSE Build Service. With the below command you will download the repository file and ensure it is placed in /etc/yum.repos.d:

wget http://download.opensuse.org/repositories/home:/oojah:/mqtt/CentOS_CentOS-7/home:oojah:mqtt.repo -O /etc/yum.repos.d/mqtt.repo

If you now do a check on your repositories you should be able to see the mqtt (Mosquitto) repository and it should be active:

[root@localhost ~]# yum repolist all | grep mqtt
home_oojah_mqtt               mqtt (CentOS_CentOS-7)             enabled:      8
[root@localhost ~]#

If we now want to install Mosquitto we could directly use a yum install, however to be sure that the mqtt repository is available we can first do a quick yum list as shown below:

[root@localhost ~]# yum list mosquitto
Loaded plugins: langpacks
Available Packages
mosquitto.x86_64          1.4.4-2.1     home_oojah_mqtt
[root@localhost ~]#

As can be seen from the example we have a mosquitto.x86_64 version 1.4.4-2.1 available within the home_oojah_mqtt repository. Installing it can now be done with a quick yum install as shown below:

 [root@localhost ~]# yum install mosquitto
Loaded plugins: langpacks
Resolving Dependencies
  Running transaction check
  Package mosquitto.x86_64 0:1.4.4-2.1 will be installed
  Processing Dependency: uuid for package: mosquitto-1.4.4-2.1.x86_64
  Running transaction check
  Package uuid.x86_64 0:1.6.2-26.el7 will be installed
  Finished Dependency Resolution

Dependencies Resolved

======================================================================================================
 Package               Arch               Version                   Repository                   Size
======================================================================================================
Installing:
 mosquitto             x86_64             1.4.4-2.1                 home_oojah_mqtt             102 k
Installing for dependencies:
 uuid                  x86_64             1.6.2-26.el7              ol7_latest                   54 k

Transaction Summary
======================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 156 k
Installed size: 346 k
Is this ok [y/d/N]: y
Downloading packages:
(1/2): uuid-1.6.2-26.el7.x86_64.rpm                                            |  54 kB  00:00:01
warning: /var/cache/yum/x86_64/7Server/home_oojah_mqtt/packages/mosquitto-1.4.4-2.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 49e1d0b1: NOKEY
Public key for mosquitto-1.4.4-2.1.x86_64.rpm is not installed
(2/2): mosquitto-1.4.4-2.1.x86_64.rpm                                          | 102 kB  00:00:06
------------------------------------------------------------------------------------------------------
Total                                                                  23 kB/s | 156 kB  00:00:06
Retrieving key from http://download.opensuse.org/repositories/home:/oojah:/mqtt/CentOS_CentOS-7//repodata/repomd.xml.key
Importing GPG key 0x49E1D0B1:
 Userid     : "home:oojah OBS Project "
 Fingerprint: bdf4 d371 5b8d c145 d583 46e9 f8c8 d6db 49e1 d0b1
 From       : http://download.opensuse.org/repositories/home:/oojah:/mqtt/CentOS_CentOS-7//repodata/repomd.xml.key
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : uuid-1.6.2-26.el7.x86_64                                                           1/2
  Installing : mosquitto-1.4.4-2.1.x86_64                                                         2/2
  Verifying  : mosquitto-1.4.4-2.1.x86_64                                                         1/2
  Verifying  : uuid-1.6.2-26.el7.x86_64                                                           2/2

Installed:
  mosquitto.x86_64 0:1.4.4-2.1

Dependency Installed:
  uuid.x86_64 0:1.6.2-26.el7

Complete!
[root@localhost ~]#

After installation of Mosquitto you can check if a service has been defined and check its status by executing a systemctl command (not sysctl, which is a different tool), as shown below:

[root@localhost ~]# sysctl status mosquitto
sysctl: cannot stat /proc/sys/status: No such file or directory
sysctl: cannot stat /proc/sys/mosquitto: No such file or directory
[root@localhost ~]# systemctl status mosquitto
mosquitto.service - LSB: Mosquitto MQTT broker
   Loaded: loaded (/etc/rc.d/init.d/mosquitto)
   Active: inactive (dead)
[root@localhost ~]#

As you can see, the sysctl attempts fail because sysctl is a different tool; systemctl recognizes the Mosquitto service, however it has not been started yet. To start it you can execute systemctl start mosquitto, and the next time you do a status lookup you will notice a lot more is running within Oracle Linux:

[root@localhost ~]# systemctl status mosquitto
mosquitto.service - LSB: Mosquitto MQTT broker
   Loaded: loaded (/etc/rc.d/init.d/mosquitto)
   Active: active (running) since Tue 2015-11-03 14:38:45 EST; 5s ago
  Process: 2099 ExecStart=/etc/rc.d/init.d/mosquitto start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/mosquitto.service
           └─2101 /usr/sbin/mosquitto -d -c /etc/mosquitto/mosquitto.conf

Nov 03 14:38:45 localhost.localdomain systemd[1]: Starting LSB: Mosquitto MQTT broker...
Nov 03 14:38:45 localhost.localdomain mosquitto[2099]: [121B blob data]
Nov 03 14:38:45 localhost.localdomain mosquitto[2099]: 1446579525: Config loaded from /etc/mosqui...f.
Nov 03 14:38:45 localhost.localdomain mosquitto[2099]: 1446579525: Opening ipv4 listen socket on ...3.
Nov 03 14:38:45 localhost.localdomain mosquitto[2099]: 1446579525: Opening ipv6 listen socket on ...3.
Nov 03 14:38:45 localhost.localdomain systemd[1]: Started LSB: Mosquitto MQTT broker.
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost ~]#
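
As the process listing shows, Mosquitto reads its configuration from /etc/mosquitto/mosquitto.conf. A minimal example of what such a file could contain (illustrative values; the option names are standard mosquitto.conf options):

```
# listen on the default MQTT port
port 1883
# persist the in-memory message store to disk
persistence true
persistence_location /var/lib/mosquitto/
# log to syslog
log_dest syslog
# allow clients to connect without credentials (acceptable for a test server only)
allow_anonymous true
```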

In essence you now have a running Mosquitto service on your Oracle Linux server. The downside of this installation is that you do not “really” have the tools to test what is working and to play around with your fresh installation. If you want to test Mosquitto you will also have to install the mosquitto-clients tools by executing yum install mosquitto-clients. As an example you can now test the Mosquitto service as shown below, where we subscribe to the topic test/mqtt:

[root@localhost ~]# mosquitto_sub -d -t test/mqtt
Client mosqsub/2239-localhost. sending CONNECT
Client mosqsub/2239-localhost. received CONNACK
Client mosqsub/2239-localhost. sending SUBSCRIBE (Mid: 1, Topic: test/mqtt, QoS: 0)
Client mosqsub/2239-localhost. received SUBACK

If you want to see the full working of the pub/sub mechanism you can keep the above statement running in a terminal and open a second terminal. The first (above shown) command will act as the subscriber to the topic test/mqtt. In the second terminal we will be publishing information to Mosquitto; if everything works as expected the subscriber should receive from Mosquitto what the publisher sends. An example of sending a test message as a publisher on the topic test/mqtt with the payload “test123”:

[root@localhost ~]# mosquitto_pub -d -t test/mqtt -m "test123"
Client mosqpub/2411-localhost. sending CONNECT
Client mosqpub/2411-localhost. received CONNACK
Client mosqpub/2411-localhost. sending PUBLISH (d0, q0, r0, m1, 'test/mqtt', ... (7 bytes))
Client mosqpub/2411-localhost. sending DISCONNECT
[root@localhost ~]#

As can be seen from the subscriber side we now receive the message:

[root@localhost ~]# mosquitto_sub -d -t test/mqtt
Client mosqsub/2384-localhost. sending CONNECT
Client mosqsub/2384-localhost. received CONNACK
Client mosqsub/2384-localhost. sending SUBSCRIBE (Mid: 1, Topic: test/mqtt, QoS: 0)
Client mosqsub/2384-localhost. received SUBACK
Subscribed (mid: 1): 0
Client mosqsub/2384-localhost. received PUBLISH (d0, q0, r0, m0, 'test/mqtt', ... (7 bytes))
test123

The above examples all work on localhost by default. If you want to publish (or subscribe) to a remote machine you can use the -h option to specify the host. Make sure port 1883 (the default MQTT port) is open and not blocked by a firewall such as firewalld. As an example of a remote subscription:

mosquitto_sub -h 192.168.1.101 -d -t test/mqtt

Tuesday, October 20, 2015

Oracle Enterprise Manager - This report has saved copies

When using Oracle Enterprise Manager to manage your IT footprint you most likely also want to make use of the reporting functions within Oracle Enterprise Manager. Within the latest releases Oracle tries to push users towards Oracle BI instead of the older reporting options. However, many deployments still use the "old" method of reporting (which works fine in most cases).

In some cases you do want to make a change to a report you have created and might run into a message like this: "You have chosen to edit report "xxxx". This report has saved copies. Do you want to edit the report with limited editing capabilities?".

This means that you cannot change the definition of the report while there are still "old" saved copies. To resolve this you first have to remove the copies before you can make your changes. To do so, login as a user who has the rights to change the report and open the report itself (not in edit mode; open it in view mode). You will see, as shown in the below screenshot, the number of saved copies.


When you click on the number you will be guided to a page like the one below:

You will have to delete all saved copies of this report. When you have done so and you enter the edit mode of the report again you will see that you have full editing capabilities and are able to make all changes required. 

Thursday, October 08, 2015

Oracle Linux - NuPIC AI core installation

NuPIC is an open source project based on a theory of neocortex called Hierarchical Temporal Memory (HTM). Parts of HTM theory have been implemented, tested, and used in applications, and other parts of HTM theory are still being developed. Today the HTM code in NuPIC can be used to analyze streaming data. It learns the time-based patterns in data, predicts future values, and detects anomalies. HTM is a set of algorithms which model the functionality of the neocortex in the human brain. HTM theory is the key to unlocking intelligent applications and machines. NuPIC is the core product from Numenta; it is open source and available to all who would like to test it, build upon it or add to it.

For intelligent applications NuPIC is great as a starting point of your development. However, a thing to keep in mind is that this field of computer science is new, HTM is fairly new. Or in the words from Jeff Hawkins: "This stuff is not easy. I can assure you that once you understand it, you will see a beauty in it. But most people take months to deeply understand the CLA. The tasks of creating hierarchies of CLAs and adding in motor capabilities are very difficult. Even just using the CLA in its current form is not trivial due to the learning required."

When you want to run NuPIC on Oracle Linux a number of steps might be a bit different from the installation on a MacBook. Also, a couple of dependencies need to be in place before you can install NuPIC on Oracle Linux: Python 2.7, the Python development headers, pip, wheel, numpy and a C++ compiler like gcc or clang.

Python development headers
Next to Python, which will most likely ship with your Oracle Linux installation, you have to make sure that you have the Python development headers. You can check if these are installed by executing the below command. In my case the Python development headers were already installed.

[root@localhost ~]# rpm -qa | grep python-devel
python-devel-2.7.5-18.0.1.el7_1.1.x86_64
[root@localhost ~]#

In case you do not get a result you will have to install the Python development headers by executing a yum install command as shown below:

[root@localhost ~]# yum install python-devel

pip
One of the requirements to be able to install NuPIC is pip. pip is a package management system used to install and manage software packages written in Python. Many packages can be found in the Python Package Index (PyPI). If you have installed the Python setuptools, which I describe in this blogpost, the installation of pip can be done by using the easy_install command which is part of the setuptools distribution.

[root@localhost ~]# easy_install pip
Searching for pip
Best match: pip 6.1.1
Adding pip 6.1.1 to easy-install.pth file
Installing pip script to /usr/bin
Installing pip3.4 script to /usr/bin
Installing pip3 script to /usr/bin

Using /usr/lib/python2.7/site-packages
Processing dependencies for pip
Finished processing dependencies for pip
[root@localhost ~]#

wheel
wheel is required as a dependency. Wheels are a built-package format for Python: a wheel is a ZIP-format archive with a specially formatted filename and the .whl extension. It is designed to contain all the files for a PEP 376 compatible install in a way that is very close to the on-disk format. Many packages will be properly installed with only the “Unpack” step (simply extracting the file onto sys.path), and the unpacked archive preserves enough information to “Spread” (copy data and scripts to their final locations) at any later time. You can install wheel with the just installed pip by executing the below command. In my case this resulted in some warnings which you can (and should) resolve, however they do not block the installation.

[root@localhost ~]# pip install wheel
/usr/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
You are using pip version 6.1.1, however version 7.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting wheel
/usr/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
    100% |████████████████████████████████| 65kB 1.8MB/s
Installing collected packages: wheel
Successfully installed wheel-0.26.0
[root@localhost ~]#

NumPy
It will not come as a surprise that NumPy is required on the system. NumPy is the fundamental package for scientific computing with Python. It contains, among other things: a powerful N-dimensional array object, sophisticated (broadcasting) functions, tools for integrating C/C++ and Fortran code, and useful linear algebra, Fourier transform, and random number capabilities. Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases.

Numpy can be installed by executing the command:
pip install numpy

Compiler
As the core of NuPIC is written in C++ you will need a C++ compiler. The obvious choice is GCC, which most likely is already installed on your system. You can check the availability with the below command, which in my example shows that it is installed.

[root@localhost ~]# rpm -qa | grep gcc
gcc-4.8.3-9.el7.x86_64
libgcc-4.8.3-9.el7.x86_64
[root@localhost ~]#

In case it is not installed you can execute a yum install command to install gcc on your Oracle Linux machine. One small but important note: gcc should be GCC 4.8.

Installing NuPIC
After you have ensured all dependencies are in place you can install NuPIC. The installation of NuPIC on Oracle Linux is a bit different than the installation on, for example, a Mac. The reason for this is that the nupic.bindings binary distribution is not stored on PyPI along with the OS X distribution: NuPIC uses the wheel binary format, and PyPI does not support hosting Linux wheel files. This forces you to download the wheel file directly from Numenta and not from PyPI.

pip install https://s3-us-west-2.amazonaws.com/artifacts.numenta.org/numenta/nupic.core/releases/nupic.bindings/nupic.bindings-0.2.1-cp27-none-linux_x86_64.whl
pip install nupic


If all is ok the "pip install nupic" command should work like a charm. However, in case you run into a compiler error like the one shown below it might be that you are missing some additional prerequisites.

cc -c /tmp/tmphmvPkY/vers.cpp -o tmp/tmphmvPkY/vers.o --std=c++11
    cc: error trying to exec 'cc1plus': execvp: No such file or directory
    *WARNING* no libcapnp detected. Will download and build it from source now. If you have C++ Cap'n Proto installed, it may be out of date or is not being detected. Downloading and building libcapnp may take a while.
    fetching https://capnproto.org/capnproto-c++-0.5.1.2.tar.gz into /tmp/pip-build-PHQZgs/pycapnp/bundled
    configure: error: *** A compiler with support for C++11 language features is required.

To resolve this issue you will need to do an additional install of gcc-c++ by executing:

[root@localhost ~]# yum install gcc-c++

Testing NuPIC
To ensure your installation of NuPIC was successful you can run a test with the unit tests provided in the GitHub repository. Execute py.test against tests/unit/, which can be found in the GitHub repository. This should look like the example below.

[root@localhost nupic-master]# py.test tests/unit/
=== test session starts  ===
platform linux2 -- Python 2.7.5 -- pytest-2.5.1
plugins: cov, xdist
collected 844 items / 2 skipped

tests/unit/nupic/utils_test.py ......
tests/unit/nupic/algorithms/anomaly_likelihood_jeff_test.py ...ss..
tests/unit/nupic/algorithms/anomaly_likelihood_test.py ....................
tests/unit/nupic/algorithms/anomaly_test.py ..............
tests/unit/nupic/algorithms/cells4_test.py .
tests/unit/nupic/algorithms/cla_classifier_diff_test.py ...................
tests/unit/nupic/algorithms/cla_classifier_test.py ...................
tests/unit/nupic/algorithms/fast_cla_classifier_test.py ...................
tests/unit/nupic/algorithms/knn_classifier_test.py .....s
tests/unit/nupic/algorithms/nab_detector_test.py ..
tests/unit/nupic/algorithms/sp_overlap_test.py .s.s
tests/unit/nupic/algorithms/svm_test.py ..s
tests/unit/nupic/algorithms/tp10x2_test.py .
tests/unit/nupic/data/aggregator_test.py .
tests/unit/nupic/data/dictutils_test.py ......
tests/unit/nupic/data/fieldmeta_test.py .....
tests/unit/nupic/data/file_record_stream_test.py ......
tests/unit/nupic/data/filters_test.py s
tests/unit/nupic/data/functionsource_test.py ......
tests/unit/nupic/data/inference_shifter_test.py ........
tests/unit/nupic/data/record_stream_test.py .......
tests/unit/nupic/data/utils_test.py .......
tests/unit/nupic/data/generators/anomalyzer_test.py ...........
tests/unit/nupic/data/generators/pattern_machine_test.py .........
tests/unit/nupic/data/generators/sequence_machine_test.py .....
tests/unit/nupic/encoders/adaptivescalar_test.py .......
tests/unit/nupic/encoders/category_test.py ..
tests/unit/nupic/encoders/coordinate_test.py ................
tests/unit/nupic/encoders/date_test.py ........
tests/unit/nupic/encoders/delta_test.py .....
tests/unit/nupic/encoders/geospatial_coordinate_test.py ...........
tests/unit/nupic/encoders/logenc_test.py ......
tests/unit/nupic/encoders/multi_test.py ..
tests/unit/nupic/encoders/pass_through_encoder_test.py ....
tests/unit/nupic/encoders/random_distributed_scalar_test.py ...............
tests/unit/nupic/encoders/scalar_test.py .............
tests/unit/nupic/encoders/scalarspace_test.py .
tests/unit/nupic/encoders/sdrcategory_test.py ...
tests/unit/nupic/encoders/sparse_pass_through_encoder_test.py ....
tests/unit/nupic/engine/network_test.py .........
tests/unit/nupic/engine/syntactic_sugar_test.py .....
tests/unit/nupic/engine/unified_py_parameter_test.py ..
tests/unit/nupic/frameworks/opf/clamodel_classifier_helper_test.py ......................
tests/unit/nupic/frameworks/opf/clamodel_test.py ......
tests/unit/nupic/frameworks/opf/opf_metrics_test.py ...............................
tests/unit/nupic/frameworks/opf/previous_value_model_test.py ......
tests/unit/nupic/frameworks/opf/safe_interpreter_test.py ........
tests/unit/nupic/frameworks/opf/two_gram_model_test.py .....
tests/unit/nupic/frameworks/opf/common_models/cluster_params_test.py .
tests/unit/nupic/math/array_algorithms_test.py ...
tests/unit/nupic/math/cast_mode_test.py s
tests/unit/nupic/math/lgamma_test.py .
tests/unit/nupic/math/nupic_random_test.py .............
tests/unit/nupic/math/sparse_binary_matrix_test.py ............s............
tests/unit/nupic/math/sparse_matrix_test.py ...s...............................
tests/unit/nupic/regions/anomaly_region_test.py .
tests/unit/nupic/regions/knn_anomaly_classifier_region_test.py ....................
tests/unit/nupic/regions/pyregion_test.py ....
tests/unit/nupic/regions/record_sensor_region_test.py .
tests/unit/nupic/regions/regions_spec_test.py s...s......
tests/unit/nupic/research/connections_test.py .............
tests/unit/nupic/research/inhibition_object_test.py s
tests/unit/nupic/research/sp_learn_inference_test.py s
tests/unit/nupic/research/spatial_pooler_boost_test.py ..
tests/unit/nupic/research/spatial_pooler_compatability_test.py ....ss..
tests/unit/nupic/research/spatial_pooler_compute_test.py ..
tests/unit/nupic/research/spatial_pooler_cpp_api_test.py ..............................
tests/unit/nupic/research/spatial_pooler_py_api_test.py ..............................
tests/unit/nupic/research/spatial_pooler_unit_test.py s.................................
tests/unit/nupic/research/temporal_memory_test.py ...........................
tests/unit/nupic/research/tp10x2_test.py ....
tests/unit/nupic/research/tp_constant_test.py ...
tests/unit/nupic/research/tp_test.py ....
tests/unit/nupic/research/monitor_mixin/metric_test.py ..
tests/unit/nupic/research/monitor_mixin/trace_test.py ..
tests/unit/nupic/support/configuration_test.py ............s....................
tests/unit/nupic/support/custom_configuration_test.py .........s..............
tests/unit/nupic/support/decorators_test.py ....
tests/unit/nupic/support/object_json_test.py ...............
tests/unit/nupic/support/consoleprinter_test/consoleprinter_test.py .
=== 825 passed, 21 skipped in 100.95 seconds ===
[root@localhost nupic-master]#

This should enable you to start exploring NuPIC.

Tuesday, October 06, 2015

Oracle Linux - Install Python setuptools

When working with Python, and when you want to make your life easier when installing new modules and functions, it is commonly a best practice to use tools such as pip and/or Python setuptools. Python setuptools helps you to easily download, build, install, upgrade, and uninstall Python packages. Setting up setuptools on Oracle Linux is basically a single command. Executing the command will download a Python script and execute it; this script ensures that setuptools is downloaded and installed correctly on your system.

You can download and execute the script manually in two steps, or you can do this in one go so that only a single command is needed to install setuptools on Oracle Linux. Below is an example of the single command, which uses wget and pipes the result to Python for execution.

[root@localhost ~]# wget https://bootstrap.pypa.io/ez_setup.py -O - | python
--2015-10-06 16:06:27--  https://bootstrap.pypa.io/ez_setup.py
Resolving bootstrap.pypa.io (bootstrap.pypa.io)... 185.31.18.175
Connecting to bootstrap.pypa.io (bootstrap.pypa.io)|185.31.18.175|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11434 (11K) [text/x-python]
Saving to: 'STDOUT'

100%[==================================>] 11,434      --.-K/s   in 0s

2015-10-06 16:06:28 (534 MB/s) - written to stdout [11434/11434]

Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-18.3.2.zip
Extracting in /tmp/tmpuwKkuT
Now working in /tmp/tmpuwKkuT/setuptools-18.3.2
Installing Setuptools
running install
running bdist_egg
running egg_info
writing requirements to setuptools.egg-info/requires.txt
writing setuptools.egg-info/PKG-INFO
writing top-level names to setuptools.egg-info/top_level.txt
writing dependency_links to setuptools.egg-info/dependency_links.txt
writing entry points to setuptools.egg-info/entry_points.txt
reading manifest file 'setuptools.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'setuptools.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
copying easy_install.py -> build/lib
creating build/lib/_markerlib
copying _markerlib/__init__.py -> build/lib/_markerlib
copying _markerlib/markers.py -> build/lib/_markerlib
creating build/lib/pkg_resources
copying pkg_resources/__init__.py -> build/lib/pkg_resources
creating build/lib/setuptools
copying setuptools/__init__.py -> build/lib/setuptools
copying setuptools/archive_util.py -> build/lib/setuptools
copying setuptools/compat.py -> build/lib/setuptools
copying setuptools/depends.py -> build/lib/setuptools
copying setuptools/dist.py -> build/lib/setuptools
copying setuptools/extension.py -> build/lib/setuptools
copying setuptools/lib2to3_ex.py -> build/lib/setuptools
copying setuptools/msvc9_support.py -> build/lib/setuptools
copying setuptools/package_index.py -> build/lib/setuptools
copying setuptools/py26compat.py -> build/lib/setuptools
copying setuptools/py27compat.py -> build/lib/setuptools
copying setuptools/py31compat.py -> build/lib/setuptools
copying setuptools/sandbox.py -> build/lib/setuptools
copying setuptools/site-patch.py -> build/lib/setuptools
copying setuptools/ssl_support.py -> build/lib/setuptools
copying setuptools/unicode_utils.py -> build/lib/setuptools
copying setuptools/utils.py -> build/lib/setuptools
copying setuptools/version.py -> build/lib/setuptools
copying setuptools/windows_support.py -> build/lib/setuptools
creating build/lib/pkg_resources/_vendor
copying pkg_resources/_vendor/__init__.py -> build/lib/pkg_resources/_vendor
creating build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/__about__.py -> build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/__init__.py -> build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/_compat.py -> build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/_structures.py -> build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/specifiers.py -> build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/version.py -> build/lib/pkg_resources/_vendor/packaging
creating build/lib/setuptools/command
copying setuptools/command/__init__.py -> build/lib/setuptools/command
copying setuptools/command/alias.py -> build/lib/setuptools/command
copying setuptools/command/bdist_egg.py -> build/lib/setuptools/command
copying setuptools/command/bdist_rpm.py -> build/lib/setuptools/command
copying setuptools/command/bdist_wininst.py -> build/lib/setuptools/command
copying setuptools/command/build_ext.py -> build/lib/setuptools/command
copying setuptools/command/build_py.py -> build/lib/setuptools/command
copying setuptools/command/develop.py -> build/lib/setuptools/command
copying setuptools/command/easy_install.py -> build/lib/setuptools/command
copying setuptools/command/egg_info.py -> build/lib/setuptools/command
copying setuptools/command/install.py -> build/lib/setuptools/command
copying setuptools/command/install_egg_info.py -> build/lib/setuptools/command
copying setuptools/command/install_lib.py -> build/lib/setuptools/command
copying setuptools/command/install_scripts.py -> build/lib/setuptools/command
copying setuptools/command/register.py -> build/lib/setuptools/command
copying setuptools/command/rotate.py -> build/lib/setuptools/command
copying setuptools/command/saveopts.py -> build/lib/setuptools/command
copying setuptools/command/sdist.py -> build/lib/setuptools/command
copying setuptools/command/setopt.py -> build/lib/setuptools/command
copying setuptools/command/test.py -> build/lib/setuptools/command
copying setuptools/command/upload_docs.py -> build/lib/setuptools/command
copying setuptools/script (dev).tmpl -> build/lib/setuptools
copying setuptools/script.tmpl -> build/lib/setuptools
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
copying build/lib/easy_install.py -> build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/_markerlib
copying build/lib/_markerlib/__init__.py -> build/bdist.linux-x86_64/egg/_markerlib
copying build/lib/_markerlib/markers.py -> build/bdist.linux-x86_64/egg/_markerlib
creating build/bdist.linux-x86_64/egg/pkg_resources
copying build/lib/pkg_resources/__init__.py -> build/bdist.linux-x86_64/egg/pkg_resources
creating build/bdist.linux-x86_64/egg/pkg_resources/_vendor
copying build/lib/pkg_resources/_vendor/__init__.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor
creating build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/__about__.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/__init__.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/_compat.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/_structures.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/specifiers.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/version.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
creating build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/__init__.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/archive_util.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/compat.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/depends.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/dist.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/extension.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/lib2to3_ex.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/msvc9_support.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/package_index.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/py26compat.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/py27compat.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/py31compat.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/sandbox.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/site-patch.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/ssl_support.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/unicode_utils.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/utils.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/version.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/windows_support.py -> build/bdist.linux-x86_64/egg/setuptools
creating build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/__init__.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/alias.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/bdist_egg.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/bdist_rpm.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/bdist_wininst.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/build_ext.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/build_py.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/develop.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/easy_install.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/egg_info.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/install.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/install_egg_info.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/install_lib.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/install_scripts.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/register.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/rotate.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/saveopts.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/sdist.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/setopt.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/test.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/upload_docs.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/script (dev).tmpl -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/script.tmpl -> build/bdist.linux-x86_64/egg/setuptools
byte-compiling build/bdist.linux-x86_64/egg/easy_install.py to easy_install.pyc
byte-compiling build/bdist.linux-x86_64/egg/_markerlib/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/_markerlib/markers.py to markers.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/__about__.py to __about__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/_compat.py to _compat.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/_structures.py to _structures.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/specifiers.py to specifiers.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/version.py to version.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/archive_util.py to archive_util.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/compat.py to compat.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/depends.py to depends.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/dist.py to dist.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/extension.py to extension.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/lib2to3_ex.py to lib2to3_ex.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/msvc9_support.py to msvc9_support.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/package_index.py to package_index.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/py26compat.py to py26compat.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/py27compat.py to py27compat.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/py31compat.py to py31compat.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/sandbox.py to sandbox.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/site-patch.py to site-patch.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/ssl_support.py to ssl_support.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/unicode_utils.py to unicode_utils.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/utils.py to utils.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/version.py to version.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/windows_support.py to windows_support.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/alias.py to alias.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/bdist_egg.py to bdist_egg.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/bdist_rpm.py to bdist_rpm.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/bdist_wininst.py to bdist_wininst.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/build_ext.py to build_ext.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/build_py.py to build_py.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/develop.py to develop.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/easy_install.py to easy_install.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/egg_info.py to egg_info.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/install.py to install.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/install_egg_info.py to install_egg_info.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/install_lib.py to install_lib.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/install_scripts.py to install_scripts.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/register.py to register.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/rotate.py to rotate.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/saveopts.py to saveopts.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/sdist.py to sdist.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/setopt.py to setopt.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/test.py to test.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/upload_docs.py to upload_docs.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/zip-safe -> build/bdist.linux-x86_64/egg/EGG-INFO
creating dist
creating 'dist/setuptools-18.3.2-py2.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing setuptools-18.3.2-py2.7.egg
Copying setuptools-18.3.2-py2.7.egg to /usr/lib/python2.7/site-packages
Adding setuptools 18.3.2 to easy-install.pth file
Installing easy_install script to /usr/bin
Installing easy_install-2.7 script to /usr/bin

Installed /usr/lib/python2.7/site-packages/setuptools-18.3.2-py2.7.egg
Processing dependencies for setuptools==18.3.2
Finished processing dependencies for setuptools==18.3.2
[root@localhost ~]#

In essence, that is all there is to installing the Python setuptools on Oracle Linux. A single command ensures you are in business and good to go.
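After the install completes, a quick sanity check can confirm that setuptools is actually usable from your Python interpreter. The snippet below is a minimal sketch of such a check; the exact version string printed will depend on what was installed on your system.

```python
# Quick sanity check: import setuptools and report the installed version.
# If the import fails, the installation did not complete correctly.
import setuptools

print("setuptools version:", setuptools.__version__)
```

Running this with the same Python interpreter the installer used (for example `python check_setuptools.py`) should print the installed version, after which tools such as `easy_install` are available from the command line as shown in the install output above.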