Friday, November 02, 2018

Why innovators need to eat frogs

Innovation is a long process of tackling very complex problems, often problems nobody has tried to tackle before. Throughout my career I have worked on a number of hard innovation projects, building IoT-like solutions way before IoT was a known phrase and developing cryptographic solutions to allow secure communication with chips on government ID cards. In all those cases I followed the mantra of Google X without knowing it, even before Google was a real company and Google X existed.

The mantra at Google X, the moonshot department of Google where people tackle really hard problems, is #MonkeyFirst. The idea behind this is that if you want a monkey to recite Shakespeare on a pedestal, you do not start with building the pedestal. Everyone can build a pedestal; a lot of people have built pedestals before. Training the monkey is the hard part. If you are unable to train the monkey to recite Shakespeare, there is no need to build the pedestal at all.

In short, tackle the hard problem first before you spend time and money on the parts of your project that you know will not be that hard.

The mantra #MonkeyFirst was also stated by Mark Twain in a slightly different form with the same meaning. Mark Twain wrote: "Eat a live frog first thing in the morning and nothing worse will happen to you for the rest of the day".

Companies start to eat frogs
In companies around the world the common thing to do is to build the pedestal first and only then try to train the monkey. One of the reasons for this is that people tend to desire quick satisfaction, and within companies there is commonly a tendency for management to want to see tangible results fast. Building the pedestal is something that can be done quickly and it will show results to management. Showing what you have accomplished is a more convenient message to tell than providing a long list of reasons why it is so hard to train a monkey.

However, if you are unable to train the monkey there is no reason to build a pedestal. Eating the frog or #MonkeyFirst ties a bit into the "fail fast, fail early, fail cheap" concept.


As resources can only be spent once, it is in the best interest of a company to make sure that you fail early in a project. If it turns out that you are unable to train the monkey before you spend resources on building a then-useless pedestal, you have saved yourself from burning resources without getting usable output.

Say no to your inner self
It is a natural thing to seek instant satisfaction; it is a natural thing to build the pedestal first and see what you have achieved. However, it is wiser to try and train the monkey first. On a personal level it is difficult to say no to this natural tendency. Within an enterprise it is equally hard to change the mindset of aiming for instant satisfaction. Changing the mindset within an enterprise might even be harder than changing your own.

However, changing to a culture of eating frogs is very beneficial for enterprises that strive for innovation. Eating frogs will save valuable time and money and ensures that the focus is on projects that have a higher chance of success.

Oracle dev – microservice secure bootstrapping for shared secrets

When building microservices, and more precisely when building microservices in containers, at some point you will start hitting the problem of secure bootstrapping and the handling of shared secrets. This blog post aims to provide some insight into possible design patterns for handling this problem.
Problem outline

When building microservices or functions which will run from a container, you want to provide them with as little configuration as possible while at the same time allowing them to be configured dynamically at startup. To make the problem a bit more interesting, you want to be able to scale containers up and down dynamically. This problem is directly visible when you are building functions in, for example, Oracle Project Fn. A function is in effect a Docker container which will only live for a very short amount of time.

We did mention configuration; we should actually split configuration into two parts: one part is actual configuration and one part is secrets. Configuration is stored in a central configuration store (for example Consul from HashiCorp) and secrets are stored in a secret management solution (for example Vault from HashiCorp). Making a distinct split between secrets and configuration is a best practice.

The simple bootstrap
Let's say you need to build a microservice which needs to call another service within your landscape. You will potentially need the following two things to allow your microservice to call the other microservice: (1) a URL and (2) a shared secret for authentication.


A simple way to resolve this is to ensure your container or function has a bootstrap routine which calls the configuration store in a static manner to acquire the needed configuration and the shared secret. Static manner means that your microservice or function has the URL and possibly a key baked into the deployment which allows it to connect to the configuration store; this is shown in the example below.



The above works and already provides a relatively better implementation than baking all configuration and keys needed for microservice-1 to communicate with microservice-2 into the deployment.
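
A minimal sketch of such a static bootstrap is shown below; the endpoint, header name and key are assumptions used for illustration and not tied to a specific configuration store product:

#!/bin/bash
# static bootstrap sketch: the configuration store URL and key are baked into the deployment
CONFIG_STORE_URL="https://configstore.internal:8500/v1/config/microservice-1"   # assumed endpoint
CONFIG_STORE_KEY="static-deploy-key"                                            # assumed baked-in key

# fetch the configuration and the shared secret needed to call microservice-2
curl -s -H "X-Config-Key: ${CONFIG_STORE_KEY}" "${CONFIG_STORE_URL}" > /tmp/microservice-1-config.json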

Bootstrap with service registry 
The model below is a bit more complex; however, it also provides some more options. In the diagram below all steps are done over HTTPS. You can, however, also implement step 1 with a socket connection from your Docker container, or select an IP which is only available within the Docker-internal network and is not accessible from outside.


The reason you might want to secure the "service registry" service from the outside world is that it is key to gaining access to everything else. In the above model the following steps are executed:

  1. The bootstrap of microservice-1 registers at the service registry and receives a key and URL for the configuration store
  2. The service registry informs the configuration store about the new service that has been booted and which random secret it will use to connect. 
  3. Microservice-1 connects to the configuration store while only providing the secret it received from the service registry. Based upon this secret the configuration store knows which configuration and secrets to return to the service. Additionally, it creates a random secret for this specific instance of microservice-1 to be used to communicate with microservice-2
  4. Microservice-1 calls microservice-2 while using the configuration and the key it receives from the configuration store.
  5. Microservice-2 receives a call from microservice-1 and verifies the secret with the configuration store. 

The benefit of this model is that only a service which is started within Docker can access the service registry, and by enforcing this, only a service starting within Docker can acquire a key to communicate with the dynamic configuration store.
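
A heavily hedged sketch of steps 1 and 3 from the list above, as seen from the bootstrap routine of microservice-1, is shown below; all hostnames, endpoints and field names are assumptions for illustration only:

#!/bin/bash
# step 1: register at the service registry (only reachable on the Docker-internal network)
REGISTRATION=$(curl -s -X POST "http://service-registry:8500/register" -d '{"service":"microservice-1"}')
CONFIG_URL=$(echo "${REGISTRATION}" | jq -r '.config_url')
CONFIG_SECRET=$(echo "${REGISTRATION}" | jq -r '.secret')

# step 3: call the configuration store with only the received secret to obtain
# the configuration and the per-instance key for microservice-2
curl -s -H "X-Registry-Secret: ${CONFIG_SECRET}" "${CONFIG_URL}" > /tmp/microservice-1-config.json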

Friday, October 05, 2018

Oracle Fn - build your first Go function

In this guide we showcase how easy it is to build a single function using Oracle Fn based upon Go.

The first step in developing a function is to initialize it with the fn command. The example below shows how we initialize a new function which uses Go as the programming language, with an HTTP call as the trigger. We call the function myCoolFunction.

[root@projectfn devstuff]# fn init --runtime go --trigger http myCoolFunction
Creating function at: /myCoolFunction
Function boilerplate generated.
func.yaml created.
[root@projectfn devstuff]#

As soon as the initialization is complete we can see that we have a new directory called myCoolFunction which contains a number of files:
[root@projectfn devstuff]# ls -la myCoolFunction/
total 16
drwxr-xr-x. 2 root root  73 Oct  5 18:48 .
drwxr-xr-x. 8 root root  90 Oct  5 18:48 ..
-rw-r--r--. 1 root root 469 Oct  5 18:48 func.go
-rw-r--r--. 1 root root 193 Oct  5 18:48 func.yaml
-rw-r--r--. 1 root root 127 Oct  5 18:48 Gopkg.toml
-rw-r--r--. 1 root root 505 Oct  5 18:48 test.json
[root@projectfn devstuff]# 

If we look at the files, the two most important are func.go and func.yaml. The func.go file contains the function logic and func.yaml contains the configuration of the function.

If we look at the content of func.go we can see that this is a very simple hello world example written in Go which will respond with a JSON message "Hello World", or, in case you provide a JSON payload with a name, with "Hello <name>". In effect the example is very simple and very handy to quickly test whether your function is OK before you start coding your own logic into it.
package main

import (
 "context"
 "encoding/json"
 "fmt"
 "io"

 fdk "github.com/fnproject/fdk-go"
)

func main() {
 fdk.Handle(fdk.HandlerFunc(myHandler))
}

type Person struct {
 Name string `json:"name"`
}

func myHandler(ctx context.Context, in io.Reader, out io.Writer) {
 p := &Person{Name: "World"}
 json.NewDecoder(in).Decode(p)
 msg := struct {
  Msg string `json:"message"`
 }{
  Msg: fmt.Sprintf("Hello %s", p.Name),
 }
 json.NewEncoder(out).Encode(&msg)
}

The func.yaml file contains the configuration of the function, for example on which endpoint it is externally accessible.

schema_version: 20180708
name: mycoolfunction
version: 0.0.1
runtime: go
entrypoint: ./func
format: json
triggers:
- name: mycoolfunction-trigger
  type: http
  source: 

Now we have to build and deploy the function. What happens in the background is that a Docker container is built and that the application, the function and the trigger are registered within Fn so it can be called. As we have not defined an application name yet, we will call this application mycoolapp. The command required and the result are shown in the example below.

[root@projectfn myCoolFunction]# fn --verbose deploy --app mycoolapp --local
Deploying mycoolfunction to app: mycoolapp
Bumped to version 0.0.4
Building image mycoolfunction:0.0.4 
FN_REGISTRY:  FN_REGISTRY is not set.
Current Context:  No context currently in use.
Sending build context to Docker daemon  6.144kB
Step 1/10 : FROM fnproject/go:dev as build-stage
 ---> fac877f7d14d
Step 2/10 : WORKDIR /function
 ---> Using cache
 ---> 910b06b938d1
Step 3/10 : RUN go get -u github.com/golang/dep/cmd/dep
 ---> Using cache
 ---> f6b396d6e1fa
Step 4/10 : ADD . /go/src/func/
 ---> 35a944c2ad0f
Step 5/10 : RUN cd /go/src/func/ && dep ensure
 ---> Running in 8ef4cfb23602
Removing intermediate container 8ef4cfb23602
 ---> 75991cccc0b0
Step 6/10 : RUN cd /go/src/func/ && go build -o func
 ---> Running in 5d38abb76d94
Removing intermediate container 5d38abb76d94
 ---> 87f20cf4d16d
Step 7/10 : FROM fnproject/go
 ---> 76aed4489768
Step 8/10 : WORKDIR /function
 ---> Using cache
 ---> 1629c0d58cc1
Step 9/10 : COPY --from=build-stage /go/src/func/func /function/
 ---> Using cache
 ---> ac97ccf6b37f
Step 10/10 : ENTRYPOINT ["./func"]
 ---> Using cache
 ---> 5c61704790e4
Successfully built 5c61704790e4
Successfully tagged mycoolfunction:0.0.4

Updating function mycoolfunction using image mycoolfunction:0.0.4...

In effect, this is the only thing you need to do to get your first function up and running. To test whether it is really working we can call the function as shown below, and we get the result back as expected.
[root@projectfn myCoolFunction]# curl http://192.168.56.15:8080/t/mycoolapp/mycoolfunction-trigger
{"message":"Hello World"}

Wednesday, September 12, 2018

Oracle Cloud - Elastic Search set number of replicas for shards

We see more and more that customers leverage Elastic Search within modern application deployments. Recently we have been experimenting with Elastic Search in the Oracle Cloud. The initial setup was a relatively simple cluster of a number of nodes. When doing the first resilience test we found that we degraded the cluster state and lost data when turning off a virtual machine in the Oracle Compute Cloud. The reason for this was that we did not set the replication for shards in the correct manner. Ensuring you have set sharding correctly is vital to ensure your Elastic Search cluster in the Oracle Cloud is resilient against node failure.

Building a cluster in the cloud
Building a cluster in the Oracle Compute Cloud based upon Oracle Linux is in effect no different from building an Elastic Search cluster in any other cloud or in your own datacenter. This means the below is applicable to any installation you do.

The general idea when building an Elastic Search cluster is that you ensure you have multiple nodes with multiple roles working as one, while ensuring that the cluster is capable of losing a number of its member nodes and will still continue functioning. The main roles within the cluster configuration of Elastic Search are data nodes, master nodes and client nodes.

When configuring your cluster you have to ensure that you are capable of losing one node and still be able to provide the services to your customers. One of the things we overlooked while building the initial cluster was the level of replica shards.

Use replica shards
When you store an index in Elastic Search it can be broken down into multiple shards. The shards can be distributed over multiple machines (nodes) in the cluster. The idea behind this is twofold: firstly it helps you distribute load, and secondly it ensures data is on more than one node at the same time, so the data remains available when a node is removed from the cluster.

To ensure optimal use, both for distributing operations and for having a copy of a shard to mitigate against failure, you will have to ensure you have replica shards. The image below shows this at a high level:


As you can see in the above example we have two primary shards for a single index, P0 and P1. We also have a replica shard for each of them, R0 and R1. In case node 1 fails for some reason, the data is still available to the cluster in the form of shard R0. In this example we have a replication factor of 1; you can however set the replication much higher. The level of replication will depend on the level of risk you want to take, combined with the level of compute distribution you want to achieve, against the costs you are willing to accept.

As storage is relatively cheap in the Oracle Compute Cloud it is advisable to set the replication higher than 1 to ensure you have 2 or more replica shards.

Setting the replication level
You can set the replication level per index in Elastic Search. This helps you to ensure you have more replicas for data which is accessed frequently or where you need a higher level of availability. Setting the replication level is done with the parameter number_of_replicas as shown below:

PUT /my_index/_settings
{
  "number_of_replicas": 1
}
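
The same setting can be applied with a plain curl call against the cluster REST API; the hostname and index name below are assumptions:

curl -X PUT "http://localhost:9200/my_index/_settings" -H 'Content-Type: application/json' -d '{"number_of_replicas": 2}'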

Conclusion
Using shard replication will help you protect against the failure of a data node in your Elastic Search cluster and it will improve performance. As storage is relatively cheap in the Oracle Cloud it is advisable to set number_of_replicas to 2 or higher.

Monday, August 20, 2018

Oracle Jet - security by obfuscation - Do not use it

Obfuscation, "the action of making something obscure, unclear, or unintelligible", is often used when trying to secure an application. The idea behind obfuscation is to make the technical working of an application so unclear to attackers that it becomes very hard to figure out the actual working. Even though obfuscation might look like a good measure, it provides no real security against people who intend to understand the real working of an application. A good example of security by obfuscation is URL obfuscation.

When applying URL obfuscation as a security measure, a common practice is to obfuscate the URL parameters in such a way that they are not easily understandable by someone who has not built the application.

A recent example I found is a URL which ends with details?id=QEREUS, where at first glance the ID appears to be randomly generated. A randomly generated ID makes it harder to build a script which accesses the details page. However, in this specific example the result of the URL is a JSON file, and when examining the JSON file another ID was shown. In this case the ID in the JSON file was A01049 while the ID in the URL was QEREUS.

By examining a number of different detail pages a pattern quickly emerges: all URL IDs start with a Q and only use a specific subset of the alphabet. In this specific example the actual ID, shown in the JSON file, was an A followed by a sequential number, and the developer chose a substitution method to obfuscate the actual ID. The list below shows the selected substitution method:

  • 0 - E
  • 1 - R
  • 2 - T
  • 3 - Y
  • 4 - U
  • 5 - I
  • 6 - O
  • 7 - P
  • 8 - A
  • 9 - S

When you take a good look at the letters and take a good look at a QWERTY keyboard layout you can figure out why the developer has selected this set of letters to substitute the numbers.
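
To illustrate how little protection this offers, the substitution table above can be reversed with a single shell command; the example ID is the one from the URL above:

echo "QEREUS" | tr 'ERTYUIOPAS' '0123456789' | sed 's/^Q/A/'
# prints A01049, the actual ID from the JSON file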

Why people use obfuscation
There are very good reasons why one would like to make it hard for externals to guess ID parameters. When you are able to guess the ID parameter it might become very easy to write a script to scrape a website for information. When looking at more modern web-based applications, for example Oracle JET based applications, JavaScript will call a REST API, which means that the user is able to gain access to pure JSON (or XML) based information.


In essence there is nothing wrong with external people accessing this information; however, there might be very good reasons why you do not want to over-promote the use of the pure JSON data outside the context of your Oracle JET based application.

A better way to do obfuscation
Besides implementing true randomization and more advanced security than obfuscation, there are much better ways to do obfuscation in your URLs. For example, simple encryption would work much (much, much, much) better. The examples below are Rijndael-256 encrypted IDs with base64 encoding:

  • A01049 - fiCOcGED9SDiYe9du0XUIu1tsHQNGwWVK9uvI755+fg=
  • A01050 - cQbgwi1mnsYloonV4ZhqPMo3C2ie0+ilmsZFr3mBb3A= 
  • A01051 - U0xs5e4QP6OIclZBCSLuf34WFNRqs3lbtBgkmMiRvkc=

As opposed to the number substitution shown below for the same IDs:

  • A01049 - QEREUS
  • A01050 - QEREIE
  • A01051 - QEREIR 
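
As a hedged sketch, an encrypted ID like the ones above could be produced on the command line as shown below; openssl with AES-256-CBC is used here as a stand-in for the Rijndael-256 example, and the passphrase is purely illustrative:

# encrypt an ID and base64 encode the result (output differs per run due to the random salt)
echo -n "A01049" | openssl enc -aes-256-cbc -base64 -pass pass:VerySecretKey
# decoding requires the same passphrase on the server side: openssl enc -d -aes-256-cbc -base64 -pass pass:VerySecretKey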


Should you use obfuscation?
My personal opinion: no. The main reason for saying no is that it provides a false sense of security. It gives you the idea you are safe, while someone who is determined will figure it out at some point in time. There are better and more structural ways of doing things like preventing people from overly actively scraping your website. However, if obfuscation is part of a wider set of security measures, you should think about a very good way of doing obfuscation and not simply rely on a solution like substitution, as it will take someone a very short amount of time to figure out the substitution algorithm.


Thursday, August 02, 2018

Oracle Linux - security hardening - CIS control 1.1.2

As part of ensuring you deploy Oracle Linux 7 in a secure way, the CIS benchmark can provide good guidance. Following the CIS benchmark will ensure that most of the important security hardening topics are considered. As with most general guidelines, not everything in the Oracle Linux 7 CIS benchmark will apply to your specific situation. Having stated that, it is good to consider all the points mentioned in the benchmark and apply them with a comply-or-explain model.

Within this series of posts we will go through all the Oracle Linux 7 CIS benchmark controls and outline them in a bit more detail than the actual CIS benchmark does.

control : Set nodev option for /tmp Partition

The rationale given by the CIS benchmark is: Since the /tmp filesystem is not intended to support devices, set this option to ensure that users cannot attempt to create block or character special devices in /tmp.

In more detail, if the nodev option is not set, character or block special device files created on the /tmp filesystem will be interpreted by the kernel. As /tmp is world-writable, this would give an attacker a place to create a device node, for example one pointing at a system disk, and use it to bypass access controls. The nodev option ensures that this is prevented. The official man page reads the following on nodev: "Do not interpret character or block special devices on the file system."

The CIS benchmark documentation provides the below command as a way to verify that the nodev option is set. In reality two possible checks are provided; however, I feel the one below provides the most assurance that the option is actually actively implemented in the right way.

mount | grep "[[:space:]]/tmp[[:space:]]" | grep nodev 

A more extensive version of this check which will provide a pass/fail response is shown below:

#!/bin/bash
mount | grep "[[:space:]]/tmp[[:space:]]" | grep -q nodev
if [ $? == 0 ]; then
   echo "pass"
else
   echo "fail"
fi

This provides an easier way to implement an automated check if you want to incorporate this in a wider check of your Oracle Linux installation.
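
If the check fails, remediation is done in /etc/fstab; a hedged example is shown below, where the exact mount options and the device backing /tmp will depend on your setup:

# add nodev (commonly combined with nosuid and noexec) to the /tmp entry in /etc/fstab, for example:
#   tmpfs   /tmp   tmpfs   defaults,nodev,nosuid,noexec   0 0
# then remount /tmp so the option becomes active
mount -o remount /tmp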

Monday, July 30, 2018

Oracle Linux - security hardening - CIS control 1.1.1

As part of ensuring you deploy Oracle Linux 7 in a secure way, the CIS benchmark can provide good guidance. Following the CIS benchmark will ensure that most of the important security hardening topics are considered. As with most general guidelines, not everything in the Oracle Linux 7 CIS benchmark will apply to your specific situation. Having stated that, it is good to consider all the points mentioned in the benchmark and apply them with a comply-or-explain model.

Within this series of posts we will go through all the Oracle Linux 7 CIS benchmark controls and outline them in a bit more detail than the actual CIS benchmark does.

control : Create Separate Partition for /tmp

The rationale behind this control is: Since the /tmp directory is intended to be world-writable, there is a risk of resource exhaustion if it is not bound to a separate partition. In addition, making /tmp its own file system allows an administrator to set the noexec option on the mount, making /tmp useless for an attacker to install executable code. It would also prevent an attacker from establishing a hardlink to a system setuid program and wait for it to be updated. Once the program was updated, the hardlink would be broken and the attacker would have his own copy of the program. If the program happened to have a security vulnerability, the attacker could continue to exploit the known flaw.

What this in effect means is that if an attacker were able to flood /tmp with "junk" data, it could lead to a situation where your system disks are full and the operating system is unable to function the way it should. Additionally, as /tmp is a place most users are able to write data to, an attacker could also write scripts and code to it; as it provides a place to store code, it can be abused from that point of view. If you allow users to write to /tmp but disallow them from executing code stored in /tmp, this takes away this specific risk of code execution.

The CIS benchmark provides the below standard code to verify if you have a separate /tmp in place:


grep "[[:space:]]/tmp[[:space:]]" /etc/fstab


Even though this check works the below might be a bit smarter and will provide a pass or fail result based upon the check:

#!/bin/bash

grep -q "[[:space:]]/tmp[[:space:]]" /etc/fstab
if [ $? == 0 ]; then
   echo "pass"
else
   echo "fail"
fi

In effect the above will do the same as the check promoted in the CIS benchmark document; however, it might be easier to include in a programmatic check. In case /tmp is not a separate file system it will return a fail on this control.
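
If /tmp is not a separate file system yet, a hedged example of adding one is shown below; a tmpfs based /tmp is used here, the size is an assumption, and a dedicated LVM volume would work equally well:

# add a separate tmpfs based /tmp to /etc/fstab, for example:
#   tmpfs   /tmp   tmpfs   defaults,size=2G   0 0
# then mount it and verify
mount -a
mount | grep "[[:space:]]/tmp[[:space:]]"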

Tuesday, July 24, 2018

Oracle WebLogic - prevent users from bypassing the SAML authentication

The Oracle Critical Patch Update Advisory of July 2018 provides an advisory on patching WebLogic to resolve two SAML-related security issues. Both issues have been registered as CVE (Common Vulnerabilities and Exposures) entries. For more information you can refer to CVE-2018-2998 and CVE-2018-2933, which provide additional information, as well as the Oracle website where you can find more about the issues found and the way to mitigate against them.

At current the CVE database provides most insight on CVE-2018-2998:

Vulnerability in the Oracle WebLogic Server component of Oracle Fusion Middleware (subcomponent: SAML). Supported versions that are affected are 10.3.6.0, 12.1.3.0, 12.2.1.2 and 12.2.1.3. Easily exploitable vulnerability allows low privileged attacker with network access via HTTP to compromise Oracle WebLogic Server. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of Oracle WebLogic Server accessible data as well as unauthorized read access to a subset of Oracle WebLogic Server accessible data. CVSS 3.0 Base Score 5.4 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:N).

Both issues have been found by Denis Andzakovic from Pulse Security. Denis gives more insight on the Pulse Security website into the inner workings and additional details of the exact issue. In effect the issues result in the option to bypass SAML authentication: by inserting an XML comment, an attacker can coerce the WebLogic SAML Service Provider into logging in as another user. When an XML comment is added inside a NameID tag, the WebLogic server only processes the string after the comment, while adding the XML comment does not invalidate the SAML assertion's signature.

Oracle has provided a solution for the issues found by Denis Andzakovic; the July CPU page provides additional information on how to patch your systems. In addition, the Oracle support website provides document 2421480.1, "Recommendations for Configuring Assertion Signatures in WebLogic Server", with additional information.


Monday, July 02, 2018

Oracle Linux - Caddy Server in OL7 Docker container

Today we pushed the first version of an Oracle Linux based Caddy server Docker image to the public Docker Hub. The Caddy webserver is seen as one of the most security-minded webservers and is known not to be vulnerable to a large number of CVEs. The main objective of the Caddy webserver is to provide a security-first webserver.

For developers and DevOps teams who want to adopt Caddy Server, it is now available in an Oracle Linux 7 Docker container on the OracleLinuxWorld Docker Hub page. You can pull the image with:

docker pull oraclelinuxworld/ol7slim-caddyserver

The code for the Oracle Linux based Caddy Server container is available on the OracleLinuxWorld GitHub page. In case of any issues or requests, please raise a request on GitHub.

When deploying your HTML code, the home directory for the Caddy server is /var/www/html where you can deploy all files you want to serve with Caddy. 
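
A hedged example of running the image and serving local content from that docroot is shown below; the container name and the port mapping are assumptions and may differ for this image:

docker run -d --name caddyserver -p 80:2015 -v $(pwd)/html:/var/www/html oraclelinuxworld/ol7slim-caddyserver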

Friday, June 29, 2018

Oracle Linux - connect F5 to remote syslog server

Logging on Linux devices is local by default. A number of good reasons exist to ensure you have all your logs in one central location. Within a recent project the ask was to ensure all logging was done to a central Oracle Linux rsyslog server. The activation of an rsyslog server to receive all information from other Oracle Linux nodes is a trivial task.

Installing the rsyslog server can be done using a yum command, yum install rsyslog, which will take care of most of it. A more interesting side of things is ensuring that not only your Oracle Linux nodes report to your rsyslog server. When you have an F5 appliance you will have to make sure that you provide the details of your Oracle Linux rsyslog server to this device as well, using the tmsh (TMOS) shell.
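
As a side note, a minimal sketch of enabling remote log reception on the Oracle Linux rsyslog server itself is shown below; the directives assume the default UDP listener on port 514:

yum install -y rsyslog
# in /etc/rsyslog.conf, uncomment (or add) the UDP input module so the server listens on port 514:
#   $ModLoad imudp
#   $UDPServerRun 514
systemctl enable rsyslog
systemctl restart rsyslog
firewall-cmd --permanent --add-port=514/udp && firewall-cmd --reload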

Setting the Oracle Linux rsyslog server in the F5 can be done using a command as shown below:

modify /sys syslog remote-servers add { <name> { host <rsyslog-server-ip> remote-port <port> } }

This should set the configuration correctly. If you want to verify the new configuration you can do so using the command below:

# tmsh list sys syslog
sys syslog {
    include "destination d_loghost { udp(172.16.1.110 port(514) localip(172.18.1.1));}; log {source(s_syslog_pipe); filter(f_local0); destination(d_loghost);};"
}

This should ensure the basics are set to enable you to receive log traffic from your F5 on your Oracle Linux rsyslog server. 

Saturday, May 26, 2018

Oracle AI cloud - develop local Pillow applications

Oracle AI Cloud provides by default a solution for developers who want to develop and deploy applications that make use of Pillow. Pillow is the friendly PIL fork by Alex Clark and Contributors; PIL is the Python Imaging Library by Fredrik Lundh and Contributors. Even though developing your code in the Oracle Cloud makes sense in some cases, developing on your local workstation makes a lot more sense from time to time.

To start developing Pillow based applications the easiest way is to install Pillow in a local Oracle Linux Vagrant box. Setting up a local Oracle Linux Vagrant box is relatively straightforward and has already been discussed a couple of times on this blog.

Installing Pillow
Installing Pillow can be done using pip; the example below shows how to install Pillow on Oracle Linux using pip.

[root@localhost site-packages]# pip install Pillow
Collecting Pillow
  Downloading https://files.pythonhosted.org/packages/00/49/a0483e7308b4b04b5a898789911dbb876d9fea54e7df0453915e47744cfd/Pillow-5.1.0-cp27-cp27mu-manylinux1_x86_64.whl (2.0MB)
    100% |████████████████████████████████| 2.0MB 190kB/s 
Installing collected packages: Pillow
Successfully installed Pillow-5.1.0
[root@localhost site-packages]#

As you can see, pip will take care of installing Pillow and gives you a ready-to-use environment to start developing. This will help you develop locally on projects you can later deploy on the Oracle AI Cloud.
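
A quick, hedged way to verify the local installation works is a small throwaway test; the file name and image content below are of course arbitrary:

python - <<'EOF'
from PIL import Image
# create a small red image and write it to disk to confirm Pillow is functional
img = Image.new("RGB", (64, 64), color=(255, 0, 0))
img.save("/tmp/pillow_test.png")
print("Pillow OK, wrote /tmp/pillow_test.png")
EOF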

Saturday, May 05, 2018

Oracle Linux - register GitLab Runner

Automation of the development process and including CI/CD in your development and deployment cycle is more and more common. One of the solutions you could use to build automated pipelines is GitLab CI. For companies who want (or need) to maintain a private repository and cannot use, as an example, github.com for storing their source code, GitLab is a very good tool. As part of GitLab you can use GitLab CI for pipeline automation.

Using GitLab CI and GitLab Runners takes away (in part) the need to include tooling such as Jenkins in your landscape. You can instruct your GitLab Runners to execute certain tasks and run the pipeline. For this to work you need to install the runner and register it against your GitLab repository.

In our case we run the GitLab repository on an Oracle Linux 7 instance and we also have the GitLab Runner installed on a (separate) Oracle Linux 7 instance. After installation you will have to take the below steps to register your runner against the GitLab repository. This is done on the GitLab Runner instance.

[root@gitlab ~]# gitlab-runner register
Running in system-mode.                            
                                                   
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
http://192.168.56.3/
Please enter the gitlab-ci token for this runner:
5gosbE5T2XJxtzXr_b_h
Please enter the gitlab-ci description for this runner:
[gitlab.devnet.terminalcult.org]: runner_0  
Please enter the gitlab-ci tags for this runner (comma separated):
all
Whether to run untagged builds [true/false]:
[false]: true
Whether to lock the Runner to current project [true/false]:
[true]: true
Registering runner... succeeded                     runner=5gosbE5T                
Please enter the executor: docker, parallels, shell, virtualbox, kubernetes, docker-ssh, ssh, docker+machine, docker-ssh+machine:
[docker, parallels, shell, virtualbox, kubernetes, docker-ssh, ssh, docker+machine, docker-ssh+machine]: shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded! 
[root@gitlab ~]# 

The token you need to provide can be obtained from the GitLab repository. The image below shows the token and the location where you can obtain it.



The same page can be used to change the settings of your runners after they are deployed.
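
Once the runner is registered, a minimal (hedged) .gitlab-ci.yml in the root of your repository is enough to see it pick up work; the job name and script are purely illustrative, and the tag matches the "all" tag used during registration above:

cat > .gitlab-ci.yml <<'EOF'
stages:
  - test

run_on_shell_runner:
  stage: test
  tags:
    - all
  script:
    - echo "running on the registered shell runner"
EOF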

Saturday, April 07, 2018

Oracle Cloud - Using AI cloud Platform to find a parking spot

One of the new and upcoming parts of the Oracle Cloud is the Oracle AI Cloud Platform. In effect this is a bundle of pre-installed frameworks and libraries which are tuned to run on the Oracle Cloud infrastructure. One of the deployments in the Oracle AI Cloud Platform is OpenCV. When you are working with incoming visual data this might be of much interest to you.

OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. Officially launched in 1999, the OpenCV project was initially an Intel Research initiative to advance CPU-intensive applications, part of a series of projects including real-time ray tracing and 3D display walls.

The below image showcases the full Oracle AI Cloud platform:

Example use case
As an example use case for using OpenCV from the Oracle AI Cloud Platform we would like to outline a theoretical case where, on a regular basis, pictures of an "old-fashioned" parking area at an airport are uploaded to OpenCV. Based upon the images that are sent to OpenCV on the Oracle AI Cloud Platform, the system can detect in which part of the parking area the most open spots are and direct visitors to that area.

Even though most parking areas have a counter of how many cars are currently in the parking lot, when this is done for a large space it can still be hard to find the area where there are free spots. As you would already need some level of camera security for this area, the costs for adding this feature are much lower compared to installing sensors in the ground which could detect whether a car is parked in a spot or not.


Even though it might sound complex, detecting free parking space is a relatively easy task to accomplish with OpenCV, and a large number of examples and algorithms are available. With relative ease you would be able to create a solution like this on the Oracle Cloud and by doing so improve customer satisfaction without the need to add sensors in every possible parking location.

Tuesday, March 20, 2018

Oracle Linux - Local Vault token cache

Vault is more and more seen in modern day infrastructure deployments. HashiCorp Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. Vault handles leasing, key revocation, key rolling, and auditing. Through a unified API, users can access an encrypted Key/Value store and network encryption-as-a-service, or generate AWS IAM/STS credentials, SQL/NoSQL databases, X.509 certificates, SSH credentials, and more.

When using Vault from HashiCorp on your Oracle Linux infrastructure you might have noticed that there is no logout option. You can authenticate yourself against Vault and from that moment on you can request all information from Vault that you need (and are entitled to see). When starting with Vault and building your scripting you might wonder how you "break" the connection again.

In effect, the connection is built every time you do a request against Vault, and authentication is done based upon a locally cached token. If you want to ensure that all tokens are removed after you have executed the steps needed against Vault, you will have to remove the token which is placed in the local cache.

In the case of Vault the local cache is a clear-text file stored in your home directory, as shown below:

[root@docker tmp]# ls -la ~/.vault-token 
-rw------- 1 root root 36 Mar 19 15:49 /root/.vault-token
[root@docker tmp]# 

Even though some improvement requests have been raised to add a logout-like function to the Vault CLI, the response from the HashiCorp developers has been that they do not intend to build this into the CLI, because removing the .vault-token file has the same effect.

In effect the Vault developers are correct in this: removing the file has the same effect, even though an option in the CLI might be a more understandable way of doing things. A reminder for everyone who is using Vault: when you are done, ensure that you remove the .vault-token cache file so you are sure nobody will be able to abuse the token to gain access to information they are not entitled to see.
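
A hedged example of such an explicit "logout" at the end of a script is shown below; the read path is an assumption used for illustration:

# perform the Vault operations you need (the secret path is illustrative)
vault read secret/myapp/database
# then remove the locally cached token so it cannot be reused
rm -f ~/.vault-token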

Sunday, March 18, 2018

Oracle MySQL - test MySQL with Docker

First things first, I am totally against running any type of Docker container that will hold persistent data in any way or form, even to the point that I like to make the statement that mounting external storage to a container which will hold the persistent data is a bad thing. Some people will disagree with me; however, in the current state of Docker I am against it. Docker should run stateless services and should in no way be dependent on persistent data which is directly available (in any form) in the container itself. Having stated this, this post is about running databases in a container, while databases are one of the best examples of persistent storage.

The only exception I make to the statement of not having persistent storage in a container is volatile testing environments. If you have a testing environment you intend to use for only a couple of hours, using a container to serve a database is not a bad thing at all. What you need to remember is that if your container stops, all your data is gone.

Getting started with MySQL in Docker
To get started with MySQL in a Docker container you first have to pull it from the Docker registry. You can pull the official container image from Docker as shown in the example below which is done on Oracle Linux:

[root@docker ~]# docker pull mysql
Using default tag: latest
latest: Pulling from library/mysql
2a72cbf407d6: Pull complete 
38680a9b47a8: Pull complete 
4c732aa0eb1b: Pull complete 
c5317a34eddd: Pull complete 
f92be680366c: Pull complete 
e8ecd8bec5ab: Pull complete 
2a650284a6a8: Pull complete 
5b5108d08c6d: Pull complete 
beaff1261757: Pull complete 
c1a55c6375b5: Pull complete 
8181cde51c65: Pull complete 
Digest: sha256:691c55aabb3c4e3b89b953dd2f022f7ea845e5443954767d321d5f5fa394e28c
Status: Downloaded newer image for mysql:latest
[root@docker ~]# 

Now, this should give you the latest version of the MySQL container image. You can check this with the docker images command as shown below:

[root@docker ~]# docker images | grep mysql
mysql        latest       5195076672a7        4 days ago          371MB
[root@docker ~]#

Start MySQL in Docker
To start MySQL you can use the below command as an example. As you can see this is a somewhat more extended command than you might see on the Docker page for MySQL.

docker run --name testmysql -e MYSQL_ROOT_PASSWORD=verysecret -p 3306:3306 --rm -d mysql

What I have added in the above example is that I map the internal port 3306 to external port 3306. If you run multiple instances of MySQL you will need to change the external port numbers. I also added --rm to ensure the container is not persisted in any way or form as soon as you stop it.

After starting the container you should be able to find it with a docker ps command:

[root@docker ~]# docker ps |grep mysql
5d8f8bac45a1        mysql        "docker-entrypoint..."   8 minutes ago     Up 8 minutes        0.0.0.0:3306->3306/tcp  testmysql
[root@docker ~]# 
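
To verify the database is actually usable you can, for example, connect with the mysql client that ships inside the container; the password matches the MYSQL_ROOT_PASSWORD used when starting it:

docker exec -it testmysql mysql -uroot -pverysecret -e "SELECT VERSION();"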

Use databases in Docker?
As already stated, and actually the reason I wrote this post, you should not run anything in a container where you will need to have persistent storage available within the container itself. Databases are a good example of this. Based upon that statement you should not run a database in a container. Having stated that, if you can live with the fact you might lose all your data (for example in a quick test setup) there is nothing against running a database in a container.

Just make sure you don't do it with your production data (please....).