Wednesday, December 31, 2008

VirtualBox and virtual disks

When I shifted to a MacBook as my primary computer I needed to install all kinds of new software and find all kinds of new tools. Looking into new tools for a new platform can be fun. One of the choices I made was running Sun xVM VirtualBox as the virtualization platform to allow me to run Linux and Windows virtual machines on my Mac.

I have been working with VirtualBox for some time now and I have to say, I love the product. However, one thing struck me as strange and unexpected, although after giving it some thought it made a lot of sense. After using VirtualBox for a while and playing with the installation options I removed a couple of the VMs. Some time later it came to my attention that my disk was becoming quite full.

!! When you remove a virtual machine you do NOT remove the disk image !!

So when I looked in the VirtualBox VDI directory I noticed all the VDI files, each around 10 GB in size, were still there. When you create a virtual machine you first create a virtual disk, then you install the operating system on this disk. The disk is represented as a VDI file which has the size of the disk. So when you remove the VM you also have to remove the disk. Just something you have to know: clean up after yourself and remove the disks. This can be done via the Virtual Disk Manager.
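A quick way to spot leftover disk images is to list the .vdi files under a directory with their sizes; a minimal sketch (the directory path is whatever your setup uses, the actual removal is still done in the Virtual Disk Manager):

```shell
#!/bin/sh
# List all VirtualBox disk images under a directory with their sizes.
# Usage: list_vdis <directory>
list_vdis() {
    find "$1" -name '*.vdi' -exec du -h {} \;
}

# Example: list_vdis "$HOME/.VirtualBox/VDI"
```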

About VDI: VDI stands for Virtual Disk Image and is the native format VirtualBox uses. You can also use VMDK, the disk format from VMware, or VHD from Microsoft.

Sunday, December 28, 2008

Install Apache Tomcat on Ubuntu

Tomcat is an open source JSP and servlet container developed by the people at the Apache Software Foundation. I had abandoned Tomcat for years because I was not working on projects which required a standalone Java container; working with Oracle Application Server gives you enough. However, now I am starting to code some projects which require a Java container of their own, so it was a small step to get back to my old love, which I worked with for several years.

Because at this moment I will still be using Tomcat mainly for testing and coding while commuting between home and work, I decided to install it on my virtual Ubuntu installation on my MacBook. I found out that installing Tomcat on Ubuntu is quite easy. I found a quick howto online from which I followed most of the steps. Here is my fork of this howto.

1) Check your current Java version.
As Tomcat runs Java code and depends on Java, you will have to have Java on your system. Check your currently installed Java versions with the following command:

dpkg -l | grep sun

This should show at least the following packages installed: sun-java6-bin, sun-java6-jdk and sun-java6-jre. If not, you have to install Java on Ubuntu with the following command: sudo apt-get install sun-java6-jdk

Now when you check it again you should see the packages installed.

2) Install Tomcat
Installing Tomcat can be done by downloading it from the Tomcat website.
When you have downloaded Tomcat you have to unpack it and move it to /usr/local so that Tomcat ends up in /usr/local/tomcat/
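The unpack-and-move step can be sketched as below; the archive name apache-tomcat-x.y.z.tar.gz is a placeholder for whichever version you downloaded:

```shell
#!/bin/sh
# Unpack a downloaded Tomcat archive and move it into place.
# TOMCAT_TGZ and PREFIX are placeholders; adjust for your download and system.
TOMCAT_TGZ=${TOMCAT_TGZ:-apache-tomcat-x.y.z.tar.gz}
PREFIX=${PREFIX:-/usr/local}

install_tomcat() {
    # extract the archive, then move the versioned directory to $PREFIX/tomcat
    tar xzf "$TOMCAT_TGZ"
    mv "${TOMCAT_TGZ%.tar.gz}" "$PREFIX/tomcat"
}

# On a real system you would run: sudo sh -c 'install_tomcat' or use sudo mv.
```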

3) Setting JAVA_HOME
One of the requirements of Tomcat is that the variable JAVA_HOME is set on your system and that it points to the Java version you just installed. You can check this by executing env | grep JAVA_HOME, which should show JAVA_HOME and its value. If it is not set, or the value does not point to the correct Java version, you have to change this. Let's say JAVA_HOME is not set: edit .bashrc in your home directory (vi ~/.bashrc) and add the following at the end of the file: export JAVA_HOME=/usr/lib/jvm/java-6-sun
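The .bashrc edit can also be scripted; a small sketch that appends the export line only when one is not already present:

```shell
#!/bin/sh
# Append the JAVA_HOME export to a bashrc-style file if it is not already there.
# Usage: set_java_home <bashrc-file>
set_java_home() {
    if ! grep -q '^export JAVA_HOME=' "$1"; then
        echo 'export JAVA_HOME=/usr/lib/jvm/java-6-sun' >> "$1"
    fi
}

# Example: set_java_home ~/.bashrc && . ~/.bashrc
```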

4) Tomcat startup script
Now we would like Tomcat to start up when you start your machine and shut down cleanly when you shut down the machine. Create a file in /etc/init.d named tomcat and enter the following information:

#!/bin/sh
# Tomcat auto-start
# description: Auto-starts tomcat
# processname: tomcat
# pidfile: /var/run/

export JAVA_HOME=/usr/lib/jvm/java-6-sun

case $1 in
start)
        sh /usr/local/tomcat/bin/startup.sh
        ;;
stop)
        sh /usr/local/tomcat/bin/shutdown.sh
        ;;
restart)
        sh /usr/local/tomcat/bin/shutdown.sh
        sh /usr/local/tomcat/bin/startup.sh
        ;;
esac
exit 0

Now we have to make this script executable: sudo chmod 755 /etc/init.d/tomcat . Now that the script is executable we can run /etc/init.d/tomcat start to start Tomcat, /etc/init.d/tomcat stop to stop it and /etc/init.d/tomcat restart to stop and start it again. However, we would like to have it start and stop automatically. For this we create a start and a stop link by executing the following two commands:

sudo ln -s /etc/init.d/tomcat /etc/rc1.d/K99tomcat
sudo ln -s /etc/init.d/tomcat /etc/rc2.d/S99tomcat

Now when you open http://localhost:8080 you will see the Tomcat start page in your browser. All set and ready to go.

Monday, December 22, 2008

High Performance Computing with Penguin Computing

A couple of days ago someone pointed me to a company named Penguin Computing. Penguin Computing is one of the leading companies in HPC, High Performance Computing, and it has dedicated itself 100% to Linux. Penguin Computing is, according to its mission statement, a leader in Cluster Virtualization. One of the executives of Penguin Computing is Donald Becker, whom we know from creating Beowulf clustering while he was working at NASA.

"Penguin Computing is the leader in Cluster Virtualization, the most practical and cost-effective methodology for reducing the complexity and administrative burden of clustered computing. Our cluster solutions are driven by Scyld ClusterWare, whose unique architecture makes large pools of Linux servers appear and act like a single, consistent, virtual system, with a single point of command/control. By combining the economics of Open Source with the simplicity and manageability of Cluster Virtualization, we help you drive productivity up and cost out, making Linux clustering as powerful and easy to use as expensive SMP environments, at a Linux price point."

One of the things that Penguin Computing provides is a ready-to-use 'out of the box' cluster. You get it all: the software, the hardware and everything you need to get your cluster up and running. Even though this is not as much fun as building your own cluster with a team of people, it is more efficient I think. The thing holding the Penguin Computing solutions together is the Scyld ClusterWare Linux clustering software. Scyld ClusterWare is a set of tools to manage your cluster, or as they would like to call it, "Scyld ClusterWare HPC is an HPC cluster management solution".

The Scyld ClusterWare cluster is controlled by the master node, which hosts the scheduler, process migration, parallel libraries and the cluster management tools. Both cluster administrators and users connect to the master node. The scheduler on the master node enforces the cluster policies, so you can create rules such that, for example, work from a specific department has more priority in the cluster than work from another. Jobs with a higher priority are scheduled before work with a low priority. TORQUE is the main workload management tool used within Scyld ClusterWare as a basic scheduler; for more advanced scheduling requests there is Scyld TaskMaster, an adaptation of the Moab suite from Cluster Resources.
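As a sketch of how work reaches such a scheduler, a basic TORQUE job script could look like the one below; the job name, resource values and the -p priority setting are illustrative only, and run_simulation is a made-up program name:

```shell
#!/bin/sh
# Write a basic TORQUE/PBS job script; name, resources and priority
# value are illustrative only.
cat > nightly_sim.pbs <<'EOF'
#PBS -N nightly_sim
#PBS -l nodes=4:ppn=8
#PBS -p 512
cd "$PBS_O_WORKDIR"
./run_simulation
EOF

# Submit it from the master node with: qsub nightly_sim.pbs
```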

Provisioning your computing nodes is done via a network boot where the master image is loaded into the RAM of the computing nodes. There is no need for local disks running an operating system. Specific libraries and drivers needed by the computing nodes are provided by the master node on request. The computing nodes run a very lean operating system which is stripped of all unneeded options. Only services which are needed to communicate with the master node are provided, which leaves more room for the real work to be done. If we look at other cluster setups we have in most cases a 'stripped' Linux operating system with a lot of services running which are not really needed but are hard to remove from the system.

On the website of Penguin Computing they state that they can provision a new node with a computing node operating system within a minute, which makes my provisioning of Oracle Enterprise Linux look very slow. However, they are provisioning a very small operating system into RAM where I was provisioning a complete distribution to disk, so that makes some difference. By using a single image and quick provisioning they can make sure that all nodes in the cluster run the same OS, which is a good thing and makes the cluster even more stable.

One more thing from the website I would like to share with you:
"Scyld ClusterWare is fully compatible with RedHat Enterprise Linux, supporting a huge variety of applications from all HPC disciplines such as Mechanical Computer-Aided Engineering (MCAE), Life Sciences, Computational Fluid Dynamics, Financial Services, Energy Services and Electronic Design Automation (EDA). Application Notes for applications such as ANSYS®, FLUENT®, LS-Dyna®, Blast, Matlab® and Schrodinger® (Prime/Glide/Jaguar) are available for customers through Penguin Computing's Support Portal".

From what I read on the website Penguin Computing is doing a great thing and they have a great solution; however, I would love to play with the system for a couple of days to get to know more about how it all works. You can download a fully working Scyld ClusterWare for a test period of 45 days. Great, however, I do not have enough spare computers to build a test cluster. If Penguin Computing has a road show somewhere in Europe I might take a flight to talk to some of the people and have a look at the system in operation.

Saturday, December 20, 2008

Partition Decoupling Method

When working on complex systems and trying to map them you will find that the relations within your data can become very complex very fast. Complex data which is time-dependent and interrelated forms a maze of data and dependencies, which makes mapping it an almost impossible task.

A group of researchers from Dartmouth has developed a mathematical tool which can help understand complex data systems, like the votes of legislators over their careers, second-by-second activity of the stock market, or levels of oxygenated blood flow in the brain.

“With respect to the equities market we created a map that illustrated a generalized notion of sector and industry, as well as the interactions between them, reflecting the different levels of capital flow, among and between companies, industries, sectors, and so forth,” says Rockmore, the John G. Kemeny Parents Professor of Mathematics and a professor of computer science. “In fact, it is this idea of flow, be it capital, oxygenated blood, or political orientation, that we are capturing.”

Capturing patterns in this so-called 'flow' is important to understand the subtle interdependencies among the different components of a complex system. The researchers use the mathematics of a subject called spectral analysis, which is often used to model heat flow on different kinds of geometric surfaces, to analyze the network of correlations. This is combined with statistical learning tools to produce the Partition Decoupling Method (PDM). The PDM discovers regions where the flow circulates more than would be expected at random, collapsing these regions and then creating new networks of sectors as well as residual networks. The result effectively zooms in to obtain detailed analysis of the interrelations as well as zooms out to view the coarse-scale flow at a distance.
Source: press release

In a paper named "Topological structures in the equities market network", written by Gregory Leibon, Scott D. Pauls, Daniel Rockmore and Robert Savell, the Partition Decoupling Method is used to map the underlying structure of the equities market network.

"We present a new method for the decomposition of complex systems given a correlation network structure which yields scale-dependent geometric information — which in turn provides a multiscale decomposition of the underlying data elements. The PDM generalizes traditional multi-scalar clustering methods by exposing multiple partitions of clustered entities. "

More information can be found at:

Saturday, December 13, 2008

Cluster Computing Network Blueprint

Those who have watched the lecture by Aaron Kimball from Google, "cluster computing and mapreduce lecture 1", might have noticed that a big portion of the first part of the lecture is about networking and why networking is so important to distributed computing.

"Designing real distributed systems requires consideration of networking topology."

Let's say you are designing a Hadoop cluster for distributed computing for a company that will be processing lots and lots of information during the night to be able to use this information the next morning for daily business. The last thing you want is that the work is not completed during the night due to a networking problem. A failing switch can, in certain network setups, mean that you lose a large portion of your computing power. Thinking about the network setup and making a good network blueprint for your system is a vital part of creating a successful solution.

Take for example the previously mentioned company. This company works on chip research; during the day people make new designs and algorithms which need to be tested in the Hadoop cluster during the night. The next morning when people come in they expect to have the results of the jobs they placed in the Hadoop queue the previous day. The number of engineers in this company is enormous and the cluster has a utilization of around 98% during non-working hours and 80% during working hours. As you can see, a failure of the entire system or a reduction of the computing power will have an enormous impact on the daily work of all the engineers.

A closer look at this theoretical cluster.
- The cluster consists of 960 computing nodes.
- A node is a 1U server with 2 quad-core processors and 6 gigabytes of RAM.
- The nodes are placed in 19" racks; every rack houses 24 computing nodes.
- There are 40 racks with computing nodes.

As you can see, if we lose an entire rack due to a network failure we lose 2.5% of the computing power. As the cluster is utilized at 98% during the night, we would no longer have enough computing power to do all the work overnight. Losing a single node will not be a problem; however, losing a stack of nodes will result in a major problem the next day. For this we will have to create a network blueprint in which we can ensure that we will not lose an entire computing stack. When we talk about a computing stack we mean a rack of 24 servers in this example.
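The arithmetic above can be checked quickly:

```shell
#!/bin/sh
# Share of total capacity lost when one rack of 24 nodes drops out of 960.
rack_loss=$(awk 'BEGIN { printf "%.1f", 100 * 24 / 960 }')
echo "losing one rack costs ${rack_loss}% of capacity"

# With 98% of capacity needed at night, the remaining 97.5% is not enough.
if awk 'BEGIN { exit !(100 - 2.5 < 98) }'; then
    echo "the night-time workload no longer fits"
fi
```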

First we have to look at how we will connect the racks. If you look at the Cisco website you will find the Cisco Catalyst 3560 product page. The Cisco Catalyst 3560 has 4 fiber ports and we will be using those switches to connect the computing stacks. However, to ensure network redundancy we will use 2 switches for every stack instead of one. As you can see in the diagram below, we crosslink the switches. SW-B0 and SW-B1 will both handle computing stack B, switches SW-C0 and SW-C1 will handle the network load for computing stack C, and so on. We connect SW-B0 with fiber to SW-C0 and SW-C1, and we also connect SW-B1 with fiber to SW-C0 and SW-C1. In case SW-B0 or SW-B1 fails, the network can still route traffic to the switches in the B computing stack and also to the A computing stack. By creating the network in this way it will not fail to route traffic to the other stacks; the only thing that will happen is that the surviving switch will have to handle more load.

This setup will however not solve the problem that the nodes connected to the failing switch lose their network connection. To resolve this we attach every node to two switches. Every computing stack has 24 computing nodes, each switch has 48 ports and we have 2 switches. We place 2 network interfaces in every node: one will be in standby mode and one will be active. To spread the load, at all even numbered nodes the active NIC will be connected to switch 0, and at all uneven numbered nodes the active NIC will be connected to switch 1. For the inactive (standby) NIC, all the even numbered nodes will be connected to switch 1 and all uneven numbered nodes will be connected to switch 0. In a normal situation the load will be balanced between the two switches; in case one of the two switches fails, the standby NICs become active and all the network traffic to the nodes in the computing stack will be handled by the surviving switch.
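The even/odd NIC assignment can be sketched as a small helper; the switch names are illustrative:

```shell
#!/bin/sh
# For a node number, print the switch carrying its active NIC and the
# switch carrying its standby NIC (even nodes are active on switch 0).
nic_assignment() {
    if [ $(( $1 % 2 )) -eq 0 ]; then
        echo "node $1: active=switch0 standby=switch1"
    else
        echo "node $1: active=switch1 standby=switch0"
    fi
}

# Example for the first four nodes in a stack:
for n in 0 1 2 3; do nic_assignment "$n"; done
```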

To have the NICs switch over to the surviving network switch and to make sure that operations continue as normal, you have to make sure that the network keeps seeing the servers in the same way as before the switch failed. To do this you have to make sure that the new NIC has the same IP address and MAC address. For this you can make use of IPAT, IP Address Takeover.

"IP address takeover feature is available on many commercial clusters. This feature protects an installation against failures of the Network Interface Cards (NICs). In order to make this mechanism work, installations must have two NICs for each IP address assigned to a server. Both the NICs must be connected to the same physical network. One NIC is always active while the other is in a standby mode. The moment the system detects a problem with the main adapter, it immediately fails over to the standby NIC. Ongoing TCP/IP connections are not disturbed and as a result clients do not notice any downtime on the server. "

Now we have tackled almost every possible breakdown; however, it can happen that not one switch but both switches in a stack break. If we look at the examples above this would mean that the stacks are separated by the broken stack. To prevent this you have to make a connection between the first and the last stack, just as you do between all the other stacks. By doing so you turn your network into a ring. With a correct setup of all your switches and good routing and failover routes, your network can also handle the malfunction of a complete stack in combination with both switches in that stack.
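The difference between a line and a ring of stacks can be illustrated with a little arithmetic; the sketch below counts how many surviving stacks still form one connected network after a whole stack fails:

```shell
#!/bin/sh
# Stacks 0..N-1 connected in a line vs. a ring; if stack K fails, how many
# surviving stacks still form a single connected network?
surviving_connected() {  # $1 = topology (line|ring), $2 = N, $3 = failed stack K
    if [ "$1" = "ring" ]; then
        # a ring minus one stack is a path: all N-1 survivors stay connected
        echo $(( $2 - 1 ))
    else
        # a line splits into a segment of K stacks and one of N-1-K stacks;
        # only the larger fragment remains reachable as one network
        left=$3
        right=$(( $2 - 1 - $3 ))
        [ "$left" -gt "$right" ] && echo "$left" || echo "$right"
    fi
}

surviving_connected line 40 20   # roughly half the stacks stay in one network
surviving_connected ring 40 20   # all 39 survivors stay connected
```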

Even though this is a theoretical blueprint, developing your network in such a way, in combination with writing your own code to control network flows, scripts to control IPAT and the switching back of IPAT, and thinking about reporting and alerting mechanisms, will make a very solid network. If there are any questions about this networking blueprint for cluster computing please do send me an e-mail or post a comment. I will reply with an answer (good or bad) or explain things in more detail in a new post.

Friday, December 12, 2008

Fix your macbook

Ok, you bought a MacBook, iPhone or any other product from the Apple corporation. All looks good until suddenly something breaks. You have a couple of options: go to a Mac shop and claim that this is part of your warranty contract and that you would like to have it repaired.

Now comes the tricky part: it might be that you no longer have a warranty, or that you have to turn in your machine and you are not willing to. In those cases you still have a second option. You can look at which parts are broken, go to the iFixit website and order the parts. iFixit has a lot of Mac spare parts in stock. They also provide you with step-by-step instructions on how to change the parts that are broken.

Tuesday, December 09, 2008

Cluster Computing and MapReduce Lecture 1

The Google Code channel on YouTube has some lectures about Cluster Computing and Hadoop. I am currently viewing them all and I can say they are worth watching. The first one in the series is given by Aaron Kimball, "Problem solving on Large-Scale clusters".

Some quick nice quotes:

- Parallelization is "easy" if processing can be cleanly split into n units.
- Processing more data means using more machines at the same time.
- Cooperation between processes requires synchronization.
- Designing real distributed systems requires consideration of networking topology.

Aaron goes into the fundamentals, and in the upcoming lectures we will dive into more details. You can review the presentation here below. You can also find all the shows on the Google Code site along with some of the questions and answers. I will also post the other shows with some of my comments after I have viewed them.

Virtualbox and Oracle Enterprise Linux

I recently tried to get Oracle Enterprise Linux running in VirtualBox on my Mac. The installation of Oracle Enterprise Linux is no problem; however, when you try to do a first boot the system comes up and never passes beyond the point of:

Uncompressing Linux.... Ok, booting the kernel

The problem is that VirtualBox is incapable of handling some of the SMP kernel options when the kernel starts. Currently a bug is reported at Sun to fix the problem. A workaround for this problem is to boot the Enterprise-up (2.6.9- kernel instead of the Enterprise (2.6.9- kernel.

It is not the best workaround, but you can get an OEL system running in VirtualBox. If you would like the system to start with this kernel by default you have to edit the file /boot/grub/grub.conf and change default=0 to default=1. In my case entry 1 is the Enterprise-up (2.6.9- kernel which can boot under VirtualBox.
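The grub.conf change can also be scripted; a sketch that flips default=0 to default=1 in a given file (point it at /boot/grub/grub.conf on a real system, and keep a backup):

```shell
#!/bin/sh
# Change the default grub boot entry from 0 to 1 in the given config file.
# Usage: set_default_kernel /boot/grub/grub.conf
set_default_kernel() {
    sed -i 's/^default=0$/default=1/' "$1"
}
```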

Sunday, December 07, 2008

Oracle cloud computing

As a modern CIO in difficult times you will most likely have received the memo about cutting costs in the IT budget a couple of months after you received the memo about "we have to go green in the IT department". So there are a couple of options and ways to go about this. Most likely you have been considering the options of virtualization and clustering. I make the presumption that we are talking about an Oracle-minded company, so you will have looked into the possibilities of clustering and virtualization with Oracle VM and Oracle clustering.

Running more than one operating system on a single server, or making a large server by deploying a couple of small servers instead of buying a large expensive one. Now there is another alternative which can help cut IT budgets. Oracle has for some time now been working with Amazon on cloud computing. Amazon provides Amazon S3, the "Amazon Simple Storage Service", and Amazon EC2, the "Amazon Elastic Compute Cloud". I already pointed this out in a previous article about Oracle and Amazon teaming up, and I posted an article about Amazon EC2 and Python.

So it is possible to run your Oracle database and applications within the Amazon compute cloud and store information in the Amazon storage service. This means you can minimize your own hardware infrastructure while providing your users with the same, or even higher, level of service.

"Oracle customers can now license Oracle Database 11g, Oracle Fusion Middleware, and Oracle Enterprise Manager to run in the AWS cloud computing environment. Oracle customers can also use their existing software licenses on Amazon EC2 with no additional license fees. And for on-premise Oracle installations, AWS offers a dependable and secure off-site backup location that integrates seamlessly with Oracle RMAN tools."

On the RMAN part, Oracle has released the "Oracle Secure Backup Cloud Module". This module is an extension of the RMAN functionality: you are now able to back up your database to a storage cloud, for example Amazon S3. The nice part of this module is that you can back up a database that you are running in your own datacenter into the cloud. This means you encrypt and compress your backup and send it to the storage cloud instead of doing a backup to tape or disk. You send your data to Amazon automatically, just like you would do a normal backup.

The other part is that, if you run your database in the Amazon Elastic Compute Cloud, you can also back up your Oracle database to Amazon S3. When you run your database in your own datacenter bandwidth can be a bottleneck; even though the backup cloud module uses the 11g fast compressed backup feature, it can still be a lot of data. If you run your database at EC2 you will not have to worry about the bandwidth in your datacenter, because the bandwidth between the compute cloud and the storage cloud is handled by Amazon.

So when you are looking for ways to reduce datacenter costs it can be worthwhile to look into what Amazon is providing for Oracle customers at the moment.

Oracle XEN, VT-x and VT-i

When running virtual machines with Oracle VM you basically have 2 options: paravirtualization or hardware virtualization. When running hardware virtualization with Oracle VM you have a closer connection to the CPU; however, the CPU has to support running in hardware virtualization mode. When running in hardware virtualization mode all commands are sent to the CPU, whereas in a paravirtualization setup some commands are filtered.

When running for example Windows on a Xen or Oracle VM machine you will need a hardware platform that enables hardware virtualization. If not, you will not be able to run Windows virtually on your Oracle VM machine. Most current Intel processors provide VT-x or VT-i support; you can check the current processors on the Intel website and get more details there.
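On Linux you can check for these capabilities directly in /proc/cpuinfo; a sketch that looks for the Intel vmx flag (svm being the AMD equivalent):

```shell
#!/bin/sh
# Report whether cpuinfo-style text passed on stdin advertises hardware
# virtualization (Intel vmx or AMD svm flags).
has_hw_virt() {
    grep -E -q '(vmx|svm)' && echo "hardware virtualization supported" \
        || echo "not supported"
}

# Example on a live Linux system: has_hw_virt < /proc/cpuinfo
```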

VT-x is the code name for virtualization support for the IA-32 (Intel Architecture, 32-bit), often generically called x86 or x86-32 processors. VT-i is the code name for virtualization support for the IA-64 processors.

Some of the things the Intel VT-x and VT-i technology does to help Xen and Oracle VM are the following:

"Address-space compression. VT-x and VT-i provide two different techniques for solving address-space compression problems. With VT-x, every transition between guest software and the virtual machine monitor (VMM) can change the linear-address space, allowing the guest software full use of its own address space. The VMX transitions are managed by the virtual machine control structure (VMCS), which resides in the physical-address space, not the linear-address space. With VT-i, the VMM has a virtual-address bit that guest software cannot use. A VMM can conceal hardware support for this bit by intercepting guest calls to the processor abstraction layer (PAL) procedure that reports the number of implemented virtual-address bits. As a result, the guest will not expect to use this uppermost bit, allowing the VMM exclusive use of half of the virtual-address space."

"Ring-aliasing and ring compression. VT-x and VT-i eliminate ring-aliasing problems because they allow a VMM to run guest software at its intended privilege level. Instructions such as PUSH (of CS) and cannot reveal that software is running in a virtual machine. VT-x also eliminates ring compression problems that arise when a guest OS executes at the same privilege level as guest applications."

For more information about the processor instructions you can look at this Intel document. The IA-64 instruction is the equivalent of the x86 CALL instruction. Microsoft has an article on the MSDN website with more information about the IA-64 registers.

"Non-faulting accesses to privileged state. VT-x and VT-i avoid problems of non-faulting accesses to privileged state in two ways: by adding support that causes such accesses to transition to a VMM and by adding support that causes the state accessed to become unimportant to a VMM. A VMM based on VT-x does not require control of the guest privilege level, and the VMCS controls the disposition of interrupts and exceptions. Thus, it can allow its guest access to the GDT, IDT, LDT, and TSS. VT-x allows guest software running at privilege level 0 to use the instructions LGDT, LIDT, LLDT, LTR, SGDT, SIDT, SLDT, and STR. With VT-i, the thash instruction causes virtualization faults, giving a VMM the opportunity to conceal any modifications it may have made to the VHPT base address."

LGDT : Load Global Descriptor Table Register
LIDT : Load Interrupt Descriptor Table Register
LLDT : Load Local Descriptor Table Register
LTR : Load Task Register
SGDT : Store Global Descriptor Table Register
SIDT : Store Interrupt Descriptor Table Register
SLDT : Store Local Descriptor Table Register
STR : Store Task Register

"Guest transitions. Guest software cannot use the IA-32 instructions SYSENTER and SYSEXIT if the guest OS runs outside privilege level 0. With VT-x, a guest OS can run at privilege level 0, allowing use of these instructions. With VT-i, a VMM can use the virtualization-acceleration field in the VPD to indicate that guest software can read or write the interruption-control registers without invoking the VMM on each access. The VMM can establish the values of these registers before any virtual interruption is delivered and can revise them before the guest interruption handler returns."

SYSENTER: Executes a fast call to a level 0 system procedure or routine. This instruction is a companion instruction to the SYSEXIT instruction. The SYSENTER instruction is optimized to provide the maximum performance for system calls from user code running at privilege level 3 to operating system or executive procedures running at privilege level 0.

SYSEXIT: Executes a fast return to privilege level 3 user code. This instruction is a companion instruction to the SYSENTER instruction. The SYSEXIT instruction is optimized to provide the maximum performance for returns from system procedures executing at protections levels 0 to user procedures executing at protection level 3. This instruction must be executed from code executing at privilege level 0.

You can find the complete IA-32 Opcode Dictionary online or via Google.

"Interrupt virtualization. VT-x and VT-i both provide explicit support for the virtualization of interrupt masking. VT-x includes an external-interrupt exiting VMexecution control. When this control is set to 1, a VMM prevents guest control of interrupt masking without gaining control on every guest attempt to modify EFLAGS.IF. Similarly, VT-i includes a virtualization-acceleration field that prevents guest software from affecting interrupt masking and avoids making transitions to the VMM on every access to the PSR.i bit.

VT-x also includes an interrupt-window exiting VM-execution control. When this control is set to 1, a VM exit occurs whenever guest software is ready to receive interrupts. A VMM can set this control when it has a virtual interrupt to deliver to a guest. Similarly, VT-i includes a PAL service that a VMM can use to register that it has a virtual interrupt pending. When guest software is ready to receive such an interrupt, the service transfers control to the VMM via the new virtual external interrupt vector."

"Access to hidden state. VT-x includes in the guest-state area of the VMCS fields corresponding to CPU state not represented in any software-accessible register. The processor loads values from these VMCS fields on every VM entry and saves into them on every VM exit. This provides the support necessary for preserving this state while the VMM is running or when changing VMs."

Macbook and wifi

I recently purchased a new MacBook Pro, which resulted in an 'old' MacBook hanging around my desk unused for some time. As my girlfriend is still working on an old laptop I purchased a couple of years back, she was interested in using my old MacBook. One of the things she did not like, however, was its default operating system. She wanted to run Linux on it.

Installation of Ubuntu on a MacBook is very simple; just take the normal steps of installing Ubuntu. The wifi drivers, however, are a story in themselves. If you are installing Ubuntu on a MacBook, don't do what I did; just follow the guide on installing the ath9k driver and you will be ready to go. It is part of an Ubuntu howto on installing on a MacBook.

You can install the ath9k drivers, but you can also get wifi on a MacBook with Ubuntu working using madwifi or ndiswrapper. It is all in the guide, so just follow it and you will be able to run Ubuntu on a MacBook.

Monday, December 01, 2008

HOWTO Oracle descriptive flexfields

According to the Oracle manuals a descriptive flexfield is the following: "A flexfield is a placeholder set of fields that can be configured by customers for use by their organizations. Once configured, the customer-defined fields (label/widget pairs) may appear in either form or tabular layouts. There are two main types of flexfields: Descriptive flexfields, which are configured as a set of fields that are indistinguishable from core (default) application fields, and key flexfields, which consist of multiple segments for entry of codes, such as product serial numbers or bank account numbers."

In short, a descriptive flexfield gives you the possibility in Oracle E-Business Suite to extend the information that can be entered by default. For example, when you enter information about a product in the item master you can provide a number of default values. In some cases you would like to give users the possibility to enter additional information which is not set up by default by Oracle because it is very specific to your business. In this example we will add a couple of descriptive flexfields to the item table in Oracle E-Business Suite.

First of all we identify the screen where we want the flexfields to be shown, in this case this is the Master Item screen under the Inventory responsibility. Remember, if we create a flexfield for this screen it will show up under every responsibility so it will also be shown when you request this screen via, for example, the Order Management Super User responsibility.

Now that we know the screen, we switch to the Application Developer responsibility. Here we select the following from the menu: Flexfield - Descriptive - Segments. This will open the screen as shown below; we have to find the correct flexfield segment, in this case Application: Inventory and Title: Items.

Now that we have found the correct descriptive flexfield segments we can select segments; we can, for example, add the flexfield 'test' by simply adding a record. By setting the value set you can define the conditions to which the field is bound, for example only numbers, only characters, only 5 numbers, only 5 characters, and so on. There are a couple of pre-defined value sets, but you can also create your own if needed.

You can also open the record and set some extra options, such as whether the field is required, a default value for the flexfield, a range, and so on.
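The kind of conditions a value set enforces (only numbers, only a fixed number of characters, and so on) can be sketched in a few lines of Python. The value-set names and rules here are hypothetical illustrations, not Oracle's actual value-set definitions:

```python
# Hypothetical value-set checks, mimicking the kind of rules a
# descriptive flexfield value set can enforce on entered values.
def validate(value: str, value_set: str) -> bool:
    if value_set == "ONLY_NUMBERS":
        return value.isdigit()
    if value_set == "ONLY_CHARACTERS":
        return value.isalpha()
    if value_set == "FIVE_NUMBERS":
        return value.isdigit() and len(value) == 5
    if value_set == "FIVE_CHARACTERS":
        return value.isalpha() and len(value) == 5
    raise ValueError(f"unknown value set: {value_set}")

print(validate("12345", "FIVE_NUMBERS"))  # a 5-digit value passes
print(validate("12a45", "FIVE_NUMBERS"))  # mixed input is rejected
```

In E-Business Suite you would of course pick or define the value set in the Segments screen rather than code it; the sketch only shows what the validation boils down to.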

When you are done you can save your work and close the flexfield screens. What you have to remember is that you have to run a concurrent program to compile the new flexfields. You can run the concurrent request "Compile Descriptive Flexfields", where you can specify which flexfield you want to compile. If you have some more time you can also re-compile all descriptive flexfields by running the concurrent request "Compile All Flexfields", which needs no parameters. After one of those two completes successfully the flexfields are available for use. Remember, you can only create as many flexfields per database table as defined; a good indicator is looking at the table to see how many ATTRIBUTE(x) columns are defined.

Saturday, November 29, 2008

Oracle VM virtual RAC

Jakub Wartak released a guide on how to set up a RAC environment for Oracle using Oracle VM and Oracle Enterprise Linux. You can find the paper at the Oracle site, and it is a must read!

"A typical Oracle Real Application Clusters (RAC) deployment is an architecture that provides fast recovery from a single or multiple-node failures. But in a typical scenario, all nodes of Oracle RAC are located in a single data center and thus prone to catastrophic data center failure. The solution for achieving disaster recovery in this scenario is to set up Oracle DataGuard between the local data center and some backup data center where standby systems are running (typically a single Oracle database or another RAC cluster).

Although DataGuard plays this role very well, it turns the whole standby system(s) and array(s) into passive nodes—wherein computing power can't be used for transactions—and thus heavily increases the price of the solution. (Although standby Oracle DataGuard systems can be opened for read-only queries, and can even run in read-only mode all the time with Active DataGuard in Oracle Database 11g, in this configuration it requires applications to be aware of the read-only nature of some nodes.)

Fortunately there is another solution for achieving (partial) disaster recovery, called Extended RAC or Stretched RAC. In this architecture some of the RAC nodes work at “site Alpha”, and the rest of the nodes work at “site Beta”. Nodes at both sites are active, so all computing resources are fully utilized. As shown in Figure 1, each site has its own Storage Area Network (SAN); systems present at both data centers (dcA and dcB) are members of the same, single RAC cluster and thus must be able to exchange data over the interconnect very quickly as well as access the other site's storage. (That is, node RAC1 at dcA writes to SAN array at dcB and also communicates with RAC2 node at dcB)."

find subdomains of a domain

You might find yourself in a situation where you would like to know all the listed sub-domains of a domain. If you are the administrator of the DNS server you will not have much trouble finding this information; if you are not, you can have a hard time finding out the sub-domains of a domain.

There is a 'trick' you can use to make the domain server tell you what the sub-domains of a domain are. This 'trick' does not always work, however. If the DNS server you are talking to allows DNS zone transfers, and allows this information to be sent to any IP address that requests it, you are in luck. A DNS zone transfer is used to update slave DNS servers from the master DNS server.

When you operate several DNS servers you do not want to update them all by hand when you make a change. In an ideal situation you update the master server, and the slave servers send a request to the master server every so often to be updated. In some cases the administrator of the DNS servers has not set a limitation on who can request those updates. This means that you can also request an update.

A quick example: we take the domain of the Dutch meteorology institute. First we would like to know the authoritative nameserver of the domain, so we do a 'dig' at a Linux shell:

jlouwers$ dig

; <<>> DiG 9.4.2-P2 <<>>
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24089
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;                          IN A

;; AUTHORITY SECTION:
                  4521  IN SOA  2008112601 14400 1800 3600000 86400

;; Query time: 18 msec
;; SERVER:
;; WHEN: Sat Nov 29 12:32:55 2008
;; MSG SIZE rcvd: 78

jlouwers$

"dig (domain information groper) is a flexible tool for interrogating DNS name servers. It performs DNS lookups and displays the answers that are returned from the name server(s) that were queried. Most DNS administrators use dig to troubleshoot DNS problems because of its flexibility, ease of use and clarity of output. Other lookup tools tend to have less functionality than dig."

So now we know the authoritative nameserver of the domain (you can find it under ";; AUTHORITY SECTION:"), and we will ask it for all the subdomains listed for this domain. This can be done with a command like 'dig axfr'. With this command we request all records with an axfr query; axfr is the zone transfer command.

jlouwers$ dig axfr

; <<>> DiG 9.4.2-P2 <<>> axfr
; (1 server found)
;; global options: printcmd
                  86400 IN SOA   2008112601 14400 1800 3600000 86400
                  86400 IN MX    5
                  86400 IN MX    10
                  86400 IN NS
                  86400 IN NS
                  86400 IN NS
*                 86400 IN MX    5
*                 86400 IN MX    10
[... A and CNAME records for all listed subdomains, 95 records in total ...]
                  86400 IN SOA   2008112601 14400 1800 3600000 86400
;; Query time: 19 msec
;; WHEN: Sat Nov 29 12:39:04 2008
;; XFR size: 95 records (messages 1, bytes 2169)


This is how you can get a complete list of all subdomains listed on a domain server. However, this will only work in cases where the domain server allows you to request a zone transfer.
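If you capture the dig axfr output to a file, a small Python sketch can reduce the record dump to a unique list of subdomains. The sample records below are made up for illustration (using example.com); a real transfer prints one owner name at the start of each record line:

```python
# Extract the unique owner names (the subdomains) from captured
# 'dig <domain> axfr' output. Record lines have the form:
#   name  TTL  IN  TYPE  rdata
sample = """\
www.example.com.   86400 IN A     192.0.2.10
mail.example.com.  86400 IN MX    10 mx.example.com.
ftp.example.com.   86400 IN CNAME www.example.com.
www.example.com.   86400 IN A     192.0.2.11
"""

def subdomains(axfr_output: str) -> list[str]:
    names = set()
    for line in axfr_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[2] == "IN":
            names.add(fields[0].rstrip("."))
    return sorted(names)

print(subdomains(sample))  # ['ftp.example.com', 'mail.example.com', 'www.example.com']
```

The same filtering can of course be done with awk and sort -u; the point is only that the transfer output is trivially machine-readable once you have it.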

Friday, November 28, 2008

NDMP backup stalls

We are currently using a NetVault backup system in combination with a NetApp filer. For NetVault to communicate with the tape library and the tape drives to make the backup, ndmpd is used as a daemon. In some cases, however, those sessions will hang, though only in extreme cases. This is what I found in a help document; it can come in handy if you have this problem:

NDMP backup says "Writing to Media" even though it is not writing to media.
Affected NV Version: 7.4.x
OS Version: All
Plugin version: 6.3.x
Application version: N/A

Several jobs continually say "Writing to Media" even though they are not writing to media. If you look at the logs, it shows a "Channel Error" near the bottom of the log. In "Device Manager" the drive used for these backups says: DRIVE 1 (Locked by Session(Hard).


Most likely the NDMP sessions on the filer have entered a hung state. Issue the following commands at the console prompt of the filer:

ndmpd killall
ndmpd off
ndmpd on
ndmpd status (shows if all stray processes have been eliminated)

This should resolve the issue.

Thursday, November 27, 2008

Amazon EC2 and Python

Some time ago I wrote a post about the Amazon Elastic Compute Cloud (EC2) and how Amazon and Oracle were teaming up to provide more services to the users of EC2.

"Oracle announced that it will certify/support deployments of Oracle Database (all editions), Oracle Enterprise Linux, Oracle Enterprise Manager, and Oracle Fusion Middleware to Amazon Web Services' (AWS) Elastic Compute Cloud (EC2). In fact you may transfer your existing licenses to AWS if you like."

Today I found more information about EC2 and the way you can program Python code for it. Pete Skomoroch gave a talk at PyCon 2008 in Chicago. His talk can be found on YouTube (also see the embedded video below) and you can download his presentation slides from the PyCon website.

"Amazon EC2 may offer the possibility of high performance computing to programmers on a budget. Instead of building and maintaining a permanent Beowulf cluster, we can launch a cluster on-demand using Python and EC2. This talk will cover the basics involved in getting your own cluster running using Python, demonstrate how to run some large parallel computations using Python MPI wrappers, and show some initial results on cluster performance."

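The core idea from the talk, splitting one large computation across many on-demand nodes, can be sketched without any EC2 or MPI calls at all. This hypothetical example just partitions a work range the way a master process would hand chunks to cluster nodes:

```python
# Partition n_items work items over n_nodes as evenly as possible,
# the way a master would hand out chunks to EC2 cluster nodes.
def partition(n_items: int, n_nodes: int) -> list[range]:
    base, extra = divmod(n_items, n_nodes)
    chunks, start = [], 0
    for node in range(n_nodes):
        size = base + (1 if node < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

# Each "node" sums its own chunk; the master combines the results.
chunks = partition(1000, 4)
total = sum(sum(chunk) for chunk in chunks)
print(total)  # same answer as summing 0..999 on one machine: 499500
```

On a real EC2 cluster each chunk would be shipped to a worker instance (for example via the MPI wrappers the talk demonstrates) instead of being summed in the same process.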

Tuesday, November 18, 2008

Oracle EBS cross references

When running Oracle E-Business Suite for a company that handles goods, you can end up in a position where one product has more than one item code. In an ideal world you would always have a one-to-one coupling between a 'tangible' item and an item number. In some cases, for example during a transition phase between an old system and a new system, you might need a many-to-one situation.

In this example we will discuss migrating from SAP to Oracle E-Business Suite. In the scoping sessions the customer pointed out that he also wants to give all his products new item numbers. The problem is that the people at the order entry department are so used to the old numbers that they would like to keep them in the system for a period of a year, while the Product Data Management department stated they do not want to maintain every item twice. In this situation your best option is to make a list containing all the cross references of the old item numbers and link them to the new item numbers. You can even set an end date so the list becomes unavailable after a year.

By using cross references the Product Data Management department will not have to maintain old and new items in the system, and the Order Entry department will be able to use the old item numbers for a period of time. You can find the cross references list under the Order Management Super User responsibility; go to Items -> Cross References.

Here you can set a type, a description and an end date. The type is a free field where you can give the cross reference a meaningful short name, like 'OLD_ITEMNUMBER' in this example. The description can be something like 'The old item number of the old SAP system'. In the end date field you can set the date on which the cross reference becomes unavailable to users. If at some point after this end date there is still a need to use it, you can remove the end date and the functionality will become available again. When you have set this information you can click the Assign button; a new screen will open where you can assign inventory items to the list and provide the old item number for cross referencing.

Now when someone uses an old SAP number in Oracle it will be linked to the Oracle item number. When a SAP number is the same as a real Oracle item number, the user will be presented with a choice showing both the product with the real number and the product linked to the old SAP number.

Cross references can be found in the database in the table mtl_cross_references.
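The lookup behaviour described above can be illustrated with a small Python sketch. The item numbers and the end-date rule here are hypothetical examples, not data read from mtl_cross_references:

```python
from datetime import date

# Hypothetical cross-reference list: old SAP number -> Oracle item number,
# with an optional end date after which the reference is unavailable.
CROSS_REFS = {
    "SAP-10001": ("ORA-55001", date(2009, 12, 31)),
    "SAP-10002": ("ORA-55002", None),  # no end date: always available
}

def resolve(entered: str, today: date) -> str:
    """Return the Oracle item number for an entered (possibly old SAP) number."""
    ref = CROSS_REFS.get(entered)
    if ref is not None:
        oracle_item, end_date = ref
        if end_date is None or today <= end_date:
            return oracle_item
    # Not cross-referenced (or expired): treat the input as a real item number.
    return entered

print(resolve("SAP-10001", date(2009, 6, 1)))  # within the end date -> ORA-55001
print(resolve("SAP-10001", date(2010, 6, 1)))  # expired -> SAP-10001
```

This is exactly the behaviour the end date gives you in the cross references screen: after the date passes, the old number simply stops resolving.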

Monday, November 17, 2008

Oracle and Drop Shipment

Drop shipment is the process where you ask your supplier to ship goods to your end customer instead of shipping them from your own warehouse. Drop shipment can be the standard way your company operates because you do not own a warehouse, or it can be used when you do not have a requested item in stock or the item cannot be housed in your own warehouse.

For example, you might be doing business in appendages for offshore oil drilling. Most of the requested items on an order from a drilling platform can be sourced from your local warehouse; however, the drilling pipes are longer than your average shelf, so the decision is made that drilling pipes will be sourced directly from the supplier/factory. In those cases you can use a drop shipment method. In the figure below we can see the order, goods and invoice flow for ordering those drilling pipes with a drop shipment.

  • 1 The customer places a purchase order for x drilling pipes.
  • A The order desk receives the order and creates a drop shipment order; the requisition import is done and from the requisition a purchase order is created.
  • 2 After approval the purchase order is sent to the supplier via mail, e-mail, fax or EDI.
  • B The supplier receives the purchase order and handles it at their order desk as a normal order.
  • 3 The supplier ships the goods to your end customer.
  • 4 The supplier sends an invoice for the goods to you.
  • 5 An invoice is sent from you to the end customer.

With these steps the complete flow is done: your customer has received the goods directly and received an invoice for the goods from you. You have received an invoice from the supplier, and with that the goods flow and the financial flow are completed.

To automate this process you make use of "Requisition Import" and "Autocreate PO". By making sure your processes are aligned properly and all required data is in the Oracle E-Business Suite master, you can complete the process from creating an order to sending a purchase order to your supplier in a matter of minutes.

When importing your requisitions and generating your purchase orders you have the option to group on a couple of parameters. The most used is the "Supplier" parameter. This will group all your requisitions into a single purchase order per supplier. Per line, the quantity per ship-to address will be listed. This way you can, for example, send your supplier once a day a complete order containing all the required goods from that day's orders.
In short, when you have set up your drop shipment process correctly and implemented it well, this can save you lots of time and work in your day-to-day operations and will keep customer satisfaction at a higher level, because you can process orders even when you do not have the goods in stock but source them via Oracle E-Business Suite directly from your supplier's warehouse.
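The "group by supplier" behaviour of Requisition Import / Autocreate PO boils down to a simple grouping step. The requisition records below are hypothetical, and this is only the grouping logic, not Oracle's actual concurrent program:

```python
from collections import defaultdict

# Hypothetical requisition lines: (supplier, item, quantity, ship_to).
requisitions = [
    ("ACME",   "PIPE-12M", 10, "Platform A"),
    ("ACME",   "VALVE-3",   4, "Platform B"),
    ("GLOBEX", "PIPE-12M",  2, "Platform A"),
    ("ACME",   "PIPE-12M",  5, "Platform B"),
]

def autocreate_pos(lines):
    """Group requisition lines into one purchase order per supplier."""
    pos = defaultdict(list)
    for supplier, item, qty, ship_to in lines:
        pos[supplier].append((item, qty, ship_to))
    return dict(pos)

for supplier, po_lines in autocreate_pos(requisitions).items():
    print(supplier, po_lines)
```

Each supplier ends up with one purchase order containing all of that day's requisition lines, with the ship-to address kept per line, which is exactly what makes the once-a-day order to the supplier possible.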

Oracle demand planning with Demantra

When selling and/or producing goods, creating operational forecasts for replenishment can be a complex task. Some products are fast movers, others slow movers; some products are promoted during a certain period, and some are influenced by weather, season or the geographical region in which they are sold. Taking these and more factors into consideration, you get a multi-dimensional model which you have to maintain and feed with new and accurate data.

One of the key factors of a demand planning model is keeping it fed with the latest data of your company, so the model can detect new trends on top of old trends and come up with accurate predictions to help you plan your day-to-day business. To create such models and keep the new data coming in, Oracle has a rich set of demand planning models for you to use: seasonal, non-seasonal, and moving-average forecasting models. All those models are filled with your company data from Oracle E-Business Suite, so you do not have to worry about filling a separate data warehouse for your forecast models.
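Of the model families mentioned, the moving-average forecast is the simplest to illustrate. This is a generic textbook sketch with made-up demand figures, not Demantra's implementation:

```python
# Simple moving-average forecast: predict the next period's demand as
# the mean of the last `window` observed periods.
def moving_average_forecast(history: list[float], window: int = 3) -> float:
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(history[-window:]) / window

monthly_demand = [120, 135, 128, 140, 150, 146]
forecast = moving_average_forecast(monthly_demand, window=3)
print(forecast)  # mean of the last three months: (140 + 150 + 146) / 3
```

A seasonal model would additionally weight each period by a seasonality index, which is where fresh point-of-sale data becomes essential: without it, the model keeps predicting last year's seasons.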

A story by David Baum in Oracle's Profit magazine explains how this is done by "Jack of All Games".

Jack of All Games (JOAG), the largest video game distributor in the United States, reports success with its vendor-managed inventory (VMI) program. The VMI program came about after JOAG completed an intensive business intelligence initiative that involved collecting point-of-sale data from thousands of stores and analyzing data on which titles customers were buying. JOAG then approached retailers with the opportunity to share this information.

Eric Clark, vice president of business systems and technology at JOAG, says, "Retailers want us to keep their shelves filled, so our challenge is to determine the optimum quantity of each title based on the demand trends, local demographics, and a number of other constantly fluctuating variables."

JOAG recently installed Oracle's Demantra Demand Management, a software package that allows warehouse managers to sense demand in real time by capturing point-of-sale information from retailers. JOAG runs the information through Demantra's analytical engine and develops a fulfillment strategy for replenishing its retailers' stock.

Oracle has acquired Demantra, a leading global provider of demand-driven planning solutions.

Demantra is a best-in-class provider of demand management, sales and operations planning, and trade promotions management solutions. With this acquisition, Oracle plans to offer customers a compelling, comprehensive solution for the extended enterprise that enhances demand visibility with powerful analytics for more accurate demand-driven planning, forecasting, and modeling. With Demantra's proven demand chain solutions, and Oracle's leading technology infrastructure and existing ERP and supply chain applications, we plan to provide a seamless solution for the lean enterprise.

Handling hazardous goods with Oracle

Companies handling dangerous and/or hazardous goods are required to closely monitor those goods and keep up with legislation for purchasing, handling, storage and sales. To be able to correctly track those kinds of goods, Oracle E-Business Suite gives you the ability to assign a product a hazard class.

Several standards for classifying goods exist; however, the leading standard is maintained by the United Nations Committee of Experts on the Transport of Dangerous Goods. This standard is recognized by most countries and used to classify goods. The major exception is the classification used in the United States, which uses the NA (North America) numbering principle maintained by the United States Department of Transportation. Even though it is in principle a different standard, the numbering largely matches the UN numbering system. UN numbers, also referred to as UN IDs, are four-digit numbers that identify hazardous substances and articles (such as explosives, flammable liquids, toxic substances, etc.).

In Oracle E-Business Suite the UN number of an item can be entered and associated with the official hazard class. This enables users to identify items and hazard classes and to act upon them. Certain items are, for example, not allowed to be stored next to other items, or you are only allowed to stock a certain amount per warehouse. All of those rules can be set up and will require the numbering and classification as a basis. Oracle also provides methods to report on hazard classes, so you will be able to report on the amount of flammable stock in a warehouse or the amount of radioactive stock in a certain subinventory.

Even though Oracle directly refers to the UN number, you are not limited to the UN number system; you are completely free to create the logic and values behind it based on, for example, local legislation or company classifications. Handling dangerous and/or hazardous goods requires good and solid handling rules; Oracle E-Business Suite helps with a good out-of-the-box framework which enables you to correctly implement those rules within your organization.
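The UN-number-to-hazard-class association, and a storage rule on top of it, can be sketched like this. The two UN numbers shown are real IDs (UN 1203 is gasoline, UN 1830 is sulphuric acid), but the compatibility rule is a made-up example, not legislation:

```python
# UN number -> hazard class association, as it would be set up on the item.
HAZARD_CLASS = {
    "UN1203": "Class 3 - Flammable liquid",  # gasoline
    "UN1830": "Class 8 - Corrosive",         # sulphuric acid
}

# Made-up compatibility rule: flammables and corrosives
# may not share a storage locator.
INCOMPATIBLE = {("Class 3 - Flammable liquid", "Class 8 - Corrosive")}

def can_store_together(un_a: str, un_b: str) -> bool:
    a, b = HAZARD_CLASS[un_a], HAZARD_CLASS[un_b]
    return (a, b) not in INCOMPATIBLE and (b, a) not in INCOMPATIBLE

print(can_store_together("UN1203", "UN1830"))  # False: keep them apart
print(can_store_together("UN1203", "UN1203"))  # True
```

Whether the classes come from the UN list, the NA list or your own company scheme, the mechanics are the same: the item carries a number, the number maps to a class, and the rules act on the class.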

Wednesday, November 12, 2008

Photo stream NYC

I received some comments on a recent post about my trip to New York, in which I promised I would show you some pictures. Well, Joost Remme posted some of the pictures of the trip on his photo site, so you can take a look :-). I hope to receive some more pictures and will make sure you get a link to that location as soon as it is available.

The picture at the top of this post is of a very large M&M in Toys"R"Us. We were under the impression this was the biggest M&M we would ever see and that we would never see more M&M's in one room (on the other side are a couple of tubes with M&M's). However, 100 meters from the shop we found M&M's World... just see the pictures... that is all I can say.

So oTTo,... enjoy :-)