Tuesday, April 24, 2007

IBM reveals new virtual Linux environment

IBM has today announced the availability of an open beta version of its virtual Linux environment, which enables x86 Linux applications to run without modification on POWER processor-based IBM System p servers. Designed to reduce power, cooling and space requirements by consolidating x86 Linux workloads on System p servers, it will eventually be released under the rolls-off-the-tongue name 'IBM System p Application Virtual Environment (System p AVE).'

With a 31.5% global revenue share during 2006, IBM hopes to build on the UNIX success of System p and extend firmly into the Linux marketplace. Considering that almost 2,800 applications already run natively on Linux on System p servers, the chances are good that it will succeed. System p AVE will allow most x86 Linux binaries to run unmodified as well, which will expand the x86 workloads that can be moved to a System p server. Everyone wants to get more out of their investment in IT, and moving Linux workloads to virtual server environments that allow the consolidation of multiple servers onto a single platform is a great way of achieving that aim. But it also requires a systems approach that maximizes system resource utilization, manageability and flexibility while providing 'no excuses' reliability and scalability.

“System p customers have told me that technology that may have been 'good enough' for deploying one x86 server at a time is not 'good enough' when consolidating over 300 x86 servers spanning eight racks onto one rack of more powerful System p servers," Scott Handy, vice president, worldwide marketing and strategy, IBM System p, told DaniWeb. "These customers are choosing to trust System p products and our Advanced POWER Virtualization for those more mission-critical points of consolidation, and p AVE will expand the possibilities of what x86 workloads they can consolidate onto System p platforms to derive greater savings."

So what exactly is IBM System p AVE technology, in a nutshell? Initial testing shows that clients should be able to easily install and run a wide range of x86 Linux applications on System p and BladeCenter JS20 and JS21 servers that are running a Linux operating system. These applications should run without any change and without having to predefine them to the Linux on POWER operating system with p AVE installed. The system will "just know" that an application is a Linux x86 binary at runtime and automatically run it in a p AVE environment. Behind the scenes, p AVE creates a virtual x86 environment and file structure, and executes x86 Linux applications by dynamically translating and mapping x86 instructions and system calls to a POWER Architecture processor-based system. It uses caching to optimize performance, so an application's performance can actually increase the longer it runs. Using p AVE, IBM expects ISVs that don't already have a native Linux on POWER product to be able to expand their addressable market to System p servers at minimal cost, by allowing them to run their existing x86 Linux applications on these servers without having to recompile, release new media or documentation, or maintain a unique product offering for POWER technology.

IBM intends to leverage its successful Chiphopper program to help those ISVs support System p servers with the x86 Linux version of their application.

This story was originally written by Bill Andad of daniweb.com.

Monday, April 23, 2007

BakBone, no controlling server set

After installing a new client as a member of the backup group, we tried to start the NetVault software to connect to the backup server and the StorageTek L80 tape library. However, every time we tried to start the NetVault GUI we got the message 'No controlling server set'.

After some attempts we found out that you have to set the server name manually in the gui.cfg file. When you open the gui.cfg file you will find the following lines at the bottom of the file:



You will have to enter the server name manually after "names=". Remember, you have to enter the name and NOT the IP address of the server you want to connect to.
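The actual lines from gui.cfg did not survive in this post. As a rough sketch only (the section header and exact key layout are assumptions here and may differ between NetVault versions), the relevant part of the file looks something like this once the server name has been filled in:

```
[Servers]
names=backupserver01
```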

Oracle TOP otop

Before you can run otop.pl you must have an active connection to your Oracle database, which means you need Perl DBI and DBD::Oracle. The installation of those two modules is described in a previous post.

[root@pubjo root]# ./otop.pl
Can't locate Curses.pm in @INC (@INC contains: /usr/lib/perl5/i386-linux /usr/lib/perl5 /usr/lib/perl5/site_perl/i386-linux /usr/lib/perl5/site_perl /usr/lib/perl5/site_perl/5.8.0 /usr/lib/perl5/site_perl .) at ./otop.pl line 75.
BEGIN failed--compilation aborted at ./otop.pl line 75.

To fix this problem you will need to install the Perl module Curses. You can download the Curses module from CPAN.

1) Download the latest version of Curses
2) Unpack Curses
- [root@pubjo root]# gunzip Curses-1.15.tar.gz
- [root@pubjo root]# tar -xvf Curses-1.15.tar
3) Make/install Curses
- [root@pubjo root]# perl Makefile.PL FORMS
- [root@pubjo root]# make
- [root@pubjo root]# make install
4) Clean up the tar file and the temporary directory.

It could be that your installation fails with the following error message:

c-config.h:9:21: ncurses.h: No such file or directory
Curses.c:91: parse error before `WINDOW'
Curses.c:91: warning: data definition has no type or storage class
Curses.c:94: parse error before `{'
Curses.c:96: initializer element is not constant
Curses.c:98: parse error before `return'
Curses.c: In function `c_chtype2sv':
Curses.c:142: `ERR' undeclared (first use in this function)
Curses.c:142: (Each undeclared identifier is reported only once
Curses.c:142: for each function it appears in.)
Curses.c: At top level:
Curses.c:284: parse error before `*'
Curses.c: In function `c_sv2window':
Curses.c:290: `WINDOW' undeclared (first use in this function)
Curses.c:290: `ret' undeclared (first use in this function)
Curses.c:290: parse error before `)'
Curses.c: In function `c_window2sv':
Curses.c:303: parse error before `WINDOW'
In file included from Curses.c:358:
CursesFun.c: In function `XS_Curses_longname':
CursesFun.c:3125: warning: initialization makes pointer from integer without a cast
CursesFun.c: In function `XS_Curses_touchline':
CursesFun.c:3226: `WINDOW' undeclared (first use in this function)
CursesFun.c:3226: `stdscr' undeclared (first use in this function)
CursesFun.c:3227: parse error before `int'
CursesFun.c:3233: `ret' undeclared (first use in this function)
make: *** [Curses.o] Error 1
/usr/bin/make -- NOT OK

The line “c-config.h:9:21: ncurses.h: No such file or directory” is telling you that you have to install ncurses first. The ncurses (new curses) library is a free software emulation of curses in System V Release 4.0, and more. It uses the terminfo format, supports pads, color, multiple highlights, forms characters and function-key mapping, and has all the other SYSV-curses enhancements over BSD curses. You can find the ncurses library project site at GNU; to find the latest version of GNU ncurses, go to their FTP download site.

To install ncurses take the following steps:
1) Download the latest version of ncurses from ftp://ftp.gnu.org/pub/gnu/ncurses/
2) Unpack ncurses
- [root@pubjo root]# gunzip ncurses-5.6.tar.gz
- [root@pubjo root]# tar -xvf ncurses-5.6.tar
3) make/install ncurses
- [root@pubjo ncurses-5.6]# ./configure
- [root@pubjo ncurses-5.6]# make
- [root@pubjo ncurses-5.6]# make install
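After `make install` you may want a quick sanity check that the compiler can now see the ncurses header and library, before retrying the Perl Curses build. The snippet below is only a sketch: it assumes a working C compiler is available as `cc`, and it simply reports "found" or "missing" rather than failing:

```shell
# Try to compile a trivial program against ncurses and report the outcome.
if printf '#include <ncurses.h>\nint main(void){return 0;}\n' \
   | cc -x c - -o /dev/null -lncurses 2>/dev/null; then
  echo "ncurses found"
else
  echo "ncurses missing"
fi
```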

When you have successfully installed ncurses, you can try to install Perl Curses again. Besides installing Perl Curses by compiling it from the source code, you can also install it using the -MCPAN shell:

[root@pubjo root]# perl -MCPAN -e shell

This will get you into the cpan shell:

Terminal does not support AddHistory.

cpan shell -- CPAN exploration and modules installation (v1.7602)
ReadLine support available (try 'install Bundle::CPAN')


Here you will also be able to install the Perl Curses module by executing the following command:

cpan> install Curses::Forms

If you have successfully installed all those required components, you will be able to use otop and monitor your Oracle database.

Friday, April 20, 2007

Perl and Oracle

Many people who work with Oracle might want the ability to connect to the database with Perl; likewise, many Perl coders might want to be able to use an Oracle database.

To be able to use Perl in combination with an Oracle database you need, besides Perl and an Oracle database itself, Perl DBI and DBD::Oracle.

DBI is a database access module for the Perl programming language. It defines a set of methods, variables, and conventions that provide a consistent database interface, independent of the actual database being used. It is important to remember that DBI is just an interface: a layer of "glue" between an application and one or more database driver modules. It is the driver modules that do most of the real work; DBI provides a standard interface and framework for the drivers to operate within.

DBD::Oracle is a Perl module that works with the DBI module to provide access to Oracle databases.

There are two ways to install DBI and DBD::Oracle. You can use the 'perl -MCPAN -e shell' approach, or you can download the source and build from scratch. We will use the "build from scratch" method.

First we install DBI:

1) Make sure you are root!
2) Download DBI from CPAN
3) Use gunzip and tar to extract the archive:
gunzip DBI-1.48.tar.gz
tar -xvf DBI-1.48.tar

4) Make and install DBI:
perl Makefile.PL
make
make install
5) Clean up the downloaded files and the extracted directories.

Second, we will build DBD::Oracle:

1) Make sure you are root!
2) Download DBD::Oracle from CPAN
3) Use gunzip and tar to extract the archive:
gunzip DBD-Oracle-1.16.tar.gz
tar -xvf DBD-Oracle-1.16.tar
4) Make and install DBD::Oracle:

perl Makefile.PL
make
make install
5) Clean up the downloaded files and the extracted directories.

Now you should have a working Perl database interface and working Oracle drivers, which means we can run a test. You could, for example, use this Perl script:


use strict;
use warnings;
use DBI;

# Connect to the 'orcl' instance; replace the placeholder credentials
# with your own (DBI->connect takes the DSN, user, password, then an
# attribute hashref).
my $dbh = DBI->connect( 'dbi:Oracle:orcl', 'username', 'password',
    { RaiseError => 1, AutoCommit => 0 }
) || die "Database connection not made: $DBI::errstr";

my $sql = qq{ SELECT id, name, title, phone FROM employees };
my $sth = $dbh->prepare( $sql );
$sth->execute();
my( $id, $name, $title, $phone );
$sth->bind_columns( undef, \$id, \$name, \$title, \$phone );
while( $sth->fetch() ) {
    print "$name, $title, $phone\n";
}
$dbh->disconnect();


This script was originally written by Jeffrey William Baker; his website has some more details about the example.

Thursday, April 19, 2007

Microsoft and the UNIX like desktop.

Linux and UNIX users are already used to the concept of virtual desktops: having two or more desktop environments to work on. The big advantage is that you can run several projects on a single workstation and give each its own desktop. If you need to switch from private things to work things, you just select the work desktop and you will have all your windows and applications as you left them.

Or, for example, when you are installing and updating servers, have to wait a long time, and have all kinds of terminal screens open, you might want to do something else without closing or minimizing those windows. With a virtual desktop you can switch to a clean desktop and start working on something else. When you want to go back to your server update, you just switch back to that particular desktop.

Microsoft has also released a virtual desktop environment for Windows XP; however, not everything works the way you would expect if you are used to UNIX/Linux. Under UNIX/Linux, when you start an application on one desktop you will not be able to access it under another desktop. For example, when you start Firefox on desktop 2, you will not be able to browse the internet with the same instance of the browser on desktop 1.

Windows does not keep applications in a "desktop container": when you start an application, you can change and use it on all desktops. For UNIX/Linux users it can be frustrating to use the Microsoft virtual desktop solution; for Windows users who have never used UNIX/Linux, it might be a very handy tool.

Wednesday, April 18, 2007

Building Perl from scratch on Linux

Building Perl from scratch is always a good option and not really hard. You can download the source code from the Perl website, where you will be forwarded to an FTP mirror, for example a CPAN mirror at Funet in Finland. If you want the latest version, download latest.tar.gz.

Now take the following steps:
1) Extract the archive
- [root@pubjo root]# gunzip latest.tar.gz
- [root@pubjo root]# tar -xvf latest.tar
2) Run the configuration
- [root@pubjo root]# ./Configure -de -Dprefix=/usr -Dcccdlflags='-fPIC' -Dd_dosuid -Darchname=i386-linux -Dprivlib=/usr/lib/perl5 -Darchlib=/usr/lib/perl5/i386-linux -Dsitelib=/usr/lib/perl5/site_perl -Dsitearch=/usr/lib/perl5/site_perl/i386-linux

3) Make the installation:
- [root@pubjo root]# make
- [root@pubjo root]# make test
- [root@pubjo root]# make install

This is all there is to compiling Perl from the source code. If everything went without problems, you will now have a working version of Perl on your system. To check, you can execute the following command:

- [root@pubjo root]# perl -v

Friday, April 06, 2007

Nagios and Oracle

Running an Oracle database in a production environment can be a relief or it can be a hassle. You can make it a relief by monitoring your systems properly, so that you are notified before a problem becomes a big problem. This can make your life as an Oracle DBA or system administrator a lot easier.

You will also be able to guarantee higher business continuity, meaning that employees can continue working because the DBAs and system administrators have a better view of upcoming problems and can solve them before they become a threat to business continuity.

In this post I would like to point out three new Nagios Oracle scripts.

One checks the Oracle instance for the number of users, tablespaces and possible ORA messages; the results are sent to Nagios, so you can view them from your Nagios web application, have them mailed, or even have them sent to you via SMS. Thanks to Sven Dolderer for this script, written in Perl. You can download check_oracle_instance.pl from my website.

You can also download check_oracle_status.sh, a shell script written by latigid010@yahoo.com, and check_ora_table_space.pl, a Perl script written by Erwan Arzur.
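To wire such a script into Nagios you typically add a command definition and a service definition to your object configuration. The snippet below is only a sketch: the host name, the `$ARG1$` usage and the script's actual command-line options are assumptions here, so check the documentation that ships with check_oracle_instance.pl before copying it.

```
# Hypothetical command definition -- adjust the arguments to match the script.
define command {
    command_name    check_oracle_instance
    command_line    $USER1$/check_oracle_instance.pl $ARG1$
}

# Service that runs the check against a database host.
define service {
    use                  generic-service
    host_name            oradb01
    service_description  Oracle instance
    check_command        check_oracle_instance!orcl
}
```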

Thursday, April 05, 2007

Britney Spears and Semiconductor Physics

It is a little-known fact that Ms Spears is an expert in semiconductor physics. Not content with just singing and acting, in the following pages she will guide you through the fundamentals of the vital laser components that have made it possible to hear her super music in a digital format.

This is the introduction to a website devoted to semiconductor physics. How can you make a subject like that even more attractive? By using pictures of a babe to explain some of the details.

Wednesday, April 04, 2007

Interplanetary Supply Chain

If you think shipping freight from Cincinnati to El Paso is challenging, imagine trying to deliver an oxygen generation unit from the Earth to a remote location on the moon.

By 2020, NASA plans to establish a long-term human presence on the moon, potentially centered on an outpost to be built at the rim of the Shackleton crater near the lunar South Pole.

To make such a scenario possible, a reliable stream of consumables such as fuel, food and oxygen, spare parts and exploration equipment would have to make its way from the Earth to the moon as predictably as any Earth-based delivery system. Or more predictably: One missed shipment could have devastating consequences when you can't easily replenish essential supplies.

To figure out how to do that, MIT researchers Olivier L. de Weck, associate professor of aeronautics and astronautics and engineering systems, and David Simchi-Levi, professor of engineering systems and civil and environmental engineering, created SpaceNet, a software tool for modeling interplanetary supply chains. The latest version, SpaceNet 1.3, was released this month.

The system is based on a network of nodes on planetary surfaces, in stable orbits around the Earth, the moon or Mars, or at well-defined points in space where the gravitational forces of the two bodies (in this case, the Earth and the moon) cancel each other out. These nodes act as a source, point of consumption or transfer point for space exploration logistics.

"Increasingly, there is a realization that crewed space missions such as the International Space Station or the buildup of a lunar outpost should not be treated as isolated missions, but rather as an integrated supply chain," said de Weck. The International Space Station already relies on periodic visits by the space shuttle and automated, unpiloted Russian Progress re-supply vehicles.

While "supply chain" usually refers to the flow of goods and materials in and out of manufacturing facilities, distribution centers and retail stores, de Weck said that a well-designed interplanetary supply chain would operate on much the same principles, with certain complicating factors. Transportation delays could be significant--as much as six to nine months in the case of Mars--and shipping capacity will be very limited. This will require mission planners to make difficult trade-offs between competing demands for different types of supplies.

A reliable supply chain will "improve exploration capability and the quality of scientific results from the missions while minimizing transportation costs and reducing risks" to crew members, de Weck said.

SpaceNet evaluates the capability of vehicles to carry pressurized and unpressurized cargo; it simulates the flow of vehicles, crew and supply items through the trajectories of a space supply network, taking into account how much fuel and time are needed for single-sortie missions as well as multiyear campaigns in which an element or cargo shipment might have to be prepositioned by one set of vehicles or crew members while being used by another.

In addition to determining a logical route, SpaceNet also allows mission architects, planners, systems engineers and logisticians to focus on what will be needed to support crewed exploration missions.

To experience an environment as close as possible to harsh planetary conditions, MIT conducted an expedition to Devon Island in the Canadian arctic in 2005. The researchers established a semi-permanent shelter at the existing NASA-sponsored Haughton-Mars Research Station and compiled an inventory of materials at the base, including key items such as food, fuel, tools and scientific equipment, while carefully tracking inbound and outbound flights.

They also experimented with modern logistics technologies, such as radio frequency identification, that autonomously manage and track assets with the goal of creating a "smart exploration base" that could increase safety and save astronauts and explorers precious time.

SpaceNet 1.3 is written in MATLAB, a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis and numerical computation.

For more information on SpaceNet 1.3, go to spacelogistics.mit.edu.