Monday, January 26, 2009

Windows 7 beta on VirtualBox

Today the official Windows 7 beta became available for download from the Microsoft website, so now we can all have a look at the new Windows. What you most likely do not want, however, is to free up a computer for it. So to beta test Windows 7 it is ideal to run it in a virtual machine.

From personal experience I can tell you that you can test Windows 7 using VirtualBox. You might run into some issues when you try to get online: network cards might not work out of the box when running Windows 7 in VirtualBox. I have written a short guide on how to solve those issues.

A beta is still a beta, so not everything is working. Microsoft is providing some tips and tricks to work around some of the bigger bugs in Windows 7.

Here are some known issues (and some solutions):

- BitLocker Drive Encryption is not supported on drives that use the extended file allocation table (exFAT) file system. BitLocker Drive Encryption is a full disk encryption feature included with Microsoft's Windows Vista Ultimate, Windows Vista Enterprise, Windows Server 2008 and Windows 7 Ultimate Beta operating systems, designed to protect data by providing encryption for entire volumes. By default it uses the AES encryption algorithm in CBC mode with a 128-bit key, combined with the Elephant diffuser for additional disk-encryption-specific security not provided by AES.

- Some mobile broadband software does not install correctly and some devices are not detected or cannot connect to the Internet. For a current list of devices known to connect to the Internet with this beta release, as well as simple workarounds for other devices and software, please refer to http://go.microsoft.com/fwlink/?LinkID=139390.

A complete list of known defects can be found on the Microsoft website.


Hotel chain rolls out Oracle E-business suite

Crown Plaza Hotels group has completed the rollout of Oracle Financials, Purchasing and i-Procurement across its six hotels in Ireland.

System Dynamics, a member of Oracle PartnerNetwork, used Oracle Business Accelerators to deliver a tailored Oracle E-Business Suite implementation, which was rolled out over a five-month period.

“By using Oracle Business Accelerators, we were able to reduce the time and cost of the implementation, while tailoring the solution to the specific needs of Tifco and providing the right mix of industry functionality and best practices,” explained Tony McGuire, managing director, System Dynamics.



Linux: get BIOS information

In some cases you would like to have some information about your hardware while your system is up and running. You might, for example, wonder what version your BIOS is. You can see this when starting your PC, however it is not always desirable to reboot your system only to get this information. Under Linux you can use the dmidecode command to retrieve it. This command decodes the information available in the DMI table in a way that we humans can read.

Looking at the man page of dmidecode we can see all the information about the command. Here are some examples:

Read the information about your system type:
root@orcl11gdb:~# dmidecode -t system
# dmidecode 2.9
SMBIOS 2.5 present.

Handle 0x0001, DMI type 1, 27 bytes
System Information
        Manufacturer: innotek GmbH
        Product Name: VirtualBox
        Version: 1.2
        Serial Number: 0
        UUID: DB26C8D8-1B46-4F79-BA15-AF685C1A1650
        Wake-up Type: Power Switch
        SKU Number: Not Specified
        Family: Virtual Machine


Read the system manufacturer in a single string:

root@orcl11gdb:~# dmidecode -s system-manufacturer
innotek GmbH

Basically, the -s option gives you the information as a single string, where the -t option presents you with much more information at once. So when you are writing a script that needs some specific piece of information, for example as input for an installation script, you will use the -s option.

Valid string keywords are:
bios-vendor
bios-version
bios-release-date
system-manufacturer
system-product-name
system-version
system-serial-number
system-uuid
baseboard-manufacturer
baseboard-product-name
baseboard-version
baseboard-serial-number
baseboard-asset-tag
chassis-manufacturer
chassis-type
chassis-version
chassis-serial-number
chassis-asset-tag
processor-family
processor-manufacturer
processor-version
processor-frequency

The -t option takes some other keywords you can specify to retrieve your information:
bios
system
baseboard
chassis
processor
memory
cache
connector
slot

So, as you can see, this command can help you get a lot of information about the hardware you are working on. If you are looking into your hardware platform, for example to make an overview of your datacenter, you do not have to go into the datacenter itself to get all the information. You also do not need to wander around the office to collect it; you can simply use dmidecode as a starting point. There are other commands, some even better, however many of them are developed by hardware manufacturers specifically for their own hardware and also have a management and reporting side. If you are just looking for a good and quick Linux command, dmidecode is your tool of choice. A great example is this Perl script which can be found at http://www.hollenback.net/sysadmin/CheckProcs.txt .
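To give a feel for the scripting use of the -s option, here is a minimal sketch of an inventory helper. It assumes dmidecode is installed and that the script runs with enough privileges (normally root) to read the DMI table; the keyword selection is just a sample from the list above.

```shell
#!/bin/sh
# Hypothetical inventory helper: print a "keyword=value" line for a
# handful of dmidecode string keywords. Assumes dmidecode is in PATH
# and that we may read the DMI table (usually requires root).
dmi_summary() {
    for keyword in bios-vendor bios-version system-manufacturer \
                   system-product-name system-serial-number
    do
        # -s prints the value as a single string, ideal for scripting
        printf '%s=%s\n' "$keyword" "$(dmidecode -s "$keyword" 2>/dev/null)"
    done
}

dmi_summary
```

Run on the VirtualBox machine from the example above, the system-manufacturer line would read system-manufacturer=innotek GmbH.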

Sunday, January 25, 2009

Capgemini Oracle Fusion Experience Lab

One of the strategic areas for Capgemini is Oracle. Capgemini is already known for its expertise with SAP and Oracle, but one of the areas where it still wants to gain more market share is Oracle, and this is especially the case for outsourcing in the Netherlands. One of the reasons I made the decision to move to Capgemini Outsourcing in the Netherlands is that we will be focusing on getting more market share in the Oracle world in this region.

One of the things Capgemini is promoting is the Oracle Fusion Experience Lab. A colleague pointed out to me that there was a great video about it on YouTube and that it might be something I wanted to blog about... and yes, it is something I would like to blog about.

The Capgemini Oracle Fusion Experience (COFE) Lab offers advanced advisory and technology assistance to help organizations optimize current and future investments in Oracle Fusion Middleware.

The COFE Lab combines the industry experience and Service-Oriented Architecture (SOA) delivery capabilities of Capgemini, Oracle Corporation, and selected hardware network providers in one place. COFE Lab provides assistance to companies investing in Oracle Fusion Middleware technology. Organizations can access the full knowledge, experience and capabilities of COFE Lab to leverage existing application investments, improve business process efficiency, work smarter in everyday business transactions, adapt more quickly to market changes and competition, and plan effectively for innovative, flexible growth. For enterprises looking to expand their footprint across the globe with maximized Return On Investment (ROI), COFE Lab is the resource to tap.



Thursday, January 22, 2009

Oracle retrieve remote XML files

Oracle databases have the ability to read, store and process remote XML files; I have noticed that many developers of 'traditional' PL/SQL code are not aware of this. However, being able to read a remote XML file via the HTTP protocol opens up lots of possibilities and can make your life a lot easier. In some cases you do not have to develop all kinds of interfaces to have the information sent to a server to be processed and placed in the database. You can simply point at a location on a web server and retrieve an XML file. There are thousands of possible fields where this can be used; think about getting stock market information, scientific readings, machine information, research data, RSS feeds or, for example, weather data.

When you need information in your database and it is available in XML and can be accessed via HTTP, you can very simply read, parse and store this information in your Oracle database. As an example I will show you a project that reads weather data from the National Oceanic and Atmospheric Administration's servers in XML format into an Oracle database, where it was later used to tune weather models.

The National Oceanic and Atmospheric Administration (NOAA) provides the readings of its weather stations in XML format on its website. The request was to be able to read and store all the raw XML files in an Oracle database; the parsing part was done later. This can also serve as a very good example for, say, retrieving and storing stock market information, which can be done in a similar way. At a later stage the raw XML files in the database are processed.

First step is to make sure you have all the information about all the reading stations, for this we create a table to store the available static information:

CREATE TABLE noaa_weather_obsloc
(
  location_name        VARCHAR2(100) NOT NULL,
  location_code        VARCHAR2(5) NOT NULL,
  location_state       VARCHAR2(4) NOT NULL,
  location_detail_url  sys.uritype,
  location_xml_url     sys.uritype NOT NULL,
  location_rss_url     sys.uritype,
  location_latitude    VARCHAR2(10),
  location_longitude   VARCHAR2(10),
  CONSTRAINT noaa_weather_obsloc_pk PRIMARY KEY
  (
    location_code
  )
  ENABLE
);

You can see that all the columns in which we will store URLs are of type sys.uritype. URIType provides a means to store a URI in the database, rather than the document referenced by the URI. In the same way that the methods of the XMLType object can be used to insert or create XML document objects, methods of URIType objects can be used to insert or query URIs.

The next step is to insert data into the table, which is done for a single record as an example with the statement below. The latitude and longitude are intentionally left blank; we can fill in this information at a later stage because it will be in the XML file we will be retrieving.

INSERT INTO noaa_weather_obsloc VALUES
  ('Atka Airport',
   'PAAK',
   'AK',
   sys.UriFactory.getUri('http://weather.noaa.gov/weather/current/PAAK.html'),
   sys.UriFactory.getUri('http://www.weather.gov/data/current_obs/PAAK.xml'),
   sys.UriFactory.getUri('http://www.weather.gov/data/current_obs/PAAK.rss'),
   null,
   null
  );

As you can see, sys.UriFactory.getUri is used to insert the data into the URIType fields. There are also other methods, like sys.httpuritype.createHttpuri, however in this case I prefer to use the sys.UriFactory.getUri method. According to Oracle this is the difference in use: if the URI starts with http://, then getURI() creates and returns an HTTPUri. If the URI starts with either /oradb/ or /dburi/, then getURI() creates and returns a DBUri. Otherwise, getURI() creates and returns an XDBUri.

Now that we have the location data stored in a table, together with the remote location of the XML file, we can start retrieving the XML file to store it in the database. To store it, however, we first have to create a table which is capable of storing raw XML data in an XML format. We want to use XMLTYPE because it will allow us to directly query the XML file in the table when needed. I will not go into details on that in this post; in later posts I will come back to this subject.

CREATE TABLE NOAA_WEATHER_XMLDATA
(
  location_code     VARCHAR2(5) NOT NULL,
  store_timestamp   DATE,
  weather_xml_data  XMLTYPE
);

Now that we have the table noaa_weather_xmldata, we can start retrieving the XML data and storing it in this table. The example given below is just a manual insert for a single location. As you can imagine, you would like to develop a more flexible way of retrieving and storing the data: you have a list of locations you want to monitor in the table noaa_weather_obsloc and some time interval at which new data is retrieved. What I will not be handling in this blogpost is how you can automate this process; I will also not be handling the topic of how you can write some more interesting queries and functions to help you do that. I do however promise I will write some things about it in the (very) near future. So if you are interested, keep an eye on this blog, or if this post is already some days old, try searching this weblog.

insert into NOAA_WEATHER_XMLDATA values
(
'PAAK',
sysdate,
(select sys.xmltype.createXML(locs.location_xml_url.getClob()) from noaa_weather_obsloc locs where location_code = 'PAAK')
);

As you can see, I use a subquery to retrieve the remote XML file in combination with the data in the noaa_weather_obsloc table; sys.xmltype.createXML is used to create and store the XML file in the XMLTYPE column. By putting all those bits together you can now retrieve and store a remote XML file in your Oracle 11G database. As I already promised, I will come back in later posts on how I scheduled the retrieval and how to parse and extract XML data in an Oracle database.
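As a small taste of querying the stored XML directly, here is a hedged sketch. It assumes the NOAA document has a root element current_observation with a temp_f child; check the actual feed before relying on these element names.

-- Hypothetical example: read one value straight out of the stored XML.
-- The XPath assumes a /current_observation/temp_f element in the feed.
SELECT x.location_code,
       EXTRACTVALUE(x.weather_xml_data, '/current_observation/temp_f') AS temp_f
FROM   noaa_weather_xmldata x
WHERE  x.location_code = 'PAAK';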

Monday, January 19, 2009

Samba 4 and active directory

"Enterprise networks now have an alternative choice to Microsoft Active Directory (AD) servers, with the open source Samba project aiming for feature parity with the forthcoming release of version 4, according to Canberra-based Samba developer Andrew Bartlett.

Speaking at this year's linux.conf.au Linux and open source conference in Hobart, Bartlett said Samba 4 is aiming to be a replacement for AD by providing a free software implementation of Microsoft's custom protocols.

Because AD is "far more than LDAP and Kerberos", Bartlett said, Samba 4 is not only about developing with Microsoft's customisation of those protocols, it is also about moving the project beyond just providing an NT 4 compatible domain manager."

This is really good news for all those companies who are still depending on Windows servers to run their Active Directory infrastructure. It is also good news for all those developers who like to work on free software implementations of Microsoft's custom protocols. For those of you who are not familiar with Active Directory:

Active Directory is a directory service used to store information about the network resources across a domain and also centralize the network.

An 'Active Directory' structure is a hierarchical framework of objects. The objects fall into three broad categories: resources, services, and users (user accounts and groups). The AD provides information on the objects, organizes the objects, controls access and sets security.

Each object represents a single entity — whether a user, a computer, a printer, or a group — and its attributes. Certain objects can also be containers of other objects. An object is uniquely identified by its name and has a set of attributes — the characteristics and information that the object can contain — defined by a schema, which also determines the kind of objects that can be stored in the AD.

Each attribute object can be used in several different schema class objects. The schema object exists to allow the schema to be extended or modified when necessary. However, because each schema object is integral to the definition of AD objects, deactivating or changing these objects can have serious consequences because it will fundamentally change the structure of AD itself. A schema object, when altered, will automatically propagate through Active Directory, and once it is created it can only be deactivated — not deleted. Changing the schema usually requires a fair amount of planning.

Sunday, January 18, 2009

ORA-24247: network access denied by access control list (ACL)

When trying to access a remote XML file from an Oracle 11G database you might end up with an ORA-24247 error. The message will state something like "ORA-24247: network access denied by access control list (ACL)". For example, I tried to execute the following:

SELECT HTTPURITYPE('http://rss.slashdot.org/Slashdot/slashdot').getxml() orana_feed from dual;

The result of this was the error message shown below:

Error starting at line 1 in command:
SELECT HTTPURITYPE('http://rss.slashdot.org/Slashdot/slashdot').getxml() orana_feed from dual
Error report:
SQL Error: ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1674
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.HTTPURITYPE", line 34
ORA-06512: at "SYS.HTTPURITYPE", line 97
29273. 00000 - "HTTP request failed"
*Cause: The UTL_HTTP package failed to execute the HTTP request.
*Action: Use get_detailed_sqlerrm to check the detailed error message.
Fix the error and retry the HTTP request.

According to Metalink this is due to the following: "A new security measure is introduced in Oracle 11g Release 1 (11.1) for the following network-related PL/SQL packages: UTL_TCP, UTL_HTTP, UTL_SMTP, UTL_MAIL, and UTL_INADDR. The invoker of those packages needs additional privileges to connect to an external host or to resolve the name or the IP address of a host. The packages check the invoker for the necessary privileges only when the calls are made at runtime and raises an exception if the invoker lacks the privileges. This new security measure is implemented by the XML DB access control list (ACL) mechanism and, therefore, requires XML DB to be installed and Configure network Access Control Lists (ACLs) in the database before these packages can work as they were working in prior releases.".

So, to prevent every user from being able to connect from within the database to any external source by executing PL/SQL code, new security mechanisms are in place. For a user to be able to connect to an external source he has to be listed in an access control list as being granted access to a certain external location. For me to be able to access the RSS feed from rss.slashdot.org, I have to be granted access in an ACL. Oracle created the mechanism in such a way that you, as a DBA, are able to tell precisely which user has access to what. You can create multiple ACLs and assign those to roles and/or users. You can also state on which ports a user can connect. This way you have all the freedom to decide who can access what, and this prevents users in an Oracle 11G database from freely accessing remote sources.

The first step is to create an ACL; to do so we will make use of the dbms_network_acl_admin package and the create_acl procedure inside this package. As shown below, I create an access control list which I name rssslashdot.xml; as the first principal I enter my own username, and the privilege I get is connect. I could also have set a start and end date to limit the period in which I can use it. Let's say I could only use it in the first month of next year: I would be able to set this by using the start_date and end_date parameters.

dbms_network_acl_admin.create_acl (
  acl         => 'rssslashdot.xml',
  description => 'rss feed location for slashdot',
  principal   => 'JLOUWERS',
  is_grant    => true,
  privilege   => 'connect',
  start_date  => null,
  end_date    => null
);

Being able to connect is not enough; I also need to be able to resolve the hostname which I will enter later, so part of the script has to be that I grant the user JLOUWERS the resolve privilege. This can be done by using the add_privilege procedure in the dbms_network_acl_admin package:

DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(acl       => 'rssslashdot.xml',
                                     principal => 'JLOUWERS',
                                     is_grant  => true,
                                     privilege => 'resolve');

Now all we have to do is enter the information about what I can access. As the host I enter rss.slashdot.org, which enables everyone who is granted this ACL (the user JLOUWERS in this case) to connect to this host. I could also provide a port range in which I can operate. If you do not set anything you can use all ports, but from a security perspective it can be good to limit this to a range of ports; this can be done by using the lower_port and upper_port parameters.

DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(acl  => 'rssslashdot.xml',
                                  host => 'rss.slashdot.org');
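As a sketch of the port restriction just mentioned — assuming the feed is served over plain HTTP on port 80 — the same assignment could be limited like this:

BEGIN
  -- Hypothetical variant: restrict the ACL assignment to port 80 only
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(acl        => 'rssslashdot.xml',
                                    host       => 'rss.slashdot.org',
                                    lower_port => 80,
                                    upper_port => 80);
END;
/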

These are basically all the parts you need to create an ACL. If you put all this in a single script you will get something like the script below:

BEGIN
  dbms_network_acl_admin.create_acl (
    acl         => 'rssslashdot.xml',
    description => 'rss feed location for slashdot',
    principal   => 'JLOUWERS',
    is_grant    => true,
    privilege   => 'connect',
    start_date  => null,
    end_date    => null
  );

  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(acl       => 'rssslashdot.xml',
                                       principal => 'JLOUWERS',
                                       is_grant  => true,
                                       privilege => 'resolve');

  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(acl  => 'rssslashdot.xml',
                                    host => 'rss.slashdot.org');
END;
/
COMMIT;


To get more information about how these new functions work, a great resource is the "Oracle Database PL/SQL Packages and Types Reference 11g Release 1 (11.1)", which you can find on the Oracle website. When you need to check which user can access which remote location, the best way is to start looking in sys.dba_network_acl_privileges and sys.dba_network_acls.
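As a starting point, two simple dictionary queries along those lines (run as a DBA) might look like this; the column selection is only a suggestion:

-- Which hosts (and port ranges) are covered by which ACL?
SELECT host, lower_port, upper_port, acl
FROM   dba_network_acls;

-- Which principal holds which privilege in which ACL?
SELECT acl, principal, privilege, is_grant
FROM   dba_network_acl_privileges;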

Manually start an Oracle 11G instance

I just installed an Oracle 11G database on an Ubuntu Linux server. Oracle 11G is not certified to run on Ubuntu, however it can be done without any problem. Because it is not certified by Oracle you will never get support from Oracle, and I have to advise you never to run a production environment on it. Having it running as a development environment on your laptop is fine; this is the reason I am running it at the moment, as I now have the possibility to develop PL/SQL code on my laptop wherever I go.

Installing an Oracle 11G database on Ubuntu is no rocket science, however a guide can come in handy. So I have to praise Augusto Bott from the Pythian Group, who wrote a great paper on installing Oracle 11G databases on Ubuntu. When you start installing, please do refer to this paper and follow the steps and you will have a running database very quickly.

Almost at the end of the paper Augusto states the following: "Just remember: this script depends on the existence of a database on the machine, so don’t forget to run dbca as the oracle user, and configure it to start at boot time (edit /etc/oratab after creating the database, and set the last column to Y)." This is very true, however it might be fun NOT to do this and to learn a bit about how you can mount and start your database manually.

I installed my Oracle 11G database and created a database named JLO11G, so if we look at the file /etc/oratab we can see that the last line states my database name and the home of the JLO11G database. You can also see that it states N, which means that the database should not be started at boot time.

JLO11G:/u01/app/oracle/product/11.1.0/db_1:N
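If you manage several databases, a small helper like the following can list every SID registered in oratab. This is just a sketch: the ORATAB variable is only there so the path can be overridden, and it defaults to the usual /etc/oratab location.

```shell
#!/bin/sh
# Hypothetical helper: list the database SIDs registered in oratab.
# ORATAB defaults to the usual location but can be overridden.
ORATAB="${ORATAB:-/etc/oratab}"

list_sids() {
    # skip comment and blank lines, keep the first colon-separated field
    [ -f "$ORATAB" ] || return 0
    grep -v '^#' "$ORATAB" | grep -v '^[[:space:]]*$' | cut -d: -f1
}

list_sids
```

On the machine above this would print a single line, JLO11G.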

Now we have to start, mount and open the database by hand, which is normally done by the system when you set the last column to Y. First of all we have to set the ORACLE_SID as the user running the database; in my case this is the user oracle on my Linux box named orcl11gdb. To do so we enter the following command:

oracle@orcl11gdb:~$
oracle@orcl11gdb:~$ export ORACLE_SID=JLO11G

Now we have set the ORACLE_SID. An ORACLE_SID is the Oracle System ID, which is used to identify a particular database on the system. Under UNIX/Linux and Windows it is called an ORACLE_SID; on VMS systems it is called an ORA_SID. As we are talking about a Linux system, we talk about an ORACLE_SID. In case you installed your database a long time ago, would like to pick up your project again, but fail to remember your ORACLE_SID, you can find it in your /etc/oratab file. As we have set the ORACLE_SID and we are logged in as the user who runs the database software, we can now start the database with sqlplus. We will log in as sysdba because this provides us the correct rights to mount and start a database.

oracle@orcl11gdb:~$
oracle@orcl11gdb:~$ sqlplus '/as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Sun Jan 18 12:23:32 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to an idle instance.

SQL>

We are now connected to an idle instance, so we need to start the instance. To do so we will first start it without mounting the database.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 849530880 bytes
Fixed Size 1303216 bytes
Variable Size 377490768 bytes
Database Buffers 465567744 bytes
Redo Buffers 5169152 bytes
SQL>


Now we have a running instance and we can even query some information from it; we can for example query the status of the instance. In case you do not know what the status of an instance is, this can be very handy.

SQL> select status from v$instance;

STATUS
------------
STARTED

SQL>

In case you have a system running a lot of databases, you might find a statement like the one below very handy to get some information about what is running and in which state. In this case it is not very impressive, however it can come in handy:

SQL> select
2 instance_name,
3 version,
4 status
5 from
6 v$instance;

INSTANCE_NAME VERSION STATUS
---------------- ----------------- ------------
JLO11G 11.1.0.6.0 STARTED

SQL>

The next step is to mount the database so we can later make it available for users. To do so we enter the following command:

SQL> alter database mount;

Database altered.

SQL>

If we now query the status again we will see that the instance is now in the status mounted, and no longer in the status started. Again, a query that can come in handy if you are unsure of the current status of your instance.

SQL> select status from v$instance;

STATUS
------------
MOUNTED

SQL>

At this stage the database is mounted, however it is not in the status open. If the database is in the status open, users can connect to it and use it; in the status mounted it is closed and you (as a sysdba) can work on it. A mounted database gives you access to configuration tables and such, which can be very handy when you are recovering or working on solving a priority-one problem with the database. You will have access to the fixed tables and views, which are those in the data dictionary (Oracle's internal configuration tables). We do however want the database to be open for all users, so we have to alter the database so it will be open for normal use:

SQL> alter database open;

Database altered.

SQL>

Now the database is open for normal use and all users can connect to it and work on it normally. We can 'again' check the status and we will see that the database is open; this might be a good thing to do before informing your users that the database is available again (logging in as a normal database user is also a very good check ;-) ).

SQL> select status from v$instance;

STATUS
------------
OPEN

SQL>

So, in short, what you need to do to bring an instance back to life:

- startup nomount;
- alter database mount;
- alter database open;

And you can shut down the instance by entering a "shutdown immediate;" command, which will result in the following:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Wednesday, January 14, 2009

Linux and EXT4

It was already announced, and we could already find it in the latest Friday build of Ubuntu: EXT4 support under Ubuntu. EXT4 is a journaling file system; a journaling file system is a file system that logs changes to a journal (usually a circular log in a dedicated area) before committing them to the main file system. Such file systems are less likely to become corrupted in the event of a power failure or system crash.

Another great thing is that it can be really fast; you can read some benchmarks at phoronix.com and some nice information about testing at softpedia.com. Ubuntu booting in 21.4 seconds... that is very, very fast in my opinion. How much of this is due to the EXT4 filesystem is hard to say; I think there are also a lot of other things in Ubuntu 9.04 that can help.

However, the support of EXT4 under Linux is a good step. One of the great values of supporting EXT4 is that you can now work with very large volumes: EXT4 currently supports volumes of up to 1 EiB and files of up to 16 TiB. Another thing is the way EXT4 can 'reserve' space on the disk; this can come in handy when creating large files, for example virtual disks when running virtual machines, and streaming and databases also come to mind.

"The ext4 filesystem allows for pre-allocation of on-disk space for a file. The current methodology for this on most file systems is to write the file full of 0's to reserve the space when the file is created (although XFS has an ioctl to allow for true pre-allocation as well). This method would no longer be required for ext4; instead, a new preallocate() system call was added to the linux kernel for use by filesystems, including ext4 and XFS, that have this capability. The space allocated for files such as these would be guaranteed and would likely be contiguous. This has applications for media streaming and databases."

Besides this, EXT4 also tries in all kinds of ways to prevent fragmentation, and EXT4 has an online defragmenter. Even with the various techniques used to avoid it, a long-lived file system does tend to become fragmented over time. EXT4 will have a tool which can defragment individual files or entire file systems.

CapGemini, Gartner Magic Quadrants

A couple of posts ago I wrote about Gartner's Magic Quadrant and how Oracle placed itself in CPM; just read that blogpost to see what I am referring to.

It is nice to know that the offshoring model used by CapGemini is working and is recognized by Gartner in such a way that it deserves a place in the Magic Quadrant.

"In the report, Gartner uses its unique Magic Quadrant to evaluate the strengths of IT solutions providers operating in the European Offshore Application Services market. Providers are placed within the quadrant based on their ratings along two axes: their ability to execute and the completeness of their vision."

CapGemini uses a Rightshore model where not everything is moved to India: when a customer decides that he wants to outsource some or all of his IT activities, we will not move it to India directly. We will look at the location and the requests of the customer and find a country to 'outsource' to. For France this could be somewhere in Africa, because French is spoken in large parts of Africa as a second language. South America and Spain and Portugal are a good mix. India is not ruled out, but other options are possible as well; Gartner recognizes this and shares the vision that this is a great way to go.

"Gartner said: “Although some traditional providers are still trying to build critical mass in India, all providers must invest further in nearshore or possibly low-cost onshore European delivery centers. India is no longer the only delivery source.”

Therefore, in this regard, our pan-European presence and growing visibility in Latin America and Africa - which exemplifies our Rightshore model - are among our strongest differentiators."


Wednesday, January 07, 2009

Oracle's Hyperion in Gartner's Quadrant for Corporate Performance Management Suites

Once again Gartner positions Oracle in the top of its Magic Quadrant for Corporate Performance Management (CPM) Suites, according to a newswire article I have just been reading. This is not the first victory for Oracle with Oracle Business Intelligence Applications. In Q4 it already entered the Gartner Magic Quadrant, IDC ranked it #1 in Business Analytics, and Information Week stated that Oracle is the #1 preferred BI vendor.

A quote from Gartner: "To remain competitive in today's marketplace, companies need the ability to access accurate information across financial and operational functions quickly and seamlessly," said Thomas Kurian, senior vice president, Oracle Fusion Middleware. "We believe Oracle's position in the Leaders Quadrant in Gartner's Magic Quadrant for Corporate Performance Management Suites underscores Oracle's continued commitment to drive innovation and integration across its EPM system, while helping customers leverage their software investments in both Oracle and non-Oracle transaction systems and other data sources."

Sunday, January 04, 2009

Installing Windows 7 on VirtualBox

I just received a copy of Windows 7 to install and test on a MacBook using VirtualBox. Installing a pre-release Windows operating system in a virtual machine is always exciting and does not always work... however, within the blink of an eye I had a running Windows 7 VM on my MacBook. Interesting to note, the EULA states the following: "Time-Sensitive Software. The software will stop running on August 1, 2009. You may not receive any other notice. You may not be able to access data used with the software when it stops running." So a fixed end date is set for Windows 7 build 6965.winmain.081122-1150.

The first thing I noticed was that the initial screen is a black screen with a progress bar telling you Windows is copying files. When it reaches 100% it stalls for about a minute; I think it is still copying files and starting some processes. Another thing I noticed is that it takes a long time before it starts expanding the files, but the important thing is that it does expand them, so no problem here.

After a 'quick' installation I tried to install the VirtualBox Guest Additions, but this failed: the installer returned the message that the current version of Windows was not supported by the VirtualBox Guest Additions. As a result, my network interface, for example, was not working in the Windows 7 VM. The solution is simple: go to 'Computer Management', update the drivers for your network card manually, and select a location for Windows to look for the driver. The VirtualBox drivers will be installed and you will be able to use your network card.



So far I am still unable to get the screen resolution set to 1440x900, not even when I use the VirtualBox drivers. For a test I can live with this, but when Windows 7 goes into production I hope that the combination of Windows 7, VirtualBox and my MacBook will be supported. I do not think we can blame Microsoft for this; we will have to look into the drivers provided by VirtualBox.

I have missed the Windows Vista 'revolution', and from what I have heard Windows 7 is somewhat similar to Vista. For this reason it might happen that I describe things in Windows 7 as new even though they are already present in Windows Vista. It is my intention to write some blog posts in the coming days about my findings on Windows 7.

Oracle Database Server Tuning

A couple of days ago I was asked to have a look at one of the Linux servers running an Oracle database. The complaint was that the database was slow; query optimization had been done and the database had been tuned, so from an Oracle point of view everything should work correctly. The actual question was: 'do we have to add 2 more CPUs to the machine, or do you advise adding 4 more CPUs?'. Adding more CPUs to a server will, however, not always make your application run faster. To understand why, you need some understanding of the process scheduler. I will try to explain some of the details here.

The 'catch' with SMP (Symmetric Multi-Processing) servers is that while you can put more than one CPU in your server (and/or have multiple cores), and your UNIX server will be able to make use of all the processor cores, a single process can only run on a single core at a time. It is important to understand this in the SMP architecture. When running a UNIX server with Oracle on top (or any other software) you will have a lot of processes running, which will all claim an amount of processing time on a CPU. The scheduling mechanism will schedule processes to run on a CPU and will share the CPUs between them. You will have more runnable processes than the CPUs can handle at once, so the scheduler will put processes in wait queues and make sure everything gets its fair share of processing time. Well-known schedulers are the staircase scheduler, developed for a large part by Con Kolivas, and priority-based schedulers.

A scheduler assigns a state to each process; the general process states are the running state, the suspended (blocked) state and the ready-to-run (ready) state.

Running means that the process is executing on the CPU: its instructions are running on the CPU. The process will keep running on the CPU until it is done or it is 'removed' from the CPU because another process has to run and has priority over the running process. It can also happen that a process is 'removed' from the CPU because it has to wait for a resource, for example disk I/O. In that case the process leaves the CPU in a suspended state; when the resource becomes available (the data from the disk arrives) the process is returned to the ready-to-run state.
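The state transitions described above can be sketched in a small toy model. This is purely illustrative: the names READY, RUNNING and SUSPENDED and the simple round-robin policy are my own simplification, not how a real kernel scheduler is implemented.

```python
from collections import deque

READY, RUNNING, SUSPENDED = "ready", "running", "suspended"

class Process:
    def __init__(self, pid, work):
        self.pid = pid
        self.work = work          # time slices still needed on the CPU
        self.state = READY

def schedule(processes, timeslices):
    """Tiny round-robin scheduler: one CPU, one run queue."""
    run_queue = deque(processes)
    history = []
    for _ in range(timeslices):
        if not run_queue:
            break
        proc = run_queue.popleft()   # pick the next ready process
        proc.state = RUNNING
        proc.work -= 1               # it executes for one time slice
        history.append(proc.pid)
        if proc.work > 0:
            proc.state = READY       # preempted, back into the run queue
            run_queue.append(proc)
        # a finished process simply leaves the queue
    return history

# Two processes sharing a single CPU: they interleave, and each one
# only ever runs on that one CPU, one time slice at a time.
procs = [Process("oracle", 3), Process("vmstat", 2)]
print(schedule(procs, 10))   # ['oracle', 'vmstat', 'oracle', 'vmstat', 'oracle']
```

The point of the toy model is that no matter how much work a single process has, it never occupies more than one CPU at a time; only having multiple runnable processes lets an SMP box use all its cores.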

The ready-to-run state is a state where the process is placed in the run queue and is waiting for its turn to be placed on the CPU and execute its instructions. Each CPU has its own run queue. When the number of processes in the run queue grows, this is an indicator that the system is short on CPU power. Processes getting in and out of the run queue is normal; however, they should not 'stay' in the queue, because that indicates processes have to wait for processing time, and this slows your server down. The run queue is one of the most likely bottlenecks when your server's CPUs are unable to handle the load.
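A common rule of thumb (my own hedged sketch, not an official metric) is to look at the number of runnable processes per CPU: a value that stays well above 1.0 over time suggests processes are queueing for CPU time.

```python
def run_queue_pressure(runnable, ncpus):
    """Average runnable processes per CPU. Sustained values well
    above 1.0 suggest the CPUs cannot keep up with demand."""
    if ncpus < 1:
        raise ValueError("ncpus must be >= 1")
    return runnable / ncpus

# 8 runnable processes on a 2-CPU box: a pressure of 4.0 per CPU,
# so processes are clearly waiting in the run queue.
print(run_queue_pressure(8, 2))   # 4.0
```

Treat the threshold loosely: a short spike is harmless, and what matters is the average over a representative period.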

Monitoring the number of processes in the run queue is one thing; another thing you will have to monitor is the number of processes in the suspended state. A large number of processes waiting for, for example, data being read from the disks can indicate that your disks are slow. The same goes for network I/O, memory access, etc.

It is, however, also possible that your CPUs are not keeping up while your network speed is fine and your disks are fast; in that case you will see processes piling up in the run queue. You can use vmstat to check this; below is an example of vmstat output.



procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 0 586796 12932 266552 0 0 419 48 29 236 2 8 81 8
0 0 0 586648 12932 266552 0 0 0 0 13 123 0 3 98 0
0 0 0 586664 12932 266552 0 0 0 0 12 116 1 2 97 0
0 0 0 586664 12932 266552 0 0 0 0 12 115 1 2 98 0
0 0 0 586648 12932 266552 0 0 0 0 12 112 0 2 98 0
0 0 0 586664 12932 266552 0 0 0 0 12 115 1 2 97 0
0 0 0 586664 12932 266552 0 0 0 0 37 160 1 2 97 0
1 0 0 586664 12932 266600 0 0 0 0 97 347 2 5 93 0
4 1 0 585020 12984 267744 0 0 596 14 112 536 3 10 62 26
0 1 0 569168 13180 279640 0 0 6040 0 252 1448 6 23 0 71
0 1 0 553616 13308 287104 0 0 3772 6 243 891 14 27 0 60
0 0 0 542484 13384 288592 0 0 760 162 104 1117 16 31 34 19
1 0 0 542244 13396 288728 0 0 60 0 22 231 2 5 88 5
0 0 0 541888 13440 288876 0 0 50 250 54 257 2 6 88 5
0 0 0 539268 13456 288912 0 0 8 114 48 256 5 7 86 1


Important fields here are the first field, 'r', and the 'id' field. 'r' gives you the number of processes in the run queue. It is normal for a system to have processes in the run queue; there is no fixed standard for what is normal, but an average of around 4 processes can be quite OK. The second field, 'id', tells you how idle the system is: when the system is completely idle it will read 100, and when it is completely loaded it will read 0. Together these fields give you some insight into the load and the run queue on your system. If your system is really slow and the number of processes in the run queue is low, you can state that there is no use in adding more CPUs, because the CPUs would not have work to process. In that case you will have to look for the bottleneck elsewhere, but you can say with some degree of certainty that the CPUs are not the bottleneck and that adding more CPUs will not help. Adding faster CPUs will help process requests faster, so if you feel processes take too long to complete and there is no other bottleneck to solve (in I/O, for example), you can always add faster CPUs.
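To turn a captured vmstat log into averages for exactly these two fields, a small script like the following can help. It is a sketch: it assumes the plain vmstat layout shown above, where the second line holds the column names, and simply averages the 'r' and 'id' columns.

```python
def vmstat_averages(output):
    """Average the 'r' (run queue) and 'id' (idle %) columns of
    plain `vmstat <interval>` output."""
    lines = [l for l in output.strip().splitlines() if l.strip()]
    header = lines[1].split()            # second line holds the column names
    r_col, id_col = header.index("r"), header.index("id")
    rows = [l.split() for l in lines[2:]]
    avg_r = sum(int(row[r_col]) for row in rows) / len(rows)
    avg_id = sum(int(row[id_col]) for row in rows) / len(rows)
    return avg_r, avg_id

sample = """\
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b  swpd   free   buff  cache  si so  bi  bo  in  cs us sy id wa
 2  0     0 586796 12932 266552   0  0 419  48  29 236  2  8 81  8
 0  0     0 586648 12932 266552   0  0   0   0  13 123  0  3 98  0
"""
print(vmstat_averages(sample))   # (1.0, 89.5)
```

Feed it a log collected over a longer period and the averages become much more meaningful than a handful of one-second samples.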

In the example above I executed the command 'vmstat -n 1' to make vmstat show the stats every second until I stopped the process. To get a good average, however, you should run this for a longer period, like a couple of weeks. The data should then be analyzed to get a good view of the overall average and the averages in certain periods of the day and/or week. Even though vmstat provides very good insight, you might want to look at other tools or scripts if you are planning a longer period of analysis of the system. Oracle provides some scripts for this; you might want to take a look at Metalink notes 301137.1 and 466966.1.

In conclusion: as you can see, adding more CPUs to a server will not always make the application run faster. In some situations you have to remove the bottleneck elsewhere, for example by adding faster memory, faster disks or a faster NIC, or by looking at your storage systems to get your database running faster.