Monday, September 24, 2007

Bloom filter stuff

A Bloom filter is a space-efficient probabilistic data structure used to test whether an element is a member of a set. False positives are possible, but false negatives are not. The Bloom filter was conceived by Burton H. Bloom in 1970.

"Is X in Y?"-style membership tests can be quite resource intensive; code full of such comparisons can run for days when you execute it against large datasets. Bloom filters can be a big relief and save you a lot of resources and time.

In Perl, for example, this is a lookup hash, a handy idiom for doing existence tests:

my %lookup;
foreach my $e ( @things ) { $lookup{$e}++ }

sub check {
    my ( $key ) = @_;
    print "Found $key!" if exists( $lookup{ $key } );
}

When running this against a small set of data, and in a situation where time is not a big issue, this will work fine. However, if one or possibly both are against you, you might want to use a Bloom filter. In Perl this would look something like this:

use Bloom::Filter;

my $filter = Bloom::Filter->new( error_rate => 0.01, capacity => $SONG_COUNT );
open my $fh, "<", "enormous_list_of_titles.txt" or die "Failed to open: $!";

while (<$fh>) {
    chomp;
    $filter->add( $_ );
}

sub lookup_song {
    my ( $title ) = @_;
    return unless $filter->check( $title );        # definitely not present, skip the expensive lookup
    return expensive_db_query( $title ) || undef;  # possibly present, confirm against the database
}


An empty Bloom filter is a bit array of m bits, all set to 0. There must also be k different hash functions defined, each of which maps a key value to one of the m array positions.

For a good hash function with a wide output, there should be little if any correlation between different bit-fields of such a hash, so this type of hash can be used to generate multiple "different" hash functions by slicing its output into multiple bit fields. Alternatively, one can pass k different initial values (such as 0, 1, ..., k-1) to a hash function that takes an initial value; or add (or append) these values to the key.

For larger m and/or k, independence among the hash functions can be relaxed with negligible increase in false positive rate (Dillinger & Manolios (2004a), Kirsch & Mitzenmacher (2006)). Specifically, Dillinger & Manolios (2004b) show the effectiveness of using enhanced double hashing or triple hashing, variants of double hashing, to derive the k indices using simple arithmetic on two or three indices computed with independent hash functions.
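As a concrete illustration, here is a small Perl sketch of that idea (my own example, not the internals of Bloom::Filter or of the cited papers): an MD5 digest is unpacked into 32-bit words, and two of those words are combined with double hashing to derive the k array positions.

use Digest::MD5 qw(md5);

sub k_indices {
    my ( $key, $k, $m ) = @_;

    # md5() returns a 16-byte digest; slice it into four 32-bit words
    my @words = unpack( "N4", md5( $key ) );

    # Double hashing: index_i = ( h1 + i * h2 ) mod m, for i = 0 .. k-1
    my ( $h1, $h2 ) = @words[ 0, 1 ];
    return map { ( $h1 + $_ * $h2 ) % $m } 0 .. $k - 1;
}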

To add an element, feed it to each of the k hash functions to get k array positions. Set the bits at all these positions to 1.
To query for an element (test whether it is in the set), feed it to each of the k hash functions to get k array positions. If any of the bits at these positions are 0, the element is not in the set – if it were, then all the bits would have been set to 1 when it was inserted. If all are 1, then either the element is in the set, or the bits have been set to 1 during the insertion of other elements.
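A minimal sketch of these two operations in Perl, assuming the k_indices() helper above and using a plain Perl bit string (via vec) in place of the m-bit array:

my $m    = 8 * 1024 * 1024;        # m: number of bits in the filter
my $k    = 4;                      # k: number of hash functions
my $bits = "\0" x ( $m / 8 );      # all m bits start out at 0

sub bloom_add {
    my ( $key ) = @_;
    vec( $bits, $_, 1 ) = 1 for k_indices( $key, $k, $m );
}

sub bloom_check {
    my ( $key ) = @_;
    for ( k_indices( $key, $k, $m ) ) {
        return 0 unless vec( $bits, $_, 1 );   # any 0 bit: definitely not in the set
    }
    return 1;                                  # all bits 1: probably in the set
}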

Unfortunately, removing an element from this simple Bloom filter is impossible. The element maps to k bits, and although setting any one of these k bits to zero suffices to remove it, this has the side effect of removing any other elements that map onto that bit, and we have no way of determining whether any such elements have been added. The result is a possibility of false negatives, which are not allowed.

Removal of an element from a Bloom filter can be simulated by having a second Bloom filter that contains items that have been removed. However, false positives in the second filter become false negatives in the composite filter, which are not permitted. This approach also limits the semantics of removal since adding a previously removed item is not possible.
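A rough sketch of that composite approach, reusing the Bloom::Filter module from the earlier example (only new(), add() and check() are taken from that example; the rest is illustrative):

my $present = Bloom::Filter->new( error_rate => 0.01, capacity => $SONG_COUNT );
my $removed = Bloom::Filter->new( error_rate => 0.01, capacity => $SONG_COUNT );

sub remove_song { $removed->add( $_[0] ) }

# A song counts as present only if the first filter says yes and the
# removal filter says no; a false positive in $removed therefore shows
# up as a false negative for the composite filter.
sub has_song { $present->check( $_[0] ) && !$removed->check( $_[0] ) }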

It is, however, often the case that all the keys that were added are still available but are expensive to enumerate (for example, requiring many disk reads). In that situation, when the false positive rate gets too high, the filter can simply be regenerated from scratch; this should be a relatively rare event.


Bloom filter in:
- Perl
- C/C++
- Ruby
- Java

Monday, September 17, 2007

Create Oracle Applications accounts from the database.


When working as a consultant on different systems, you will from time to time find yourself in the situation that you have an account for the Oracle APPS schema but no Oracle Applications login.

There are a couple of things you can do in this situation: (1) ask a DBA to create an account for you, or (2) create the account yourself by running a PL/SQL script. Running the following PL/SQL script is in most cases NOT a smart thing to do, because you circumvent the customer's security procedures. However, when you are working on a test environment in your own company, the rules can be somewhat less strict, and you can run it without any problems.

Please note that before you run this script you might want to change some things to your own taste, so the script generates a user account with the username you prefer. In any case, both the APPLICATION_DEVELOPER and the SYSTEM_ADMINISTRATOR responsibilities are assigned.

Download
oracle_create_applications_account.sql

Thursday, September 13, 2007

Tuning a custom Oracle application.

In some cases it is possible that, after you have created your custom Oracle application, the performance is not what you and your user community had in mind. You could start debugging immediately and dive straight into your code; most likely you already have an idea of which operations take the most time, and of which operations the users feel take the most time.

It is, however, wise to do some research first. The first thing you will most likely want to know is which packages and tables are used the most in your application. For this you can use the following queries:


This query reports the top 10 tables on which you do inserts, selects, updates or deletes, or possibly take locks. This can be a good starting point: it could be wise, for example, to create an index, or to check whether the table contains a lot of “junk” data that could be moved to a history table. By keeping the data in your table limited you can gain some speed in your application, because a possible full table scan has less data to look through. oracle_top_10_tables.sql


After you have created an overview of the most heavily used tables, and perhaps after you have tuned some of them, it is wise to take a look at which procedures are used the most. By tuning those you might also gain some performance. This query looks for the most heavily used functions, packages, package bodies, procedures and triggers. oracle_top_10_procedures.sql

With this information you can start looking into your code to see where you might expect to gain some performance. Before you do so, you might want to know which SQL statements are used most intensively.

The following query gives you the top 10 SQL statements by buffer gets: oracle_top_statements_by_buffer_gets.sql
And this one gives the top 10 SQL statements by the number of disk reads: oracle_top_statements_by_disk_reads.sql

With this information you have a good starting point for tuning your custom-made Oracle application.


Wednesday, September 12, 2007

Oracle XML Publisher and the XDODTEXE executable.

With XML Publisher you can easily create reports and documents that can be started from a concurrent request within Oracle Applications. To enable your concurrent request to do so, you need the executable XDODTEXE, a Java concurrent program, as the executable of your concurrent program.

It can happen, however, that this executable is not available; there is a thread on the Oracle forums where this is discussed. The reason is most likely that you have not applied the installation of XML Publisher 5.6.0. Even though you might already be on version 5.6.3, you first have to have installed version 5.6.0. In an ideal situation Oracle would warn you to install 5.6.0 first, because only this version contains an installation of XDODTEXE; however, this does not happen.

If you are missing XDODTEXE as an executable, you can check the installed versions with this query:

SELECT
  DECODE( bug_number
        , '3554613', '4.5.0'
        , '3263588', 'XDO.H'
        , '3822219', '5.0.0'
        , '4236958', '5.0.1'
        , '4206181', '5.5.0'
        , '4561451', '5.6.0'
        , '4905678', '5.6.1'
        , '5097966', '5.6.2'
        , '5472959', '5.6.3' ) AS xml_publisher_version
FROM
  ad_bugs
WHERE
  bug_number IN ( '3554613'
                , '3263588'
                , '3822219'
                , '4236958'
                , '4206181'
                , '4561451'
                , '4905678'
                , '5097966'
                , '5472959' );

If 5.6.0 is not in the list, even if a higher version is, you need to install 5.6.0 and then apply all the later patches again on your system. This makes sure you have XDODTEXE available and that you are back on the correct patch level.

Tuesday, September 11, 2007

Spawn a concurrent request from PL/SQL

When developing PL/SQL code for Oracle Applications, you will sometimes want to start concurrent requests directly from a PL/SQL package. For this Oracle provides fnd_request.submit_request; fnd_request is a package in the APPS schema.


Please note the example below where we use fnd_request.submit_request to start the concurrent request OEOIMP.

v_request_id := fnd_request.submit_request( 'ONT', 'OEOIMP', 'Order Import', '', FALSE,
                                            '', '', '', 'N', '1', '4', '', '', '', 'Y', 'N', CHR(0) );
COMMIT;

IF v_request_id > 0
THEN
  FND_FILE.PUT_LINE( FND_FILE.OUTPUT, 'DEBUG, Successfully submitted' );
ELSE
  FND_FILE.PUT_LINE( FND_FILE.OUTPUT, 'DEBUG, Not Submitted' );
END IF;

The parameters needed to start submit_request are the following:

- Application (varchar2) (shortname of the application)
- Program (varchar2) (shortname of the concurrent request)
- Description (varchar2) (description of the concurrent request)
- Start_time (varchar2) (time to start, if null then immediate)
- Sub_request (Boolean) (is this a sub request true/false)

After these come argument1 through argument100. You could pass all 100 arguments even when they are null, or you can place ,CHR(0) directly after the last argument you actually need. That way only the arguments you need are passed, and you do not have to pad the remaining arguments with null values.

To find the short names and descriptions of all the concurrent programs, you can use this query:

SELECT
  conpro.CONCURRENT_PROGRAM_ID
, conpro.CONCURRENT_PROGRAM_NAME
, conpro.DESCRIPTION
, conpro.APPLICATION_ID
FROM
  FND_CONCURRENT_PROGRAMS_VL conpro;


To find all the short names of the applications on your system you can use this query:
SELECT
  appview.APPLICATION_ID
, appview.APPLICATION_SHORT_NAME
, appview.APPLICATION_NAME
, appview.DESCRIPTION
FROM
  fnd_application_all_view appview;




Unable to ship confirm

Problem description: unable to ship confirm in Oracle Order Management Super User. In some cases the ‘Ship Confirm’ button is grayed out.

This is in the ‘Shipping Transactions’ screen under the Delivery tab. The ‘Shipping Transactions’ screen can be found under ‘Order Management’: Shipping -> Transactions.

The reason for this is that your user account is not listed under the ‘Shipping Execution Grants’. To enable the button you have to list your user account there. You can find this under ‘Oracle Order Management Super User’, menu: Setup -> Shipping -> Grants and Role Definitions -> Grants.





How America searches: Mobile

More and more people are using mobile devices to access the internet: getting directions, checking e-mail, reading files and websites. As of March 2007 the number of wireless subscribers has climbed to nearly 234 million, reaching more than 72 percent of the total population, according to industry tallies by CTIA, The Wireless Association. With mobile devices on hand throughout the day and the number of mobile internet users topping 20 million, wireless is beginning to deliver on its long-held promise of becoming the “third screen”.

The digital marketing agency iCrossing has delivered a comprehensive report on the figures for mobile internet use and the way of searching in the American market.



Sunday, September 09, 2007

Supercomputing By Reservation


Supercomputers keep growing ever faster, racing along at the blazing speed of nearly one petaflops – 10 to the fifteenth, or one thousand trillion calculations per second – equivalent to around 250 thousand of today’s laptops.

In contrast, the experience of a computational scientist can be anything but fast -- waiting hours or days in a queue for a job to run and yield the precious results needed for further steps. The unpredictability of queues can impede the course of research, slowing progress with unexpected periods of waiting.

To address this problem, the San Diego Supercomputer Center (SDSC) at UC San Diego has released version 1.0 of a new User Portal, featuring an innovative user-settable reservation system that gives researchers more control over when their jobs will run on the center’s supercomputers. The service, not previously offered in high performance computing centers, is debuting on SDSC’s DataStar and TeraGrid Cluster systems.

“We’ve had a lot of feedback in user surveys asking for faster turnaround time,” said Anke Kamrath, director of User Services at SDSC. “While we couldn’t eliminate the queue, especially on popular machines like DataStar, we realized that a service that lets users themselves schedule ‘windows’ of reserved time would let them complete jobs more reliably and get more done.”

The reservation system can make computing more efficient in various situations. For example, a user with a large allocation may start a full machine job that will run for a day, only to find a minor problem causes it to quickly fail. Instead of being able to simply fix the problem and restart, the user is faced with going to the end of the queue and again waiting hours or days for the job to run. With SDSC’s new User Portal, this user can now easily set a reservation for a full-machine job, ensuring that they can complete the job in a timely way, even if minor problems occur.

Another research group may be debugging a new code. To do this they need to run many short jobs in succession, working as a team to troubleshoot the results of each run, and then trying again.

But each time they want to restart the code, they have to sit in the queue, potentially wasting many hours as the group awaits the results of each run. Using the reservations feature in the portal, the researchers can now schedule several hours of machine time for multiple debugging runs, making efficient use of the team’s time.

Other researchers may need to be sure they run in conjunction with a scheduled event such as observing time on an electron microscope or other instrument. Efforts are also underway to use this capability to support the co-scheduling of jobs to run across TeraGrid-wide systems.

“SDSC’s User Portal offers a clean interface that shields users from the complexity of the underlying service,” said Diana Diehl, who leads SDSC’s Documentation and Portals group. “Just like an airline reservation system makes intricate arrangements in a few minutes for travelers at their computers, SDSC’s reservation system carries out complicated tasks to arrange the supercomputer reservation, making sure that it follows policies, doesn’t disrupt jobs currently in the queue, interfaces with the user’s account, and allows time for preventive maintenance.”

While users have always been able to reserve time manually, the process can be slow and cumbersome. SDSC’s new user-settable system democratizes access to reliable computing, letting any user log in with either their TeraGrid or SDSC account and easily reserve time themselves. Rather than carving up the machine among various pre-selected users, this approach allows users to reserve up to full machine runs, encouraging use of the power of the full supercomputer to advance science into new realms.

The new user-settable system has been carefully designed to provide reservations that are in balance with existing jobs in the queue, and reservations carry a premium cost over jobs run without a reservation.

Based on GridSphere, the portal offers a Web interface to accomplish tasks such as running jobs and moving data that would ordinarily require complex command-line scripts. In the future, more features will be added to the User Portal through portlets such as accessing the SDSC Storage Resource Broker (SRB) data management system, the HPSS archival tape storage system, and visualization tools.

“It was an enormous task to create such a complex system,” said Kamrath. “It required teamwork among groups from Documentation to Production Systems across the center, and couldn’t have been done without SDSC’s large pool of expertise in a number of areas.”

The large team required to create the SDSC User Portal and user-settable reservation system includes, in management and development, Anke Kamrath, Diana Diehl, Patricia Kovach, Nancy Wilkins-Diehr, as well as Fariba Fana, Mona Wong, Ken Yoshimoto, Martin Margo, Andy Sanderson, J.D. Bottorf, Bill Link, Doug Weimer, Mahidhar Tatineni, Eva Hocks, Leo Carson, Tiffany Duffield, Krishna Muriki, and Alex Wu; in testing, Subha Sivagnanam, Leon Hu, Cuong Phan, Nicole Wolter, Kyle Rollin, Ella Xiong, Jet Antonio, and Shanil Daya.

Note: This story has been adapted from a news release issued by University of California - San Diego.



Corporate blogging.

Blogging is turning into a more widely used marketing tool for companies. Not only are big companies starting to use corporate weblog solutions; weblogs also enable smaller companies to enhance their online marketing. In most cases people from management and the thought leaders of a company are asked to maintain a corporate weblog. In some cases the bloggers are completely free in what they place on their weblog; in most cases the bloggers are supported by marketing teams who help them create their posts and keep an eye on the corporate value of a post.

In general there are some things to keep in mind when starting a corporate or non-corporate weblog.

- Transparency is Key
- Develop a Community
- Be Consistent
- Make a Policy
- Be Committed
- Acknowledge Faults and Missteps
- Take the Good with the Bad
- Ensure Weblog Usability

Keeping these guidelines in mind will help you make a success of your corporate weblog. Companies can benefit from weblogs in several ways. They get more exposure, which in the opinion of any marketing department is always a good thing, and they can give thought leaders a platform to share their knowledge and so show what knowledge the company has in house. There is the option to build a community of devoted readers and get feedback from that community. And those are just some of the advantages. Another big plus is that most bloggers get the hang of it, start to enjoy it, and after some time no longer consider it part of the job but more part of the fun.


Some good examples of corporate weblogs can be found here:
MacroMedia corporate weblog: http://weblogs.macromedia.com
Sun microsystems corporate weblogs: http://blogs.sun.com/
Weblogs from people working at IBM development: http://www-128.ibm.com/developerworks/blogs/


As this trend gains more and more momentum, more research is being done as well. Please find some interesting research papers here:

A research paper from Durbin Media Group: Corporate Primer on Business Blogging
E-Business Consortium, University of Wisconsin-Madison: Corporate Weblogging Best Practices
Lewis 360: The Business Value of Blogging
Bloomberg Marketing: Blogs: Beyond A Corporate Handshake

Sunday, August 26, 2007

Tracking containers.


The world is a very different place out beyond the horizon. Even as you read this, there are some 40,000 large cargo ships plying the world's waterways and oceans, not to mention innumerable smaller merchant craft, all pulling in and out of ports, loading, unloading, changing out crews and cargos, and steaming from one location to the next.

In what can be a very murky world of shadowy ship registry offices, lengthy manifests, and dockhands who change out faster than Barbosa's crew, how all these ships come by their cargo, how that cargo is loaded, by what polyglot seamen and in what untamed ports, can be an amazingly scrambled and trackless story rivaling the Pirates of the Caribbean.

Scenario: A single ship starts out in Singapore with containers filled with electronics, passes through Indonesia where it picks up spices, sails to Calcutta to load cotton, Port Said where it boards an Egyptian crew, Piraeus where it stops for fuel, Tangier where it picks up leathers, Scotland where it packs in woolen sweaters, and finally sets sail for Newark, New Jersey. Eleven million containers packed with such goods reach U.S. ports every year.

At any point in a merchant ship's journey, pry open container XYZ mid-ocean, and what might you find? When you can't be sure, that spells danger. The possibility that a single container has gone purposefully astray and might now be packed with explosives, or loaded with a virulent biologic destined for our shores, is not a fictional scenario. (In 1998, it was an Al Qaeda merchant ship that delivered the materials needed to bomb U.S. embassies in Tanzania and Kenya. That same ship was never seen again.)

Given lots of time, customs agents could find all contraband. But, in the world of maritime shipping, time is the enemy. Try delaying a delivery, and you may face some rough characters down at the docks (think On the Waterfront). What's more, anything that hinders the swift transit of goods around the world can have a rippling effect on the world's economy.

MATTS -- DHS S&T's Marine Asset Tag Tracking System -- is a miniature sensor, data logging computer, radio transceiver, and GPS tracking system integrated into a compact and inexpensive black box, about the size of a deck of cards. Affixed to a shipping container, MATTS can use its on-board GPS chip to estimate its location if the GPS signal is lost. And, in the final version of the system, containers outfitted with MATTS tags will be able to transmit through shipboard communications systems, even if they are placed deep below deck. The tag's signal will "jump" from container to container until it finds a path it can use. Better yet, this black box stores its location history and reports it back when in range (up to 1 km) of an Internet equipped ship, container terminal, or a cell phone tower. At any point in a container's journey, its history can be examined, and if anything has gone amiss, authorities know instantly to scrutinize that particular container.

Ultimately, MATTS will be integrated with S&T's Advanced Container Security Device. The ACSD sends an alert through MATTS when a container has been opened or tampered with on any side, providing even more security.

"MATTS will globally communicate in-transit alerts to Customs and Border Protection, and this capability satisfies a high-priority CBP requirement," says Bob Knetl, Program Manager for the MATTS research within S&T's Borders and Maritime Division.

In late April 2007, one hundred MATTS-equipped containers started out in the Port of Yokohama, Japan, and are now making their trans-Pacific crossing to the Port of Los Angeles/Long Beach, where they will then continue by rail to the Rochelle, Illinois, Rail Terminal and be unloaded and trucked to their final destination. This test, ending in August, will demonstrate that the communications can be used internationally (in this case, by Japan's Ministry of Land, Infrastructure and Transportation) and that transitioning to domestic drayage once portside in Long Beach also runs smoothly.

MATTS was developed under a DHS S&T Small Business Innovation Research (SBIR) contract by iControl Incorporated, a small Santa Clara, CA-based company.

"A serious threat is posed by the cargo that containers may hold," says Vinny Schaper, SBIR Program manager. "We have to view the ocean with grave concern, and realize that a maritime attack is not beyond the realm of possibility and if it comes, it will probably involve the use of merchant ships. Eleven million containers a year are brought onto our docks. Interrupt this with a terrorist attack, and the backup would reach around the world."

Note: This story has been adapted from a news release issued by US Department of Homeland Security.



Thursday, August 09, 2007

MIT wins Marconi Prize.

MIT Professor Ronald L. Rivest, who helped develop one of the world's most widely used Internet security systems, has been named the 2007 Marconi Fellow and prize-winner for his pioneering work in the field of cryptography, computer and network security.

Rivest, the Andrew and Erna Viterbi Professor in MIT's Department of Electrical Engineering and Computer Science, will receive the award and accompanying $100,000 prize at the annual Marconi Society Award Dinner on Sept. 28 at the Menlo Circus Club in Atherton, Calif.

The Marconi Society, established in 1975 by Gioia Marconi Braga, annually recognizes a living scientist who, like her father Guglielmo Marconi, the inventor of radio, shares the determination that advances in communications and information technology be directed to the social, economic and cultural improvement of all humanity.

"Ron Rivest's achievements have led to the ability of individuals across the planet--in large cities and in remote villages--to conduct and benefit from secure transactions on the Internet," said Robert Lucky, chairman of the nonprofit Marconi Society.

The group cited Rivest's advances in public-key cryptography, a technology that allows users to communicate securely and secretly over an insecure channel without having to agree upon a shared secret key beforehand.

"Public key cryptography has flattened the globe by enabling secure communication via e-mail, web browsers, secure shells, virtual private networks, mobile phones and other applications requiring the secure exchange of information," Lucky said.

A native of Niskayuna, N.Y., Rivest attended Yale University, where he earned a B.S. in mathematics in 1969. After receiving his Ph.D. in computer science from Stanford in 1974, Rivest accepted an offer to join the faculty at MIT.

At MIT he met two colleagues, Leonard Adleman and Adi Shamir, who would become his partners in solving the puzzle of public-key cryptography.

"Ron is a very special person," said Adleman. "He has a Renaissance quality. If tomorrow he discovered an interest in rocketry, then in five years he would be one of the top rocket scientists in the world."

Fortunately, what captured Rivest's imagination was the challenge of a public key encryption system. He managed to enlist Adleman and Shamir in his quest to produce what he called an "e-crypto system." It was a challenge ideally suited to Rivest's mathematical interests, relying on what Adleman calls "some of the oldest and deepest mathematics, going back to antiquity."

In public key cryptography, there are two keys; one known to everyone, and one known only to the recipient. The public and private keys are paired in such a way that only the public key can be used to encrypt messages and only the corresponding private key can be used to decrypt them. But even if someone knows the public key, it is effectively impossible to deduce the private key. To design such a system was the challenge. In effect, it was a mathematical puzzle.

The RSA encryption algorithm that Rivest, Shamir and Adleman developed relies on the challenge of factoring large numbers (typically 250 or more digits long), a problem that has stumped the world's most prominent mathematicians and computer scientists for centuries.

At one end of the "conversation," the receiving party's computer secretly selects two prime numbers and multiplies them to create a "public key" which can be posted on the Internet. On the other end, the sending party's computer can take that key, enter it into the RSA algorithm and encrypt a message.

The genius of the scheme is that only the recipient knows the prime factors that went into the creation of the public key--and that is what is required by the RSA algorithm to decipher the message. Even though others can see the encrypted message and the public key, they cannot decipher the message because it is impossible to factor the number being used in the public key within a reasonable period of time.
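To make the idea concrete, here is a toy worked example in Perl (my own illustration, not taken from the article) using the tiny primes 61 and 53; real RSA keys use primes hundreds of digits long, which is exactly what makes factoring the public modulus impractical.

use Math::BigInt;

my ( $p, $q ) = ( 61, 53 );                  # the recipient's secret primes
my $n = $p * $q;                             # 3233, published as part of the public key
my $e = 17;                                  # public exponent
my $d = 2753;                                # private exponent: (17 * 2753) mod 3120 == 1

my $message = 65;
my $cipher = Math::BigInt->new( $message )->bmodpow( $e, $n );   # anyone can encrypt with ($n, $e)
my $plain  = $cipher->copy->bmodpow( $d, $n );                   # only the holder of $d can decrypt

print "cipher=$cipher plain=$plain\n";       # prints cipher=2790 plain=65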

The team developed its system in 1977 and founded RSA Data Security in 1983. RSA was acquired in 1996 by Security Dynamics, which in turn was acquired by EMC in 2006. Rivest has continued his work in encryption and is the inventor of the symmetric key encryption algorithms RC2, RC4, RC5, and co-inventor of RC6.

Looking at ads



How do consumers look at advertisements? Most marketing textbooks advance the theory that looking at ads is a predominantly "dumb process," driven by visual stimuli such as the size of the ad or the color of the text.

However, new research by researchers from the Netherlands and the University of Michigan uses eye-tracking software to reveal that it may be our goals -- the tasks we have in mind -- that drive what we pay attention to, even during a few seconds of ad exposure.

In the August issue of the Journal of Consumer Research, Rik Pieters (Tilburg University, The Netherlands) and Michel Wedel (University of Michigan) perform an eye tracking experiment on 220 consumers. The consumers are split into four groups, each with a different goal, and given free rein to view a series of advertisements.

The study is self-paced -- that is, participants are allowed to look at the ads for as long or as short a time as they would like. Overall, the participants looked at the 17 target ads in the study for an average of only about 4 seconds -- but with notable differences in focus.

Those asked to memorize the ad focused on both the body text and the pictorial representation of the product. Those asked to learn about the brand, on the other hand, paid enhanced attention to the body text but simultaneously ignored the pictorial.

This supports the Yarbus thesis that ad informativeness is goal-contingent. Differences in pupil diameter between ad objects but not between processing goals reflect the pupil's role in maintaining optimal vision.

"The fact that even during the few seconds of self-paced ad exposure, attention patterns already differ markedly between consumers with different goals underlines the importance of controlling and knowing consumers' processing goals in theory building and during advertising pre- and post-testing," the researchers write.

In other words, the eyes are a reflection of consumer goals.

Reference: Rik Pieters and Michel Wedel. "Goal Control of Attention to Advertising: The Yarbus Implication," Journal of Consumer Research: August 2007.


Tuesday, July 31, 2007

Sun project FISHworks.

Shares of Sun Microsystems jumped close to 10 per cent in after-hours trading, as the hardware maker posted strong fourth quarter results. Longtime Sun followers, however, may be less moved by the figures since Sun relied more on cost-cutting than sales to improve its bottom line.

And now Sun is working on a new project called FISHworks. Project FISHworks is trying to create a NetApp killer: the project is developing a new NAS solution like the ones NetApp offers at this moment. Currently the project is still "secret", or at least Sun is not telling much about it... however, some details have already leaked out and are believed to be true. It will most likely be built on the ZFS file system, and it will most likely include DTrace. DTrace stands for Dynamic Tracing and helps you trace bottlenecks on Solaris and make your system perform better.

It will also most likely have a nice-looking web GUI and a command-line interface to administer the system. For the rest... we have to wait for more information from Sun.

Wednesday, July 04, 2007

Windows loses ground with developers

Microsoft's Windows platform is losing traction as a target for application developers in North America but still is the dominant platform, according to Evans Data survey results being released on Tuesday.

A survey this spring of more than 400 developers and IT managers in North America found that the number of developers targeting Windows for their applications declined 12 percent from a year ago. Just 64.8 percent targeted the platform as opposed to 74 percent in 2006.

"We attribute [the decline] largely to the increase in developers beginning to target Linux and different Linux [distributions]. Both Novell and Red Hat are the two dominant ones right now," said John Andrews, the CEO of Evans Data.

The arrival of Windows Vista likely only kept the numbers from being even worse. "I think Vista probably offset some of the decline," Andrews said.

The share for Windows is expected to drop another 2 percent, to about 63 percent, in the next year, Andrews said.

The targeting of Linux by developers increased by 34 percent, to 11.8 percent; it had been 8.8 percent a year ago, according to the survey. Linux targeting is expected to reach 16 percent over the next year.

Evans views the situation as a battle of Windows versus open source with open source maturing, Andrews said. Windows remains tops, though. "They're still dominant, there's no doubt about it," said Andrews. Use of Windows on the development desktop remains steady.

The survey, featuring developers at enterprises and solution providers like system integrators, covered both client and server application development.

Evans Data said the shift away from Windows began about two years ago and is accelerating. Linux is benefiting as are nontraditional client devices. Evans Data also surveyed developer plans for such platforms as Unix and Mac OS but did not release those numbers.

A Microsoft representative said Monday no one was available from the company to comment on the Evans Data report.

Andrews said the verdict still is out on the full impact that open-source software is having on the commercial software market but noted that there will always be a place for both paradigms.

In other findings in the Evans Data Spring North American Development survey, Evans found that JavaScript is the most widely used scripting language. It has more than three times the users of PHP (Hypertext Preprocessor), Ruby, or Python. But Ruby usage is expected to increase by 50 percent within the coming year.

Also gathering steam is virtualization. A third of developers surveyed are writing applications that support virtualization with 42.5 percent expected to adopt it within the next year.

Original article by Paul Krill, published on infoworld.com.

Developing PHP multi database applications.

When developing a website or web-enabled solution in PHP, most developers do consider the fact that their database might be migrated to a newer version; what most do not consider is that they may have to migrate to a completely different database platform. This might sound strange to some, as they are developing their application for a specific database platform. However, when you develop an application that can be used at several sites, you might not always be able to rely on the same database. Or, in case you are building an open-source or commercial solution, your users might not always want to work with the database you have in mind. You might be developing your code on an Oracle 10g database while your customer thinks an Oracle database is overkill and wants to use a MySQL or PostgreSQL database.

For those who are not familiar with the three-tier architecture: this architecture describes the situation where you have a client, an application server and an infrastructure server. In a PHP web-enabled solution the client is a customer running a web browser, and the application server runs a webserver with the PHP engine enabled. The infrastructure server runs the database. In the picture below you will see that all tiers are represented by a different server (or client PC). What you will normally see in less critical environments is that the application server and the database server run on the same hardware platform.

When planning to build a PHP solution you might want to think about this problem and plan in advance. To avoid having to write code for every possible database platform, you will need a SQL-translating engine in the middle. Even though SQL is quite a standard language, there are quite a few differences between the database vendors' SQL implementations.

You might consider writing your own SQL translator; however, there is a good open-source solution that can help you with this problem. ADOdb is a database abstraction library for PHP and gives you support for MySQL, PostgreSQL, Interbase, Firebird, Informix, Oracle, MS SQL, FoxPro, Access, ADO, Sybase, FrontBase, DB2, SAP DB, SQLite, Netezza, LDAP, and generic ODBC and ODBTP. This enables you to write a single code base to connect to all those “infrastructure” tiers instead of writing code for each of those platforms.

Using the solution as shown above will prevent the situation where you have to develop and maintain a large number of releases. Even though your developers will have to learn and adopt a new way of coding, it will pay off in the long run when you decide to migrate to a different database platform, or when you are working on a solution you will be distributing to customers. When you are developing