Friday, December 18, 2009

Ubuntu Linux network proxy

In my previous company I made the switch from Windows to Linux, and at my current company I am trying the same at the moment. The difference between then and now is that I am switching more and more between locations and customers. From time to time I feel like a laptop nomad, having to change settings every time I am on a different network.

I have been switching to Ubuntu Linux, where you can set up your network proxy quite easily. You can define and store different profiles and simply select the profile of the customer you are currently at. You can find this tool under "System" -> "Preferences" -> "Network Proxy". As you can see in the screenshot below, you can set a profile, and the settings are not that different from, for example, the Firefox settings for a proxy server.

Under the details button you can enter a username and password in case your proxy server needs some form of authentication. The good part about this is that you only have to set it here and all your applications will use this setting.

If you travel a lot between different networks, the option to create different profiles is very handy.

Monday, October 12, 2009

Mono, Tomboy, .NET and my mistake

My mother told me something a long time ago: if you are wrong and you made a mistake, that is nothing to be ashamed of as long as you admit it.

So maybe it is time to admit it, at least in part. I am talking about my post on Mono/Tomboy and how it brings .NET into mainstream Linux. After posting it, it took some time for Google to pick it up and tell people the page was out in the open. However, after it did, I was contacted via a comment by the lead developer of the Tomboy project.

Sandy had some things to say about my post, and I am glad to have received this update. It changes my opinion on some parts, though not on all. Sandy stated the following:

1) Mono does not depend on any .NET or any Microsoft code. It is a free software *reimplementation* of the .NET runtime, framework, and languages. All of the code you need to run Tomboy (for example) is 100% open source and free software, and totally compatible with the GNU GPL. Richard Stallman agrees with this and has stated it before. So you are more than welcome to download the Mono source, and tweak the runtime or change the C# language or do anything you want! :-)

Agreed, you are completely right, and I was wrongly informed that some parts of Mono were closed. I already knew that the complete source code of Tomboy was open source and could be changed; however, I was under the impression that some parts of Mono were still closed and under the control of Microsoft. So I have to change my opinion on this, based upon the information from Sandy.

2) Richard Stallman's main complaints about Mono that I have heard are the following:
a) Because Mono is a reimplementation of .NET, and Microsoft decides what is in .NET, you could say that Microsoft indirectly influences what ends up happening in Mono.
b) Microsoft has a lot of patents on a lot of things, and Stallman is concerned that there might be patents that affect Mono.

So, Stallman's argument against Mono is not about having the source, or it not being free software, or anything like that. It's more of a political and philosophical thing.

Let us not stop at the statement given by Stallman. My own opinion is that as a patent might (read: might) end up in Mono, it can affect Tomboy, so one should not take this risk. Maybe I am a purist like Stallman; that might be the case. However, this is my opinion. As I stated in my previous post, I would like to be able to pack my stuff, go to an island and do whatever I like. Even if I have all the source code, I can still end up in a situation where a ship comes to my island and tells me I am doing illegal things because I am tinkering with patents held by Microsoft. To prevent this, one should pick a language in which this can never happen.

It might be a political and philosophical discussion; however, I think we have to have this discussion to be able to think about what we do and do not allow in mainstream Linux. Currently I am under the impression that we should prevent this wherever we can to protect the purity of Linux. In my humble opinion we should try to prevent anything which can potentially harm the free form of Linux from ending up in a Linux distribution.

So I might have made some mistakes, and I hope I have corrected them in this post. I might not have been clear on some points, and I hope I have clarified them here as well. I hope some people would like to comment on this post so we can start a discussion, because I have the feeling this is a discussion we will have to have in order to decide what to do with the political and philosophical implications of open source.

Sunday, October 11, 2009

No .NET in Linux

Richard Stallman has spoken out against a single app, Tomboy, which has become part of the current unstable release of Debian Linux. The reason is that it depends on Mono. Mono is a cross-platform, open source .NET development framework, and his concern is that Microsoft might someday stop with .NET. As Mono depends on .NET, all the applications built with Mono depend on it too, and so depend on Microsoft. His position is that everything in Linux should be open, meaning all the source code should be available, which is not the case with .NET.

Richard is seen as one of the most brilliant people in the open source world, but also as one of the hardliners who is not willing to make any compromise on his idea of what open source is and should be. Some people think he is reacting too strongly and state that he has become too much of a hardliner. I do however agree with him on this. Linux should be completely open source, and by adding an application like Tomboy we are compromising that idea. Linux should be a platform that you can change to your liking, and by having a dependency on closed source you lose this ability.

In my opinion you should be able to put Linux on a laptop, get all the source code, move to a deserted island and be able to do anything you like to it without needing to contact anyone. Complete freedom, not depending on any other person. By adding Tomboy you become dependent on Microsoft and on the thinking of Microsoft. If you need a function changed in .NET for some reason, you will have to wait until Microsoft decides to do it, if they ever do. And this is not only the case with .NET; it is also the case with, for example, C#.

It is not an issue of opposing a language; everyone should pick the language he or she likes to use. It is about opposing the mixing of licenses in Linux. As Mono depends on .NET, and .NET is not GNU GPL compliant, it should never become part of a mainstream Linux release. If you really like it you should have the option to install it yourself; however, I would vote against including it, as it is not GNU GPL compliant.

If you are coding in, for example, Python and you need a special part of the interpreter changed for some reason, you have the ability to go to the Python website, check out the source code, change it and make your own version. That is complete freedom as it should be. Not many people will do it, but you do have the option if it is really needed. As someone from the US Army once stated about systems that would make it to the battlefield, "if we can not hack it we do not pack it", and that is a very true statement. If you do not have the option to make modifications when the situation calls for it, it is useless.

So, placing Tomboy in Debian, and thereby making yourself dependent on Microsoft, is a very bad move. Richard can be seen as a hardliner; however, I can only agree with him on this point.

Linux find command

Sometimes finding out the way a Linux or UNIX command works can be fun. For example, I have been looking into how the find command works and I have had a lot of fun with it. find gives you a great way to locate exactly the files you want, and with the pipe options in Linux you can get exactly the output you want and need.

The reason for me to look into the find command was an interface on a server at a customer. The interface basically consists of remote servers that place files via FTP in certain locations. On the server side we have a couple of scheduled processes looking for files in certain directories, which process and then delete the files. The processes run every 10 minutes; as some files can be large, processing can take some time, so files can sit at a location for a while, but never longer than 1 hour. So what the administrators would like to have is a check that looks for files older than 1 hour, because such files can indicate an interface that is stuck for some reason.

As we maintain a large number of servers and interfaces, this can not be done by hand and has to be automated. The solution is to schedule a script that will look for the files and mail the output every hour. Even when no files older than 1 hour are found, a mail should be sent, because this is a trigger to verify that the check has run.

The first step is to find out how to locate files older than 1 hour. We will be using the find command for this, as follows:

find . -type f -mmin +60

find is the command to find files. The "." indicates that you want to look in the current directory. Then we have the "-type f" option: "-type" allows you to state what kind of files you are looking for, and "f" states that you are looking for regular files. If you are looking for directories you can state "d", and if you are looking for a link you can state "l". For the complete list you can refer to the man page of find.

We also state -mmin +60. This indicates that you are looking for files modified more than 60 minutes ago. You can play with the +60; if you are, for example, looking for files that are NOT older than 60 minutes, you can state -60 instead.

Now, we do not want the standard output, because we want more information, somewhat like the output we get from the ls command. For this we can use the -exec option to run ls on every match. For -exec we set "ls -la" so we get the full ls output. The command will look like this:

find . -type f -mmin +60 -exec ls -la {} \;

However, this is still not what we want, because we only want the filename and the modification time. Currently we get something like:

-rw-r--r-- 1 jlouwers staff 0 Oct 11 11:14 ./x
-rw-r--r-- 1 jlouwers staff 0 Oct 11 11:23 ./z/x

So we have to change the output, and we can do this by using awk. We have to pipe the output of the above command to awk and then make sure we only get what we want: the time and the filename, nothing more. According to the UNIX manual pages, awk is a pattern scanning and processing language... simply told, it is a damn handy tool to make output look the way you want it.

We pipe the data into the following command (in the ls -la output above, the time is field 8 and the filename is field 9):
awk '{print $8,$9}'

which makes the complete command look like:
find . -type f -mmin +60 -exec ls -la {} \; | awk '{print $8,$9}'

and the output will look like:
11:14 ./inbound_225/225int_inb.txt_65466
11:15 ./inbound_225/225int_inb.txt_65467
11:25 ./inbound_225/225int_inb.txt_65468
11:23 ./inbound_256/256int_inb.txt_43221
11:24 ./inbound_256/256int_inb.txt_43222

One last thing you might want to add: if you are running the command on a directory tree where you do not have access to all sub-directories, you can get permission-denied errors in your output, which can disturb your checks. You can send all the error messages to /dev/null by redirecting stderr of the find command (note that the redirect belongs outside the -exec part, since -exec does not go through a shell), so the command looks like this:

find . -type f -mmin +60 -exec ls -la {} \; 2>/dev/null | awk '{print $8,$9}'

The find command has a lot more options; check out the manual and have some fun trying them.
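The same hourly check can also be scripted in Python instead of shell. The sketch below is a hypothetical standard-library-only script (the function name `stale_files` is my own): it walks a directory tree and reports files modified more than 60 minutes ago, silently skipping unreadable entries much like sending find's errors to /dev/null.

```python
import os
import time

def stale_files(root, max_age_minutes=60):
    """Return (time-string, path) pairs for regular files modified more
    than max_age_minutes ago, like `find root -type f -mmin +60`."""
    cutoff = time.time() - max_age_minutes * 60
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue  # unreadable entry; same effect as 2>/dev/null
            if mtime < cutoff:
                hits.append((time.strftime("%H:%M", time.localtime(mtime)), path))
    return hits

# example: list all stale files under the current directory
# for stamp, path in stale_files("."):
#     print(stamp, path)
```

From here the output can be handed to the scheduler's mail step, exactly as with the shell version.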

Sunday, October 04, 2009

Sun Solaris manual pages

We are now providing an option to check UNIX manual pages from our website again. Special thanks to the University of Alabama, the University of Athens, SGI and the University of Southampton.

Read the complete story at >>

TEDx in Amsterdam

TEDx is coming to Amsterdam. Very exciting to have a TEDx event in Amsterdam. TED is a small nonprofit devoted to Ideas Worth Spreading. It started out (in 1984) as a conference bringing together people from three worlds: Technology, Entertainment, Design.

Read the complete story on >>

Monday, September 28, 2009

Blackberry and Twitter

For a long time RIM, the company behind BlackBerry, held back on developing a Twitter app. They felt that companies like UberTwitter, TwitterBerry and such could do the job. And they did the job quite nicely; however, now RIM would like to jump on the Twitter bandwagon: a blogpost is making the rounds stating that RIM will launch an official Twitter app. Nice.....

Python database abstraction

Even though I am going cover to cover (when I feel like it) through a Python book, I sometimes like to make an exception. This is one of those exceptions. I already covered ADOdb in a previous blogpost; as you might recall, I wrote a piece on ADOdb in combination with PHP.

However, I discovered that ADOdb is also available for Python. Now you have the option to write ADOdb statements, and this abstraction layer will create the correct syntax for whichever database you deploy on. Meaning that if you ever switch from an MS SQL database to an Oracle database, you will not have to revise all your code. Simply tell the abstraction layer that it should now talk Oracle SQL instead of MS SQL and you are in business.
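To make the idea concrete, here is a toy sketch of what such an abstraction layer does under the hood. This is not the ADOdb API; the function and dialect names are made up purely for illustration:

```python
# A toy illustration of a database abstraction layer: one call in your
# code, dialect-specific SQL underneath. (NOT the ADOdb API; the names
# here are made up for this example.)

def limit_query(table, rows, dialect):
    """Build the 'first N rows of a table' statement in the given SQL dialect."""
    if dialect == "mssql":
        return "SELECT TOP %d * FROM %s" % (rows, table)
    if dialect == "oracle":
        return "SELECT * FROM %s WHERE ROWNUM <= %d" % (table, rows)
    if dialect == "mysql":
        return "SELECT * FROM %s LIMIT %d" % (table, rows)
    raise ValueError("unknown dialect: %s" % dialect)

# switching databases is now a one-word change:
print(limit_query("orders", 10, "mssql"))   # SELECT TOP 10 * FROM orders
print(limit_query("orders", 10, "oracle"))  # SELECT * FROM orders WHERE ROWNUM <= 10
```

A real layer does much more (connections, parameter binding, result sets), but the principle is the same: your code states the intent once, and the layer speaks each database's dialect.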

Thursday, September 24, 2009

Starbucks Iphone

Starbucks is launching a new app for the iPhone. It will be launched on the US version of the App Store first. As we all know, Starbucks is already a very internet-friendly company, and a lot of people, me included, have spent some time with a laptop in a Starbucks to get some work done.

Now Starbucks has launched an app which can help you enjoy it even more. It can help you locate a Starbucks close to you, so you will never spend too much time looking for a place to get coffee. You can also give credit points to coffees and in this way indicate what your favorite is. This will help Starbucks do some marketing, I guess…..

A second part of the application can be used only in a couple of test Starbucks locations in the US. You can place an amount on your iPhone app and use it to pay for your coffee. It works in the same way as your Starbucks card would; the only difference is that it is now on your iPhone.

I am really wondering how this will fly. If this is a success, and I guess it will be, we will see a lot of these applications for other companies in the near future; gas stations, for example, come to mind. So your phone will become more and more your wallet. Eventually someone will build a new app capable of holding all the custom apps. Years ago people were already talking about paying with your phone; well, I guess this is one of the best steps towards that goal. Some pilots with SMS payments have already been tried and are running at the moment; however, I personally feel this is the first attempt done in a way that can be accepted by the public.

However, security comes to mind. I would love to dive into this to find out what the security is on this thing.

Tuesday, September 15, 2009

Nomee ..... NO!

Just been reading an item about Nomee by Christina Warren. Nomee is a new tool with which you can monitor and follow people across all kinds of different social networks and channels.

A great app, and something that can help you. However... why an app? It is an application I have to download and install on my computer. I can not download it on my phone, and why is it not just a web application?

Nomee looks great, it looks like fun; however, for me it is not usable at this moment. Why not? Because I am on the move most of the time I have time to follow people, read stuff and such. So it has to be mobile, more mobile than Nomee is at the moment. A mashup for social networks is great, however not in the form that Nomee envisions.

Saturday, September 12, 2009

IndentationError: expected an indented block

Indentation is a big part of writing Python code, and that is a good thing in my opinion, because it makes you write better and cleaner code. Indentation is used to shift code to the right to mark a block. So instead of writing your code like below:

n = 1
i = 1
print "start code"
while n<=10:
i = 1
print "start the table of", n
while i<=10:
print i, "x", n, "=", i*n
i = i + 1
n = n + 1
print "end code"

you have to write your code using indentation to make it work in Python. Meaning that functioning code will look like this:

n = 1
i = 1
print "start code"
while n<=10:
    i = 1
    print "start the table of", n
    while i<=10:
        print i, "x", n, "=", i*n
        i = i + 1
    n = n + 1
print "end code"

As you can see, it makes your code more readable because you can see what is inside a while loop and what is not. This is exactly why it is used in Python: not to make your code look nice, it has a functional part to it. In some languages you indicate the begin and end of a code block, like a while loop, with brackets: a { to start and a } to end the block. In Python you use indentation. If your indentation is not correct you will most likely end up with an "IndentationError: expected an indented block" error. Luckily a line number is given, so you can debug your code quickly.
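You can trigger the error deliberately by handing Python a badly indented snippet; a small sketch:

```python
# The line after `while True:` should be indented; compiling this snippet
# therefore raises IndentationError ("expected an indented block").
bad = "while True:\nprint('x')\n"
try:
    compile(bad, "<demo>", "exec")
except IndentationError as err:
    print("caught:", type(err).__name__)  # caught: IndentationError
```

Note that IndentationError is a subclass of SyntaxError, so Python refuses the code before it ever runs.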

Personally I think the use of indentation for code blocks is great. It teaches you to write your code in a way that is more readable for other developers, at least in this respect. I remember rewriting code from other developers and first making sure all the indentation was correct so it became more readable; in Python that is no longer needed, because if your indentation is not correct you will not be able to run the code in the first place. Meaning that, if your process is correct, no developer can commit code into production with incorrect indentation. That is to say, for the parts where it is enforced.

Final word, indentation in Python code,….. a good thing.

Friday, September 11, 2009

Python while loop

As I am going cover to cover through a book about Python coding, I will have to touch on the loop section. A loop is basically a command repeated until a criterion is matched. You will see loops in almost every language.

This is what Wikipedia has to say on it:
In most computer programming languages, a do while loop, sometimes just called a do loop, is a control flow statement that allows code to be executed repeatedly based on a given Boolean condition. Note though that unlike most languages, Fortran's do loop is actually analogous to the for loop.

The do while construct consists of a block of code and a condition. First, the code within the block is executed, and then the condition is evaluated. If the condition is true the code within the block is executed again. This repeats until the condition becomes false. Because do while loops check the condition after the block is executed, the control structure is often also known as a post-test loop. Contrast with the while loop, which tests the condition before the code within the block is executed.

It is possible, and in some cases desirable, for the condition to always evaluate to true, creating an infinite loop. When such a loop is created intentionally, there is usually another control structure (such as a break statement) that allows termination of the loop.

Some languages may use a different naming convention for this type of loop. For example, the Pascal language has a "repeat until" loop, which continues to run until the control expression is true (and then terminates) — whereas a "do-while" loop runs while the control expression is true (and terminates once the expression becomes false).
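Python itself has no do-while statement; the usual emulation of the post-test pattern described above (a sketch) is a `while True` loop with a `break` at the end of the body:

```python
# Emulate a do-while: the body runs once before the condition is tested.
n = 0
while True:
    n += 1           # loop body, always executed at least once
    if n >= 5:       # post-test: stop once the exit condition holds
        break
print(n)  # -> 5
```

This runs the body at least once even when the condition would fail immediately, which is exactly what distinguishes a do-while from a plain while.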

As can be seen below you can also nest a loop inside a loop:

n = 1
i = 1
print "start code"
while n<=10:
    i = 1
    print "start the table of", n
    while i<=10:
        print i, "x", n, "=", i*n
        i = i + 1
    n = n + 1
print "end code"

Wednesday, September 09, 2009

Oracle BPEL transaction timeout


Recently I encountered a project where Oracle BPEL was used on an Oracle Application Server 10g. It turned out that some of the BPEL processes suddenly stopped while waiting for an asynchronous callback in BPEL.

After some searching I finally found a usable error message in one of the error logs; you can see it at the end of this blogpost. It turned out that this could be solved by changing some settings in XML configuration files: changing the transaction-timeout settings solved the problem.

If you ever encounter an error like the one below, you might want to check the "Setting Properties for BPEL Processes to Successfully Complete and Catch Exception Errors" section in this Oracle manual.

You will have to change some settings in $SOA_Oracle_Home/j2ee/home/config/transaction-manager.xml and in $SOA_Oracle_Home/j2ee/home/application-deployments/orabpel/ejb_ob_engine/orion-ejb-jar.xml. After this, remember to stop and start the server to activate the new settings.

<2009-09-03 14:05:07,617> failed to handle message
javax.ejb.EJBException: An exception occurred during transaction completion: ; nested exception is: javax.transaction.RollbackException: Timed out
javax.transaction.RollbackException: Timed out
at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(
at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke(
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(
at com.evermind.server.ejb.interceptor.system.SetContextActionInterceptor.invoke(
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(
at com.evermind.server.ejb.InvocationContextPool.invoke(
at oracle.j2ee.connector.messageinflow.MessageEndpointImpl.OC4J_invokeMethod(
at WorkerBean_EndPointProxy_4bin6i8.onMessage(Unknown Source)
at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$
javax.ejb.EJBException: An exception occurred during transaction completion: ; nested exception is: javax.transaction.RollbackException: Timed out
at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(
at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke(
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(
at com.evermind.server.ejb.interceptor.system.SetContextActionInterceptor.invoke(
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(
at com.evermind.server.ejb.InvocationContextPool.invoke(
at oracle.j2ee.connector.messageinflow.MessageEndpointImpl.OC4J_invokeMethod(
at WorkerBean_EndPointProxy_4bin6i8.onMessage(Unknown Source)
at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$
Caused by: javax.transaction.RollbackException: Timed out
at com.evermind.server.ApplicationServerTransaction.checkForRollbackOnlyWhileInCommit(
at com.evermind.server.ApplicationServerTransaction.doCommit(
at com.evermind.server.ApplicationServerTransaction.commit(
at com.evermind.server.ApplicationServerTransactionManager.commit(
at com.evermind.server.ejb.EJBTransactionManager.end(
... 29 more
<2009-09-03 14:05:07,618> Failed to handle dispatch message ... exception ORABPEL-05002

Message handle error.
An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.PerformMessage"; the exception is: An exception occurred during transaction completion: ; nested exception is: javax.transaction.RollbackException: Timed out



Tuesday, September 08, 2009

Twestival in Amsterdam

Looking for a party, hoping to meet all the people you know from Twitter? Go and have fun at Twestival, a festival which is organized and promoted using Twitter. Between 10 and 13 September you will be able to find Twestivals all over the world.

Twestival is not only about having fun; it is also about helping. The way it works is that you pay a small amount when you want to join a party, and this money goes to a charity of your choice. For example, in Amsterdam a party will be held in Odeon, promoted and organized by @AncillaTilia.

So if you are looking for a great party, would like to meet your Twitter friends and want to help a good cause… go find a local Twestival event and have a great party.

Sunday, August 30, 2009

Clean desk policy

Started the new clean desk policy at home again for my own working area. Every now and then, especially on a rainy Sunday, I get the idea that a clean desk is perfect for me. Today is such a Sunday, and I started cleaning my desk again. Somehow I get a lot of paper mail from all kinds of vendors, and I collect books and magazines around my working area.

Most of it contains some interesting information, and I would like to post some of those things here before dumping the magazines and such into a dumpster (that is, a paper recycling dumpster).
Just found an item on an online marketplace for fresh food, and an interview with its director Hans Robben. Ever in need to buy a shipload of shellfish? You can check the site and bid on it. This is where you as a farmer, fisherman or company can place your goods and people can start placing bids. The magazine where I found it is not that old, and neither is the site, so I am wondering how this open marketplace will hold up. The idea is great and I hope to see more initiatives like this.

Cool looking workplaces.
I recently moved to a new office. It is very modern and very nice looking; a real nice and cool working environment with a lot of open spaces. It really gives me a good vibe. However, I am still wondering what it is like to work at, for example, Google. Google is still on my list of companies I want to work for one day, either as an employee or as a hired consultant. Why? Have a look at one of the offices on this website. Talking about nice working environments: I think the office of Wieden+Kennedy in Amsterdam is a really, really nice place to work. The design of the office is great. You can find more information, including some pictures, at this website.

Space Barley
Most likely you have never heard of it, and most likely you have never tasted it. Space Barley is a beer made of barley grown on the ISS space station. Scientists have tested what grows in space, so that when we try to colonize space one day we know what to grow. Barley can be used for this, and one of the great things is that you can make beer from barley. Well, it was a test and most likely it will never become a mass product, so if you had the pleasure of drinking a bottle of it.... too bad for you!!! You should have kept it, sold it in 10 years and made a fortune off it.

Building a house?
Thinking about building a house? Have a look at the website. You can find a lot of great pointers on how to make your house green and energy efficient. And building something green is something hippie-like? Forget it: green is cool, and it gives you a great extra benefit in running costs once you live in the finished house.

Tuesday, August 25, 2009

Python if elif else

In almost every programming language you have some basic commands and functions, basic construction options so to speak. "if" is one of those: "the if statement is used to check a condition and if the condition is true, we run a block of statements (called the if-block), else we process another block of statements (called the else-block). The else clause is optional." So we can have a check, and if this check returns true we can take some action. Let's see in a very basic example how this works; I will show this with a very small Python script.


#set some variables
var0 = 2
var1 = 1

if var0 > var1:
    print "var0 IS larger than var1"
elif var0 < var1:
    print 'var0 IS smaller than var1'
else:
    print 'var0 is not larger or smaller than var1, maybe they are the same?'

print 'and we have left the if elif else'

So with this very simple script we can show what action is taken, or rather what text is printed to the console. You can test this by playing with the values of var0 and var1 and seeing for yourself what the result is. Basically I do not want to spend too much time on the "if" part, as this should be a very basic part of a programmer's knowledge. The only part that can be tricky is that in some languages elif is written as "else if", "elsif" or even "if else". In Python it is if, elif and else. Just something you have to know when you start with Python.

So as you can see, making decisions with if statements is very basic. Another thing which is good to know is that you can nest if statements: inside an if-block you can create another if-block to make your decision even more precise. In the script below I first determine if var0 and var1 are equal. If this is not the case, we "open" a new if-block to see what exactly is the case: is var0 larger or smaller than var1? Just play around with the values of var0 and var1 and you will see what it can do.


#set some variables
var0 = 2
var1 = 2

print 'starting some nesting'

if var0 != var1:
    print 'var0 is not the same as var1'
    if var0 > var1:
        print 'var0 is larger than var1'
    elif var0 < var1:
        print 'var0 is smaller than var1'
elif var0 == var1:
    print 'var0 is the same as var1'

print 'done with the nesting'

Also, as you might know from other languages, the content of an if-block is often placed within brackets, constructed something like below:


if (condition I){
    ...
}
else if (condition II){
    ...
}

In Python this is not used; you might or might not like that, but the people who developed the Python language did not see the need for it. I personally think it is a missing part, simply because I am among the bracket lovers who like to have everything nice and tidy inside a couple of brackets. If you are in the same group.... you will get used to it eventually; I did too. Upside: you will never have to count opening and closing brackets again... remember those long nights of debugging a bracket problem?


Patching HP-UX

For an internal project I somehow became the admin of a couple of HP-UX servers. Those servers were used in a previous project for some time and are behind on patching. Now we would like to install Oracle E-Business Suite R12 on them, and to be able to do so we have to patch the boxes first. So I have downloaded the patches in a bundle from the HP website and unpacked them in a location from which I like to install them.

According to the README_hp-ux file you should take the following steps:

1. Move the patch download file to a file system with enough free space. These instructions assume "/tmp/patches".

2. Run the create_depot script as user "root". create_depot_hp-ux_11 must run on a 11.X system. The create_depot_hp-ux script unpacks the patches, and uses swcopy to create the /tmp/patches/depot directory.

3. After the depot is created, remove the individual patches, .text, and .depot files.

4. If you are creating more than one depot, rename the /tmp/patches/depot directory, and remove /tmp/patches/depot.psf.

Installing the depot
1. On the target system, run swinstall. Enter the Source Host Name. The Source Depot Path is "/tmp/patches/depot".

2. Further instructions are available on the swinstall man page. Type "man swinstall"

I encountered the first problems when I tried to create the depot for the patch bundle. It returned errors a couple of times, with error text like this:

You do not have permission for this operation. The depot owner, system administrator, or alternate root owner may need to use the "swreg" or "swacl" command to give you permission. Or, to manage applications designed and packaged for nonprivileged mode, see the "run_as_superuser" option in the "sd" man page. WARNING: More information may be found in the daemon logfile on this.

After some reading in HP documentation and forums it turns out that this error can be the result of a changed hostname and/or IP address while the swagentd daemon is still running with the old IP / hostname. To stop and start it you can run:

/sbin/init.d/swagentd stop
/sbin/init.d/swagentd start

After this the create_depot_hp-ux script worked without any error.

Sunday, August 23, 2009

Ashley Schwartau

Someone asked me what the hacker scene is and what to expect at a hackers convention. Well, the only way to get the correct view is to visit one; however, as we just finished har2009 we all have to wait some time for the next big event. In the meantime I could advise them to watch the documentary made by Ashley Schwartau. Just watch the interview with Ashley Schwartau below:

Ashley Schwartau - Hak5 Interview - Hackers Are People Too from Ashley Schwartau on Vimeo.

Python, comparing variable values

I started the Python cover to cover series on this weblog some time ago; however, for some reason, namely working on other projects, I have not posted a Python cover to cover for some time now. So time to pick it up again. To all who like to follow the Python cover to cover: I will keep working on it, as I have finished some of the projects that were holding me back.

In the past posts on Python I have been explaining variable types. Now we will look at what we can do with variables when it comes to comparing. First we define some variables to play with:

var0 = "a"
var1 = "b"
var2 = 100
var3 = 50
var4 = float(1.1)
var5 = float(50.0)

So now we have some variables to play with. First we will use the string variables var0 and var1. Let's compare if they are the same; to do this you use == so in this example we will use the expression var0 == var1, which returns a boolean value, in this case False. As a small coding example you can check the code below:

>>> var0 = "a"
>>> var1 = "b"
>>> var0 == var1
False

We can also do some other types of compare. For example the "not equal" compare can be done by using != as can be seen below:

>>> var0 != var1
True

And now something that you might expect on float and int values only: a greater-than or smaller-than compare on a string, which takes the alphabet into account:

>>> var0 > var1
False
>>> var0 < var1
True

When you are comparing string values in Python with greater-than or smaller-than functions you have to take into account that you might run into trouble because of upper and lower case characters. So it is a good thing to make sure that when you compare like this you make all characters upper or lower case before you start comparing. For this you can use the upper() and lower() functions. So if you want to turn a string into uppercase or lowercase in Python you can use the following:

>>> "THIS IS A TEST".lower()
'this is a test'
>>> "ThIS Is A TeSt".lower()
'this is a test'

Basically all of this can also be done on numbers and not only on strings. Al Lukaszewski has also written some things about it which you might want to read.
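To wrap up, here is a small sketch of mine (written with Python 3's print() syntax so it runs on current interpreters) combining the case-insensitive string compare with the numeric compares described above; the helper name same_ignoring_case is made up for this example:

```python
def same_ignoring_case(a, b):
    # normalize both strings to lower case before comparing,
    # so "TeSt" and "test" are treated as equal
    return a.lower() == b.lower()

# string comparison follows alphabetical order
print("a" < "b")                                                # True
print(same_ignoring_case("ThIS Is A TeSt", "THIS IS A TEST"))   # True

# the same comparison operators work on numbers
var2 = 100
var3 = 50
print(var2 > var3)                  # True
print(float(1.1) < float(50.0))     # True
```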

Friday, August 21, 2009

Basic network security for Oracle Developers

I just recently came back from a 4-day hackers convention in the Netherlands, har2009, and have been talking to a lot of people. One of the things that came up during some of the discussions was security and the time a vendor needs to patch things. In most cases an exploit is found and used in the field before the vendor (for example Oracle or a firewall vendor) is aware of it. After the vendor is made aware of the fact that a security issue exists, it will take some time to fix it and make sure a patch is available. After the patch is available, administrators still need to apply it.

So even though vendors are working on making their systems as secure as possible, and developers who are developing on those platforms are trying to write their code as securely as possible, you will still see that security breaches happen. You simply cannot find all the errors in the code; even if you test, retest and retest again you will still have bugs in your code, and ways the code can be used that you never thought about.

So, simply put, it is almost impossible to create a system that is a hundred percent secure. So if you as an Oracle developer develop a system, think about it as an architect, or are responsible for it as a project manager, you will have to know at least some of the basics of security. I will not go into all kinds of coding examples to make your code as secure as possible; I will however provide some best practices on mainly network security and architecture.

As an example I will use the image below, which I found in a blogpost by Steven Chan on "Loopbacks, Virtual IPs and the E-Business Suite". For this document we are not going into the fact that this is an Oracle E-Business Suite setup; we just take for granted that the web nodes that are used are simple web nodes and we are unsure of what they do. It is not really important for the examples, as you should harden and secure every system as much as possible without looking at what the system is doing.

This statement can already create some questions: why should I harden a system which is not holding any mission-critical information, is "normal" security not enough? Simply put, no. Why not? If you "lower" security on non-mission-critical systems you have a weak link in your security. Possibly an attacker can compromise this system and use it as a foothold and starting point for future attacks. You should protect every system as if it were the most important system in your entire company. Never compromise on security for whatever reason. Budget, time, deadlines, policies not in place... none of these can ever be an excuse to "lower" security on a system.

In the above picture we see the network as shown in the blogpost of Steven Chan. Basically nothing is wrong with this approach; the network diagram is not intended to display security, it is used to explain another issue. We will however use it as the basis for explaining some things. When reading the rest of this article the thought might arise that you as a developer will not have the "power" to request all the things I will state. However, in my opinion you should mention them and ask about them when developing a system, because you simply would like to have the most secure and stable system possible.

External firewalls:

With external firewalls I mean non-local firewalls, so all firewalls which are in place somewhere in the network. I will make a point about internal firewalls later in this article, where I will be talking about firewalls on your Oracle server itself. As we can see in the network drawing we have 3 firewalls: a firewall on the outside of your network, a firewall between DMZ0 and DMZ1, and one between DMZ1 and DMZ2. DMZ0 holds the reverse proxy; DMZ1 holds a loadbalancer and 2 web nodes. DMZ2 is not really a demilitarized zone because it also holds clients, the internal users. Possibly the network drawing could show another firewall which protects your database server.

However valid the setup looks, and although it is protecting you from the external network (the internet in this case) with a layer of firewalls, it is not complete. You should have another firewall in place to make this more secure: a firewall between the internal users and the rest of DMZ2, which contains the web nodes and a load balancer.

Reasons for this: (A) even though most companies try to trust their users, you can never be sure. A large portion of hacking attempts is even done from within the company, for example by disgruntled employees. (B) The users will most likely have access to the internet and so can be compromised by malware and rootkits, which could potentially become a gateway into your company. So you cannot trust your users (or the workstations they are using) and you should at least have a firewall between your servers and your clients. It is even advisable to secure your servers from your internal users in the same way as you would protect them from the internet.

Internal firewalls:

So as we stated in the firewall section it is great to have firewalls in place to make sure intruders cannot enter your DMZ without any hurdle. However, when thinking about security and your DMZ you have to keep one thing in mind: what happens if an intruder gets access to your DMZ, what happens when an intruder has compromised a server in your DMZ without you knowing? With the setup as it stands now, all other servers are then wide open.

To prevent this it is advisable to have internal firewalls on your servers to harden them against attacks from within the DMZ. So you will have secure islands within the secure DMZ sector. This will make it harder for an attacker to take over multiple servers before being spotted. You can for example use iptables to harden your servers with internal firewalls. The setup of iptables rules can be a little hard if you do this for the first time, and even if you have a long history of working with iptables it can be hard to maintain all local iptables settings when you have a large number of servers. For this it can be handy to use tools like fwbuilder or kmyfirewall.

When setting up an internal firewall you can make sure that, per interface, only the ports are open that are needed for a minimum service on that network segment. I will explain more about network segmentation in combination with internal firewalls in the next topic.

Network Segmentation:

As stated in the previous topic of internal firewalls one has to minimize the open ports and services per network segment so only the ports are open that are needed for this segment. With network segmentation you will make use of all network interfaces in your server to create multiple networks. For example you will have a customer network, a data network and a maintenance network.

Users for example will only use the customer network. As we know the servers are web nodes, we will only have to open TCP port 443 for HTTPS traffic over TLS/SSL. Users will most likely have no need to access anything else, so there is no need to provide them the option to connect to other ports.

The data network can be a separate network running over a different NIC, which can be used to connect to the database from a web node or to connect storage appliances. This will most likely only be used for server-to-server connections, so most likely no real-life users need access. So people who are working on the customer network will not be able to connect to ports on the data network: first because they are not on the same physical network, and even if you have some VLAN configuration mistake or something like that, you still have your iptables firewall which will restrict them based upon the source IP address. On this network you do NOT have to open for example port 443, or a port for SSH or telnet.

The maintenance network will be used by administrators, and on this network you can close for example direct access to network storage appliances, web applications and such. You do have to open ports like SSH on this network so administrators can access the servers by secure shell and do their work. This network will be the most wanted by attackers because it will have ports open that can be used and exploited to gain access to the console of the servers.

So by physically having some network segmentation you can separate services and groups of users based upon their role. Are they users or are they administrators of the systems? Based upon this you can provide them access to a certain network segment with its own routers, switches and entire network topology.
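As an illustration (not part of the original setup), the per-segment port policy described above can be captured in a small table. The segment names, the port choices and the helper is_allowed are assumptions of mine based on the examples in this post; the database listener port in particular is assumed:

```python
# Hypothetical per-segment port policy, following the examples above:
# customers only get HTTPS, the data network is server-to-server only,
# and the maintenance network exposes SSH for administrators.
ALLOWED_PORTS = {
    "customer": {443},       # HTTPS over TLS/SSL only
    "data": {1521},          # e.g. database listener traffic (assumed port)
    "maintenance": {22},     # SSH for administrators
}

def is_allowed(segment, port):
    """Return True when the port is open on the given network segment."""
    return port in ALLOWED_PORTS.get(segment, set())

print(is_allowed("customer", 443))    # True
print(is_allowed("customer", 22))     # False: no SSH for end users
print(is_allowed("maintenance", 22))  # True
```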


Encrypted connections:

We have talked about firewalls on the outside and on the inside, and we have discussed separating networks. However, even if you have done all this you cannot feel secure. When thinking about security you always have to consider the network compromised. And if you consider the network compromised, someone can sniff all the network traffic by using a network sniffer. So you should state, very strongly, that you can only use encrypted network connections.

This means: never use telnet, only use SSH. Never use FTP, use SFTP or SCP. Never use HTTP, use HTTPS instead. Use encrypted SMTP when you are sending out mail. So you can make your network even more secure by making use of encrypted connections only. By using only encrypted connections, a possible attacker who is using a packet analyzer on your network traffic cannot sniff cleartext passwords.

You should restrict users from using open services like telnet, ftp and such by simply closing the ports with iptables.

Coding a custom application can be a little harder when using a secure and encrypted connection; however, it will not be that much more work and the benefits are huge. So huge that, if you implement it correctly, a possible attacker will only be able to see scrambled data and not a single useful packet of information. As this post is intended for Oracle developers it might be good to have a look at the Oracle Application Server Administrator's Guide, "Overview of Secure Socket Layer (SSL) in Oracle Application Server".
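As a small sketch of the "encrypted connections only" rule, Python's standard library shows what a client-side TLS setup looks like. This is an illustration of mine, not part of the Oracle documentation referenced above:

```python
import ssl

# create_default_context() gives secure defaults for client connections:
# certificates are verified against the system CA store and the server
# hostname is checked against the certificate.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: peer certificate is required
print(ctx.check_hostname)                    # True: hostname verification is on

# A plain socket would be wrapped before any data is sent, e.g.:
# with socket.create_connection((host, 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname=host) as tls:
#         ...
```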

Patching policy:

So we have set up external and internal firewalls, encrypted the network traffic and separated the network into parts; can we feel safe? No, not really. Even though we have taken all those steps, we have to take into consideration that we still have to open some ports, and every open port is a possible point of entry for an attacker. If you are for example running a web node you will have to run a webserver, and as this post is intended for Oracle developers you will most likely be running an Oracle Application Server, which uses Apache. Even though Apache is a really good and secure webserver, it still has its security issues. So if the vendor, Oracle, releases a patch you need to give it a very good look, and I advise applying it after testing it and reading the specs a couple of times.

The patch is released for a reason, and this reason is solving bugs. Not always security-related bugs, but they solve some issues. As you will see, most exploits target old versions, and systems that are not being patched become more and more vulnerable to an attack every time they miss a patch. Your server could be compromised or services could be disrupted; in all cases it is not desirable. So again, my advice: whenever a patch is released, give it a good look, and if it is not causing a major problem you will have to apply it as soon as possible.

If a patch is causing a real problem with your custom code, for example, you should not simply decide not to apply it. You should solve the problem which is holding you back from applying the patch, and after that apply the new code and the patch on the systems. If you decide not to apply a patch, you can end up in the situation that some time later a critical patch is released and you still have to apply the first patch, as it is a prerequisite for the second one. Then you still have to do the code fixing. Or you can decide not to apply the second patch either and... you are simply making your system more and more insecure with every patch you miss.

Securing code:

This post is about network security and not so much about code security. However, your vendor (Oracle) will provide you security patches; on your own code, you (and your team) will be the only ones who can release security patches, so it is important to not only test the standard things in your code. Also have a look at, for example, SQL injection, buffer overflows, incorrect error handling... etc. etc.

Password policies:

Not really a topic of network security, however something to mention: passwords. It goes without saying that your administrators will have to have some policy in place on passwords: how long they should be, what the strength should be, etc. etc. However, a topic I would like to mention is public key authentication. In this case you "do not need" a password to login over SSH; you use a public key to authenticate yourself to the server. The good thing about this is, for example, that you can grant users the right to login as root without them knowing the password. You add a key to the root user, and based upon this key users can login. So if you want to revoke the rights you do not have to change the password and inform everyone who still has access to this account; you can simply remove the key for this person, and for the rest of the users nothing changes.

Public key authentication is considered much stronger than password authentication, so it is advisable to use only key authentication. This can also be used for, for example, SCP: when you (automatically) move files between servers you can also make use of key authentication. So give this a good look.

Monitoring and sniffing:

Now we are getting a safe network. However, if we want to be sure that nothing "funny" is happening, you want to monitor your network with an intrusion detection system, for example Snort. Snort is an opensource IDS which can monitor and detect strange behavior. This way you can monitor anomalies and possible attacks on your network.

Also, it can be good to have some log analyzers ready. All your systems provide log files. Most administrators only look into the log files (and the mails to root) when a problem is found. However, it can be very beneficial to write your own custom log analyzer and create some portal-like environment where you consolidate the log files and analyze them. You can scan for specific error messages and, for example, failed login attempts. Use your creativity.
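As a minimal sketch of such a custom analyzer, the snippet below counts failed SSH login attempts per source address. The sample lines follow the usual OpenSSH log format, and the counting logic is an illustration of mine rather than a full portal:

```python
import re
from collections import Counter

# matches OpenSSH lines such as:
#   "Failed password for root from 10.0.0.5 port 4242 ssh2"
FAILED_LOGIN = re.compile(r"Failed password for .+ from (\S+)")

def failed_logins_per_ip(lines):
    """Count failed SSH login attempts per source IP address."""
    counts = Counter()
    for line in lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample = [
    "Failed password for root from 10.0.0.5 port 4242 ssh2",
    "Accepted publickey for admin from 10.0.0.9 port 5151 ssh2",
    "Failed password for invalid user oracle from 10.0.0.5 port 4243 ssh2",
]
print(failed_logins_per_ip(sample))  # Counter({'10.0.0.5': 2})
```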

So with the combination of Snort and log analyzers you can keep a good watch on whether someone is trying to do some damage in your network.

Wednesday, August 19, 2009

Twitter Spam Trust Model

Some time ago I signed up for a Twitter account, as you could read on my weblog. I started using Twitter just for fun and to find out what everyone is talking about on Twitter. After some time I became quite happy with the service, the information which can be found on Twitter, and the way you can interact with people you have never spoken to before and who might be unreachable if it were not for Twitter.

However, as with every good service, after some time it will also be used to promote goods and services you might not want. You will be contacted by people in such a way that you can consider it spam. Twitter spam is currently, in my opinion, the biggest problem and threat to Twitter and its growth. If people use it they do not want to be annoyed with all kinds of spam messages. Some time ago I posted a tweet stating that Twitter spam will be the next big fight. On this tweet I got some reactions via Twitter and also offline. Some people stated that if I thought this was the next fight, I should make a point by thinking about the subject and creating some kind of approach on how Twitter should fight it.

As Twitter is just a message service from a person to one or more other persons, some of the approaches designed for fighting email spam can be applied, some even in a more effective way as all communication is happening inside one domain. For example, a trust model can be applied very easily; already used for email, it can be used to fight Twitter spam.

Trust model:
A trust model against Twitter spam should find the relationship you as a sender have with the person you are sending the tweet to. A Tweet Spam Rank (TSR) could be calculated for the tweet: the higher the TSR, the lower the trust between the sender and the receiver. You can send a message to someone you do not have a relation with; this will give you a high TSR, however it will not make you a spammer. To prevent you being banned as a spammer just because you sent a single message to someone you have no relation with, you should have an average TSR over time which is below the threshold of being identified as a spammer. However, the TSR calculation will have a big role in the spam fighting. Before explaining the TSR calculation, first some basics on the Twitter relation model and the components inside this model.

You, the sending party, are represented in the model as the green dot. As you can see you can have several relations (or non-relations) with other hops. Hops are other Twitter users you send a message to, or who are a bridge to other hops. The model in its current version only goes two hops deep, so at most a connection hop and a destination hop. To be sure this is "deep" enough one should run some calculations on the Twitter data.

As can been seen in the picture above there are four types of connections that can be made:

- T1, a connection with a hop and a connection back. You follow the tweets of this person and this person in turn is following your tweets. As you both follow each other you most likely have a strong connection, so sending a message over this connection will result in a low TSR.

- T2, a connection from a remote hop to you. This person is following you and you do not follow him. So for some reason this person is interested in you, and if you send a direct tweet to this person he or she will most likely be willing to accept it. It is not as strong as a double connection, however still a low TSR.

- T3, a non-connection. You have no connection whatsoever to this person, not even via a connection hop, so this will result in a high TSR score.

- T4, you follow a person however this person is not following you. So for some reason you have an interest in the tweets of this person, but the interest is not mutual. A direct tweet to this person will result in a higher TSR.

Now we have to connect some values to the parts of the trust model so we can calculate the TSR of a message. For this we refer to the model as it is shown below. As you can see all possible relations within the trust model are represented in this diagram.

We start with the sending party; a sending party will, for calculation reasons, have the value 2. T1 connections have a value of 5, T2 has a value of 10, T3 has a value of 100 and T4 a value of 15. A connection hub has a value of 5.

Now let's say you want to send a message to the user in hop B; we can calculate the TSR as {you * T1}, which is {2*5}, so this message will have a TSR of 10, which is the lowest TSR you can get. Meaning you just sent a message with a very low Tweet Spam Rank. However, sending a message to B1 will have a calculation like {you * T1 * connection-hub * T4}, which is {2*5*5*15}, meaning you will have a TSR of 750 for this message.

For example you can be sending a message to C1. You have a very weak connection with C1, so you should get a high TSR: {you * T4 * connection-hub * T4}, which is {2*15*5*15}, resulting in a TSR of 2250. This is the weakest connection you can have with a connection hop and two T4 connections. However, one exception to the rule is a T3 connection, which results in a TSR of 1000 without any calculation needed.

The entire model would make sense if people would behave and only play by the rules of the model above. However, in the real world you will see that multiple routes to a person are possible, and we have to take this into account. You can see an example of this below.

In this example you see two possible routes to hop B3. You can take the route to B3 via connection hub B or via D. Based upon the model we cannot state whether B3 will appreciate your message, because if he were willing to follow you he could have made a direct relation. So to get a correct TSR we have to calculate the average TSR of both connections, meaning you will have to calculate {({you * T1 * connection-hub * T1} + {you * T4 * connection-hub * T1}) / 2}. This will give you the correct TSR for this message. We only do an average TSR calculation in case there is no direct connection; so even if there are multiple paths and a direct connection, we will ignore the other paths and only use the direct connection to calculate the TSR.
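The calculations above can be sketched in a few lines of code. This uses the values stated earlier (sender 2, T1 = 5, T2 = 10, T4 = 15, connection hub 5, and a flat 1000 for a T3 non-connection); the function names are mine, not part of the model:

```python
# Connection-type values as defined in the model above.
SENDER = 2
HUB = 5
T1, T2, T4 = 5, 10, 15
T3_FLAT = 1000  # a T3 non-connection scores 1000 without any calculation

def direct_tsr(connection):
    """TSR for a tweet over a single direct connection."""
    return SENDER * connection

def routed_tsr(first_leg, second_leg):
    """TSR for a tweet that travels via one connection hub."""
    return SENDER * first_leg * HUB * second_leg

def average_tsr(routes):
    """Average TSR over several routes, used when no direct connection exists."""
    return sum(routed_tsr(a, b) for a, b in routes) / len(routes)

print(direct_tsr(T1))                     # 10, the lowest possible TSR
print(routed_tsr(T1, T4))                 # 750, the message to B1
print(average_tsr([(T1, T1), (T4, T1)]))  # 500.0, the two routes to B3
```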
Now we have a good model for calculating the value of relations; however, scoring a high TSR every now and then does not make you a spammer on Twitter. Every now and then you like to contact people you do not know, and maybe build a stronger relation later in time. So we have to measure the TSR score within a time and tweet frame. Based upon the number of tweets, the time and the TSR you can start to determine if a person is a spammer. In the real world you will see that a spammer hits a lot of high TSR scores, and a lot of the same scores in a row, while a normal human user mostly hits low scores and the TSR scores differ a lot. This is a way you can identify a spammer.

This model and the calculations are raw and not based on actual research on the Twitter data; however, if access to Twitter data could be granted, someone could complete this model and do some test drives on it to see what the exact behavior of a spammer is. The model can be tuned and perfected. I would also like to point out that, for example, the growth of connections can be used in combination with TSR to determine the intentions of a Twitter user. To be precise, a spammer would like to have a large network very quickly, so he most likely will add hundreds of connections within a short period of time, while this is not the case for most human users. So this also can be used in combination with TSR to identify spammers. I hope this blogpost will come to the attention of some people at Twitter and that they are willing to give this a thought, because I would be very disappointed if Twitter collapses under its own success and the spammers it attracts with this success.

Friday, August 07, 2009

Oracle custom error message

When developing scripts and code with PL/SQL for an Oracle database you always like to think that your code is the best in the world. You would like to think it will never result in an error. However, users who are using your code will find a way to crash it; you will have overlooked some possibilities. So even after you and several other people have tested the code you will find that in some cases an error will happen.

So you have implemented all kinds of error handling. The problem with "standard" error handling is that it generates all kinds of user-unfriendly error messages. For developers and DBAs these make sense; however, if you want them to be shown to your user community and make sure they have some meaning, it might be nice to have a custom error message.

In Oracle PL/SQL you can use the RAISE_APPLICATION_ERROR procedure. RAISE_APPLICATION_ERROR allows you to set a custom error message which will have more meaning to an end user than the standard ORA messages. You also attach a custom error number, which must be in the range -20000 to -20999 that Oracle reserves for user-defined errors, so you know where it happened in the code and have some more useful information while debugging your code.

For example, if I want to raise an error like ORA-20001: The value you entered is not a valid customer ID number, I have to tell my code to somehow do this.

So let's say you have some IF clause which checks the customer ID value entered by a user. If the check succeeds there is no need to raise an error; if it fails you can use RAISE_APPLICATION_ERROR to show the error message. It is done as shown below:

RAISE_APPLICATION_ERROR(-20001, 'The value you entered is not a valid customer ID number');

It goes without saying that you can add some variables to the message, so it might be nice to show the user, for example, the value he has entered. However, you might in that case also consider having an alternative error message in case RAISE_APPLICATION_ERROR is unable to handle the variable; think about a customer ID which has a length beyond the length that can be shown... etc. etc.

However, using custom error messages in your PL/SQL code is a good way of showing your users what is wrong. It is better than the standard, somewhat cryptic, messages which are provided by Oracle.

Monday, August 03, 2009

Instant messaging in 2011

We all use instant messaging nowadays. I resisted it for a long time and was a true believer that UNIX talk and IRC were more than enough to communicate with the outside world. However, I now also have a Skype account, an MSN Messenger account and a corporate Microsoft Office Communicator account.

All needed to keep in contact with people. The downside of this all is that you get more and more differentiation between the user communities. Some of us use MSN, others use Google Talk. Some make the decision because of what their friends use, some because of what is installed on their PC. Now if I want to talk to someone who is using, for example, MXit I have to create an account and install a client to be able to communicate with this person.

Creating an account is already a hassle, and then I have to hope I can install the software on the PC I am currently using. In some cases I do not have the rights to install the software, so in that case I am blocked and cannot come into contact with this person unless he or she is willing to register on the same network and install the client I am using.

If we look at Wikipedia, the number of instant messaging platforms is enormous and growing. Just to name some names: AIM, eBuddy, IBM Lotus Sametime, ICQ, IMVU... the list can continue for some time.

What do we need in 2011?
What we need in 2011 as a new killer app for instant messaging is an instant messaging mashup in the form of a website. Think of it as a single location where you can login once and activate all your instant messaging channels. When you need an extra platform, for example when you need to talk to someone who has only MXit, you can now register for MXit, assign it to your mashup profile and chat with them via the webinterface.

The next step is that you will be able to create an account in only a couple of steps, using for example an OpenID. If you have an instant messaging mashup and are able to connect and create accounts using an OpenID, you no longer need to do all the painful registration steps; within a minute you will be able to connect to the person you want to talk to, as long as his network can connect to the instant messaging mashup.

Another benefit of this is that you can manage your contacts within the mashup and will be able to add or remove them from all your instant messaging networks. For example, if I want to add my friend Tom to be able to talk to him via Skype and MSN, I currently have to know his username to be able to add him, and I have to add him in both services. In case of a mashup, he has registered his name in the mashup and set that he is using Skype and MSN. Now when I want to add him I do not add his Skype and MSN accounts to my account, I just add his mashup account to my mashup account. The moment he agrees, the mashup application adds his name to my MSN and Skype networks.

What will be the downside?
The downside is that not all networks will be willing to cooperate with this. They will no longer be in the picture and will become more of a network than a tool. Right now they are a tool, an application that sits on your desktop and spreads the brand of the company that developed it. When it is incorporated in a mashup they no longer have the advantage of a dominant position on your desktop.

You will see that they will make it harder to communicate with their messaging servers without using the official client, so this can be a downside. Some will also quit the market, or be pushed out of it, so you will have less to pick from. Even though I started this topic with the problem of having too many different instant messaging networks, it is good that you have a choice, and by improving the interoperability of the networks you will see that some will quit. Having a choice is always important, so I see this as a real downside.

Mashup network?
Would it be a good plan to also have a mashup network, like we have for Google Talk and MSN for example? Yes, this should be part of the solution. The ideal situation is an open-source mashup server that is able to run its own instant messaging network and connect to other mashup servers. So if you do not want to join an existing network, you can just make use of the mashup network. If it is properly designed you can even use it on websites as a sort of makeshift instant messaging chatroom, which enables you to talk to the people in that specific room and, if you like them, invite them to become part of your mashup network.

Privacy and security?
Privacy and security: beside the obvious fact that the security of the mashup has to be really tight, there are more things to consider. On the topic of mashup security: as not all networks will work with OpenID, you will have to store some of your passwords in the mashup to be able to offer a sort of single sign-on. So if the mashup server is compromised, all your passwords are compromised. There are ways and encryption algorithms that can prevent this, but the person developing the mashup server should be aware of it.

As for privacy, you should be able to configure your privacy in such a mashup as fine-grained as possible. For example, if I get an invitation from Carla and I have an MSN, Yahoo! Messenger and Skype account, I want to be able to grant her access to only MSN, because I know that if she gets my Skype account I will have to talk to her every night. I also want to be able to say that I am available on all networks except MSN for persons X, Y and Z, even though I might be available for person Y on Skype. All these small fine-grained settings you should be able to define.
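These per-contact, per-network rules boil down to a small rule table. The sketch below is purely illustrative (the class, rule names and contacts are all made up) and just shows that a default plus explicit overrides already covers the Carla and person-X examples above:

```python
# Sketch of per-contact, per-network visibility rules.
# Default: visible on every network; exceptions are listed explicitly.

class PrivacyPolicy:
    def __init__(self, networks):
        self.networks = set(networks)
        # (contact, network) -> allowed? Overrides the default below.
        self.overrides = {}
        self.default_visible = True

    def set_rule(self, contact, network, allowed):
        self.overrides[(contact, network)] = allowed

    def is_visible(self, contact, network):
        return self.overrides.get((contact, network), self.default_visible)


policy = PrivacyPolicy(["MSN", "Yahoo", "Skype"])
# Carla may only see my MSN presence, never Skype or Yahoo.
policy.set_rule("Carla", "Skype", False)
policy.set_rule("Carla", "Yahoo", False)
# Person X is blocked on MSN but still allowed on Skype.
policy.set_rule("X", "MSN", False)

print(policy.is_visible("Carla", "MSN"))    # True
print(policy.is_visible("Carla", "Skype"))  # False
print(policy.is_visible("X", "Skype"))      # True
```

A real mashup would of course need groups, defaults per network and a UI on top, but the underlying data model can stay this simple.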

Where are we?
Somewhere in the middle, I think. What we currently see is that instant messaging networks are opening up. There are already desktop clients that allow you to interact with a number of networks from a single client.

We have APIs for quite a lot of networks. For MSN Messenger, for example, there is a Python library: msnlib, a Python MSN Messenger protocol library and client.

For Google Talk we can also use Python,
and as Google Talk is based upon a Jabber server we can talk to most of the networks which use a Jabber server: Google Talk, LiveJournal Talk, Nimbuzz, Ovi… Jabber is an XMPP server, and you can even start your own server quite easily; just have a look at
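Part of why XMPP makes this interoperability tractable is that a chat message is just a small XML stanza. As a rough sketch (the addresses are made up), such a stanza can be built with Python's standard library alone:

```python
# Build a minimal XMPP <message/> stanza using only the stdlib.
# The JIDs below are invented examples; a real client would also
# handle streams, authentication and presence.
import xml.etree.ElementTree as ET

def build_message_stanza(sender, recipient, text):
    """Return a minimal XMPP chat message stanza as a string."""
    msg = ET.Element("message", {
        "from": sender,
        "to": recipient,
        "type": "chat",
    })
    body = ET.SubElement(msg, "body")
    body.text = text
    return ET.tostring(msg, encoding="unicode")

stanza = build_message_stanza(
    "alice@gmail.com", "bob@jabber.example.org", "Hello from the mashup!")
# Prints something like:
# <message from="alice@gmail.com" to="bob@jabber.example.org"
#          type="chat"><body>Hello from the mashup!</body></message>
print(stanza)
```

Because every Jabber-based network relays essentially this same stanza, one XMPP connection can reach users on any of them.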

So we have some bits and pieces; now we have to create a mashup for them. Well, work is being done on this subject too. If we check the
we can already find some interesting things. So as you can see, some people are working on parts that can be used. Now all we need is a person who puts the pieces together and creates this mashup, which can make all of our lives a lot easier. Never install a client again, always be able to quickly add all your new friends to all your instant messaging networks… sounds like a good build for 2011.

Is it already there? Yes, in some form; if you have a look at eBuddy you will see that something like the above is already available. So why still request a build for 2011? It is not completely the product as described above. Some of the real benefits are missing in my opinion, and there is not a strong user community. Maybe some people will disagree with me and state that there is already a good working mashup. Well, that is great; however, I find it still too little, and I would like to opt for a build for 2011 which is preferably an open-source project, so you can download it and create your own spinoff of a mashup, maybe even connect that mashup to some central mashups. So while eBuddy is a great tool, it is not (yet) what I am intending.

What about Google Wave?
When I was talking to some people about writing this post, Google Wave came up. What about Google Wave? Well, as Google Wave is quite new and still needs to get a good user community, I am not sure. Also, Google Wave is not an instant messaging client as such. However, from what I have been reading and what I have seen, I would say that Google Wave should also be incorporated into this mashup. Or… Google Wave should become this instant messaging mashup. As Google Wave and the wave protocol are open source, it might be that the community will build a mashup around Wave, or that Google itself will create the interoperability between the different networks. Might be a good hint to the guys over at Google Labs. Please do forward a link to them so they might have a peek at this post.