Thursday, March 20, 2014

Oracle Enterprise Manager plugin development: PortARUId

Oracle promotes Oracle Enterprise Manager as its default monitoring and maintenance solution. Oracle provides monitoring solutions for a growing number of systems out of the box; however, since Oracle cannot build functionality for every system that might be a candidate for monitoring and maintenance, Oracle Enterprise Manager offers an extensibility option. You can develop your own plugins and thereby extend the capabilities of Oracle Enterprise Manager for your own company, or build a commercial product with it.

In essence, Oracle Enterprise Manager provides the capability to monitor targets on a diverse range of operating systems. A custom plugin, however, might be developed for a specific platform only. For example, your plugin may be applicable to Linux systems and not to Windows systems. To ensure that the plugin will only be used on the supported platforms, you can limit the deployment options during the development phase.

One of the base files in plugin creation is plugin.xml, which provides the general information about your plugin and tells Oracle Enterprise Manager how to handle it. In this file you also have the option to limit the operating systems on which the plugin can be discovered and deployed. This is of special interest when you develop a plugin that is not usable on all platforms.

Within plugin.xml you can define a number of things, among them the type of operating system on which the plugin can be hosted (the Oracle Enterprise Manager Management Server), which deployed agents can use the plugin based upon the target operating system, and which targets should be discovered as potential deployment targets for this plugin.

PluginOMSOSAruId: the PluginOMSOSAruId element within plugin.xml states which Oracle Enterprise Manager Management Servers are capable of running this plugin. In almost all cases this applies to all operating systems, as the extensibility framework on the management server protects you (up to a certain level) from platform-specific decisions. The value is therefore commonly set to 2000, which refers to "all":

<PluginOMSOSAruId value="2000">
</PluginOMSOSAruId>

Within the certification section of the plugin.xml file you can define the applicable operating systems for both the agent and discovery. Both are defined as a component type, as can be seen in the example below:

<Certification>
  <Component type="Agent">
    <CertifiedPorts>
      <PortARUId value="46" />
      <PortARUId value="226" />
    </CertifiedPorts>
  </Component>
  <Component type="Discovery">
    <CertifiedPorts>
      <PortARUId value="46" />
      <PortARUId value="226" />
    </CertifiedPorts>
  </Component>
</Certification>

Correct values for component type are:
Agent (Management Agent component)
Discovery (Discovery component)

Correct values for PortARUId are:
46 (Linux x86 (32-bit))
212 (AIX 5L and 6.1 (64-bit))
226 (Linux x86-64 (64-bit))
23 (Solaris Sparc (64-bit))
267 (Solaris x86-64 (64-bit))
233 (Microsoft Windows x86-64 (64-bit))
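
Putting the pieces together, a plugin intended to run on Linux only (both 32-bit and 64-bit) could combine the elements above as follows. This is a sketch based on the snippets in this post; the rest of the plugin.xml content is omitted:

```xml
<!-- Sketch: OMS side certified for all platforms (2000),
     agent deployment and discovery restricted to Linux -->
<PluginOMSOSAruId value="2000">
</PluginOMSOSAruId>
<Certification>
  <Component type="Agent">
    <CertifiedPorts>
      <PortARUId value="46" />   <!-- Linux x86 (32-bit) -->
      <PortARUId value="226" />  <!-- Linux x86-64 (64-bit) -->
    </CertifiedPorts>
  </Component>
  <Component type="Discovery">
    <CertifiedPorts>
      <PortARUId value="46" />
      <PortARUId value="226" />
    </CertifiedPorts>
  </Component>
</Certification>
```

With this certification section in place, agents on the other PortARUId platforms listed above (AIX, Solaris, Windows) will neither discover targets for this plugin nor accept its deployment.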

Saturday, March 15, 2014

The Capgemini perspectives on big data

Capgemini shares its perspectives on big data and analytics. Perspectives include defining big data, good governance in a big data world, finding the value behind the big data hype, and the building blocks organizations will need to put in place to make it work for them.

It looks at the people aspects: the skills you need to move towards data-driven decision making, digital transformation, and the impact on the customer experience. Big data is typically very specific to an industry, so although there are common technologies and some common information sets, ultimately each industry sector has many different new data sources and different business issues.

Therefore, a key part of this series looks at how big data is affecting sectors and the associated opportunities it presents. Visit the Capgemini expert pages or the Capgemini Big Data page for more information.

Wednesday, March 12, 2014

Big data in the oil and gas industry

The oil and gas industry is, in general, a sector that collects large amounts of data. This holds not only for companies active in upstream, but also for those active in midstream and downstream. Especially companies that work in all sectors of the industry hold large amounts of data, ranging from seismic data to data on how the distribution chain is performing, and much more. To succeed in an industry like oil and gas, and to achieve compelling advantages over competitors, it can be very beneficial to combine all this raw and unstructured data. Currently most of the data is siloed and not accessible or usable for analysis across the whole chain, and it is commonly impossible to use this data in a manner that produces results usable in business planning.

For those cases, it can be very beneficial for companies in and around the oil and gas industry to start thinking about a big data strategy. Implementing a big data strategy goes way beyond "only" implementing a Hadoop cluster and handing it to a number of tech people. Hortonworks recently published an article that provides a first insight into how Hadoop and a big data strategy can be used in the oil and gas industry. A good starting point for thinking about a big data strategy is the image below.


The above image shows this from a Hortonworks perspective; however, the same can be achieved with other Hadoop implementation vendors, for example Oracle. What is interesting about this image as a starting point is that it gives a first impression of the sources within the oil and gas industry that could potentially be used within a big data strategy.

Monitor your network connections on Linux

In some cases and in some environments you want to keep track of all the network connections a Linux server or workstation makes. For example, if you are planning to control your local network more tightly and are thinking about implementing stricter firewall rules, it is good to investigate what users are trying to access. In general, external connections to webservers are common and should be allowed, and most likely you also know which servers in your local network are likely to be accessed by other local servers and workstations. However, a lot of hidden network traffic you are not aware of may be taking place, and when closing a lot of ports in your network you might start hindering daily operations.

In those cases it is good to start monitoring which traffic is generated, so you can investigate it and draw up a network connection diagram. For this you can use logging on network switches, routers and firewalls. However, an easier way in my opinion is to ensure all your workstations run a copy of tcpspy, which will collect data for some time and report it back to a central location.

tcpspy is a little program that logs connections the moment they connect or disconnect. By default tcpspy installs in a manner that it automatically starts as a daemon and writes all information to /var/log/syslog, capturing everything. You can, however, create rules for what tcpspy should capture by editing the file /etc/tcpspy.rules or by adding a new rule with the tcpspy -e option.

Before implementing stricter local firewall rules on the workstations on my private home network, I first ran tcpspy for a couple of weeks, extracted all information from /var/log/syslog to a central location, and visualized it with a small D3.js implementation. This showed that a number of unexpected, yet valid, network connections were made on a regular basis which I was unaware of.
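
The extraction step can be as simple as a small shell function. The sketch below summarizes which remote endpoints were contacted and how often, based on tcpspy entries in syslog. Note that the exact log line format ("connect: ... remote host:port") is an assumption based on my own setup; check /var/log/syslog on your system for the fields tcpspy actually writes there.

```shell
# summarize_tcpspy: read a syslog-style file given as the first argument
# and print a count of contacted remote endpoints, most frequent first.
# Assumes tcpspy lines of the form:
#   ... tcpspy[PID]: connect: user NAME, local A:P, remote B:Q
summarize_tcpspy() {
  grep 'tcpspy' "$1" \
    | grep ' connect:' \
    | sed -n 's/.*remote \([^ ,]*\).*/\1/p' \
    | sort | uniq -c | sort -rn
}
```

Feeding the resulting "count endpoint" pairs into a central location per workstation gives you exactly the kind of data set that is easy to visualize afterwards.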

Implementing this on your local home network is not that difficult, especially if you have a scripted way of rolling out tooling to all workstations in an automated manner. It might look a bit overdone in a home environment; however, as it can be considered a test drive for a blueprint to be implemented in a more business-like environment, it shows the value of being able to quickly visualize all internal and external network traffic.

When you are looking into ways to log all internal and external network connections made by a server or workstation, it might be a good move to give tcpspy a look, and when you are looking into ways to visualize the data you collect, you might be interested in the options provided by D3.js.

Saturday, March 08, 2014

Managing Oracle Big Data Appliance with Oracle Enterprise Manager

For customers who need to implement Hadoop-based solutions, Oracle provides the Oracle Big Data Appliance as an engineered system. As with most Oracle products, both hardware and software, Oracle uses Oracle Enterprise Manager as the primary monitoring and maintenance solution. For the Big Data Appliance, Oracle provides the "Oracle Enterprise Manager System Monitoring Plug-in for Oracle Big Data Appliance", an OEM plugin that can be added to Oracle Enterprise Manager the moment you add a Big Data Appliance to your IT landscape.

Even though managing and tuning Hadoop is still work for skilled people, Oracle makes life a lot easier by providing a ready-built rack with software and hardware tuned to work together. For example, Mammoth helps you greatly during the initial setup of your Big Data Appliance, and during operation Oracle Enterprise Manager and the plugin help you monitor and manage a large part of the engineered system.

Because most parts of the Oracle Big Data Appliance are built by Oracle, and the parts not manufactured by Oracle are also found in most of the other engineered systems, there is the option to show you virtually everything, on both the hardware and the software level, from within a single tool. As you can see from the screenshot below, there is a visually recognizable representation of your hardware within the tool, from where you can drill down to most of the components via the left pane menu.

Oracle Enterprise Manager for Oracle Big Data Appliance

It also provides a single entry point and screen to see which component is located on which node in your cluster and what its current status is. In the screenshot below you can see an overview of a cluster within the Oracle Big Data Appliance.

Oracle Enterprise Manager for Oracle Big Data Appliance

As you can see from the above screenshot, it provides a software component overview broken down per server node. Each server node, represented as a row in the table, shows the status (if present on that particular server node) of the namenode, failover node, journal node, data node, job tracker and task tracker, and where ZooKeeper is running.

All this can also be achieved by stitching together your own tooling and scripting. However, because Oracle has been able to combine the hardware and the software, and to reuse for a large part technology that is already widely used for monitoring and maintenance, you can be up and running in a fraction of the time that would be needed to design and develop this yourself.

Thursday, March 06, 2014

Oracle Smart Flash Cache patching requirements

When using a PCI Express flash card in your server to enable the Oracle database to make use of the Oracle Smart Flash Cache option, there are a number of things to consider. First of all, this only works when you are using Oracle Linux or Oracle Solaris as the operating system. This is commonly known; less commonly known, however, is that you also have to ensure that your database is on a certain version.

Popular belief is that Oracle Database Smart Flash Cache is available and working with all Oracle database versions out of the box. However, when using Oracle database 11.2.0.1.2 or lower you will have to apply database patch 8974084, and to be able to apply patch 8974084 you first have to apply patch 9654983.

When you are trying to get Oracle Database Smart Flash Cache up and running on Oracle Database 11.2.0.1.2 or an earlier version, you have to apply those patches before you start configuring this option. If the patches are not applied, it will not work.
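
Once the operating system and patch prerequisites are met, enabling the flash cache itself comes down to two initialization parameters. A minimal sketch, where the device path and size are placeholders you will need to replace with the values for your own flash card:

```
# init.ora / spfile parameters (sketch; /dev/sdf1 and 64G are
# placeholder values for your own flash device and its capacity)
db_flash_cache_file = /dev/sdf1
db_flash_cache_size = 64G
```

The instance has to be restarted for the flash cache to become active; sizing the cache relative to the buffer cache is a tuning exercise of its own.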