Thursday, April 24, 2014

Java 8 improved features overview

Oracle has released a new version of the Java programming language: Java 8. A number of changes and new features have been added in this release. Oracle has also been giving the upcoming internet of things a lot of thought and has ensured that you can now, even more easily, create extremely small Java applications to run on devices. This is a great step in the direction of Java ending up on more and more devices and claiming its place in the world of the "internet of things".

The video below is a great and quick introduction to the new Java 8 release.



The new Java 8 release includes the following new or improved features:

Java Programming Language

  • Lambda expressions, a new language feature, have been introduced in this release. They enable you to treat functionality as a method argument, or code as data. Lambda expressions let you express instances of single-method interfaces (referred to as functional interfaces) more compactly.
  • Method references provide easy-to-read lambda expressions for methods that already have a name.
  • Default methods enable new functionality to be added to the interfaces of libraries and ensure binary compatibility with code written for older versions of those interfaces.
  • Repeating Annotations provide the ability to apply the same annotation type more than once to the same declaration or type use.
  • Type Annotations provide the ability to apply an annotation anywhere a type is used, not just on a declaration. Used with a pluggable type system, this feature enables improved type checking of your code.
  • Improved type inference.
  • Method parameter reflection.
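The language features above are easy to demonstrate in one small, self-contained sketch. The `Greeter` interface and the class name below are purely illustrative, not part of any Java 8 API:

```java
import java.util.Arrays;
import java.util.List;

public class LambdaDemo {
    // A single-method (functional) interface, so it can be implemented
    // with a lambda expression.
    interface Greeter {
        String greet(String name);

        // A default method adds behaviour to an interface without
        // breaking existing implementations.
        default String greetLoudly(String name) {
            return greet(name).toUpperCase();
        }
    }

    public static void main(String[] args) {
        // Lambda expression implementing the functional interface.
        Greeter greeter = name -> "Hello, " + name;
        System.out.println(greeter.greet("Java 8"));       // Hello, Java 8
        System.out.println(greeter.greetLoudly("Java 8")); // HELLO, JAVA 8

        // Method reference: a compact lambda for a method that already has a name.
        List<String> names = Arrays.asList("b", "a", "c");
        names.sort(String::compareTo); // List.sort is itself a new default method
        System.out.println(names);     // [a, b, c]
    }
}
```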

Collections

  • Classes in the new java.util.stream package provide a Stream API to support functional-style operations on streams of elements. The Stream API is integrated into the Collections API, which enables bulk operations on collections, such as sequential or parallel map-reduce transformations.
  • Performance Improvement for HashMaps with Key Collisions
  • Compact Profiles contain predefined subsets of the Java SE platform and enable applications that do not require the entire Platform to be deployed and run on small devices.
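A quick sketch of what such a functional-style pipeline looks like on an ordinary collection (the word list is just sample data):

```java
import java.util.Arrays;
import java.util.List;

public class StreamDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("stream", "api", "java", "eight");

        // Sequential map-reduce: total length of all words longer than 3 characters.
        int total = words.stream()
                         .filter(w -> w.length() > 3)
                         .mapToInt(String::length)
                         .sum();
        System.out.println(total); // 6 + 4 + 5 = 15

        // The same pipeline runs in parallel just by switching the stream source.
        int parallelTotal = words.parallelStream()
                                 .filter(w -> w.length() > 3)
                                 .mapToInt(String::length)
                                 .sum();
        System.out.println(total == parallelTotal); // true
    }
}
```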

Security

  • Client-side TLS 1.2 enabled by default
  • New variant of AccessController.doPrivileged that enables code to assert a subset of its privileges, without preventing the full traversal of the stack to check for other permissions
  • Stronger algorithms for password-based encryption
  • SSL/TLS Server Name Indication (SNI) Extension support in JSSE Server
  • Support for AEAD algorithms: The SunJCE provider is enhanced to support AES/GCM/NoPadding cipher implementation as well as GCM algorithm parameters. And the SunJSSE provider is enhanced to support AEAD mode based cipher suites. See Oracle Providers Documentation, JEP 115.
  • KeyStore enhancements, including the new Domain KeyStore type java.security.DomainLoadStoreParameter, and the new command option -importpassword for the keytool utility
  • SHA-224 Message Digests
  • Enhanced Support for NSA Suite B Cryptography
  • Better Support for High Entropy Random Number Generation
  • New java.security.cert.PKIXRevocationChecker class for configuring revocation checking of X.509 certificates
  • 64-bit PKCS11 for Windows
  • New rcache Types in Kerberos 5 Replay Caching
  • Support for Kerberos 5 Protocol Transition and Constrained Delegation
  • Kerberos 5 weak encryption types disabled by default
  • Unbound SASL for the GSS-API/Kerberos 5 mechanism
  • SASL service for multiple host names
  • JNI bridge to native JGSS on Mac OS X
  • Support for stronger strength ephemeral DH keys in the SunJSSE provider
  • Support for server-side cipher suites preference customization in JSSE
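As one concrete example from this list, the new AES/GCM support in the SunJCE provider can be exercised with a few lines of standard JCE code. This is a throwaway sketch: the key is generated on the fly, the IV handling is simplified, and nothing here should be mistaken for production-grade key management:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class GcmDemo {
    public static void main(String[] args) throws Exception {
        // Generate a throwaway 128-bit AES key.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        // GCM needs a unique IV per encryption; 12 bytes is the common choice.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        // The AEAD cipher transformation newly supported by SunJCE in Java 8.
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("secret".getBytes("UTF-8"));

        // Decrypt with the same key, IV and tag length; GCM verifies integrity too.
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(cipher.doFinal(ciphertext), "UTF-8")); // secret
    }
}
```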

JavaFX

  • The new Modena theme has been implemented in this release. For more information, see the blog at fxexperience.com.
  • The new SwingNode class enables developers to embed Swing content into JavaFX applications. See the SwingNode javadoc and Embedding Swing Content in JavaFX Applications.
  • The new UI Controls include the DatePicker and the TreeTableView controls.
  • The javafx.print package provides the public classes for the JavaFX Printing API. See the javadoc for more information.
  • The 3D Graphics features now include 3D shapes, camera, lights, subscene, material, picking, and antialiasing. The new Shape3D (Box, Cylinder, MeshView, and Sphere subclasses), SubScene, Material, PickResult, LightBase (AmbientLight and PointLight subclasses), and SceneAntialiasing API classes have been added to the JavaFX 3D Graphics library. The Camera API class has also been updated in this release. See the corresponding class javadoc for javafx.scene.shape.Shape3D, javafx.scene.SubScene, javafx.scene.paint.Material, javafx.scene.input.PickResult, javafx.scene.SceneAntialiasing, and the Getting Started with JavaFX 3D Graphics document.
  • The WebView class provides new features and improvements. Review Supported Features of HTML5 for more information about additional HTML5 features including Web Sockets, Web Workers, and Web Fonts.
  • Enhanced text support including bi-directional text and complex text scripts such as Thai and Hindi in controls, and multi-line, multi-style text in text nodes.
  • Support for Hi-DPI displays has been added in this release.
  • The CSS Styleable* classes became public API. See the javafx.css javadoc for more information.
  • The new ScheduledService class allows a service to be restarted automatically.
  • JavaFX is now available for ARM platforms. JDK for ARM includes the base, graphics and controls components of JavaFX.

Tools

  • The jjs command is provided to invoke the Nashorn engine.
  • The java command launches JavaFX applications.
  • The java man page has been reworked.
  • The jdeps command-line tool is provided for analyzing class files.
  • Java Management Extensions (JMX) provide remote access to diagnostic commands.
  • The jarsigner tool has an option for requesting a signed time stamp from a Time Stamping Authority (TSA).
  • Javac tool
  • The -parameters option of the javac command can be used to store formal parameter names and enable the Reflection API to retrieve formal parameter names.
  • The type rules for equality operators in the Java Language Specification (JLS) Section 15.21 are now correctly enforced by the javac command.
  • The javac tool now has support for checking the content of javadoc comments for issues that could lead to various problems, such as invalid HTML or accessibility issues, in the files that are generated when javadoc is run. The feature is enabled by the new -Xdoclint option. For more details, see the output from running "javac -X". This feature is also available in the javadoc tool, and is enabled there by default.
  • The javac tool now provides the ability to generate native headers, as needed. This removes the need to run the javah tool as a separate step in the build pipeline. The feature is enabled in javac by using the new -h option, which is used to specify a directory in which the header files should be written. Header files will be generated for any class which has either native methods, or constant fields annotated with a new annotation of type java.lang.annotation.Native.
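A small sketch of what the -parameters option enables through the Reflection API. The `transfer` method below is just an example; note that real parameter names are only retained when the class was compiled with `javac -parameters`, which is what `Parameter.isNamePresent()` reports:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParamDemo {
    // An example method whose parameter names we want to inspect at runtime.
    public static void transfer(String account, long amount) { }

    public static void main(String[] args) throws Exception {
        Method m = ParamDemo.class.getMethod("transfer", String.class, long.class);
        for (Parameter p : m.getParameters()) {
            // isNamePresent() is true only when compiled with "javac -parameters";
            // otherwise the names fall back to arg0, arg1, ...
            System.out.println(p.getName() + " (real name kept: " + p.isNamePresent() + ")");
        }
    }
}
```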

Javadoc tool

  • The javadoc tool supports the new DocTree API that enables you to traverse Javadoc comments as abstract syntax trees.
  • The javadoc tool supports the new Javadoc Access API that enables you to invoke the Javadoc tool directly from a Java application, without executing a new process. See the javadoc what's new page for more information.
  • The javadoc tool now has support for checking the content of javadoc comments for issues that could lead to various problems, such as invalid HTML or accessibility issues, in the files that are generated when javadoc is run. The feature is enabled by default, and can also be controlled by the new -Xdoclint option. For more details, see the output from running "javadoc -X". This feature is also available in the javac tool, although it is not enabled by default there.

Internationalization

  • Unicode Enhancements, including support for Unicode 6.2.0
  • Adoption of Unicode CLDR Data and the java.locale.providers System Property
  • New Calendar and Locale APIs
  • Ability to Install a Custom Resource Bundle as an Extension

Deployment

  • For sandbox applets and Java Web Start applications, URLPermission is now used to allow connections back to the server from which they were started. SocketPermission is no longer granted.
  • The Permissions attribute is required in the JAR file manifest of the main JAR file at all security levels.

Date-Time Package

  • A new set of packages that provide a comprehensive date-time model.
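A small example of the new java.time packages; the dates below (the Java 8 GA date and the date of this post) are just for illustration:

```java
import java.time.LocalDate;
import java.time.Month;
import java.time.Period;
import java.time.format.DateTimeFormatter;

public class DateTimeDemo {
    public static void main(String[] args) {
        // Immutable, fluent date values from the new java.time packages.
        LocalDate release = LocalDate.of(2014, Month.MARCH, 18);
        LocalDate post = LocalDate.of(2014, Month.APRIL, 24);

        // Human-oriented difference between two dates.
        Period age = Period.between(release, post);
        System.out.println(age.getMonths() + " months, " + age.getDays() + " days");

        // Built-in ISO formatting, no more SimpleDateFormat thread-safety worries.
        System.out.println(release.format(DateTimeFormatter.ISO_DATE)); // 2014-03-18
    }
}
```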

Scripting 

  • Nashorn JavaScript Engine

Pack200

  • Pack200 Support for Constant Pool Entries and New Bytecodes Introduced by JSR 292
  • JDK8 support for class files changes specified by JSR-292, JSR-308 and JSR-335

IO and NIO

  • New SelectorProvider implementation for Solaris based on the Solaris event port mechanism. To use, run with the system property java.nio.channels.spi.Selector set to the value sun.nio.ch.EventPortSelectorProvider.
  • Decrease in the size of the /jre/lib/charsets.jar file
  • Performance improvement for the java.lang.String(byte[], *) constructor and the java.lang.String.getBytes() method.

java.lang and java.util Packages

  • Parallel Array Sorting
  • Standard Encoding and Decoding Base64
  • Unsigned Arithmetic Support
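The three additions above can be shown in one short sketch (the sample data is arbitrary):

```java
import java.util.Arrays;
import java.util.Base64;

public class UtilDemo {
    public static void main(String[] args) {
        // Standard Base64 encoding and decoding, finally in the JDK itself.
        String encoded = Base64.getEncoder().encodeToString("Java 8".getBytes());
        System.out.println(encoded);
        System.out.println(new String(Base64.getDecoder().decode(encoded))); // Java 8

        // Unsigned arithmetic helpers on the boxed integer types.
        int allBitsSet = -1; // read as unsigned this is 2^32 - 1
        System.out.println(Integer.toUnsignedString(allBitsSet)); // 4294967295

        // Parallel array sorting via the Fork/Join common pool.
        int[] data = {5, 3, 1, 4, 2};
        Arrays.parallelSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 4, 5]
    }
}
```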
JDBC

  • The JDBC-ODBC Bridge has been removed.
  • JDBC 4.2 introduces new features.
Java DB

  • JDK 8 includes Java DB 10.10.
Networking

  • The class java.net.URLPermission has been added.
  • In the class java.net.HttpURLConnection, if a security manager is installed, calls that request to open a connection require permission.
Concurrency

  • Classes and interfaces have been added to the java.util.concurrent package.
  • Methods have been added to the java.util.concurrent.ConcurrentHashMap class to support aggregate operations based on the newly added streams facility and lambda expressions.
  • Classes have been added to the java.util.concurrent.atomic package to support scalable updatable variables.
  • Methods have been added to the java.util.concurrent.ForkJoinPool class to support a common pool.
  • The java.util.concurrent.locks.StampedLock class has been added to provide a capability-based lock with three modes for controlling read/write access.
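Two of these additions in a minimal sketch: the lambda-friendly `merge` operation on ConcurrentHashMap and an optimistic read with StampedLock (the hit-counter scenario is made up for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.StampedLock;

public class ConcurrencyDemo {
    public static void main(String[] args) {
        // Lambda-based aggregate operations on ConcurrentHashMap.
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();
        hits.merge("page", 1, Integer::sum); // insert 1 if absent, else add
        hits.merge("page", 1, Integer::sum);
        System.out.println(hits.get("page")); // 2

        // StampedLock: an optimistic read does not block writers at all.
        StampedLock lock = new StampedLock();
        long stamp = lock.tryOptimisticRead();
        int value = hits.get("page");
        if (!lock.validate(stamp)) {
            // A write intervened; fall back to a full read lock.
            stamp = lock.readLock();
            try {
                value = hits.get("page");
            } finally {
                lock.unlockRead(stamp);
            }
        }
        System.out.println(value); // 2
    }
}
```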
Java XML - JAXP

Monday, April 21, 2014

Security: How to Focus on Risk that Matters

Security is, or should be, one of the most important areas in your entire IT landscape. It should be on the priority list in all layers of your IT organisation, and everyone should be aware and involved up to a certain level. The issue with security is that it is not always clear where you need to put your focus: which parts are important and which parts are less important (though still important). The people at Rapid7 have put together a nice webcast to help you understand more about how to prioritize certain things in your security strategy.


You can watch the recording of this webcast here:
All assets aren't created equal, and they shouldn't be treated the same way. Security professionals know the secret to running an effective risk management program is providing business context to risk. However, it's easier said than done. Every organization is unique: all have different combinations of systems, users, business models, compliance requirements, and vulnerabilities. Many security products tell you what risk you should focus on first, but don't take into account the unique makeup and priorities of each organization.

Night Vision For Your Network: How to Focus on Risk that Matters



Upgrade Oracle APEX ORA-22288 error resolved

Oracle APEX provides a very nice and easy platform to build small (or even large) web-based applications within the Oracle APEX framework on top of an Oracle database. For developers who want to work with Oracle APEX on their own laptop, and who do not want to deploy it directly on their workstation's operating system, there is the option to download a complete Linux operating system with a working APEX installation. One of the things you see with downloading a virtual image is that they are not always on the latest version and patch level. In essence this is not an issue, because you are using it as a local test and development system.

However, in some cases you might want to be on the latest version of APEX because you would like to work with some of the latest features available. Upgrading APEX is quite easy; however, there are some things you have to keep in mind to save yourself some time and frustration.

The steps to upgrade to APEX 4.0 (and 4.*) are described by Oracle as below:

1) Download the latest version of Oracle APEX

2) Unzip the zip file, preferably in a location with a short path. For example /home/oracle

3) Change your working directory to the unzipped apex directory. For example /home/oracle/apex

4) Login to the database:
$ sqlplus /nolog
SQL> CONNECT SYS as SYSDBA
Enter Password:
SYS_Password

5) Execute the first part of the installation:
SQL> @apexins SYSAUX SYSAUX TEMP /i/

6) The previous step will log you out of the database, log back into the database as described above.

7) Execute the below command, where APEX_HOME is the location where you have unzipped the installation software (see NOTE1)
SQL> @apxldimg.sql APEX_HOME

8) Execute the below command:
SQL> @apxchpwd

9) Open your browser and check if the installation was a success by opening http://localhost:8080/apex/apex_admin

In essence these are all the steps you need to complete for your installation or upgrade of Oracle APEX to the latest version. If all is OK, without any errors, you could be done in a couple of minutes and ready to start developing and testing with the latest version of Oracle APEX. However, there is one small catch: refer to NOTE1 below, which you need to keep in mind when executing step 7.



NOTE1:
The Oracle documentation states exactly the following:
SQL> @apxldimg.sql APEX_HOME
[Note: APEX_HOME is the directory you specified when unzipping the file. For example, with Windows 'C:\'.]

If you do exactly this you should be fine and everything should run as expected. However, you have to read the line carefully: you have to specify the location where you unzipped the file, for example /home/oracle. The issue is that a lot of people (me included) do not read this correctly and think that, because the script will need some other scripts, you have to state the location where the installation software itself is located, for example /home/oracle/apex. This is however incorrect.

The installation software will, at a certain point, start looking for the images it needs to load and will extend the given path with /apex/images. If you provide the wrong path (descending into the unzipped apex location) you might get the below error when executing one of the steps:

SQL> @apxldimg.sql /home/oracle/apex
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Directory created.
Directory created.
declare
*
ERROR at line 1:
ORA-22288: file or LOB operation FILEOPEN failed 
The system cannot find the path specified. 
ORA-06512: at "SYS.DBMS_LOB", line 523 
ORA-06512: at "SYS.XMLTYPE", line 287 
ORA-06512: at line 17 
Commit complete.
timing for: Load Images
Elapsed: 00:00:00.03
Directory dropped.
Directory dropped.

While, if you do it correctly you will get the below output:
SQL> @apxldimg.sql /home/oracle
PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.
. Loading images directory: /home/oracle/apex/images
Directory created.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

Commit complete.

Directory dropped.
timing for: Load Images
Elapsed: 00:02:36.41
SQL>

In short: selecting the wrong path, and not following the instructions from Oracle to the letter (even though they are not described very clearly), might result in a situation where your upgrade or installation does not go as you expect.

Saturday, April 19, 2014

Oracle Database Backup Service explained

Oracle databases are commonly used for mission critical systems; in many cases databases are configured in a high availability setup spanning two or more datacenters. Even though a dual or triple datacenter setup protects you against a number of risks, for example a fire in one of the datacenters, it does not excuse you from implementing a proper backup and recovery strategy. In cases where your data is corrupted, or for any other reason you need to consult a backup, you will most likely rely on Oracle RMAN. RMAN is the default tool for backup and recovery and ships with the Oracle database.

The below diagram shows a proper way of conducting backups. In this case all the data in database A and B is written to the tape library in another datacenter. Databases C and D write the data to the other datacenter. This ensures that your data is always at two locations. If for some reason datacenter-1 should be considered a total loss you can still recover your data from the other datacenter. For mission critical systems you most likely also will have a standby database in the other datacenter however this is not included in this diagram.

Even though this is considered a best practice, for some companies it is a costly implementation. Especially smaller companies do not want to invest in a dual, or even triple, datacenter architecture. For this reason you commonly see that the data is written to tape in the same datacenter where the database is hosted and that a person collects the tapes on a daily basis. Or, in some worst case scenarios, the tapes just reside in the same datacenter. This means that in case of a fire the entire data collection of a company can be considered lost.

Oracle has recently introduced a solution for this issue by adding a cloud backup service to its cloud services portfolio. It provides the option to keep using your standard RMAN tooling; however, instead of talking to a local tape library, or one in another datacenter, you write your backup to the Oracle cloud. This cloud service, named Oracle Database Backup Service, requires you to install the Oracle Database Cloud Backup Module on your database server. You can use the installed module as an RMAN channel for your backups. By using encryption and compression you can ensure that your backup is sent quickly and securely to the Oracle Database Backup Service.


The above diagram shows the flow used when you back up to the Oracle Database Backup Service. This model works when you have, for example, only a single datacenter. However, it can also work as a strategic model when you have multiple datacenters, and even when you have mixed this with cloud-based hosting.

The above diagram shows how you can use the Oracle Database Backup Service to do a cloud-to-cloud backup. If you, for example, host your database at Azure or Amazon, you might want to back up your data at the same backup service provider that all your other datacenters are using, or you might want to have it at Oracle to ensure your data is not with one single company. In both cases you can use the same mechanism to perform the backup to the Oracle Database Backup Service.

Creating an account at Oracle and ordering backup space is easy and can be done completely online. As you can see from the screenshot below you can order per terabyte of backup space.


One thing you have to keep in mind, as with all cloud-based solutions: there are some legal considerations you need to review. When using the Oracle Database Backup Service you are moving your data away from your company and into the trust of another company. Oracle has provided numerous security options to ensure your data is safe; however, from a legal point of view you have to be sure you are allowed to move the data into the trust of Oracle. For most US-based companies this will not be an issue; for US-based government agencies and non-US companies it is something you might want to check with your legal department, just to be sure.

Friday, April 18, 2014

Enterprise cloud spending $235B

Companies are moving to the cloud. The trend is to move more and more business functions to cloud-based solutions. A couple of years ago companies were not including cloud in the main consideration when thinking about new or improved IT solutions; currently, on almost every shortlist, we see cloud-based solutions as a viable option. This shows in the forecasts and the history of spending on cloud technology and on the cloud-based architectures on which companies are deploying enterprise functionality.

Ryan Huang reports on the ZDNet page about the growth of cloud-based spending and the forecast for 2017. Below you can see the graph showing the predicted rise of cloud spending in the upcoming years.


This prediction shows that companies currently investing in building cloud-based platforms are making a solid investment, as the trend is that cloud-based solutions, and the associated customer investment, will continue to grow, and for good reasons.

Monday, April 14, 2014

Oracle Big Data Appliance node layout

The Oracle Big Data Appliance ships in a full rack configuration with 18 compute nodes that you can use. Each node is a Sun X4-2L or X3-2L server, depending on whether you purchased an X4-2 or X3-2 Big Data Appliance; both models provide you with 18 compute nodes. The below image shows the rack layout of both the X3-2 and X4-2 rack. Important to note is that the servers are numbered bottom-up. A starter rack has only 6 compute nodes and you can expand the rack with in-rack expansions of 6 nodes, meaning you can grow from 6 to 12 to a full rack as your capacity needs grow.


In every setup, regardless of whether you have a starter rack, a full rack or a rack extended with a single in-rack expansion of 6 nodes (making it a 12-node cluster), nodes 1, 2 and 3 have a special role. Going bottom-up, starting with node 1, we have the following software running on the nodes:

Node 1:
First NameNode
Zookeeper
Failover controller
Balancer
Puppet master
Puppet agent
NoSQL database
Datanode

Node 2:
Second NameNode
Zookeeper
Failover controller
MySQL backup server
NoSQL Admin
Puppet agent
DataNode

Node 3:
Job Tracker
Zookeeper
CMServer
ODI Agent
MySQL primary server
Hue
Hive
Beeswax
Puppet Agent
NoSQL
DataNode

Nodes 4 through 6/12/18 (depending on rack size):
Datanode
Tasktracker
Cloudera manager Agent
Puppet Agent
NoSQL

Understanding what runs where is of vital importance when you are working with an Oracle Big Data Appliance. It helps you understand which parts can be brought down without too much effect on the system and which parts you should be more careful about. As you can see from the above list, some parts are made highly available, while bringing down other parts for maintenance will result in loss of service.

Friday, April 04, 2014

Oracle ZFS storage appliance configuration

Oracle is incorporating its ZFS storage appliances in more and more engineered systems. Even if you are not a pure storage administrator or consultant, and more into Oracle software and engineered systems, it is good to have some basic understanding of how a ZFS storage appliance works and what you can potentially do with it to enhance your solution and provide a better performing and more maintainable result.

The issue with hardware-based solutions is commonly that you cannot just play with them without ordering the device. This holds a lot of people back from gaining experience before they get involved in a project where this specific hardware solution is used. The Oracle ZFS storage appliance is a bit different; the reason for this is that Oracle has decided to create a virtual appliance you can use to play with the solution. The virtual appliance provides you all the options to test and work with the storage appliance in an Oracle VirtualBox image, in the same manner as you would with the real physical hardware.

Oracle ZFS storage appliance


The virtual Oracle ZFS storage appliance can be downloaded from the Oracle site. After unpacking and importing it into Oracle VirtualBox you will be up and running in a matter of minutes. One thing to keep in mind: this is a system to play around with; it is not intended to be used in any serious solution beyond playing and testing. When the initial boot has been completed you will notice that the welcome screen of the host informs you where you can point your browser to.

A minimal setup is done during the initial boot process; the full configuration and setup will be done via the browser. This is exactly the same manner as with the real physical ZFS appliance in your datacenter. The primary things you need to complete during the initial setup are:
  • Host Name
  • DNS Domain
  • Default Router
  • DNS server
  • Password

After completing those steps you will be pointed to a https://{ip}:215 address which will be the main URL for maintaining the ZFS storage appliance, or rather the ZFS storage appliance simulator in this case.

Oracle ZFS configuration STEP 1:
Before we can configure the machine you will have to log in; for this you can use the root account in combination with the password you entered during the initial CLI configuration.


After login you will be shown the welcome screen, which again tells you that this is only to be used for demonstration purposes. You can also use it for some very small tests; however, remember that this system is not a solution for a real storage need and is just to play with.


Oracle ZFS configuration STEP 2:
The next step is to ensure you have all the correct networking in place to be able to use your ZFS appliance in the right manner within your corporate infrastructure.

Oracle ZFS appliance network configration

As you can see from the above screenshot, there is a datalink and an interface already configured, however still named "untitled", which is a hint that you need to do some configuration before they become usable. By clicking the pencil icon you can edit the details of both the datalinks and the interfaces, as shown below.

Oracle ZFS storage configuration

After configuring the ZFS storage appliance interfaces and datalinks you will be asked to configure the routing tables, DNS and NTP.




Having done this, the pure network configuration steps are complete. Optionally you can now select how you will embed the new storage into your corporate authentication and authorization solution. You can use solutions like NIS, LDAP or an Active Directory solution you might already have in place within your corporate IT infrastructure.


More information on how to connect a new ZFS appliance to an already existing Microsoft Active Directory can be found in the Oracle documentation.

Oracle ZFS configuration STEP 3:
In step 3 the actual storage configuration will be done. Here you will have to select how you will use the disks and what type of data profile you will be using. All previous steps concern how you fit the appliance into your existing IT infrastructure; these steps concern how you will actually configure and use the appliance on a storage level. It is advisable to have given this some thorough thought before you do the actual implementation of the appliance.

The first decision you will have to make is to decide how many storage pools your device will have (initially).


During this implementation we will only be using a single storage pool. The next important decision is what kind of storage profile you will be using within your pool or pools; you can have different storage profiles per pool. The following storage profiles are available:




Double parity
RAID in which each stripe contains two parity disks. This yields high capacity and high availability, as data remains available even with the failure of any two disks. The capacity and availability come at some cost to performance: parity needs to be calculated on writes (costing both CPU and I/O bandwidth) and many concurrent I/Os need to be performed to access a single block (reducing available I/O operations). The performance effects on read operations are often greatly diminished when cache is available.

Mirrored
Data is mirrored, reducing capacity by half, but yielding a highly reliable and high-performing system. Recommended when space is considered ample, but performance is at a premium (for example, database storage).

Single Parity, Narrow stripes
RAID in which each stripe is kept to three data disks and a single parity disk. At normal stripe widths, single parity RAID offers few advantages over double parity RAID -- and has the major disadvantage of only being able to survive a single disk failure. However, at narrow stripe widths, this single parity RAID configuration can fill a gap between mirroring and double parity RAID: its narrow width offers better random read performance than the wider stripe double parity configuration, but it does not have quite the capacity cost of a mirrored configuration. While this configuration may be an appropriate compromise in some situations, it is generally not recommended unless capacity and random read performance must be carefully balanced: those who need more capacity are encouraged to opt for a wider, double-parity configuration; those for whom random read performance is of paramount importance are encouraged to consider either a mirrored configuration or (if the workload is amenable to it) a double parity RAID configuration with sufficient memory and dedicated cache devices to service the workload without requiring disk-based I/O.

Striped
Data is striped across disks, with no redundancy whatsoever. While this maximizes both performance and capacity, it comes at great cost: a single disk failure will result in data loss. This configuration is not recommended, and should only be used when data loss is considered to be an acceptable trade off for marginal gains in capacity and performance.

Triple mirrored
Data is triply mirrored, reducing capacity by one third, but yielding a very highly reliable and high-performing system. This configuration is intended for situations in which maximum performance, and availability are required while capacity is much less important (for example, database storage). Compared with a two-way mirror, a three-way mirror adds additional protection against disk failures and latent disk failures in particular during reconstruction for a previous failure.

Triple parity, wide stripes
RAID in which each stripe has three disks for parity, and for which wide stripes are configured to maximize for capacity. Wide stripes can exacerbate the performance effects of double parity RAID: while bandwidth will be acceptable, the number of I/O operations that the entire system can perform will be greatly diminished. Resilvering data after one or more drive failures can take significantly longer due to the wide stripes and low random I/O performance. As with other RAID configurations, the presence of cache can mitigate the effects on read performance.

The decision of which profile to apply depends on a number of variables, such as the type of performance you need and, for example, how "secure" your data should be in relation to data loss and hardware failure. The decision you make has a direct impact on performance as well as on the usable storage of your appliance. It is of the highest importance that, before you do the installation, you have discussed the options with the consumers of your storage. These can be, for example, database and application administrators or even the business.

After completing this section of the setup you should have a situation similar to the one shown below.


This completes the primary initial setup, and you will be able to start distributing the storage to servers and users who will make use of the new ZFS appliance within your corporate IT infrastructure.

Thursday, March 20, 2014

Oracle Enterprise Manager plugin development PortARUId

Oracle Enterprise Manager is promoted by Oracle as its default monitoring and maintenance solution. Oracle provides monitoring solutions for a growing number of systems. However, because Oracle cannot create functionality for every system that might be a candidate for monitoring and maintenance by Oracle Enterprise Manager, an extensibility option is available. Oracle gives you the option to develop your own plugins for Oracle Enterprise Manager and thereby extend its capabilities for your own company, or to create a commercial product with it.

In essence Oracle Enterprise Manager provides capabilities to monitor targets on a diverse number of operating systems. When developing a custom plugin you might develop this for a specific platform. For example, the plugin you develop is only applicable on Linux systems and not on Windows systems. To ensure that your plugin will only be used on those platforms you can limit the deployment options during the development phase.

One of the base files in your plugin is the plugin.xml file, which provides the general information about your plugin and tells Oracle Enterprise Manager how to handle it. In this file you also have the option to limit the operating systems on which the plugin can be discovered and deployed. This is of special interest when you develop a plugin that is not usable on all platforms.

Within plugin.xml you can define a number of things, among them the type of operating system on which the plugin can be hosted (on the Oracle Enterprise Manager management server) and which deployed agents are applicable to use the plugin based upon the target operating system, and you can control which targets should be discovered as potential deployment targets for this plugin.

PluginOMSOSAruId: the PluginOMSOSAruId element within plugin.xml states which Oracle Enterprise Manager management servers can run this plugin. In almost all cases this is applicable to all operating systems, as the extensibility framework on the management server protects you (up to a certain level) from making decisions that would limit this. The value is therefore commonly set to 2000, which refers to "all":

<PluginOMSOSAruId value="2000">
</PluginOMSOSAruId>

Within the certification section of the plugin.xml file you can define the applicable operating systems on both agent and discovery. Both are defined as a component type as can be seen in the below example:

<Certification>
  <Component type="Agent">
    <CertifiedPorts>
      <PortARUId value="46" />
      <PortARUId value="226" />
    </CertifiedPorts>
  </Component>
  <Component type="Discovery">
    <CertifiedPorts>
      <PortARUId value="46" />
      <PortARUId value="226" />
    </CertifiedPorts>
  </Component>
</Certification>

Correct values for component type are:
  • Agent (Management Agent component)
  • Discovery (Discovery component)

Correct values for PortARUId are:
  • 46 (Linux x86 (32-bit))
  • 212 (AIX 5L and 6.1 (64-bit))
  • 226 (Linux x86-64 (64-bit))
  • 23 (Solaris SPARC (64-bit))
  • 267 (Solaris x86-64 (64-bit))
  • 233 (Microsoft Windows x86-64 (64-bit))
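Putting these pieces together, a plugin intended for the Linux example mentioned earlier would list only the Linux PortARUId values in both components. The fragment below is a minimal sketch assembled from the values above, not a complete plugin.xml:

```xml
<!-- Hypothetical fragment: a plugin that should only be discovered
     and deployed on Linux hosts lists only the Linux PortARUIds,
     while the management-server side stays open to all platforms. -->
<PluginOMSOSAruId value="2000"/>
<Certification>
  <Component type="Agent">
    <CertifiedPorts>
      <PortARUId value="46" />   <!-- Linux x86 (32-bit) -->
      <PortARUId value="226" />  <!-- Linux x86-64 (64-bit) -->
    </CertifiedPorts>
  </Component>
  <Component type="Discovery">
    <CertifiedPorts>
      <PortARUId value="46" />
      <PortARUId value="226" />
    </CertifiedPorts>
  </Component>
</Certification>
```

With this certification in place, Windows and Solaris agents will not be offered as deployment targets for the plugin.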

Saturday, March 15, 2014

The Capgemini perspectives on big data

Capgemini share their perspectives on big data and analytics. Perspectives include defining big data, good governance in a big data world, finding the value behind the big data hype, and what building blocks organizations will need to set in place to make it work for them.

It looks at the people aspects, the skills you need to move towards data-driven decision making, digital transformation, and the impact on the customer experience. Big data is typically very specific to an industry, so although there are common technologies and some common information sets, ultimately each industry sector has many different new data sources and different business issues.

Therefore a key part of this series is looking at how big data is affecting sectors and the associated opportunities it presents. Visit the Capgemini expert pages or the Capgemini Big Data page for more information.

Wednesday, March 12, 2014

Big data in the oil and gas industry

The oil and gas industry is in general an industry sector which collects large amounts of data. This applies not only to companies who are active in upstream but also to companies who are active in midstream and downstream. Especially companies who work in all sectors of the industry have large amounts of data, ranging from seismic data to data on how the distribution chain is performing, and much more. To succeed in an industry like oil and gas and to achieve compelling advantages over competitors, it can be very beneficial to combine all this raw and unstructured data to help the organisation. Currently most of this data is siloed and not accessible or usable for analysis over the whole chain. It is also commonly impossible to use this data in a manner that provides results usable within business planning.

For those cases, and for companies related to the oil and gas industry, it can be very beneficial to start thinking about a big data strategy. Implementing a big data strategy goes way beyond "only" implementing a Hadoop cluster and handing it to a number of tech people. Hortonworks recently published an article in which they provide a first insight into how Hadoop and a big data strategy can be used in the oil and gas industry. A good starting point for thinking about a big data strategy is the image below.


The above image shows this from a Hortonworks perspective; however, the same can be achieved with other Hadoop implementation vendors, for example Oracle or others. What is interesting about this image as a starting point is that it gives a first impression of the sources within the oil and gas industry that could potentially be used within a big data strategy.

Monitor your network connections on Linux

In some cases and in some environments you want to keep track of all the network connections a Linux server or workstation is making. For example, if you are planning to control your local network in a better way and are thinking about implementing stricter firewall rules, it is good to investigate what users are trying to access. In general, external connections to webservers are common and should be allowed, and most likely you also know which servers in your local network are likely to be accessed by other local servers and workstations. However, a lot of hidden network traffic may be taking place that you are not aware of, and when closing a lot of ports in your network you might start hindering daily operations.

In those cases it is good to start monitoring which traffic is executed so you can investigate it and make a network connection diagram. For this you can use logging on network switches, routers and firewalls. However, an easier way, in my opinion, is to ensure all your workstations run a copy of tcpspy, which will collect data for some time and report it back to a central location.

tcpspy is a little program that logs all connections the moment they connect or disconnect. By default tcpspy installs in a manner that it will automatically start as a daemon and write all information to /var/log/syslog, capturing everything. You can however create rules for what tcpspy needs to capture by editing the file /etc/tcpspy.rules or by entering a new rule with the tcpspy -e option.

Before implementing a stricter local firewall rule on the workstations in my private home network, I first had tcpspy running for a couple of weeks, extracted all information from /var/log/syslog to a central location, and visualized it with a small implementation of D3.js. This showed that a number of unexpected, however valid, network connections were made on a regular basis which I was unaware of.
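Extracting the connection data from syslog can be done with a small script. The sketch below aggregates tcpspy connect events per user and remote endpoint; the sample log lines are hypothetical and the exact tcpspy log format on your system may differ, so adjust the regular expression accordingly.

```python
import re
from collections import Counter

# Hypothetical tcpspy-style syslog lines; check /var/log/syslog on
# your own system and adapt the regex to the actual format.
SAMPLE = """\
Mar 12 10:01:02 host tcpspy[123]: connect: proc (unknown), user alice, local 192.168.1.10:45678, remote 93.184.216.34:80
Mar 12 10:01:05 host tcpspy[123]: connect: proc (unknown), user alice, local 192.168.1.10:45680, remote 93.184.216.34:80
Mar 12 10:02:11 host tcpspy[123]: connect: proc (unknown), user bob, local 192.168.1.11:52000, remote 192.168.1.20:22
"""

LINE = re.compile(r"user (?P<user>[^,]+), local [^,]+, remote (?P<remote>\S+)")

def connection_counts(log_text):
    """Count connect events per (user, remote endpoint) pair."""
    counts = Counter()
    for line in log_text.splitlines():
        if "connect:" not in line:
            continue
        match = LINE.search(line)
        if match:
            counts[(match.group("user"), match.group("remote"))] += 1
    return counts

for (user, remote), n in connection_counts(SAMPLE).most_common():
    print(f"{user:10} -> {remote:25} {n}")
```

The resulting (user, remote, count) tuples are exactly the kind of aggregated data you can feed into a D3.js visualization.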

Implementing this on your local home network is not that difficult, especially if you have some scripted way of rolling out tooling to all workstations in an automated manner. It might also look a bit overdone in a home environment; however, considered as a test drive for preparing a blueprint to be implemented in a more business-like environment, it shows the value of being able to quickly visualize all internal and external network traffic.

When you are looking into ways to log all internal and external network connections made by a server or workstation, it might be a good move to give tcpspy a look, and when you are looking into ways to visualize the data you collect, you might be interested in the options provided by D3.js.

Saturday, March 08, 2014

Managing Oracle Big Data Appliance with Oracle Enterprise Manager

Oracle provides the Oracle Big Data Appliance as an engineered system for customers with a need to implement Hadoop-based solutions. As with most Oracle products, both hardware and software, Oracle uses Oracle Enterprise Manager as the primary monitoring and maintenance solution. For the big data appliance Oracle provides the "Oracle Enterprise Manager System Monitoring Plug-in for Oracle Big Data Appliance" in the form of an OEM plugin that can be added to Oracle Enterprise Manager the moment you add a big data appliance to your IT landscape.

Even though managing and tuning Hadoop is still work for skilled people, Oracle makes life a lot easier by providing a ready-built rack with both software and hardware that is tuned to work together. For example, Mammoth helps you in a great way during the initial setup of your big data appliance, and during operation Oracle Enterprise Manager and the plugin help you monitor and manage the larger part of the engineered system.

Because most of the parts in the Oracle Big Data Appliance are built by Oracle, and the parts that are not manufactured by Oracle are also found in most of the other engineered systems, there is the option to show you virtually everything on both the hardware and software level from within a single tool. As you can see from the below screenshot, there is a visually recognizable representation of your hardware within the tool, from where you can drill down to most of the components via the left pane menu.

Oracle Enterprise Manager for Oracle Big Data Appliance

It also provides you a single entry point and screen to see which component is located on which node in your cluster and what their current status is. In the below screenshot you can see an overview of a cluster within the Oracle Big Data Appliance.

Oracle Enterprise Manager for Oracle Big Data Appliance

As you can see from the above screenshot it is providing you a software component overview broken up per server node. Each server node, represented as a record in the table, shows the status (if it is on this particular server node) of the namenode, failover node, journal node, data node, job tracker, task tracker or where ZooKeeper is.

All this could also be achieved by stitching together your own tooling and scripting. However, because Oracle has had the option to combine the hardware and the software, and for a large part uses technology that is already widely used for monitoring and maintenance, you can be up and running in a fraction of the time that would be needed if you had to design and develop this yourself.

Thursday, March 06, 2014

Oracle Smart Flash Cache patching requirements

When using a PCI Express flash card in your server to enable the Oracle database to make use of the Oracle Smart Flash Cache options there are a number of things to consider. First of all, this only works when you are using Oracle Linux or Oracle Solaris as an operating system. This is commonly known, however, less commonly known is that you will have to ensure that your database is on a certain version.

Popular belief is that Oracle Database Smart Flash Cache is available and working with all Oracle database versions out of the box. However, when using Oracle Database 11.2.0.1.2 or earlier you will have to apply database patch 8974084. To be able to apply patch 8974084 you will first have to apply patch 9654983.

When you are trying to get Oracle Database Smart Flash Cache up and running on Oracle Database 11.2.0.1.2 or an earlier version, you have to apply those patches before you start configuring this option. If the patches are not applied, it will not work.

Wednesday, February 12, 2014

Oracle Exadata default passwords

When your Exadata is deployed it is by default equipped with a number of standard usernames and passwords. By default all root SSH keys and user accounts will be disabled; however, a number of accounts will be open and will have the standard passwords. Good practice dictates that all standard passwords should be changed immediately to ensure that nobody can misuse the default passwords. As a quick checklist you can find below the default accounts and passwords that will be enabled, and you should ensure they are changed.

Database Server:
  • root/welcome1
  • oracle/welcome1
  • grid/welcome1
  • grub/sos1Exadata
Exadata Storage Servers:
  • root/welcome1
  • celladmin/welcome1
  • cellmonitor/welcome1
InfiniBand switches:
  • root/welcome1
  • nm2user/changeme
Ethernet switches:
  • admin/welcome1
Power distribution units (PDUs):
  • admin/welcome1
  • root/welcome1
Database server ILOMs:
  • root/welcome1
Exadata Storage Server ILOMs:
  • root/welcome1
InfiniBand ILOMs:
  • ilom-admin/ilom-admin
  • ilom-operator/ilom-operator
Keyboard, video, mouse (KVM):
  • admin/welcome1
Keeping the default passwords in use is, from a security point of view, a very unwise decision, and this should be changed as soon as possible. When this is not done, the chances that an attacker can gain access to your Exadata machine increase enormously. In many companies a default process for resetting passwords is in place for more common servers; however, Exadata servers are not implemented by the hundreds a year in a single company, so processes might not always include them. Due to this it is an extra point of attention for administrators and security officers.

Oracle Database Smart Flash Cache

The Oracle database has, already since release 11g, the option to use Smart Flash Cache. What Smart Flash Cache enables you to do is extend the SGA buffer cache of your database without the need to extend the memory in your server, by using a level-2 caching option instead. For this you can use, for example, PCIe flash cache cards like the Sun Flash Accelerator F20 PCIe Card which ships from Oracle. However, other vendors also manufacture cards that can be used for this.

The below image shows what happens in essence when you are using the Oracle database Smart Flash Cache options.

  1. When a block is retrieved from the storage it is stored within the buffer cache of the system global area. 
  2. When using Smart Flash Cache, a block is not simply removed from the buffer cache but is evicted to the flash cache instead.
  3. When a block is needed again and is not available in the buffer cache in the SGA it is retrieved (if available) from the flash cache instead.
By implementing this you can avoid, up to a certain level, recurring calls to your storage device. Requesting a block from storage is slow, and if this can be avoided the benefits to the performance of your database are directly visible. Next to this, and often overlooked, is the fact that this also has a positive effect on other databases and applications that make use of the same shared storage device. Because you lower the I/O operations on your shared storage, there is more room to handle the requests from other applications, so there can be a performance gain through this indirect relation between the components.
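The three steps above can be sketched as a toy two-level cache. This is a minimal illustrative model of the eviction flow, not Oracle's implementation; the class name, sizes, and access pattern are all assumptions for the example.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Toy model of the Smart Flash Cache flow described above:
    blocks evicted from the small buffer cache drop into a larger
    flash cache instead of being discarded outright."""

    def __init__(self, buffer_blocks, flash_blocks):
        self.buffer = OrderedDict()   # LRU order: oldest entry first
        self.flash = OrderedDict()
        self.buffer_blocks = buffer_blocks
        self.flash_blocks = flash_blocks
        self.storage_reads = 0

    def read(self, block):
        if block in self.buffer:              # hit in the SGA buffer cache
            self.buffer.move_to_end(block)
        elif block in self.flash:             # step 3: serve from flash cache
            del self.flash[block]
            self._install(block)
        else:                                 # step 1: fetch from storage
            self.storage_reads += 1
            self._install(block)

    def _install(self, block):
        self.buffer[block] = True
        if len(self.buffer) > self.buffer_blocks:
            victim, _ = self.buffer.popitem(last=False)
            self.flash[victim] = True         # step 2: evict to flash cache
            if len(self.flash) > self.flash_blocks:
                self.flash.popitem(last=False)

cache = TwoLevelCache(buffer_blocks=2, flash_blocks=4)
for block in ["A", "B", "C", "A"]:   # "A" is evicted to flash, then re-read
    cache.read(block)
print(cache.storage_reads)           # 3 storage reads instead of 4
```

In the access pattern shown, re-reading block "A" after it was evicted from the buffer cache is served from the flash cache, saving one trip to the (slow) storage device.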

One good thing to keep in mind is that this option only works when your database is deployed on Oracle Linux or Oracle Solaris.

For more information please do have a look at the presentation on this subject which is embedded below: