When you are designing the architecture of a new database system there is a lot to consider. Besides the applications that will use the database, you have to think about hardware, networking, storage, failover systems, backup and recovery, and so on.
Traditionally, backups were in many cases written to tape; when disks and storage appliances came into play and became cheaper, backups started to be made to disk arrays. Network-attached storage is currently a common place to store backups, in most cases combined with tape storage to move backups to another location. Some companies use mirrored filers to keep backups at other locations and/or on standby in case a failover becomes necessary.
All of these options are still useful, and in some situations still the best choice, but there is now a new and very affordable alternative. Oracle, in cooperation with Amazon, provides a solution that lets you use RMAN to back up your database into the Amazon storage cloud. As part of its cloud computing offerings, Amazon provides Amazon S3 (Amazon Simple Storage Service).
Instead of backing up your database to tape or a storage filer, Oracle gives you the option to back it up into the cloud, onto Amazon's storage. Even though this has been possible for some time, many companies have not yet started to use it, due to a couple of wrong assumptions. Common (and wrong) assumptions are that it is costly, insecure, slow, only applicable if you run your database in an Amazon datacenter, or not applicable to your current setup. All of those assumptions are, as already stated, incorrect.
The cloud backup module is built so that you can use it as if you were backing up to your own storage filer: you can use the same scripts, tools and techniques you are used to, only now your target is the Amazon cloud. A restore can be done in the same way, and it is definitely faster than a restore from tape. This means the assumption "not applicable to the current setup" is in most cases incorrect; DBAs still have the same options they would have with their own storage filer.
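To illustrate how familiar this looks in practice, here is a minimal RMAN sketch. It assumes the cloud backup module has already been installed and configured, and that the library file name and parameter file path (libosbws11.so and /u01/app/oracle/dbs/osbws.ora) match your installation; both are illustrative and will differ per system:

```
RUN {
  # Allocate an SBT channel that points at the cloud backup library
  # instead of a tape driver; everything else is plain RMAN.
  ALLOCATE CHANNEL c1 DEVICE TYPE sbt
    PARMS 'SBT_LIBRARY=libosbws11.so, SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/dbs/osbws.ora)';
  BACKUP DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL c1;
}
```

A restore works the same way: allocate the same channel and run RESTORE DATABASE; the scripts you use today for tape or filer backups carry over almost unchanged.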
The other assumption, "only applicable if you run your database in an Amazon datacenter", is also not true. You might gain some speed when the database runs in the same Amazon datacenter, but it is by no means a requirement. It is, however, possible to run your database on the Amazon Elastic Compute Cloud.
Costly? Especially in the current financial situation you will have to face the question: what is this costing me? Currently it costs 3 dollar cents per GB per month. Now it is time to make a quick calculation for yourself, or for your customer; as a rough example, 1 TB of backups comes to about 1,024 GB × $0.03 ≈ $31 per month. Compare that with the cost of a new NetApp filer, plus what it roughly costs in cooling, rack space and power consumption, plus the cost of moving your backups to another location. Also take into account that if you want your own storage, you will most likely need someone who can administer the filer; those costs count as well. Now reconsider the 3 dollar cents per GB from Amazon. Another thing to consider is that with your own storage you will want to be prepared for data growth: you buy that extra shelf of disks so you can store your data and backups. With Amazon S3 you pay only for the storage you actually use, instead of also paying for all those empty disks in your filer. This can be very interesting for customers with a changing demand for storage.
Slow? That depends. What is your current way of backing up? If it is tape backup, you will gain a lot of speed. If you benchmark it against a backup to your own filer in your own datacenter, it will be somewhat slower, as your data travels over the public internet. However, even though bandwidth can be a problem, the Oracle Secure Backup Cloud Module has some features that speed things up considerably.
For example, Oracle Secure Backup identifies unused blocks in your database and does not include them in the backup. In almost every database more space is allocated than used; in a normal backup those blocks become part of the backup, but here they are left out, so they consume no bandwidth or time when you send your backup to the cloud. A second feature is the option to leave out undo data for transactions that have already been committed. In a database with a high transaction volume, the undo tablespace is loaded with undo data that will never be used again because the transactions are already committed. You can leave this out while preserving the undo data of transactions still in progress during the backup. This can also make the backup considerably smaller and save you a lot of data traffic.
Also included as standard is the 11g Fast Compressed Backup feature, which normally requires a license for Oracle Advanced Compression but comes with the cloud backup module. This results in roughly 50% compression in most cases.
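Using the compression is a one-line change to the backup command; the sketch below assumes a channel for the cloud module has already been configured:

```
# Compress backup sets before they travel over the wire;
# with the cloud backup module this uses the fast backup
# compression without a separate Advanced Compression license.
BACKUP AS COMPRESSED BACKUPSET DATABASE;
```

Because less data is sent over the public internet, compression directly addresses the bandwidth concern discussed above as well as the storage bill.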
Security is the part where you will have to convince your board of directors (or the customer's), and this can become a hard battle. People tend to state that if it is not running on their own servers it is not secure, and that if it is on a public storage location everyone can read your data. Here too they are wrong. To start with, all data transmitted between the system running the database and the storage cloud is encrypted using key-pair encryption, currently seen as one of the best ways to secure data streams; this makes it virtually impossible to read the data while it is in transit. Besides the security during transmission, there is also the option to encrypt the backup itself before you transmit it, meaning the actual content of the backup is encrypted as well. Even if someone gained unauthorized access to the already very secure Amazon storage cloud, he would still be unable to read the data because the backup itself is encrypted. If you want to look further into the encryption options, you can find documents on all the security options and algorithms on the Oracle website.
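Encrypting the backup itself can be done with standard RMAN commands; a minimal password-based sketch (the password shown is obviously a placeholder, and wallet-based transparent encryption is an alternative):

```
# Encrypt all following backups with a password-derived key;
# the same password is required at restore time.
SET ENCRYPTION ON IDENTIFIED BY 'MySecretPassword' ONLY;
BACKUP DATABASE;
```

With this in place the backup pieces stored in the cloud are unreadable without the key, independent of the encryption of the transmission channel.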
In conclusion, Amazon S3 is a great, fast and secure way to back up an Oracle database. It is cheaper than most other options, and even though you may have to convince your customer to use it, it is something that should be considered when thinking about backup. In some cases there might be a very good reason not to store your backups on Amazon S3, but in most cases it is worth a serious look.
1 comment:
Dear sir / Mr. Johan Louwers,
I have downloaded Oracle Developer Suite 10gR2 and tried hard to attach the webutil library, following the steps provided by Oracle,
but unfortunately I got this error while trying to generate the plx file from the webutil.pll file:
frm-91507 : Internal Error: Unable to generate library.
this is my forms_path = c:\DevSuiteHome_1\forms;C:\DevSuiteHome_1\forms\webutil;C:\DevSuiteHome_1\cgenf61\ADMIN
and this is my default.env details
PATH=C:\DevSuiteHome_1\bin;C:\DevSuiteHome_1\jdk\jre\bin\client
FORMS_PATH=C:\DevSuiteHome_1\forms;C:\ DevSuiteHome_1 \forms\webutil
WEBUTIL_CONFIG=C:\DevSuiteHome_1\forms\server\webutil.cfg
CLASSPATH=C:\DevSuiteHome_1\j2ee\OC4J_BI_Forms\applications\formsapp\formsweb\WEB-INF\lib\frmsrv.jar;C:\DevSuiteHome_1\jlib\repository.jar;C:\DevSuiteHome_1\jlib\ldapjclnt10.jar;C:\DevSuiteHome_1\jlib\debugger.jar;C:\DevSuiteHome_1\jlib\ewt3.jar;C:\DevSuiteHome_1\jlib\share.jar;C:\DevSuiteHome_1\jlib\utj.jar;C:\DevSuiteHome_1\jlib\zrclient.jar;C:\DevSuiteHome_1\reports\jlib\rwrun.jar;C:\DevSuiteHome_1\forms\java\frmwebutil.jar;C:\DevSuiteHome_1\forms\java\jacob.jar;C:\DevSuiteHome_1\jdk\jre\lib\rt.jar;C:\DevSuiteHome_1\forms\java\frmall.jar
thanks
Jerus Nymph