In many cases the ideal world and the real world are miles apart. In an ideal world, every system that is put into the datacenter is entered into a configuration management database, and with the click of a button you can find out what specific configuration has been applied to a system, what it is used for and which hardware components it contains. The second part of this ideal world is that all the hardware in your compute farm is exactly the same. Reality, however, is grim: in general, configuration management and asset management databases are not as up to date as one would like.
When you use Oracle Enterprise Manager and place all operating systems under its management umbrella, you already start to collect the input needed for a unified, central database in which you can look up many of the specifications of a system. However, Oracle Enterprise Manager is built around the database; management of (Oracle) applications was added at a later stage, just like the management of operating systems. For non-Oracle hardware the hardware inspection is also not always as deep as one would like.
It can nevertheless be vital to have a more in-depth insight into the hardware used in a system, for example when you want to understand how your landscape is built up from a hardware point of view. A Linux tool that can help you with that is lshw, which gives you an overview of the hardware present in your system with a single command.
The Oracle YUM repository contains the needed packages for lshw, which makes the installation extremely easy; you can simply use the yum command as shown below:
yum install lshw
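To confirm the installation succeeded before moving on, you can query the package and the binary. The commands below are a minimal check; the -version flag is one of the options listed in the lshw help output further down.

# verify the lshw package is installed and the binary is on the path
rpm -q lshw
which lshw
# print the lshw version
lshw -version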
When using lshw in its standard mode you get a user-friendly view of the hardware, as shown below. Interesting to note: the example below is running on an Oracle Linux instance on the Oracle Compute cloud, so while reading through the output you will see some interesting insights into the inner workings of the Oracle Compute cloud. When running this on physical hardware the output will look a bit different and more realistic.
[root@testbox09 ~]# lshw
testbox09
    description: Computer
    product: HVM domU
    vendor: Xen
    version: 4.3.1OVM
    serial: ffc59abb-f496-4819-8d0c-a6fad4334391
    width: 64 bits
    capabilities: smbios-2.4 dmi-2.4 vsyscall32
    configuration: boot=normal uuid=FFC59ABB-F496-4819-8D0C-A6FAD4334391
  *-core
       description: Motherboard
       physical id: 0
     *-firmware:0
          description: BIOS
          vendor: Xen
          physical id: 0
          version: 4.3.1OVM
          date: 11/05/2015
          size: 96KiB
          capabilities: pci edd
     *-cpu:0
          description: CPU
          product: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
          vendor: Intel Corp.
          vendor_id: GenuineIntel
          physical id: 1
          bus info: cpu@0
          slot: CPU 1
          size: 2993MHz
          capacity: 2993MHz
          width: 64 bits
          capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp x86-64 constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
     *-cpu:1
          description: CPU
          vendor: Intel
          physical id: 2
          bus info: cpu@1
          slot: CPU 2
          size: 2993MHz
          capacity: 2993MHz
     *-memory:0
          description: System Memory
          physical id: 3
          capacity: 3584MiB
        *-bank:0
             description: DIMM RAM
             physical id: 0
             slot: DIMM 0
             size: 7680MiB
             width: 64 bits
        *-bank:1
             description: DIMM RAM
             physical id: 1
             slot: DIMM 0
             size: 7680MiB
             width: 64 bits
     *-firmware:1
          description: BIOS
          vendor: Xen
          physical id: 4
          version: 4.3.1OVM
          date: 11/05/2015
          size: 96KiB
          capabilities: pci edd
     *-cpu:2
          description: CPU
          vendor: Intel
          physical id: 5
          bus info: cpu@2
          slot: CPU 1
          size: 2993MHz
          capacity: 2993MHz
     *-cpu:3
          description: CPU
          vendor: Intel
          physical id: 6
          bus info: cpu@3
          slot: CPU 2
          size: 2993MHz
          capacity: 2993MHz
     *-memory:1
          description: System Memory
          physical id: 7
          capacity: 3584MiB
     *-memory:2 UNCLAIMED
          physical id: 8
     *-memory:3 UNCLAIMED
          physical id: 9
     *-pci
          description: Host bridge
          product: 440FX - 82441FX PMC [Natoma]
          vendor: Intel Corporation
          physical id: 100
          bus info: pci@0000:00:00.0
          version: 02
          width: 32 bits
          clock: 33MHz
        *-isa
             description: ISA bridge
             product: 82371SB PIIX3 ISA [Natoma/Triton II]
             vendor: Intel Corporation
             physical id: 1
             bus info: pci@0000:00:01.0
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: isa bus_master
             configuration: latency=0
        *-ide
             description: IDE interface
             product: 82371SB PIIX3 IDE [Natoma/Triton II]
             vendor: Intel Corporation
             physical id: 1.1
             bus info: pci@0000:00:01.1
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: ide bus_master
             configuration: driver=ata_piix latency=64
             resources: irq:0 ioport:1f0(size=8) ioport:3f6 ioport:170(size=8) ioport:376 ioport:c140(size=16)
        *-bridge UNCLAIMED
             description: Bridge
             product: 82371AB/EB/MB PIIX4 ACPI
             vendor: Intel Corporation
             physical id: 1.3
             bus info: pci@0000:00:01.3
             version: 01
             width: 32 bits
             clock: 33MHz
             capabilities: bridge bus_master
             configuration: latency=0
        *-display UNCLAIMED
             description: VGA compatible controller
             product: GD 5446
             vendor: Cirrus Logic
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: vga_controller bus_master
             configuration: latency=0
             resources: memory:f0000000-f1ffffff memory:f3020000-f3020fff
        *-generic
             description: Unassigned class
             product: Xen Platform Device
             vendor: XenSource, Inc.
             physical id: 3
             bus info: pci@0000:00:03.0
             version: 01
             width: 32 bits
             clock: 33MHz
             capabilities: bus_master
             configuration: driver=xen-platform-pci latency=0
             resources: irq:28 ioport:c000(size=256) memory:f2000000-f2ffffff
  *-network
       description: Ethernet interface
       physical id: 1
       logical name: eth0
       serial: c6:b0:ed:00:52:16
       capabilities: ethernet physical
       configuration: broadcast=yes driver=vif ip=10.196.73.178 link=yes multicast=yes
[root@testbox09 ~]#
Even though the above is interesting, it does not by itself help in building a unified database containing the physical hardware of your servers. However, lshw has some more options that can be used, as shown below:
[root@testbox09 ~]# lshw --help
Hardware Lister (lshw) - B.02.17
usage: lshw [-format] [-options ...]
       lshw -version

	-version        print program version (B.02.17)

format can be
	-html           output hardware tree as HTML
	-xml            output hardware tree as XML
	-short          output hardware paths
	-businfo        output bus information

options can be
	-dump OUTFILE   save hardware tree to a file
	-class CLASS    only show a certain class of hardware
	-C CLASS        same as '-class CLASS'
	-c CLASS        same as '-class CLASS'
	-disable TEST   disable a test (like pci, isapnp, cpuid, etc. )
	-enable TEST    enable a test (like pci, isapnp, cpuid, etc. )
	-quiet          don't display status
	-sanitize       sanitize output (remove sensitive information like serial numbers, etc.)
	-numeric        output numeric IDs (for PCI, USB, etc.)

[root@testbox09 ~]#
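As a quick illustration of some of these options (output not shown here), the commands below give a condensed one-line-per-device listing, restrict the report to a single hardware class, and strip sensitive details such as serial numbers; the class names like memory and network are the ones lshw itself reports in its output.

# condensed listing of hardware paths
lshw -short
# limit the report to one class of hardware
lshw -class memory
lshw -class network
# remove sensitive information such as serial numbers before sharing
lshw -short -sanitize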
The most interesting option to note from the above is the xml option. This means you can have the above output in XML format. We can use the XML format option in a custom check within Oracle Enterprise Manager and instruct the agent deployed on Oracle Linux to use the XML output from lshw as input for Oracle Enterprise Manager, and so automatically maintain a hardware configuration management database in Oracle Enterprise Manager without the need for manual actions; a sketch of what such a collection script could look like follows the XML example below.
For those who want to check the XML output, you can print it to screen or save it to a file using the below command:
[root@testbox09 ~]#
[root@testbox09 ~]# lshw -xml >> /tmp/lshw.xml
[root@testbox09 ~]# ls -la /tmp/lshw.xml
-rw-r--r-- 1 root root 12151 Oct 31 14:29 /tmp/lshw.xml
[root@testbox09 ~]#
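As a rough sketch of how this XML output could feed such a custom check, the script below runs lshw, stores the XML report and extracts a few fields with xmllint (part of libxml2, which may need to be installed separately). The script name, the chosen fields and the pipe-separated output format are assumptions for illustration only; the actual output format would have to match whatever adapter or metric definition you configure in Oracle Enterprise Manager.

#!/bin/bash
# collect_hw.sh - illustrative collection script; name and output format are assumptions
# Runs lshw, stores the XML report and prints a few key values as pipe-separated fields
# that a custom check could pick up.

OUTFILE=/tmp/lshw.xml

# -quiet suppresses progress output, -sanitize strips serial numbers, -xml writes XML to stdout
lshw -quiet -sanitize -xml > "${OUTFILE}"

# pull individual values out of the XML tree with XPath; the expressions below assume
# the node classes (system, processor) seen in the lshw output shown earlier
PRODUCT=$(xmllint --xpath 'string(//node[@class="system"]/product)' "${OUTFILE}")
VENDOR=$(xmllint --xpath 'string(//node[@class="system"]/vendor)' "${OUTFILE}")
CPU=$(xmllint --xpath 'string(//node[@class="processor"][1]/product)' "${OUTFILE}")

echo "${VENDOR}|${PRODUCT}|${CPU}"

Running such a script on the test instance above would, under these assumptions, report the Xen vendor, the HVM domU product string and the Xeon CPU model, which is exactly the kind of hardware detail you would want to land in a central configuration management view.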