
Build Your Own Oracle RAC 10g Release 2 Cluster on Linux and


Oracle RAC O2CB Cluster Service
Before we can do anything with OCFS2 like formatting or mounting the file system, we need to first have OCFS2's cluster stack, O2CB, running (which it will be as a result of the configuration process performed above). The stack includes the following services:

NM: Node Manager that keeps track of all the nodes in cluster.conf
HB: Heartbeat service that issues up/down notifications when nodes join or leave the cluster
TCP: Handles communication between the nodes
DLM: Distributed lock manager that keeps track of all locks, their owners, and their status
CONFIGFS: User-space-driven configuration file system mounted at /config
DLMFS: User-space interface to the kernel-space DLM
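Most of these services are backed by kernel modules, so their presence can also be confirmed directly. The following sketch is a hypothetical helper (not part of the guide) that scans /proc/modules for the module names shown in the o2cb status output below:

```shell
# Hypothetical helper: report which O2CB/OCFS2 kernel modules are loaded,
# using the module names from the "o2cb status" output.
check_o2cb_modules() {
  for m in configfs ocfs2_nodemanager ocfs2_dlm ocfs2_dlmfs; do
    if grep -q "^$m " /proc/modules 2>/dev/null; then
      echo "$m: loaded"
    else
      echo "$m: not loaded"
    fi
  done
}
check_o2cb_modules
```

On a node where the cluster stack is down, every module should report "not loaded"; after `o2cb load`, all four should report "loaded".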
All of the above cluster services have been packaged in the o2cb system service (/etc/init.d/o2cb). Here is a short listing of some of the more useful commands and options for the o2cb system service.

/etc/init.d/o2cb status
Module "configfs": Not loaded
Filesystem "configfs": Not mounted
Module "ocfs2_nodemanager": Not loaded
Module "ocfs2_dlm": Not loaded
Module "ocfs2_dlmfs": Not loaded
Filesystem "ocfs2_dlmfs": Not mounted
Note that in this example none of the services is loaded; I ran an "unload" right before executing the "status" option. If you were to check the status of the o2cb service immediately after configuring OCFS2 with the ocfs2console utility, they would all be loaded.
 

/etc/init.d/o2cb load
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
The above command loads all OCFS2 modules.
 

/etc/init.d/o2cb online ocfs2
Starting cluster ocfs2: OK
The above command will online the cluster we created, ocfs2.
 

/etc/init.d/o2cb offline ocfs2
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
The above command will offline the cluster we created, ocfs2.
 

/etc/init.d/o2cb unload
Cleaning heartbeat on ocfs2: OK
Stopping cluster ocfs2: OK
The above command will unload all OCFS2 modules.
Configure O2CB to Start on Boot

You now need to configure the on-boot properties of the O2CB driver so that the cluster stack services start on each boot. All of the tasks in this section must be performed on both nodes in the cluster.

Note: At the time of writing this guide, OCFS2 contains a bug wherein the driver does not get loaded on each boot even after configuring the on-boot properties to do so. After attempting to configure the on-boot properties to start on each boot according to the official OCFS2 documentation, you will still get the following error on each boot:
...
Mounting other filesystems:
mount.ocfs2: Unable to access cluster service

Cannot initialize cluster mount.ocfs2:
Unable to access cluster service Cannot initialize cluster [FAILED]
...
Red Hat changed the way the service is registered between chkconfig-1.3.11.2-1 and chkconfig-1.3.13.2-1. The O2CB script used to work with the former.
Before attempting to configure the on-boot properties:

REMOVE the following lines in /etc/init.d/o2cb
### BEGIN INIT INFO
# Provides: o2cb
# Required-Start:
# Should-Start:
# Required-Stop:
# Default-Start: 2 3 5
# Default-Stop:
# Description: Load O2CB cluster services at system boot.
### END INIT INFO

Re-register the o2cb service.
# chkconfig --del o2cb
# chkconfig --add o2cb
# chkconfig --list o2cb
o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off

# ll /etc/rc3.d/*o2cb*
lrwxrwxrwx 1 root root 14 Sep 29 11:56 /etc/rc3.d/S24o2cb -> ../init.d/o2cb
The service should be S24o2cb in the default runlevel.
After resolving this bug, you can continue to set the on-boot properties as follows:

# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver. The following questions will determine whether the driver is loaded on boot. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.
Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting cluster ocfs2: OK
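On the OCFS2 releases this guide targets, the answers given to `o2cb configure` are persisted in /etc/sysconfig/o2cb. After a successful configure, the file should contain settings along these lines (exact contents may vary by version):

```
# /etc/sysconfig/o2cb (written by "o2cb configure")
O2CB_ENABLED=true
O2CB_BOOTCLUSTER=ocfs2
```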
Format the OCFS2 Filesystem

If the O2CB cluster is offline, start it. The format operation needs the cluster to be online, as it needs to ensure that the volume is not mounted on some node in the cluster.

Create the OCFS2 Filesystem

Unlike the other tasks in this section, creating the OCFS2 filesystem should only be executed on one node in the RAC cluster. You will be executing all commands in this section from linux1 only.

Note that it is possible to create and mount the OCFS2 file system using either the GUI tool ocfs2console or the command-line tool mkfs.ocfs2. From the ocfs2console utility, use the menu [Tasks] - [Format].

See the instructions below on how to create the OCFS2 file system using the command-line tool mkfs.ocfs2.

To create the filesystem, use the Oracle executable mkfs.ocfs2. For the purpose of this example, I run the following command only from linux1 as the root user account:

$ su -
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oradatafiles /dev/sda1

mkfs.ocfs2 1.0.2
Filesystem label=oradatafiles
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=1011675136 (30873 clusters) (246984 blocks)
1 cluster groups (tail covers 30873 clusters, rest cover 30873 clusters)
Journal size=16777216
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing lost+found: done
mkfs.ocfs2 successful
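The counts in the mkfs.ocfs2 summary follow from the chosen block and cluster sizes. A quick sketch of the arithmetic, using the volume size reported above (the variable names are mine, not mkfs.ocfs2 output):

```shell
# Reproduce the cluster/block counts from the mkfs.ocfs2 summary above.
volume_bytes=1011675136   # "Volume size" reported by mkfs.ocfs2
block_size=4096           # -b 4K
cluster_size=32768        # -C 32K

clusters=$((volume_bytes / cluster_size))        # whole clusters that fit
blocks=$((clusters * cluster_size / block_size)) # 8 blocks per 32K cluster
echo "clusters=$clusters blocks=$blocks"
```

This yields 30873 clusters and 246984 blocks, matching the summary.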
Mount the OCFS2 Filesystem

Now that the file system is created, you can mount it. Let's first do it using the command-line, then I'll show how to include it in the /etc/fstab to have it mount on each boot. Mounting the filesystem will need to be performed on all nodes in the Oracle RAC cluster as the root user account.

First, here is how to manually mount the OCFS2 file system from the command line. Remember, this needs to be performed as the root user account:

$ su -
# mount -t ocfs2 -o datavolume /dev/sda1 /u02/oradata/orcl
If the mount was successful, you will simply get your prompt back. You should, however, run the following checks to ensure the file system is mounted correctly.

Let's use the mount command to ensure that the new filesystem is really mounted. This should be performed on all nodes in the RAC cluster:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
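To have the volume mounted on each boot, as promised earlier, add an entry to /etc/fstab on every node. Assuming the device and mount point used in this example, the entry might look like the following; the `_netdev` option delays the mount until networking (and hence the cluster stack) is available:

```
/dev/sda1  /u02/oradata/orcl  ocfs2  _netdev,datavolume  0 0
```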
