Verifying the Oracle RAC installation environment with runcluvfy

 

--*****************************************

-- Verifying the Oracle RAC installation environment with runcluvfy

--*****************************************

As the saying goes, to do a good job one must first sharpen one's tools. Installing Oracle RAC is a substantial undertaking, and skipping the up-front planning and configuration work makes the installation far more complex than expected. Fortunately, the runcluvfy tool greatly simplifies the pre-installation work. The demonstration below is based on an Oracle 10g RAC installation on Linux.

1. Run the pre-installation verification with runcluvfy from the installation media path

[oracle@node1 cluvfy]$ pwd

/u01/Clusterware/clusterware/cluvfy

[oracle@node1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "node1"

Destination Node Reachable?

------------------------------------ ------------------------

node1 yes

node2 yes

Result: Node reachability check passed from node "node1".

Checking user equivalence...

Check: User equivalence for user "oracle"

Node Name Comment

------------------------------------ ------------------------

node2 passed

node1 passed

Result: User equivalence check passed for user "oracle".

Checking administrative privileges...

Check: Existence of user "oracle"

Node Name User Exists Comment

------------ ------------------------ ------------------------

node2 yes passed

node1 yes passed

Result: User existence check passed for "oracle".

Check: Existence of group "oinstall"

Node Name Status Group ID

------------ ------------------------ ------------------------

node2 exists 500

node1 exists 500

Result: Group existence check passed for "oinstall".

Check: Membership of user "oracle" in group "oinstall" [as Primary]

Node Name User Exists Group Exists User in Group Primary Comment

---------------- ------------ ------------ ------------ ------------ ------------

node2 yes yes yes yes passed

node1 yes yes yes yes passed

Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...

Interface information for node "node2"

Interface Name IP Address Subnet

------------------------------ ------------------------------ ----------------

eth0 192.168.0.12 192.168.0.0

eth1 10.101.0.12 10.101.0.0

Interface information for node "node1"

Interface Name IP Address Subnet

------------------------------ ------------------------------ ----------------

eth0 192.168.0.11 192.168.0.0

eth1 10.101.0.11 10.101.0.0

Check: Node connectivity of subnet "192.168.0.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node2:eth0 node1:eth0 yes

Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) node2,node1.

Check: Node connectivity of subnet "10.101.0.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

node2:eth1 node1:eth1 yes

Result: Node connectivity check passed for subnet "10.101.0.0" with node(s) node2,node1.

Suitable interfaces for the private interconnect on subnet "192.168.0.0":

node2 eth0:192.168.0.12

node1 eth0:192.168.0.11

Suitable interfaces for the private interconnect on subnet "10.101.0.0":

node2 eth1:10.101.0.12

node1 eth1:10.101.0.11

ERROR:

Could not find a suitable set of interfaces for VIPs.

Result: Node connectivity check failed.

Checking system requirements for 'crs'...

Check: Total memory

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

node2 689.38MB (705924KB) 512MB (524288KB) passed

node1 689.38MB (705924KB) 512MB (524288KB) passed

Result: Total memory check passed.

Check: Free disk space in "/tmp" dir

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

node2 4.22GB (4428784KB) 400MB (409600KB) passed

node1 4.22GB (4426320KB) 400MB (409600KB) passed

Result: Free disk space check passed.

Check: Swap space

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

node2 2GB (2096472KB) 1GB (1048576KB) passed

node1 2GB (2096472KB) 1GB (1048576KB) passed

Result: Swap space check passed.

Check: System architecture

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

node2 i686 i686 passed

node1 i686 i686 passed

Result: System architecture check passed.

Check: Kernel version

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

node2 2.6.18-194.el5 2.4.21-15EL passed

node1 2.6.18-194.el5 2.4.21-15EL passed

Result: Kernel version check passed.

Check: Package existence for "make-3.79"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 make-3.81-3.el5 passed

node1 make-3.81-3.el5 passed

Result: Package existence check passed for "make-3.79".

Check: Package existence for "binutils-2.14"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 binutils-2.17.50.0.6-14.el5 passed

node1 binutils-2.17.50.0.6-14.el5 passed

Result: Package existence check passed for "binutils-2.14".

Check: Package existence for "gcc-3.2"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 gcc-4.1.2-48.el5 passed

node1 gcc-4.1.2-48.el5 passed

Result: Package existence check passed for "gcc-3.2".

Check: Package existence for "glibc-2.3.2-95.27"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 glibc-2.5-49 passed

node1 glibc-2.5-49 passed

Result: Package existence check passed for "glibc-2.3.2-95.27".

Check: Package existence for "compat-db-4.0.14-5"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 compat-db-4.2.52-5.1 passed

node1 compat-db-4.2.52-5.1 passed

Result: Package existence check passed for "compat-db-4.0.14-5".

Check: Package existence for "compat-gcc-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 missing failed

node1 missing failed

Result: Package existence check failed for "compat-gcc-7.3-2.96.128".

Check: Package existence for "compat-gcc-c++-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 missing failed

node1 missing failed

Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 missing failed

node1 missing failed

Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 missing failed

node1 missing failed

Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".

Check: Package existence for "openmotif-2.2.3"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 openmotif-2.3.1-2.el5_4.1 passed

node1 openmotif-2.3.1-2.el5_4.1 passed

Result: Package existence check passed for "openmotif-2.2.3".

Check: Package existence for "setarch-1.3-1"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

node2 setarch-2.0-1.1 passed

node1 setarch-2.0-1.1 passed

Result: Package existence check passed for "setarch-1.3-1".

Check: Group existence for "dba"

Node Name Status Comment

------------ ------------------------ ------------------------

node2 exists passed

node1 exists passed

Result: Group existence check passed for "dba".

Check: Group existence for "oinstall"

Node Name Status Comment

------------ ------------------------ ------------------------

node2 exists passed

node1 exists passed

Result: Group existence check passed for "oinstall".

Check: User existence for "nobody"

Node Name Status Comment

------------ ------------------------ ------------------------

node2 exists passed

node1 exists passed

Result: User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.

The error above, "Could not find a suitable set of interfaces for VIPs.", can safely be ignored; it is caused by a known bug that is described in detail on Metalink (Doc ID 338924.1, reproduced near the end of this article).

As for the packages reported as failed above, install them on the system wherever possible.
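Before re-running cluvfy, the flagged compat packages can be checked in one pass. A minimal sketch, assuming an RPM-based distribution; the package base names are taken from the cluvfy output above, and the install command in the comment is illustrative only:

```shell
#!/bin/sh
# Report which of the compat packages flagged by cluvfy are still missing.
# For any package listed as missing, install it from the OS media or a
# configured repository, e.g. rpm -ivh compat-gcc-*.rpm (illustrative).
for p in compat-gcc compat-gcc-c++ compat-libstdc++ compat-libstdc++-devel; do
  if rpm -q "$p" >/dev/null 2>&1; then
    echo "$p installed"
  else
    echo "$p missing"
  fi
done
```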

2. Post-installation check after Clusterware is installed. Note that the cluvfy executed here is the one located under the installed Clusterware home.

[oracle@node1 ~]$ pwd

/u01/app/oracle/product/10.2.0/crs_1/bin

[oracle@node1 ~]$ ./cluvfy stage -post crsinst -n node1,node2

Performing post-checks for cluster services setup

Checking node reachability...

Node reachability check passed from node "node1".

Checking user equivalence...

User equivalence check passed for user "oracle".

Checking Cluster manager integrity...

Checking CSS daemon...

Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...

Cluster integrity check passed

Checking OCR integrity...

Checking the absence of a non-clustered configuration...

All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...

OCR of correct Version "2" exists.

Checking data integrity of OCR...

Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...

Checking daemon liveness...

Liveness check passed for "CRS daemon".

Checking daemon liveness...

Liveness check passed for "CSS daemon".

Checking daemon liveness...

Liveness check passed for "EVM daemon".

Checking CRS health...

CRS health check passed.

CRS integrity check passed.

Checking node application existence...

Checking existence of VIP node application (required)

Check passed.

Checking existence of ONS node application (optional)

Check passed.

Checking existence of GSD node application (optional)

Check passed.

Post-check for cluster services setup was successful.

As the checks above show, the Clusterware background daemons, the nodeapps resources, and the OCR are all in the passed state; in other words, Clusterware was installed successfully.

3. cluvfy usage

[oracle@node1 ~]$ cluvfy -help    # the -help option prints cluvfy's usage information

USAGE:

cluvfy [ -help ]

cluvfy stage { -list | -help }

cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

cluvfy comp { -list | -help }

cluvfy comp <component-name> <component-specific options> [-verbose]

[oracle@node1 ~]$ cluvfy comp -list

USAGE:

cluvfy comp <component-name> <component-specific options> [-verbose]

Valid components are:

nodereach : checks reachability between nodes

nodecon : checks node connectivity

cfs : checks CFS integrity

ssa : checks shared storage accessibility

space : checks space availability

sys : checks minimum system requirements

clu : checks cluster integrity

clumgr : checks cluster manager integrity

ocr : checks OCR integrity

crs : checks CRS integrity

nodeapp : checks node applications existence

admprv : checks administrative privileges

peer : compares properties with peers
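As an illustration, any single component from the list above can be checked on its own. The invocations below are hypothetical examples run from the installation media directory, like the stage check in step 1:

```shell
# Illustrative only -- requires the clusterware installation environment.
$ ./runcluvfy.sh comp nodecon -n node1,node2 -verbose            # node connectivity
$ ./runcluvfy.sh comp admprv -n node1,node2 -o crs_inst -verbose # admin privileges
```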

4. Metalink note ID 338924.1

CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs [ID 338924.1]

________________________________________

Modified: 29-JUL-2010    Type: PROBLEM    Status: PUBLISHED

In this Document

Symptoms

Cause

Solution

References

________________________________________

Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.7 - Release: 10.2 to 11.1

Information in this document applies to any platform.

Symptoms

When running cluvfy to check network connectivity at various stages of the RAC/CRS installation process, cluvfy fails

with errors similar to the following:

=========================

Suitable interfaces for the private interconnect on subnet "10.0.0.0":

node1 eth0:10.0.0.1

node2 eth0:10.0.0.2

Suitable interfaces for the private interconnect on subnet "192.168.1.0":

node1_internal eth1:192.168.1.2

node2_internal eth1:192.168.1.1

ERROR:

Could not find a suitable set of interfaces for VIPs.

Result: Node connectivity check failed.

========================

On Oracle 11g, you may still see a warning in some cases, such as:

========================

WARNING:

Could not find a suitable set of interfaces for VIPs.

========================

Output seen will be comparable to that noted above, but IP addresses and node_names may be different - i.e. the node names

of 'node1','node2','node1_internal','node2_internal' will be substituted with your actual Public and Private node names.

A second problem that will be encountered in this situation is that at the end of the CRS installation for 10gR2, VIPCA

will be run automatically in silent mode, as one of the 'optional' configuration assistants. In this scenario, the VIPCA

will fail at the end of the CRS installation. The InstallActions log will show output such as:

>> Oracle CRS stack installed and running under init(1M)

>> Running vipca(silent) for configuring nodeapps

>> The given interface(s), "eth0" is not public. Public interfaces should

>> be used to configure virtual IPs.

Cause

This issue occurs due to incorrect assumptions made in cluvfy and vipca based on an Internet Best Practice document -

"RFC1918 - Address Allocation for Private Internets". This Internet Best Practice RFC can be viewed here:

http://www.faqs.org/rfcs/rfc1918.html

From an Oracle perspective, this issue is tracked in BUG:4437727

Per BUG:4437727, cluvfy makes an incorrect assumption based on RFC 1918 that any IP address/subnet that begins with any

of the following octets is private and hence may not be fit for use as a VIP:

172.16.x.x through 172.31.x.x

192.168.x.x

10.x.x.x

However, this assumption does not take into account that it is possible to use these IPs as Public IP's on an internal

network (or intranet). Therefore, it is very common to use IP addresses in these ranges as Public IP's and as Virtual

IP(s), and this is a supported configuration.

Solution

The solution to the error above that is given when running 'cluvfy' is to simply ignore it if you intend to use an IP in

one of the above ranges for your VIP. The installation and configuration can continue with no corrective action necessary.

One result of this, as noted in the problem section, is that the silent VIPCA will fail at the end of the 10gR2 CRS

installation. This is because VIPCA is running in silent mode and is trying to notify that the IPs that were provided

may not be fit to be used as VIP(s). To correct this, you can manually execute the VIPCA GUI after the CRS installation

is complete. VIPCA needs to be executed from the CRS_HOME/bin directory as the 'root' user (on Unix/Linux) or as a

Local Administrator (on Windows):

$ cd $ORA_CRS_HOME/bin

$ ./vipca

Follow the prompts for VIPCA to select the appropriate interface for the public network, and assign the VIPs for each node

when prompted. Manually running VIPCA in the GUI mode, using the same IP addresses, should complete successfully.

Note that if you patch to 10.2.0.3 or above, VIPCA will run correctly in silent mode. The command to re-run vipca silently can be found in CRS_HOME/cfgtoollogs, in the file 'configToolAllCommands' or 'configToolFailedCommands'. Thus, in the case of a new install, the silent-mode VIPCA command will fail after the 10.2.0.1 base release install, but once the CRS Home is patched to 10.2.0.3 or above, vipca can be re-run silently, without the need to invoke the GUI tool.

References

NOTE:316583.1 - VIPCA FAILS COMPLAINING THAT INTERFACE IS NOT PUBLIC

Related

________________________________________

Products

________________________________________

Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Enterprise Edition

Keywords

________________________________________

INSTALLATION FAILS; INTERCONNECT; PRIVATE INTERCONNECT; PRIVATE NETWORKS

Errors

________________________________________

RFC-1918

The note above is lengthy, so a brief workaround is given below.

On the node where the error occurs, edit the vipca file:

[root@node2 ~]# vi $ORA_CRS_HOME/bin/vipca

Locate the following block:

#Remove this workaround when the bug 3937317 is fixed

arch=`uname -m`

if [ "$arch" = "i686" -o "$arch" = "ia64" ]

then

LD_ASSUME_KERNEL=2.4.19

export LD_ASSUME_KERNEL

fi

#End workaround

and add a new line after the `fi`:

unset LD_ASSUME_KERNEL

Similarly, edit the srvctl file:

[root@node2 ~]# vi $ORA_CRS_HOME/bin/srvctl

Locate the following lines:

LD_ASSUME_KERNEL=2.4.19

export LD_ASSUME_KERNEL

and add a new line after them:

unset LD_ASSUME_KERNEL

Save and exit, then re-run root.sh on the node where the failure occurred.
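The two edits above can also be applied non-interactively with sed instead of vi. A minimal sketch, demonstrated here on a scratch copy of the relevant lines; on a real system, run the same sed expression as root against $ORA_CRS_HOME/bin/vipca and $ORA_CRS_HOME/bin/srvctl:

```shell
#!/bin/sh
# Demonstrate the workaround edit on a temporary copy of the relevant lines.
FILE=$(mktemp)
cat > "$FILE" <<'EOF'
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
EOF
# Insert "unset LD_ASSUME_KERNEL" immediately after the export line (GNU sed).
sed -i '/^export LD_ASSUME_KERNEL$/a unset LD_ASSUME_KERNEL' "$FILE"
cat "$FILE"   # prints the two original lines followed by "unset LD_ASSUME_KERNEL"
rm -f "$FILE"
```

Placing the unset immediately after the export undoes it in both files; for vipca this has the same effect as adding it after the closing `fi`, since the export only happens inside that if block.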

