When installing 10g RAC on Linux 5, you will often hit the error "libpthread.so.0: cannot open shared object file", which prevents vipca from running. There are two ways to resolve it:
Method 1
Ignore the error and continue the installation, then install the 10.2.0.4 (or later) patchset and run vipca manually to complete the VIP configuration. The bug behind this error is fixed in 10.2.0.4.
Method 2
Fix it by hand.
First verify the network configuration:
# ./oifcfg getif
eth0 172.21.1.0 global public
eth1 10.10.10.0 global cluster_interconnect
# ./oifcfg iflist
eth0 172.21.1.0
eth1 10.10.10.0
If the output is incorrect, configure the interfaces with the following commands:
# ./oifcfg setif -global eth0/172.21.1.0:public
# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
Then edit vipca and srvctl, search for LD_ASSUME_KERNEL, and comment out the following lines:
arch=`uname -m`
# if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
# then
#      LD_ASSUME_KERNEL=2.4.19
#      export LD_ASSUME_KERNEL
# fi
Then run ./vipca again. Both methods rely on the same principle.
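The commenting-out described above can be scripted with sed. The following is a minimal sketch that operates on a temporary copy of the snippet rather than the real <crs_home>/bin/vipca; the block layout is assumed to match the one shown, and GNU sed is assumed for the in-place edit. Back up the real file before trying anything like this.

```shell
#!/bin/sh
# Sketch: comment out the LD_ASSUME_KERNEL block, as in method 2.
# A temporary file stands in for <crs_home>/bin/vipca (assumption: the
# script contains the exact if...fi block shown above).

f=$(mktemp)
cat > "$f" <<'EOF'
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
     LD_ASSUME_KERNEL=2.4.19
     export LD_ASSUME_KERNEL
fi
EOF

# Prefix every line of the if...fi block with "# " (GNU sed, keeps a .bak).
sed -i.bak '/^if \[ "\$arch"/,/^fi/ s/^/# /' "$f"

grep -c '^# ' "$f"    # prints 5: the five lines of the block are commented
```

The `arch=` line itself is left untouched, matching the snippet above where only the if...fi block is commented.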
Oracle explains this error in the following note:
10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures) [ID 414163.1]
Modified 04-AUG-2010  Type PROBLEM  Status ARCHIVED
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 10.2.0.3 - Release: 10.2 to 10.2
Linux x86
Generic Linux
Linux x86-64
***Checked for relevance on 04-AUG-2010***
When installing 10gR2 RAC on Oracle Enterprise Linux 5, RHEL5, or SLES10, there are three issues that users must be aware of.
Issue #1: To install 10gR2, you must first install the base release, which is 10.2.0.1. As these OS versions are newer than the installer expects, you should use the following command to invoke the installer:
<code>$ runInstaller -ignoreSysPrereqs    # this will bypass the OS check</code>
Issue #2: At the end of root.sh on the last node, vipca will fail to run with the following error:
<code>Oracle CRS stack installed and running under init(1M)</code>
<code>Running vipca(silent) for configuring nodeapps</code>
<code>shared libraries: libpthread.so.0: cannot open shared object file:</code>
<code>No such file or directory</code>
Also, srvctl will show similar output if the workaround below is not implemented.
Issue #3: After working around Issue #2 above, vipca will fail to run with the following error if the VIP IPs are in a non-routable range [10.x.x.x, 172.(16-31).x.x, or 192.168.x.x]:
<code># vipca</code>
<code>Error 0(Native: listNetInterfaces:[3])</code>
<code>[Error 0(Native: listNetInterfaces:[3])]</code>
These releases of the Linux kernel fix an old bug in Linux threading that Oracle worked around using LD_ASSUME_KERNEL settings in both vipca and srvctl. That workaround is no longer valid on OEL5, RHEL5, or SLES10, hence the failures.
If you have not yet run root.sh on the last node, implement the workaround for Issue #2 below and then run root.sh (you may skip running the vipca portion at the bottom of this note).
If you have a non-routable IP range for the VIPs, you will also need the workaround for Issue #3 and must then run vipca manually.
To work around Issue #2 above, edit vipca (in the CRS bin directory on all nodes) to undo the setting of LD_ASSUME_KERNEL. After the if statement around line 120, add an unset command to ensure LD_ASSUME_KERNEL is not set, as follows:
<code>if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]</code>
<code>then</code>
<code>     LD_ASSUME_KERNEL=2.4.19</code>
<code>     export LD_ASSUME_KERNEL</code>
<code>fi</code>
<code>unset LD_ASSUME_KERNEL</code>    <<== line to be added
Similarly for srvctl (in both the CRS and, when installed, the RDBMS and ASM bin directories on all nodes), unset LD_ASSUME_KERNEL by adding one line; around line 168 it should look like this:
<code>LD_ASSUME_KERNEL=2.4.19</code>
<code>export LD_ASSUME_KERNEL</code>
<code>unset LD_ASSUME_KERNEL</code>    <<== line to be added
Remember to re-edit these files on all nodes after applying the 10.2.0.2 or 10.2.0.3 patchset, as those patchsets still include these settings, which are unnecessary for OEL5, RHEL5, and SLES10:
<crs_home>/bin/vipca
<crs_home>/bin/srvctl
<rdbms_home>/bin/srvctl
<asm_home>/bin/srvctl
This issue was raised with development and is fixed in the 10.2.0.4 patchset.
Note that we explicitly unset LD_ASSUME_KERNEL rather than merely commenting out its setting, to handle the case where the user also has it set in their environment (login shell).
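The one-line addition described in the note can also be applied with sed. The following is a sketch only, run against a stand-in temporary file rather than the real vipca/srvctl copies listed above; it assumes GNU sed's `a` (append) command. In practice you would loop over the four files on every node and back each one up first.

```shell
#!/bin/sh
# Sketch: insert "unset LD_ASSUME_KERNEL" after the export line, as the
# note prescribes. A temporary file stands in for the real scripts; the
# real targets would be <crs_home>/bin/vipca and the srvctl copies.

f=$(mktemp)
cat > "$f" <<'EOF'
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
EOF

# GNU sed: append the unset on the line after the export.
sed -i '/^export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$f"

tail -n 1 "$f"    # prints "unset LD_ASSUME_KERNEL"
```

For the real files the same sed command would be run inside a loop such as `for s in <crs_home>/bin/vipca <crs_home>/bin/srvctl ...; do ...; done` on every node.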
To work around Issue #3 (vipca failing on non-routable VIP IP ranges, whether run manually or during root.sh): if you still have the OUI window open, click OK and it will create the "oifcfg" information; cluvfy will then fail because vipca has not completed successfully. Skip ahead in this note, run vipca manually, then return to the installer, and cluvfy will succeed. Otherwise, you may configure the interfaces for RAC manually using the oifcfg command as root, as in the following example (from any node):
<code><crs_home>/bin # ./oifcfg setif -global eth0/192.168.1.0:public </code>
<code><crs_home>/bin # ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect </code>
<code><crs_home>/bin # ./oifcfg getif </code>
<code> eth0 192.168.1.0 global public </code>
<code> eth1 10.10.10.0 global cluster_interconnect</code>
The goal is to get the output of "oifcfg getif" to include both the public and cluster_interconnect interfaces; of course, substitute the IP addresses and interface names from your own environment. To get the proper IPs in your environment, run this command:
<code><crs_home>/bin # ./oifcfg iflist</code>
<code>eth0 192.168.1.0</code>
<code>eth1 10.10.10.0 </code>
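A quick way to confirm the goal stated above is to scan the "oifcfg getif" output for both interface types. The sample output below is hard-coded for illustration; in a real cluster you would pipe `<crs_home>/bin/oifcfg getif` into the awk script instead.

```shell
#!/bin/sh
# Sketch: verify that "oifcfg getif" reports both a public and a
# cluster_interconnect interface. The sample text mimics the output
# shown above (assumption: real output keeps this column layout).

getif_output='eth0  192.168.1.0  global  public
eth1  10.10.10.0  global  cluster_interconnect'

result=$(echo "$getif_output" | awk '
  /public/               { pub = 1 }
  /cluster_interconnect/ { ic = 1 }
  END {
    if (pub && ic) print "OK: both interface types configured"
    else           print "MISSING: run oifcfg setif for the absent type"
  }')
echo "$result"    # prints "OK: both interface types configured"
```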
Running vipca:
After implementing the workaround(s) above, you should be able to invoke vipca manually (as root, from the last node) and configure the VIP IPs via the GUI interface.
<code><crs_home>/bin # export DISPLAY=<x-display:0></code>
<code><crs_home>/bin # ./vipca</code>
Make sure the DISPLAY environment variable is set correctly and that you can open xclock or other X applications from that shell.
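Before launching vipca it can help to sanity-check the DISPLAY variable's shape. The helper below is a hypothetical convenience, not part of any Oracle tooling; it only validates the host:display format and does not replace the xclock test suggested above.

```shell
#!/bin/sh
# Sketch: validate that a DISPLAY value looks like host:display.
# check_display is a made-up helper for illustration only.

check_display() {
    case "$1" in
        "")  echo "DISPLAY is not set"; return 1 ;;
        *:*) echo "DISPLAY looks usable: $1"; return 0 ;;
        *)   echo "DISPLAY has no :display part: $1"; return 1 ;;
    esac
}

check_display ":0"    # prints "DISPLAY looks usable: :0"
```

In interactive use you would call `check_display "$DISPLAY"` before running ./vipca.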
Once vipca completes running, all the Clusterware resources (VIP, GSD, ONS) will be started. There is no need to re-run root.sh, since vipca is the last step in root.sh.
To verify the Clusterware resources are running correctly:
<code><crs_home>/bin # ./crs_stat -t</code>
<code>Name           Type           Target    State     Host</code>
<code>------------------------------------------------------------</code>
<code>ora....ux1.gsd application    ONLINE    ONLINE    raclinux1</code>
<code>ora....ux1.ons application    ONLINE    ONLINE    raclinux1</code>
<code>ora....ux1.vip application    ONLINE    ONLINE    raclinux1</code>
<code>ora....ux2.gsd application    ONLINE    ONLINE    raclinux2</code>
<code>ora....ux2.ons application    ONLINE    ONLINE    raclinux2</code>
<code>ora....ux2.vip application    ONLINE    ONLINE    raclinux2</code>
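The crs_stat output above can also be checked mechanically for resources that are not ONLINE. The sample text below is made up (one resource is deliberately OFFLINE to show the detection); in practice you would pipe `<crs_home>/bin/crs_stat -t` into the awk filter instead.

```shell
#!/bin/sh
# Sketch: flag any clusterware resource whose Target or State column is
# not ONLINE. Sample text stands in for real "crs_stat -t" output
# (assumption: the real output keeps this five-column layout).

crs_output='Name           Type        Target    State     Host
------------------------------------------------------------
ora....ux1.gsd application    ONLINE    ONLINE    raclinux1
ora....ux1.vip application    ONLINE    OFFLINE   raclinux1'

# Skip the two header lines; print the name of any non-ONLINE resource.
bad=$(echo "$crs_output" | awk 'NR > 2 && ($3 != "ONLINE" || $4 != "ONLINE") { print $1 }')
echo "${bad:-all resources ONLINE}"    # prints "ora....ux1.vip"
```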
You may now proceed with the rest of the RAC installation.
This article is original; when reposting, please credit the source and the author.
References:
http://blog.csdn.net/tianlesoftware/article/details/6045128
http://space.itpub.net/8797129/viewspace-694738
http://cs.felk.cvut.cz/10gr2/relnotes.102/b15659/toc.htm#cjabaiif
http://blog.chinaunix.net/uid-7589639-id-2921631.html
Corrections are welcome.
Author: czmmiao  Source: http://czmmiao.iteye.com/blog/1734541