
Oracle GI 18c

Deploying Oracle Grid Infrastructure 18c in Oracle Linux LXC containers, on either an Oracle Linux LXC host or an Ubuntu Linux LXC host, is very easy with Orabuntu-LXC.  This functionality is available in Orabuntu-LXC v6.11.7-beta AMIDE and higher.

Orabuntu-LXC deploys Oracle Grid Infrastructure 18c in Oracle Linux 7.6 LXC containers on either an Oracle Linux 7.6 LXC host (supported by Oracle Corporation when running their UEK kernel, which Orabuntu-LXC supports) or an Ubuntu Linux 18.04 or 18.10 LXC host (not supported by Oracle Corporation).

Running Oracle 18c GI and DB in Orabuntu-LXC Linux containers can be very useful for educational purposes on a system whose memory and CPU resources would be insufficient to run GI and DB in VMs but are sufficient to run them in LXC containers.  It is also useful for understanding how Oracle Grid Infrastructure actually works: because all the Oracle Linux RAC LXC containers are on the same host, a tool such as "iotop" shows the I/O activity across all the LXC RAC containers in a single iotop pane.

Create an LXC host server (Oracle Linux 7.x or Ubuntu Linux 18.x) with the characteristics shown below.

The best way to try an Orabuntu-LXC Oracle GI 18c deployment is to test on a VM first, although it can also run on a physical host.  For this 4-node RAC in 4 LXC containers, 16 GB RAM is used as shown below; however, the 4-container RAC will also run acceptably on an 8 GB RAM system.

It is best if the swap space is NOT a swapfile but rather a dedicated logical volume or physical disk.  If using a VM, adding a dedicated virtual disk for swap is advisable.

  • 200 GB available on root "/"
  • 32 GB swap space
  • 16 GB memory
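
Before installing, these resources can be sanity-checked from a shell; a minimal sketch (the thresholds are just the example sizing above):

# Total RAM in GB (16 GB in this example; 8 GB minimum works)
free -g | awk '/^Mem:/ {print "RAM (GB):", $2}'

# Swap should be a dedicated partition or logical volume, not a swapfile
swapon --show

# Free space on the root "/" filesystem (200 GB in this example)
df -h /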

Then simply select the "oracle-gi-18c" product in the appropriate Orabuntu-LXC deployment script. 

If installing Orabuntu-LXC for the first time on a physical host or in a VM, use the script:

anylinux-services.HUB.HOST.sh 

If you are installing Orabuntu-LXC in a VM and just trying it out, nothing needs to be configured.

  • Be sure the VM has been updated with the latest packages first.
  • Unzip the download of the latest version of Orabuntu-LXC (at the time of this writing, "6.11.7-beta AMIDE").
  • Navigate to the "anylinux" directory.
  • Run:

./anylinux-services.HUB.HOST.sh

and Orabuntu-LXC will take care of all the rest for an Oracle GI 18c deployment, because "oracle-gi-18c" is currently the default Product value in the deployment scripts (other products are available; edit the deployment script to select them).

When the script runs for the first time, enter the sudo password for the install user and accept the default "new" Install Type, as shown below.

==============================================
anylinux-services.HUB.HOST.sh     
==============================================

[sudo] password for ubuntu:

==============================================
                                             
If you doing a fresh Orabuntu-LXC install    
on this host then take default 'new'         
                                             
If you are doing a complete Orabuntu-LXC     
reinstall then answer 'reinstall'            
                                             
==============================================
                                             
Install Type New or Reinstall [new/rei] new

If installing Orabuntu-LXC in a VM running on an existing Orabuntu-LXC physical host, use the script "anylinux-services.VM.ON.HUB.HOST.1500.sh".  In this case, SPOKEIP, HUBIP, HubUserAct, and HubSudoPwd need to be set in the anylinux-services.VM.ON.HUB.HOST.1500.sh file.  As shown below, use the public IP address (in this example on the 10.209.53.x network) for SPOKEIP.  In the example below, the file has already been edited and the correct values are shown.


ubuntu@ubuntu1904a:~/Downloads/orabuntu-lxc-6.11.7-beta/anylinux$ ifconfig

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.29.108.19  netmask 255.255.255.0  broadcast 172.29.108.255
        inet6 fe80::ab42:1244:74ce:2822  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:96:3a:63  txqueuelen 1000  (Ethernet)
        RX packets 2118  bytes 212518 (212.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1767  bytes 222193 (222.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s17: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.209.53.120  netmask 255.255.255.0  broadcast 10.209.53.255
        inet6 fe80::22fd:6d72:9675:225d  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:00:d4:cd  txqueuelen 1000  (Ethernet)
        RX packets 6923  bytes 8244736 (8.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3144  bytes 366262 (366.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 730  bytes 72182 (72.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 730  bytes 72182 (72.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ubuntu@ubuntu1904a:~/Downloads/orabuntu-lxc-6.11.7-beta/anylinux$

grep -B2 -A22 SPOKEIP anylinux-services.VM.ON.HUB.HOST.1500.sh | head -27

if [ -z $2 ]
then
        SPOKEIP=10.209.53.120
else
        SPOKEIP=$2
fi

if [ -z $3 ]
then
        HUBIP=10.209.53.1
else
        HUBIP=$3
fi

if [ -z $4 ]
then
        HubUserAct=ubuntu
else
        HubUserAct=$4
fi

if [ -z $5 ]
then
        HubSudoPwd=ubuntu
else
        HubSudoPwd=$5
fi
ubuntu@ubuntu1904a:~/Downloads/orabuntu-lxc-6.11.7-beta/anylinux$

If installing Orabuntu-LXC on a GRE host (in the multihost configuration of Orabuntu-LXC), use the "anylinux-services.GRE.HOST.sh" script and set SPOKEIP etc. as described above.

Finally, if installing Orabuntu-LXC in a VM that is on a GRE host (again in the Orabuntu-LXC multihost configuration), use the "anylinux-services.VM.ON.GRE.HOST.1420.sh" script and set SPOKEIP etc. as described above.


[ubuntu@oracle76a ~]$ cd Downloads/orabuntu-lxc-6.11.7-beta/anylinux/

[ubuntu@oracle76a anylinux]$ ls -lrt | grep anylinux-services | sort

-rwxr-xr-x. 1 ubuntu ubuntu 14114 Apr 18 06:54 anylinux-services.VM.ON.HUB.HOST.1500.sh
-rwxr-xr-x. 1 ubuntu ubuntu 14461 Apr 16 09:26 anylinux-services.VM.ON.GRE.HOST.1420.sh
-rwxr-xr-x. 1 ubuntu ubuntu 14908 Apr 16 09:26 anylinux-services.GRE.HOST.sh
-rwxr-xr-x. 1 ubuntu ubuntu  3732 Apr 16 09:26 anylinux-services.ADD.CLONES.sh
-rwxr-xr-x. 1 ubuntu ubuntu  4969 Apr 16 09:26 anylinux-services.HUB.HOST.sh
-rwxr-xr-x. 1 ubuntu ubuntu  5250 Apr 16 09:26 anylinux-services.ADD.RELEASE.ON.GRE.HOST.1420.sh
-rwxr-xr-x. 1 ubuntu ubuntu  5311 Apr 16 09:26 anylinux-services.ADD.RELEASE.ON.HUB.HOST.1500.sh

[ubuntu@oracle76a anylinux]$

Orabuntu-LXC takes care of setting up all package and OS configuration prerequisites for Oracle GI 18c, and automatically sets up the required additional private interconnect and ASM Flex Cluster private networks.  The IP addresses for the Private and ASM networks are set in pgroup1 of anylinux-services.sh and are user-selectable.  Edit the pgroup1 section to set these to the desired network values, keeping in mind to use only reserved private ranges intended for internal networks, e.g. the "10.x.x.x" or "192.168.x.x" ranges.

To determine the value to use for SPOKEIP, run ifconfig and note the address of the urdomain1.com vNIC.  By default this is the 10.209.53.x vNIC.

For HubUserAct and HubSudoPwd, use the account on the HUB host that Orabuntu-LXC was installed under and the sudo password for that user.
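
If the host has several interfaces, the candidate IPv4 addresses can be listed in one line; a sketch (interface names vary per system), after which the address on the urdomain1.com public network (10.209.53.x in this example) is the one to use for SPOKEIP:

ip -4 -o addr show | awk '{print $2, $4}'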

For example, here is an install of Oracle Grid Infrastructure on an Oracle Linux 7.6 VirtualBox VM using Orabuntu-LXC deployment of Oracle Linux 7.6 LXC containers.  The relevant settings in the anylinux file are shown below.


if   [ $OpType = 'rei' ]
then
        Operation=reinstall

elif [ $OpType = 'new' ]
then
        Operation=new
fi

if [ -z $2 ]
then
        SPOKEIP=10.209.53.103
else
        SPOKEIP=$2
fi

if [ -z $3 ]
then
        HUBIP=10.209.53.1
else
        HUBIP=$3
fi

if [ -z $4 ]
then
        HubUserAct=ubuntu
else
        HubUserAct=$4
fi

if [ -z $5 ]
then
        HubSudoPwd=ubuntu
else
        HubSudoPwd=$5
fi

if [ -z $6 ]
then
        Product=workspaces
        Product=no-product
        Product=oracle-db
        Product=oracle-gi-18c
else
        Product=$6
fi
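
Given the positional-parameter layout above, the values can also be passed on the command line instead of editing the file; a hedged example using the values from this walkthrough (this assumes $1 is the Install Type, as the OpType test in the excerpt suggests):

./anylinux-services.VM.ON.HUB.HOST.1500.sh new 10.209.53.103 10.209.53.1 ubuntu ubuntu oracle-gi-18c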

The IP addresses and the user account and password info are only needed for non-HUB hosts.  In other words, if installing with the "anylinux-services.HUB.HOST.sh" script, the SPOKEIP, HUBIP, HubUserAct, and HubSudoPwd values are not needed.

But in all cases, whether a HUB, GRE, or VM deployment of Orabuntu-LXC, to install Oracle Grid Infrastructure 18c be sure to set the Product to "oracle-gi-18c", as in the last assignment of the Product block above (the last assignment wins).

After the Orabuntu-LXC install finishes, check that the required number of Oracle Linux 7.6 GI LXC containers has been created and that the required private networks exist by running "sudo lxc-ls -f".
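
A quick way to confirm the container count from the host; a sketch (the grep pattern assumes the "ora76c" container names used in this example):

sudo lxc-ls -f
sudo lxc-ls -f | grep ora76c | grep -c RUNNING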

Once all the containers are created, scp the Oracle Grid Infrastructure 18c install media to the first LXC install container in the GI cluster as shown below.

ubuntu@ubuntu-ThinkPad-P72:~/Downloads$ scp LINUX.X64_180000_grid_home.zip grid@ora76c74:~/.

The authenticity of host 'ora76c74 (10.209.53.104)' can't be established.
ECDSA key fingerprint is SHA256:4Dfre3CBlUyKQezA71mVYs5jByubMThPbBXtSqHXQJo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ora76c74' (ECDSA) to the list of known hosts.
Warning: the ECDSA host key for 'ora76c74' differs from the key for the IP address '10.209.53.104'
Offending key for IP in /home/ubuntu/.ssh/known_hosts:111
Are you sure you want to continue connecting (yes/no)? yes

grid@ora76c74's password:

LINUX.X64_180000_grid_home.zip                                                                                                                                                   100% 5133MB  50.2MB/s   01:42   

ubuntu@ubuntu-ThinkPad-P72:~/Downloads$

Next, ssh into the LXC container and unzip the install media into the Oracle Home as shown below.  Check that the required $ORACLE_HOME and $ORACLE_BASE directories exist.  Orabuntu-LXC automatically sets the environment values and creates the required directories, so it should not be necessary to create them manually; see the sketch after the unzip step below if they are missing.

ubuntu@ubuntu-ThinkPad-P72:~/Downloads$ ssh -X grid@ora76c74

The authenticity of host 'ora76c74 (10.209.53.104)' can't be established.
ECDSA key fingerprint is SHA256:4Dfre3CBlUyKQezA71mVYs5jByubMThPbBXtSqHXQJo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ora76c74,10.209.53.104' (ECDSA) to the list of known hosts.
grid@ora76c74's password:
Last login: Tue Apr 16 13:55:11 2019 from 10.209.53.1
/usr/bin/xauth:  file /home/grid/.Xauthority does not exist

[grid@ora76c74 ~]$ ls -lrt
total 5256120

-rw-r--r--. 1 grid oinstall 5382265496 Apr 16 13:39 LINUX.X64_180000_grid_home.zip

[grid@ora76c74 ~]$ env | grep ORA

ORACLE_BASE=/u00/app/18c/grid
ORACLE_HOME=/u00/app/grid/product/18c/grid

[grid@ora76c74 ~]$ unzip LINUX.X64_180000_grid_home.zip -d $ORACLE_HOME
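
If either directory were missing, a guarded mkdir (run as the grid user before the unzip) would create it; a minimal sketch:

# Create ORACLE_HOME and ORACLE_BASE only if they do not already exist
[ -d "$ORACLE_HOME" ] || mkdir -p "$ORACLE_HOME"
[ -d "$ORACLE_BASE" ] || mkdir -p "$ORACLE_BASE"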

Next, scp the cvuqdisk rpm as root to the other GI LXC containers and install it on each of them as shown below, so that the cvuqdisk rpm is installed on all RAC LXC containers.  A scripted alternative is sketched after the transcript below.


[grid@ora76c74 ~]$ su - root
Password:
Last login: Thu Apr 18 09:32:51 CDT 2019 from 10.209.53.103 on pts/4
[root@ora76c74 ~]# cd /u00/app/grid/product/18c/grid/cv/rpm
[root@ora76c74 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Using default group oinstall to install package
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
[root@ora76c74 rpm]# scp cvuqdisk-1.0.10-1.rpm root@ora76c75:~/.
The authenticity of host 'ora76c75 (10.209.53.105)' can't be established.
ECDSA key fingerprint is SHA256:o/oUuWHQycs3mZrE9MduyLaUHzP4HjemfiBY8mKXL0M.
ECDSA key fingerprint is MD5:25:9d:5d:89:14:d4:f6:43:b8:3e:ed:68:5a:4d:34:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ora76c75,10.209.53.105' (ECDSA) to the list of known hosts.
root@ora76c75's password:
cvuqdisk-1.0.10-1.rpm                                                                                                                                                            100% 8860     8.0MB/s   00:00   
[root@ora76c74 rpm]# scp cvuqdisk-1.0.10-1.rpm root@ora76c76:~/.
The authenticity of host 'ora76c76 (10.209.53.106)' can't be established.
ECDSA key fingerprint is SHA256:o/oUuWHQycs3mZrE9MduyLaUHzP4HjemfiBY8mKXL0M.
ECDSA key fingerprint is MD5:25:9d:5d:89:14:d4:f6:43:b8:3e:ed:68:5a:4d:34:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ora76c76,10.209.53.106' (ECDSA) to the list of known hosts.
root@ora76c76's password:
cvuqdisk-1.0.10-1.rpm                                                                                                                                                            100% 8860    12.6MB/s   00:00   
[root@ora76c74 rpm]# scp cvuqdisk-1.0.10-1.rpm root@ora76c77:~/.
The authenticity of host 'ora76c77 (10.209.53.107)' can't be established.
ECDSA key fingerprint is SHA256:o/oUuWHQycs3mZrE9MduyLaUHzP4HjemfiBY8mKXL0M.
ECDSA key fingerprint is MD5:25:9d:5d:89:14:d4:f6:43:b8:3e:ed:68:5a:4d:34:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ora76c77,10.209.53.107' (ECDSA) to the list of known hosts.
root@ora76c77's password:
cvuqdisk-1.0.10-1.rpm                                                                                                                                                            100% 8860     7.6MB/s   00:00   
[root@ora76c74 rpm]# ssh root@ora76c75 'rpm -ivh cvuqdisk-1.0.10-1.rpm'
root@ora76c75's password:
Preparing...                          ########################################
Using default group oinstall to install package
Updating / installing...
cvuqdisk-1.0.10-1                     ########################################
[root@ora76c74 rpm]# ssh root@ora76c76 'rpm -ivh cvuqdisk-1.0.10-1.rpm'
root@ora76c76's password:
Preparing...                          ########################################
Using default group oinstall to install package
Updating / installing...
cvuqdisk-1.0.10-1                     ########################################
[root@ora76c74 rpm]# ssh root@ora76c77 'rpm -ivh cvuqdisk-1.0.10-1.rpm'
root@ora76c77's password:
Preparing...                          ########################################
Using default group oinstall to install package
Updating / installing...
cvuqdisk-1.0.10-1                     ########################################
[root@ora76c74 rpm]#
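
The per-node copy and install above can also be scripted from the first container; a sketch assuming the node names in this example (each step prompts for the container root password unless passwordless SSH is configured):

for node in ora76c75 ora76c76 ora76c77
do
    scp cvuqdisk-1.0.10-1.rpm root@${node}:~/.
    ssh root@${node} 'rpm -ivh cvuqdisk-1.0.10-1.rpm'
done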

Storage LUNs for GI ASM will be needed.  Create them using the Orabuntu-LXC #1 rated SCST Linux SAN installer as shown below.  This step is done on the LXC host machine.

On RedHat-family host machines such as Oracle Linux, run the create-scst.sh script as root.  On Debian-family host machines such as Ubuntu Linux, run the create-scst.sh script as a user with "sudo" privileges (typically the Ubuntu install user).

[ubuntu@oracle76a ~]$ su - root
Password:
Last login: Tue Apr 16 10:06:10 CDT 2019 on pts/2

[root@oracle76a ~]# cd /opt/olxc/home/scst-files/

[root@oracle76a scst-files]#

Edit the create-scst.sh file as shown below.

[root@oracle76a scst-files]# vi create-scst.sh
[root@oracle76a scst-files]#

and find the section shown below to verify that the Oracle lines (the uncommented ones) are set for Oracle Grid Infrastructure.

Note that the default configuration of the create-scst.sh file supports an EXTERNAL-redundancy deployment of a basic Oracle Grid Infrastructure 18c in LXC containers.

The default values are for EXTERNAL redundancy, but you can use any ASM redundancy value, such as NORMAL or HIGH.  Be sure there is sufficient disk space on the root "/" filesystem for the chosen ASM redundancy mode.  The following values are the defaults when the create-scst.sh script is deployed.

If you changed the default "urdomain1.com" to a custom domain, then change "com.urdomain1" below to match the domain you configured in pgroup2 of the anylinux-services.sh script; a sed sketch follows the excerpt below.  Otherwise, use "com.urdomain1" as shown.

An example is included for BlackBerry Workspaces to show how other Orabuntu-LXC SAN configurations can be set up for products other than Oracle GI or Oracle Database.

# Oracle Database
  ./create-scst-target.sh grid asmadmin 0660 lxc_luns grid asm com.urdomain1 lxc1 external sysd data mgmt 7G 15G 40G 512 oracle

# Blackberry WatchDox
# ./create-scst-target.sh watchdox watchdox 0660 ora66c16 watchdox wdx com.urdomain1 wksp1 external data fspa fcac 1G 1G 1G 512 blackberry

echo ''
echo "======================================================="
echo "Next script:  create-scst-multipath.sh                   "
echo "======================================================="

sleep 5

clear

# Oracle Database
  ./create-scst-multipath.sh grid 1098 asmadmin 1100 0660 lxc_luns asm new

# Blackberry WatchDox
# ./create-scst-multipath.sh watchdox 700 watchdox 700 0660 wdx_luns wdx new
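
If a custom domain was configured, the reverse-domain string can be swapped in one pass; a sketch using a hypothetical "example.com" domain (the -i.bak keeps a backup of the original file):

sed -i.bak 's/com\.urdomain1/com.example/g' create-scst.sh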

Next, double-check that there is enough disk space available for the specified LUN sizes.

In this case 7G + 15G + 40G = 62G of total LUN size has been specified, which goes onto the root "/" filesystem by default (under "/asm0"), so check that the root filesystem on the LXC host has enough space for this, as shown below.


[root@oracle76a scst-files]# df -TH

Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  8.3G     0  8.3G   0% /dev
tmpfs               tmpfs     8.3G     0  8.3G   0% /dev/shm
tmpfs               tmpfs     8.3G  9.9M  8.3G   1% /run
tmpfs               tmpfs     8.3G     0  8.3G   0% /sys/fs/cgroup

/dev/mapper/ol-root xfs       184G   30G  155G  16% /  <-- Enough space is available!

/dev/sda1           xfs       1.1G  334M  730M  32% /boot
/dev/mapper/ol-home xfs        22G  254M   22G   2% /home
tmpfs               tmpfs     1.7G   13k  1.7G   1% /run/user/42
tmpfs               tmpfs     1.7G     0  1.7G   0% /run/user/1000

[root@oracle76a scst-files]#

Once space requirements have been verified and the create-scst.sh file has been edited as shown above, run the create-scst.sh file as the root user (on RedHat-family Linuxes) or as a user with sudo ALL privilege (on Debian-family Linuxes), as shown below.

After the create-scst.sh script has run, run the following steps as shown below.  Note that the SCST service should already be enabled by Orabuntu-LXC, but it is a good idea to run "systemctl enable scst" just to be sure SCST is enabled to start across reboots and shutdowns.

[root@oracle76a scst-files]# ls -l /dev/lxc_luns
total 0

brw-rw----. 1 grid asmadmin 252, 4 Apr 16 14:53 asm_data_103_00
brw-rw----. 1 grid asmadmin 252, 5 Apr 16 14:53 asm_mgmt_103_00
brw-rw----. 1 grid asmadmin 252, 3 Apr 16 14:53 asm_sysd_103_00

[root@oracle76a scst-files]# ls -l /dev/mapper
total 0

lrwxrwxrwx. 1 root root       7 Apr 16 14:53 asm_data_103_00 -> ../dm-4
lrwxrwxrwx. 1 root root       7 Apr 16 14:53 asm_mgmt_103_00 -> ../dm-5
lrwxrwxrwx. 1 root root       7 Apr 16 14:53 asm_sysd_103_00 -> ../dm-3
crw-------. 1 root root 10, 236 Apr 16 12:14 control
lrwxrwxrwx. 1 root root       7 Apr 16 12:14 ol-home -> ../dm-2
lrwxrwxrwx. 1 root root       7 Apr 16 12:14 ol-root -> ../dm-0
lrwxrwxrwx. 1 root root       7 Apr 16 12:14 ol-swap -> ../dm-1

[root@oracle76a scst-files]# cat /etc/network/openvswitch/strt_scst.sh
#!/bin/bash
/usr/sbin/iscsiadm --mode node --targetname iqn.2019-04.com.urdomain1:oracle76a.urdomain1.com.san.asm.oracle --portal 127.0.0.1   --login
/usr/sbin/iscsiadm --mode node --targetname iqn.2019-04.com.urdomain1:oracle76a.urdomain1.com.san.asm.oracle --portal 10.210.107.1 --login
/usr/sbin/iscsiadm --mode node --targetname iqn.2019-04.com.urdomain1:oracle76a.urdomain1.com.san.asm.oracle --portal 10.211.107.1 --login

[root@oracle76a scst-files]# systemctl enable scst

scst.service is not a native service, redirecting to /sbin/chkconfig.
Executing /sbin/chkconfig scst on

[root@oracle76a scst-files]#

Next, verify the config files of the LXC containers that will be in the Oracle GI 18c cluster to ensure they present into the containers the LUNs that will be shared and used for ASM.  Verify the config files as shown below.  When Orabuntu-LXC runs with the oracle-gi-18c product selected, it creates the /dev/lxc_luns directory and uncomments the lxc.mount.entry shown below so that the LXC containers will start.  Once create-scst.sh has run, actual LUNs ready for ASM use will be present at the /dev/lxc_luns location.

[ubuntu@oracle76a ~]$ su - root
Password:
Last login: Tue Apr 16 16:32:13 CDT 2019 on pts/5

[root@oracle76a ~]# cd /var/lib/lxc

[root@oracle76a lxc]# grep lxc_luns ora76c*/config

ora76c74/config:lxc.mount.entry = /dev/lxc_luns /var/lib/lxc/ora76c74/rootfs/dev/lxc_luns none defaults,bind,create=dir 0 0
ora76c75/config:lxc.mount.entry = /dev/lxc_luns /var/lib/lxc/ora76c75/rootfs/dev/lxc_luns none defaults,bind,create=dir 0 0
ora76c76/config:lxc.mount.entry = /dev/lxc_luns /var/lib/lxc/ora76c76/rootfs/dev/lxc_luns none defaults,bind,create=dir 0 0
ora76c77/config:lxc.mount.entry = /dev/lxc_luns /var/lib/lxc/ora76c77/rootfs/dev/lxc_luns none defaults,bind,create=dir 0 0

[root@oracle76a lxc]#

Next, verify that on the LXC host the command "ulimit -Hn" returns 65536, as shown below.  If it does not, check the /etc/systemd/system.conf file and make sure that "DefaultLimitNOFILE=65536" is set, then reboot the LXC physical host machine or VM so that the setting takes effect, and verify after the host comes back up that the NOFILE setting is 65536.

[root@oracle76a lxc]# ulimit -Hn
65536
[root@oracle76a lxc]#

[root@oracle76a lxc]# cat /etc/systemd/system.conf | grep 65536
DefaultLimitNOFILE=65536
[root@oracle76a lxc]#

Now it is best to reboot the LXC host (although a reboot is not required) and verify after the reboot that:
  • All required networks come up in all RAC LXC containers [public, private (4 networks), and ASM (2 networks)]
  • "ulimit -Hn" run as the grid user in all RAC LXC containers returns "65536"
  • "service ntpd status" run as the root user in all RAC LXC containers reports a normally running ntpd service
  • "ntpq -p" run as the root user in all RAC LXC containers returns both the host and some external time sources
  • The "/etc/hosts" file in all RAC LXC containers has no references to the LXC hostname (DNS is being used)
  • The "search" string in the "/etc/resolv.conf" file in all RAC LXC containers contains only "urdomain1.com gns1.urdomain1.com"
  • All RAC LXC containers have the "cvuqdisk" rpm installed
  • "nslookup" quickly resolves LXC container name lookups (both forward and reverse)
  • "nslookup" quickly resolves the Oracle GI GNS delegated subdomain IP (the xxx.yyy.zzz.3 address, in this example 10.209.53.3)
Some of these checks are shown below.

[ubuntu@oracle76a anylinux]$ ssh root@ora76c74

root@ora76c74's password:
Last login: Thu Apr 18 10:38:56 2019

[root@ora76c74 ~]# ulimit -Hn
65536

[root@ora76c74 ~]# service ntpd status

Redirecting to /bin/systemctl status ntpd.service
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-04-18 12:42:59 CDT; 27min ago
  Process: 569 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 599 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─599 /usr/sbin/ntpd -u ntp:ntp -x -u ntp:ntp -p /var/run/ntpd.pid -g

Apr 18 12:43:33 ora76c74 ntpd[599]: 0.0.0.0 c615 05 clock_sync
Apr 18 12:47:45 ora76c74 ntpd[599]: Listen normally on 18 eth1:1 169.254.3.20 UDP 123
Apr 18 12:47:45 ora76c74 ntpd[599]: Listen normally on 19 eth2:1 169.254.11.100 UDP 123
Apr 18 12:47:45 ora76c74 ntpd[599]: Listen normally on 20 eth3:1 169.254.23.245 UDP 123
Apr 18 12:47:45 ora76c74 ntpd[599]: Listen normally on 21 eth4:1 169.254.24.64 UDP 123
Apr 18 12:47:45 ora76c74 ntpd[599]: new interface(s) found: waking up resolver
Apr 18 12:53:00 ora76c74 ntpd[599]: Listen normally on 22 eth0:1 10.209.53.111 UDP 123
Apr 18 12:53:00 ora76c74 ntpd[599]: new interface(s) found: waking up resolver
Apr 18 12:53:02 ora76c74 ntpd[599]: Listen normally on 23 eth0:2 10.209.53.74 UDP 123
Apr 18 12:53:02 ora76c74 ntpd[599]: new interface(s) found: waking up resolver

[root@ora76c74 ~]# ntpq -p

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ubuntu-ThinkPad 91.189.91.157    3 u   39  128  377    0.619  466.910 159.732
+io.crash-overri 139.78.97.128    2 u   63  128  377   37.624  694.778 123.639
+50-76-34-188-ip .GPS.            1 u   56   64  377   68.136  591.388  77.987
+216.6.2.70      58.148.140.87    2 u  131  128  377   62.134  739.550 123.163
 192.138.210.214 .INIT.          16 u    - 1024    0    0.000    0.000   0.000

[root@ora76c74 ~]# cat /etc/hosts

127.0.0.1 localhost

[root@ora76c74 ~]# cat /etc/resolv.conf

; generated by /usr/sbin/dhclient-script
search urdomain1.com gns1.urdomain1.com
nameserver 10.209.53.2

[root@ora76c74 ~]# rpm -qa | grep cvuqdisk

cvuqdisk-1.0.10-1.x86_64

[root@ora76c74 ~]# nslookup 10.209.53.104

Server:        10.209.53.2
Address:    10.209.53.2#53

104.53.209.10.in-addr.arpa    name = ora76c74.urdomain1.com.

[root@ora76c74 ~]# nslookup 10.209.53.3

Server:        10.209.53.2
Address:    10.209.53.2#53

3.53.209.10.in-addr.arpa    name = lxc1-gns-vip.urdomain1.com.

[root@ora76c74 ~]#

It is also a good idea at this point to double-check the LUNs that will be used for ASM on all the RAC LXC containers, as shown below.  A loop for running such checks across all the containers is sketched after the transcript.

[grid@ora76c74 ~]$ ls -l /dev/lxc_luns/
total 0
brw-rw----. 1 grid asmadmin 252, 4 Apr 16 14:53 asm_data_103_00
brw-rw----. 1 grid asmadmin 252, 5 Apr 16 14:53 asm_mgmt_103_00
brw-rw----. 1 grid asmadmin 252, 3 Apr 16 14:53 asm_sysd_103_00
[grid@ora76c74 ~]$ ssh ora76c75
[grid@ora76c75 ~]$ ls -l /dev/lxc_luns
total 0
brw-rw----. 1 grid asmadmin 252, 4 Apr 16 14:53 asm_data_103_00
brw-rw----. 1 grid asmadmin 252, 5 Apr 16 14:53 asm_mgmt_103_00
brw-rw----. 1 grid asmadmin 252, 3 Apr 16 14:53 asm_sysd_103_00
[grid@ora76c75 ~]$ ssh ora76c76
[grid@ora76c76 ~]$ ls -l /dev/lxc_luns
total 0
brw-rw----. 1 grid asmadmin 252, 4 Apr 16 14:53 asm_data_103_00
brw-rw----. 1 grid asmadmin 252, 5 Apr 16 14:53 asm_mgmt_103_00
brw-rw----. 1 grid asmadmin 252, 3 Apr 16 14:53 asm_sysd_103_00
[grid@ora76c76 ~]$ ssh ora76c77
[grid@ora76c77 ~]$ ls -l /dev/lxc_luns
total 0
brw-rw----. 1 grid asmadmin 252, 4 Apr 16 14:53 asm_data_103_00
brw-rw----. 1 grid asmadmin 252, 5 Apr 16 14:53 asm_mgmt_103_00
brw-rw----. 1 grid asmadmin 252, 3 Apr 16 14:53 asm_sysd_103_00
[grid@ora76c77 ~]$
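
Rather than hopping from node to node, the per-container checks can be looped from the LXC host; a sketch assuming the four container names in this example (each ssh may prompt for a password unless key-based SSH is configured):

for node in ora76c74 ora76c75 ora76c76 ora76c77
do
    echo "=== ${node} ==="
    ssh grid@${node} 'ulimit -Hn; rpm -q cvuqdisk; ls /dev/lxc_luns'
done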

Below are some commands that can be run on the LXC host in separate terminals to monitor the progress of the Oracle GI 18c install in the LXC containers once the graphical GI installer has been started.

[ubuntu@oracle76a ~]$ ps -ef | grep cssd | rev | sort | rev

[ubuntu@oracle76a ~]$ ps -ef | grep grid | grep -v java

[ubuntu@oracle76a ~]$ ps -ef | grep pmon

[ubuntu@oracle76a ~]$ ps -ef | grep ASM

[ubuntu@oracle76a ~]$ ps -ef | grep ASM | grep -v java
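
Any of these can be kept refreshed on screen by wrapping them in "watch"; for example (the [p] in the pattern keeps grep from matching its own process):

watch -n 5 "ps -ef | grep [p]mon"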

Below is a command that can be run in the LXC Oracle GI 18c "installer" container (in this example ora76c74) to monitor the progress of the root.sh script during the Oracle GI 18c deployment (when the installer reaches that step).  It is run from the Grid Infrastructure alert log's trace directory, as the prompt below shows.

[grid@ora76c74 trace]$ tail -f alert.log

Now run the gridSetup.sh script to install Oracle Grid Infrastructure as shown below.

ubuntu@ubuntu-ThinkPad-P72:~/Downloads$ ssh -X grid@ora76c74
grid@ora76c74's password:
Last login: Tue Apr 16 13:55:41 2019 from 10.209.53.1
[grid@ora76c74 ~]$ cd $ORACLE_HOME
[grid@ora76c74 grid]$ ./gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...


Now follow the installer steps as shown below.

As shown below, use the ".3" fourth octet with whatever public network address range you have set.  This example uses the default "10.209.53" with ".3", which is where the GNS delegated subdomain is automatically provided by Orabuntu-LXC.  You can choose any cluster name that complies with the naming requirements for clusters.  When the "Cluster Name" is typed in, the "SCAN Name" will auto-populate with "ClusterName-scan", and then when the "GNS Sub Domain" is typed in, the installer will automatically complete the "SCAN Name" with that information as well, as shown below.  In short, Orabuntu-LXC is preconfigured to provide GNS at the ".3" address of your chosen public network range, in this example "10.209.53.3".


Add the additional RAC LXC container nodes as shown in the following screenshots.  Note that you can add groups of server names in batches using a regular expression, as shown below, to speed up entry of the names and reduce the amount of typing.

Next, no-password SSH must be configured as shown in the screenshots below.  The password for the "grid" user, which owns the Oracle 18c GI install, is set by Orabuntu-LXC to "grid" by default.  You can change the password for "grid" before running the installer if desired.  In this example, the username and password are left as "grid / grid".
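
The installer can set up the SSH equivalence itself from this screen, but it can also be pre-staged manually if preferred; a sketch run as grid on the first node, assuming the example node names (skip ssh-keygen if a key already exists):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for node in ora76c74 ora76c75 ora76c76 ora76c77
do
    ssh-copy-id grid@${node}
done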

Click on Next to go to the next screen.  The installer re-tests the passwordless SSH connectivity as shown below.


Configure the private and ASM networks as shown below.  By default, the Orabuntu-LXC oracle-gi-18c Product configuration script provides 6 additional OpenvSwitch networks for the Private Interconnect and ASM Flex Cluster networks.  In this example, four networks are dedicated to the Private Interconnect and two to ASM Flex Cluster networking.  Keep the same allocation scheme, or allocate the networks differently, then click Next, as shown below.

The LUNs for ASM storage are at /dev/lxc_luns on each RAC LXC container node.  Change the discovery path (to "/dev/lxc_luns/*", for example) so that the LUNs are visible to the Oracle GI 18c installer, as shown below.


In this example, the SYS diskgroup is given the name "SYSD103", with the "103" matching the numeric identifier in the LUN names.  The Orabuntu-LXC SCST installer assigns a numeric postfix to the LUNs on each host that corresponds to the last octet of the host's DHCP IP address on the public network.  This makes it easy to tell which storage is being used in multihost Orabuntu-LXC deployments.


Assign storage to the MGMT diskgroup, similar to the way the SYSD storage was assigned, as shown below.


Choose a password scheme, as shown below.  In this example, all passwords are set to "Orabuntu#1".

If Enterprise Manager is running in the environment, this option can be set.  Otherwise, leave it unchecked, as shown below.

The ORACLE_BASE should auto-populate to the value that Orabuntu-LXC sets by default, as shown below.


The Oracle Inventory should auto-populate to the value set by Orabuntu-LXC as shown below.


The default root password for Orabuntu-LXC containers is "root".  It can be changed before running the GI installer if desired.  Otherwise, type in "root" here for the root credential.


The cluster verification pre-install checks run as shown below.

These check warnings and failures are all acceptable and are normal when installing Oracle 18c GI in LXC containers.  When the GI install completes, the cluster verification will be 100% successful even with the warnings shown below.  These should typically be the only warnings that are ignored.

If warnings are given about NTP, those must all be resolved and the "Check Again" button clicked until there are no NTP errors.  Orabuntu-LXC automatically configures NTP in all the LXC containers, so no manual NTP configuration should be necessary and there should be no NTP errors at this step.  If the installer is launched with any NTP errors ignored, the GI install will fail, so a 100% successful NTP check is a requirement for installing Oracle GI 18c in LXC Linux containers.

Save the response file as shown below.  It is useful for configuring a silent install later, if desired, for additional installs of Oracle 18c GI in other LXC container clusters.
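
With the saved response file, a later cluster can be installed without the GUI; a hedged example (the response file path is illustrative):

./gridSetup.sh -silent -responseFile /home/grid/grid_install.rsp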

When the popup dialog asks about running the root scripts, answer yes; earlier in the install the option to have the installer run the root scripts automatically was chosen.  Click the "Yes" button as shown below.

The Cluster Verification Utility reports a 100% successful install result.  Click "Close" to complete the install, as shown below.