This page (revision-3) was last changed on 17-Feb-2022 15:10 by Olaf Bohlen

This page was created on 16-Feb-2022 18:15 by Olaf Bohlen

Page revision history

|| Version || Date Modified || Size || Author || Change note
| 3 | 17-Feb-2022 15:10 | 16 KB | Olaf Bohlen |
| 2 | 16-Feb-2022 20:30 | 12 KB | Olaf Bohlen |
| 1 | 16-Feb-2022 18:15 | 10 KB | Olaf Bohlen | first part


At line 251 changed 2 lines
!! Installing OpenShift
! Download required material
!! Download required material
At line 293 removed one line
! Create Ignition Files
At line 295 removed 139 lines
After creating the install-config.yaml we can process it to produce Ignition files for Red Hat CoreOS. We first create the manifests (which you could review or customize, but we don't) and then generate the Ignition configs:
{{{
[localadm@bastion ocp4]$ ls
install-config.yaml
[localadm@bastion ocp4]$ openshift-install create manifests
[...]
[localadm@bastion ocp4]$ openshift-install create ignition-configs
[...]
[localadm@bastion ocp4]$ ls
auth metadata.json worker.ign bootstrap.ign master.ign
}}}
The Ignition files (*.ign) need to be pushed to a web server that is reachable by the OpenShift nodes.
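For example, copying them into place from the bastion could look like this (a sketch: direct SSH access to the web server at 172.18.1.80 and a document root of /var/www/html are assumptions; the target path matches the ignition_url used in the grub configs below):
{{{
[localadm@bastion ocp4]$ scp *.ign 172.18.1.80:/var/www/html/install/ocp4/ignitions/
}}}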
! Set up PXE boot
The relevant part of my dhcpd4.conf looks like this:
{{{
subnet 172.18.0.0 netmask 255.255.252.0 {
    range 172.18.3.101 172.18.3.101;
    option routers 172.18.0.200;
    option domain-name "srv.home.eenfach.de";
    option domain-search "srv.home.eenfach.de","home.eenfach.de","eenfach.de";
    option domain-name-servers 172.18.1.53;
    option subnet-mask 255.255.224.0;
    option ntp-servers 192.53.103.108,192.53.103.104,192.53.103.103;
    group {
        filename "shimx64.efi";
        next-server 172.18.1.67;
        host master00.ocp4.home.eenfach.de {
            hardware ethernet 2:8:20:61:3b:13;
            fixed-address 172.18.3.10;
            option host-name "master00";
            [...]
        }
    }
}
}}}
Of course you want to put host entries for all masters, workers, and the bootstrap node in there as well; the bootstrap entry is sketched below.
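A sketch of the bootstrap host entry (the MAC address is derived from the grub config file name shown further down, the IP from the DNS zone below; adjust both to your VMs):
{{{
host bootstrap.ocp4.home.eenfach.de {
    hardware ethernet 2:8:20:3b:26:2;
    fixed-address 172.18.3.5;
    option host-name "bootstrap";
}
}}}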
Then you need the shimx64.efi and grubx64.efi binaries; on RHEL, shimx64.efi ships in the shim-x64.x86_64 RPM and grubx64.efi in the grub2-efi-x64.x86_64 RPM. I just installed them on the bastion and copied the files to /tftpboot on my DHCP server.
The reason is that bhyve boots UEFI only, while the classic pxelinux from RHEL only supports BIOS boots, so we use the grub/shim boot path instead.
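A minimal sketch of that step, assuming RHEL 8 package names and the default EFI paths (skirnir being the DHCP/TFTP server):
{{{
[localadm@bastion ~]$ sudo dnf install -y shim-x64 grub2-efi-x64
[localadm@bastion ~]$ scp /boot/efi/EFI/redhat/shimx64.efi \
      /boot/efi/EFI/redhat/grubx64.efi skirnir:/tftpboot/
}}}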
Create a grub config for every MAC address; grub's network boot looks for a file named grub.cfg-01-<mac> (MAC lowercase, dash-separated), as in the examples below.
An example for a master:
{{{
root@skirnir:/tftpboot# cat grub.cfg-01-02-08-20-22-c1-10
menuentry 'Master: Install Red Hat Enterprise Linux CoreOS' --class fedora --class gnu-linux --class gnu --class os {
    linuxefi rhcos-4.6.1-x86_64-live-kernel-x86_64 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url=http://172.18.1.80/install/ocp4/rhcos-4.6.1-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://172.18.1.80/install/ocp4/ignitions/master.ign console=tty0 console=ttyS0
    initrdefi rhcos-4.6.1-x86_64-live-initramfs.x86_64.img
}
}}}
An example for a worker:
{{{
root@skirnir:/tftpboot# cat grub.cfg-01-02-08-20-12-51-4d
menuentry 'Worker: Install Red Hat Enterprise Linux CoreOS' --class fedora --class gnu-linux --class gnu --class os {
    linuxefi rhcos-4.6.1-x86_64-live-kernel-x86_64 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url=http://172.18.1.80/install/ocp4/rhcos-4.6.1-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://172.18.1.80/install/ocp4/ignitions/worker.ign console=tty0 console=ttyS0
    initrdefi rhcos-4.6.1-x86_64-live-initramfs.x86_64.img
}
}}}
And an example for the bootstrap node:
{{{
root@skirnir:/tftpboot# cat grub.cfg-01-02-08-20-3b-26-02
menuentry 'Bootstrap: Install Red Hat Enterprise Linux CoreOS' --class fedora --class gnu-linux --class gnu --class os {
    linuxefi rhcos-4.6.1-x86_64-live-kernel-x86_64 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url=http://172.18.1.80/install/ocp4/rhcos-4.6.1-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://172.18.1.80/install/ocp4/ignitions/bootstrap.ign console=tty0 console=ttyS0
    initrdefi rhcos-4.6.1-x86_64-live-initramfs.x86_64.img
}
}}}
Ensure that you have rhcos-4.6.1-x86_64-live-initramfs.x86_64.img and rhcos-4.6.1-x86_64-live-kernel-x86_64 in /tftpboot as well, and that rhcos-4.6.1-x86_64-live-rootfs.x86_64.img is accessible at the web URL specified above (verify with curl, for example).
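A quick check from the bastion; a 200 response means the rootfs image is reachable:
{{{
[localadm@bastion ~]$ curl -I http://172.18.1.80/install/ocp4/rhcos-4.6.1-x86_64-live-rootfs.x86_64.img
}}}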
! Setting up DNS
My DNS zone looks like this:
{{{
root@voluspa:/var/named# cat ocp4.home.eenfach.de
$TTL 3600
@ IN SOA ocp4.home.eenfach.de. hostmaster.eenfach.de. (
        2022021500 ; Serial
        3600 ; Refresh
        300 ; Retry
        3600000 ; Expire
        3600 ) ; Minimum
        IN NS voluspa.srv.home.eenfach.de.
bastion IN A 172.18.3.1
bootstrap IN A 172.18.3.5
master00 IN A 172.18.3.10
etcd00 IN A 172.18.3.10
_etcd-server-ssl._tcp.ocp4 IN SRV 0 10 2380 etcd00
master01 IN A 172.18.3.20
etcd01 IN A 172.18.3.20
_etcd-server-ssl._tcp.ocp4 IN SRV 0 10 2380 etcd01
master02 IN A 172.18.3.30
etcd02 IN A 172.18.3.30
_etcd-server-ssl._tcp.ocp4 IN SRV 0 10 2380 etcd02
worker00 IN A 172.18.3.50
worker01 IN A 172.18.3.60
*.apps IN A 172.18.3.100
api IN A 172.18.3.100
api-int IN A 172.18.3.100
dns IN CNAME voluspa.srv.home.eenfach.de.
}}}
Ensure that you also set up matching PTR records; a sketch of the reverse zone follows.
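The reverse zone could look like this (a sketch: the zone and file name 3.18.172.in-addr.arpa and the SOA values mirroring the forward zone are assumptions):
{{{
root@voluspa:/var/named# cat 3.18.172.in-addr.arpa
$TTL 3600
@ IN SOA voluspa.srv.home.eenfach.de. hostmaster.eenfach.de. (
        2022021500 ; Serial
        3600 ; Refresh
        300 ; Retry
        3600000 ; Expire
        3600 ) ; Minimum
        IN NS voluspa.srv.home.eenfach.de.
1   IN PTR bastion.ocp4.home.eenfach.de.
5   IN PTR bootstrap.ocp4.home.eenfach.de.
10  IN PTR master00.ocp4.home.eenfach.de.
20  IN PTR master01.ocp4.home.eenfach.de.
30  IN PTR master02.ocp4.home.eenfach.de.
50  IN PTR worker00.ocp4.home.eenfach.de.
60  IN PTR worker01.ocp4.home.eenfach.de.
100 IN PTR api.ocp4.home.eenfach.de.
}}}
You can verify both directions with dig, for example dig +short api.ocp4.home.eenfach.de @172.18.1.53 and dig +short -x 172.18.3.10 @172.18.1.53.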
That should be it; now we can kick off the installations.
! Starting the RHCOS installation
Boot your bootstrap zone, attach to its console with zlogin -C bootstrap, and watch the output.
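For example, from the global zone hosting the bhyve zones (a sketch; the zone name matches this setup):
{{{
# boot the bootstrap zone and attach to its console
zoneadm -z bootstrap boot
zlogin -C bootstrap
}}}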
When the grub menu appears, hit return (or put timeout values in the grub.cfg files) and let the boot proceed. The bootstrap will reboot after a while and finally it will show a login prompt.
Now boot your master zones as well, watch their consoles too, and wait until they show the login prompt; after that, do the same with the workers.
! Observing the installation
On the bastion node, observe the installation with
{{{ openshift-install wait-for bootstrap-complete }}}
At some point it will report that it is now safe to remove the bootstrap node. Log in to the load balancer zone and remove the bootstrap from the masters server group, as sketched below.
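If haproxy balances the API traffic (an assumption about this setup; whatever balances it in yours works the same way), that means commenting out or deleting the bootstrap entries and reloading the service:
{{{
# /etc/haproxy/haproxy.cfg: drop the bootstrap from the backends
# for ports 6443 (API) and 22623 (machine config), e.g.:
#   server bootstrap 172.18.3.5:6443 check
#   server bootstrap 172.18.3.5:22623 check
# then reload haproxy via your service manager
}}}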
Then shut down the bootstrap.
Back on the bastion, run
{{{ openshift-install wait-for install-complete --log-level=debug }}}
Open another shell to the bastion and watch the cluster:
{{{
export KUBECONFIG=$HOME/ocp4/auth/kubeconfig
oc get nodes; oc get csr
}}}
You should see the nodes getting ready. If CSRs are in a Pending state, approve them with "oc adm certificate approve <csr-name>".
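To approve everything pending in one go, something like this works (a sketch; it simply pipes all CSR names into the approve command):
{{{
[localadm@bastion ~]$ oc get csr -o name | xargs oc adm certificate approve
}}}
You may have to repeat this: the serving-certificate CSRs only show up after the initial client CSRs have been approved.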
Depending on your hardware and network this takes some time, but eventually your cluster should be ready.