Guillaume Coré
2019-09-16 c39f4058991220edda54742b7f4b30786d355412
= ocp4-workload-quarkus-workshop - Deploy Quarkus Workshop into OCP4

== Role overview

* This role deploys the Quarkus Workshop into an OpenShift 4 cluster. It consists of the following playbooks:
** Playbook: link:./tasks/pre_workload.yml[pre_workload.yml] - Sets up an environment for the workload deployment.
*** Debug task will print out: `pre_workload Tasks completed successfully.`

** Playbook: link:./tasks/workload.yml[workload.yml] - Used to deploy the Quarkus Workshop
*** Debug task will print out: `workload Tasks completed successfully.`

** Playbook: link:./tasks/post_workload.yml[post_workload.yml] - Used to configure the workload after deployment
*** This role doesn't do anything here
*** Debug task will print out: `post_workload Tasks completed successfully.`

** Playbook: link:./tasks/remove_workload.yml[remove_workload.yml] - Used to delete the workload
*** This role removes the workshop deployment and project but not the operator configs
*** Debug task will print out: `remove_workload Tasks completed successfully.`
22
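The tasks file that runs is typically selected by the `ACTION` variable passed to the `ocp-workload` playbook. A sketch of that dispatch logic, assuming the common agnosticd pattern (the role's actual `tasks/main.yml` may differ):

[source,yaml]
----
# Hypothetical tasks/main.yml dispatch -- illustrative only
- import_tasks: workload.yml
  when: ACTION == "create" or ACTION == "provision"

- import_tasks: remove_workload.yml
  when: ACTION == "destroy" or ACTION == "remove"
----
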
== Review the defaults variable file

* This file link:./defaults/main.yml[./defaults/main.yml] contains all the variables you need to define to control the deployment of your workload.
* The variable *ocp_username* is mandatory to assign the workload to the correct OpenShift user.
* The variable *silent* can be set to `True` to suppress the debug messages.
* You can override any of these default values by adding `-e "variable_name=variable_value"` to the command line.

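A minimal sketch of what such a defaults file might contain (the values below are illustrative, not the role's actual defaults):

[source,yaml]
----
# ./defaults/main.yml -- illustrative values only
silent: False
ocp_username: opentlc-mgr  # assumed example user; override with -e "ocp_username=..."
----
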
=== Deploy a Workload with the `ocp-workload` playbook [Mostly for testing]

[source,sh]
----
TARGET_HOST="bastion.orlando-2c83.openshiftworkshop.com"
OCP_USERNAME="jfalkner-redhat.com"
WORKLOAD="ocp4-workload-quarkus-workshop"
GUID=2c83
SSH_KEY=~/.ssh/id_rsa
AWS_REGION=us-east-1

# The TARGET_HOST is specified on the command line, without using an inventory file
ansible-playbook -i ${TARGET_HOST}, ./configs/ocp-workloads/ocp-workload.yml \
    -e"ansible_ssh_private_key_file=${SSH_KEY}" \
    -e"ansible_user=${OCP_USERNAME}" \
    -e"ocp_username=${OCP_USERNAME}" \
    -e"ocp_workload=${WORKLOAD}" \
    -e"silent=False" \
    -e"guid=${GUID}" \
    -e"aws_region=${AWS_REGION}" \
    -e"num_users=50" \
    -e"subdomain_base=${TARGET_HOST}" \
    -e"ACTION=create"
----

=== To Delete an environment

[source,sh]
----
TARGET_HOST="bastion.orlando-2c83.openshiftworkshop.com"
OCP_USERNAME="jfalkner-redhat.com"
WORKLOAD="ocp4-workload-quarkus-workshop"
GUID=2c83
SSH_KEY=~/.ssh/id_rsa
AWS_REGION=us-east-1

# The TARGET_HOST is specified on the command line, without using an inventory file
ansible-playbook -i ${TARGET_HOST}, ./configs/ocp-workloads/ocp-workload.yml \
    -e"ansible_ssh_private_key_file=${SSH_KEY}" \
    -e"ansible_user=${OCP_USERNAME}" \
    -e"ocp_username=${OCP_USERNAME}" \
    -e"ocp_workload=${WORKLOAD}" \
    -e"silent=False" \
    -e"guid=${GUID}" \
    -e"aws_region=${AWS_REGION}" \
    -e"num_users=50" \
    -e"subdomain_base=${TARGET_HOST}" \
    -e"ACTION=remove"
----

== Other related information

=== Deploy Workload on OpenShift Cluster from an existing playbook

[source,yaml]
----
- name: Deploy a workload role on a master host
  hosts: all
  become: true
  gather_facts: False
  tags:
    - step007
  roles:
    - { role: "{{ocp_workload}}", when: 'ocp_workload is defined' }
----

NOTE: You might want to change `hosts: all` to fit your requirements
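
For example, to restrict the play to the bastion hosts only, assuming your inventory defines a `bastions` group (the group name is illustrative):

[source,yaml]
----
- name: Deploy a workload role on a bastion host
  hosts: bastions  # assumed group name; match your own inventory
  become: true
  gather_facts: False
  roles:
    - { role: "{{ocp_workload}}", when: 'ocp_workload is defined' }
----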
=== Set up your Ansible inventory file

* You can create an Ansible inventory file to define the connection method to your host (a master or bastion with the `oc` command available)
* You can also define the hosts directly on the command line if your `ssh` configuration already connects to the host correctly
* You can also target `localhost` on the command line if your cluster is already authenticated and configured in your `oc` configuration
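If you take the `localhost` route, a minimal inventory entry might look like this (the group name is illustrative):

[source,ini]
----
# Illustrative local inventory entry
[local]
localhost ansible_connection=local
----
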
.Example inventory file
[source,ini]
----
[gptehosts:vars]
ansible_ssh_private_key_file=~/.ssh/keytoyourhost.pem
ansible_user=ec2-user

[gptehosts:children]
openshift

[openshift]
bastion.cluster1.openshift.opentlc.com
bastion.cluster2.openshift.opentlc.com
bastion.cluster3.openshift.opentlc.com
bastion.cluster4.openshift.opentlc.com

[dev]
bastion.cluster1.openshift.opentlc.com
bastion.cluster2.openshift.opentlc.com

[prod]
bastion.cluster3.openshift.opentlc.com
bastion.cluster4.openshift.opentlc.com
----