= ocp4-workload-quarkus-workshop - Deploy Quarkus Workshop into OCP4

== Role overview

* This role deploys the Quarkus workshop into an OpenShift 4 cluster. It depends on infrastructure nodes existing. It consists of the following playbooks:
** Playbook: link:./tasks/pre_workload.yml[pre_workload.yml] - Sets up an environment for the workload deployment.
*** Debug task will print out: `pre_workload Tasks completed successfully.`
** Playbook: link:./tasks/workload.yml[workload.yml] - Used to deploy the workshop.
*** Debug task will print out: `workload Tasks completed successfully.`
** Playbook: link:./tasks/post_workload.yml[post_workload.yml] - Used to configure the workload after deployment.
*** This role doesn't do anything here.
*** Debug task will print out: `post_workload Tasks completed successfully.`
** Playbook: link:./tasks/remove_workload.yml[remove_workload.yml] - Used to delete the workload.
*** This role removes the workshop deployment and project but not the operator configs.
*** Debug task will print out: `remove_workload Tasks completed successfully.`

== Review the defaults variable file

* The file link:./defaults/main.yml[./defaults/main.yml] contains all the variables you need to define to control the deployment of your workload.
* The variable *ocp_username* is mandatory to assign the workload to the correct OpenShift user.
* A variable *silent=True* can be passed to suppress debug messages.
* You can override any of these default values by adding `-e "variable_name=variable_value"` to the command line.

=== Deploy a Workload with the `ocp-workload` playbook [Mostly for testing]

----
TARGET_HOST="bastion.orlando-2c83.openshiftworkshop.com"
OCP_USERNAME="jfalkner-redhat.com"
WORKLOAD="ocp4-workload-quarkus-workshop"
GUID=2c83
SSH_KEY=~/.ssh/id_rsa

# a TARGET_HOST is specified on the command line, without using an inventory file
ansible-playbook -i ${TARGET_HOST}, ./configs/ocp-workloads/ocp-workload.yml \
    -e"ansible_ssh_private_key_file=${SSH_KEY}" \
    -e"ansible_user=${OCP_USERNAME}" \
    -e"ocp_username=${OCP_USERNAME}" \
    -e"ocp_workload=${WORKLOAD}" \
    -e"silent=False" \
    -e"guid=${GUID}" \
    -e"user_count=50" \
    -e"subdomain_base=${TARGET_HOST}" \
    -e"ACTION=create"
----

=== To Delete an environment

----
TARGET_HOST="bastion.orlando-2c83.openshiftworkshop.com"
OCP_USERNAME="jfalkner-redhat.com"
WORKLOAD="ocp4-workload-quarkus-workshop"
GUID=2c83
SSH_KEY=~/.ssh/id_rsa

# a TARGET_HOST is specified on the command line, without using an inventory file
ansible-playbook -i ${TARGET_HOST}, ./configs/ocp-workloads/ocp-workload.yml \
    -e"ansible_ssh_private_key_file=${SSH_KEY}" \
    -e"ansible_user=${OCP_USERNAME}" \
    -e"ocp_username=${OCP_USERNAME}" \
    -e"ocp_workload=${WORKLOAD}" \
    -e"silent=False" \
    -e"guid=${GUID}" \
    -e"user_count=50" \
    -e"subdomain_base=${TARGET_HOST}" \
    -e"ACTION=remove"
----

== Other related information

=== Deploy Workload on OpenShift Cluster from an existing playbook

[source,yaml]
----
- name: Deploy a workload role on a master host
  hosts: all
  become: true
  gather_facts: False
  tags:
    - step007
  roles:
    - { role: "{{ocp_workload}}", when: 'ocp_workload is defined' }
----

NOTE: You might want to change `hosts: all` to fit your requirements.

=== Set up your Ansible inventory file

* You can create an Ansible inventory file to define your connection method to your host (Master/Bastion with the `oc` command).
* You can also define the hosts directly on the command line if your `ssh` configuration is set up to connect to them correctly.
* You can also target localhost if your cluster is already authenticated and configured in your `oc` configuration, as in the sketch below.
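For example, if `oc` is already logged in to the target cluster on the machine where you run Ansible, you can skip the inventory and SSH variables entirely and use a local connection. This is a minimal sketch reusing the variables from the examples above; which extra variables your run needs depends on the workload.

----
# Hypothetical local run: assumes `oc` is already logged in to the
# target cluster, so no SSH connection variables are needed.
ansible-playbook -i localhost, -c local ./configs/ocp-workloads/ocp-workload.yml \
    -e"ocp_username=${OCP_USERNAME}" \
    -e"ocp_workload=${WORKLOAD}" \
    -e"guid=${GUID}" \
    -e"ACTION=create"
----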
.Example inventory file
[source, ini]
----
[gptehosts:vars]
ansible_ssh_private_key_file=~/.ssh/keytoyourhost.pem
ansible_user=ec2-user

[gptehosts:children]
openshift

[openshift]
bastion.cluster1.openshift.opentlc.com
bastion.cluster2.openshift.opentlc.com
bastion.cluster3.openshift.opentlc.com
bastion.cluster4.openshift.opentlc.com

[dev]
bastion.cluster1.openshift.opentlc.com
bastion.cluster2.openshift.opentlc.com

[prod]
bastion.cluster3.openshift.opentlc.com
bastion.cluster4.openshift.opentlc.com
----
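With an inventory file like this saved as, say, `inventory` (the filename and group choice here are illustrative, not part of the role), you could run the workload against a whole group rather than a single bastion:

----
# Run the workload playbook against the hosts in the [dev] group,
# using the connection settings defined in the inventory file above.
ansible-playbook -i inventory -l dev ./configs/ocp-workloads/ocp-workload.yml \
    -e"ocp_username=${OCP_USERNAME}" \
    -e"ocp_workload=${WORKLOAD}" \
    -e"guid=${GUID}" \
    -e"ACTION=create"
----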