ansible/roles/ocp4-workload-cluster-autoscale/defaults/main.yml
ansible/roles/ocp4-workload-cluster-autoscale/readme.adoc
ansible/roles/ocp4-workload-cluster-autoscale/tasks/main.yml
ansible/roles/ocp4-workload-cluster-autoscale/tasks/post_workload.yml
ansible/roles/ocp4-workload-cluster-autoscale/tasks/pre_workload.yml
ansible/roles/ocp4-workload-cluster-autoscale/tasks/remove_workload.yml
ansible/roles/ocp4-workload-cluster-autoscale/tasks/workload.yml
ansible/roles/ocp4-workload-cluster-autoscale/defaults/main.yml
---
become_override: False
ocp_username: opentlc-mgr
silent: False

_project_request_message: "To provision Projects you must request access in https://labs.opentlc.com or https://rhpds.redhat.com"

ansible/roles/ocp4-workload-cluster-autoscale/readme.adoc
= ocp4-workload-cluster-autoscale - Sets up cluster autoscaling and creates machine autoscalers

== Role overview

* This role configures the cluster autoscaler and creates machine autoscalers for the default worker pool. It assumes a default cluster size of 3 workers and allows the cluster to scale up to 9 workers. It consists of the following playbooks:
** Playbook: link:./tasks/pre_workload.yml[pre_workload.yml] - Sets up an environment for the workload deployment.
*** Debug task will print out: `pre_workload Tasks completed successfully.`
** Playbook: link:./tasks/workload.yml[workload.yml] - Used to set up the autoscalers.
*** Debug task will print out: `workload Tasks completed successfully.`
** Playbook: link:./tasks/post_workload.yml[post_workload.yml] - Used to configure the workload after deployment.
*** This role doesn't do anything here.
*** Debug task will print out: `post_workload Tasks completed successfully.`
** Playbook: link:./tasks/remove_workload.yml[remove_workload.yml] - Used to delete the workload.
*** This role deletes the cluster autoscaler and all machine autoscalers.
*** Debug task will print out: `remove_workload Tasks completed successfully.`

== Review the defaults variable file

* The file link:./defaults/main.yml[./defaults/main.yml] contains all the variables you need to define to control the deployment of your workload.
* The variable *ocp_username* is mandatory to assign the workload to the correct OpenShift user.
* A variable *silent=True* can be passed to suppress debug messages.
* You can modify any of these default values by adding `-e "variable_name=variable_value"` to the command line.

=== Deploy a Workload with the `ocp-workload` playbook [Mostly for testing]

----
TARGET_HOST="bastion.na311.openshift.opentlc.com"
OCP_USERNAME="shacharb-redhat.com"
WORKLOAD="ocp4-workload-cluster-autoscale"
GUID=1001

# A TARGET_HOST is specified in the command line, without using an inventory file
ansible-playbook -i ${TARGET_HOST}, ./configs/ocp-workloads/ocp-workload.yml \
  -e"ansible_ssh_private_key_file=~/.ssh/keytoyourhost.pem" \
  -e"ansible_user=ec2-user" \
  -e"ocp_username=${OCP_USERNAME}" \
  -e"ocp_workload=${WORKLOAD}" \
  -e"silent=False" \
  -e"guid=${GUID}" \
  -e"ACTION=create"
----

=== To Delete an environment

----
TARGET_HOST="bastion.na311.openshift.opentlc.com"
OCP_USERNAME="opentlc-mgr"
WORKLOAD="ocp4-workload-cluster-autoscale"
GUID=1002

# A TARGET_HOST is specified in the command line, without using an inventory file
ansible-playbook -i ${TARGET_HOST}, ./configs/ocp-workloads/ocp-workload.yml \
  -e"ansible_ssh_private_key_file=~/.ssh/keytoyourhost.pem" \
  -e"ansible_user=ec2-user" \
  -e"ocp_username=${OCP_USERNAME}" \
  -e"ocp_workload=${WORKLOAD}" \
  -e"guid=${GUID}" \
  -e"ACTION=remove"
----

== Other related information

=== Deploy Workload on OpenShift Cluster from an existing playbook

[source,yaml]
----
- name: Deploy a workload role on a master host
  hosts: all
  become: true
  gather_facts: False
  tags:
    - step007
  roles:
    - { role: "{{ocp_workload}}", when: 'ocp_workload is defined' }
----

NOTE: You might want to change `hosts: all` to fit your requirements.

=== Set up your Ansible inventory file

* You can create an Ansible inventory file to define your connection method to your host (master/bastion with the `oc` command).
* You can also use the command line to define the hosts directly if your `ssh` configuration is set up to connect to the host correctly.
* You can also use the command line with localhost if your cluster is already authenticated and configured in your `oc` configuration.

.Example inventory file
[source,ini]
----
[gptehosts:vars]
ansible_ssh_private_key_file=~/.ssh/keytoyourhost.pem
ansible_user=ec2-user

[gptehosts:children]
openshift

[openshift]
bastion.cluster1.openshift.opentlc.com
bastion.cluster2.openshift.opentlc.com
bastion.cluster3.openshift.opentlc.com
bastion.cluster4.openshift.opentlc.com

[dev]
bastion.cluster1.openshift.opentlc.com
bastion.cluster2.openshift.opentlc.com

[prod]
bastion.cluster3.openshift.opentlc.com
bastion.cluster4.openshift.opentlc.com
----

ansible/roles/ocp4-workload-cluster-autoscale/tasks/main.yml
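In addition to the bastion-driven examples above, the role's ACTION-based entry point in tasks/main.yml can be exercised from a short wrapper playbook run wherever `oc` is already authenticated. This is only a sketch: it assumes the role is on Ansible's role path and that localhost holds a valid kubeconfig session; all values are illustrative.

```yaml
# Hypothetical wrapper playbook (assumptions: role is resolvable on the
# role path, localhost has an authenticated oc/kubeconfig session).
- name: Apply the cluster autoscaler workload locally
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    ACTION: create          # "remove" or "destroy" runs remove_workload.yml instead
    ocp_username: opentlc-mgr
    silent: false
  roles:
    - ocp4-workload-cluster-autoscale
```

Changing `ACTION` to `remove` drives the removal path dispatched by tasks/main.yml below.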
---
# Do not modify this file

- name: Running Pre Workload Tasks
  import_tasks: ./pre_workload.yml
  become: "{{ become_override | bool }}"
  when: ACTION == "create" or ACTION == "provision"

- name: Running Workload Tasks
  import_tasks: ./workload.yml
  become: "{{ become_override | bool }}"
  when: ACTION == "create" or ACTION == "provision"

- name: Running Post Workload Tasks
  import_tasks: ./post_workload.yml
  become: "{{ become_override | bool }}"
  when: ACTION == "create" or ACTION == "provision"

- name: Running Workload removal Tasks
  import_tasks: ./remove_workload.yml
  become: "{{ become_override | bool }}"
  when: ACTION == "destroy" or ACTION == "remove"

ansible/roles/ocp4-workload-cluster-autoscale/tasks/post_workload.yml
---
# Implement your Post Workload deployment tasks here

# Leave this as the last task in the playbook.
- name: post_workload tasks complete
  debug:
    msg: "Post-Workload Tasks completed successfully."
  when: not silent|bool

ansible/roles/ocp4-workload-cluster-autoscale/tasks/pre_workload.yml
---
# Implement your Pre Workload deployment tasks here

# Leave this as the last task in the playbook.
- name: pre_workload tasks complete
  debug:
    msg: "Pre-Workload tasks completed successfully."
  when: not silent|bool

ansible/roles/ocp4-workload-cluster-autoscale/tasks/remove_workload.yml
New file @@ -0,0 +1,31 @@ # vim: set ft=ansible --- # Implement your Workload removal tasks here - name: delete cluster autoscaler k8s: state: absent definition: apiVersion: "autoscaling.openshift.io/v1alpha1" kind: "ClusterAutoscaler" metadata: name: "default" - name: get machine auto scalers k8s_facts: api_version: autoscaling.openshift.io/v1alpha1 kind: MachineAutoscaler namespace: openshift-machine-api register: machine_auto_scalers - name: delete machine autoscalers k8s: state: absent definition: "{{ item }}" with_items: "{{ machine_auto_scalers.resources }}" # Leave this as the last task in the playbook. - name: remove_workload tasks complete debug: msg: "Remove Workload tasks completed successfully." when: not silent|bool ansible/roles/ocp4-workload-cluster-autoscale/tasks/workload.yml
New file @@ -0,0 +1,50 @@ # vim: set ft=ansible --- # Implement your Workload deployment tasks here - name: get current machinesets k8s_facts: api_version: machine.openshift.io/v1beta1 kind: MachineSet namespace: openshift-machine-api register: machinesets_list - name: create machine autoscaler for each machineset k8s: state: present definition: apiVersion: "autoscaling.openshift.io/v1alpha1" kind: MachineAutoscaler metadata: name: "autoscale-{{ item.metadata.name }}" namespace: "openshift-machine-api" spec: minReplicas: 1 maxReplicas: 4 scaleTargetRef: apiVersion: "machine.openshift.io/v1beta1" kind: MachineSet name: "{{ item.metadata.name }}" with_items: "{{ machinesets_list.resources }}" - name: create the cluster autoscaler k8s: state: present definition: apiVersion: "autoscaling.openshift.io/v1alpha1" kind: "ClusterAutoscaler" metadata: name: "default" spec: resourceLimits: maxNodesTotal: 9 scaleDown: enabled: true delayAfterAdd: 10s delayAfterDelete: 10s delayAfterFailure: 10s # Leave this as the last task in the playbook. - name: workload tasks complete debug: msg: "Workload Tasks completed successfully." when: not silent|bool