Mitesh The Mouse
2020-01-02 d1dc2bf41c915bd525e6e55bf6f29d187ecc11e1
satellite installation role (#946)

* A few changes made for creating PR

* satellite v64 for production with local capsule

* minor change

* satellite local capsule configuration updates

* Update env_vars.yml

* Update env_vars.yml

* host template: Fix possible mismatch between host and loop.index

* Security fixes

* improvements/fixes for satellite roles and satellite-v64-config

* satellite-multi-region

* Role and sample_vars updated

* indent issue fixed in satellite role

* fixed typo in satellite-v64-prod env_vars

* satellite roles updated

* satellite installation role modified

* satellite-multi-region

* satellite-multi-region config

* Readme.adoc

* Readme.adoc

* Update for merge

* fix

* fix

* error fixed for merge

* config satellite-v64-prod deleted

* Role renamed to satellite-manage-subscription

* Update Role:satellite-manage-content-view

* update for role: satellite-manage-organization

* Update readme.adoc

* Update readme.adoc

* update Role: satellite-manage-content-view

* update role: satellite-installation

* update role: satellite-installation

* update role: satellite-manage-manifest

* update role: satellite-manage-subscription

* Readme.adoc

* update role: satellite-manage-sync

* update role: satellite-manage-sync

* update role: satellite-manage-sync

* update role: satellite-manage-activationkey

* update role:satellite-manage-capsule-certificate

* update role:satellite-manage-activationkey

* readme.adoc

* created role:satellite-public-hostname

* update role:satellite-manage-activationkey

* update role:satellite-manage-lifecycle

* update role:satellite-installation

* update readme.adoc

* update readme.adoc

* update readme.adoc

* update readme.adoc

* update readme.adoc

* update readme

* update readme

* update config: satellite-vm

* update role:satellite-manage-lifecycle

* Delete role:satellite-configuration

* update role: satellite-manage-subscription

* update config:satellite-vm

* update role:satellite-capsule-installation

* update role:satellite-public-hostname

* variable update role:satellite-capsule-installation

* update role:satellite-capsule-configuration

* update config:satellite-multi-region

* update readme

Co-authored-by: Shachar Borenstein <shachar.borenstein@gmail.com>
Co-authored-by: Guillaume Coré <guillaume.core@gmail.com>
29 files deleted
40 files added
22 files modified
3 files renamed
7575 lines changed
ansible/configs/ocp-ha-lab/files/hosts_template.3.11.j2 1
ansible/configs/ocp-ha-lab/files/hosts_template.3.11.j2 404
ansible/configs/ocp-ha-lab/files/labs_hosts_template.3.11.j2 1
ansible/configs/ocp-ha-lab/files/labs_hosts_template.3.11.j2 404
ansible/configs/satellite-multi-region/README.adoc 295
ansible/configs/satellite-multi-region/env_vars.yml 9
ansible/configs/satellite-multi-region/files/cloud_providers/capsule.j2 6
ansible/configs/satellite-multi-region/files/cloud_providers/default.j2 4
ansible/configs/satellite-multi-region/files/cloud_providers/ec2_cloud_template.j2 189
ansible/configs/satellite-multi-region/files/etc_hosts_template.j2 11
ansible/configs/satellite-multi-region/sample_vars.yml 168
ansible/configs/satellite-multi-region/software.yml 34
ansible/configs/satellite-v64-prod/README.adoc 39
ansible/configs/satellite-v64-prod/destroy_env.yml 18
ansible/configs/satellite-v64-prod/env_vars.yml 271
ansible/configs/satellite-v64-prod/files/etc_hosts_template.j2 16
ansible/configs/satellite-v64-prod/files/hosts_template.j2 23
ansible/configs/satellite-v64-prod/files/repos_template.j2 122
ansible/configs/satellite-v64-prod/files/tower_template_inventory.j2 54
ansible/configs/satellite-v64-prod/post_infra.yml 24
ansible/configs/satellite-v64-prod/post_software.yml 36
ansible/configs/satellite-v64-prod/pre_infra.yml 21
ansible/configs/satellite-v64-prod/pre_software.yml 57
ansible/configs/satellite-v64-prod/sample_vars.yml 85
ansible/configs/satellite-v64-prod/software.yml 55
ansible/configs/satellite-vm/README.adoc 283
ansible/configs/satellite-vm/env_vars.yml 330
ansible/configs/satellite-vm/files/cloud_providers/ec2_cloud_template.j2 6
ansible/configs/satellite-vm/files/cloud_providers/ec2_cloud_template_json.j2 1014
ansible/configs/satellite-vm/files/hosts_template.j2 18
ansible/configs/satellite-vm/files/repos_template.j2 65
ansible/configs/satellite-vm/files/tower_hosts_template.j2 39
ansible/configs/satellite-vm/pre_software.yml 67
ansible/configs/satellite-vm/sample_vars.yml 108
ansible/configs/satellite-vm/software.yml 16
ansible/configs/satellite-vm/start.yml 21
ansible/configs/satellite-vm/stop.yml 21
ansible/roles/satellite-capsule-configuration/README.adoc 78
ansible/roles/satellite-capsule-configuration/tasks/main.yml 86
ansible/roles/satellite-capsule-configuration/tasks/version_6.4.yml 37
ansible/roles/satellite-capsule-installation/README.adoc 125
ansible/roles/satellite-capsule-installation/tasks/main.yml 22
ansible/roles/satellite-capsule-installation/tasks/version_6.4.yml 85
ansible/roles/satellite-capsule-installation/vars/main.yml 7
ansible/roles/satellite-configuration/tasks/01_manifest.yml 40
ansible/roles/satellite-configuration/tasks/02_repository.yml 76
ansible/roles/satellite-configuration/tasks/03_content_view.yml 125
ansible/roles/satellite-configuration/tasks/04_lifecycle.yml 7
ansible/roles/satellite-configuration/tasks/05_activationkey.yml 71
ansible/roles/satellite-configuration/tasks/main.yml 9
ansible/roles/satellite-configuration/vars/main.yml 65
ansible/roles/satellite-hammer-cli/tasks/main.yml 12
ansible/roles/satellite-hammer-cli/templates/hammer_cli.j2 4
ansible/roles/satellite-installation/README.adoc 119
ansible/roles/satellite-installation/defaults/main.yml
ansible/roles/satellite-installation/tasks/main.yml 18
ansible/roles/satellite-installation/tasks/satellite_installation.yml 37
ansible/roles/satellite-installation/tasks/version_6.4.yml 28
ansible/roles/satellite-installation/tasks/version_6.6.yml 28
ansible/roles/satellite-manage-activationkey/README.adoc 129
ansible/roles/satellite-manage-activationkey/files/activationkey_script_version_6.4.sh 42
ansible/roles/satellite-manage-activationkey/tasks/main.yml 69
ansible/roles/satellite-manage-activationkey/tasks/version_6.4.yml 31
ansible/roles/satellite-manage-capsule-certificate/README.adoc 87
ansible/roles/satellite-manage-capsule-certificate/tasks/main.yml 53
ansible/roles/satellite-manage-capsule-certificate/tasks/version_6.4.yml 52
ansible/roles/satellite-manage-content-view/README.adoc 143
ansible/roles/satellite-manage-content-view/tasks/main.yml 124
ansible/roles/satellite-manage-content-view/tasks/version_6.4.yml 93
ansible/roles/satellite-manage-lifecycle/README.adoc 127
ansible/roles/satellite-manage-lifecycle/tasks/main.yml 8
ansible/roles/satellite-manage-lifecycle/tasks/version_6.4.yml 29
ansible/roles/satellite-manage-liifecycle/tasks/main.yml
ansible/roles/satellite-manage-liifecycle/vars/main.yml
ansible/roles/satellite-manage-location/tasks/main.yml 19
ansible/roles/satellite-manage-manifest/README.adoc 103
ansible/roles/satellite-manage-manifest/tasks/main.yml 40
ansible/roles/satellite-manage-manifest/tasks/version_6.4.yml 38
ansible/roles/satellite-manage-organization/README.adoc 92
ansible/roles/satellite-manage-organization/tasks/main.yml 9
ansible/roles/satellite-manage-organization/tasks/version_6.4.yml 24
ansible/roles/satellite-manage-subscription-and-sync/tasks/main.yml 95
ansible/roles/satellite-manage-subscription/README.adoc 141
ansible/roles/satellite-manage-subscription/files/subscription_script_version_6.4.sh 32
ansible/roles/satellite-manage-subscription/tasks/main.yml 8
ansible/roles/satellite-manage-subscription/tasks/version_6.4.yml 25
ansible/roles/satellite-manage-sync/README.adoc 149
ansible/roles/satellite-manage-sync/files/sync_script_version_6.4.sh 34
ansible/roles/satellite-manage-sync/tasks/main.yml 8
ansible/roles/satellite-manage-sync/tasks/version_6.4.yml 13
ansible/roles/satellite-public-hostname/README.adoc 67
ansible/roles/satellite-public-hostname/tasks/main.yml 25
tests/jenkins/ocp-workload-iot-demo.groovy 1
tests/jenkins/ocp-workload-iot-demo.groovy 251
ansible/configs/ocp-ha-lab/files/hosts_template.3.11.j2
File was deleted
ansible/configs/ocp-ha-lab/files/hosts_template.3.11.j2
New file
@@ -0,0 +1,404 @@
#
# ansible inventory for OpenShift Container Platform  3.11.51
# AgnosticD ansible-config: ocp-ha-lab
[OSEv3:vars]
###########################################################################
### Ansible Vars
###########################################################################
timeout=60
ansible_user={{ansible_user}}
ansible_become=yes
###########################################################################
### OpenShift Basic Vars
###########################################################################
openshift_deployment_type=openshift-enterprise
openshift_disable_check="disk_availability,memory_availability,docker_image_availability"
# OpenShift Version:
# If you modify the openshift_image_tag or the openshift_pkg_version variables after the cluster is set up, then an upgrade can be triggered, resulting in downtime.
# If openshift_image_tag is set, its value is used for all hosts in system container environments, even those that have another version installed.
# If openshift_pkg_version is set, its value is used for all hosts in RPM-based environments, even those that have another version installed.
# Use this variable to specify a container image tag to install or configure.
openshift_image_tag=v{{ osrelease }}
# Use this variable to specify an RPM version to install or configure.
openshift_pkg_version=-{{ osrelease }}
openshift_release={{ osrelease }}
{% if container_runtime == "cri-o" %}
openshift_use_crio=True
openshift_crio_enable_docker_gc=True
openshift_crio_docker_gc_node_selector={'runtime': 'cri-o'}
{% endif %}
# Node Groups
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true','runtime={{container_runtime}}']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true','runtime={{container_runtime}}']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true','runtime={{container_runtime}}'], 'edits': [{ 'key': 'kubeletArguments.pods-per-core','value': ['20']}]}]
# Configure node kubelet arguments. pods-per-core is valid in OpenShift Origin 1.3 or OpenShift Container Platform 3.3 and later. -> These  need to go into the above
# openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['85'], 'image-gc-low-threshold': ['75']}
# Configure logrotate scripts
# See: https://github.com/nickhammond/ansible-logrotate
logrotate_scripts=[{"name": "syslog", "path": "/var/log/cron\n/var/log/maillog\n/var/log/messages\n/var/log/secure\n/var/log/spooler\n", "options": ["daily", "rotate 7","size 500M", "compress", "sharedscripts", "missingok"], "scripts": {"postrotate": "/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true"}}]
# Deploy Operator Lifecycle Manager Tech Preview
openshift_enable_olm=true
###########################################################################
### OpenShift Registries Locations
###########################################################################
#oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
oreg_url=registry.redhat.io/openshift3/ose-${component}:${version}
oreg_auth_user={{ redhat_registry_user }}
oreg_auth_password={{ redhat_registry_password }}
# For Operator Framework Images
openshift_additional_registry_credentials=[{'host':'registry.connect.redhat.com','user':'{{ redhat_registry_user }}','password':'{{ redhat_registry_password }}','test_image':'mongodb/enterprise-operator:0.3.2'}]
openshift_examples_modify_imagestreams=true
{% if install_glusterfs|bool %}
###########################################################################
### OpenShift Container Storage
###########################################################################
openshift_master_dynamic_provisioning_enabled=True
# CNS storage cluster
# From https://github.com/red-hat-storage/openshift-cic
openshift_storage_glusterfs_namespace=openshift-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_create=true
openshift_storage_glusterfs_block_host_vol_size=200
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=true
# Container image to use for glusterfs pods
openshift_storage_glusterfs_image="registry.access.redhat.com/rhgs3/rhgs-server-rhel7:{{ glusterfs_image_tag }}"
# Container image to use for glusterblock-provisioner pod
openshift_storage_glusterfs_block_image="registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7:{{ glusterfs_image_tag }}"
# Container image to use for heketi pods
openshift_storage_glusterfs_heketi_image="registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7:{{ glusterfs_image_tag }}"
# GlusterFS version
#  Knowledgebase
#   https://access.redhat.com/solutions/3617551
#  Bugzilla
#   https://bugzilla.redhat.com/show_bug.cgi?id=1631057
#  Complete OpenShift GlusterFS Configuration README
#   https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_storage_glusterfs
openshift_storage_glusterfs_version=v3.10
openshift_storage_glusterfs_block_version=v3.10
openshift_storage_glusterfs_s3_version=v3.10
openshift_storage_glusterfs_heketi_version=v3.10
# openshift_storage_glusterfs_registry_version=v3.10
# openshift_storage_glusterfs_registry_block_version=v3.10
# openshift_storage_glusterfs_registry_s3_version=v3.10
# openshift_storage_glusterfs_registry_heketi_version=v3.10
{% endif %}
{% if install_nfs|bool %}
# Set this line to enable NFS
openshift_enable_unsupported_configurations=True
{% endif %}
###########################################################################
### OpenShift Master Vars
###########################################################################
openshift_master_api_port={{master_api_port}}
openshift_master_console_port={{master_api_port}}
#Default:  openshift_master_cluster_method=native
openshift_master_cluster_hostname=loadbalancer.{{guid}}.internal
openshift_master_cluster_public_hostname={{master_lb_dns}}
openshift_master_default_subdomain={{cloudapps_suffix}}
#openshift_master_ca_certificate={'certfile': '/root/intermediate_ca.crt', 'keyfile': '/root/intermediate_ca.key'}
openshift_master_overwrite_named_certificates={{openshift_master_overwrite_named_certificates}}
# Audit log
# openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/log/openpaas-oscp-audit/openpaas-oscp-audit.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5}
# ocp-ha-lab
# AWS Autoscaler
#openshift_master_bootstrap_auto_approve=false
# This variable is a cluster identifier unique to the AWS Availability Zone. Using this avoids potential issues in Amazon Web Services (AWS) with multiple zones or multiple clusters.
#openshift_clusterid
###########################################################################
### OpenShift Network Vars
###########################################################################
osm_cluster_network_cidr=10.1.0.0/16
openshift_portal_net=172.30.0.0/16
# os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'
{{multi_tenant_setting}}
###########################################################################
### OpenShift Authentication Vars
###########################################################################
# LDAP AND HTPASSWD Authentication (download ipa-ca.crt first)
# openshift_master_identity_providers=[{'name': 'ldap', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider','attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=shared,dc=example,dc=opentlc,dc=com', 'bindPassword': 'r3dh4t1!', 'ca': '/etc/origin/master/ipa-ca.crt','insecure': 'false', 'url': 'ldaps://ipa.shared.example.opentlc.com:636/cn=users,cn=accounts,dc=shared,dc=example,dc=opentlc,dc=com?uid?sub?(memberOf=cn=ocp-users,cn=groups,cn=accounts,dc=shared,dc=example,dc=opentlc,dc=com)'},{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Just LDAP
openshift_master_identity_providers=[{'name': 'ldap', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider','attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=shared,dc=example,dc=opentlc,dc=com', 'bindPassword': 'r3dh4t1!', 'ca': '/etc/origin/master/ipa-ca.crt','insecure': 'false', 'url': 'ldaps://ipa.shared.example.opentlc.com:636/cn=users,cn=accounts,dc=shared,dc=example,dc=opentlc,dc=com?uid?sub?(memberOf=cn=ocp-users,cn=groups,cn=accounts,dc=shared,dc=example,dc=opentlc,dc=com)'}]
# Just HTPASSWD
# openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# LDAP and HTPASSWD dependencies
openshift_master_htpasswd_file=/root/htpasswd.openshift
openshift_master_ldap_ca_file=/root/ipa-ca.crt
{% if admission_plugin_config is defined %}
###########################################################################
### OpenShift admission plugin config
###########################################################################
openshift_master_admission_plugin_config={{admission_plugin_config|to_json}}
{% endif %}
###########################################################################
### OpenShift Metrics and Logging Vars
###########################################################################
#########################
# Prometheus Metrics
#########################
openshift_hosted_prometheus_deploy=true
openshift_prometheus_namespace=openshift-metrics
openshift_prometheus_node_selector={"node-role.kubernetes.io/infra":"true"}
openshift_cluster_monitoring_operator_install=true
{% if install_glusterfs|bool %}
openshift_cluster_monitoring_operator_prometheus_storage_capacity=20Gi
openshift_cluster_monitoring_operator_alertmanager_storage_capacity=2Gi
openshift_cluster_monitoring_operator_prometheus_storage_enabled=True
openshift_cluster_monitoring_operator_alertmanager_storage_enabled=True
# The next two will be enabled in 3.11.z
# will use default storage class until then
# so set the block storage class as default
# openshift_cluster_monitoring_operator_prometheus_storage_class_name='glusterfs-storage-block'
# openshift_cluster_monitoring_operator_alertmanager_storage_class_name='glusterfs-storage-block'
{% endif %}
########################
# Cluster Metrics
########################
openshift_metrics_install_metrics={{install_metrics}}
{% if install_nfs|bool and not install_glusterfs|bool %}
openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/srv/nfs
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
openshift_metrics_storage_labels={'storage': 'metrics'}
openshift_metrics_cassandra_pvc_storage_class_name=''
{% endif %}
{% if install_glusterfs|bool %}
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_cassandra_pvc_storage_class_name='glusterfs-storage-block'
{% endif %}
openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"}
# Store Metrics for 2 days
openshift_metrics_duration=2
# Suggested Quotas and limits for Prometheus components:
openshift_prometheus_memory_requests=2Gi
openshift_prometheus_cpu_requests=750m
openshift_prometheus_memory_limit=2Gi
openshift_prometheus_cpu_limit=750m
openshift_prometheus_alertmanager_memory_requests=300Mi
openshift_prometheus_alertmanager_cpu_requests=200m
openshift_prometheus_alertmanager_memory_limit=300Mi
openshift_prometheus_alertmanager_cpu_limit=200m
openshift_prometheus_alertbuffer_memory_requests=300Mi
openshift_prometheus_alertbuffer_cpu_requests=200m
openshift_prometheus_alertbuffer_memory_limit=300Mi
openshift_prometheus_alertbuffer_cpu_limit=200m
{# The following file will need to be copied over to the bastion before deployment
# There is an example in ocp-workshop/files
# openshift_prometheus_additional_rules_file=/root/prometheus_alerts_rules.yml #}
# Grafana
openshift_grafana_node_selector={"node-role.kubernetes.io/infra":"true"}
openshift_grafana_storage_type=pvc
openshift_grafana_pvc_size=2Gi
openshift_grafana_node_exporter=true
{% if install_glusterfs|bool %}
openshift_grafana_sc_name=glusterfs-storage
{% endif %}
########################
# Cluster Logging
########################
openshift_logging_install_logging={{install_logging}}
openshift_logging_install_eventrouter={{install_logging}}
{% if install_nfs|bool and not install_glusterfs|bool %}
openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/srv/nfs
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
openshift_logging_storage_labels={'storage': 'logging'}
openshift_logging_es_pvc_storage_class_name=''
{% endif %}
{% if install_glusterfs|bool %}
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=20Gi
openshift_logging_es_pvc_storage_class_name='glusterfs-storage-block'
{% endif %}
openshift_logging_es_memory_limit=8Gi
openshift_logging_es_cluster_size=1
openshift_logging_curator_default_days=2
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_eventrouter_nodeselector={"node-role.kubernetes.io/infra": "true"}
###########################################################################
### OpenShift Router and Registry Vars
###########################################################################
# default selectors for router and registry services
# openshift_router_selector='node-role.kubernetes.io/infra=true'
# openshift_registry_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_router_replicas={{infranode_instance_count}}
# openshift_hosted_router_certificate={"certfile": "/path/to/router.crt", "keyfile": "/path/to/router.key", "cafile": "/path/to/router-ca.crt"}
openshift_hosted_registry_replicas=1
openshift_hosted_registry_pullthrough=true
openshift_hosted_registry_acceptschema2=true
openshift_hosted_registry_enforcequota=true
{% if install_glusterfs|bool %}
openshift_hosted_registry_storage_kind=glusterfs
openshift_hosted_registry_storage_volume_size=10Gi
openshift_hosted_registry_selector="node-role.kubernetes.io/infra=true"
{% endif %}
{% if install_nfs|bool %}
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/srv/nfs
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=20Gi
{% endif %}
###########################################################################
### OpenShift Service Catalog Vars
###########################################################################
# default=true
openshift_enable_service_catalog=true
# default=true
template_service_broker_install=true
openshift_template_service_broker_namespaces=['openshift']
# default=true
ansible_service_broker_install=true
ansible_service_broker_local_registry_whitelist=['.*-apb$']
###########################################################################
### OpenShift Hosts
###########################################################################
# openshift_node_labels DEPRECATED
# openshift_node_problem_detector_install
[OSEv3:children]
lb
masters
etcd
nodes
{% if install_nfs|bool %}
nfs
{% endif %}
{% if install_glusterfs|bool %}
glusterfs
{% endif %}
[lb]
{% for host in groups['loadbalancers'] %}
{{ hostvars[host].internaldns }}
{% endfor %}
[masters]
{% for host in groups['masters']|sort %}
{{ hostvars[host].internaldns }}
{% endfor %}
[etcd]
{% for host in groups['masters']|sort %}
{{ hostvars[host].internaldns }}
{% endfor %}
[nodes]
## These are the masters
{% for host in groups['masters']|sort %}
{{ hostvars[host].internaldns }} openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
{% endfor %}
## These are infranodes
{% for host in groups['infranodes']|sort %}
{{ hostvars[host].internaldns }} openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
{% endfor %}
## These are regular nodes
{% for host in groups['nodes']|sort %}
{{ hostvars[host].internaldns }} openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
{% endfor %}
{% if install_glusterfs|bool %}
## These are OCS nodes
{% for host in groups['support']|sort %}
{{ hostvars[host].internaldns }} openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
{% endfor %}
{% endif %}
{% if install_nfs|bool %}
[nfs]
{% for host in [groups['support']|sort|first] %}
{{ hostvars[host].internaldns }}
{% endfor %}
{% endif %}
{% if install_glusterfs|bool %}
[glusterfs]
{% for host in groups['support']|sort %}
{{ hostvars[host].internaldns }} glusterfs_devices='[ "{{ glusterfs_app_device_name }}" ]'
{% endfor %}
{% endif %}
ansible/configs/ocp-ha-lab/files/labs_hosts_template.3.11.j2
File was deleted
ansible/configs/ocp-ha-lab/files/labs_hosts_template.3.11.j2
New file
@@ -0,0 +1,404 @@
#
# ansible inventory for OpenShift Container Platform  3.11.51
# AgnosticD ansible-config: ocp-ha-lab
[OSEv3:vars]
###########################################################################
### Ansible Vars
###########################################################################
timeout=60
ansible_user={{ansible_user}}
ansible_become=yes
###########################################################################
### OpenShift Basic Vars
###########################################################################
openshift_deployment_type=openshift-enterprise
openshift_disable_check="disk_availability,memory_availability,docker_image_availability"
# OpenShift Version:
# If you modify the openshift_image_tag or the openshift_pkg_version variables after the cluster is set up, then an upgrade can be triggered, resulting in downtime.
# If openshift_image_tag is set, its value is used for all hosts in system container environments, even those that have another version installed.
# If openshift_pkg_version is set, its value is used for all hosts in RPM-based environments, even those that have another version installed.
# Use this variable to specify a container image tag to install or configure.
openshift_image_tag=
# Use this variable to specify an RPM version to install or configure.
openshift_pkg_version=
openshift_release=
{% if container_runtime == "cri-o" %}
openshift_use_crio=
openshift_crio_enable_docker_gc=
openshift_crio_docker_gc_node_selector=
{% endif %}
# Node Groups
openshift_node_groups=
# Configure node kubelet arguments. pods-per-core is valid in OpenShift Origin 1.3 or OpenShift Container Platform 3.3 and later. -> These  need to go into the above
# openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['85'], 'image-gc-low-threshold': ['75']}
# Configure logrotate scripts
# See: https://github.com/nickhammond/ansible-logrotate
logrotate_scripts=[{"name": "syslog", "path": "/var/log/cron\n/var/log/maillog\n/var/log/messages\n/var/log/secure\n/var/log/spooler\n", "options": ["daily", "rotate 7","size 500M", "compress", "sharedscripts", "missingok"], "scripts": {"postrotate": "/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true"}}]
# Deploy Operator Lifecycle Manager Tech Preview
openshift_enable_olm=
###########################################################################
### OpenShift Registries Locations
###########################################################################
#oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
oreg_url=
oreg_auth_user=
oreg_auth_password=
# For Operator Framework Images
openshift_additional_registry_credentials=
openshift_examples_modify_imagestreams=
{% if install_glusterfs|bool %}
###########################################################################
### OpenShift Container Storage
###########################################################################
openshift_master_dynamic_provisioning_enabled=
# CNS storage cluster
# From https://github.com/red-hat-storage/openshift-cic
openshift_storage_glusterfs_namespace=
openshift_storage_glusterfs_storageclass=
openshift_storage_glusterfs_storageclass_default=
openshift_storage_glusterfs_block_deploy=
openshift_storage_glusterfs_block_host_vol_create=
openshift_storage_glusterfs_block_host_vol_size=
openshift_storage_glusterfs_block_storageclass=
openshift_storage_glusterfs_block_storageclass_default=
# Container image to use for glusterfs pods
openshift_storage_glusterfs_image=
# Container image to use for glusterblock-provisioner pod
openshift_storage_glusterfs_block_image=
# Container image to use for heketi pods
openshift_storage_glusterfs_heketi_image=
# GlusterFS version
#  Knowledgebase
#   https://access.redhat.com/solutions/3617551
#  Bugzilla
#   https://bugzilla.redhat.com/show_bug.cgi?id=1631057
#  Complete OpenShift GlusterFS Configuration README
#   https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_storage_glusterfs
openshift_storage_glusterfs_version=
openshift_storage_glusterfs_block_version=
openshift_storage_glusterfs_s3_version=
openshift_storage_glusterfs_heketi_version=
# openshift_storage_glusterfs_registry_version=v3.10
# openshift_storage_glusterfs_registry_block_version=v3.10
# openshift_storage_glusterfs_registry_s3_version=v3.10
# openshift_storage_glusterfs_registry_heketi_version=v3.10
{% endif %}
{% if install_nfs|bool %}
# Set this line to enable NFS
openshift_enable_unsupported_configurations=
{% endif %}
###########################################################################
### OpenShift Master Vars
###########################################################################
openshift_master_api_port=
openshift_master_console_port=
#Default:  openshift_master_cluster_method=native
openshift_master_cluster_hostname=
openshift_master_cluster_public_hostname=
openshift_master_default_subdomain=
#openshift_master_ca_certificate=
openshift_master_overwrite_named_certificates=
# Audit log
# openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/log/openpaas-oscp-audit/openpaas-oscp-audit.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5}
# ocp-ha-lab
# AWS Autoscaler
#openshift_master_bootstrap_auto_approve=false
# This variable is a cluster identifier unique to the AWS Availability Zone. Using this avoids potential issues in Amazon Web Services (AWS) with multiple zones or multiple clusters.
#openshift_clusterid
###########################################################################
### OpenShift Network Vars
###########################################################################
osm_cluster_network_cidr=10.1.0.0/16
openshift_portal_net=172.30.0.0/16
# os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'
{{multi_tenant_setting}}
###########################################################################
### OpenShift Authentication Vars
###########################################################################
# LDAP AND HTPASSWD Authentication (download ipa-ca.crt first)
#openshift_master_identity_providers=
# Just LDAP
openshift_master_identity_providers=[{'name': 'ldap', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider','attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=shared,dc=example,dc=opentlc,dc=com', 'bindPassword': 'r3dh4t1!', 'ca': '/etc/origin/master/ipa-ca.crt','insecure': 'false', 'url': 'ldaps://ipa.shared.example.opentlc.com:636/cn=users,cn=accounts,dc=shared,dc=example,dc=opentlc,dc=com?uid?sub?(memberOf=cn=ocp-users,cn=groups,cn=accounts,dc=shared,dc=example,dc=opentlc,dc=com)'}]
# Just HTPASSWD
# openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# LDAP and HTPASSWD dependencies
openshift_master_htpasswd_file=/root/htpasswd.openshift
openshift_master_ldap_ca_file=/root/ipa-ca.crt
{% if admission_plugin_config is defined %}
###########################################################################
### OpenShift admission plugin config
###########################################################################
openshift_master_admission_plugin_config={{admission_plugin_config|to_json}}
{% endif %}
###########################################################################
### OpenShift Metrics and Logging Vars
###########################################################################
#########################
# Prometheus Metrics
#########################
openshift_hosted_prometheus_deploy=
openshift_prometheus_namespace=
openshift_prometheus_node_selector=
openshift_cluster_monitoring_operator_install=
{% if install_glusterfs|bool %}
openshift_cluster_monitoring_operator_prometheus_storage_capacity=20Gi
openshift_cluster_monitoring_operator_alertmanager_storage_capacity=2Gi
openshift_cluster_monitoring_operator_prometheus_storage_enabled=True
openshift_cluster_monitoring_operator_alertmanager_storage_enabled=True
# The next two will be enabled in 3.11.z
# will use default storage class until then
# so set the block storage class as default
# openshift_cluster_monitoring_operator_prometheus_storage_class_name='glusterfs-storage-block'
# openshift_cluster_monitoring_operator_alertmanager_storage_class_name='glusterfs-storage-block'
{% endif %}
########################
# Cluster Metrics
########################
openshift_metrics_install_metrics=
{% if install_nfs|bool and not install_glusterfs|bool %}
openshift_metrics_storage_kind=
openshift_metrics_storage_access_modes=
openshift_metrics_storage_nfs_directory=
openshift_metrics_storage_nfs_options=
openshift_metrics_storage_volume_name=
openshift_metrics_storage_volume_size=
openshift_metrics_storage_labels=
openshift_metrics_cassandra_pvc_storage_class_name=
{% endif %}
{% if install_glusterfs|bool %}
openshift_metrics_cassandra_storage_type=
openshift_metrics_cassandra_pvc_storage_class_name=
{% endif %}
openshift_metrics_hawkular_nodeselector=
openshift_metrics_cassandra_nodeselector=
openshift_metrics_heapster_nodeselector=
# Store Metrics for 2 days
openshift_metrics_duration=2
# Suggested Quotas and limits for Prometheus components:
openshift_prometheus_memory_requests=2Gi
openshift_prometheus_cpu_requests=750m
openshift_prometheus_memory_limit=2Gi
openshift_prometheus_cpu_limit=750m
openshift_prometheus_alertmanager_memory_requests=300Mi
openshift_prometheus_alertmanager_cpu_requests=200m
openshift_prometheus_alertmanager_memory_limit=300Mi
openshift_prometheus_alertmanager_cpu_limit=200m
openshift_prometheus_alertbuffer_memory_requests=300Mi
openshift_prometheus_alertbuffer_cpu_requests=200m
openshift_prometheus_alertbuffer_memory_limit=300Mi
openshift_prometheus_alertbuffer_cpu_limit=200m
{# The following file will need to be copied over to the bastion before deployment
# There is an example in ocp-workshop/files
# openshift_prometheus_additional_rules_file=/root/prometheus_alerts_rules.yml #}
# Grafana
openshift_grafana_node_selector=
openshift_grafana_storage_type=
openshift_grafana_pvc_size=
openshift_grafana_node_exporter=
{% if install_glusterfs|bool %}
openshift_grafana_sc_name=glusterfs-storage
{% endif %}
########################
# Cluster Logging
########################
openshift_logging_install_logging=
openshift_logging_install_eventrouter=
{% if install_nfs|bool and not install_glusterfs|bool %}
openshift_logging_storage_kind=
openshift_logging_storage_access_modes=
openshift_logging_storage_nfs_directory=
openshift_logging_storage_nfs_options=
openshift_logging_storage_volume_name=
openshift_logging_storage_volume_size=
openshift_logging_storage_labels=
openshift_logging_es_pvc_storage_class_name=
{% endif %}
{% if install_glusterfs|bool %}
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=20Gi
openshift_logging_es_pvc_storage_class_name='glusterfs-storage-block'
{% endif %}
openshift_logging_es_memory_limit=8Gi
openshift_logging_es_cluster_size=1
openshift_logging_curator_default_days=2
openshift_logging_kibana_nodeselector=
openshift_logging_curator_nodeselector=
openshift_logging_es_nodeselector=
openshift_logging_eventrouter_nodeselector=
###########################################################################
### OpenShift Router and Registry Vars
###########################################################################
# default selectors for router and registry services
# openshift_router_selector='node-role.kubernetes.io/infra=true'
# openshift_registry_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_router_replicas=
# openshift_hosted_router_certificate={"certfile": "/path/to/router.crt", "keyfile": "/path/to/router.key", "cafile": "/path/to/router-ca.crt"}
openshift_hosted_registry_replicas=1
openshift_hosted_registry_pullthrough=true
openshift_hosted_registry_acceptschema2=true
openshift_hosted_registry_enforcequota=true
{% if install_glusterfs|bool %}
openshift_hosted_registry_storage_kind=glusterfs
openshift_hosted_registry_storage_volume_size=10Gi
openshift_hosted_registry_selector="node-role.kubernetes.io/infra=true"
{% endif %}
{% if install_nfs|bool %}
openshift_hosted_registry_storage_kind=
openshift_hosted_registry_storage_access_modes=
openshift_hosted_registry_storage_nfs_directory=
openshift_hosted_registry_storage_nfs_options=
openshift_hosted_registry_storage_volume_name=
openshift_hosted_registry_storage_volume_size=
{% endif %}
###########################################################################
### OpenShift Service Catalog Vars
###########################################################################
# default=true
openshift_enable_service_catalog=
# default=true
template_service_broker_install=
openshift_template_service_broker_namespaces=
# default=true
ansible_service_broker_install=true
ansible_service_broker_local_registry_whitelist=['.*-apb$']
###########################################################################
### OpenShift Hosts
###########################################################################
# openshift_node_labels DEPRECATED
# openshift_node_problem_detector_install
[OSEv3:children]
lb
masters
etcd
nodes
{% if install_nfs|bool %}
nfs
{% endif %}
{% if install_glusterfs|bool %}
glusterfs
{% endif %}
[lb]
{% for host in groups['loadbalancers'] %}
{{ hostvars[host].internaldns }}
{% endfor %}
[masters]
{% for host in groups['masters']|sort %}
{{ hostvars[host].internaldns }}
{% endfor %}
[etcd]
{% for host in groups['masters']|sort %}
{{ hostvars[host].internaldns }}
{% endfor %}
[nodes]
## These are the masters
{% for host in groups['masters']|sort %}
{{ hostvars[host].internaldns }} openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
{% endfor %}
## These are infranodes
{% for host in groups['infranodes']|sort %}
{{ hostvars[host].internaldns }} openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
{% endfor %}
## These are regular nodes
{% for host in groups['nodes']|sort %}
{{ hostvars[host].internaldns }} openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
{% endfor %}
{% if install_glusterfs|bool %}
## These are OCS nodes
{% for host in groups['support']|sort %}
{{ hostvars[host].internaldns }} openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
{% endfor %}
{% endif %}
{% if install_nfs|bool %}
[nfs]
{% for host in [groups['support']|sort|first] %}
{{ hostvars[host].internaldns }}
{% endfor %}
{% endif %}
{% if install_glusterfs|bool %}
[glusterfs]
{% for host in groups['support']|sort %}
{{ hostvars[host].internaldns }} glusterfs_devices='[ "{{ glusterfs_app_device_name }}" ]'
{% endfor %}
{% endif %}
ansible/configs/satellite-multi-region/README.adoc
@@ -1,35 +1,278 @@
:config: satellite-multi-region
:author: GPTE Team
:tag1: install_satellite
:tag2: configure_satellite
:tag3: install_capsule
:tag4: configure_capsule
Config: {config}
===============
This config deploys a bastion, a Satellite server, and multi-region capsules.
Requirements
------------
The following are required:
. Yum repositories for the bastion and Satellite hosts.
. AWS credentials.
Config Variables
----------------
* Cloud-specific settings variables.
|===
|*Variable* | *State* |*Description*
| env_type: satellite-multi-region |Required | Name of the config
| output_dir: /tmp/workdir |Required | Writable working scratch directory
| email: satellite_multi_region@example.com |Required | User info for notifications
| guid: defaultguid |Required | Unique identifier
| cloud_provider: ec2 |Required | Which AgnosticD Cloud Provider to use
|===
* Multi-region-specific settings variables (see the example after this table).
|===
|*Variable* | *State* |*Description*
|target_regions: [List] |Required | Main list defining the regions
|region: "String" |Required | Region name as per AWS
|stack: "String" |Required | CloudFormation template name
|name: "String" |Required | Friendly region name
|vpc_cidr: "IP Addr" |Required | VPC CIDR address
|subnet_cidr: "IP Addr" |Required | Subnet CIDR address
|===
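Each entry in target_regions combines the fields above. A minimal, illustrative sketch (region name, stack file, and CIDRs are placeholders, not requirements):

[source,yaml]
----
target_regions:
  - region: us-east-1        # AWS region name
    stack: capsule.j2        # CloudFormation template under files/cloud_providers/
    name: na                 # friendly region name
    vpc_cidr: 10.1.0.0/16
    subnet_cidr: 10.1.0.0/24
----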
* Satellite-specific settings variables.
|===
|*Variable* | *State* |*Description*
|install_satellite: Boolean |Required | To enable installation roles
|configure_satellite: Boolean |Required | To enable configuration roles
|install_capsule: Boolean |Required | To enable capsule installation roles
|configure_capsule: Boolean |Required | To enable capsule configuration roles
|satellite_version: "Digit" |Required | Satellite version
|org: "String" |Required | Organization name
|org_label: "String" |Required | Organization label, a string without spaces
|org_description: "String" |Required | Organization description
|lifecycle_environment_path: [List] |Required | Nested list describing the lifecycle environment path
|satellite_content: [List] |Required | Main list variable for content definitions
|subscription_name: "String" |Required | Subscription name, mainly required by the manifest role
|manifest_file: "/path/to/manifest.zip" |Required | Path of the downloaded Satellite manifest
|===
[NOTE]
For more about these variables, read the README.adoc of each role.
* Example variables
. Sample of sample_vars.yml
[source,text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ cat ./configs/satellite-multi-region/sample_vars.yml
env_type: satellite-multi-region
output_dir: /tmp/workdir
email: satellite_multi_region@example.com
cloud_provider: ec2
target_regions:
  - region: us-east-1
    stack: default
    name: na
    vpc_cidr: 10.1.0.0/16
    subnet_cidr: 10.1.0.0/24
  - region: eu-central-1
    stack: worker.j2
    name: emea
    vpc_cidr: 10.1.0.0/16
    subnet_cidr: 10.1.0.0/24
  - region: ap-southeast-1
    stack: worker.j2
    name: apac
    vpc_cidr: 10.1.0.0/16
    subnet_cidr: 10.1.0.0/24
----
This deploys the same stack (default) in every region listed. The default CloudFormation template is used: either the one in the config directory, or the global default template if the config directory has none.

If you want a specific template in a region, set `stack` to the file name and store the template in `configs/satellite-multi-region/files/cloud_providers/FILENAME`. In this config the Satellite server is installed in the first region listed under `target_regions`, and each subsequent region hosts a capsule. For example, the following entry uses the `capsule.j2` template:

[source,yaml]
----
  - region: ap-southeast-1
    stack: capsule.j2
    name: emea
    vpc_cidr: 10.2.0.0/16
    subnet_cidr: 10.2.0.0/24
----

The sample vars continue with the Satellite settings:

[source,text]
----
install_satellite: True
configure_satellite: True
install_capsule: True
configure_capsule: True
satellite_version: 6.4
org: gpte
org_label: gpte
org_description: "Global Partner Training and Enablement"
subscription_name: "Employee SKU"
lifecycle_environment_path:
    - name: "Dev"
      label: "dev"
      description: "Development Environment"
      prior_env: "Library"
    - name: "QA"
      label: "qa"
      description: "Quality Environment"
      prior_env: "Dev"
    - name: "Prod"
      label: "prod"
      description: "Production Enviornment"
      prior_env: "QA"
satellite_content:
  - name:             "Capsule Server"
    activation_key:   "capsule_key"
    subscriptions:
      - "Employee SKU"
    life_cycle:       "Library"
    content_view:     "Capsule Content"
    content_view_update: False
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Satellite Capsule 6.4 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Satellite Capsule'
        basearch: 'x86_64'
  - name:             "Three Tier App"
    activation_key:   "three_tier_app_key"
    content_view:     "Three Tier App Content"
    life_cycle:       "Library"
    subscriptions:
      - "Employee SKU"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
----
For reference, look at link:sample_vars.yml[].
. Sample of secrets.yml
[source,text]
----
[user@desktop ~]$ cat ~/secrets.yml
aws_access_key_id: xxxxxxxxxxxxxxxx
aws_secret_access_key: xxxxxxxxxxxxxxxxxx
own_repo_path: http://localrepopath/to/repo
openstack_pem: ldZYgpVcjl0YmZNVytSb2VGenVrTG80SzlEU2xtUTROMHUzR1BZdzFoTEg3R2hXM
====Omitted=====
25ic0NTTnVDblp4bVE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
openstack_pub: XZXYgpVcjl0YmZNVytSb2VGenVrTG80SzlEU2xtUTROMHUzR1BZdzFoTEg3R2hXM
====Omitted=====
53ic0NTTnVDblp4bVE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
----
Roles
-----
* List of satellite and capsule roles (a wiring sketch follows the table).
|===
|*Role*| *Link* | *Description*
|satellite-public-hostname | link:../../roles/satellite-public-hostname[satellite-public-hostname] | Set public hostname
|satellite-installation |link:../../roles/satellite-installation[satellite-installation] | Install and configure satellite
|satellite-hammer-cli |link:../../roles/satellite-hammer-cli[satellite-hammer-cli] | Setup hammer cli
|satellite-manage-organization |link:../../roles/satellite-manage-organization[satellite-manage-organization] | Create satellite organization
|satellite-manage-manifest |link:../../roles/satellite-manage-manifest[satellite-manage-manifest] | Uploads manifest
|satellite-manage-subscription |link:../../roles/satellite-manage-subscription[satellite-manage-subscription] | Manage subscription/repository
|satellite-manage-sync |link:../../roles/satellite-manage-sync[satellite-manage-sync] | Synchronize repository
|satellite-manage-lifecycle |link:../../roles/satellite-manage-lifecycle[satellite-manage-lifecycle]  | Create lifecycle environment
|satellite-manage-content-view |link:../../roles/satellite-manage-content-view[satellite-manage-content-view]  | Create content-view
|satellite-manage-activationkey |link:../../roles/satellite-manage-activationkey[satellite-manage-activationkey]  | Create activation key
|satellite-manage-capsule-certificate | link:../../roles/satellite-manage-capsule-certificate[satellite-manage-capsule-certificate]  | Create certificates for capsule installation on satellite
|satellite-capsule-installation |link:../../roles/satellite-capsule-installation[satellite-capsule-installation]  | Install capsule packages
|satellite-capsule-configuration | link:../../roles/satellite-capsule-configuration[satellite-capsule-configuration] | Setup capsule server
|===
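The config's software.yml is what sequences these roles. A minimal sketch of that wiring, shown only to illustrate how the consistent tags map onto the roles above (the host group and role order here are hypothetical, not the exact contents of software.yml):

[source,yaml]
----
# Hypothetical excerpt; see software.yml in this config for the real wiring.
- name: Install and configure Satellite
  hosts: satellites
  become: true
  roles:
    - { role: satellite-public-hostname,     tags: ['install_satellite'] }
    - { role: satellite-installation,        tags: ['install_satellite'] }
    - { role: satellite-hammer-cli,          tags: ['configure_satellite'] }
    - { role: satellite-manage-organization, tags: ['configure_satellite'] }
    - { role: satellite-manage-manifest,     tags: ['configure_satellite'] }
----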
Tags
---
|===
|{tag1} |Consistent tag for all satellite installation roles
|{tag2} |Consistent tag for all satellite configuration roles
|{tag3} |Consistent tag for all capsule installation roles
|{tag4} |Consistent tag for all capsule configuration roles
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_satellite,configure_capsule
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags install_satellite,install_capsule
----
Example to run config
---------------------
How to use the config (for instance, with variables passed to the playbook):
[source,text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  main.yml \
  -e @./configs/satellite-multi-region/sample_vars.yml \
  -e @~/secrets.yml \
  -e guid=defaultguid  \
  -e satellite_admin=admin \
  -e 'satellite_admin_password=password' \
  -e manifest_file=/path/to/manifest_satellite_6.4.zip
----
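After a successful run you can sanity-check the result from the Satellite host. A hedged example (it assumes the satellite-hammer-cli role has configured hammer credentials, and it uses the org name from the sample vars):

[source,text]
----
[root@satellite ~]# hammer organization list
[root@satellite ~]# hammer lifecycle-environment list --organization gpte
[root@satellite ~]# hammer content-view list --organization gpte
----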
Example to stop environment
---------------------------
[source,text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  ./configs/satellite-multi-region/stop.yml \
  -e @./configs/satellite-multi-region/sample_vars.yml \
  -e @~/secrets.yml \
  -e guid=defaultguid
----
Example to start environment
---------------------------
[source,text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  ./configs/satellite-multi-region/start.yml \
  -e @./configs/satellite-multi-region/sample_vars.yml \
  -e @~/secrets.yml \
  -e guid=defaultguid
----
Example to destroy environment
------------------------------
[source,text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  ./configs/satellite-multi-region/destroy_env.yml \
  -e @./configs/satellite-multi-region/sample_vars.yml \
  -e @~/secrets.yml \
  -e guid=defaultguid
----
Author Information
------------------
{author}
ansible/configs/satellite-multi-region/env_vars.yml
@@ -32,10 +32,10 @@
bastion_instance_image: RHEL75
satellite_instance_count: 1
satellite_instance_type: "m5.xlarge"
satellite_instance_type: "m5.2xlarge"
capsule_instance_count: 1
capsule_instance_type: "m5.xlarge"
capsule_instance_type: "m5.2xlarge"
security_groups:
  - name: BastionSG
@@ -170,8 +170,6 @@
install_ipa_client: false
repo_method: file 
use_own_repos: true
repo_version: "6.4"
@@ -211,5 +209,6 @@
HostedZoneId: Z3IHLWJZOU9SRT
project_tag: "{{ env_type }}-{{ guid }}"
cloud_provider: ec2                     # Which AgnosticD Cloud Provider to use
key_name: ocpkey                        # Keyname must exist in AWS
ansible/configs/satellite-multi-region/files/cloud_providers/capsule.j2
@@ -217,11 +217,15 @@
          Value: {{instance['name']}}
        - Key: internaldns
          Value: {{instance['name']}}.{{aws_dns_zone_private_chomped}}
        - Key: publicname
          Value: {{instance['name']}}.{{aws_dns_zone_public_prefix|d('')}}{{subdomain_base }}
    {% else %}
        - Key: Name
          Value: {{instance['name']}}{{loop.index}}
        - Key: internaldns
          Value: {{instance['name']}}{{loop.index}}.{{aws_dns_zone_private_chomped}}
        - Key: publicname
          Value: {{instance['name']}}{{loop.index}}.{{aws_dns_zone_public_prefix|d('')}}{{subdomain_base }}
    {% endif %}
        - Key: "owner"
          Value: "{{ email | default('unknownuser') }}"
ansible/configs/satellite-multi-region/files/cloud_providers/default.j2
@@ -217,11 +217,15 @@
          Value: {{instance['name']}}
        - Key: internaldns
          Value: {{instance['name']}}.{{aws_dns_zone_private_chomped}}
        - Key: publicname
          Value: {{instance['name']}}.{{aws_dns_zone_public_prefix|d('')}}{{subdomain_base }}
    {% else %}
        - Key: Name
          Value: {{instance['name']}}{{loop.index}}
        - Key: internaldns
          Value: {{instance['name']}}{{loop.index}}.{{aws_dns_zone_private_chomped}}
        - Key: publicname
          Value: {{instance['name']}}{{loop.index}}.{{aws_dns_zone_public_prefix|d('')}}{{subdomain_base }}
    {% endif %}
        - Key: "owner"
          Value: "{{ email | default('unknownuser') }}"
ansible/configs/satellite-multi-region/files/cloud_providers/ec2_cloud_template.j2
File was renamed from ansible/configs/satellite-v64-prod/files/cloud_providers/ec2_cloud_template.j2
@@ -17,20 +17,16 @@
        - Key: Application
          Value:
            Ref: "AWS::StackId"
  VpcInternetGateway:
    Type: "AWS::EC2::InternetGateway"
  VpcGA:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      InternetGatewayId:
        Ref: VpcInternetGateway
      VpcId:
        Ref: Vpc
  VpcRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId:
        Ref: Vpc
  VPCRouteInternetGateway:
    DependsOn: VpcGA
    Type: "AWS::EC2::Route"
@@ -41,6 +37,14 @@
      RouteTableId:
        Ref: VpcRouteTable
  VpcGA:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      InternetGatewayId:
        Ref: VpcInternetGateway
      VpcId:
        Ref: Vpc
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    DependsOn:
@@ -49,6 +53,7 @@
    {% if aws_availability_zone is defined %}
      AvailabilityZone: {{ aws_availability_zone }}
    {% endif %}
      CidrBlock: "{{ aws_public_subnet_cidr }}"
      Tags:
        - Key: Name
@@ -79,6 +84,7 @@
        - Key: Name
          Value: "{{security_group['name']}}"
{% endfor %}
{% for security_group in default_security_groups|list + security_groups|list %}
{% for rule in security_group.rules %}
  {{security_group['name']}}{{rule['name']}}:
@@ -94,6 +100,7 @@
  {% if rule['cidr'] is defined %}
     CidrIp: "{{rule['cidr']}}"
  {% endif  %}
  {% if rule['from_group'] is defined %}
     SourceSecurityGroupId:
       Fn::GetAtt:
@@ -209,12 +216,16 @@
        - Key: Name
          Value: {{instance['name']}}
        - Key: internaldns
          Value: {{instance['name']}}.{{aws_dns_zone_private_chomped}}
          Value: {{instance['name']}}.{{aws_dns_zone_private_chomped}}
        - Key: publicname
          Value: {{instance['name']}}.{{aws_dns_zone_public_prefix|d('')}}{{subdomain_base }}
    {% else %}
        - Key: Name
          Value: {{instance['name']}}{{loop.index}}
        - Key: internaldns
          Value: {{instance['name']}}{{loop.index}}.{{aws_dns_zone_private_chomped}}
        - Key: publicname
          Value: {{instance['name']}}{{loop.index}}.{{aws_dns_zone_public_prefix|d('')}}{{ subdomain_base}}
    {% endif %}
        - Key: "owner"
          Value: "{{ email | default('unknownuser') }}"
@@ -299,168 +310,6 @@
{% endif %}
{% endfor %}
{% endfor %}
{% for capsule_region in target_regions %}
{% for instance in capsule_instances %}
{% if instance['dns_loadbalancer'] | d(false) | bool
  and not instance['unique'] | d(false) | bool %}
  {{instance['name']}}{{capsule_region['name']}}DnsLoadBalancer:
    Type: "AWS::Route53::RecordSetGroup"
    DependsOn:
    {% for c in range(1, (instance['count']|int)+1) %}
      - {{instance['name']}}{{c}}{{capsule_region['name']}}
      {% if instance['public_dns'] %}
      - {{instance['name']}}{{c}}{{capsule_region['name']}}EIP
      {% endif %}
    {% endfor %}
    Properties:
      {% if secondary_stack is defined %}
      HostedZoneName: "{{ aws_dns_zone_public }}"
      {% else %}
      HostedZoneId:
        Ref: DnsZonePublic
      {% endif %}
      RecordSets:
      - Name: "{{instance['name']}}{{capsule_region['name']}}.{{aws_dns_zone_public_prefix|d('')}}{{ aws_dns_zone_public }}"
        Type: A
        TTL: {{ aws_dns_ttl_public }}
        ResourceRecords:
{% for c in range(1,(instance['count'] |int)+1) %}
          - "Fn::GetAtt":
            - {{instance['name']}}{{c}}{{capsule_region['name']}}
            - PublicIp
{% endfor %}
{% endif %}
{% for c in range(1,(instance['count'] |int)+1) %}
  {{instance['name']}}{{loop.index}}{{capsule_region['name']}}:
    Type: "AWS::EC2::Instance"
    Properties:
{% if custom_image is defined %}
      ImageId: {{ custom_image.image_id }}
{% else %}
      ImageId:
        Fn::FindInMap:
        - RegionMapping
        - Ref: AWS::Region
        - {{ instance.image | default(aws_default_image) }}
{% endif %}
      InstanceType: "{{instance['flavor'][cloud_provider]}}"
      KeyName: "{{instance.key_name | default(key_name)}}"
    {% if instance['UserData'] is defined %}
      {{instance['UserData']}}
    {% endif %}
    {% if instance['security_groups'] is defined %}
      SecurityGroupIds:
      {% for sg in instance.security_groups %}
        - Ref: {{ sg }}
      {% endfor %}
    {% else %}
      SecurityGroupIds:
        - Ref: DefaultSG
    {% endif %}
      SubnetId:
        Ref: PublicSubnet
      Tags:
    {% if instance['unique'] | d(false) | bool %}
        - Key: Name
          Value: {{instance['name']}}
        - Key: internaldns
          Value: {{instance['name']}}{{capsule_region['name']}}.{{aws_dns_zone_private_chomped}}
    {% else %}
        - Key: Name
          Value: {{instance['name']}}{{loop.index}}.{{capsule_region['name']}}
        - Key: internaldns
          Value: {{instance['name']}}{{loop.index}}.{{capsule_region['name']}}.{{aws_dns_zone_private_chomped}}
    {% endif %}
        - Key: "owner"
          Value: "{{ email | default('unknownuser') }}"
        - Key: "Project"
          Value: "{{project_tag}}"
        - Key: "{{project_tag}}"
          Value: "{{ instance['name'] }}{{loop.index}}.{{capsule_region['name']}}"
    {% for tag in instance['tags'] %}
        - Key: {{tag['key']}}
          Value: {{tag['value']}}
    {% endfor %}
      BlockDeviceMappings:
    {% if '/dev/sda1' not in instance.volumes|d([])|json_query('[].device_name')
      and '/dev/sda1' not in instance.volumes|d([])|json_query('[].name')
%}
        - DeviceName: "/dev/sda1"
          Ebs:
            VolumeSize: "{{ instance['rootfs_size'] | default(aws_default_rootfs_size) }}"
            VolumeType: "{{ aws_default_volume_type }}"
    {% endif %}
    {% for vol in instance.volumes|default([]) if vol.enable|d(true) %}
        - DeviceName: "{{ vol.name | default(vol.device_name) }}"
          Ebs:
          {% if cloud_provider in vol and 'type' in vol[cloud_provider] %}
            VolumeType: "{{ vol[cloud_provider].type }}"
          {% else %}
            VolumeType: "{{ aws_default_volume_type }}"
          {% endif %}
            VolumeSize: "{{ vol.size }}"
    {% endfor %}
  {{instance['name']}}{{loop.index}}{{capsule_region['name']}}InternalDns:
    Type: "AWS::Route53::RecordSetGroup"
    Properties:
      HostedZoneId:
        Ref: DnsZonePrivate
      RecordSets:
    {% if instance['unique'] | d(false) | bool %}
        - Name: "{{instance['name']}}{{capsule_region['name']}}.{{aws_dns_zone_private}}"
    {% else %}
        - Name: "{{instance['name']}}{{loop.index}}.{{capsule_region['name']}}.{{aws_dns_zone_private}}"
    {% endif %}
          Type: A
          TTL: {{ aws_dns_ttl_private }}
          ResourceRecords:
            - "Fn::GetAtt":
              - {{instance['name']}}{{loop.index}}{{capsule_region['name']}}
              - PrivateIp
{% if instance['public_dns'] %}
  {{instance['name']}}{{loop.index}}{{capsule_region['name']}}EIP:
    Type: "AWS::EC2::EIP"
    DependsOn:
    - VpcGA
    Properties:
      InstanceId:
        Ref: {{instance['name']}}{{loop.index}}{{capsule_region['name']}}
  {{instance['name']}}{{loop.index}}{{capsule_region['name']}}PublicDns:
    Type: "AWS::Route53::RecordSetGroup"
    DependsOn:
      - {{instance['name']}}{{loop.index}}{{capsule_region['name']}}EIP
    Properties:
      {% if secondary_stack is defined %}
      HostedZoneName: "{{ aws_dns_zone_public }}"
      {% else %}
      HostedZoneId:
        Ref: DnsZonePublic
      {% endif %}
      RecordSets:
      {% if instance['unique'] | d(false) | bool %}
        - Name: "{{instance['name']}}{{capsule_region['name']}}.{{aws_dns_zone_public_prefix|d('')}}{{ aws_dns_zone_public }}"
      {% else %}
        - Name: "{{instance['name']}}{{loop.index}}.{{capsule_region['name']}}.{{aws_dns_zone_public_prefix|d('')}}{{ aws_dns_zone_public }}"
      {% endif %}
          Type: A
          TTL: {{ aws_dns_ttl_public }}
          ResourceRecords:
          - "Fn::GetAtt":
            - {{instance['name']}}{{loop.index}}{{capsule_region['name']}}
            - PublicIp
{% endif %}
{% endfor %}
{% endfor %}
{% endfor %}
  {% if secondary_stack is not defined %}
  Route53User:
ansible/configs/satellite-multi-region/files/etc_hosts_template.j2
@@ -2,15 +2,22 @@
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
{{ ansible_default_ipv4['address'] }}  {{publicname}}
{# comment starts
{% for host in groups['satellites'] %}
{% if host == inventory_hostname %}
{{ ansible_default_ipv4['address'] }}  {{host}}
{{ ansible_default_ipv4['address'] }}  {{ansible_nodename}}
{% endif %}
{% endfor %}
{% for host in groups['capsules'] %}
{% if host == inventory_hostname %}
{{ ansible_default_ipv4['address'] }}  {{host}}
{{ ansible_default_ipv4['address'] }}  {{ansible_nodename}}
{% endif %}
{% endfor %}
comment end #}
 
ansible/configs/satellite-multi-region/sample_vars.yml
@@ -2,19 +2,11 @@
env_type: satellite-multi-region      # Name of config to deploy
output_dir: /tmp/workdir                # Writable working scratch directory
email: name@example.com                 # User info for notifications
guid: defaultguid                            # Unique string used in FQDN
subdomain_base_suffix: .example.opentlc.com      # Your domain used in FQDN
email: satellite_multi_region@example.com
cloud_provider: ec2                     # Which AgnosticD Cloud Provider to use
# Cloud specific settings - example given here for AWS
cloud_provider: ec2                     # Which AgnosticD Cloud Provider to use
HostedZoneId: Z3IHLWJZOU9SRT            # You will need to change this
key_name: ocpkey                        # Keyname must exist in AWS
target_regions:
  - region: ap-southeast-2
    stack: default
@@ -26,68 +18,102 @@
    name: emea
    vpc_cidr: 10.2.0.0/16
    subnet_cidr: 10.2.0.0/24
  # - region: ap-southeast-1
  #   stack: default
  #   name: apac
  #   vpc_cidr: 10.3.0.0/16
  #   subnet_cidr: 10.3.0.0/24
###### satellite env related variables ###############
satellite_admin: admin
satellite_admin_password: r3dh4t1!
org: "Default Organization"
subscription_name:  "Employee SKU"
# manifest_file: ~/office_work/manifests/manifest_satellite-vm_1.zip
content_view_name: "capsule server content"
activation_key_name: "capsule_activation_key"
life_cycle_env_name: "Library"
#
########### repo product and name ###############
satellite_repository:
  - organization: "{{org}}"
    product: 'Red Hat Enterprise Linux Server'
    basearch: 'x86_64'
    releasever:  '7Server'
    name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
    # sync_name: 'Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server'
  - organization: "{{org}}"
    product: 'Red Hat Satellite Capsule'
    basearch: 'x86_64'
    name: 'Red Hat Satellite Capsule 6.4 (for RHEL 7 Server) (RPMs)'
    # sync_name: 'Red Hat Satellite Capsule 6.4 for RHEL 7 Server RPMs x86_64'
  - organization: "{{org}}"
    product: 'Red Hat Ansible Engine'
    basearch: 'x86_64'
    name: 'Red Hat Ansible Engine 2.6 RPMs for Red Hat Enterprise Linux 7 Server'
    # sync_name: 'Red Hat Ansible Engine 2.6 RPMs for Red Hat Enterprise Linux 7 Server x86_64'
  - organization: "{{org}}"
    product: 'Red Hat Software Collections for RHEL Server'
    basearch: 'x86_64'
    releasever:  '7Server'
    name: 'Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server'
    # sync_name: 'Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server x86_64 7Server'
  - organization: "{{org}}"
    product: 'Red Hat Enterprise Linux Server'
    basearch: 'x86_64'
    name: 'Red Hat Satellite Maintenance 6 (for RHEL 7 Server) (RPMs)'
    # sync_name: 'Red Hat Satellite Maintenance 6 for RHEL 7 Server RPMs x86_64'
  - organization: "{{org}}"
    product: 'Red Hat Enterprise Linux Server'
    basearch: 'x86_64'
    name: 'Red Hat Satellite Tools 6.4 (for RHEL 7 Server) (RPMs)'
    # sync_name: 'Red Hat Satellite Tools 6.4 for RHEL 7 Server RPMs x86_64'
 
###### satellite env related variables ###############
install_satellite: True
configure_satellite: True
install_capsule: True
configure_capsule: True
satellite_version: 6.4
org: gpte
org_label: gpte
org_description: "Global Partner Training and Enablement"
lifecycle_environment_path:
    - name: "Dev"
      label: "dev"
      description: "Development Environment"
      prior_env: "Library"
    - name: "QA"
      label: "qa"
      description: "Quality Environment"
      prior_env: "Dev"
    - name: "Prod"
      label: "prod"
      description: "Production Enviornment"
      prior_env: "QA"
subscription_name: "Employee SKU"
########## Activation Key #####################
satellite_content:
  - name:             "Capsule Server"
    activation_key:   "capsule_key"
    subscriptions:
      - "Employee SKU"
    life_cycle:       "Library"
    content_view:     "Capsule Content"
    content_view_update: False
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server'
        product: 'Red Hat Software Collections (for RHEL Server)'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Satellite Capsule 6.4 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Satellite Capsule'
        basearch: 'x86_64'
      - name: 'Red Hat Satellite Maintenance 6 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
      - name: 'Red Hat Ansible Engine 2.6 RPMs for Red Hat Enterprise Linux 7 Server'
        product: 'Red Hat Ansible Engine'
        basearch: 'x86_64'
      - name: 'Red Hat Satellite Tools 6.4 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
  - name:             "Three Tier App"
    activation_key:   "three_tier_app_key"
    content_view:     "Three Tier App Content"
    life_cycle:       "Library"
    subscriptions:
      - "Employee SKU"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server'
        product: 'Red Hat Software Collections (for RHEL Server)'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7.7'
      - name: 'Red Hat Enterprise Linux 7 Server - RH Common (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Enterprise Linux 7 Server - Extras (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
      - name: 'Red Hat Enterprise Linux 7 Server - Optional (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
...
ansible/configs/satellite-multi-region/software.yml
@@ -6,37 +6,33 @@
  tasks:
    - debug:
        msg: "Software tasks Started"
- name: Configuring satellite Hosts
  hosts: satellites
  become: True
  gather_facts: True
  pre_tasks:
      - name: Copy /etc/hosts file
        template:
          src: "./files/etc_hosts_template.j2"
          dest: /etc/hosts
  roles:
    - { role: "satellite-installation",                 when: install_satellite }
    - { role: "satellite-public-hostname" }
    - { role: "satellite-installation",                 when: install_satellite   }
    # # - { role: "satellite-hammer-cli",                 when: install_satellite   }
    - { role: "satellite-manage-organization",          when: configure_satellite }
    - { role: "satellite-manage-manifest",              when: configure_satellite } 
    - { role: "satellite-manage-subscription-and-sync", when: configure_satellite }
    - { role: "satellite-manage-content-view",          when: configure_satellite }
    - { role: "satellite-manage-activationkey",         when: configure_satellite }
    - { role: "satellite-manage-capsule-certificate",   when: configure_satellite }
    - { role: "satellite-manage-subscription",          when: configure_satellite }
    - { role: "satellite-manage-sync",                  when: configure_satellite }
    # - { role: "satellite-manage-lifecycle",             when: configure_satellite }
    # - { role: "satellite-manage-content-view",          when: configure_satellite }
    # - { role: "satellite-manage-activationkey",         when: configure_satellite }
    # - { role: "satellite-manage-capsule-certificate",   when: configure_satellite }
    
- name: Configuring capsule Hosts
  hosts: capsules
  become: True
  gather_facts: True
  pre_tasks:
      - name: Copy /etc/hosts file
        template:
          src: "./files/etc_hosts_template.j2"
          dest: /etc/hosts
  roles:
    - { role: "satellite-capsule-installation",   when: install_capsule }
    - { role: "satellite-capsule-configuration",  when: configure_capsule }
    - { role: "satellite-public-hostname" }
    # - { role: "satellite-capsule-installation",   when: install_capsule }
    # - { role: "satellite-capsule-configuration",  when: configure_capsule }
    
ansible/configs/satellite-v64-prod/README.adoc
File was deleted
ansible/configs/satellite-v64-prod/destroy_env.yml
File was deleted
ansible/configs/satellite-v64-prod/env_vars.yml
File was deleted
ansible/configs/satellite-v64-prod/files/etc_hosts_template.j2
File was deleted
ansible/configs/satellite-v64-prod/files/hosts_template.j2
File was deleted
ansible/configs/satellite-v64-prod/files/repos_template.j2
File was deleted
ansible/configs/satellite-v64-prod/files/tower_template_inventory.j2
File was deleted
ansible/configs/satellite-v64-prod/post_infra.yml
File was deleted
ansible/configs/satellite-v64-prod/post_software.yml
File was deleted
ansible/configs/satellite-v64-prod/pre_infra.yml
File was deleted
ansible/configs/satellite-v64-prod/pre_software.yml
File was deleted
ansible/configs/satellite-v64-prod/sample_vars.yml
File was deleted
ansible/configs/satellite-v64-prod/software.yml
File was deleted
ansible/configs/satellite-vm/README.adoc
@@ -1,60 +1,247 @@
= satellite-vm config
== Review the Env_Type variable file
* This file link:./env_vars.yml[./env_vars.yml] contains all the variables you
 need to define to control the deployment of your environment.
:config: satellite-vm
:author: GPTE Team
:tag1: install_satellite
:tag2: configure_satellite
== Running Ansible Playbook
You can run the playbook with the following arguments to overwrite the default variable values.
From the `ansible_agnostic_deployer/ansible` directory run:
[source,bash]
Config: {config}
===============
This config deploys a bastion and a satellite host.
Requirements
------------
Following are the requirements:
. Yum repositories are required for the bastion and satellite hosts.
. AWS credentials are required.
Config Variables
----------------
* Cloud-specific settings related variables.
|===
|*Variable* | *State* |*Description*
| env_type: satellite-vm |Required | Name of the config
| output_dir: /tmp/workdir |Required | Writable working scratch directory
| email: satellite_vm@example.com |Required |  User info for notifications
| guid: defaultguid | Required |Unique identifier
| cloud_provider: ec2 |Required        | Which AgnosticD Cloud Provider to use
|aws_regions: "String" |Required | AWS region
|===
* Satellite-specific settings related variables.
|===
|*Variable* | *State* |*Description*
|install_satellite: Boolean   |Required | To enable installation roles
|configure_satellite: Boolean |Required | To enable configuration roles
|satellite_version: "Digit" |Required |Satellite version
|org: "String" |Required |Organization name
|org_label: "String" |Required | Organization label in string without space
|org_description: "String" |Required | Organization description
|lifecycle_environment_path: [list] |Required | Contains a nested list of environment paths
|satellite_content: [list] |Required | Main List variable
|subscription_name: "String" |Required | Subscription name mainly required for manifest role
| manifest_file: "/path/to/manifest.zip" |Required | Path of the downloaded satellite manifest
|===
[NOTE]
For more about variables read README.adoc of the roles.
* Example variables files
. Sample of sample_vars.yml
[source=text]
----
ENVTYPE=satellite-vm
GUID=test01
BASESUFFIX='.example.opentlc.com'
CLOUDPROVIDER=ec2
REGION=us-east-1
HOSTZONEID='Z3IHLWJZOU9SRT'
KEYNAME=ocpkey
RHN_USER=<rhn_username>
RHN_PASS=<rhn_password>
MANIFEST_FILE=/path/manifest.zip
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ cat ./configs/satellite-vm/sample_vars.yml
ansible-playbook main.yml  \
      -e "guid=${GUID}" \
      -e "env_type=${ENVTYPE}" \
      -e "key_name=${KEYNAME}" \
      -e "subdomain_base_suffix=${BASESUFFIX}" \
      -e "cloud_provider=${CLOUDPROVIDER}" \
      -e "aws_region=${REGION}" \
      -e "HostedZoneId=${HOSTZONEID}" \
      -e "email=name@example.com" \
      -e "output_dir=/tmp/workdir" \
      -e "rhel_subscription_user=${RHN_USER}"  \
      -e "rhel_subscription_pass=${RHN_PASS}" \
      -e "manifest_file=${MANIFEST_FILE}" \
      -e @~/secrets.yml
env_type: satellite-vm
output_dir: /tmp/workdir
email: satellite_vm@example.com
cloud_provider: ec2
aws_region: ap-southeast-2
=== To Delete an environment
install_satellite: True
configure_satellite: True
satellite_version: 6.4
org: gpte
org_label: gpte
org_description: "Global Partner Training and Enablement"
subscription_name: "Employee SKU"
lifecycle_environment_path:
    - name: "Dev"
      label: "dev"
      description: "Development Environment"
      prior_env: "Library"
    - name: "QA"
      label: "qa"
      description: "Quality Environment"
      prior_env: "Dev"
    - name: "Prod"
      label: "prod"
      description: "Production Enviornment"
      prior_env: "QA"
satellite_content:
  - name:             "Capsule Server"
    activation_key:   "capsule_key"
    subscriptions:
      - "Employee SKU"
    life_cycle:       "Library"
    content_view:     "Capsule Content"
    content_view_update: False
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Satellite Capsule 6.4 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Satellite Capsule'
        basearch: 'x86_64'
  - name:             "Three Tier App"
    activation_key:   "three_tier_app_key"
    content_view:     "Three Tier App Content"
    life_cycle:       "Library"
    subscriptions:
      - "Employee SKU"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
----
for reference look at link:sample_vars.yml[]
. Sample of secrets.yml
[source=text]
----
[user@desktop ~]$ cat ~/secrets.yml
aws_access_key_id: xxxxxxxxxxxxxxxx
aws_secret_access_key: xxxxxxxxxxxxxxxxxx
own_repo_path: http://localrepopath/to/repo
openstack_pem: ldZYgpVcjl0YmZNVytSb2VGenVrTG80SzlEU2xtUTROMHUzR1BZdzFoTEg3R2hXM
====Omitted=====
25ic0NTTnVDblp4bVE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
openstack_pub: XZXYgpVcjl0YmZNVytSb2VGenVrTG80SzlEU2xtUTROMHUzR1BZdzFoTEg3R2hXM
====Omitted=====
53ic0NTTnVDblp4bVE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
----
REGION=us-east-1
KEYNAME=ocpkey
GUID=test01
ENVTYPE=satellite-vm
CLOUDPROVIDER=ec2
ansible-playbook configs/${ENVTYPE}/destroy_env.yml \
        -e "guid=${GUID}" -e "env_type=${ENVTYPE}" \
        -e "cloud_provider=${CLOUDPROVIDER}" \
        -e "aws_region=${REGION}"  -e "key_name=${KEYNAME}"  \
        -e "subdomain_base_suffix=${BASESUFFIX}" \
        -e @~/secret.yml -vv
Roles
-----
* List of satellite and capsule roles
|===
|*Role*| *Link* | *Description*
|satellite-public-hostname | link:../../roles/satellite-public-hostname[satellite-public-hostname] | Set public hostname
|satellite-installation |link:../../roles/satellite-installation[satellite-installation] | Install and configure satellite
|satellite-hammer-cli |link:../../roles/satellite-hammer-cli[satellite-hammer-cli] | Set up Hammer CLI
|satellite-manage-organization |link:../../roles/satellite-manage-organization[satellite-manage-organization] | Create satellite organization
|satellite-manage-manifest |link:../../roles/satellite-manage-manifest[satellite-manage-manifest] | Uploads manifest
|satellite-manage-subscription |link:../../roles/satellite-manage-subscription[satellite-manage-subscription] | Manage subscription/repository
|satellite-manage-sync |link:../../roles/satellite-manage-sync[satellite-manage-sync] | Synchronize repository
|satellite-manage-lifecycle |link:../../roles/satellite-manage-lifecycle[satellite-manage-lifecycle]  | Create lifecycle environment
|satellite-manage-content-view |link:../../roles/satellite-manage-content-view[satellite-manage-content-view]  | Create content-view
|satellite-manage-activationkey |link:../../roles/satellite-manage-activationkey[satellite-manage-activationkey]  | Create activation key
|satellite-manage-capsule-certificate | link:../../roles/satellite-manage-capsule-certificate[satellite-manage-capsule-certificate]  | Create certificates for capsule installation on satellite
|satellite-capsule-installation |link:../../roles/satellite-capsule-installation[satellite-capsule-installation]  | Install capsule packages
|satellite-capsule-configuration | link:../../roles/satellite-capsule-configuration[satellite-capsule-configuration] | Setup capsule server
|===
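These roles are applied in order from this config's software.yml; a condensed sketch of the satellite play:
[source=text]
----
- name: Configuring satellite Hosts
  hosts: satellites
  become: True
  roles:
    - { role: "satellite-public-hostname" }
    - { role: "satellite-installation",         when: install_satellite }
    - { role: "satellite-manage-organization",  when: configure_satellite }
    - { role: "satellite-manage-manifest",      when: configure_satellite }
    - { role: "satellite-manage-subscription",  when: configure_satellite }
    - { role: "satellite-manage-sync",          when: configure_satellite }
    - { role: "satellite-manage-lifecycle",     when: configure_satellite }
    - { role: "satellite-manage-content-view",  when: configure_satellite }
    - { role: "satellite-manage-activationkey", when: configure_satellite }
----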
Tags
---
|===
|{tag1} |Consistent tag for all satellite installation roles
|{tag2} |Consistent tag for all satellite configuration roles
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_satellite
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags install_satellite
----
Example to run config
---------------------
How to use the config (for instance, with variables passed to the playbook).
[source=text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  main.yml \
  -e @./configs/satellite-vm/sample_vars.yml \
  -e @~/secrets.yml \
  -e guid=defaultguid  \
  -e satellite_admin=admin \
  -e 'satellite_admin_password=changeme' \
  -e manifest_file=/path/to/manifest_satellite_6.4.zip
----
Example to stop environment
---------------------------
[source=text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  ./configs/satellite-vm/stop.yml \
  -e @./configs/satellite-vm/sample_vars.yml \
  -e @~/secrets.yml \
  -e guid=defaultguid
----
Example to start environment
---------------------------
[source=text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  ./configs/satellite-vm/start.yml \
  -e @./configs/satellite-vm/sample_vars.yml \
  -e @~/secrets.yml \
  -e guid=defaultguid
----
Example to destroy environment
------------------------------
[source=text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  ./configs/satellite-vm/destroy.yml \
  -e @./configs/satellite-vm/sample_vars.yml \
  -e @~/secrets.yml \
  -e guid=defaultguid
----
Author Information
------------------
{author}
ansible/configs/satellite-vm/env_vars.yml
@@ -1,88 +1,22 @@
---
## TODO: What variables can we strip out of here to build complex variables?
## i.e. what can we add into group_vars as opposed to config_vars?
## Example: We don't really need "subdomain_base_short". If we want to use this,
## should just toss in group_vars/all.
### Also, we should probably just create a variable reference in the README.md
### For now, just tagging comments in line with configuration file.
### Vars that can be removed
use_own_repos: false
###### VARIABLES YOU SHOULD CONFIGURE FOR YOUR DEPLOYMENT
###### OR PASS as "-e" args to ansible-playbook command
### Common Host settings
repo_method: rhn # Other Options are: file, satellite and rhn
repo_version: "3.5"
# Do you want to run a full yum update
update_packages: true
## guid is the deployment unique identifier, it will be appended to all tags,
## files and anything that identifies this environment from another "just like it"
guid: defaultguid
install_bastion: true
install_common: true
install_ipa_client: false
############# vars needed for satellite config #############
install_satellite: True
configure_satellite: True
#rhel_subscription_user: "{{ rhn_username }}"
#rhel_subscription_pass: "{{ rhn_password }}"
satellite_admin: admin
satellite_admin_password: somepassword
# repo_pool_ids:
#   - 8a85f98460bfb0470160c2ff22f13e47
rhn_pool_id_string: "Employee SKU"
## SB Don't set software_to_deploy from here, always use extra vars (-e) or "none" will be used
#software_to_deploy: none
repo_version: "3.6"
# osrelease: 3.6
### If you want a Key Pair name created and injected into the hosts,
# set `set_env_authorized_key` to true and set the keyname in `env_authorized_key`
# you can use the key used to create the environment or use your own self generated key
# if you set "use_own_key" to false your PRIVATE key will be copied to the bastion. (This is {{key_name}})
use_own_key: true
env_authorized_key: "{{guid}}key"
ansible_ssh_private_key_file: ~/.ssh/{{key_name}}.pem
set_env_authorized_key: true
# Is this running from Red Hat Ansible Tower
tower_run: false
### AWS EC2 Environment settings
### Route 53 Zone ID (AWS)
# This is the Route53 HostedZoneId where you will create your Public DNS entries
# This only needs to be defined if your CF template uses route53
HostedZoneId: Z3IHLWJZOU9SRT
# The region to be used, if not specified by -e in the command line
aws_region: ap-southeast-2
# The key that is used to
key_name: "default_key_name"
## Networking (AWS)
subdomain_base_short: "{{ guid }}"
subdomain_base_suffix: ".example.opentlc.com"
subdomain_base: "{{subdomain_base_short}}{{subdomain_base_suffix}}"
################################################################################
################################################################################
### Environment Structure
################################################################################
################################################################################
## Environment Sizing
default_key_name: ~/.ssh/{{key_name}}.pem
tower_run: false
# How many do you want for each instance type
bastion_instance_type: "t2.medium"
bastion_instance_image: RHEL75
satellite_instance_count: 1
satellite_instance_type: "t2.large"
subnets:
  - name: PublicSubnet
    cidr: "192.168.1.0/24"
    routing_table: true
satellite_instance_type: "m5.2xlarge"
security_groups:
  - name: BastionSG
@@ -97,120 +31,39 @@
  - name: SatelliteSG
    rules:
      - name: BastionUDPPorts
        description: "Only from bastion"
        from_port: 0
        to_port: 65535
        protocol: udp
        group: BastionSG
        rule_type: Ingress
      - name: BastionTCPPorts
        description: "Only from bastion"
        from_port: 0
        to_port: 65535
        protocol: tcp
        group: BastionSG
        rule_type: Ingress
      - name: SatSSHPublic
        description: "SSH public"
        from_port: 22
        to_port: 22
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatHTTPPorts
        description: "HTTP Public"
        from_port: 80
        to_port: 80
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatHTTPSPorts
        description: "HTTPS Public"
        from_port: 443
        to_port: 443
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatKatello5646Ports
        description: "Katello/qpid Public"
        from_port: 5646
        to_port: 5646
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatKatello5647Ports
        description: "Katello/qpid Public"
        from_port: 5647
        to_port: 5647
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatamqpPorts
        description: "amqp Public"
        from_port: 5671
        to_port: 5671
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatPuppetPorts
        description: "Puppet Public"
        from_port: 8140
        to_port: 8140
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatForemanPorts
        description: "Foreman Smart Proxy Public"
        from_port: 9090
        to_port: 9090
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatDNSTCPPorts
        description: "DNS Public"
        from_port: 53
        to_port: 53
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatDNSUDPPorts
        description: "DNS Public"
        from_port: 53
        to_port: 53
        rule_type: Ingress
      - name: BastionUDPPorts
        description: "Only from bastion"
        from_port: 0
        to_port: 65535
        protocol: udp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatDHCP67Ports
        description: "DHCP Public"
        from_port: 67
        to_port: 67
        protocol: udp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatDHCP68Ports
        description: "DHCP Public"
        from_port: 68
        to_port: 68
        protocol: udp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: SatTFTPPorts
        description: "TFTP Public"
        from_port: 69
        to_port: 69
        protocol: udp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
        from_group: DefaultSG
        rule_type: Ingress
      - name: BastionTCPPorts
        description: "Only from bastion"
        from_port: 0
        to_port: 65535
        protocol: tcp
        from_group: DefaultSG
        rule_type: Ingress
# Environment Instances
instances:
  - name: "bastion"
    count: 1
    unique: true
    public_dns: true
    dns_loadbalancer: true
    dns_loadbalancer: false
    security_groups:
      - BastionSG
      - DefaultSG
    image: "{{ bastion_instance_image }}"
    flavor:
      ec2: "{{bastion_instance_type}}"
    tags:
@@ -218,27 +71,60 @@
        value: "bastions"
      - key: "ostype"
        value: "linux"
    subnet: PublicSubnet
      - key: "instance_filter"
        value: "{{ env_type }}-{{ email }}"
  - name: "satellite"
    count: "{{satellite_instance_count}}"
    public_dns: true
    security_groups: 
      - SatelliteSG
      - DefaultSG
    flavor:
      "ec2": "{{satellite_instance_type}}"
      ec2: "{{satellite_instance_type}}"
    tags:
      - key: "AnsibleGroup"
        value: "satellites"
      - key: "ostype"
        value: "rhel"
    key_name: "{{key_name}}"
        value: "linux"
      - key: "instance_filter"
        value: "{{ env_type }}-{{ email }}"
###### VARIABLES YOU SHOULD ***NOT*** CONFIGURE FOR YOUR DEPLOYMENT
###### You can, but you usually wouldn't need to.
ansible_user: ec2-user
remote_user: ec2-user
# DNS settings for environment
subdomain_base_short: "{{ guid }}"
subdomain_base_suffix: ".example.opentlc.com"
subdomain_base: "{{subdomain_base_short}}{{subdomain_base_suffix}}"
zone_internal_dns: "{{guid}}.internal."
chomped_zone_internal_dns: "{{guid}}.internal"
# Stuff that only GPTE cares about:
install_ipa_client: false
# for file based repo
repo_method: file
use_own_repos: true
repo_version: "6.4"
## For RHN login
# repo_method: rhn
# rhsm_pool_ids:
#   - 8a85f99b6b498682016b521dfe463949
# rhel_subscription_user:
# rhel_subscription_pass:
######
rhel_repos:
  - rhel-7-server-rpms
  - rhel-server-rhscl-7-rpms
  - rhel-7-server-satellite-6.4-rpms
  - rhel-7-server-satellite-maintenance-6-rpms
  - rhel-7-server-ansible-2.6-rpms
  - rhel-7-server-extras-rpms
# Do you want to run a full yum update
update_packages: false
common_packages:
  - python
@@ -252,63 +138,21 @@
  - python27-python-pip
  - bind-utils
# Install_bastion role will install mosh
  #- mosh
rhel_repos:
  - rhel-7-server-rpms
  - rhel-server-rhscl-7-rpms
  - rhel-7-server-satellite-6.4-rpms
  - rhel-7-server-satellite-maintenance-6-rpms
  - rhel-7-server-ansible-2.6-rpms
  - rhel-7-server-extras-rpms
guid: defaultguid
install_bastion: true
install_common: true
install_ipa_client: false
deploy_local_ssh_config_location: "{{output_dir}}/"
use_own_key: true
env_authorized_key: "{{guid}}key"
set_env_authorized_key: true
HostedZoneId: Z3IHLWJZOU9SRT
project_tag: "{{ env_type }}-{{ guid }}"
zone_internal_dns: "{{guid}}.internal."
chomped_zone_internal_dns: "{{guid}}.internal"
bastion_public_dns: "bastion.{{subdomain_base}}."
bastion_public_dns_chomped: "bastion.{{subdomain_base}}"
vpcid_cidr_block: "192.168.0.0/16"
vpcid_name_tag: "{{subdomain_base}}"
az_1_name: "{{ aws_region }}a"
az_2_name: "{{ aws_region }}b"
subnet_private_1_cidr_block: "192.168.2.0/24"
subnet_private_1_az: "{{ az_2_name }}"
subnet_private_1_name_tag: "{{subdomain_base}}-private"
subnet_private_2_cidr_block: "192.168.1.0/24"
subnet_private_2_az: "{{ az_1_name }}"
subnet_private_2_name_tag: "{{subdomain_base}}-private"
subnet_public_1_cidr_block: "192.168.10.0/24"
subnet_public_1_az: "{{ az_1_name }}"
subnet_public_1_name_tag: "{{subdomain_base}}-public"
subnet_public_2_cidr_block: "192.168.20.0/24"
subnet_public_2_az: "{{ az_2_name }}"
subnet_public_2_name_tag: "{{subdomain_base}}-public"
dopt_domain_name: "{{ aws_region }}.compute.internal"
rtb_public_name_tag: "{{subdomain_base}}-public"
rtb_private_name_tag: "{{subdomain_base}}-private"
cf_template_description: "{{ env_type }}-{{ guid }} Ansible Agnostic Deployer "
# Cloudformation template selection
stack_file: default
secret_dir: "~/secrets"
cloud_provider: ec2                     # Which AgnosticD Cloud Provider to use
key_name: ocpkey                        # Keyname must exist in AWS
aws_region: ap-southeast-2
ansible/configs/satellite-vm/files/cloud_providers/ec2_cloud_template.j2
@@ -216,12 +216,16 @@
        - Key: Name
          Value: {{instance['name']}}
        - Key: internaldns
          Value: {{instance['name']}}.{{aws_dns_zone_private_chomped}}
          Value: {{instance['name']}}.{{aws_dns_zone_private_chomped}}
        - Key: publicname
          Value: {{instance['name']}}.{{aws_dns_zone_public_prefix|d('')}}{{subdomain_base }}
    {% else %}
        - Key: Name
          Value: {{instance['name']}}{{loop.index}}
        - Key: internaldns
          Value: {{instance['name']}}{{loop.index}}.{{aws_dns_zone_private_chomped}}
        - Key: publicname
          Value: {{instance['name']}}{{loop.index}}.{{aws_dns_zone_public_prefix|d('')}}{{ subdomain_base}}
    {% endif %}
        - Key: "owner"
          Value: "{{ email | default('unknownuser') }}"
ansible/configs/satellite-vm/files/cloud_providers/ec2_cloud_template_json.j2
File was deleted
ansible/configs/satellite-vm/files/hosts_template.j2
@@ -1,3 +1,11 @@
{# # These are the satellite hosts #}
[satellites]
{% for host in groups['satellites'] %}
{{host}}
{% endfor %}
[all:vars]
{# ###########################################################################
### Ansible Vars
@@ -7,12 +15,4 @@
ansible_user={{remote_user}}
[all:children]
satellite
{# # These are the satellitehosts #}
[satellites]
{% for host in groups['satellites'] %}
satellite{{loop.index}}.{{chomped_zone_internal_dns}} ssh_host={{host}}
{% endfor %}
satellites
ansible/configs/satellite-vm/files/repos_template.j2
@@ -1,4 +1,4 @@
{%if inventory_hostname in groups['bastions'] %}
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
@@ -8,19 +8,72 @@
gpgcheck=0
#gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
### satellite repos #####
[rhel-7-server-rpms]
name=Red Hat Enterprise Linux 7
baseurl={{own_repo_path}}/rhel-7-server-rpms
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-rpms
enabled=1
gpgcheck=0
[rhel-server-rhscl-7-rpms]
name=Red Hat Enterprise Linux 7 RHSCL
baseurl={{own_repo_path}}/rhel-server-rhscl-7-rpms
baseurl={{own_repo_path}}/{{repo_version}}/rhel-server-rhscl-7-rpms
enabled=1
gpgcheck=0
[rhel-7-server-ansible-2.6-rpms]
name=Red Hat Enterprise Ansible 2.6
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-ansible-2.6-rpms
enabled=1
gpgcheck=0
[rhel-7-server-extras-rpms]
name=Red Hat Enterprise Linux Extra RPMs
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-extras-rpms
enabled=1
gpgcheck=0
{% endif %}
{%if groups['satellites'] is defined %}
{%if inventory_hostname in groups['satellites'] %}
{# satellite repos #}
[rhel-7-server-rpms]
name=Red Hat Enterprise Linux 7
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-rpms
enabled=1
gpgcheck=0
[rhel-server-rhscl-7-rpms]
name=Red Hat Enterprise Linux 7 RHSCL
baseurl={{own_repo_path}}/{{repo_version}}/rhel-server-rhscl-7-rpms
enabled=1
gpgcheck=0
[rhel-7-server-ansible-2.6-rpms]
name=Red Hat Enterprise Ansible 2.6
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-ansible-2.6-rpms
enabled=1
gpgcheck=0
[rhel-7-server-extras-rpms]
name=Red Hat Enterprise Linux Extra RPMs
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-extras-rpms
enabled=1
gpgcheck=0
[rhel-7-server-satellite-6.4-rpms]
name=Red Hat Enterprise Satellite 6.4
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-satellite-6.4-rpms
enabled=1
gpgcheck=0
[rhel-7-server-satellite-maintenance-6-rpms]
name=Red Hat Enterprise Satellite 6 Maintenance
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-satellite-maintenance-6-rpms
enabled=1
gpgcheck=0
{% endif %}
{% endif %}
ansible/configs/satellite-vm/files/tower_hosts_template.j2
File was deleted
ansible/configs/satellite-vm/pre_software.yml
@@ -1,4 +1,3 @@
---
- name: Step 003 Pre Software
  hosts: localhost
  gather_facts: false
@@ -11,34 +10,30 @@
        name: infra-local-create-ssh_key
      when: set_env_authorized_key | bool
- name: Configure all hosts with Repositories, Common Files and Set environment key
- name: Configure all hosts with Repositories
  hosts:
    - all:!windows
  become: true
  gather_facts: False
  tags:
    - step004
    - common_tasks
  roles:
    - role: set-repositories
      when: repo_method is defined
    - { role: "set-repositories", when: 'repo_method is defined' }
    - { role: "set_env_authorized_key", when: 'set_env_authorized_key' }
    - role: set_env_authorized_key
      when: set_env_authorized_key | bool
- name: Configuring Bastion Hosts
  hosts: bastions
  become: true
  gather_facts: False
  roles:
    - role: common
      when: install_common | bool
    - { role: "common", when: 'install_common' }
    - {role: "bastion", when: 'install_bastion' }
    - { role: "bastion-opentlc-ipa", when: 'install_ipa_client' }
    -  role: bastion
       when: install_bastion | bool
  tags:
    - step004
    - bastion_tasks
- name: PreSoftware flight-check
  hosts: localhost
  connection: local
@@ -49,49 +44,3 @@
  tasks:
    - debug:
        msg: "Pre-Software checks completed successfully"
#     - name: Generate SSH keys
#       shell: ssh-keygen -b 2048 -t rsa -f "{{output_dir}}/{{env_authorized_key}}" -q -N ""
#       args:
#         creates: "{{output_dir}}/{{env_authorized_key}}"
#       when: set_env_authorized_key | bool
# - name: Configure all hosts with Repositories, Common Files and Set environment key
#   hosts:
#     - all
#   become: true
#   gather_facts: False
#   tags:
#     - step004
#     - common_tasks
#   roles:
#     - { role: "set-repositories", when: 'repo_method is defined' }
#     - { role: "common", when: 'install_common' }
#     - { role: "set_env_authorized_key", when: 'set_env_authorized_key' }
# - name: Configuring Bastion Hosts
#   hosts: bastions
#   become: true
#   roles:
#     #- { role: "set-repositories", when: 'repo_method is defined' }
#     #- { role: "common", when: 'install_common' }
#     - { role: "bastion", when: 'install_bastion' }
#     #- { role: "ansible-version-lock" }
#   tags:
#     - step004
#     - bastion_tasks
# # - name: Configuring Satellite Hosts
# #   hosts: satellites
# #   become: true
# #   roles:
# #   - { role: "rhn-subscription-manager" , when: rhn_subscription_manager  }
# #   tags:
# #     - step004
# #     - satellite_tasks
ansible/configs/satellite-vm/sample_vars.yml
@@ -1,28 +1,98 @@
---
# sample configuration file
#
# Usage: ansible-playbook main.yml -e @configs/just-some-nodes-example/sample.yml
#
# Ideally keep your copy OUTSIDE your repo, especially if using Cloud Credentials
env_type: satellite-vm          # Name of config to deploy
env_type: satellite-vm      # Name of config to deploy
output_dir: /tmp/workdir                # Writable working scratch directory
email: name@example.com                 # User info for notifications
email: satellite_vm@example.com
cloud_provider: ec2                     # Which AgnosticD Cloud Provider to use
aws_region: ap-southeast-2
guid: guid02                             # Unique string used in FQDN
subdomain_base_suffix: .example.opentlc.com      # Your domain used in FQDN
###### satellite env related variables ###############
install_satellite: True
configure_satellite: True
satellite_version: 6.4
org: gpte
org_label: gpte
org_description: "Global Partner Training and Enablement"
lifecycle_environment_path:
    - name: "Dev"
      label: "dev"
      description: "Development Environment"
      prior_env: "Library"
    - name: "QA"
      label: "qa"
      description: "Quality Environment"
      prior_env: "Dev"
    - name: "Prod"
      label: "prod"
      description: "Production Enviornment"
      prior_env: "QA"
# Path to yum repos
own_repo_path: http://you-own.repo.com/repos
subscription_name: "Employee SKU"
########## Activation Key #####################
satellite_content:
  - name:             "Capsule Server"
    activation_key:   "capsule_key"
    subscriptions:
      - "Employee SKU"
    life_cycle:       "Library"
    content_view:     "Capsule Content"
    content_view_update: False
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server'
        product: 'Red Hat Software Collections for RHEL Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Satellite Capsule 6.4 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Satellite Capsule'
        basearch: 'x86_64'
      - name: 'Red Hat Satellite Maintenance 6 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
      - name: 'Red Hat Ansible Engine 2.6 RPMs for Red Hat Enterprise Linux 7 Server'
        product: 'Red Hat Ansible Engine'
        basearch: 'x86_64'
  - name:             "Three Tier App"
    activation_key:   "three_tier_app_key"
    content_view:     "Three Tier App Content"
    life_cycle:       "Library"
    subscriptions:
      - "Employee SKU"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
# Cloud specific settings - example given here for AWS
      - name: 'Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server'
        product: 'Red Hat Software Collections for RHEL Server'
        basearch: 'x86_64'
        releasever:  '7Server'
cloud_provider: ec2                     # Which AgnosticD Cloud Provider to use
aws_region: us-east-1                   # AWS Region to deploy in
HostedZoneId: Z3IHLWJZOU9SRT            # You will need to change this
key_name: ocpkey                        # Keyname must exist in AWS
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7.7'
# AWS Credentials. These are required (don't sync them to your fork)
# aws_access_key_id:
# aws_secret_access_key:
      - name: 'Red Hat Enterprise Linux 7 Server - RH Common (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Enterprise Linux 7 Server - Extras (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
      - name: 'Red Hat Enterprise Linux 7 Server - Optional (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
...
ansible/configs/satellite-vm/software.yml
@@ -12,13 +12,17 @@
  hosts: satellites
  become: True
  gather_facts: True
#   pre_tasks:
#     - debug: var=ansible_default_ipv4.address
  roles:
    # Satellite-installation role installs satellite and Configures firewalld
    - { role: "satellite-installation", when: install_satellite }
    # Uploads manifest, adds & sync repos, creates content-view and activation keys
    - { role: "satellite-configuration", when: configure_satellite }
    - { role: "satellite-public-hostname" }
    - { role: "satellite-installation",                 when: install_satellite  }
    # # - { role: "satellite-hammer-cli",                 when: install_satellite   }
    - { role: "satellite-manage-organization",          when: configure_satellite }
    - { role: "satellite-manage-manifest",              when: configure_satellite }
    - { role: "satellite-manage-subscription",          when: configure_satellite }
    - { role: "satellite-manage-sync",                  when: configure_satellite }
    - { role: "satellite-manage-lifecycle",             when: configure_satellite }
    - { role: "satellite-manage-content-view",          when: configure_satellite }
    - { role: "satellite-manage-activationkey",         when: configure_satellite }
    
- name: Software flight-check
ansible/configs/satellite-vm/start.yml
New file
@@ -0,0 +1,21 @@
---
- import_playbook: ../../include_vars.yml
- name: Start instances
  hosts: localhost
  gather_facts: false
  become: false
  environment:
    AWS_ACCESS_KEY_ID: "{{aws_access_key_id}}"
    AWS_SECRET_ACCESS_KEY: "{{aws_secret_access_key}}"
  tasks:
    - debug:
        msg: "Step 002 Post Infrastructure"
    - name: Start instances
      ec2:
        instance_tags:
          "aws:cloudformation:stack-name": "{{ project_tag }}"
        state: running
        region: "{{ aws_region }}"
ansible/configs/satellite-vm/stop.yml
New file
@@ -0,0 +1,21 @@
---
- import_playbook: ../../include_vars.yml
- name: Stop instances
  hosts: localhost
  gather_facts: false
  become: false
  environment:
    AWS_ACCESS_KEY_ID: "{{aws_access_key_id}}"
    AWS_SECRET_ACCESS_KEY: "{{aws_secret_access_key}}"
  tasks:
    - debug:
        msg: "Step 002 Post Infrastructure"
    - name: Stop instances
      ec2:
        instance_tags:
          "aws:cloudformation:stack-name": "{{ project_tag }}"
        state: stopped
        region: "{{ aws_region }}"
ansible/roles/satellite-capsule-configuration/README.adoc
New file
@@ -0,0 +1,78 @@
:role: satellite-capsule-configuration
:author: GPTE Team
:tag1: configure_capsule
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
Role: {role}
============
This role configures the capsule server.
Requirements
------------
Following are the requirements:
. Satellite must be installed and set up.
. Satellite should have a capsule activation key and repositories.
. Capsule packages should be installed.
Dependencies
------------
* Role is dependent on following roles
  . satellite-public-hostname
  . satellite-manage-capsule-certificate
  . satellite-capsule-installation
Tags
---
|===
|{tag1} | This tag is specific to configuration tasks.
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_capsule
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags configure_capsule
----
Example Playbook
----------------
How to use the role (for instance, with variables passed to the playbook).
[source=text]
----
[user@desktop ~]$ cat playbook.yml
- hosts: capsule.example.com
  vars_files:
    - sample_vars.yml
  roles:
    - satellite-public-hostname
    - satellite-capsule-installation
    - satellite-capsule-configuration
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
-------------------
To extend this role to other versions, create a new file named version_{{satellite_version}}.yml and import the newly created file in main.yml, as sketched below.
For reference, look at link:{main_file}[main.yml] and link:{version_file}[version_6.4.yml].
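A minimal sketch of that import pattern; the 6.4 import matches this role's main.yml, while version_6.5.yml is a hypothetical example of a new version file:
[source=text]
----
# tasks/main.yml
---
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - configure_capsule
## Hypothetical import for a future satellite 6.5 ##
- import_tasks: version_6.5.yml
  when: satellite_version == 6.5
  tags:
    - configure_capsule
----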
Author Information
------------------
{author}
ansible/roles/satellite-capsule-configuration/tasks/main.yml
@@ -1,83 +1,7 @@
---
- name: Download Cert from Satellite
  get_url:
    url: "https://{{item}}/pub/katello-ca-consumer-latest.noarch.rpm"
    dest: /root/katello-ca-consumer-latest.noarch.rpm
    mode: 0664
    validate_certs: no
  loop: "{{ groups['satellites'] }}"
- name: Remove rh-amazon-rhui-client package
  tags: packer
  yum:
    name: rh-amazon-rhui-client
    state: absent
- name: Install CA certificate
  yum:
    name: /root/katello-ca-consumer-latest.noarch.rpm
    state: present
- name: Install certs
  # use rpm here to avoid issue when yum is broken (chicken&egg)
  yum:
    name:  "/root/katello-ca-consumer-latest.noarch.rpm"
    state: present
# - name: Delete Cert Package
#   file:
#     name: /root/katello-ca-consumer-latest.noarch.rpm
#     state: absent
- name: Register with activation-key
  redhat_subscription:
    state: present
    server_hostname: "{{item}}"
    activationkey: "{{activation_key_name}}"
    org_id: "Default_Organization"
  loop: "{{ groups['satellites'] }}"
- name: Disable all repos
  command: subscription-manager repos --disable "*"
- name: Enable repos for RHEL
  rhsm_repository:
    name: "{{ item }}"
    state: enabled
  with_items:
    - '{{ capsule_repos }}'
- name: Install Katello Agent
  yum:
    name: katello-agent
    state: present
- name: Start Katello Agent
  service:
    name: goferd
    state: started
    enabled: yes
- name: Copy capsule cert tar from satellite
  synchronize:
    src: /root/{{ inventory_hostname }}-certs.tar
    dest: /root
  delegate_to: "{{item}}"
  loop: "{{ groups['satellites'] }}"
- name: Configure Satellite Capsule
  command: >-
    satellite-installer --scenario capsule
      --foreman-proxy-content-parent-fqdn {{item}}
      --foreman-proxy-register-in-foreman true
      --foreman-proxy-foreman-base-url https://{{item}}
      --foreman-proxy-trusted-hosts {{item}}
      --foreman-proxy-trusted-hosts {{ ansible_fqdn }}
      --foreman-proxy-oauth-consumer-key {{ hostvars[item]['capsule_data']['foreman_oauth_key'] }}
      --foreman-proxy-oauth-consumer-secret {{ hostvars[item]['capsule_data']['foreman_oauth_secret'] }}
      --foreman-proxy-content-certs-tar /root/{{inventory_hostname}}-certs.tar
      --puppet-server-foreman-url "https://{{item}}"
  loop: "{{ groups['satellites'] }}"
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - configure_capsule
ansible/roles/satellite-capsule-configuration/tasks/version_6.4.yml
New file
@@ -0,0 +1,37 @@
---
- name: Grab satellite publicname, consumer-key and consumer-secret
  set_fact:
    satellite_publicname: "{{ item | regex_replace('.internal',subdomain_base_suffix) }}"
    consumer_key: "{{ hostvars[item]['capsule_data']['foreman_oauth_key'] }}"
    consumer_secret: "{{ hostvars[item]['capsule_data']['foreman_oauth_secret'] }}"
  loop: "{{ groups['satellites'] }}"
  tags:
    - configure_capsule
- name: Download capsule certificate
  synchronize:
    src: /root/{{ publicname }}-certs.tar
    dest: /root
  delegate_to: "{{ item  }}"
  loop: "{{ groups['satellites'] }}"
  tags:
    - configure_capsule
- name: Configure Satellite Capsule
  command: >-
    satellite-installer --scenario capsule
      --foreman-proxy-content-parent-fqdn {{ satellite_publicname }}
      --foreman-proxy-register-in-foreman true
      --foreman-proxy-foreman-base-url "https://{{ satellite_publicname }}"
      --foreman-proxy-trusted-hosts {{ satellite_publicname }}
      --foreman-proxy-trusted-hosts {{ publicname }}
      --foreman-proxy-oauth-consumer-key {{ consumer_key }}
      --foreman-proxy-oauth-consumer-secret {{ consumer_secret }}
      --foreman-proxy-content-certs-tar /root/{{publicname}}-certs.tar
      --puppet-server-foreman-url "https://{{ satellite_publicname }}"
  tags:
    - configure_capsule
ansible/roles/satellite-capsule-installation/README.adoc
New file
@@ -0,0 +1,125 @@
:role: satellite-capsule-installation
:author: GPTE Team
:tag1: install_capsule
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
Role: {role}
============
This role installs the capsule server.
Requirements
------------
Following are the requirements:
. Satellite must be installed and set up.
. Satellite should have a capsule activation key and repositories.
Role Variables
--------------
* Following are the variables which need to be defined
|===
|satellite_version: "Digit" |Required |satellite version
|org: "String" |Required |Organization name
|org_label: "String" |Notrequired | Organization label in string without space
|org_description: "String" |Not-required | Organization description
|satellite_content: {Dictionary} |Required | Main dictionary variable
|name: "String" |Required | Must be "Capsule Sever" for capsules
|activation_keys: "String" |Required | Name of the activation key
|vars/repos: [list] | Required | List of repos to enable in vars directory
|===
* Example variables
. Variables in sample_vars.yml
[source=text]
----
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
satellite_content:
  - name:             "Capsule Server"
    activation_key:   "capsule_key"
  - name:             "Three Tier App"
    activation_key:   "three_tier_app_key"
----
. Variable in vars directory
[source=text]
----
[user@desktop ~]$ cat vars/main.yml
repos:
  - rhel-7-server-rpms
  - rhel-server-rhscl-7-rpms
  - rhel-7-server-satellite-maintenance-6-rpms
  - rhel-7-server-ansible-2.6-rpms
  - rhel-7-server-satellite-capsule-6.4-rpms
----
Dependencies
------------
* This role depends on the following roles
  . satellite-public-hostname
  . satellite-manage-capsule-certificate
Tags
---
|===
|{tag1} | This tag is specific to installation tasks.
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags install_capsule
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags install_capsule
----
Example Playbook
----------------
How to use your role (for instance, with variables passed in playbook).
[source=text]
----
[user@desktop ~]$ cat playbook.yml
- hosts: capsule.example.com
  vars_files:
    - sample_vars.yml
  roles:
    - satellite-public-hostname
    - satellite-capsule-installation
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
------------------
To extend the role for another version, create a new file named version_{{satellite_version}}.yml and import it in main.yml.
For reference, see link:{main_file}[main.yml] and link:{version_file}[version_6.4.yml].
Author Information
------------------
{author}
ansible/roles/satellite-capsule-installation/tasks/main.yml
@@ -1,23 +1,7 @@
---
- name: Add internal dns name in hosts file
  lineinfile:
    dest: /etc/hosts
    state: present
    insertafter: EOF
    line: "{{ ansible_default_ipv4['address'] }}  {{ ansible_hostname }}.{{guid}}.internal"
  tags:
    - install_capsule
- name: Update system
  package:
    name: '*'
    state: latest
  tags:
    - install_capsule
- name: Install Satellite Capsule Packages
  package:
    name: satellite-capsule
    state: latest
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - install_capsule
ansible/roles/satellite-capsule-installation/tasks/version_6.4.yml
New file
@@ -0,0 +1,85 @@
---
- name: Grab satellite publicname
  set_fact:
    satellite_publicname: "{{ item | regex_replace('.internal',subdomain_base_suffix) }}"
  loop: "{{ groups['satellites'] }}"
- name: Remove rh-amazon-rhui-client package
  yum:
    name: rh-amazon-rhui-client
    state: absent
  tags:
    - install_capsule
- name: Remove existing repositories
  shell: "{{ item }}"
  loop:
    - '/bin/yum-config-manager repo --disable "*"'
    - '/sbin/subscription-manager repos --disable "*"'
    - '/bin/rm -rf /etc/yum.repos.d/*.repo'
  tags:
    - install_capsule
- name: Download Cert from Satellite
  get_url:
    url: "https://{{satellite_publicname}}/pub/katello-ca-consumer-latest.noarch.rpm"
    dest: /root/katello-ca-consumer-latest.noarch.rpm
    mode: 0664
    validate_certs: no
  tags:
    - install_capsule
- name: Install Satellite CA certificate
  yum:
    name: /root/katello-ca-consumer-latest.noarch.rpm
    state: present
  tags:
    - install_capsule
- name: Register with activation-key
  redhat_subscription:
    state: present
    server_hostname: "{{satellite_publicname}}"
    activationkey: "{{ item.activation_key }}"
    org_id: "{{ org_label }}"
  when: 'item.name == "Capsule Server"'
  loop: "{{ satellite_content }}"
  tags:
    - install_capsule
- name: Enable repos for RHEL
  rhsm_repository:
    name: "{{ item }}"
    state: enabled
  with_items:
    - '{{ capsule_repos }}'
  tags:
    - install_capsule
- name: Yum clean all
  command: /bin/yum clean all
- name: Host update
  yum:
    name: '*'
    state: latest
  tags:
    - install_capsule
- name: Install capsule packages
  yum:
    name:
      - satellite-capsule-6.4.*el7*
      - katello-agent
    state: latest
  tags:
    - install_capsule
- name: Start Katello Agent
  service:
    name: goferd
    state: started
    enabled: yes
  tags:
    - install_capsule
ansible/roles/satellite-capsule-installation/vars/main.yml
File was renamed from ansible/roles/satellite-capsule-configuration/defaults/main.yml
@@ -1,7 +1,8 @@
---
capsule_repos:
repos:
  - rhel-7-server-rpms
  - rhel-7-server-satellite-capsule-6.4-rpms
  - rhel-server-rhscl-7-rpms
  - rhel-7-server-satellite-maintenance-6-rpms
  - rhel-7-server-ansible-2.6-rpms
  - rhel-7-server-ansible-2.6-rpms
  - rhel-7-server-satellite-capsule-6.4-rpms
  - rhel-7-server-satellite-tools-6.4-rpms
ansible/roles/satellite-configuration/tasks/01_manifest.yml
File was deleted
ansible/roles/satellite-configuration/tasks/02_repository.yml
File was deleted
ansible/roles/satellite-configuration/tasks/03_content_view.yml
File was deleted
ansible/roles/satellite-configuration/tasks/04_lifecycle.yml
File was deleted
ansible/roles/satellite-configuration/tasks/05_activationkey.yml
File was deleted
ansible/roles/satellite-configuration/tasks/main.yml
File was deleted
ansible/roles/satellite-configuration/vars/main.yml
File was deleted
ansible/roles/satellite-hammer-cli/tasks/main.yml
New file
@@ -0,0 +1,12 @@
---
- name: "Create .hammer directory"
  file:
    path: "/root/.hammer"
    state: "directory"
    mode: "0755"
#Copy the hammer configuration from template to the .hammer directory
- name: "Setup hammer cli config file"
  template:
    src: "hammer_cli.j2"
    dest: "/root/.hammer/cli_config.yml"
ansible/roles/satellite-hammer-cli/templates/hammer_cli.j2
New file
@@ -0,0 +1,4 @@
:foreman:
  :username: {{ satellite_admin_username }}
  :password: {{ satellite_admin_password }}
  :request_timeout: -1
ansible/roles/satellite-installation/README.adoc
New file
@@ -0,0 +1,119 @@
:role: satellite-installation
:author: GPTE Team
:tag1: install_satellite
:tag2: install_firewalld
:tag3: public_hostname
:tag4: update_satellite_host
:tag5: setup_satellite
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
Role: {role}
============
This role installs and configures Satellite and sets up firewalld.
Requirements
------------
. Basic repositories should be configured to install packages.
Role Variables
--------------
|===
|satellite_version: "Digit" |Required |satellite version
|satellite_admin: "String" |Required |Satellite admin username
|satellite_admin_password: "String" |Required |Satellite admin password
|firewall_services: [List] |Not-required |List of services to enable; default values are in defaults/main.yml
|firewall_ports: [List] |Not-required |List of ports to enable; default values are in defaults/main.yml
|===
* Example variables
[source=text]
----
satellite_version: 6.4
satellite_admin: admin
satellite_admin_password: password
firewall_services:
  - ssh
  - RH-Satellite-6
firewall_ports:
  - 22/tcp
  - 80/tcp
  - 443/tcp
----
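These lists are consumed by the role's firewalld tasks. A minimal sketch of how such tasks could apply them, using the stock firewalld module (illustrative only, not the role's exact code):

[source=text]
----
- name: Enable firewall services
  firewalld:
    service: "{{ item }}"
    permanent: yes
    immediate: yes
    state: enabled
  loop: "{{ firewall_services }}"
- name: Open firewall ports
  firewalld:
    port: "{{ item }}"
    permanent: yes
    immediate: yes
    state: enabled
  loop: "{{ firewall_ports }}"
----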
Tags
---
|===
|{tag1} |Consistent tag for all satellite install tasks
|{tag2} |For firewall tasks
|{tag3} |For hostname setup tasks
|{tag4} |For host update tasks
|{tag5} |For satellite setup tasks
|===
* Example tags
[source=text]
----
## Tagged jobs
ansible-playbook playbook.yml --tags install_satellite
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags install_satellite
----
Example Playbook
----------------
How to use your role (for instance, with variables passed in playbook).
[source=text]
----
[user@desktop ~]$ cat sample_vars.yml
satellite_version: 6.4
firewall_services:
  - ssh
  - RH-Satellite-6
firewall_ports:
  - 22/tcp
  - 80/tcp
  - 443/tcp
[user@desktop ~]$ cat playbook.yml
- hosts: satellite.example.com
  vars_files:
    - sample_vars.yml
  roles:
    - satellite-public-hostname
    - satellite-installation
[user@desktop ~]$ ansible-playbook playbook.yml -e 'satellite_admin: admin' -e 'satellite_admin_password: password'
----
Dependencies
------------
This role depends on the role satellite-public-hostname.
Tips to update Role
------------------
To extend the role for another version, create a new file named version_{{satellite_version}}.yml and import it in main.yml.
For reference, see link:{main_file}[main.yml] and link:{version_file}[version_6.4.yml].
Author Information
------------------
{author}
ansible/roles/satellite-installation/defaults/main.yml
ansible/roles/satellite-installation/tasks/main.yml
@@ -1,3 +1,19 @@
---
- import_tasks: satellite_installation.yml
- import_tasks: firewalld.yml
  tags:
    - install_satellite
    - install_firewalld
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - install_satellite
## Import for version satellite 6.6 ##
- import_tasks: version_6.6.yml
  when: satellite_version == 6.6
  tags:
    - install_satellite
ansible/roles/satellite-installation/tasks/satellite_installation.yml
File was deleted
ansible/roles/satellite-installation/tasks/version_6.4.yml
New file
@@ -0,0 +1,28 @@
---
- name: Update system
  package:
    name: '*'
    state: latest
  tags:
    - install_satellite
    - update_satellite_host
- name: Install Satellite Package
  package:
    name: satellite-6.4*.el7*
    state: present
  tags:
    - install_satellite
- name: configure satellite
  command: >-
    satellite-installer --scenario satellite
    --foreman-admin-username {{ satellite_admin }}
    --foreman-admin-password {{ satellite_admin_password }}
    --certs-cname {{inventory_hostname}}
  async: 3600
  poll: 36
  tags:
    - install_satellite
    - setup_satellite
ansible/roles/satellite-installation/tasks/version_6.6.yml
New file
@@ -0,0 +1,28 @@
---
- name: Update system
  package:
    name: '*'
    state: latest
  tags:
    - install_satellite
    - update_satellite_host
- name: Install Satellite Package
  package:
    name: satellite-6.6*.el7*
    state: present
  tags:
    - install_satellite
- name: configure satellite
  command: >-
    satellite-installer --scenario satellite
    --foreman-initial-admin-username {{ satellite_admin }}
    --foreman-initial-admin-password {{ satellite_admin_password }}
    --certs-cname {{inventory_hostname}}
  async: 3600
  poll: 36
  tags:
    - install_satellite
    - setup_satellite
ansible/roles/satellite-manage-activationkey/README.adoc
New file
@@ -0,0 +1,129 @@
:role: satellite-manage-activationkey
:author: GPTE Team
:tag1: configure_satellite
:tag2: configure_satellite_activationkey
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
:script_file: files/activationkey_script_version_6.4.sh
Role: {role}
============
This role creates activation keys and adds subscriptions to them.
Requirements
------------
Following are the requirements:
. Satellite must be installed and set up.
. Hammer cli config must be configured/updated with a privileged user and password to run the satellite cli.
. Subscription, life-cycle, and content-view should already exist.
Role Variables
--------------
* Following are the variables which need to be defined
|===
|satellite_version: "Digit" |Required |satellite version
|org: "String" |Required |Organization name
|org_label: "String" |Required | Organization label in string without space
|org_description: "String" |Required | Organization description
|satellite_content: {Dictionary} |Required | Main dictionary variable
|content_view: "String" | Requird | Name of content-view
|life_cycle: "String" | Required | Name of life_cycle for activation key
|subscriptions: [List] | Required | List of subscriptions to add in activation key
|===
* Example variables
[source=text]
----
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
satellite_content:
  - name:             "Ansible server"
    life_cycle:       "Prod"
    content_view:     "Ansible servers content"
    activation_key:   "capsule_server_key"
    subscriptions:
      - "Employee SKU
  - name:             "Three Tier App"
    life_cycle:       "Dev"
    content_view:     "Three Tier App"
    activation_key:   "three_tier_app"
    subscriptions:
      - "Employee SKU"
----
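Each satellite_content entry is paired with every element of its subscriptions list via the subelements filter, and the role's script runs once per pair. A debug-only sketch of that expansion (not part of the role):

[source=text]
----
- name: Show activation key / subscription pairs
  debug:
    msg: "{{ item.0.activation_key }} -> {{ item.1 }}"
  loop: "{{ satellite_content | subelements('subscriptions') }}"
----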
Tags
---
|===
|{tag1} |Consistent tag for all satellite config roles
|{tag2} |This tag is specific to this role only
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_satellite,configure_satellite_activationkey
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags configure_satellite,configure_satellite_activationkey
----
Example Playbook
----------------
How to use your role (for instance, with variables passed in playbook).
[source=text]
----
[user@desktop ~]$ cat sample_vars.yml
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
satellite_content:
  - name:             "Ansible server"
    life_cycle:       "Prod"
    content_view:     "Ansible servers content"
    activation_key:   "capsule_server_key"
    subscriptions:
      - "Employee SKU
  - name:             "Three Tier App"
    life_cycle:       "Dev"
    content_view:     "Three Tier App"
    activation_key:   "three_tier_app"
    subscriptions:
      - "Employee SKU"
[user@desktop ~]$ cat playbook.yml
- hosts: satellite.example.com
  vars_files:
    - sample_vars.yml
  roles:
    - satellite-manage-activationkey
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
------------------
To extend the role for another version, create a new file named version_{{satellite_version}}.yml and import it in main.yml. Create a new script in the files directory for the activation key action.
For reference, see link:{main_file}[main.yml], link:{version_file}[version_6.4.yml] and link:{script_file}[script].
Author Information
------------------
{author}
ansible/roles/satellite-manage-activationkey/files/activationkey_script_version_6.4.sh
New file
@@ -0,0 +1,42 @@
#!/bin/bash
function sub_function(){
    sub_avail=$(hammer subscription list --organization "${org}" |grep "${subscription_name}" > /dev/null ; echo $?)
    if [ "${sub_avail}" -eq 0 ]; then
        sub_id=$(hammer subscription list --organization "${org}" |grep "${subscription_name}" |awk '{print $1}')
        echo ${sub_id}
        return
    else
        echo "${subscription_name} :  is not available"
        exit 1
    fi
}
# Parameter variables
org="${1}"
activation_key="${2}"
subscription_name="${3}"
content_view="${4}"
life_cycle="${5}"
# Find activation key
key_exist=$(hammer activation-key info --organization "${org}" --name "${activation_key}" >& /dev/null ; echo $? )
if [ ${key_exist} -eq 0 ]; then
    # find subscription if already exist
    echo "Activation key exist"
    sub_name_exist=$(hammer subscription  list --organization "${org}" --activation-key "${activation_key}" |grep "${subscription_name}" |awk -F'|' '{print $3}' | sed -e 's/^\ //' -e 's/\ $//')
        if [ "${sub_name_exist}" == "${subscription_name}" ]; then
                echo "${subscription_name} subscription already exist"
         else
            # Add subscription to activation key
            sub_function
            hammer activation-key add-subscription --organization "${org}" --name "${activation_key}"  --subscription-id ${sub_id}
        fi
elif [ ${key_exist} -ne 0 ]; then
    # Create new activation key
    hammer activation-key create --organization "${org}" --name "${activation_key}" --content-view "${content_view}" --lifecycle-environment "${life_cycle}"
    # Add subscription to activation key
    sub_function
    hammer activation-key add-subscription --organization "${org}" --name "${activation_key}"  --subscription-id ${sub_id}
fi
ansible/roles/satellite-manage-activationkey/tasks/main.yml
@@ -1,71 +1,8 @@
---
- name: Fetching Name list of existing Activation Keys
  shell: >-
    hammer --output cve
    activation-key list
    --organization "{{org}}"  |grep Name
  ignore_errors: yes
  register: activation_key_list
  tags:
    - configure_satellite
    - configure_satellite_activationkey
- name: Creating fact list from Name of existing Activation Keys
  set_fact:
    list_of_exist_activation_key: "{{ list_of_exist_activation_key | d([]) + [ item.split(':')[1].lstrip(' ') ] }}"
  loop: "{{activation_key_list.stdout_lines}}"
  when: activation_key_list is defined
  tags:
    - configure_satellite
    - configure_satellite_activationkey
- debug: var=list_of_exist_activation_key
- name: Creating Activiation Key
  command: >-
      hammer activation-key create
      --organization "{{org}}"
      --name "{{activation_key_name}}"
      --content-view "{{content_view_name}}"
      --lifecycle-environment "{{life_cycle_env_name}}"
  when: 'activation_key_name not in list_of_exist_activation_key| d([])'
  tags:
    - configure_satellite
    - configure_satellite_activationkey
##########################################################
# need to plan how to work on subscription id
# -  hammer --output cve subscription list --organization "Default Organization" |grep '^ID'
- name: Fetching list of subscription ID from existing Activation-Keys
  shell: >-
    hammer --output cve
    activation-key subscriptions
    --organization "{{org}}"
    --name "{{activation_key_name}}" |grep ID
  ignore_errors: yes
  register: list_of_susbscription_in_activation_key
  tags:
    - configure_satellite
    - configure_satellite_activationkey
- name: Creating fact list of subscription ID from existing Activation-Keys
  set_fact:
    list_of_existing_subscription_in_activation_key: "{{ list_of_existing_subscription_in_activation_key | d([]) + [ item.split(':')[1].strip(' ') | int  ] }}"
  loop: "{{ list_of_susbscription_in_activation_key.stdout_lines }}"
  when: list_of_susbscription_in_activation_key is defined
  tags:
    - configure_satellite
    - configure_satellite_activationkey
- debug: var=list_of_existing_subscription_in_activation_key
- name: Adding subscription to Activiation Key
  command: >-
      hammer activation-key add-subscription
      --organization "{{org}}"
      --name "{{activation_key_name}}"
      --subscription-id 1
  when: '1 not in list_of_existing_subscription_in_activation_key|d([])'
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - configure_satellite
    - configure_satellite_activationkey
ansible/roles/satellite-manage-activationkey/tasks/version_6.4.yml
New file
@@ -0,0 +1,31 @@
---
# - name: Script to create and update activation key
#   script: >-
#     script_version_6.4.sh
#     "{{ org }}"
#     "{{ item.activation_key}}"
#     "{{ subscription_name }}"
#     "{{item.content_view}}"
#     "{{ item.life_cycle }}"
#   register: output
#   changed_when: '"already exist" not in output.stdout_lines'
#   loop: "{{ satellite_content }}"
#   tags:
#     - configure_satellite
#     - configure_satellite_activationkey
- name: Script to create and update activation key
  script: >-
    activationkey_script_version_6.4.sh
    "{{ org }}"
    "{{ item.0.activation_key}}"
    "{{ item.1 }}"
    "{{item.0.content_view}}"
    "{{ item.0.life_cycle }}"
  register: output
  changed_when: '"already exist" not in output.stdout'
  loop: "{{ satellite_content | subelements('subscriptions') }}"
  tags:
    - configure_satellite
    - configure_satellite_activationkey
ansible/roles/satellite-manage-capsule-certificate/README.adoc
New file
@@ -0,0 +1,87 @@
:role: satellite-manage-capsule-certificate
:author: GPTE Team
:tag1: configure_satellite
:tag2: configure_satellite_capsule_certificate
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
Role: {role}
============
This role generates capsule certificates on the satellite server.
Requirements
------------
. Satellite must be installed and set up.
Role Variables
--------------
|===
|satellite_version: "Digit" |Required |satellite version
|subdomain_base_suffix: "String" |Required | Domain suffix
|groups['capsules'] | Required | Ansible inventory group called capsules
|===
* Example variables
[source=text]
----
satellite_version: 6.4
subdomain_base_suffix: '.example.com'
----
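The core of the role is a per-capsule run of capsule-certs-generate on the satellite server, deriving the public FQDN by replacing the .internal suffix with subdomain_base_suffix (excerpt from the 6.4 task file):

[source=text]
----
- name: Generate Satellite Capsule certificate
  shell: >-
    capsule-certs-generate
    --foreman-proxy-fqdn {{ item | regex_replace('.internal',subdomain_base_suffix) }}
    --certs-tar /root/{{item | regex_replace('.internal',subdomain_base_suffix) }}-certs.tar
  loop: "{{ groups['capsules'] }}"
----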
Tags
---
|===
|{tag1} |Consistent tag for all satellite config roles
|{tag2} | This tag is specific to this role only
|===
* Example tags
[source=text]
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_satellite
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags configure_satellite
----
Example Playbook
----------------
How to use your role (for instance, with variables passed in playbook).
[source=text]
----
[user@desktop ~]$ cat playbook.yml
- hosts: satellite.example.com
  vars:
    satellite_version: 6.4
    subdomain_base_suffix: '.example.com'
  roles:
    - satellite-manage-capsule-certificate
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
------------------
To extend the role for another version, create a new file named version_{{satellite_version}}.yml and import it in main.yml.
For reference, see link:{main_file}[main.yml] and link:{version_file}[version_6.4.yml].
Author Information
------------------
{author}
ansible/roles/satellite-manage-capsule-certificate/tasks/main.yml
@@ -1,50 +1,9 @@
---
- name: Block for capsule servers
  when: groups['capsules'] is defined
  block:
    - name: Collecting foreman secret from config
      shell:
        "grep oauth_consumer_secret /etc/foreman/settings.yaml | awk '{ print $2}'"
      register: output
    - name: Storing foreman secret
      set_fact:
        capsule_data: "{{ capsule_data | default({}) |combine({ 'foreman_oauth_secret': output.stdout}) }} "
      delegate_to: localhost
    - name: Collecting foreman key from config
      shell:
        "grep oauth_consumer_key /etc/foreman/settings.yaml | awk '{ print $2}'"
      register: output
    - name: Storing foreman key
      set_fact:
        capsule_data: "{{ capsule_data| default({}) |combine({ 'foreman_oauth_key': output.stdout}) }} "
    - name: Collecting pulp secret from config
      shell:
        "grep ^oauth_secret /etc/pulp/server.conf | awk '{print $2}'"
      register: output
    - name: Storing pulp server key
      set_fact:
        capsule_data: "{{ capsule_data| default({}) |combine({ 'pulp_oauth_secret': output.stdout}) }} "
    - debug: var=capsule_data
    - name: Generate Satellite Capsule certificate
      shell: >-
        capsule-certs-generate
        --foreman-proxy-fqdn {{ item }}
        --certs-tar /root/{{item}}-certs.tar
      register: capsule_certificate_log
      loop: "{{ groups['capsules'] }}"
    - name: Storing logs in log file
      copy:
        content: "{{ item[1].stdout_lines  }}"
        dest: /root/{{item[0]}}-certs.log
      loop: "{{ groups['capsules']|zip(capsule_certificate_log.results)|list }}"
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - configure_satellite
    - configure_satellite_capsule_certificate
ansible/roles/satellite-manage-capsule-certificate/tasks/version_6.4.yml
New file
@@ -0,0 +1,52 @@
---
- name: Block for capsule servers
  tags:
    - configure_satellite
    - configure_satellite_capsule_certificate
  when: groups['capsules'] is defined
  block:
    - name: Collecting foreman secret from config
      shell:
        "grep oauth_consumer_secret /etc/foreman/settings.yaml | awk '{ print $2}'"
      register: output
    - name: Storing foreman secret
      set_fact:
        capsule_data: "{{ capsule_data | default({}) |combine({ 'foreman_oauth_secret': output.stdout}) }} "
      delegate_to: localhost
    - name: Collecting foreman key from config
      shell:
        "grep oauth_consumer_key /etc/foreman/settings.yaml | awk '{ print $2}'"
      register: output
    - name: Storing foreman key
      set_fact:
        capsule_data: "{{ capsule_data| default({}) |combine({ 'foreman_oauth_key': output.stdout}) }} "
    - name: Collecting pulp secret from config
      shell:
        "grep ^oauth_secret /etc/pulp/server.conf | awk '{print $2}'"
      register: output
    - name: Storing pulp server key
      set_fact:
        capsule_data: "{{ capsule_data| default({}) |combine({ 'pulp_oauth_secret': output.stdout}) }} "
    # - debug: var=capsule_data
    - name: Generate Satellite Capsule certificate
      shell: >-
        capsule-certs-generate
        --foreman-proxy-fqdn {{ item | regex_replace('.internal',subdomain_base_suffix) }}
        --certs-tar /root/{{item | regex_replace('.internal',subdomain_base_suffix) }}-certs.tar
      register: capsule_certificate_log
      loop: "{{ groups['capsules'] }}"
    # - name: Storing logs in log file
    #   copy:
    #     content: "{{ item[1].stdout_lines  }}"
    #     dest: /root/{{item[0]}}-certs.log
    #   loop: "{{ groups['capsules']|zip(capsule_certificate_log.results)|list }}"
ansible/roles/satellite-manage-content-view/README.adoc
New file
@@ -0,0 +1,143 @@
:role: satellite-manage-content-view
:author: GPTE Team
:tag1: configure_satellite
:tag2: configure_satellite_content_view
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
Role: {role}
============
This role creates a content-view, adds repositories to it, and publishes it.
Requirements
------------
Following are the requirements:
. Satellite must be installed and set up.
. Hammer cli config must be configured/updated with a privileged user and password to run the satellite cli.
. Repositories should be enabled and synchronized in the organization before being added to a content-view.
Role Variables
--------------
* Following are the variables which need to be defined
|===
|satellite_version: "Digit" |Required |satellite version
|org: "String" |Required |Organization name
|org_label: "String" |Required | Organization label in string without space
|org_description: "String" |Required | Organization description
| satellite_content: {Dictionary} |Required | Main dictionary variable
| content_view: "String" | Requird | Name of content-view
| content_view_update: bool | Not-required, Default(false) |True/false
| repos: [list] | Required | List of repository name
|===
* Example variables
[source=text]
----
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
satellite_content:
  - name:             "Ansible server"
    content_view:     "Ansible servers content"
    content_view_update: False
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Satellite Maintenance 6 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
  - name:             "Three Tier App"
    content_view:     "Three Tier App"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
----
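Repositories are attached to the content-view by their composed label: the repository name with parentheses stripped, plus basearch and, where applicable, releasever (excerpt from the 6.4 task file):

[source=text]
----
- name: Add content-view repo
  command: >-
    hammer content-view add-repository
    --organization "{{org}}"
    --name "{{ item.0.content_view }}"
    --repository "{{ item.1.name | regex_replace('[()]') + ' ' + item.1.basearch + (' ' if item.1.releasever|d(None) else '') + item.1.releasever|d('') }}"
  loop: "{{ satellite_content | subelements('repos') }}"
----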
Tags
---
|===
|{tag1} |Consistent tag for all satellite config roles
|{tag2} |This tag is specific to this role only
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_satellite,configure_satellite_content_view
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags configure_satellite,configure_satellite_content_view
----
Example Playbook
----------------
How to use your role (for instance, with variables passed in playbook).
[source=text]
----
[user@desktop ~]$ cat sample_vars.yml
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
satellite_content:
  - name:             "Ansible server"
    content_view:     "Ansible servers content"
    content_view_update: False
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Satellite Maintenance 6 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
  - name:             "Three Tier App"
    content_view:     "Three Tier App"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
[user@desktop ~]$ cat playbook.yml
- hosts: satellite.example.com
  vars_files:
    - sample_vars.yml
  roles:
    - satellite-manage-content-view
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
------------------
To extend the role for another version, create a new file named version_{{satellite_version}}.yml and import it in main.yml.
For reference, see link:{main_file}[main.yml] and link:{version_file}[version_6.4.yml].
Author Information
------------------
{author}
ansible/roles/satellite-manage-content-view/tasks/main.yml
@@ -1,125 +1,9 @@
---
- name: Fetching list of names available in Content-Views
  shell: >-
    hammer --output cve
    content-view list
    --organization "{{org}}" | grep Name
  register: content_view_list
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - configure_satellite
    - configure_satellite_content_view
- name: Creating list fact of names available in Content-Views
  set_fact:
    content_view_exist: "{{ content_view_exist | d([]) + [item.split(':')[1].lstrip(' ')]}}"
  loop: "{{content_view_list.stdout_lines}}"
  tags:
    - configure_satellite
    - configure_satellite_content_view
- name: Print fact
  debug: var=content_view_exist
- name: Creating Content-view if not exist
  command: >-
    hammer content-view create
    --name "{{content_view_name}}"
    --organization "{{org}}"
  when: content_view_name not in content_view_exist
  tags:
    - configure_satellite
    - configure_satellite_content_view
#######################################
- name: Fetching list of enabled Repo IDs from subscription
  shell: >-
    hammer --output cve
    repository list
    --organization "{{org}}" |grep Id
  ignore_errors: yes
  register: repo_list
  tags:
    - configure_satellite
    - configure_satellite_content_view
- name: Creating list fact of enabled Repo IDs from subscription
  set_fact:
    sub_repos_name_exist: "{{ sub_repos_name_exist | d([]) + [ item.split(':')[1].lstrip(' ')] }}"
  loop: "{{repo_list.stdout_lines}}"
  when: repo_list is defined
  tags:
    - configure_satellite
    - configure_satellite_content_view
# - name: list of enabled repo ID in subscription
#   debug: var=sub_repos_name_exist
######################################
- name: Fetching list of existing Repository IDs from Content-View
  shell: >-
    hammer --output cve
    content-view list
    --organization "{{org}}"
    --name  "{{content_view_name}}" |grep "Repository IDs"
  ignore_errors: yes
  register: content_view_repos_list
  tags:
    - configure_satellite
    - configure_satellite_content_view
- name: Creating list fact of existing Repository IDs from Content-View
  set_fact:
    content_view_repos_exist: "{{ content_view_repos_exist | d([]) + [item.lstrip(' ')]}}"
  loop: "{{ content_view_repos_list.stdout_lines[0].split(':')[1].split(',') }}"
  when: content_view_repos_list is defined
  tags:
    - configure_satellite
    - configure_satellite_content_view
# - name: list existing repos in content view
#   debug: var=content_view_repos_exist
- name: adding repo to content-view
  command: >-
      hammer content-view add-repository
      --organization "{{org}}"
      --name "{{content_view_name}}"
      --repository-id  "{{ item }}"
  when: item  not in content_view_repos_exist
  loop: "{{ sub_repos_name_exist }}"
  tags:
    - configure_satellite
    - configure_satellite_content_view
#######################
- name: Fetching list of published versions of Content-View
  shell: >-
    hammer --output cve
    content-view version list
    --organization "{{org}}"
    --content-view "{{content_view_name}}" |grep Version
  ignore_errors: yes
  register: published_versions
  tags:
    - configure_satellite
    - configure_satellite_content_view
- name: Creating facts list of published versions of Content-View
  set_fact:
    published_content_view_versions: "{{ published_content_view_versions | d([]) + [  item.split(':')[1].strip(' ') | int  ] }}"
  loop: "{{ published_versions.stdout_lines }} "
  when: published_versions is defined
  tags:
    - configure_satellite
    - configure_satellite_content_view
- name: Publishing content-view
  command: >-
      hammer content-view publish
      --organization "{{org}}"
      --name "{{content_view_name}}"
      --async
  when: published_content_view_versions | d([]) | length == 0
  tags:
    - configure_satellite
    - configure_satellite_content_view
ansible/roles/satellite-manage-content-view/tasks/version_6.4.yml
New file
@@ -0,0 +1,93 @@
---
- name: Fetching list of names available in Content-Views
  shell: >-
    hammer --output cve
    content-view list
    --organization "{{org}}" | grep Name | awk  -F'  +' '{ print $2}'
  register: content_view_list
  tags:
    - configure_satellite
    - configure_satellite_content_view
# - name: Print fact
#   debug: var=content_view_list.stdout_lines
#   tags:
#     - configure_satellite
#     - configure_satellite_content_view
## Create new content-view
- name: Creating Content-view if not exist
  command: >-
    hammer content-view create
    --name "{{item.content_view}}"
    --organization "{{org}}"
  when: item.content_view  not in content_view_list.stdout_lines
  loop: "{{ satellite_content }}"
  tags:
    - configure_satellite
    - configure_satellite_content_view
## add repo to new content-view
# - name: Print list of repository
#   debug:
#     msg: "{{ item.0.content_view }} --->>> {{ item.1.name | regex_replace('[()]')  + ' ' + item.1.basearch + ( ' ' if item.1.releasever|d(None)  else '')  +  item.1.releasever|d('') }}"
#   loop: "{{satellite_content | subelements('repos') }}"
#   tags:
#     - configure_satellite
#     - configure_satellite_content_view
- name: Add content-view repo
  command: >-
    hammer content-view add-repository
    --organization "{{org}}"
    --name "{{ item.0.content_view }}"
    --repository  "{{  item.1.name | regex_replace('[()]')  + ' ' + item.1.basearch + ( ' ' if item.1.releasever|d(None)  else '')  +  item.1.releasever|d('') }}"
  when: item.0.content_view  not in content_view_list.stdout_lines
  loop: "{{satellite_content | subelements('repos') }}"
  tags:
    - configure_satellite
    - configure_satellite_content_view
### To update repo in content-view ##
- name: Update content-view repo
  command: >-
    hammer content-view add-repository
    --organization "{{org}}"
    --name "{{ item.0.content_view }}"
    --repository  "{{  item.1.name | regex_replace('[()]')  + ' ' + item.1.basearch + ( ' ' if item.1.releasever|d(None)  else '')  +  item.1.releasever|d('') }}"
  when:
    - item.0.content_view in content_view_list.stdout_lines
    - item.0.content_view_update | d(false) == True
  loop: "{{satellite_content | subelements('repos') }}"
  tags:
    - configure_satellite
    - configure_satellite_content_view
## Publish content view ###
- name: Publish content-view
  command: >-
      hammer content-view publish
      --organization "{{org}}"
      --name "{{ item.content_view }}"
      --async
  when: item.content_view  not in content_view_list.stdout_lines
  loop: "{{ satellite_content }}"
  tags:
    - configure_satellite
    - configure_satellite_content_view
### Republish content view
- name: Updating Publish content-view (new version)
  command: >-
      hammer content-view publish
      --organization "{{org}}"
      --name "{{ item.content_view }}"
      --async
  when:
    - item.content_view in content_view_list.stdout_lines
    - item.content_view_update | d(false) == True
  loop: "{{ satellite_content }}"
  tags:
    - configure_satellite
    - configure_satellite_content_view
ansible/roles/satellite-manage-lifecycle/README.adoc
New file
@@ -0,0 +1,127 @@
:role: satellite-manage-lifecycle
:author: GPTE Team
:tag1: configure_satellite
:tag2: configure_satellite_lifecycle
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
Role: {role}
============
This role creates a lifecycle environment path.
Requirements
------------
. Satellite must be installed and set up.
. Hammer cli config must be configured/updated with a privileged user and password to run the satellite cli.
Role Variables
--------------
|===
|satellite_version: "Digit" |Required |satellite version
|org: "String" |Required |Organization name
|org_label: "String" |Not-required | Organization label in string without space
|org_description: "String" |Not-required | Organization description
|lifecycle_environment_path: [List] |Required | Contains a nested list with dictionary keys and values
|name: "String" |Required |Name of environment
|label: "String" |Required |label of environment
|description: "String" |Required |Description for environment
|prior_env: "String" |Required |Name of the prior environment
|===
* Example variables
[source=text]
----
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
lifecycle_environment_path:
    - name: "Dev"
      label: "dev"
      description: "Development Environment"
      prior_env: "Library"
    - name: "QA"
      label: "qa"
      description: "Quality Environment"
      prior_env: "Dev"
    - name: "Prod"
      label: "prod"
      description: "Production Enviornment"
      prior_env: "QA"
----
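Each entry is created with hammer and chained to its prior environment via the --prior option (excerpt from the 6.4 task file):

[source=text]
----
- name: Create lifecycle environment path
  shell: >-
    hammer lifecycle-environment create
    --organization "{{ org }}"
    --name "{{ item.name }}"
    --label "{{ item.label }}"
    --description "{{ item.description }}"
    --prior "{{ item.prior_env }}"
  loop: "{{ lifecycle_environment_path }}"
----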
Tags
---
|===
|{tag1} |Consistent tag for all satellite config roles
|{tag2} | This tag is specific to this role only
|===
* Example tags
[source=text]
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_satellite
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags configure_satellite
----
Example Playbook
----------------
How to use your role (for instance, with variables passed in playbook).
[source=text]
----
[user@desktop ~]$ cat sample_vars.yml
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
lifecycle_environment_path:
    - name: "Dev"
      label: "dev"
      description: "Development Environment"
      prior_env: "Library"
    - name: "QA"
      label: "qa"
      description: "Quality Environment"
      prior_env: "Dev"
    - name: "Prod"
      label: "prod"
      description: "Production Enviornment"
      prior_env: "QA"
[user@desktop ~]$ cat playbook.yml
- hosts: satellite.example.com
  vars_files:
    - sample_vars.yml
  roles:
    - satellite-manage-lifecycle
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
------------------
To extend the role for another version, create a new file named version_{{satellite_version}}.yml and import it in main.yml.
For reference, see link:{main_file}[main.yml] and link:{version_file}[version_6.4.yml].
Author Information
------------------
{author}
ansible/roles/satellite-manage-lifecycle/tasks/main.yml
New file
@@ -0,0 +1,8 @@
---
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - configure_satellite
    - configure_satellite_lifecycle
ansible/roles/satellite-manage-lifecycle/tasks/version_6.4.yml
New file
@@ -0,0 +1,29 @@
---
# List lifecycle environments
- name: List lifecycle environment
  shell: >-
    hammer --output yaml lifecycle-environment list
    --organization "{{ org }}" |grep Name |awk '{print $2}'
  register: lifecycle_output
- name: Print lifecycle env list
  debug:
    var: lifecycle_output
# Create lifecycle environment path
- name: Create lifecycle environment path
  shell: >-
    hammer lifecycle-environment create
    --organization "{{ org }}"
    --name "{{ item.name }}"
    --label "{{ item.label }}"
    --description "{{ item.description }}"
    --prior "{{ item.prior_env }}"
  when:
    - item.name | upper not in lifecycle_output.stdout | upper
  loop: "{{ lifecycle_environment_path }}"
  tags:
    - configure_satellite
    - configure_satellite_lifecycle
ansible/roles/satellite-manage-liifecycle/tasks/main.yml
ansible/roles/satellite-manage-liifecycle/vars/main.yml
ansible/roles/satellite-manage-location/tasks/main.yml
New file
@@ -0,0 +1,19 @@
---
- name: location list
  shell: >-
    hammer --output cve location list |grep Name |awk '{$1 = ""; print substr($0, index($0,$2))}'
  register: location_list
- debug: var=location_list
- name: Create locations
  command: >-
    hammer location create --name {{ item }}
  when: 'item not in location_list.stdout_lines'
  loop: "{{ locations }}"
- name: Adding location to organization
  command: >-
    hammer location add-organization --name "{{item}}" --organization "{{org}}"
  loop: "{{ locations }}"
ansible/roles/satellite-manage-manifest/README.adoc
New file
@@ -0,0 +1,103 @@
:role: satellite-manage-manifest
:author: GPTE Team
:tag1: configure_satellite
:tag2: configure_satellite_manifest
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
Role: {role}
============
This role uploads a manifest to the satellite server.
Requirements
------------
Following are the requirements:
. Satellite must be installed and set up.
. Hammer cli config must be configured/updated with a privileged user and password to run the satellite cli.
Role Variables
--------------
* Following are the variables which need to be defined
|===
|satellite_version: "Digit" |Required |satellite version
|org: "String" |Required |Organization name
|org_label: "String" |Not-equired | Organization label in string without space
|org_description: "String" |Not-required | Organization description
|subscription_name: "String" |Required |Subscription name for manifest
|manifest_file: "String" |Required |Holds local path of manifest file
|===
* Example variables
[source=text]
----
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
subscription_name: "Employee SKU"
manifest_file: /home/user/manifest.zip
----
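After the upload, the subscription should appear in hammer's listing. A quick check, mirroring the command the role itself uses for idempotency (illustrative):

[source=text]
----
- name: Verify the manifest subscription is present
  shell: >-
    hammer --output cve subscription list
    --organization "{{ org }}" | grep "{{ subscription_name }}"
  changed_when: false
----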
Tags
---
|===
|{tag1} |Consistent tag for all satellite config roles
|{tag2} |This tag is specific to this role only
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_satellite,configure_satellite_manifest
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags configure_satellite,configure_satellite_manifest
----
Example Playbook
----------------
How to use your role (for instance, with variables passed in playbook).
[source=text]
----
[user@desktop ~]$ cat sample_vars.yml
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
subscription_name: "Employee SKU"
manifest_file: /home/user/manifest.zip
[user@desktop ~]$ cat playbook.yml
- hosts: satellite.example.com
  vars_files:
    - sample_vars.yml
  roles:
    - satellite-manage-manifest
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
------------------
To extend the role for another version, create a new file named version_{{satellite_version}}.yml and import it in main.yml.
For reference, see link:{main_file}[main.yml] and link:{version_file}[version_6.4.yml].
Author Information
------------------
{author}
ansible/roles/satellite-manage-manifest/tasks/main.yml
@@ -1,40 +1,8 @@
---
- name: fetching list of existing subscriptions
  command: hammer --output cve subscription list --organization "{{org}}"
  register: subscription_list
  ignore_errors: yes
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - configure_satellite
    - configure_satellite_manifest
# - debug: var=subscription_list
- name: Creating list fact of existing subscriptions
  set_fact:
    list_of_existing_subscriptions: "{{ list_of_existing_subscriptions | d([]) + [ item.split(':').1.lstrip(' ') ] }}"
  loop: "{{ subscription_list.stdout_lines }}"
  when: subscription_list is defined
  tags:
    - configure_satellite
    - configure_satellite_manifest
# - debug: var=list_of_existing_subscriptions
- name: Copying and uploading Manifest
  when:
    ( list_of_existing_subscriptions is not defined ) or
    ( subscription_name not in list_of_existing_subscriptions )
  tags:
    - configure_satellite
    - configure_satellite_manifest
  block:
    - name: Copy manifest
      copy:
        src: "{{ manifest_file }}"
        dest: /tmp
    - name: Uploading manifest
      command: >-
        hammer subscription upload
        --file /tmp/{{ manifest_file | basename }}
        --organization "{{org}}"
ansible/roles/satellite-manage-manifest/tasks/version_6.4.yml
New file
@@ -0,0 +1,38 @@
---
- name: fetching list of existing subscriptions
  shell: >-
    hammer --output cve subscription list
    --organization "{{org}}" |grep Name | awk -F'  +' '{print $2}'
  register: subscription_list
  ignore_errors: yes
  tags:
    - configure_satellite
    - configure_satellite_manifest
# - debug:
#       var: subscription_list
#     tags:
#     - configure_satellite
#     - configure_satellite_manifest
## Block to copy manifest and upload to satellite
- name: Block to copy and upload manifest
  when:
    ( subscription_list is not defined ) or
    ( subscription_name not in subscription_list.stdout_lines )
  tags:
    - configure_satellite
    - configure_satellite_manifest
  block:
    - name: Copy manifest
      copy:
        src: "{{ manifest_file }}"
        dest: /tmp
    - name: Uploading manifest
      command: >-
        hammer subscription upload
        --file /tmp/{{ manifest_file | basename }}
        --organization "{{org}}"
ansible/roles/satellite-manage-organization/README.adoc
New file
@@ -0,0 +1,92 @@
:role: satellite-manage-organization
:author: GPTE Team
:tag1: configure_satellite
:tag2: configure_satellite_organization
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
Role: {role}
============
This role creates an organization.
Requirements
------------
. Satellite must be installed and set up.
. Hammer cli config must be configured/updated with a privileged user and password to run the satellite cli.
Role Variables
--------------
|===
|satellite_version: "Digit" |Required |satellite version
|org: "String" |Required |Organization name
|org_label: "String" |Required | Organization label in string without space
|org_description: "String" |Required | Organization description
|===
* Example variables
[source=text]
----
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
----
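The role stays idempotent by listing existing organization labels before creating a new one (mirroring its own 6.4 check; illustrative):

[source=text]
----
- name: List existing organization labels
  shell: >-
    hammer --output cve
    organization list |grep Label |awk '{print $2}'
  register: organization_list
  changed_when: false
----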
Tags
---
|===
|{tag1} |Consistent tag for all satellite config roles
|{tag2} | This tag is specific to this role only
|===
* Example tags
[source=text]
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_satellite,configure_satellite_organization
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags configure_satellite,configure_satellite_organization
----
Example Playbook
----------------
How to use your role (for instance, with variables passed in playbook).
[source=text]
----
[user@desktop ~]$ cat playbook.yml
- hosts: satellite.example.com
  vars:
    satellite_version: 6.4
    org: "gpte"
    org_label: "gpte"
    org_description: "Global Partner Training and Enablement"
  roles:
    - satellite-manage-organization
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
------------------
To extend the role for another version, create a new file named version_{{satellite_version}}.yml and import it in main.yml.
For reference, see link:{main_file}[main.yml] and link:{version_file}[version_6.4.yml].
Author Information
------------------
{author}
ansible/roles/satellite-manage-organization/tasks/main.yml
New file
@@ -0,0 +1,9 @@
---
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - configure_satellite
    - configure_satellite_organization
ansible/roles/satellite-manage-organization/tasks/version_6.4.yml
New file
@@ -0,0 +1,24 @@
---
- name: List existing organization
  shell: >-
    hammer --output cve
    organization list |grep Label |awk '{print $2}'
  register: organization_list
  tags:
    - configure_satellite
    - configure_satellite_organization
# - debug: var=organization_list
- name: Creating organization {{org}} if not exist
  shell: >-
    hammer organization create
    --name "{{ org }}"
    --label "{{ org_label }}"
    --description "{{ org_description }}"
  when: 'org_label not in organization_list.stdout_lines'
  tags:
    - configure_satellite
    - configure_satellite_organization
ansible/roles/satellite-manage-subscription-and-sync/tasks/main.yml
File was deleted
ansible/roles/satellite-manage-subscription/README.adoc
New file
@@ -0,0 +1,141 @@
:role: satellite-manage-subscription
:author: GPTE Team
:tag1: configure_satellite
:tag2: configure_satellite_subscription
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
:script_file: files/subscription_script_version_6.4.sh
Role: {role}
============
This role enables subscribed repositories.
Requirements
------------
Following are the requirements:
. Satellite must be installed and set up.
. Hammer cli config must be configured/updated with a privileged user and password to run the satellite cli.
. Subscription should already exist in the satellite server.
Role Variables
--------------
* Following are the variables which need to be defined
|===
|satellite_version: "Digit" |Required |satellite version
|org: "String" |Required |Organization name
|org_label: "String" |Notrequired | Organization label in string without space
|org_description: "String" |Not-required | Organization description
|satellite_content: {Dictionary} |Required | Main dictionary variable
|repos: [list] | Required | List of repository name
|product: "String" |Required | Product name of repository
|name: "String" |Required | Repository Name
|basearch: "String" |Required | Repository Base Arch
|releasever: "String" |Whereever Applicable | Repository Release version
|===
* Example variables
[source=text]
----
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
satellite_content:
  - name:             "Ansible server"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Satellite Maintenance 6 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
  - name:             "Three Tier App"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
----
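The role passes the repository's full label to the script as its second argument, composed from name, basearch and releasever with parentheses stripped; for the first repo above this yields "Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server". A debug-only sketch of the composition:

[source=text]
----
- name: Show composed repository labels
  debug:
    msg: "{{ item.1.name | regex_replace('[()]') + ' ' + item.1.basearch + (' ' if item.1.releasever|d(None) else '') + item.1.releasever|d('') }}"
  loop: "{{ satellite_content | subelements('repos') }}"
----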
Tags
---
|===
|{tag1} |Consistent tag for all satellite config roles
|{tag2} |This tag is specific to this role only
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_satellite,configure_satellite_subscription
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags configure_satellite,configure_satellite_subscription
----
Example Playbook
----------------
How to use your role (for instance, with variables passed in playbook).
[source=text]
----
[user@desktop ~]$ cat sample_vars.yml
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
satellite_content:
  - name:             "Ansible server"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Satellite Maintenance 6 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
  - name:             "Three Tier App"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
[user@desktop ~]$ cat playbook.yml
- hosts: satellite.example.com
  vars_files:
    - sample_vars.yml
  roles:
    - satellite-manage-subscription
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
------------------
To extend the role for another version, create a new file named version_{{satellite_version}}.yml and import it in main.yml.
For reference, see link:{main_file}[main.yml], link:{version_file}[version_6.4.yml] and the script
link:{script_file}[subscription_script_version_6.4.sh].
Author Information
------------------
{author}
ansible/roles/satellite-manage-subscription/files/subscription_script_version_6.4.sh
New file
@@ -0,0 +1,32 @@
#!/bin/bash
org="${1}"
repo_name="${2}"
name="${3}"
product="${4}"
basearch="${5}"
releasever="${6}"
repo_status=$(hammer repository info --organization "${org}" --name "${repo_name}" --product "${product}" >&/dev/null; echo $?)
if [ ${repo_status} -eq 0 ]; then
    echo "Repository already exist"
else
    if [ ${#releasever} -eq 0 ]; then
        hammer repository-set enable --organization "${org}" --name "${name}" --product "${product}" --basearch "${basearch}"
        if [ $? -eq 0 ]; then
            echo "Repository enabled"
        else
            exit 1
        fi
    else
        hammer repository-set enable --organization "${org}" --name "${name}" --product "${product}" --basearch "${basearch}" --releasever "${releasever}"
        if [ $? -eq 0 ]; then
            echo "Repository enabled"
        else
            exit 1
        fi
    fi
fi
ansible/roles/satellite-manage-subscription/tasks/main.yml
New file
@@ -0,0 +1,8 @@
---
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - configure_satellite
    - configure_satellite_subscription
ansible/roles/satellite-manage-subscription/tasks/version_6.4.yml
New file
@@ -0,0 +1,25 @@
---
# - name: Print inputs to script
#   debug:
#     msg: "{% if item.1.releasever is defined %}{{ item.1.releasever }}{% endif %}"
#   loop: "{{ satellite_content | subelements('repos') }}"
- name: Setting up satellite repository
  script: >-
    subscription_script_version_6.4.sh
    "{{ org }}"
    "{{ item.1.name | regex_replace('[()]')  + ' ' + item.1.basearch + ( ' ' if item.1.releasever|d(None)  else '')  +  item.1.releasever|d('') }}"
    "{{ item.1.name }}"
    "{{ item.1.product }}"
    "{{ item.1.basearch }}"
    "{% if item.1.releasever is defined %}{{ item.1.releasever }}{% endif %}"
  register: output
  changed_when: '"already exist" not in output.stdout'
  loop: "{{ satellite_content | subelements('repos') }}"
  tags:
    - configure_satellite
    - configure_satellite_subscription
ansible/roles/satellite-manage-sync/README.adoc
New file
@@ -0,0 +1,149 @@
:role: satellite-manage-sync
:author: GPTE Team
:tag1: configure_satellite
:tag2: configure_satellite_sync
:main_file: tasks/main.yml
:version_file: tasks/version_6.4.yml
:script_file: files/sync_script_version_6.4.sh
Role: {role}
============
This role synchronizes repositories.
Requirements
------------
Following are the requirements:
. Satellite must be installed and set up.
. Hammer CLI must be configured with a privileged user and password so the role can run Satellite CLI commands (see the sketch after this list).
. The subscription must exist on the Satellite server.
. The repositories must be enabled.
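As a minimal, illustrative sketch (not part of this role), hammer credentials are typically kept in a config file such as ~/.hammer/cli.modules.d/foreman.yml; the host, username, and password values below are placeholders:

[source=text]
----
:foreman:
  :host: 'https://satellite.example.com'
  :username: 'admin'
  :password: 'changeme'
----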
Role Variables
--------------
* Following are the variables which need to be defined
|===
|satellite_version: "Digit" |Required |Satellite version
|org: "String" |Required |Organization name
|org_label: "String" |Not required |Organization label, a string without spaces
|org_description: "String" |Not required |Organization description
|satellite_content: {Dictionary} |Required |Main dictionary variable
|repos: [list] |Required |List of repositories
|product: "String" |Required |Product name of the repository
|name: "String" |Required |Repository name
|basearch: "String" |Required |Repository base architecture
|releasever: "String" |Where applicable |Repository release version
|===
* Example variables
[source=text]
----
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
satellite_content:
  - name:             "Ansible server"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Satellite Maintenance 6 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
  - name:             "Three Tier App"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
----
Role Script
-----------
A bash script performs a few checks and then runs the actual sync command to synchronize the repository.
* The script is located in the files directory of the role and is named sync_script_version_6.4.sh. The repository name passed to the script is derived from the role variables, as shown below.
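For example, for the first repository in the sample variables above, the task expression strips the parentheses from the repository name and appends basearch and releasever, so the script receives (illustrative value):

[source=text]
----
Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server
----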
Tags
---
|===
|{tag1} |Consistent tag for all satellite config roles
|{tag2} |This tag is specific to this role only
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_satellite,configure_satellite_sync
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags configure_satellite,configure_satellite_sync
----
Example Playbook
----------------
An example of how to use the role, with variables passed in via the playbook:
[source=text]
----
[user@desktop ~]$ cat sample_vars.yml
satellite_version: 6.4
org: "gpte"
org_label: "gpte"
org_description: "Global Partner Training and Enablement"
satellite_content:
  - name:             "Ansible server"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
      - name: 'Red Hat Satellite Maintenance 6 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
  - name:             "Three Tier App"
    repos:
      - name: 'Red Hat Enterprise Linux 7 Server (RPMs)'
        product: 'Red Hat Enterprise Linux Server'
        basearch: 'x86_64'
        releasever:  '7Server'
[user@desktop ~]$ cat playbook.yml
- hosts: satellite.example.com
  vars_files:
    - sample_vars.yml
  roles:
    - satellite-manage-sync
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
------------------
To extend the role to work with other versions, create a new file named version_{{satellite_version}}.yml and import the newly created file in main.yml. Also create a new sync script for that version in the files directory.
For reference, look at link:{main_file}[main.yml], link:{version_file}[version_6.4.yml], and link:{script_file}[sync_script_version_6.4.sh]. A minimal sketch of such an import follows.
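For example, wiring a hypothetical version_6.5.yml (calling a matching sync_script_version_6.5.sh) into main.yml could look like this; both file names and the version value are illustrative:

[source=text]
----
## Import for version satellite 6.5 ##
- import_tasks: version_6.5.yml
  when: satellite_version == 6.5
  tags:
    - configure_satellite
    - configure_satellite_sync
----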
Author Information
------------------
{author}
ansible/roles/satellite-manage-sync/files/sync_script_version_6.4.sh
New file
@@ -0,0 +1,34 @@
#!/bin/bash
# Synchronize a repository if it exists and is not already synced
function repo_sync() {
  # Check whether the repository exists
  hammer repository info --organization "${1}" --name "${2}" --product "${3}" >&/dev/null
  if [ $? -ne 0 ]
    then
      echo "Repository or product does not exist"
      return
  fi
  repo_status=$(hammer repository info --organization "${1}" --name "${2}" --product "${3}" | grep Status: | awk -F':' '{print $2}' | sed 's/^ //')
  echo "${repo_status}"
  if [ "${repo_status}" == 'Not Synced' ]
    then
      # Start the synchronization asynchronously and print the task id
      sync_task_id=$(hammer repository synchronize --organization "${1}" --name "${2}" --product "${3}" --async)
      echo "${sync_task_id}"
  else
      echo "Already synced or sync in progress"
  fi
}
# Expect exactly three arguments: organization, repository name, product
if [ "$#" -eq 3 ]
  then
    repo_sync "${1}" "${2}" "${3}"
elif [ "$#" -gt 3 ]
  then
    echo -e "More than three arguments provided. Three arguments are required:\n Organization Name Product" >&2
else
    echo -e "Argument missing. Three arguments are required:\n Organization Name Product" >&2
fi
ansible/roles/satellite-manage-sync/tasks/main.yml
New file
@@ -0,0 +1,8 @@
---
## Import for version satellite 6.4 ##
- import_tasks: version_6.4.yml
  when: satellite_version == 6.4
  tags:
    - configure_satellite
    - configure_satellite_sync
ansible/roles/satellite-manage-sync/tasks/version_6.4.yml
New file
@@ -0,0 +1,13 @@
---
## Sync repositories
- name: Sync repository
  script: >-
    sync_script_version_6.4.sh
    "{{ org }}"
    "{{ item.1.name | regex_replace('[()]') + ' ' + item.1.basearch + (' ' if item.1.releasever | d(None) else '') + item.1.releasever | d('') }}"
    "{{ item.1.product }}"
  register: output
  # The script prints these exact lines, so match them against stdout_lines
  changed_when: '"Already synced or sync in progress" not in output.stdout_lines'
  failed_when: '"Repository or product does not exist" in output.stdout_lines'
  loop: "{{ satellite_content | subelements('repos') }}"
  tags:
    - configure_satellite
    - configure_satellite_sync
ansible/roles/satellite-public-hostname/README.adoc
New file
@@ -0,0 +1,67 @@
:role: satellite-public-hostname
:author: GPTE Team
:tag1: public_hostname
:main_file: tasks/main.yml
Role: {role}
============
This role resets the server's internal hostname to its public hostname.
Role Variables
--------------
* Following are the variables which need to be defined
|===
|inventory_hostname |Ansible inventory hostname (the internal name ending in .internal)
|subdomain_base_suffix |Public DNS suffix that replaces the .internal suffix
|===
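* For example, with an illustrative subdomain_base_suffix of .example.opentlc.com, the role derives:

[source=text]
----
inventory_hostname:  satellite.internal
public hostname:     satellite.example.opentlc.com
----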
Tags
---
|===
|{tag1} |Consistent tag for all satellite config roles
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags public_hostname
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags public_hostname
----
Example Playbook
----------------
An example of how to use the role:
[source=text]
----
[user@desktop ~]$ cat playbook.yml
- hosts: satellite.internal
  roles:
    - satellite-public-hostname
[user@desktop ~]$ ansible-playbook playbook.yml
----
Tips to update Role
------------------
For reference, look at link:{main_file}[main.yml].
Author Information
------------------
{author}
ansible/roles/satellite-public-hostname/tasks/main.yml
New file
@@ -0,0 +1,25 @@
---
- name: Setting up facts for public hostname
  set_fact:
    # Anchor the dot so only the literal ".internal" suffix is replaced
    publicname: "{{ inventory_hostname | regex_replace('\\.internal', subdomain_base_suffix) }}"
  tags:
    - public_hostname
- name: Print public name
  debug: var=publicname
  tags:
    - public_hostname
- name: Reset internal hostname to public hostname
  hostname:
    name: "{{ publicname }}"
  tags:
    - public_hostname
- name: Add public hostname entry to /etc/hosts
  lineinfile:
    dest: /etc/hosts
    state: present
    insertafter: EOF
    line: "{{ ansible_default_ipv4['address'] }}  {{ publicname }}"
  tags:
    - public_hostname
tests/jenkins/ocp-workload-iot-demo.groovy
File was deleted
tests/jenkins/ocp-workload-iot-demo.groovy
New file
@@ -0,0 +1,251 @@
// -------------- Configuration --------------
// CloudForms
def opentlc_creds = 'b93d2da4-c2b7-45b5-bf3b-ee2c08c6368e'
def opentlc_admin_creds = '73b84287-8feb-478a-b1f2-345fd0a1af47'
def cf_uri = 'https://rhpds.redhat.com'
def cf_group = 'rhpds-access-cicd'
// IMAP
def imap_creds = 'd8762f05-ca66-4364-adf2-bc3ce1dca16c'
def imap_server = 'imap.gmail.com'
// Notifications
def notification_email = 'gucore@redhat.com'
def rocketchat_hook = '5d28935e-f7ca-4b11-8b8e-d7a7161a013a'
// state variables
def guid=''
def openshift_location = ''
// Catalog items
def choices = [
    'DevOps Shared Cluster Development / DEV - IOT Enterprise 4.0 Demo',
    'IoT Demos / IoT Industry 4.0 Demo',
].join("\n")
pipeline {
    agent any
    parameters {
        booleanParam(
            defaultValue: false,
            description: 'wait for user input before deleting the environment',
                name: 'confirm_before_delete'
        )
        choice(
            choices: choices,
            description: 'Catalog item',
            name: 'catalog_item',
        )
    }
    options {
        buildDiscarder(logRotator(daysToKeepStr: '30'))
    }
    stages {
        stage('order from CF') {
            environment {
                uri = "${cf_uri}"
                credentials = credentials("${opentlc_creds}")
                DEBUG = 'true'
            }
            /* This step uses the order_svc_guid.sh script to order
             a service from CloudForms */
            steps {
                git 'https://github.com/redhat-gpte-devopsautomation/cloudforms-oob'
                script {
                    def catalog = params.catalog_item.split(' / ')[0].trim()
                    def item = params.catalog_item.split(' / ')[1].trim()
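                    // Order parameters sent to CloudForms as comma-separated key=value pairs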
                    def cfparams = [
                        'check=t',
                        'quotacheck=t',
                        "region=global_gpte",
                        'expiration=7',
                        'runtime=8',
                        'users=2',
                        'nodes=2',
                    ].join(',').trim()
                    echo "'${catalog}' '${item}'"
                    guid = sh(
                        returnStdout: true,
                        script: """
                          ./opentlc/order_svc_guid.sh \
                          -c '${catalog}' \
                          -i '${item}' \
                          -G '${cf_group}' \
                          -d '${cfparams}'
                        """
                    ).trim()
                    echo "GUID is '${guid}'"
                }
            }
        }
        stage('Wait for last email and parse OpenShift location') {
            environment {
                credentials=credentials("${imap_creds}")
            }
            steps {
                git url: 'https://github.com/redhat-cop/agnosticd',
                    branch: 'development'
                script {
                    email = sh(
                        returnStdout: true,
                        script: """
                          ./tests/jenkins/downstream/poll_email.py \
                          --server '${imap_server}' \
                          --guid ${guid} \
                          --timeout 30 \
                          --filter 'has completed'
                        """
                    ).trim()
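                    // Pull the master console URL out of the email body (first capture group)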
                    def m = email =~ /You can find the master's console here: ([^ <]+)/
                    openshift_location = m[0][1]
                }
            }
        }
        stage('Test OpenShift access') {
            environment {
                credentials = credentials("${opentlc_creds}")
            }
            steps {
                sh "./tests/jenkins/downstream/openshift_client.sh '${openshift_location}'"
            }
        }
        stage('Test demo') {
            options {
                timeout(time: 20, unit: 'MINUTES')
            }
            environment {
                credentials = credentials("${opentlc_creds}")
            }
            steps {
                sh "./tests/jenkins/downstream/openshift_test_demo.sh '${openshift_location}' '${guid}'"
            }
        }
        stage('Confirm before retiring') {
            when {
                expression {
                    return params.confirm_before_delete
                }
            }
            steps {
                input "Continue ?"
            }
        }
        stage('Retire service from CF') {
            environment {
                uri = "${cf_uri}"
                credentials = credentials("${opentlc_creds}")
                admin_credentials = credentials("${opentlc_admin_creds}")
                DEBUG = 'true'
            }
            /* This step uses the delete_svc_guid.sh script to retire
             the service from CloudForms */
            steps {
                git 'https://github.com/redhat-gpte-devopsautomation/cloudforms-oob'
                sh "./opentlc/delete_svc_guid.sh '${guid}'"
            }
            post {
                failure {
                    withCredentials([usernameColonPassword(credentialsId: imap_creds, variable: 'credentials')]) {
                        mail(
                            subject: "${env.JOB_NAME} (${env.BUILD_NUMBER}) failed retiring for GUID=${guid}",
                            body: "It appears that ${env.BUILD_URL} is failing, somebody should do something about that.\nMake sure GUID ${guid} is destroyed.",
                            to: "${notification_email}",
                            replyTo: "${notification_email}",
                            from: credentials.split(':')[0]
                        )
                    }
                    withCredentials([string(credentialsId: rocketchat_hook, variable: 'HOOK_URL')]) {
                        sh(
                            """
                            curl -H 'Content-Type: application/json' \
                            -X POST '${HOOK_URL}' \
                            -d '{\"username\": \"jenkins\", \"icon_url\": \"https://dev-sfo01.opentlc.com/static/81c91982/images/headshot.png\", \"text\": \"@here :rage: ${env.JOB_NAME} (${env.BUILD_NUMBER}) failed retiring ${guid}.\"}'\
                            """.trim()
                        )
                    }
                }
            }
        }
        stage('Wait for deletion email') {
            steps {
                git url: 'https://github.com/redhat-cop/agnosticd',
                    branch: 'development'
                withCredentials([usernameColonPassword(credentialsId: imap_creds, variable: 'credentials')]) {
                    sh """./tests/jenkins/downstream/poll_email.py \
                        --guid ${guid} \
                        --timeout 20 \
                        --server '${imap_server}' \
                        --filter 'has been deleted'"""
                }
            }
        }
        stage('Ensure projects are deleted') {
            steps {
                withCredentials([usernameColonPassword(credentialsId: opentlc_creds, variable: 'credentials')]) {
                    sh "./tests/jenkins/downstream/openshift_ensure_projects_deleted.sh '${openshift_location}' '${guid}'"
                }
            }
        }
    }
    post {
        failure {
            git 'https://github.com/redhat-gpte-devopsautomation/cloudforms-oob'
            /* retire in case of failure */
            withCredentials(
                [
                    usernameColonPassword(credentialsId: opentlc_creds, variable: 'credentials'),
                    usernameColonPassword(credentialsId: opentlc_admin_creds, variable: 'admin_credentials')
                ]
            ) {
                sh """
                export uri="${cf_uri}"
                export DEBUG=true
                ./opentlc/delete_svc_guid.sh '${guid}'
                """
            }
            withCredentials([usernameColonPassword(credentialsId: imap_creds, variable: 'credentials')]) {
                mail(
                    subject: "${env.JOB_NAME} (${env.BUILD_NUMBER}) failed GUID=${guid}",
                    body: "It appears that ${env.BUILD_URL} is failing, somebody should do something about that.",
                    to: "${notification_email}",
                    replyTo: "${notification_email}",
                    from: credentials.split(':')[0]
                )
            }
            withCredentials([string(credentialsId: rocketchat_hook, variable: 'HOOK_URL')]) {
                sh(
                    """
                      curl -H 'Content-Type: application/json' \
                      -X POST '${HOOK_URL}' \
                      -d '{\"username\": \"jenkins\", \"icon_url\": \"https://dev-sfo01.opentlc.com/static/81c91982/images/headshot.png\", \"text\": \"@here :rage: ${env.JOB_NAME} (${env.BUILD_NUMBER}) failed GUID=${guid}. It appears that ${env.BUILD_URL}/console is failing, somebody should do something about that.\"}'\
                    """.trim()
                )
            }
        }
        fixed {
            withCredentials([string(credentialsId: rocketchat_hook, variable: 'HOOK_URL')]) {
                sh(
                    """
                      curl -H 'Content-Type: application/json' \
                      -X POST '${HOOK_URL}' \
                      -d '{\"username\": \"jenkins\", \"icon_url\": \"https://dev-sfo01.opentlc.com/static/81c91982/images/headshot.png\", \"text\": \"@here :smile: ${env.JOB_NAME} is now FIXED, see ${env.BUILD_URL}/console\"}'\
                    """.trim()
                )
            }
        }
    }
}