Mitesh The Mouse
2020-02-12 e8b2a9153ac700e2e19c3aa0c9fec68d43d761be
New config: multi-cloud-capsule (#1139)

* multiple cloud support added

* multi cloud support config:satellite-vm

* changes made for PR

* Readme file updated config:satellite-vm

* fixed readme typos config:satellite-vm

* fixed readme typos config:satellite-vm

* Enhancement role:satellite-capsule-installation

* Issues fixed for enhancement role:satellite-capsule-installation

* enhancement role:satellite-capsule-configuration

* enhancement role:satellite-capsule-configuration

* update config:satellite-vm

* first commit for new config:multi-cloud-capsule
23 files added, 1783 lines changed
ansible/configs/multi-cloud-capsule/README.adoc 225
ansible/configs/multi-cloud-capsule/default_vars.yml 38
ansible/configs/multi-cloud-capsule/default_vars_ec2.yml 120
ansible/configs/multi-cloud-capsule/default_vars_osp.yml 133
ansible/configs/multi-cloud-capsule/destroy_env.yml 18
ansible/configs/multi-cloud-capsule/files/cloud_providers/ec2_cloud_template.j2 367
ansible/configs/multi-cloud-capsule/files/cloud_providers/osp_cloud_template_master.j2 225
ansible/configs/multi-cloud-capsule/files/hosts_template.j2 24
ansible/configs/multi-cloud-capsule/files/repos_template.j2 43
ansible/configs/multi-cloud-capsule/infra.yml 3
ansible/configs/multi-cloud-capsule/infra_configs/ec2_infrastructure_deployment.yml 126
ansible/configs/multi-cloud-capsule/infra_configs/infra-common-ssh-config-generate.yml 54
ansible/configs/multi-cloud-capsule/infra_configs/infra-osp-create-inventory.yml 64
ansible/configs/multi-cloud-capsule/infra_configs/osp_infrastructure_deployment.yml 109
ansible/configs/multi-cloud-capsule/post_infra.yml 24
ansible/configs/multi-cloud-capsule/post_software.yml 36
ansible/configs/multi-cloud-capsule/pre_infra.yml 12
ansible/configs/multi-cloud-capsule/pre_software.yml 46
ansible/configs/multi-cloud-capsule/sample_vars_ec2.yml 23
ansible/configs/multi-cloud-capsule/sample_vars_osp.yml 23
ansible/configs/multi-cloud-capsule/software.yml 28
ansible/configs/multi-cloud-capsule/start.yml 21
ansible/configs/multi-cloud-capsule/stop.yml 21
ansible/configs/multi-cloud-capsule/README.adoc
New file
@@ -0,0 +1,225 @@
:config: multi-cloud-capsule
:author: GPTE Team
:tag1: install_capsule
:tag2: configure_capsule
Config: {config}
===============
With {config}, we can deploy a capsule server on OpenStack and AWS cloud providers.
Requirements
------------
Following are the requirements:

. AWS or OpenStack credentials.
. Satellite must be installed and set up.
. Satellite should have all capsule repositories in its activation key (see the hammer sketch below).
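For example, you can inspect the activation key on the Satellite with hammer (an illustrative check; the key and organization names are assumptions matching the samples in this config):

----
[root@satellite ~]# hammer activation-key info --name capsule_key --organization gpte
----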
Config Variables
----------------
* Cloud-specific settings variables.
|===
|*Variable* | *State* |*Description*
| env_type: multi-cloud-capsule |Required | Name of the config
| output_dir: /tmp/workdir |Required | Writable working scratch directory
| email: capsule-vm@example.com |Required | User info for notifications
| guid: defaultguid |Required | Unique identifier
| cloud_provider: ec2 |Required | Which AgnosticD Cloud Provider to use
| aws_region: "String" |Required | AWS region
|===
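These cloud variables can also be overridden on the command line; an illustrative fragment (a real run also needs the sample vars files, as shown in the full examples below):

----
[user@desktop ~]$ ansible-playbook main.yml -e cloud_provider=ec2 -e aws_region=ap-southeast-2
----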
* Satellite-specific settings variables.
|===
|*Variable* | *State* |*Description*
|install_capsule: Boolean |Required | To enable the installation roles
|configure_capsule: Boolean |Required | To enable the configuration roles
|satellite_version: "Digit" |Required | Satellite version
|org: "String" |Required | Organization name
|org_label: "String" |Required | Organization label as a string without spaces
|org_description: "String" |Required | Organization description
|lifecycle_environment_path: [list] |Required | Nested list of lifecycle environment paths
|satellite_content: [list] |Required | Main list variable
|subscription_name: "String" |Required | Subscription name, mainly required for the manifest role
| manifest_file: "/path/to/manifest.zip" |Required | Path of the downloaded Satellite manifest
|===
[NOTE]
For more about these variables, read the README.adoc of the individual roles.
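As an illustration, the satellite-side content variables have roughly this shape (placeholder values adapted from the satellite config samples):

[source,yaml]
----
org: gpte
org_label: gpte
satellite_content:
  - name:             "Capsule Server"
    activation_key:   "capsule_key"
    subscriptions:
      - "Employee SKU"
    life_cycle:       "Library"
    content_view:     "Capsule Content"
    repos:
      - name: 'Red Hat Satellite Capsule 6.4 (for RHEL 7 Server) (RPMs)'
        product: 'Red Hat Satellite Capsule'
        basearch: 'x86_64'
----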
* Example variables files
. Sample of sample_vars_ec2.yml
[source,text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ cat ./configs/multi-cloud-capsule/sample_vars_ec2.yml
env_type: multi-cloud-capsule
output_dir: /tmp/workdir              # Writable working scratch directory
email: capsule_vm@example.com
guid: capaws01
cloud_provider: ec2
aws_region: ap-southeast-2
satellite_version: 6.4
install_capsule: true
configure_capsule: true
satellite_public_fqdn: satellite1.cap01.example.opentlc.com
capsule_activationkey: capsule_key
capsule_org: gpte
consumer_key: "cuBfSo9NhB338aSwvRC5VKgZt5Sqhez5"
consumer_secret: "mpYncnDHkRq9XrHDoereQ3Hwejyyed6c"
capsule_cert_path: /tmp/capsule-cert.tar
----
For reference, see link:sample_vars_ec2.yml[].
. Sample of ec2_secrets.yml
[source,text]
----
[user@desktop ~]$ cat ~/ec2_secrets.yml
aws_access_key_id: xxxxxxxxxxxxxxxx
aws_secret_access_key: xxxxxxxxxxxxxxxxxx
own_repo_path: http://localrepopath/to/repo
openstack_pem: ldZYgpVcjl0YmZNVytSb2VGenVrTG80SzlEU2xtUTROMHUzR1BZdzFoTEg3R2hXM
====Omitted=====
25ic0NTTnVDblp4bVE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
openstack_pub: XZXYgpVcjl0YmZNVytSb2VGenVrTG80SzlEU2xtUTROMHUzR1BZdzFoTEg3R2hXM
====Omitted=====
53ic0NTTnVDblp4bVE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
----
Roles
-----
* List of satellite and capsule roles
|===
|*Role*| *Link* | *Description*
|satellite-public-hostname | link:../../roles/satellite-public-hostname[satellite-public-hostname] | Set public hostname
|satellite-capsule-installation |link:../../roles/satellite-capsule-installation[satellite-capsule-installation]  | Install capsule packages
|satellite-capsule-configuration | link:../../roles/satellite-capsule-configuration[satellite-capsule-configuration] | Set up the capsule server
|===
Tags
---
|===
|*Tag* | *Description*
|{tag1} |Consistent tag for all capsule installation roles
|{tag2} |Consistent tag for all capsule configuration roles
|===
* Example tags
----
## Tagged jobs
ansible-playbook playbook.yml --tags configure_capsule
## Skip tagged jobs
ansible-playbook playbook.yml --skip-tags install_capsule
----
Example to run config
---------------------
How to use the config (for example, with variables passed to the playbook).
[source,text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  main.yml \
  -e @./configs/multi-cloud-capsule/sample_vars_ec2.yml \
  -e @~/ec2_secrets.yml \
  -e guid=defaultguid  \
  -e satellite_admin=admin \
  -e 'satellite_admin_password=changeme' \
  -e manifest_file=/path/to/manifest_satellite_6.4.zip
----
Example to stop environment
---------------------------
[source,text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  ./configs/multi-cloud-capsule/stop.yml \
  -e @./configs/multi-cloud-capsule/sample_vars_ec2.yml \
  -e @~/ec2_secrets.yml \
  -e guid=defaultguid
----
Example to start environment
---------------------------
[source,text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  ./configs/multi-cloud-capsule/start.yml \
  -e @./configs/multi-cloud-capsule/sample_vars_ec2.yml \
  -e @~/ec2_secrets.yml \
  -e guid=defaultguid
----
Example to destroy environment
------------------------------
[source,text]
----
[user@desktop ~]$ cd agnosticd/ansible
[user@desktop ~]$ ansible-playbook  ./configs/multi-cloud-capsule/destroy_env.yml \
  -e @./configs/multi-cloud-capsule/sample_vars_ec2.yml \
  -e @~/ec2_secrets.yml \
  -e guid=defaultguid
----
Author Information
------------------
{author}
ansible/configs/multi-cloud-capsule/default_vars.yml
New file
@@ -0,0 +1,38 @@
---
env_type: multi-cloud-capsule
output_dir: /tmp/workdir                # Writable working scratch directory
email: "{{env_type}}@example.com"
guid: defaultguid
deploy_local_ssh_config_location: "{{output_dir}}/"
key_name: ocpkey                        # Keyname must exist in AWS
use_own_key: true
env_authorized_key: "{{guid}}key"
set_env_authorized_key: true
default_key_name: ~/.ssh/{{key_name}}.pem
install_bastion: true
install_common: true
install_ipa_client: false
tower_run: false
update_packages: false
update_all_packages: false
install_capsule: true                   # Gates the capsule installation role in software.yml
configure_capsule: false                # Gates the capsule configuration role in software.yml
project_tag: "{{ env_type }}-{{ guid }}"
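# Repositories enabled on the capsule host; these IDs are rendered into a repo file
# by files/repos_template.j2.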
capsule_repos:
  - rhel-7-server-rpms
  - rhel-server-rhscl-7-rpms
  - rhel-7-server-satellite-maintenance-6-rpms
  - rhel-7-server-ansible-2.6-rpms
  - rhel-7-server-satellite-capsule-6.4-rpms
  - rhel-7-server-satellite-tools-6.4-rpms
...
ansible/configs/multi-cloud-capsule/default_vars_ec2.yml
New file
@@ -0,0 +1,120 @@
################################################################################
### Environment Settings for aws
################################################################################
## Environment Sizing
cloud_provider: ec2                     # Which AgnosticD Cloud Provider to use
HostedZoneId: Z3IHLWJZOU9SRT
aws_region: ap-southeast-2
capsule_instance_count: 1
capsule_instance_type: "m5.2xlarge"
security_groups:
  - name: CapsuleSG
    rules:
      - name: CapSSHPort
        description: "SSH Public"
        from_port: 22
        to_port: 22
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapbootpsPorts
        description: "bootps Public"
        from_port: 67
        to_port: 67
        protocol: udp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CaptftpPorts
        description: "tftp Public"
        from_port: 69
        to_port: 69
        protocol: udp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapHTTPPorts            # rule names must be unique; they become CloudFormation resource IDs
        description: "HTTP Public"
        from_port: 80
        to_port: 80
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapHTTPSPorts
        description: "HTTPS Public"
        from_port: 443
        to_port: 443
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapCommplexPorts
        description: "Commplex Public"
        from_port: 5000
        to_port: 5000
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapCoPorts
        description: "Co Public"
        from_port: 5647
        to_port: 5647
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapiRDMIPorts
        description: "iRDMIPublic"
        from_port: 8000
        to_port: 8000
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapRDMIPorts
        description: "RDMIPublic"
        from_port: 8140
        to_port: 8140
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CappcsyncPorts
        description: "pcsync Public"
        from_port: 8443
        to_port: 8443
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapwebsbPorts
        description: "websb Public"
        from_port: 9090
        to_port: 9090
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
# Environment Instances
instances:
  - name: "capsule"
    count: "{{capsule_instance_count}}"
    security_groups:
      - CapsuleSG
    public_dns: true
    dns_loadbalancer: false
    flavor:
      ec2: "{{capsule_instance_type}}"
    tags:
      - key: "AnsibleGroup"
        value: "capsules"
      - key: "ostype"
        value: "linux"
      - key: "instance_filter"
        value: "{{ env_type }}-{{ email }}"
# DNS settings for environment
subdomain_base_short: "{{ guid }}"
subdomain_base_suffix: ".example.opentlc.com"
subdomain_base: "{{subdomain_base_short}}{{subdomain_base_suffix}}"
zone_internal_dns: "{{guid}}.internal."
chomped_zone_internal_dns: "{{guid}}.internal"
ansible/configs/multi-cloud-capsule/default_vars_osp.yml
New file
@@ -0,0 +1,133 @@
################################################################################
### OSP Environment variables
################################################################################
cloud_provider: osp
install_student_user: false
ansible_user: cloud-user
remote_user: cloud-user
osp_cluster_dns_zone: red.osp.opentlc.com
osp_cluster_dns_server: ddns01.opentlc.com
use_dynamic_dns: true
osp_project_create: true
student_name: student
admin_user: opentlc-mgr
capsule_instance_type: 8c32g100d
capsule_instance_image: rhel-server-7.7-update-2
capsule_instance_count: 1
security_groups:
  - name: CapsuleSG
    rules:
      - name: CapSSHPort
        description: "SSH Public"
        from_port: 22
        to_port: 22
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapbootpsPorts
        description: "bootps Public"
        from_port: 67
        to_port: 67
        protocol: udp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CaptftpPorts
        description: "tftp Public"
        from_port: 69
        to_port: 69
        protocol: udp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapHTTPPorts            # rule names must be unique; they become Heat resource IDs
        description: "HTTP Public"
        from_port: 80
        to_port: 80
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapHTTPSPorts
        description: "HTTPS Public"
        from_port: 443
        to_port: 443
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapCommplexPorts
        description: "Commplex Public"
        from_port: 5000
        to_port: 5000
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapCoPorts
        description: "Co Public"
        from_port: 5647
        to_port: 5647
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapiRDMIPorts
        description: "iRDMIPublic"
        from_port: 8000
        to_port: 8000
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapRDMIPorts
        description: "RDMIPublic"
        from_port: 8140
        to_port: 8140
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CappcsyncPorts
        description: "pcsync Public"
        from_port: 8443
        to_port: 8443
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
      - name: CapwebsbPorts
        description: "websb Public"
        from_port: 9090
        to_port: 9090
        protocol: tcp
        cidr: "0.0.0.0/0"
        rule_type: Ingress
# Environment Instances
instances:
  - name: "capsule"
    count: "{{capsule_instance_count}}"
    public_dns: true
    floating_ip: true
    image_id: "{{ capsule_instance_image }}"
    flavor:
      ec2: "{{capsule_instance_type}}"
      osp: "{{capsule_instance_type}}"
      azure: Standard_A2_V2
    security_groups:
      - CapsuleSG
    tags:
      - key: "AnsibleGroup"
        value: "capsules"
      - key: "ostype"
        value: "linux"
      - key: "instance_filter"
        value: "{{ env_type }}-{{ email }}"
ansible/configs/multi-cloud-capsule/destroy_env.yml
New file
@@ -0,0 +1,18 @@
---
- import_playbook: ../../include_vars.yml
- name: Delete Infrastructure
  hosts: localhost
  connection: local
  gather_facts: False
  become: no
  tasks:
    - name: Run infra-ec2-template-destroy
      include_role:
        name: "infra-{{cloud_provider}}-template-destroy"
      when: cloud_provider == 'ec2'
    - name: Run infra-azure-template-destroy
      include_role:
        name: "infra-{{cloud_provider}}-template-destroy"
      when: cloud_provider == 'azure'
ansible/configs/multi-cloud-capsule/files/cloud_providers/ec2_cloud_template.j2
New file
@@ -0,0 +1,367 @@
#jinja2: lstrip_blocks: "True"
---
AWSTemplateFormatVersion: "2010-09-09"
Mappings:
  RegionMapping: {{ aws_ami_region_mapping | to_json }}
Resources:
  Vpc:
    Type: "AWS::EC2::VPC"
    Properties:
      CidrBlock: "{{ aws_vpc_cidr }}"
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: "{{ aws_vpc_name }}"
        - Key: Application
          Value:
            Ref: "AWS::StackId"
  VpcInternetGateway:
    Type: "AWS::EC2::InternetGateway"
  VpcRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId:
        Ref: Vpc
  VPCRouteInternetGateway:
    DependsOn: VpcGA
    Type: "AWS::EC2::Route"
    Properties:
      GatewayId:
        Ref: VpcInternetGateway
      DestinationCidrBlock: "0.0.0.0/0"
      RouteTableId:
        Ref: VpcRouteTable
  VpcGA:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      InternetGatewayId:
        Ref: VpcInternetGateway
      VpcId:
        Ref: Vpc
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    DependsOn:
      - Vpc
    Properties:
    {% if aws_availability_zone is defined %}
      AvailabilityZone: {{ aws_availability_zone }}
    {% endif %}
      CidrBlock: "{{ aws_public_subnet_cidr }}"
      Tags:
        - Key: Name
          Value: "{{project_tag}}"
        - Key: Application
          Value:
            Ref: "AWS::StackId"
      MapPublicIpOnLaunch: true
      VpcId:
        Ref: Vpc
  PublicSubnetRTA:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      RouteTableId:
        Ref: VpcRouteTable
      SubnetId:
        Ref: PublicSubnet
{% for security_group in security_groups|list %}
  {{security_group['name']}}:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: Host
      VpcId:
        Ref: Vpc
      Tags:
        - Key: Name
          Value: "{{security_group['name']}}"
{% endfor %}
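{# Security group and rule names become CloudFormation resource IDs below,
   so every rule name must be unique within its group. #}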
{% for security_group in security_groups|list %}
{% for rule in security_group.rules %}
  {{security_group['name']}}{{rule['name']}}:
    Type: "AWS::EC2::SecurityGroup{{rule['rule_type']}}"
    Properties:
     GroupId:
       Fn::GetAtt:
         - "{{security_group['name']}}"
         - GroupId
     IpProtocol: {{rule['protocol']}}
     FromPort: {{rule['from_port']}}
     ToPort: {{rule['to_port']}}
  {% if rule['cidr'] is defined %}
     CidrIp: "{{rule['cidr']}}"
  {% endif  %}
  {% if rule['from_group'] is defined %}
     SourceSecurityGroupId:
       Fn::GetAtt:
        - "{{rule['from_group']}}"
        - GroupId
  {% endif  %}
{% endfor %}
{% endfor %}
  DnsZonePrivate:
    Type: "AWS::Route53::HostedZone"
    Properties:
      Name: "{{ aws_dns_zone_private }}"
      VPCs:
        - VPCId:
            Ref: Vpc
          VPCRegion:
            Ref: "AWS::Region"
      HostedZoneConfig:
        Comment: "{{ aws_comment }}"
  DnsZonePublic:
    Type: "AWS::Route53::HostedZone"
    Properties:
      Name: "{{ aws_dns_zone_public }}"
      HostedZoneConfig:
        Comment: "{{ aws_comment }}"
  DnsPublicDelegation:
    Type: "AWS::Route53::RecordSetGroup"
    DependsOn:
      - DnsZonePublic
    Properties:
    {% if HostedZoneId is defined %}
      HostedZoneId: "{{ HostedZoneId }}"
    {% else %}
      HostedZoneName: "{{ aws_dns_zone_root }}"
    {% endif %}
      RecordSets:
        - Name: "{{ aws_dns_zone_public }}"
          Type: NS
          TTL: {{ aws_dns_ttl_public }}
          ResourceRecords:
            "Fn::GetAtt":
              - DnsZonePublic
              - NameServers
{% for instance in instances %}
{% if instance['dns_loadbalancer'] | d(false) | bool
  and not instance['unique'] | d(false) | bool %}
  {{instance['name']}}DnsLoadBalancer:
    Type: "AWS::Route53::RecordSetGroup"
    DependsOn:
    {% for c in range(1, (instance['count']|int)+1) %}
      - {{instance['name']}}{{c}}
      {% if instance['public_dns'] %}
      - {{instance['name']}}{{c}}EIP
      {% endif %}
    {% endfor %}
    Properties:
      HostedZoneId:
        Ref: DnsZonePublic
      RecordSets:
      - Name: "{{instance['name']}}.{{aws_dns_zone_public_prefix|d('')}}{{ aws_dns_zone_public }}"
        Type: A
        TTL: {{ aws_dns_ttl_public }}
        ResourceRecords:
{% for c in range(1,(instance['count'] |int)+1) %}
          - "Fn::GetAtt":
            - {{instance['name']}}{{c}}
            - PublicIp
{% endfor %}
{% endif %}
{% for c in range(1,(instance['count'] |int)+1) %}
  {{instance['name']}}{{loop.index}}:
    Type: "AWS::EC2::Instance"
    Properties:
{% if custom_image is defined %}
      ImageId: {{ custom_image.image_id }}
{% else %}
      ImageId:
        Fn::FindInMap:
        - RegionMapping
        - Ref: AWS::Region
        - {{ instance.image | default(aws_default_image) }}
{% endif %}
      InstanceType: "{{instance['flavor'][cloud_provider]}}"
      KeyName: "{{instance.key_name | default(key_name)}}"
    {% if instance['UserData'] is defined %}
      {{instance['UserData']}}
    {% endif %}
    {% if instance['security_groups'] is defined %}
      SecurityGroupIds:
      {% for sg in instance.security_groups %}
        - Ref: {{ sg }}
      {% endfor %}
    {% else %}
      SecurityGroupIds:
        - Ref: DefaultSG
    {% endif %}
      SubnetId:
        Ref: PublicSubnet
      Tags:
    {% if instance['unique'] | d(false) | bool %}
        - Key: Name
          Value: {{instance['name']}}
        - Key: internaldns
          Value: {{instance['name']}}.{{aws_dns_zone_private_chomped}}
        - Key: publicname
          Value: {{instance['name']}}.{{aws_dns_zone_public_prefix|d('')}}{{subdomain_base }}
    {% else %}
        - Key: Name
          Value: {{instance['name']}}{{loop.index}}
        - Key: internaldns
          Value: {{instance['name']}}{{loop.index}}.{{aws_dns_zone_private_chomped}}
        - Key: publicname
          Value: {{instance['name']}}{{loop.index}}.{{aws_dns_zone_public_prefix|d('')}}{{ subdomain_base}}
    {% endif %}
        - Key: "owner"
          Value: "{{ email | default('unknownuser') }}"
        - Key: "Project"
          Value: "{{project_tag}}"
        - Key: "{{project_tag}}"
          Value: "{{ instance['name'] }}"
    {% for tag in instance['tags'] %}
        - Key: {{tag['key']}}
          Value: {{tag['value']}}
    {% endfor %}
      BlockDeviceMappings:
    {% if '/dev/sda1' not in instance.volumes|d([])|json_query('[].device_name')
      and '/dev/sda1' not in instance.volumes|d([])|json_query('[].name')
%}
        - DeviceName: "/dev/sda1"
          Ebs:
            VolumeSize: "{{ instance['rootfs_size'] | default(aws_default_rootfs_size) }}"
            VolumeType: "{{ aws_default_volume_type }}"
    {% endif %}
    {% for vol in instance.volumes|default([]) if vol.enable|d(true) %}
        - DeviceName: "{{ vol.name | default(vol.device_name) }}"
          Ebs:
          {% if cloud_provider in vol and 'type' in vol[cloud_provider] %}
            VolumeType: "{{ vol[cloud_provider].type }}"
          {% else %}
            VolumeType: "{{ aws_default_volume_type }}"
          {% endif %}
            VolumeSize: "{{ vol.size }}"
    {% endfor %}
  {{instance['name']}}{{loop.index}}InternalDns:
    Type: "AWS::Route53::RecordSetGroup"
    Properties:
      HostedZoneId:
        Ref: DnsZonePrivate
      RecordSets:
    {% if instance['unique'] | d(false) | bool %}
        - Name: "{{instance['name']}}.{{aws_dns_zone_private}}"
    {% else %}
        - Name: "{{instance['name']}}{{loop.index}}.{{aws_dns_zone_private}}"
    {% endif %}
          Type: A
          TTL: {{ aws_dns_ttl_private }}
          ResourceRecords:
            - "Fn::GetAtt":
              - {{instance['name']}}{{loop.index}}
              - PrivateIp
{% if instance['public_dns'] %}
  {{instance['name']}}{{loop.index}}EIP:
    Type: "AWS::EC2::EIP"
    DependsOn:
    - VpcGA
    Properties:
      InstanceId:
        Ref: {{instance['name']}}{{loop.index}}
  {{instance['name']}}{{loop.index}}PublicDns:
    Type: "AWS::Route53::RecordSetGroup"
    DependsOn:
      - {{instance['name']}}{{loop.index}}EIP
    Properties:
      {% if secondary_stack is defined %}
      HostedZoneName: "{{ aws_dns_zone_public }}"
      {% else %}
      HostedZoneId:
        Ref: DnsZonePublic
      {% endif %}
      RecordSets:
      {% if instance['unique'] | d(false) | bool %}
        - Name: "{{instance['name']}}.{{aws_dns_zone_public_prefix|d('')}}{{ aws_dns_zone_public }}"
      {% else %}
        - Name: "{{instance['name']}}{{loop.index}}.{{aws_dns_zone_public_prefix|d('')}}{{ aws_dns_zone_public }}"
      {% endif %}
          Type: A
          TTL: {{ aws_dns_ttl_public }}
          ResourceRecords:
          - "Fn::GetAtt":
            - {{instance['name']}}{{loop.index}}
            - PublicIp
{% endif %}
{% endfor %}
{% endfor %}
  Route53User:
    Type: AWS::IAM::User
    Properties:
      Policies:
        - PolicyName: Route53Access
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action: route53:GetHostedZone
                Resource: arn:aws:route53:::change/*
              - Effect: Allow
                Action: route53:ListHostedZones
                Resource: "*"
              - Effect: Allow
                Action:
                  - route53:ChangeResourceRecordSets
                  - route53:ListResourceRecordSets
                  - route53:GetHostedZone
                Resource:
                  Fn::Join:
                    - ""
                    - - "arn:aws:route53:::hostedzone/"
                      - Ref: DnsZonePublic
              - Effect: Allow
                Action: route53:GetChange
                Resource: arn:aws:route53:::change/*
  Route53UserAccessKey:
    DependsOn: Route53User
    Type: AWS::IAM::AccessKey
    Properties:
      UserName:
        Ref: Route53User
Outputs:
  Route53internalzoneOutput:
    Description: The ID of the internal route 53 zone
    Value:
      Ref: DnsZonePrivate
  Route53User:
    Value:
      Ref: Route53User
    Description: IAM User for Route53 (Let's Encrypt)
  Route53UserAccessKey:
    Value:
      Ref: Route53UserAccessKey
    Description: Access key ID of the Route53 IAM user (Let's Encrypt)
  Route53UserSecretAccessKey:
    Value:
      Fn::GetAtt:
        - Route53UserAccessKey
        - SecretAccessKey
    Description: Secret access key of the Route53 IAM user (Let's Encrypt)
ansible/configs/multi-cloud-capsule/files/cloud_providers/osp_cloud_template_master.j2
New file
@@ -0,0 +1,225 @@
#jinja2: lstrip_blocks: "True"
---
heat_template_version: 2018-03-02
description: >-
  Top level HOT for creating new project, network resources and instances.
  This template relies on ResourceGroups and a nested template that is
  called to provision instances, ports, & floating IPs.
resources:
  {{ guid }}-infra_key:
    type: OS::Nova::KeyPair
    properties:
      name: {{ guid }}-infra_key
      save_private_key: true
{% if osp_project_create | bool %}
  {{ guid }}-project_user:
    type: OS::Keystone::User
    properties:
      name: {{ guid }}-user
      password: {{ heat_user_password }}
      domain: Default
  {{ guid }}-project_role_user:
    type: OS::Keystone::UserRoleAssignment
    properties:
      user: {get_resource: {{ guid }}-project_user}
      roles:
        - {project: {{ osp_project_name }}, role: _member_}
        - {project: {{ osp_project_name }}, role: swiftoperator}
    depends_on:
      - {{ guid }}-project_user
{% endif %}
{% for network in networks %}
  {{ network['name'] }}-network:
    type: OS::Neutron::Net
    properties:
      name: "{{ guid }}-{{ network['name'] }}-network"
      shared: {{ network['shared'] }}
  {{ network['name'] }}-subnet:
    type: OS::Neutron::Subnet
    properties:
      name: "{{ guid }}-{{ network['name'] }}-subnet"
      network_id: {get_resource: {{ network['name'] }}-network}
{% if network['dns_nameservers'] is defined %}
      dns_nameservers: [{{ network['dns_nameservers'] | list | join(",") }}]
{% endif %}
      cidr: {{ network['subnet_cidr'] }}
      gateway_ip: {{ network['gateway_ip'] }}
      allocation_pools:
        - start: {{ network['allocation_start'] }}
          end: {{ network['allocation_end'] }}
{% if network['create_router'] %}
  {{ network['name'] }}-router:
    type: OS::Neutron::Router
    properties:
      name: "{{ guid }}-{{ network['name'] }}-router"
      external_gateway_info:
        network: "{{ provider_network }}"
  {{ network['name'] }}-router_private_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router: {get_resource: {{ network['name'] }}-router}
      subnet: {get_resource: {{ network['name'] }}-subnet}
{% endif %}
{% endfor %}
  ###################
  # Security groups #
  ###################
{% for security_group in security_groups | list %}
  {{ security_group['name'] }}:
    type: OS::Neutron::SecurityGroup
    properties:
      name: {{ guid }}-{{ security_group['name'] }}
{% if security_group['description'] is defined %}
      description: "{{ security_group['description'] }}"
{% endif %}
{% for rule in security_group.rules %}
{% if rule['name'] is defined %}
  {{ guid }}-{{ security_group['name'] }}-rule_{{ rule['name'] }}:
{% else %}
  {{ guid }}-{{ security_group['name'] }}-rule_{{ lookup('password', '/dev/null length=5 chars=ascii_letters,digits') }}:
{% endif %}
    type: OS::Neutron::SecurityGroupRule
    properties:
      security_group: {get_resource: {{ security_group['name'] }}}
      direction: {{ rule['direction'] | default(rule.rule_type) | lower }}
      protocol: {{ rule['protocol'] | lower }}
{% if rule['description'] is defined %}
      description: {{ rule['description'] }}
{% endif %}
{% if rule['port_range_min'] is defined or
  rule.from_port is defined %}
      port_range_min: {{ rule['port_range_min'] | default(rule.from_port) }}
{% endif %}
{% if rule['port_range_max'] is defined or
  rule.to_port is defined %}
      port_range_max: {{ rule['port_range_max'] | default(rule.to_port) }}
{% endif %}
{% if rule['remote_ip_prefix'] is defined or
  rule.cidr is defined %}
      remote_ip_prefix: {{ rule['remote_ip_prefix'] | default(rule.cidr) }}
{% endif %}
{% if rule['remote_group'] is defined or
  rule.from_group is defined %}
      remote_group: {get_resource: {{ rule['remote_group'] | default(rule.from_group) }}}
{% endif %}
    depends_on: {{ security_group['name'] }}
{% endfor %}
{% endfor %}
  #############
  # Instances #
  #############
{% for instance in instances %}
  {% for myinstanceindex in range(instance.count|int) %}
    {% set iname = instance.name if instance.count == 1 else [instance.name, loop.index] | join() %}
  ########### {{ iname }} ###########
  port_{{ iname }}:
    type: OS::Neutron::Port
    properties:
      network: { get_resource: {{ instance['network'] | default('default') }}-network }
      security_groups:
    {% if instance.security_groups is defined %}
      {% for security_group in instance.security_groups %}
        - {get_resource: {{ security_group }}}
      {% endfor %}
    {% endif %}
    depends_on:
      - {{ instance['network'] | default('default') }}-router_private_interface
    {% if instance.floating_ip | default(false) or instance.public_dns | default(false) %}
  fip_{{ iname }}:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: {{ provider_network }}
    depends_on:
      - {{ instance['network'] | default('default') }}-router_private_interface
  fip_association_{{ iname }}:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: {get_resource: fip_{{ iname }}}
      port_id: {get_resource: port_{{ iname }}}
    {% endif %}
  server_{{ iname }}:
    type: OS::Nova::Server
    properties:
      name: {{ iname }}
      flavor: {{ instance.flavor.osp }}
      key_name: {get_resource: {{ guid }}-infra_key}
      block_device_mapping_v2:
        - image: {{ instance.image_id | default(instance.image) }}
          delete_on_termination: true
          volume_size: {{ instance['rootfs_size'] | default(osp_default_rootfs_size) }}
          boot_index: 0
      user_data: |
        #cloud-config
        ssh_authorized_keys: {{ all_ssh_authorized_keys | to_json }}
      user_data_format: RAW
      networks:
        - port: {get_resource: port_{{ iname }}}
    {% if instance['metadata'] is defined %}
      metadata: {{ instance.metadata | combine(default_metadata) | to_json }}
    {% endif %}
    {% if instance.tags is defined %}
      # Convert EC2 tags
      metadata:
      {% for key, value in default_metadata.items() %}
        '{{ key }}': {{ value | to_json }}
      {% endfor %}
      {% for tag in instance.tags %}
        '{{ tag.key }}': {{ tag.value | to_json }}
      {% endfor %}
    {% endif %}
    depends_on:
      - {{ instance['network'] | default('default') }}-router_private_interface
    {% if 'security_groups' in instance %}
      {% for security_group in instance.security_groups %}
      - {{ security_group }}
      {% endfor %}
    {% endif %}
    {% if instance.volumes is defined %}
  #### Volumes for {{ iname }} ####
      {% for volume in instance.volumes %}
        {% set loopvolume = loop %}
        {% set vname = ["volume", iname, loopvolume.index] | join('_') %}
  {{ vname }}:
    type: OS::Cinder::Volume
    properties:
      size: {{ volume.volume_size | default(volume.size) }}
          {% if volume.volume_name is defined %}
      name: {{ volume.volume_name | default(volume.name) }}
          {% endif %}
  volume_attachment_{{ vname }}:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: {get_resource: {{ vname }}}
      instance_uuid: {get_resource: server_{{ iname }}}
      {% endfor %}
    {% endif %}
  {% endfor %}
{% endfor %}
outputs:
  {{ guid }}-infra_key:
    description: The SSH infra key
    value: {get_attr: [{{ guid }}-infra_key, private_key]}
ansible/configs/multi-cloud-capsule/files/hosts_template.j2
New file
@@ -0,0 +1,24 @@
{# These are the satellite hosts #}
{% if groups['satellites'] is defined %}
[satellites]
{% for host in groups['satellites'] %}
{% if cloud_provider == 'ec2' %}
{{host}}
{% elif cloud_provider == 'osp' %}
{{host}} ansible_host={{host}}.example.com
{% endif %}
{% endfor %}
[all:vars]
{# ###########################################################################
### Ansible Vars
########################################################################### #}
timeout=60
ansible_become=yes
ansible_user={{remote_user}}
[all:children]
satellites
{% endif %}
ansible/configs/multi-cloud-capsule/files/repos_template.j2
New file
@@ -0,0 +1,43 @@
{% if groups['capsules'] is defined %}
{% if inventory_hostname in groups['capsules'] %}
{# capsule repos #}
[rhel-7-server-rpms]
name=Red Hat Enterprise Linux 7
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-rpms
enabled=1
gpgcheck=0
[rhel-server-rhscl-7-rpms]
name=Red Hat Enterprise Linux 7 RHSCL
baseurl={{own_repo_path}}/{{repo_version}}/rhel-server-rhscl-7-rpms
enabled=1
gpgcheck=0
[rhel-7-server-ansible-2.6-rpms]
name=Red Hat Enterprise Ansible 2.6
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-ansible-2.6-rpms
enabled=1
gpgcheck=0
[rhel-7-server-satellite-capsule-6.4-rpms]
name=Red Hat Enterprise Satellite Capsule 6.4
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-satellite-capsule-6.4-rpms
enabled=1
gpgcheck=0
[rhel-7-server-satellite-maintenance-6-rpms]
name=Red Hat Enterprise Satellite 6 Maintenance
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-satellite-maintenance-6-rpms
enabled=1
gpgcheck=0
[rhel-7-server-satellite-tools-6.4-rpms]
name=Red Hat Enterprise Linux Satellite tools 6.4
baseurl={{own_repo_path}}/{{repo_version}}/rhel-7-server-satellite-tools-6.4-rpms
enabled=1
gpgcheck=0
{% endif %}
{% endif %}
ansible/configs/multi-cloud-capsule/infra.yml
New file
@@ -0,0 +1,3 @@
---
- import_playbook: ./infra_configs/{{ cloud_provider }}_infrastructure_deployment.yml
ansible/configs/multi-cloud-capsule/infra_configs/ec2_infrastructure_deployment.yml
New file
@@ -0,0 +1,126 @@
---
- import_playbook: ../../../cloud_providers/ec2_pre_checks.yml
- name: Step 001.1 Deploy Infrastructure
  hosts: localhost
  connection: local
  gather_facts: false
  become: false
  tags:
    - step001
    - step001.1
    - deploy_infrastructure
  tasks:
    - name: Run infra-ec2-template-generate Role
      import_role:
        name: infra-ec2-template-generate
    - name: Run infra-ec2-template-create Role
      import_role:
        name: infra-ec2-template-create
      vars:
        aws_region_loop: "{{aws_region}}"
    - name: Run infra-ec2-template-create Role into FallBack region
      include_role:
        name: infra-ec2-template-create
      vars:
        aws_region_loop: "{{item}}"
      with_items: "{{ fallback_regions }}"
      when:
        - fallback_regions is defined
        - cloudformation_out is failed
    - name: report Cloudformation error
      fail:
        msg: "FAIL {{ project_tag }} Create Cloudformation"
      when: not cloudformation_out is succeeded
      tags:
        - provision_cf_template
- name: Step 001.2 Create Inventory and SSH config setup
  hosts: localhost
  connection: local
  gather_facts: false
  become: false
  tags:
    - step001
    - step001.2
    - create_inventory
    - create_ssh_config
  tasks:
    # Sometimes the infra step is skipped, for example when scaling up a cluster.
    # when step001.1 is skipped, aws_region_final is not defined.
    - when: aws_region_final is not defined
      include_tasks: ec2_detect_region_tasks.yml
    - name: Run infra-ec2-create-inventory Role
      import_role:
        name: infra-ec2-create-inventory
    - name: Run Common SSH Config Generator task file
      import_tasks: ./infra-common-ssh-config-generate.yml
# include global vars again, this time for all hosts now that the inventory is built
- import_playbook: ../../../include_vars.yml
  tags:
    - create_inventory
    - must
- name: Step 001.3 Configure Linux Hosts and Wait for Connection
  hosts:
    - all:!windows:!network
  gather_facts: false
  any_errors_fatal: true
  ignore_errors: false
  become: true
  tags:
    - step001
    - step001.3
    - wait_ssh
    - set_hostname
  tasks:
    - name: set facts for remote access
      tags:
        - create_inventory
      set_fact:
        aws_region_final: "{{hostvars['localhost'].aws_region_final}}"
        ansible_ssh_extra_args: "{{ ansible_ssh_extra_args|d() }} -F {{output_dir}}/{{ env_type }}_{{ guid }}_ssh_conf"
    - name: Run infra-ec2-wait_for_linux_hosts Role
      import_role:
        name: infra-ec2-wait_for_linux_hosts
    - name: Run infra-ec2-linux-set-hostname Role
      import_role:
        name: infra-ec2-linux-set-hostname
- name: Step 001.4 Configure Windows Hosts and Wait for Connection
  gather_facts: false
  hosts:
    - windows
  tags:
    - step001
    - step001.4
  tasks:
    - name: set facts for remote access
      tags:
        - create_inventory
      set_fact:
        ansible_become: false
        ansible_connection: winrm
        ansible_host: "{{ public_dns_name }}"
        ansible_password: "{{ hostvars['localhost'].windows_password | default(hostvars['localhost'].generated_windows_password) }}"
        ansible_port: 5986
        ansible_user: Administrator
        ansible_winrm_server_cert_validation: ignore
        aws_region_final: "{{hostvars['localhost'].aws_region_final}}"
    - name: Run infra-ec2-wait_for_windows_hosts Role
      import_role:
        name: infra-ec2-wait_for_windows_hosts
    - name: Set output_dir for all windows hosts
      set_fact:
        output_dir: "{{ hostvars.localhost.output_dir }}"
ansible/configs/multi-cloud-capsule/infra_configs/infra-common-ssh-config-generate.yml
New file
@@ -0,0 +1,54 @@
---
- name: Store hostname as a fact
  set_fact:
    ansible_ssh_config: "{{output_dir}}/{{ env_type }}_{{ guid }}_ssh_conf"
    ansible_known_host: "{{output_dir}}/{{ env_type }}_{{ guid }}_ssh_known_hosts"
- name: Set remote_user for ec2
  set_fact:
    remote_user: ec2-user
  when: "cloud_provider == 'ec2'"
- name: Set remote_user for osp
  set_fact:
    remote_user: cloud-user
  when: "cloud_provider == 'osp'"
- name: Delete local ssh config and known_hosts file, start fresh
  file:
    dest: "{{ item }}"
    state: absent
  loop:
    - "{{ansible_known_host}}"
    - "{{ ansible_ssh_config }}"
- name: Create empty local ssh config
  file:
    dest: "{{ ansible_ssh_config }}"
    state: touch
  when: secondary_stack is not defined
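# The generated ssh config file is consumed later via `-F` in ansible_ssh_extra_args /
# ansible_ssh_common_args (see the ec2/osp infrastructure deployment playbooks).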
- name: Add proxy config to workdir ssh config file
  blockinfile:
    dest: "{{ ansible_ssh_config }}"
    marker: "##### {mark} ADDED PROXY HOST {{ item }} {{ env_type }}-{{ guid }} ######"
    content: |
        Host {{ item }} {{ hostvars[item].shortname |d('')}}
          Hostname {{ hostvars[item].public_ip_address }}
          IdentityFile {{ ssh_key | default(infra_ssh_key) | default(ansible_ssh_private_key_file) | default(default_key_name)}}
          IdentitiesOnly yes
          User {{ remote_user }}
          ControlMaster auto
          ControlPath /tmp/{{ guid }}-%r-%h-%p
          ControlPersist 5m
          StrictHostKeyChecking no
          ConnectTimeout 60
          ConnectionAttempts 10
          UserKnownHostsFile {{ansible_known_host}}
  loop: "{{ groups['capsules'] }} "
  tags:
    - proxy_config_main
...
ansible/configs/multi-cloud-capsule/infra_configs/infra-osp-create-inventory.yml
New file
@@ -0,0 +1,64 @@
---
- set_fact:
    _name_selector: name
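# json_query(_name_selector) pulls the server's name attribute; default(server.name)
# is the fallback when the query returns nothing.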
- set_fact:
    stack_tag: "{{env_type | replace('-', '_')}}_{{guid}}"
  tags:
    - create_inventory
    - must
- when: server.status != 'terminated'
  block:
    - name: Add hosts to inventory
      add_host:
        name: "{{ server | json_query(_name_selector) | default(server.name) }}"
        original_name: "{{ server.name }}"
        groups:
          # TODO: remove these tag_* groups
          - "tag_Project_{{stack_tag}}"
          - "tag_{{ stack_tag }}"
          - "{{ server.metadata.ostype | default('unknowns') }}"
        ansible_user: "{{ ansible_user }}"
        remote_user: "{{ remote_user }}"
        # ansible_ssh_private_key_file: "{{item['key_name']}}"
        # key_name: "{{item['key_name']}}"
        state: "{{ server.status }}"
        instance_id: "{{ server.id }}"
        isolated: "{{ server.metadata.isolated | default(false) }}"
        # private_dns_name: "{{item['private_dns_name']}}"
        private_ip_address: "{{ server.private_v4 }}"
        public_ip_address: "{{ server.public_v4 | default('') }}"
        image_id: "{{ server.image.id | default('') }}"
        ansible_ssh_extra_args: "-o StrictHostKeyChecking=no"
        # bastion: "{{ local_bastion | default('') }}"
      loop: "{{ r_osp_facts.ansible_facts.openstack_servers }}"
      loop_control:
        label: "{{ server | json_query(_name_selector) | default(server.name) }}"
        loop_var: server
      tags:
        - create_inventory
        - must
    - add_host:
        name: "{{ server | json_query(_name_selector) | default(server.name) }}"
        groups: "{{ server.metadata.AnsibleGroup }}"
      loop: "{{ r_osp_facts.ansible_facts.openstack_servers }}"
      loop_control:
        label: "{{ server | json_query(_name_selector) | default(server.name) }}"
        loop_var: server
      when: server.metadata.AnsibleGroup | default('') != ''
      tags:
        - create_inventory
        - must
- name: debug hostvars
  debug:
    var: hostvars
    verbosity: 2
- name: debug groups
  debug:
    var: groups
    verbosity: 2
ansible/configs/multi-cloud-capsule/infra_configs/osp_infrastructure_deployment.yml
New file
@@ -0,0 +1,109 @@
---
- name: Step 001.1 Deploy Infrastructure
  hosts: localhost
  connection: local
  gather_facts: false
  become: false
  tags:
    - step001
    - step001.1
    - deploy_infrastructure
  environment:
    OS_AUTH_URL: "{{ osp_auth_url }}"
    OS_USERNAME: "{{ osp_auth_username }}"
    OS_PASSWORD: "{{ osp_auth_password }}"
    OS_PROJECT_NAME: "admin"
    OS_PROJECT_DOMAIN_ID: "{{ osp_auth_project_domain }}"
    OS_USER_DOMAIN_NAME: "{{ osp_auth_user_domain }}"
  tasks:
    - name: Run infra-osp-project-create Role
      import_role:
        name: infra-osp-project-create
      tags:
        - infra-osp-project-create
    - name: Run infra-osp-template-generate Role
      import_role:
        name: infra-osp-template-generate
    - name: Run infra-osp-template-create Role
      import_role:
        name: infra-osp-template-create
- name: Step 001.2 Create Inventory and SSH config setup
  hosts: localhost
  connection: local
  gather_facts: false
  become: false
  tags:
    - step001
    - step001.2
    - create_inventory
    - create_ssh_config
  environment:
    OS_AUTH_URL: "{{ osp_auth_url }}"
    OS_USERNAME: "{{ osp_auth_username }}"
    OS_PASSWORD: "{{ osp_auth_password }}"
    OS_PROJECT_NAME: "{{ osp_project_name }}"
    OS_PROJECT_DOMAIN_ID: "{{ osp_auth_project_domain }}"
    OS_USER_DOMAIN_NAME: "{{ osp_auth_user_domain }}"
  tasks:
    - name: Gather instance facts
      os_server_facts:
        server: "*"
        filters:
          metadata:
            guid: "{{ guid }}"
            env_type: "{{ env_type }}"
      register: r_osp_facts
    - name: debug osp_facts
      debug:
        var: r_osp_facts
        verbosity: 2
    - name: Run infra-osp-dns Role
      import_role:
        name: infra-osp-dns
      vars:
        _dns_state: present
    - name: Run infra-osp-create-inventory Role
      import_tasks: ./infra-osp-create-inventory.yml
    - name: Run Common SSH Config Generator task file
      import_tasks: ./infra-common-ssh-config-generate.yml
# include global vars again, this time for all hosts now that the inventory is built
- import_playbook: ../../../include_vars.yml
  tags:
    - create_inventory
    - must
- name: Step 001.3 Configure Linux Hosts and Wait for Connection
  hosts:
    - all:!windows:!network
  gather_facts: false
  any_errors_fatal: true
  ignore_errors: false
  tags:
    - step001
    - step001.3
    - wait_ssh
  tasks:
    - name: set facts for remote access
      tags:
        - create_inventory
      set_fact:
        # set python interpreter: Useful when the distrib running ansible has a different path
        # ex: when running using the alpine image
        #ansible_python_interpreter: env python
        ansible_ssh_common_args: >-
          {{ ansible_ssh_extra_args|d() }}
          -F {{ output_dir }}/{{ env_type }}_{{ guid }}_ssh_conf
          -o ControlPath=/tmp/{{ guid }}-%r-%h-%p
    - name: Run infra-osp-wait_for_linux_hosts Role
      import_role:
        name: infra-osp-wait_for_linux_hosts
ansible/configs/multi-cloud-capsule/post_infra.yml
New file
@@ -0,0 +1,24 @@
- name: Step 002 Post Infrastructure
  hosts: localhost
  connection: local
  become: false
  tags:
    - step002
    - post_infrastructure
  tasks:
    - name: Launch a Tower Job Template with update-on-launch inventory set
      uri:
        url: "https://{{ ansible_tower_ip }}/api/v1/job_templates/{{ job_template_id }}/launch/"
        method: POST
        user: "{{tower_admin}}"
        password: "{{tower_admin_password}}"
        body:
          extra_vars:
            guid: "{{guid}}"
            ipa_host_password: "{{ipa_host_password}}"
        body_format: json
        validate_certs: False
        headers:
          Content-Type: "application/json"
        status_code: 200, 201
      when: tower_run | bool
ansible/configs/multi-cloud-capsule/post_software.yml
New file
@@ -0,0 +1,36 @@
- name: Step 00xxxxx post software
  hosts: support
  gather_facts: False
  become: yes
  tasks:
    - debug:
        msg: "Post-Software tasks Started"
# - name: Step lab post software deployment
#   hosts: bastions
#   gather_facts: False
#   become: yes
#   tags:
#     - opentlc_bastion_tasks
#   tasks:
#     - import_role:
#         name: bastion-opentlc-ipa
#       when: install_ipa_client|bool
- name: PostSoftware flight-check
  hosts: localhost
  connection: local
  gather_facts: false
  become: false
  tags:
    - post_flight_check
  tasks:
    - debug:
        msg: "Post-Software checks completed successfully"
ansible/configs/multi-cloud-capsule/pre_infra.yml
New file
@@ -0,0 +1,12 @@
- name: Step 000 Pre Infrastructure
  hosts: localhost
  connection: local
  become: false
  tags:
    - step001
    - pre_infrastructure
  tasks:
    - name: Pre-Infra
      debug:
        msg: "Pre-Infra work is done"
ansible/configs/multi-cloud-capsule/pre_software.yml
New file
@@ -0,0 +1,46 @@
- name: Step 003 Pre Software
  hosts: localhost
  gather_facts: false
  become: false
  tasks:
    - debug:
        msg: "Step 003 Pre Software"
    - import_role:
        name: infra-local-create-ssh_key
      when: set_env_authorized_key | bool
- name: Configure all hosts with Repositories
  hosts:
    - all:!windows
  become: true
  gather_facts: False
  tags:
    - step004
    - common_tasks
  roles:
    # - { role: "set-repositories", when: 'repo_method is defined' }
    - { role: "set_env_authorized_key", when: 'set_env_authorized_key' }
# - name: Configuring Bastion Hosts
#   hosts: bastions
#   become: true
#   roles:
#     - { role: "common", when: 'install_common' }
#     - {role: "bastion", when: 'install_bastion' }
#     - { role: "bastion-opentlc-ipa", when: 'install_ipa_client' }
#   tags:
#     - step004
#     - bastion_tasks
- name: PreSoftware flight-check
  hosts: localhost
  connection: local
  gather_facts: false
  become: false
  tags:
    - presoftware_flight_check
  tasks:
    - debug:
        msg: "Pre-Software checks completed successfully"
ansible/configs/multi-cloud-capsule/sample_vars_ec2.yml
New file
@@ -0,0 +1,23 @@
---
env_type: multi-cloud-capsule
output_dir: /tmp/workdir              # Writable working scratch directory
email: capsule_vm@example.com
guid: capaws01
cloud_provider: ec2
aws_region: ap-southeast-2
satellite_version: 6.4
install_capsule: true
configure_capsule: true
satellite_public_fqdn: satellite1.cap01.example.opentlc.com
capsule_activationkey: capsule_key
capsule_org: gpte
consumer_key:  "cuBfSo9NhB338aSwvRC5VKgZt5Sqhez5"
consumer_secret: "mpYncnDHkRq9XrHDoereQ3Hwejyyed6c"
capsule_cert_path:  /tmp/capsule-cert.tar
ansible/configs/multi-cloud-capsule/sample_vars_osp.yml
New file
@@ -0,0 +1,23 @@
---
env_type: multi-cloud-capsule
output_dir: /tmp/workdir              # Writable working scratch directory
email: capsule_vm@example.com
cloud_provider: osp
guid: caposp01
osp_cluster_dns_zone: red.osp.opentlc.com
###### satellite env related variables ###############
satellite_version: 6.4
satellite_public_fqdn: satellite1.cap01.example.opentlc.com
capsule_activationkey: capsule_key
capsule_org: gpte
consumer_key:  "cuBfSo9NhB338aSwvRC5VKgZt5Sqhez5"
consumer_secret: "mpYncnDHkRq9XrHDoereQ3Hwejyyed6c"
capsule_cert_path:  /tmp/capsule-cert.tar
install_capsule: true
configure_capsule: true
ansible/configs/multi-cloud-capsule/software.yml
New file
@@ -0,0 +1,28 @@
---
- name: Step 00xxxxx software
  hosts: localhost
  gather_facts: False
  become: false
  tasks:
    - debug:
        msg: "Software tasks Started"
- name: Configuring capsule Hosts
  hosts: capsules
  become: True
  gather_facts: True
  roles:
    - { role: "satellite-public-hostname" }
    - { role: "satellite-capsule-installation",   when: install_capsule }
    - { role: "satellite-capsule-configuration",  when: configure_capsule }
- name: Software flight-check
  hosts: localhost
  connection: local
  gather_facts: false
  become: false
  tags:
    - post_flight_check
  tasks:
    - debug:
        msg: "Software checks completed successfully"
ansible/configs/multi-cloud-capsule/start.yml
New file
@@ -0,0 +1,21 @@
---
- import_playbook: ../../include_vars.yml
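# Note: start/stop match EC2 instances by their CloudFormation stack tag, so these
# playbooks currently cover only the AWS deployment of this config.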
- name: Start instances
  hosts: localhost
  gather_facts: false
  become: false
  environment:
    AWS_ACCESS_KEY_ID: "{{aws_access_key_id}}"
    AWS_SECRET_ACCESS_KEY: "{{aws_secret_access_key}}"
  tasks:
    - debug:
        msg: "Step 002 Post Infrastructure"
    - name: Start instances
      ec2:
        instance_tags:
          "aws:cloudformation:stack-name": "{{ project_tag }}"
        state: running
        region: "{{ aws_region }}"
ansible/configs/multi-cloud-capsule/stop.yml
New file
@@ -0,0 +1,21 @@
---
- import_playbook: ../../include_vars.yml
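# Note: stop matches EC2 instances by their CloudFormation stack tag, so this
# playbook currently covers only the AWS deployment of this config.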
- name: Stop instances
  hosts: localhost
  gather_facts: false
  become: false
  environment:
    AWS_ACCESS_KEY_ID: "{{aws_access_key_id}}"
    AWS_SECRET_ACCESS_KEY: "{{aws_secret_access_key}}"
  tasks:
    - debug:
        msg: "Step 002 Post Infrastructure"
    - name: Stop instances
      ec2:
        instance_tags:
          "aws:cloudformation:stack-name": "{{ project_tag }}"
        state: stopped
        region: "{{ aws_region }}"