Supernets: AWS Terraform deployment problems

Hi,

I’m attempting to follow the guide for Deploying Polygon Supernets on AWS with Terraform and I’m running into multiple issues.

I was able to deploy the infrastructure described by the Terraform code, with the additional tweak of setting the jumpbox_ssh_access variable to 0.0.0.0/0 (without this, creation of the ingress aws_security_group_rule.open_ssh times out).
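For reference, this is roughly the override I used. The variable name jumpbox_ssh_access comes from the guide's Terraform; the exact type (string vs. list) should follow the module's variables.tf, and 0.0.0.0/0 is obviously wide open, so it's a temporary workaround only:

```hcl
# terraform.tfvars (sketch; confirm the variable's type in variables.tf)
jumpbox_ssh_access = "0.0.0.0/0" # allows SSH from anywhere; testing only
```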

However, when it comes to provisioning via Ansible, I've hit a blocker at the ansible -inventory --graph --inventory inventory/aws_ec2.yml command step. Firstly, the command should actually be ansible-inventory (without the space; perhaps a doc update is needed?), but when I run it, it does not produce an Ansible inventory file as stated by the docs. I'm not familiar with Ansible, so I'm unsure how to troubleshoot this.
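In case it helps, here is exactly what I ran after correcting the command name. The second command is my own guess at how a file might be produced, using ansible-inventory's --list/--output flags; it is not from the guide:

```shell
# Corrected command name (ansible-inventory, no space); prints a graph, no file:
ansible-inventory --graph --inventory inventory/aws_ec2.yml

# My guess at actually producing an inventory file (not from the guide):
ansible-inventory --inventory inventory/aws_ec2.yml --list --output inventory.json
```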

Attempting the next step, pinging the instances, also fails (times out, actually) with the following:

i-05f5989624cecec38 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname i-05f5989624cecec38: nodename nor servname provided, or not known",
    "unreachable": true
}
i-03a4c372c32298ecf | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname i-03a4c372c32298ecf: nodename nor servname provided, or not known",
    "unreachable": true
}
i-07d11061ec567f27a | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname i-07d11061ec567f27a: nodename nor servname provided, or not known",
    "unreachable": true
}
i-048c975d430782359 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname i-048c975d430782359: nodename nor servname provided, or not known",
    "unreachable": true
}
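For completeness, the ping attempt above was run as follows. This is my best reconstruction of the guide's step using Ansible's standard ping module; the exact invocation in the docs may differ:

```shell
# Ad-hoc connectivity check against every host in the dynamic inventory
ansible all -m ping --inventory inventory/aws_ec2.yml
```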

Though not stated in the docs, I also changed the filters: tag:BaseDN value in aws_ec2.yml to correspond to the BaseDN output from the Terraform deployment. I'm not sure whether that is right; some docs coverage on this would be helpful.
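Concretely, the relevant part of my inventory/aws_ec2.yml now looks roughly like this (the BaseDN value below is a placeholder for my actual Terraform output, and the surrounding keys are the aws_ec2 inventory plugin's standard layout, not copied from the guide):

```yaml
plugin: aws_ec2
filters:
  # Changed to match the BaseDN output from my `terraform apply` run
  tag:BaseDN: <BaseDN-from-terraform-output>
```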

Running ansible-inventory --graph --inventory inventory/aws_ec2.yml does produce this output:

@all:
  |--@ungrouped:
  |--@aws_ec2:
  |  |--i-03a4c372c32298ecf
  |  |--i-048c975d430782359
  |  |--i-05f5989624cecec38
  |  |--i-07d11061ec567f27a
  |--@validator:
  |  |--i-03a4c372c32298ecf
  |  |--i-048c975d430782359
  |  |--i-05f5989624cecec38
  |  |--i-07d11061ec567f27a
  |--@dev_edge_rg_private:
  |  |--i-03a4c372c32298ecf
  |  |--i-048c975d430782359
  |  |--i-05f5989624cecec38
  |  |--i-07d11061ec567f27a
  |--@validator_004:
  |  |--i-03a4c372c32298ecf
  |--@validator_002:
  |  |--i-048c975d430782359
  |--@validator_003:
  |  |--i-05f5989624cecec38
  |--@validator_001:
  |  |--i-07d11061ec567f27a

so I feel it’s close, but I’m missing something.
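One hunch: the inventory is keying hosts by EC2 instance ID, which isn't something SSH can resolve from my machine, so the "Could not resolve hostname" errors above would follow. Would something like the aws_ec2 plugin's hostnames or compose options in aws_ec2.yml be the expected fix (this is speculation on my part, not from the docs)?

```yaml
# Speculative additions to inventory/aws_ec2.yml (not from the guide):
# either have the plugin name hosts by a resolvable address...
hostnames:
  - private-ip-address
# ...or keep instance IDs as inventory names but point SSH at the private IP
compose:
  ansible_host: private_ip_address
```

If SSH is supposed to be tunnelled through the jumpbox, I'd also be grateful for a pointer on how the docs expect that to be wired up.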

Many thanks in advance for any help!

Yuri