olmax99/nomad-local-dev
Nomad Local Development

Set up and manage a local Nomad Cluster using Consul, Vault, Traefik and Vagrant.

Useful Commands

$ vagrant up --provider=libvirt
$ vagrant ssh node-1 -c "sudo ifconfig"

# Consul
$ vagrant ssh node-1 -c "consul catalog nodes"
$ vagrant ssh node-1 -c "consul catalog services -node=node-1 -tags"
# Vault
$ curl http://192.168.0.111:8200/v1/sys/health | python -m json.tool   # Check Vault 'unsealed'
$ vault kv put secret/aws/s3 aws_access_key_id=somekeyid               # Create new secret
$ curl -X PUT -H "Content-Type: application/json" -d '{"key":"****"}' \
 "http://192.168.0.111:8200/v1/sys/unseal"                             # Unseal
# Nomad
$ NOMAD_ADDR="http://192.168.122.111:4646" nomad server members -detailed
$ nomad node status
$ nomad job run <path/to/jobspec.nomad>

$ nomad job status <Job_ID>
$ nomad job stop <Job_ID>

$ docker exec -i -t ansible_controller bash
$ rsync -avz -e "ssh -i $HOME/.vagrant.d/insecure_private_key" \
 example/ root@<node-ip>:/root/example
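`nomad job run` consumes an HCL jobspec. As a rough sketch (the job name, Docker image, and resource values below are made up for illustration and are not taken from this repository's example/ directory), a minimal service job could look like:

```hcl
# Hypothetical minimal jobspec; adjust datacenter, image, and resources.
job "example-web" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 1

    network {
      port "http" {
        to = 80
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "nginx:alpine"
        ports = ["http"]
      }

      resources {
        cpu    = 100
        memory = 64
      }
    }
  }
}
```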

Get Started

  • When launched, the container entrypoint pulls all ansible-galaxy roles defined in the ansible/requirements.yml file.

1. Environment Setup

| # | Action | Command |
|---|--------|---------|
| 1 | Clone the repository | `host:~/ ❯ git clone <nomad-cluster>` |
| 2 | Edit the Vagrantfile and change the IP addresses to match your local network; also ensure they match ansible/inventory/group_vars/service_traefik/traefik.yml | |
| 3 | Spin up the Vagrant machines | `host:<nomad-cluster> ❯ vagrant up --provider=libvirt` |
| 4 | Build the container for the Ansible controller | `host:<nomad-cluster> ❯ docker-compose up --build` |
| 5 | Access the Ansible controller | `host:<nomad-cluster> ❯ docker exec -i -t ansible_controller bash` |
| 6 | Create a symlink to the ansible-vault script | `$ ln -s ./ansible/scripts/vault-pass-client.py vault-pass-client.py` |

2. Stack Deployment

| # | Action | Command |
|---|--------|---------|
| 1 | Change the IP addresses in the hosts file to match what you set in the Vagrantfile and your local network | `$ vagrant ssh node-1 -c "sudo ifconfig"` |
| 3 | Run the individual playbooks a first time (use any password as the Vault password) to deploy Consul + Dnsmasq and Vault | `(ansible_controller) ~$ ansible-playbook ansible/consul.yml` and `ansible-playbook ansible/vault.yml` |
| 4 | Unseal Vault: open http://192.168.0.111:8200/ and follow the process (remember to store the master key and the root token) | http://192.168.0.111:8200/ |
| 5 | Provide a Vault token to Nomad (remember to store the new password) | Follow the 'Integrate Ansible-Vault' section under 'General Instructions' and insert the encrypted token into ansible/inventory/group_vars/service_nomad.yml |
| 6 | Run the playbook a second time to configure Nomad with the Vault token | `ansible-playbook --vault-id dev@vault-pass-client.py ansible/site.yml` |
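The unseal step can also be scripted against the Vault HTTP API shown under 'Useful Commands'. A minimal sketch, assuming the Vault address used throughout this README and a placeholder unseal key (both are values you must substitute from your own setup):

```shell
#!/bin/sh
# Sketch: build the JSON body for Vault's /v1/sys/unseal endpoint.
# VAULT_ADDR and UNSEAL_KEY are placeholders for your own values.
VAULT_ADDR="${VAULT_ADDR:-http://192.168.0.111:8200}"

unseal_payload() {
  # Emit {"key":"<unseal key>"} for use with curl -X PUT
  printf '{"key":"%s"}' "$1"
}

# Example invocation (requires a running, initialized Vault):
# curl -X PUT -d "$(unseal_payload "$UNSEAL_KEY")" "$VAULT_ADDR/v1/sys/unseal"
unseal_payload "example-key"
```

With multiple unseal keys (the Shamir default), the PUT must be repeated once per key until the `sealed` field in the response turns false.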

Main IP Addresses

| Service | IP | Hostname |
|---------|----|----------|
| Consul | http://192.168.0.111:8500 | http://consul.service.lab.consul:8500 |
| Vault | http://192.168.0.111:8200 | http://active.vault.service.lab.consul:8200 |
| Nomad | http://192.168.0.111:4646 | http://nomad-servers.service.lab.consul:4646 |
| Traefik | http://192.168.0.111:8081 | |

3. New Applications Deployment

| # | Action | Command |
|---|--------|---------|
| 1 | Find the Nomad job description | /home/vagrant/example/APPNAME.nomad |
| 2 | Inside the Nomad server, run the levant deployment script | `(vagrant@node-1):~/example$ bash ./deploy-dev.sh` |
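The contents of deploy-dev.sh are not shown in this README. As a hedged sketch of what a levant-based wrapper might do (the job name and the `-force-count` flag are assumptions here, so check the actual script in example/):

```shell
#!/bin/sh
# Hypothetical sketch of a levant deploy wrapper like deploy-dev.sh.
# The job name and levant flags are assumptions; inspect the real script.
JOB="${1:-APPNAME}"

build_cmd() {
  # Compose the levant deploy command for the given jobspec.
  echo "levant deploy -force-count ${1}.nomad"
}

if [ "${DRY_RUN:-1}" = "1" ]; then
  build_cmd "$JOB"        # print what would run
else
  $(build_cmd "$JOB")     # actually deploy via levant
fi
```

The dry-run default is a deliberate safety net for a sketch like this: it prints the composed command instead of deploying until you opt in with `DRY_RUN=0`.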

Confirm services

$ curl http://192.168.122.111/0/
$ curl -H Host:vault.service.consul http://192.168.122.111/ui/

Environment teardown

| # | Action | Command |
|---|--------|---------|
| 1 | Stop or destroy the Vagrant machines | `host:<nomad-cluster> ❯ vagrant halt` or `vagrant destroy` |

General Instructions

Integrate Ansible-Vault

  • The variable nomad_vault_token needs to be created via Ansible-Vault
  • In the Ansible-Controller set up gpg and pass
# 'pass' needs a pregenerated 'gpg' key
(ansible_controller)~$ gpg --list-key       # will be empty
(ansible_controller)~$ gpg --full-generate-key   # the key will have a full name

# Create password-store and ansible-vault encryption password
(ansible_controller)~$ pass init "<gpg_full_key_name>"
(ansible_controller)~$ pass insert dev   # 'dev' is the vault-id label

# Create inline secret to be used with ansible-vault
# Ensure keyname 'dev' matches in ansible.cfg and ansible/scripts/pass-keyring-client.py l.131 !!
# 130 if not keyname:
# 131        keyname = "dev"  <-- HERE
$ ansible-vault encrypt_string --vault-id dev@vault-pass-client.py '[VAULT_ROOT_TOKEN]' --name 'nomad_vault_token'
(ansible_controller)~$ ansible-playbook --vault-id dev@ansible/scripts/vault-pass-client.py ansible/nomad.yml
(ansible_controller)~$ ansible-playbook --vault-id dev@prompt ansible/nomad.yml
# Confirm Nomad
http://192.168.0.111:4646

Troubleshoot Vagrant

  • Error while activating network: Call to virNetworkCreate failed: internal error: Network is already in use
    # This is a libvirt/KVM network problem
    
    # 1. Identify default interface
    $ sudo virsh net-list
    $ sudo virsh net-dumpxml vagrant-libvirt
    $ sudo virsh net-edit vagrant-libvirt
    
    # 2. Delete Interface and vagrant virtual network manually
    # You can also check libvirt networks in VMM > Edit > Connection Details > Virtual Networks
    $ sudo virsh net-destroy vagrant-libvirt
        

Author

Olaf Marangone Contact: [email protected]

Credits
