# National Parks - Java Tomcat Application
This is an example Java Tomcat application packaged by Habitat. This example app has existed for some time, and another example can be found here. The differences between this example and previous examples are the following:

- `core/mongodb` - Previous examples had you build a version of mongodb that was already populated with data before the application started.
- `mongo.toml` - This repo includes a `user.toml` file for overriding the default configuration of mongodb.
- `core/haproxy` - This repo uses the `core/haproxy` package as a load balancer in front of National Parks.
- Scaling - In both the `terraform/azure` and `terraform/aws` plans there is a `count` variable which allows you to scale out the web instances to demonstrate the concept of choreography vs orchestration in Habitat.
## Usage
In order to run this repo, you must first install Habitat. You can find setup docs on the Habitat Website.
### Build/Test National-Parks App Locally
1. Clone this repo
2. `cd national-parks-demo`
3. Export environment variables to forward ports on the Studio container:

   ```
   export HAB_DOCKER_OPTS='-p 8000:8000 -p 8080:8080 -p 8085:8085 -p 9631:9631'
   ```

4. `hab studio enter`
5. `build`
6. `source results/last_build.env`
7. Load the `core/mongodb` package from the public depot:

   ```
   hab svc load core/mongodb
   ```

8. Override the default configuration of mongodb:

   ```
   hab config apply mongodb.default $(date +%s) mongo.toml
   ```

9. Load the most recent build of national-parks:

   ```
   hab svc load $pkg_ident --bind database:mongodb.default
   ```

10. Load `core/haproxy` from the public depot:

    ```
    hab svc load core/haproxy --bind backend:national-parks.default
    ```

11. Override the default configuration of HAProxy:

    ```
    hab config apply haproxy.default $(date +%s) haproxy.toml
    ```

12. Run `sup-log` to see the output of the supervisor
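The `$(date +%s)` in the `hab config apply` steps above supplies the configuration's version (incarnation) number, which must be greater than the last one applied. A quick sketch of why the Unix epoch timestamp is a convenient choice:

```shell
# Each `hab config apply` needs a version number greater than the previous one.
# `date +%s` (seconds since the Unix epoch) is always increasing, so successive
# applies get successively larger version numbers without any bookkeeping.
v1=$(date +%s)
sleep 1
v2=$(date +%s)
echo "first apply version:  $v1"
echo "second apply version: $v2"
```

Any monotonically increasing integer works; the timestamp just makes it automatic.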
You should now be able to hit the front end of the national-parks site as follows:
- Directly - `http://localhost:8080/national-parks`
- HAProxy - `http://localhost:8085/national-parks`
You can also view the admin console for HAProxy to see how the webserver was added dynamically to the load balancer:
```
http://localhost:8000/haproxy-stats
username: admin
password: password
```
## Build a new version of the application
There is also an index.html file in the root of the repo that updates the map of the National-Parks app to use red pins and a colored map. This can be used to demonstrate the package promotion capabilities of Habitat.
1. Create a new feature branch - `git checkout -b update_homepage`
2. Bump the `pkg_version` in `habitat/plan.sh`
3. Overwrite `src/main/webapp/index.html` with the contents of the `red-index.html` in the root directory. _NOTE: the index.html has a version number hard coded on line 38. Update that to your version number if you want it to match._
4. `hab studio enter`
5. `build`
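The version bump in step 2 can also be scripted. A minimal sketch, assuming `habitat/plan.sh` declares the version as a plain `pkg_version=...` line (the version numbers here are placeholders, not the repo's actual values):

```shell
# Simulate the relevant lines of habitat/plan.sh (placeholder values)
plan='pkg_name=national-parks
pkg_version=6.2.0'

# Rewrite the pkg_version line with sed; against the real file this would be
# `sed -i 's/^pkg_version=.*/pkg_version=6.3.0/' habitat/plan.sh`
bumped=$(printf '%s\n' "$plan" | sed 's/^pkg_version=.*/pkg_version=6.3.0/')
printf '%s\n' "$bumped"
```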
## Terraform
Included in the repo is Terraform code for launching the application in AWS, Azure, and Google Kubernetes Engine. Provision one or more of them, and then you can watch Habitat update across cloud deployments.
### Provision National-Parks in AWS
You will need to have an AWS account already created.

Steps:
1. `cd terraform/aws`
2. `cp tfvars.example terraform.tfvars`
3. Edit `terraform.tfvars` with your own values
4. `terraform apply`
Once the provisioning finishes you will see the output with the various public IP addresses:
```
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

haproxy_public_ip = 34.216.185.16
mongodb_public_ip = 54.185.74.152
national_parks_public_ip = 34.220.209.230
permanent_peer_public_ip = 34.221.251.189
```
`http://<haproxy_public_ip>:8085/national-parks`

or

`http://<haproxy_public_ip>:8000/haproxy-stats`
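If you want to script against these outputs instead of copying IPs by hand, the `name = value` lines shown above are easy to parse. A sketch assuming that exact output shape (`terraform output haproxy_public_ip` is an alternative that asks Terraform directly):

```shell
# Sample of the `terraform apply` output shown above
outputs='haproxy_public_ip = 34.216.185.16
mongodb_public_ip = 54.185.74.152'

# Extract the HAProxy IP (field 2 when split on " = ") and build the app URL
haproxy_ip=$(printf '%s\n' "$outputs" | awk -F' = ' '/^haproxy_public_ip/ {print $2}')
echo "http://${haproxy_ip}:8085/national-parks"
```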
### Provision National-Parks in Azure
You will need to have an Azure account already created.

Steps:
1. `cd terraform/azure`
2. `terraform init`
3. `az login`
4. `cp tfvars.example terraform.tfvars`
5. Edit `terraform.tfvars` with your own values
6. `terraform apply`
Once provisioning finishes you will see the output with the various public IP addresses:
```
Apply complete! Resources: 19 added, 0 changed, 0 destroyed.

Outputs:

haproxy-public-ip = 40.76.29.195
instance_ips = [
    40.76.29.123
]
mongodb-public-ip = 40.76.17.2
permanent-peer-public-ip = 40.76.31.133
```
### Scaling out Azure and AWS Deployments
Both the AWS and Azure deployments support scaling of the web front end instances to demonstrate the concept of 'choreography' vs 'orchestration' with Habitat. The choreography comes from the idea that when the front end instances scale out, the supervisor for the HAProxy instance automatically takes care of adding the new members to the pool and begins balancing traffic correctly across all instances.
Steps:
1. In your `terraform.tfvars` add a line for `count = 3`
2. Run `terraform apply`
3. Once provisioning finishes, go to `http://<haproxy-public-ip>:8000/haproxy-stats` to see the new instances in the pool
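For reference, the scaling change is a single line in `terraform.tfvars`; everything else in your file stays as it was:

```hcl
# terraform.tfvars -- scale the web front end to three instances
count = 3
```

Re-running `terraform apply` then reconciles the instance count, and the HAProxy supervisor picks up the new members on its own.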
### Deploy National-Parks in Google Kubernetes Engine
You will need to have a Google Cloud account already created and the Google Cloud SDK installed.
#### Before you begin
1. `git clone https://github.com/habitat-sh/habitat-operator`
2. `git clone https://github.com/habitat-sh/habitat-updater`
3. Create a `terraform.tfvars`
#### Provision Kubernetes
1. `cd terraform/gke`
2. `terraform apply`
3. When provisioning completes you will see two commands you need to run:

   ```
   1_creds_command = gcloud container clusters get-credentials...
   2_admin_permissions = kubectl create clusterrolebinding cluster-admin-binding...
   ```
#### Deploy Habitat Operator and Habitat Updater
First we need to deploy the Habitat Operator:
```
git clone https://github.com/habitat-sh/habitat-operator
cd habitat-operator
kubectl apply -f examples/rbac/rbac.yml
kubectl apply -f examples/rbac/habitat-operator.yml
```
Now we can deploy the Habitat Updater:
```
git clone https://github.com/habitat-sh/habitat-updater
cd habitat-updater
kubectl apply -f kubernetes/rbac/rbac.yml
kubectl apply -f kubernetes/rbac/updater.yml
```
#### Deploy National-Parks into Kubernetes
Now that we have k8s stood up and the Habitat Operator and Updater deployed, we are ready to deploy our app.
1. `cd national-parks-demo/terraform/gke/habitat-operator`
2. Deploy the GKE load balancer:

   ```
   kubectl create -f gke-service.yml
   ```

3. Edit the `habitat.yml` template with the proper origin names on lines 19 and 36
4. Deploy the application:

   ```
   kubectl create -f habitat.yml
   ```
Once deployment finishes you can run `kubectl get all` and see the running pods:
```
$ kubectl get all
NAME                                   READY   STATUS    RESTARTS   AGE
pod/habitat-operator-c7c559d7b-z5z7m   1/1     Running   0          3d1h
pod/habitat-updater-578c99fbcd-kbs2d   1/1     Running   0          3d1h
pod/national-parks-app-0               1/1     Running   0          2d14h
pod/national-parks-db-0                1/1     Running   0          3d1h

NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
service/kubernetes          ClusterIP      10.47.240.1     <none>          443/TCP          3d2h
service/national-parks      NodePort       10.47.241.104   <none>          8080:30001/TCP   3d1h
service/national-parks-lb   LoadBalancer   10.47.254.247   35.227.157.16   80:31247/TCP     3d1h

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/habitat-operator   1         1         1            1           3d1h
deployment.extensions/habitat-updater    1         1         1            1           3d1h

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.extensions/habitat-operator-c7c559d7b    1         1         1       3d1h
replicaset.extensions/habitat-updater-578c99fbcd    1         1         1       3d1h

NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/habitat-operator   1         1         1            1           3d1h
deployment.apps/habitat-updater    1         1         1            1           3d1h

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/habitat-operator-c7c559d7b    1         1         1       3d1h
replicaset.apps/habitat-updater-578c99fbcd    1         1         1       3d1h

NAME                                  DESIRED   CURRENT   AGE
statefulset.apps/national-parks-app   1         1         3d1h
statefulset.apps/national-parks-db    1         1         3d1h
```
Find the EXTERNAL-IP for `service/national-parks-lb`:

`http://<EXTERNAL-IP>/national-parks`
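Pulling that EXTERNAL-IP out of the table can be scripted too. A sketch assuming the column layout shown above; in practice `kubectl get svc national-parks-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` is the more robust route:

```shell
# Sample of the service line from the `kubectl get all` output above
svc_table='service/national-parks-lb   LoadBalancer   10.47.254.247   35.227.157.16   80:31247/TCP   3d1h'

# EXTERNAL-IP is the fourth whitespace-separated column
external_ip=$(printf '%s\n' "$svc_table" | awk '/national-parks-lb/ {print $4}')
echo "http://${external_ip}/national-parks"
```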