Restoring packages and running locally
Simply restore the packages and start the app in the usual way.
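Presumably (this being a NodeJS service) that amounts to:

```bash
# restore the packages and run locally (assuming a standard Node.js setup)
npm install
npm start
```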
Dockerfile
The Dockerfile is all that’s needed to create a container image.
It exposes port 3000, which needs to be specified later on when mapping the port to the instance.
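A minimal sketch of what such a Dockerfile could look like for a NodeJS app (the base image and file layout are assumptions):

```dockerfile
# minimal sketch; base image and file layout are assumptions
FROM node:6
WORKDIR /app
COPY . /app
RUN npm install
# the port the app listens on; mapped to a host port later on
EXPOSE 3000
CMD ["npm", "start"]
```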
Build the image
For example:
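Presumably tagging the image with the name used in the Marathon definition further down:

```bash
# build the image from the Dockerfile in the current directory
docker build -t orbifold/propensity .
```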
Note the dot at the end!
Rename a container
Use the unique identifier or the first few characters, say ’24drf’:
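A sketch; the new name propensity is an assumption:

```bash
# rename the container whose id starts with 24drf
docker rename 24drf propensity
```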
You can see the image via
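Presumably the standard listing command:

```bash
docker images
```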
Run the container
Run it via
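Presumably something along these lines, mapping the exposed port 3000 to host port 44330 (the port used below):

```bash
# run detached, mapping container port 3000 to host port 44330
docker run -d -p 44330:3000 orbifold/propensity
```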
Browse the container
The Docker host most likely does not have the localhost address; you can find out the actual address by using the following.
On the Mac this usually gives 192.168.99.100.
Hence, the container mapped to host port 44330 can be accessed via http://192.168.99.100:44330.
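Presumably docker-machine (Docker Toolbox) was used here; the machine name default is an assumption:

```bash
# show the IP address of the Docker host VM
docker-machine ip default
```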
Create an SSH tunnel
You need to have an X509 key pair with the public key uploaded to Azure. The SSH tunnel is created by specifying the private key in the following command.
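For an ACS DC/OS cluster it typically looks like this; the user name, key path, and master FQDN are placeholders:

```bash
# forward local port 80 to the DC/OS admin router on the first master (SSH on port 2200)
sudo ssh -i ~/.ssh/id_rsa -fNL 80:localhost:80 -p 2200 azureuser@<master-fqdn>
```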
Clash with Marathon
To communicate with Azure you need to set up an SSH tunnel. Usually it persists and you can kill the tunnel explicitly via:
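For example, matching the tunnel command from above:

```bash
# kill the background ssh tunnel explicitly
pkill -f 'ssh.*-fNL 80:localhost:80'
```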
See this article for instance.
Give a name to or rename a Docker image
Note that all names have to be lowercase.
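Renaming an image amounts to tagging it; a sketch reusing the id from above:

```bash
# give the image a new (lowercase) repository name
docker tag 24drf orbifold/propensity
```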
Remove all docker images and containers
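Presumably the usual brute-force cleanup:

```bash
# remove all containers (running and stopped), then all images
docker rm -f $(docker ps -aq)
docker rmi $(docker images -q)
```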
Removing the dangling <none>:<none> images
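can presumably be done via the dangling filter:

```bash
# remove images that have no tag and are not referenced by any container
docker rmi $(docker images -f "dangling=true" -q)
```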
See this article for instance.
Talk to the container
Enter the container via
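Given the detach sequence mentioned next, this was presumably docker attach:

```bash
# attach to the running container (id from above)
docker attach 24drf
```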
To exit it, press CTRL+P followed by CTRL+Q (this detaches without stopping the container).
Publish the container on Docker Hub
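Presumably the standard push flow, using the repository name from the Marathon definition below:

```bash
# log in to Docker Hub and push the image
docker login
docker push orbifold/propensity
```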
DC/OS dashboards
Once you have an SSH tunnel, the following dashboards can be accessed:
- Changing logging settings: http://localhost/logging
- Marathon manage applications: http://localhost/marathon
- Plenty of Mesos info via: http://localhost/mesos
Instance definition
You can define an instance via a form UI, but things are actually easier with JSON:
{ "id": "/propensity", "cmd": null, "cpus": 0.2, "mem": 128, "disk": 0, "instances": 1, "portDefinitions": [ { "port": 10000, "protocol": "tcp", "labels": {} } ], "container": { "type": "DOCKER", "volumes": [], "docker": { "image": "orbifold/propensity", "network": "BRIDGE", "portMappings": [ { "containerPort": 3000, "hostPort": 80, "servicePort": 10000, "protocol": "tcp", "labels": {} } ], "privileged": false, "parameters": [], "forcePullImage": false } }, "acceptedResourceRoles": [ "slave_public" ] }
Azure Container Service
Creating a separate resource group is advised because ACS creates quite a bit of infrastructure.
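With the Azure CLI that amounts to something like the following; the group name, cluster name, and location are placeholders:

```bash
# separate resource group for the cluster, then the ACS DC/OS cluster itself
az group create --name acs-demo --location westeurope
az acs create --orchestrator-type DCOS --resource-group acs-demo \
    --name propensity-cluster --generate-ssh-keys
```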
The Marathon UI is similar to the Azure Service Fabric Explorer and manages all apps and scaling:
It also offers a click-and-go UI to install integrations and is in this respect more ‘open’ than ASF:
Finally, Mesos offers plenty of info about the health and goings-on of the apps:
Personal findings
- easy learning curve
- apps are very lightweight. A NodeJS service literally takes a dozen lines (see the sketch after this list).
- containers can simply be taken to another cloud without the slightest change.
- continuous integration with Jenkins or any other CI service works hand-in-hand with GitHub and Docker Hub.
- still in a state of flux. Here and there things fail, but that could just as well be due to my lack of expertise.
- Docker containers have become universal boxes. Not sure that ignoring this would be a good thing going forward.
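To illustrate the point about lightweight NodeJS services, a minimal (hypothetical) sketch of such a service:

```javascript
// hypothetical minimal service; listens on the port exposed in the Dockerfile
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ message: 'Hello from the container' }));
}).listen(3000, () => console.log('Listening on port 3000'));
```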