Docker Blog

Docker Index: Dramatic Growth in Docker Usage Affirms the Continued Rising Power of Developers
Thu, 30 Jul 2020
Developers have always been an integral part of business innovation and transformation. With the massive increase in Docker usage, we can see the continued rising importance of developers as they create the next generation of cloud native applications.

You may recall in February we introduced the Docker Index, which gives a snapshot and analysis of developer and dev team preferences and trends based on anonymized data from 5 million Docker Hub users, 2 million Docker Desktop users and countless other developers engaging with content on Docker Hub.

According to a newly updated Docker Index, the eight months between November 2019 and July 2020 saw a dramatic swell in consumption across the Docker community and ecosystem. How exactly is usage expanding? Let us count the ways.

Last November, there were 130 billion pulls on Docker Hub. That seemed worth talking about, so we shared the data in a blog post in February. But since then, consumption of the world’s most popular repository for application components (Docker Hub, lest there be any doubt) has skyrocketed; in July, total pulls on Docker Hub reached 242 billion. That is almost a doubling of pulls in roughly eight months. (To be clear, the numbers represent total pulls since Docker Hub was created in June of 2014.)

Containers Are Now Mainstream and Usage Is Only Growing.

We also reported 8 billion pulls in the month of November—up from 5.5 billion a month the previous year. Impressive developer usage much? We thought so. But July has already left that number in the dust, with 11 billion pulls in the past month, according to the Docker Index.

It is also worth noting that the 160+ Official Images represent north of 20% of all pulls, which underscores the value-add of the Docker ecosystem. Developers want and need a curated, maintained and secure set of content that Docker is investing in.

Over the same time, the number of repositories on Docker Hub has grown from 6 million to 7 million, while Docker Hub users have grown from 5 million to 7 million, and Docker Desktop installations have gone from 2.4 million to 2.9 million. 

Are you getting the picture? If it wasn’t clear before, it’s even more apparent that there is ever more developer adoption of, and engagement with, a container-based application strategy. Simply said, containers are mainstream, usage is only growing, and developers are at the heart of this trend. Companies of all sizes (small, medium and large) are using Docker.

The Docker Index also offers insights into trends such as top search terms and most popular container images. For example, the top five most popular container images in 2019 were busybox, nginx, redis, mongo and postgres. In the first six months of 2020, they were postgres, redis, memcached, alpine and traefik.

Most Popular Container Images in 2020

With the growth and adoption of cloud and the need for cloud native apps, Docker’s capabilities to help dev teams build, share and run these apps are perfectly aligned with the trends. And let’s not forget some other massive drivers of Docker growth that we set in motion in recent months. In May, we extended our strategic collaboration with Microsoft to simplify code-to-cloud application development for developers and development teams. The move allows developers to use native Docker commands to run applications in Azure Container Instances (ACI), providing a frictionless experience when building cloud native applications.

And in July, we announced a collaboration with Amazon Web Services (AWS) to simplify the lives of developers by allowing them to focus on application development, streamlining the process of deploying and managing containers in AWS from their local development environment. The move allows developers to use Docker to easily deploy apps on Amazon ECS and AWS Fargate.

Users Want Choice.

All of which is to say, the adoption of containers for cloud native apps—and Docker usage in particular—is looking extremely robust. Stay tuned for more insights from the Docker Index in the months ahead. In the meantime, onwards and upwards!

Containerized Python Development – Part 3
Tue, 28 Jul 2020
This is the last part in the series of blog posts showing how to set up and optimize a containerized Python development environment. The first part covered how to containerize a Python service and the best development practices for it. The second part showed how to easily set up the different components that our Python application needs and how to manage the lifecycle of the overall project with docker-compose.

In this final part, we review the development cycle of the project and discuss in more detail how to apply code updates and debug failures of the containerized Python services. The goal is to analyze how to speed up these recurrent phases of the development process so that we get an experience similar to local development.

Applying Code Updates

In general, our containerized development cycle consists of writing/updating code, building, running and debugging it.

The building and running phases are mostly waiting time, so we want them to go quickly and leave us free to focus on coding and debugging.

We now analyze how to optimize the build phase during development. The build phase corresponds to the image build time when we change the Python source code. The image needs to be rebuilt in order to get the Python code updates into the container before launching it.

We can, however, apply code changes without having to rebuild the image. We can do this simply by bind-mounting the local source directory to its path in the container. For this, we update the Compose file as follows:

docker-compose.yaml

...
  app:
    build: app
    restart: always
    volumes:
      - ./app/src:/code
...

With this, we have direct access to the updated code and therefore we can skip the image build and restart the container to reload the Python process.
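
Restarting just the one service is then enough to pick up the mounted code; a minimal sketch, assuming the service is named app as in the Compose file above:

$ docker-compose restart app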

Furthermore, we can avoid restarting the container altogether if we run inside it a reloader process that watches for file changes and triggers a restart of the Python process once a change is detected. We need to make sure we have bind-mounted the source code in the Compose file as described previously.

In our example, we use the Flask framework which, in debugging mode, runs a very convenient module called the reloader. The reloader watches all the source code files and automatically restarts the server when it detects that a file has changed. To enable the debug mode we only need to set the debug parameter as below:

server.py

server.run(debug=True, host='0.0.0.0', port=5000)

If we check the logs of the app container, we see that the Flask server is running in debugging mode.

$ docker-compose logs app
Attaching to project_app_1
app_1 | * Serving Flask app "server" (lazy loading)
app_1 | * Environment: production
app_1 | WARNING: This is a development server. Do not use it in a production deployment.
app_1 | Use a production WSGI server instead.
app_1 | * Debug mode: on
app_1 | * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
app_1 | * Restarting with stat
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 315-974-099

Once we update the source code and save, we should see the change notification and the reload in the logs.

$ docker-compose logs app
Attaching to project_app_1
app_1 | * Serving Flask app "server" (lazy loading)
...
app_1 | * Debugger PIN: 315-974-099
app_1 | * Detected change in '/code/server.py', reloading
app_1 | * Restarting with stat
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 315-974-099

Debugging Code

We can debug code in two main ways.

First is the old-fashioned way of placing print statements all over the code to check the runtime values of objects and variables. Applying this to containerized processes is quite straightforward, and we can easily check the output with a docker-compose logs command.

Second, and the more serious approach, is using a debugger. When we have a containerized process, we need to run a debugger inside the container and then connect to that remote debugger to be able to inspect the instance data.

We take our Flask application as an example again. When running in debug mode, aside from the reloader module it also includes an interactive debugger. If we update the code to raise an exception, the Flask service will return a detailed response with the exception.
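
For instance, a hypothetical route like the following (not part of the original project, added here only for illustration) would produce the interactive debugger page when visited:

server.py

@server.route("/boom")
def boom():
    # deliberately fail so that Flask's interactive debugger kicks in
    raise RuntimeError("intentional error for debugging")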

Another interesting case to exercise is interactive debugging, where we place breakpoints in the code and do a live inspection. For this we need an IDE with Python and remote debugging support. We choose Visual Studio Code to show how to debug Python code running in containers; to connect to the remote debugger directly from VS Code, we need to do the following.

First, we need to map locally the port we use to connect to the debugger. We can easily do this by adding the port mapping to the Compose file:

docker-compose.yaml

...
  app:
    build: app
    restart: always
    volumes:
      - ./app/src:/code
    ports:
      - 5678:5678

...

Next, we need to import the debugger module in the source code and make it listen on the port we defined in the Compose file. We should not forget to add it to the dependencies file as well, and to rebuild the image for the app service so that the debugger package gets installed. For this exercise, we choose to use the ptvsd debugger package that VS Code supports.

server.py

...
import ptvsd
ptvsd.enable_attach(address=('0.0.0.0', 5678))
...
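
If we want the service to block at start-up until the IDE attaches (handy when debugging initialization code), ptvsd also provides a wait call; an optional line we could add right after enable_attach:

ptvsd.wait_for_attach()  # block until the debugger attaches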

requirements.txt

Flask==1.1.1
mysql-connector==2.2.9
ptvsd==4.3.2

We need to remember that whenever we change the Compose file, we need to run docker-compose down to remove the current container setup and then docker-compose up to redeploy with the new configuration.

Finally, we need to create a ‘Remote Attach’ configuration in VS Code to launch the debugging mode.

The launch.json for our project should look like:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            "request": "attach",
            "port": 5678,
            "host": "localhost",
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}/app/src",
                    "remoteRoot": "/code"
                }
            ]
        }
    ]
}

We need to make sure the path mapping matches the local source directory and its path inside the container.

Once we do this, we can easily place breakpoints in the IDE, start the debugging mode based on the configuration we created and, finally, trigger the code to reach the breakpoint.

Conclusion

This series of blog posts showed how to quickly set up a containerized Python development environment, manage the project lifecycle, and apply code updates and debug containerized Python services. Putting into practice all we discussed should make the containerized development experience nearly identical to the local one.

Multi-arch build, what about GitLab CI?
Mon, 27 Jul 2020
Following the previous article, where we saw how to build multi-arch images using GitHub Actions, we will now show how to do the same thing using another CI. In this article, we’ll show how to use GitLab CI, which is part of GitLab.

To start building your image with GitLab CI, you will first need to create a .gitlab-ci.yml file at the root of your repository, commit it and push it.

image: docker:stable

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

build:
  stage: build
  script:
    - docker version

This should result in a build output that shows the versions of the Docker CLI and Engine.

We will now install Docker buildx. Because GitLab CI runs everything in containers and lets you use any image to start those containers, we can use one with buildx preinstalled, like the one we used for CircleCI. And as for CircleCI, we need to start a builder instance.

image: jdrouet/docker-with-buildx:stable

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

build:
  stage: build
  script:
    - docker buildx create --use
    - docker buildx build --platform linux/arm/v7,linux/arm64/v8,linux/amd64 --tag your-username/multiarch-example:gitlab .

And that’s it, your image will now be built for both ARM and x86 platforms.

The last step is now to store the image on Docker Hub. To do so we’ll need an access token from Docker Hub to get write access.

Once you have created it, you’ll have to set it in your project’s CI/CD settings, in the Variables section.

We can then add DOCKER_USERNAME and DOCKER_PASSWORD variables to GitLab CI so that we can log in and push our images.

Once this is done, you can add the login step and the --push option to the buildx command as follows.

build:
  stage: build
  script:
    - docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
    - docker buildx create --use
    - docker buildx build --push --platform linux/arm/v7,linux/arm64/v8,linux/386,linux/amd64 --tag your-username/multiarch-example:gitlab .

And voila, you can now create a multi-arch image each time you make a change in your codebase.

Containerized Python Development – Part 2
Tue, 21 Jul 2020
This is the second part of the blog post series on how to containerize our Python development. In part 1, we showed how to containerize a Python service and the best practices for it. In this part, we discuss how to set up and wire other components to a containerized Python service. We show a good way to organize project files and data, and how to manage the overall project configuration with docker-compose. We also cover the best practices for writing Compose files for speeding up our containerized development process.

Managing Project Configuration with docker-compose

Let’s take as an example an application whose functionality we separate into three tiers, following a microservice architecture. This is a pretty common architecture for multi-service applications. Our example application consists of:

  • a UI tier – running on an nginx service
  • a logic tier – the Python component we focus on
  • a data tier – we use a mysql database to store some data we need in the logic tier

The reason for splitting an application into tiers is that we can easily modify or add new ones without having to rework the entire project.

A good way to structure the project files is to isolate the files and configurations for each service. We can easily do this by having a dedicated directory per service inside the project directory. This is very useful for keeping a clean view of the components and for containerizing each service easily. It also helps when manipulating service-specific files, without having to worry that we could modify another service’s files by mistake.

For our example application, we have the following directories:

Project
├─── web
└─── app
└─── db

We have already covered how to containerize a Python component in the first part of this blog post series. The same applies for the other project components, but we skip the details for them as we can easily access samples implementing the structure we discuss here. The nginx-flask-mysql example provided by the awesome-compose repository is one of them.

This is the updated project structure with the Dockerfile in place. Assume we have a similar setup for the web and db components.

Project
├─── web
├─── app
│    ├─── Dockerfile
│    ├─── requirements.txt
│    └─── src
│         └─── server.py
└─── db

We could now start the containers manually for all our containerized project components. However, to make them communicate we would have to handle the network creation ourselves and attach the containers to it. This is fairly complicated, and it would cost precious development time if we had to do it frequently.
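
To make the pain concrete, here is a rough sketch of the manual equivalent (the image and network names are illustrative, not from the project):

$ docker network create project_net
$ docker run -d --name db --network project_net -e MYSQL_ROOT_PASSWORD=password mysql:8.0.19
$ docker build -t project_app ./app && docker run -d --name app --network project_net project_app
$ docker build -t project_web ./web && docker run -d --name web --network project_net -p 80:80 project_web

And we would have to remember to tear all of this down by hand as well.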

Here is where docker-compose offers a very easy way of coordinating containers and spinning up and taking down services in our local environment. For this, all we need to do is write a Compose file containing the configuration for our project’s services. Once we have it, we can get the project running with a single command.

Compose file

Let’s look at the structure of the Compose file and how we can manage the project services with it.

Below is a sample file for our project. As you can see, we define a list of services. In the db section we specify the base image directly, as we don’t have any particular configuration to apply to it. Meanwhile, our web and app services are going to have their images built from their Dockerfiles. Depending on where the service image comes from, we set either the build or the image field. The build field requires a path with a Dockerfile inside.

docker-compose.yaml

version: "3.7"
services:
  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD=password

  app:
    build: app
    restart: always

  web:
    build: web
    restart: always
    ports:
      - 80:80

To initialize the database, we pass environment variables with the DB name and password, while for our web service we map the container port to localhost in order to be able to access the web interface of our project.

Let’s see how to deploy the project with docker-compose.

All we need to do now is to place the docker-compose.yaml at the root directory of the project and then issue the command for deployment with docker-compose.

Project
├─── docker-compose.yaml
├─── web
├─── app
└─── db

docker-compose is going to take care of pulling the mysql image from Docker Hub and launching the db container, while for our web and app services it builds the images locally and then runs the containers from them. It also takes care of creating a default network and placing all containers in it so that they can reach each other.

All this is triggered with only one command.

$ docker-compose up -d
Creating network "project_default" with the default driver
Pulling db (mysql:8.0.19)...
...
Status: Downloaded newer image for mysql:8.0.19
Building app
Step 1/6 : FROM python:3.8
---> 7f5b6ccd03e9
Step 2/6 : WORKDIR /code
---> Using cache
---> c347603a917d
Step 3/6 : COPY requirements.txt .
---> fa9a504e43ac
Step 4/6 : RUN pip install -r requirements.txt
---> Running in f0e93a88adb1
Collecting Flask==1.1.1
...
Successfully tagged project_app:latest
WARNING: Image for service app was built because it did not already exist. To rebuild this image you must use docker-compose build or docker-compose up --build.
Building web
Step 1/3 : FROM nginx:1.13-alpine
1.13-alpine: Pulling from library/nginx
...
Status: Downloaded newer image for nginx:1.13-alpine
---> ebe2c7c61055
Step 2/3 : COPY nginx.conf /etc/nginx/nginx.conf
---> a3b2a7c8853c
Step 3/3 : COPY index.html /usr/share/nginx/html/index.html
---> 9a0713a65fd6
Successfully built 9a0713a65fd6
Successfully tagged project_web:latest

Creating project_web_1 ... done
Creating project_db_1  ... done
Creating project_app_1 ... done

Check the running containers:

$ docker-compose ps
    Name                  Command                State         Ports
--------------------------------------------------------------------------
project_app_1   /bin/sh -c python server.py      Up
project_db_1    docker-entrypoint.sh --def ...   Up      3306/tcp, 33060/tcp
project_web_1   nginx -g daemon off;             Up      0.0.0.0:80->80/tcp

To stop and remove all project containers run:

$ docker-compose down
Stopping project_db_1 ... done
Stopping project_web_1 ... done
Stopping project_app_1 ... done
Removing project_db_1 ... done
Removing project_web_1 ... done
Removing project_app_1 ... done
Removing network project_default

To rebuild images we can run a build and then an up command to update the state of the project containers:

$ docker-compose build
$ docker-compose up -d

As we can see, it is quite easy to manage the lifecycle of the project containers with docker-compose.

Best practices for writing Compose files

Let us analyze the Compose file and see how we can optimize it by following best practices for writing Compose files.

Network separation

When we have several containers, we need to control how to wire them together. We need to keep in mind that, as we do not set any network in the Compose file, all our containers will end up in the same default network.

This may not be a good thing if we want only our Python service to be able to reach the database. To address this issue, we can define separate networks in the Compose file for each pair of components that need to talk to each other. In this case the web component won’t be able to access the DB.

Docker Volumes

Every time we take down our containers, we remove them and therefore lose the data we stored in previous sessions. To avoid that and persist DB data between different containers, we can exploit named volumes. For this, we simply define a named volume in the Compose file and specify a mount point for it in the db service as shown below:

version: "3.7"
services:
  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    volumes:
      - db-data:/var/lib/mysql

    networks:
      - backend-network
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD=password

  app:
    build: app
    restart: always
    networks:
      - backend-network
      - frontend-network

  web:
    build: web
    restart: always
    ports:
      - 80:80
    networks:
      - frontend-network
volumes:
  db-data:

networks:
  backend-network:
  frontend-network:

Named volumes are not removed by default when we take the project down with docker-compose down; if we want to remove them explicitly, we can pass a flag to the down command.
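
For example:

$ docker-compose down --volumes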

Docker Secrets

As we can observe in the Compose file, we set the db password in plain text. To avoid this, we can exploit Docker secrets to have the password stored and shared securely with the services that need it. We can define secrets and reference them in services as below. The password is stored locally in the project/db/password.txt file and mounted in the containers under /run/secrets/<secret-name>.

version: "3.7"
services:
  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    secrets:
      - db-password

    volumes:
      - db-data:/var/lib/mysql
    networks:
      - backend-network
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db-password

  app:
    build: app
    restart: always
    secrets:
      - db-password

    networks:
      - backend-network
      - frontend-network

  web:
    build: web
    restart: always
    ports:
      - 80:80
    networks:
      - frontend-network
volumes:
  db-data:
secrets:
  db-password:
    file: db/password.txt

networks:
  backend-network:
  frontend-network:

We now have a well-defined Compose file for our project that follows best practices. An example application exercising all the aspects we discussed can be found here.
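
On the application side, the service then reads the password from the mounted secret file instead of from an environment variable. A minimal sketch in Python (the connection details are illustrative; only the secret path follows the convention above):

server.py

import mysql.connector

# read the DB password from the secret file mounted by Compose
with open('/run/secrets/db-password') as secret:
    password = secret.read().strip()

connection = mysql.connector.connect(
    host='db', user='root', password=password, database='example')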

What’s next?

This blog post showed how to set up a container-based multi-service project where a Python service is wired to other services, and how to deploy it locally with docker-compose.

In the next and final part of this series, we show how to update and debug the containerized Python component.

Top Questions for Getting Started with Docker
Mon, 20 Jul 2020
Does Docker run on Windows?

Yes. Docker is available for Windows, MacOS and Linux. Here are the download links:

  • Docker Desktop for Windows
  • Docker Desktop for Mac
  • Linux

What is the difference between Virtual Machines (VM) and Containers?

This is a great question and I get this one a lot. The simplest way I can explain the difference between Virtual Machines and Containers is that a VM virtualizes the hardware and a Container “virtualizes” the OS.

If you take a look at the image above, you can see that there are multiple Operating Systems running when using Virtual Machine technology. This produces a huge difference in start-up times, along with various other constraints and overhead from installing and maintaining a full-blown operating system. Also, with VMs, you can run different flavors of operating systems. For example, I can run Windows 10 and a Linux distribution on the same hardware at the same time. Now let’s take a look at the image for Docker Containers.

As you can see in this image, we only have one Host Operating System installed on our infrastructure. Docker sits “on top” of the host operating system. Each application is then bundled in an image that contains all the configuration, libraries, files and executables the application needs to run.

At the core of the technology, a container is just an operating system process that is run by the OS, but with restrictions on what files it can access and what other resources, such as CPU and networking, it can consume.

Since containers use features of the Host Operating System and therefore share its kernel, they need to be created for that operating system. So, for example, you can not run a container that contains Linux binaries on Windows, or vice versa.

This is just the basics and of course the technical details can be a little more complicated. But if you understand these basic concepts, you’ll have a good foundation for the difference between Virtual Machines and Containers.

What is the difference between an Image and a Container?

This is another question that gets asked very often. I believe some of the confusion stems from the fact that we sometimes interchange these terms when talking about containers. I know I’ve been guilty of it.

An image is a template that is used by Docker to create your running Container. To define an image you create a Dockerfile. When Docker reads and executes the commands inside your Dockerfile, the result is an image that can then be run “inside a container.”

A container, in simple terms, is a running image. You can run multiple instances of your image and you can create, start and stop them as well as connect them to other containers using networks.
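
As a quick illustration (the image and container names here are arbitrary, not from the post), one image can back several containers:

$ docker build -t myapp .
$ docker run -d --name myapp-1 myapp
$ docker run -d --name myapp-2 myapp

One template, two independent running instances.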

What is the difference between Docker and Kubernetes?

I believe the confusion between the two stems from the development community talking as if these two are the same concept. They are not.

Kubernetes is an orchestrator and Docker is a platform for building, shipping and running containers. Docker, in and of itself, does not handle orchestration.

Container Orchestration, in simple terms, is the process of managing and scheduling the running of containers across the nodes that the orchestrator manages.

So generally speaking, Docker runs one instance of a container as a unit. You can run multiple containers of the same image, but Docker will not manage them as a unit.

To manage multiple containers as a unit, you would use an orchestrator. Kubernetes is a container orchestrator; so are AWS ECS and Azure ACI.

Why can’t I connect to my web application running in a container?

By default, containers are secure and isolated from outside network traffic; they do not expose any of their ports. Therefore, if you want to be able to handle traffic coming from outside the container, you need to expose the port your container is listening on. For web applications this is typically port 80 or 443.

To expose a port when running a container, you can pass the --publish or -p flag.

For example:

$ docker run -p 80:80 nginx

This will run an Nginx container and publish port 80 to the outside world.

You can read all about Docker Networking in our documentation.

How do I run multiple applications in one container?

This is a very common question that I get from folks coming from a Virtual Machine background. The reason is that when working with VMs, we can think of our application as owning the whole operating system, and can therefore create multiple processes or runtimes.

When working with containers, it is best practice to map one process to one container, for various architectural reasons that we do not have the space to discuss here. But the biggest reason to run one process inside a container is the tried-and-true KISS principle: Keep It Simple, Simon.

When your containers have one process, they can focus on doing one thing and one thing only. This allows you to scale up and down relatively easily.

Stay tuned to this blog and my twitter handle (@pmckee) for more content on how to design and build applications with containers and microservices.

How do I persist data when running a container?

Containers are immutable, and you should not write data into your container that you would like to persist after that container stops running. You want to think about containers as unchangeable processes that could stop running at any moment and be replaced by another very easily.

So, with that said, how do we provide data for a container to use at runtime, or write data at runtime that can be persisted? This is where volumes come into play.

Volumes are the preferred mechanism to write and read persistent data. Volumes are managed by Docker and can be moved, copied and managed outside of containers.

For local development, I prefer to use bind mounts to access source code outside of my development container.
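
A minimal sketch of both mechanisms (the names are arbitrary): a named volume for data that must survive the container, and a bind mount for live source code:

$ docker volume create app-data
$ docker run -d -v app-data:/var/lib/data myapp
$ docker run -d -v "$(pwd)/src":/code myapp

The named volume persists after the containers are removed; the bind mount simply mirrors the local src directory into the container.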

For an excellent overview of storage, and specifics around volumes and bind mounts, please check out our documentation on Storage.

Conclusion

These are just some of the common questions I get from people new to Docker. If you want to read more common questions and answers, check out our FAQ in our documentation.


Also, please feel free to connect on twitter (@pmckee) and ask questions if you like.

DockerCon 2020: The AWS Sessions
Thu, 16 Jul 2020
Last week we announced that Docker and AWS created an integrated and frictionless experience for developers to leverage Docker Compose, Docker Desktop, and Docker Hub to deploy their apps on Amazon Elastic Container Service (Amazon ECS) and Amazon ECS on AWS Fargate. On the heels of that announcement, we continue the latest series of blog articles focusing on developer content that we are curating from DockerCon LIVE 2020, this time with a focus on AWS. If you are running your apps on AWS, bookmark this post for relevant insights, all in one place for easy access.

As more developers adopt and learn Docker, and as more organizations jump head-first into containerizing their applications, AWS continues to be the cloud of choice for deployment. Earlier this year Docker and AWS collaborated on the Compose-spec.io open specification and, as mentioned on the Docker blog by my colleague Chad Metcalf, deploying straight from Docker to AWS has never been easier. It’s just another step in constantly putting ourselves in the shoes of you, our customer, the developer.

The replays of these three AWS sessions are where you can learn more about container trends for developers, adopting microservices, and building and deploying multi-container apps to AWS.

Interview with Deepak Singh, AWS

Deepak Singh – AWS

28365365备用网址Deepak Singh is Vice President of Compute Services at AWS and has broad responsibilities across a number of businesses and teams, including AWS Container Services and the Amazon Open Source Program Office (OSPO). Docker was delighted to have Deepak interviewed by The Cube. He dives into the latest AWS and open source technologies, cloud native development, and the state of containerization in 2020 for developers. Deepak is a dynamic thought leader and you definitely want to catch his interview by Stuart Miniman.

Access Logging Made Easy With Envoy and Fluent Bit

Carmen Puccio – AWS

As customers start to adopt microservices patterns in their organizations, they typically run into a challenge when it comes to logging. One of the challenges of a polyglot microservices architecture is trying to correlate different access logs into a consistent format as they are sent to a centralized logging solution. Imagine trying to find a particular error or status code across different services that are interacting with each other, with no data consistency in your logs. Be sure to watch this session by AWS Principal Solutions Architect Carmen Puccio, where you will learn how to implement a consistent and structured log format for your microservices applications with Envoy and Fluent Bit.

Build & Deploy Multi-Container Applications to AWS

Lukonde Mwila

As the cloud-native approach to development and deployment becomes more prevalent, it’s an exciting time for software engineers to be equipped to dockerize multi-container applications and deploy them to the cloud. In this talk, Lukonde Mwila, Software Engineer at Entelect, covers the following topics: Docker Compose, containerizing an Nginx server, containerizing a React app, containerizing a Node.js app, containerizing a MongoDB app, running a multi-container app locally, creating a CI/CD pipeline, adding a build stage to test containers and push images to Docker Hub, and deploying a multi-container app to AWS Elastic Beanstalk.

Containerized Python Development – Part 1
Wed, 15 Jul 2020
Developing Python projects in local environments can get pretty challenging if more than one project is being developed at the same time. Bootstrapping a project may take time, as we need to manage versions and set up dependencies and configurations for it. We used to install all project requirements directly in our local environment and then focus on writing the code. But having several projects in progress in the same environment quickly becomes a problem, as we may run into configuration or dependency conflicts. Moreover, when sharing a project with teammates, we would need to coordinate our environments as well. For this we have to define our project environment in a way that makes it easily shareable.

A good way to do this is to create isolated development environments for each project. This can easily be done by using containers, with docker-compose to manage them. We cover this in a series of blog posts, each one with a specific focus.

This first part covers how to containerize a Python service/tool and the best practices for it.

Requirements

To easily exercise what we discuss in this blog post series, we need to install a minimal set of tools required to manage containerized environments locally:

  • Docker Desktop (which bundles docker-compose), or Docker Engine and docker-compose on Linux

Containerize a Python service

We show how to do this with a simple Flask service, such that we can run it standalone without needing to set up other components.

server.py

from flask import Flask
server = Flask(__name__)

@server.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    server.run()

In order to run this program, we need to make sure we have all the required dependencies installed first. One way to manage dependencies is by using a package installer such as pip. For this we need to create a requirements.txt file and write the dependencies in it. An example of such a file for our simple server.py is the following:

requirements.txt

Flask==1.1.1

We now have the following structure:

app
├─── requirements.txt
└─── src
     └─── server.py

We create a dedicated directory for the source code to isolate it from other configuration files. We will see later why we do this.

To execute our Python program, all that is left to do is to install a Python interpreter and run it.

We could run this program locally. But this goes against the purpose of containerizing our development, which is to keep a clean standard development environment that allows us to easily switch between projects with different, conflicting requirements.

Let’s have a look next at how we can easily containerize this Python service.

Dockerfile 

The way to get our Python code running in a container is to pack it as a Docker image and then run a container based on it. The steps are sketched below.

To generate a Docker image we need to create a Dockerfile which contains instructions needed to build the image. The Dockerfile is then processed by the Docker builder which generates the Docker image. Then, with a simple docker run command, we create and run a container with the Python service.

Analysis of a Dockerfile

An example of a Dockerfile containing instructions for assembling a Docker image for our hello world Python service is the following:

Dockerfile

# set base image (host OS)
FROM python:3.8

# set the working directory in the container
WORKDIR /code

# copy the dependencies file to the working directory
COPY requirements.txt .

# install dependencies
RUN pip install -r requirements.txt

# copy the content of the local src directory to the working directory
COPY src/ .

# command to run on container start
CMD [ "python", "./server.py" ]

For each instruction or command from the Dockerfile, the Docker builder generates an image layer and stacks it upon the previous ones. Therefore, the Docker image resulting from the process is simply a read-only stack of different layers.

We can also observe in the output of the build command the Dockerfile instructions being executed as steps.

$ docker build -t myimage .
Sending build context to Docker daemon 6.144kB
Step 1/6 : FROM python:3.8
3.8: Pulling from library/python
...
Status: Downloaded newer image for python:3.8
---> 8ecf5a48c789
Step 2/6 : WORKDIR /code
---> Running in 9313cd5d834d
Removing intermediate container 9313cd5d834d
---> c852f099c2f9
Step 3/6 : COPY requirements.txt .
---> 2c375052ccd6
Step 4/6 : RUN pip install -r requirements.txt
---> Running in 3ee13f767d05
...
Removing intermediate container 3ee13f767d05
---> 8dd7f46dddf0
Step 5/6 : COPY ./src .
---> 6ab2d97e4aa1
Step 6/6 : CMD python server.py
---> Running in fbbbb21349be
Removing intermediate container fbbbb21349be
---> 70a92e92f3b5
Successfully built 70a92e92f3b5
Successfully tagged myimage:latest

Then, we can check that the image is in the local image store:

$ docker images
REPOSITORY    TAG       IMAGE ID        CREATED          SIZE
myimage       latest    70a92e92f3b5    8 seconds ago    991MB

During development, we may need to rebuild the image for our Python service multiple times and we want this to take as little time as possible. We analyze next some best practices that may help us with this.

Development Best Practices for Dockerfiles

We focus now on best practices for speeding up the development cycle. For production-focused ones, this blog post and the docs cover them in more detail.

Base Image

The first instruction from the Dockerfile specifies the base image on which we add new layers for our application. The choice of the base image is pretty important as the features it ships may impact the quality of the layers built on top of it. 

When possible, we should always use official images which are in general frequently updated and may have less security concerns.

The choice of a base image can also impact the size of the final one. If we prefer size over other considerations, we can use one of the base images with a very small size and low overhead. These images are usually based on the alpine distribution and are tagged accordingly. However, for Python applications, the slim variant of the official Docker Python image works well for most cases (e.g. python:3.8-slim).

Instruction order matters for leveraging build cache

When building an image frequently, we definitely want to use the builder cache mechanism to speed up subsequent builds. As mentioned previously, the Dockerfile instructions are executed in the order specified. For each instruction, the builder first checks its cache for an image to reuse. When a change in a layer is detected, that layer and all the ones coming after it are rebuilt.

For an efficient use of the caching mechanism, we need to place the instructions for layers that change frequently after the ones that change less often.

Let’s check our Dockerfile example to understand how the instruction order impacts caching. The interesting lines are the ones below.

...
# copy the dependencies file to the working directory
COPY requirements.txt .

# install dependencies
RUN pip install -r requirements.txt

# copy the content of the local src directory to the working directory
COPY src/ .
...

During development, our application’s dependencies change less frequently than the Python code. Because of this, we choose to install the dependencies in a layer preceding the code one. Therefore we copy the dependencies file and install them and then we copy the source code. This is the main reason why we isolated the source code to a dedicated directory in our project structure.

Multi-stage builds 

Although this may not be really useful during development time, we cover it quickly as it is interesting for shipping the containerized Python application once development is done.

What we seek in using multi-stage builds is to strip the final application image of all unnecessary files and software packages and to deliver only the files needed to run our Python code. A quick example of a multi-stage Dockerfile for our previous example is the following:

# first stage
FROM python:3.8 AS builder
COPY requirements.txt .

# install dependencies to the local user directory (eg. /root/.local)
RUN pip install --user -r requirements.txt

# second unnamed stage
FROM python:3.8-slim
WORKDIR /code

# copy only the dependencies installation from the 1st stage image
COPY --from=builder /root/.local /root/.local
COPY ./src .

# update PATH environment variable
ENV PATH=/root/.local/bin:$PATH

CMD [ "python", "./server.py" ]

Notice that we have a two-stage build where we name only the first one as builder. We name a stage by adding AS <NAME> to the FROM instruction, and we use this name in the COPY instruction where we want to copy only the necessary files to the final image.

The result of this is a slimmer final image for our application:

$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED         SIZE
myimage       latest   70a92e92f3b5   2 hours ago     991MB
multistage    latest   e598271edefa   6 minutes ago   197MB

In this example we relied on pip’s --user option to install dependencies to the local user directory and then copied that directory to the final image. There are, however, other solutions available, such as using virtualenv or building packages as wheels and copying and installing them into the final image.
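
For completeness, here is a minimal sketch of the virtualenv alternative, under the same project layout (this variant is our illustration, not from the original example):

# first stage: install dependencies into a virtual environment
FROM python:3.8 AS builder
RUN python -m venv /venv
ENV PATH=/venv/bin:$PATH
COPY requirements.txt .
RUN pip install -r requirements.txt

# second stage: copy only the virtual environment
FROM python:3.8-slim
COPY --from=builder /venv /venv
ENV PATH=/venv/bin:$PATH
WORKDIR /code
COPY src/ .
CMD [ "python", "./server.py" ]

Because both stages share the same Python base version, the virtual environment’s absolute path stays valid when copied across.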

Run the container

After writing the Dockerfile and building the image from it, we can run the container with our Python service.

$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
myimage      latest   70a92e92f3b5   2 hours ago   991MB
...

$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

$ docker run -d -p 5000:5000 myimage
befb1477c1c7fc31e8e8bb8459fe05bcbdee2df417ae1d7c1d37f371b6fbf77f

We have now containerized our hello world server, and we can query the port mapped to localhost.

$ docker ps
CONTAINER ID   IMAGE     COMMAND                    PORTS                    ...
befb1477c1c7   myimage   "/bin/sh -c 'python ..."   0.0.0.0:5000->5000/tcp   ...

$ curl http://localhost:5000
"Hello World!"

What’s next?

This post showed how to containerize a Python service for a better development experience. Containerization not only provides deterministic results easily reproducible on other platforms, but also avoids dependency conflicts and lets us keep a clean standard development environment. A containerized development environment is easy to manage and share with other developers, as it can be easily deployed without any change to their standard environment.

In the next post of this series, we will show how to set up a container-based multi-service project where the Python component is connected to other external ones, and how to manage the lifecycle of all these project components with docker-compose.

How To Deploy Containers to Azure ACI using Docker CLI and Compose
Mon, 13 Jul 2020
Running containers in the cloud can be hard and confusing. There are so many options to choose from, and then there is understanding how all the different clouds work, from virtual networks to security. Not to mention orchestrators. It’s a learning curve, to say the least.

At Docker we are making the Developer Experience (DX) simpler. As an extension of that, we want to provide the same beloved Docker experience that developers use daily and integrate it with the cloud. Microsoft’s Azure ACI provides an awesome platform to do just that.

In this tutorial, we take a look at running single containers and multiple containers with Compose in Azure ACI. We’ll walk you through setting up your Docker context and even simplify logging into Azure. At the end of this tutorial, you will be able to use familiar Docker commands to deploy your applications into your own Azure ACI account.

Prerequisites

To complete this tutorial, you will need:

Run Docker Container on ACI

The integration with Azure ACI is very similar to working with local containers. The development teams have thought very deeply about the developer experience and have tried to make the UX for working with ACI as close as possible to working with local containers.

Let’s run a simple Nginx web server on Azure ACI.

Log into Azure

You do not need to have the Azure CLI installed on your machine to run Docker images in ACI. Docker takes care of everything.

The first thing you need to do is to log in to Azure.

$ docker login azure

This will open a browser window which will allow you to log in to Azure.

Select your account and log in. Once you are logged in, you can close the browser window.

Azure ACI Context

Docker has the concept of a context. You can think of a context as a place where you can run Docker containers. It’s a little more complicated than that, but it is a good enough description for now. In this tutorial, we use our local context and the new ACI context.

Let’s first take a look at what contexts we currently have on our local development machine. Run the following command to see a list of contexts.

$ docker context list
NAME                TYPE                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT                                 ORCHESTRATOR

default *           moby                Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   http://kubernetes.docker.internal:6443 (default)   swarm

Depending on whether you have already created another context, you should only see one context. This is the default context that points to your local Docker engine, labeled as “moby”. You can identify the current context that will be used for Docker commands by the “*” beside the name of the active context.

Now let’s create an ACI context that we can run containers with. We’ll use the docker context create aci command to create our context.

Let’s take a look at the help for creating an aci context.

$ docker context create aci --help
Create a context for Azure Container Instances

Usage:
  docker context create aci CONTEXT [flags]

Flags:
      --description string       Description of the context
  -h, --help                     help for aci
      --location string          Location (default "eastus")
      --resource-group string    Resource group
      --subscription-id string   Location

Global Flags:
      --config DIRECTORY   Location of the client config files DIRECTORY (default "/Users/peter/.docker")
  -c, --context string     context
  -D, --debug              enable debug output in the logs
  -H, --host string        Daemon socket(s) to connect to

Underneath the Flags section of the help, you can see that we have the option to set the location, resource-group, and subscription-id.

You can pass these flags into the create command. If you do not, the Docker CLI will ask you these questions in interactive mode. Let’s do that now.

$ docker context create aci myaci

The first thing the CLI will ask is which subscription you would like to use. If you only have one, then Docker will use that one.

Using only available subscription : Azure subscription 1 (b3c07e4a-774e-4d8a-b071-xxxxxxxxxxxx)

Now we need to select the resource group we want to use. You can either choose one that has been previously created or choose “create a new resource group”. I’ll choose to create a new one.

Resource group "c3eea3e7-69d3-4b54-83cb-xxxxxxxxxxxx" (eastus) created
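
If you prefer to skip the interactive prompts, the flags from the help output above can be passed directly (the values below are placeholders):

$ docker context create aci myaci \
    --subscription-id <your-subscription-id> \
    --resource-group <your-resource-group> \
    --location eastus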

Okay, our ACI context is set up. Let’s list our contexts.

$ docker context list

You should see the ACI context you just created.

Run Containers on ACI

Now that we have our ACI context set up, we can run containers in the cloud. There are two ways to tell Docker which context you want your commands to be applied to.

The first is to pass the --context flag. The other is to tell Docker which context we want to use for all subsequent commands by switching contexts. For now, let’s use the --context flag.

$ docker --context myaci run -d --name web -p 80:80 nginx
[+] Running 2/2
 ⠿ web                         Created
 ⠿ single--container--aci      Done
web

Here you can see that Docker interacted with ACI, created a container instance named “web” and started a single instance.

Open your Azure portal and navigate to container instances.

We can also run Docker CLI commands that you are already familiar with such as ps and logs.

Switch Contexts

Let’s take a look at our running containers. But before we do that, let’s switch our active context to the ACI context we set up above, so we do not have to keep typing --context with every command.

$ docker context use myaci

Now let’s run the ps command without passing the --context flag.

$ docker ps
CONTAINER ID   IMAGE   COMMAND   STATUS    PORTS
web            nginx             Running   52.224.73.190:80->80/tcp

Nice. Since we told Docker to use the myaci context, we see a list of containers running in our Azure account and not on our local machine.

Let’s make sure our container is running. Copy the IP address of the container from the above ps output and paste it into your browser address bar. You can see our Nginx web server running!

Like I mentioned above, we can also take a look at the container’s logs. 

$ docker logs web

To stop and remove the container, run the following command.

$ docker rm web

BOOM!

That was pretty easy, and the integration is smooth. With a few Docker commands that you are already familiar with and a couple of new ones, we were able to run a container in ACI from our development machine quickly and simply.

But we’re not done!

Docker Compose

We can also run multiple containers using Docker Compose. With the ACI integration, we now have the ability to run Compose commands from the Docker CLI against ACI. Let’s do that next.

Fork the Code Repository

I’m using a simple Python Flask application that logs timestamps to a Redis database. Let’s fork the repository and then clone it to your local machine.

Open your favorite browser and navigate to: http://github.com/pmckeetx/timestamper

Click on the “fork” button in the top right corner of the window. This will make a “copy” of the demo repository in your GitHub account.

On your forked version of the repository, click the green “Code” button and copy the GitHub URL.

Open up a terminal on your local machine and run the following git command to clone the repository to your local development machine.

Make sure you replace the <<github username>> with your GitHub username.

git clone git@github.com:<<github username>>/timestamper.git

Build and Run Locally

Make sure you are in the root directory of the timestamper project and follow these steps to build the images and start the application with Docker Compose.

First we need to add your Docker ID to the image name in our docker-compose.yml file. Open the docker-compose.yml file in an editor and replace <<username>> with your Docker ID.
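
For reference, the relevant part of the compose file looks roughly like this; treat it as a sketch, since the file in the repository is the source of truth:

version: "3.7"
services:
  frontend:
    build: .                           # builds the Flask app image from this directory
    image: <<username>>/timestamper    # replace <<username>> with your Docker ID
    ports:
      - "5000:5000"
  backend:
    image: redis:alpine                # timestamps are stored in Redis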

Next, we need to make sure we are using the local Docker context.

$ docker context use default

Now we can build and start our application using Docker Compose.

$ docker-compose up --build
Building frontend
Step 1/7 : FROM python:3.7-alpine
 ---> 6ca3e0b1ab69
Step 2/7 : WORKDIR /app
...
frontend_1  |  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
frontend_1  |  * Restarting with stat
frontend_1  |  * Debugger is active!
frontend_1  |  * Debugger PIN: 622-764-646

Docker will build our timestamper image and then run the Redis database and our timestamper containers.

Navigate to http://localhost:5000 and click the Timestamp! button a couple of times.
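
You can also confirm from the terminal that the app is answering locally:

$ curl http://localhost:5000    # should return the app’s HTML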

Compose on ACI

Now let’s run our application on ACI using the new Docker Compose integration.

We’ll first need to push our image to Docker Hub so ACI can pull the image and run it. Run the following command to push your image to your Docker Hub account.

$ docker-compose push
Pushing frontend (pmckee/timestamper:latest)...
The push refers to repository [docker.io/pmckee/timestamper]
6e899582609b: Pushed
...
50644c29ef5a: Layer already exists
latest: digest: sha256:3ce2607f101a381b36beeb0ca1597cce9925d17a0f826cac0f7e0365386a3042 size: 2201

Now that our image is on Docker Hub, we can use Compose to run the application on ACI.

First let’s switch to our ACI context.

$ docker context use myaci

Remember, to see a list of contexts and which one is in use, you can run the list contexts command.

$ docker context list

Okay, now that we are using the ACI context, let’s start our application in the cloud.

$ docker compose up
[+] Running 3/3
 ⠿ timestamper     Created
 ⠿ frontend        Done
 ⠿ backend         Done

Let’s verify that our application is up and running. To get the IP address of our frontend, let’s list our running containers.

$ docker ps
CONTAINER ID               IMAGE                    COMMAND    STATUS     PORTS
timestamper_frontend       pmckee/timestamper                  Running    40.71.234.128:5000->5000/tcp
timestamper_backend        redis:alpine                        Running

Copy the IP address and port listed above and paste into your favorite browser.

Let’s take a look at the logs for our Redis container.

$ docker logs timestamper_backend
1:C 13 Jul 2020 18:21:12.044 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
...
1:M 13 Jul 2020 18:21:12.046 # Server initialized
1:M 13 Jul 2020 18:21:12.047 * Ready to accept connections

Yes, sir! That is a Redis container running in ACI! Pretty cool.

After you play around a bit, you can take down the compose application by running compose down.

$ docker compose down

Conclusion

We saw how simple it is to run a single container, or multiple containers with Compose, on Azure using the new ACI integration. If you want to help influence or suggest features, you can do that on our public Roadmap.

If you want to learn more about Compose and all the cool things happening around the open source initiative, please check out Awesome Compose and the open source Compose Specification.

The post How To Deploy Containers to Azure ACI using Docker CLI and Compose appeared first on Docker Blog.

Docker Blog http://www.maryputnam.com/blog/from-docker-straight-to-aws/ Thu, 09 Jul 2020 16:00:00 +0000 http://www.maryputnam.com/blog/?p=26652 Just about six years ago to the day Docker hit the first milestone for Docker Compose, a simple way to lay out your containers and their connections. A talks to B, B talks to C, and C is a database. Fast forward six years and the container ecosystem has become complex. New managed container services have […]

The post From Docker Straight to AWS appeared first on Docker Blog.

Just about six years ago to the day, Docker hit the first milestone for Docker Compose, a simple way to lay out your containers and their connections. A talks to B, B talks to C, and C is a database. Fast forward six years and the container ecosystem has become complex. New managed container services have arrived, bringing their own runtime environments, CLIs, and configuration languages. This complexity serves the needs of operations teams who require fine-grained control, but it carries a high price for developers.

One thing that has remained constant over this time is that developers love the simplicity of Docker and Compose. This led us to ask: why do developers now have to choose between simple and powerful? Today, I am excited to finally be able to talk about the result of what we have been working on for over a year: providing developers power and simplicity from desktop to the cloud using Compose. Docker is expanding our strategic partnership with Amazon and integrating the Docker experience you already know and love with Amazon Elastic Container Service (ECS) on AWS Fargate. Deploying from Docker straight to AWS has never been easier.

Today this functionality is being made available as a beta UX that uses docker ecs to drive commands. Later this year, when the functionality becomes generally available, it will become part of our new Docker Contexts and will let you just run docker run and docker compose.
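
As a rough sketch of the beta flow (the exact commands come from the beta plugin and may change before GA; check the GitHub repository linked below for the current syntax):

$ docker ecs setup           # create and configure an ECS-enabled context (beta)
$ docker ecs compose up      # deploy a Compose application to ECS on Fargate
$ docker ecs compose down    # tear the application back down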

To learn more about what we are building together with Amazon, go read Carmen Puccio’s post over at the Amazon Container blog. After that, register for the Amazon Cloud Container Conference and come see the session Carmen and I are presenting at 3:45 PM Pacific.

We are extremely excited for you to try out the public beta starting right now. To get started, sign up for a Docker ID (or use your existing one) and download the latest version of Docker Desktop Edge, 2.3.3.0, which includes the new experience. You can also head straight over to the GitHub repository, which includes the demo from the conference session so you can follow along. Try it out, report issues, and let us know what other features you would like to see on the Roadmap!

The post From Docker Straight to AWS appeared first on Docker Blog.

Docker Blog http://www.maryputnam.com/blog/multi-arch-build-what-about-travis/ Wed, 08 Jul 2020 16:00:00 +0000 http://www.maryputnam.com/blog/?p=26420 Following the previous article where we saw how to build multi arch images using GitHub Actions, we will now show how to do the same thing using another CI. In this article, we’ll consider Travis, which is one of the most tricky ones to use for this use case. To start building your image with […]

The post Multi-arch build, what about Travis? appeared first on Docker Blog.

Following the previous article, where we saw how to build multi-arch images using GitHub Actions, we will now show how to do the same thing using another CI. In this article, we’ll consider Travis, which is one of the trickiest to use for this use case.

To start building your image with Travis, you will first need to create a .travis.yml file at the root of your repository.

language: bash
dist: bionic
services:
  - docker
script:
  - docker version

You may notice that we specified “bionic” to get the latest version of Ubuntu available: Ubuntu 18.04 (Bionic Beaver). As of today (May 2020), if you run this script, you’ll see that the Docker Engine version it provides is 18.06.0-ce, which is too old to use buildx. So we’ll have to install Docker manually.

language: bash
dist: bionic
before_install:
  - sudo rm -rf /var/lib/apt/lists/*
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) edge"
  - sudo apt-get update
  - sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
script:
  - docker version

As you can see in the previous script, the installation process requires adding Docker’s signing key in order to synchronize the package database and download packages from the Docker APT repository. We can then install the latest version of Docker available for Ubuntu 18.04. Once you have run this, you can see that we now have version 19.03 of the Docker Engine.

At this point we are able to interact with the Docker CLI, but we don’t yet have the buildx plugin installed. To install it, we will download it from GitHub.

language: bash
dist: bionic
before_install:
  - sudo rm -rf /var/lib/apt/lists/*
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) edge"
  - sudo apt-get update
  - sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
  - mkdir -vp ~/.docker/cli-plugins/
  - curl --silent -L "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-amd64" > ~/.docker/cli-plugins/docker-buildx
  - chmod a+x ~/.docker/cli-plugins/docker-buildx
script:
  - docker buildx version

We are now able to use buildx. It’s verbose, and because of how Travis works and the versions it’s based on, we won’t be able to shorten this by using a Docker image like we did on CircleCI. We’ll have to keep this big boilerplate at the top of our script, and it adds about 1.5 minutes to each build.

Now it’s time to build our image for multiple architectures. We’ll use the same Dockerfile that we used for the previous article and the same build command:

FROM debian:buster-slim
RUN apt-get update \
  && apt-get install -y curl \
  && rm -rf /var/lib/apt/lists/*
ENTRYPOINT [ "curl" ]

Modify the Travis configuration file to have the following in the `script` section:

script:
  - docker buildx build --platform linux/arm/v7,linux/arm64/v8,linux/amd64 --tag your-username/multiarch-example:buildx-latest .

If you launch it like this, you will see the following error:

multiple platforms feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use")

This is because no BuildKit builder instance has been started yet. If we add the suggested docker buildx create --use line to our configuration file and run it again, buildx will have a running BuildKit instance to build the multi-arch images, as shown below.
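
With that line added, the script section becomes:

script:
  - docker buildx create --use
  - docker buildx build --platform linux/arm/v7,linux/arm64/v8,linux/amd64 --tag your-username/multiarch-example:buildx-latest .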

Navigating to the Travis dashboard, you should see the build complete successfully.

The last step is to store the image on Docker Hub. To do so, we’ll need an access token from Docker Hub with write access.

Once you have created an access token, you’ll have to add it to your Travis project settings in the “Settings” section.

We can then create DOCKER_USERNAME and DOCKER_PASSWORD environment variables to log in with afterward.
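
If you prefer the Travis CLI to the web UI, the same variables can be set from a terminal (this assumes the travis gem is installed and you are logged in; values set this way are hidden from build logs by default):

$ travis env set DOCKER_USERNAME your-username
$ travis env set DOCKER_PASSWORD your-access-token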

Once this is done, you can add the login step and the --push option to the buildx command as follows.

script:
  - docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
  - docker buildx create --use
  - docker buildx build --push --platform linux/arm/v7,linux/arm64/v8,linux/amd64 --tag your-username/multiarch-example:buildx-latest .

And voilà, you can now build a multi-arch image each time you make a change in your codebase.
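
To double-check that the pushed tag really contains all three platforms, you can inspect its manifest list:

$ docker buildx imagetools inspect your-username/multiarch-example:buildx-latest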

The post Multi-arch build, what about Travis? appeared first on Docker Blog.
