Throughout the book, we will be writing Go programs that are compiled to binaries and run directly on our system. However, in later chapters we will use docker-compose to build and run multiple Go applications. These applications run without any real problem on our local system; our ultimate goal, though, is to run these programs on servers and access them over the internet.
During the 1990s and early 2000s, the standard way to deploy applications to the internet was to get a server instance, copy the code or binary onto the instance, and then start the program. This worked great for a while, but soon complications began to arise. Here are a few of them:
- Code that worked on the developer's machine might not work on the server.
- Programs that ran perfectly on a server instance might fail upon applying the latest patch to the server's OS.
- For every new instance added as part of a service, various installation scripts had to be run so that the new instance could be brought on par with all the other instances. This could be a very slow process.
- Extra care had to be taken to ensure that the new instance and all the software versions installed on it were compatible with the APIs being used by our program.
- It was also important to ensure that all config files and important environment variables were copied to the new instance; otherwise, the application might fail with little or no indication of the cause.
- Usually, the local, test, and production systems were all configured differently, which meant it was possible for our application to fail on any one of the three types of systems. If such a situation occurred, we would end up spending extra time and effort trying to figure out whether the issue was specific to one particular instance, one particular system, and so on.
It would be great if we could prevent such situations from arising in a sensible manner. Containers try to solve this problem using OS-level virtualization. What does this mean?
All programs and applications run in a section of memory known as user space. This lets the operating system ensure that a program cannot cause major hardware or software issues, and it allows us to recover from any crashes that occur in user space applications.
The real advantage of containers is that they allow us to run applications in isolated user spaces, and we can even customize the following attributes of user spaces:
- Connected devices such as network adapters and TTY
- CPU and RAM resources
- Files and folders accessible from the host OS
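As a concrete sketch of how these attributes are customized in practice, each one maps onto a flag of the `docker run` command (discussed in the next section). This assumes Docker is installed; the image name `myapp:latest` is a placeholder for an image you have built yourself:

```shell
# Run a container in an isolated user space, customizing
# the attributes listed above (placeholder image "myapp:latest"):
#   --cpus / --memory  limit CPU and RAM resources
#   -v                 expose a host folder inside the container (read-only here)
#   -p                 map a container port onto the host's network adapter
docker run \
  --name myapp \
  --cpus="1.5" \
  --memory="512m" \
  -v "$(pwd)/config:/app/config:ro" \
  -p 8080:8080 \
  myapp:latest
```

If the container misbehaves, these limits confine the damage: it can exhaust only its own 512 MiB and 1.5 cores, and it can read, but not modify, the mounted host folder.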
How does this help us solve the problems stated earlier? To answer that, let's take a deeper look at Docker.