For quite some time now, containers have been one of the most standard and straightforward ways to deploy software. It's what we usually do, but every now and then there is an exception. We've worked in both energy and finance, where high-security servers without any internet access are common. Installing dependencies in such an environment is tricky, which is why it's so convenient to prepare a Docker image first. This year we had a project where we had to deploy in just such an environment. Initially we had specified that the target machine must come with Docker and prepared accordingly, but to our unwelcome surprise it came with Windows Server 2019, where getting Docker running would have been difficult even with internet access. We didn't feel like giving up though, so here's the story of what we did.
| Packing and deploying a Python Environment by midjourney |
The whole situation reminded us of a similar scenario we had dealt with when working with a large Asian power grid back in 2019. What we had used back then was a tool named conda-pack. It is specifically designed to archive conda environments so that they can be shipped to target machines. It had been some years since we last used it, but we quickly got the hang of it again. Our service, which had been developed to be deployed via Docker, needed some adjustments to run on Windows Server 2019, but we got it done on time. The most challenging aspect was that we ourselves were not allowed to interact with the target machine. Instead we had to arrange sessions with the client's IT people, so that they could deploy our software. They reported every unexpected error back to us, and we had to fix it without being able to test, since we had no comparable environment to test on. To make matters more complicated, there was another company involved that also provided systems to the client, and our service had to interact with theirs. As a side note, for license compliance reasons we opted not to use any software from the official Anaconda / Miniconda channels; only conda-forge packages or self-built software were used.
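For reference, here is a minimal sketch of the conda-pack workflow as we understand it. The environment name, Python version, and file names are placeholders, not the exact values from our project:

```shell
# On a Windows build machine: create the environment from conda-forge only
# (no defaults channel, for the licensing reasons mentioned above).
conda create -n myservice --override-channels -c conda-forge python=3.10
conda install -n myservice --override-channels -c conda-forge conda-pack

# Pack the whole environment into a single archive.
conda pack -n myservice -o myservice.tar.gz

# On the Windows Server target (no internet needed):
#   mkdir myservice
#   tar -xzf myservice.tar.gz -C myservice
#   myservice\Scripts\activate.bat
#   myservice\Scripts\conda-unpack.exe   # rewrites the hard-coded prefix paths
```

Note that conda-pack archives are platform-specific, so an environment destined for Windows Server has to be built on Windows in the first place.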
| Sketch of how we usually deploy vs. how we had to deploy in this scenario. |
We added two sketches to this blog post explaining the deployment process and the system itself. In the deployment sketch we want to highlight the difference between what we usually do, i.e. what we prefer to do: keep the code in a repo, run a build pipeline with a Dockerfile containing all the steps needed to create a working service image, and have a deployment pipeline create the container with all the necessary configuration. What we ended up doing in this case was creating a conda-packed environment containing the service as .pyc files, which the client then had to set up manually on the target machine under our supervision to finally create a service there. Quite the workaround...
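Shipping the service as .pyc files can be done with the standard-library compileall module. The helper below is our own illustration of the idea, not the actual build script we used: it copies a source tree, compiles every module to bytecode next to its source (`legacy=True`, so the files land outside `__pycache__` and stay importable on their own), then deletes the .py files:

```python
import compileall
import pathlib
import shutil


def ship_as_pyc(src_dir: str, out_dir: str) -> None:
    """Copy a source tree and replace every .py file with its .pyc.

    legacy=True writes module.pyc next to module.py instead of into
    __pycache__, so the tree remains importable after the sources are
    removed (Python falls back to sourceless .pyc imports).
    """
    shutil.copytree(src_dir, out_dir)
    compileall.compile_dir(out_dir, legacy=True, quiet=1)
    for py in pathlib.Path(out_dir).rglob("*.py"):
        py.unlink()
```

One caveat of this approach: bytecode is tied to the interpreter's major.minor version, so the .pyc files must be produced with the same Python version that the packed environment ships.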
And for the system sketch: the system basically consists of three components, two developed by us. One runs on our infrastructure (deployed in the preferred way), one runs on a server provided by a third party, and the third is a service operated by that third party. Our two services have to interact with each other and with the third-party service in order to form "one single system" that can automatically handle customer requests on the client's behalf. Requests that cannot be handled by this system are handled by the client directly, but we were not part of that.
| Sketch of the system architecture, i.e. who interacts with which component. |

