Monday, July 11, 2016

Docker based WordPress dev environment

   Working in Agile means being flexible when it comes to team tasks as well. A good QA should be fully responsible not only for the testing activities, but for everything else that concerns product quality. If the team needs a process improvement – initiate it. Better infrastructure – build it. Testing has not been a separate SDLC phase for a long time now, but rather an integrated development activity.
   Let’s look at our problem:
  • shared and slow development/integration server
  • sluggish testing feedback loops
  • multiple OS-based local development environments (Unix, Mac, Windows)
  • complex frontend and backend team integrations
  • the need to share content between team members in a timely manner

   Most of the above-mentioned issues are caused by manually managed infrastructure. Going through the options with the team, we decided that a Docker-based replacement should be built. Moving to IaC (Infrastructure as Code) is not an easy task even with a dedicated DevOps team at hand. But sometimes the only guy with “Automation” in his job title is the QA engineer. So facing such a challenge is a great learning opportunity (and IMHO, part of the day-to-day work).
    First, we should get a decent understanding of how WordPress development works and how our team currently manages the process. Probably most of us have seen the following architecture:


    
    However, this is not the case with Docker containers, as we can see from the Dockerfile. In this scenario, both WordPress and Apache run inside containers (on the developer machine). That leaves us with just the MySQL environment configs, as shown on the Docker Hub page. One more thing to note: wp-config.php ships with default values, so you need to either append your custom code or replace the file entirely. An example is when we need the site to resolve to the localhost URL and not the integration server one.
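A minimal sketch of such an override — the port (8000) is a placeholder matching whatever you publish with `docker run -p`, and the WORDPRESS_DB_* variables are the ones the official WordPress image documents:

```php
// wp-config.php (excerpt) — point WordPress at the local container
// URL instead of the integration server one.
define( 'WP_HOME',    'http://127.0.0.1:8000' );
define( 'WP_SITEURL', 'http://127.0.0.1:8000' );

// Database settings read from the container environment,
// falling back to the image defaults if a variable is unset.
define( 'DB_NAME',     getenv( 'WORDPRESS_DB_NAME' )     ?: 'wordpress' );
define( 'DB_USER',     getenv( 'WORDPRESS_DB_USER' )     ?: 'root' );
define( 'DB_PASSWORD', getenv( 'WORDPRESS_DB_PASSWORD' ) ?: '' );
define( 'DB_HOST',     getenv( 'WORDPRESS_DB_HOST' )     ?: 'mysql' );
```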

and on our CLI run 
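A sketch of the commands, assuming the official mysql and wordpress images; the container names (wp-db, wp) and the password are placeholders:

```shell
# Start the database container first
docker run --name wp-db -e MYSQL_ROOT_PASSWORD=secret -d mysql:5.7

# Then start WordPress, linked to the database and published on port 8000
docker run --name wp --link wp-db:mysql -p 8000:80 -d wordpress
```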

If we now go to our host:port (e.g. 127.0.0.1:8000), we should see the well-known White Screen of Death. This could be caused by a million things, but in our case we have a clean and connected environment: we’ve checked that the container is up and running, and /wp-admin loads as well. After all, WordPress acts as a CMS too, so we need to consider the content, which lives under wp-content/uploads. So if we check that directory inside the container with
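For example (assuming the container is named wp; the official image keeps the docroot at /var/www/html):

```shell
docker exec -it wp ls -la /var/www/html/wp-content/uploads
```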

we’ll see that it is empty. Let’s get back to the last problem on our list – shared content between team members. We should give the team the ability to manage work in progress and at the same time keep their local copies clean. One such solution is NFS. Yes, we’ve considered Swarm, data containers and volumes, but they are not designed for this task: the first is for orchestrating containers, and the last two work only on a single host and are pretty much equivalent in this case. What we need is to spin up a VM box that will be our data host and configure it with nfs-kernel-server.
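A sketch of the setup, assuming an Ubuntu data host; the export path, subnet, and mount point below are placeholders:

```shell
# On the data host VM: install the NFS server and export a directory
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /srv/wp-uploads
echo '/srv/wp-uploads 192.168.99.0/24(rw,sync,no_subtree_check)' \
  | sudo tee -a /etc/exports
sudo exportfs -ra

# On each developer machine: mount the share, then bind it into the container
sudo mount -t nfs <data-host-ip>:/srv/wp-uploads /mnt/wp-uploads
docker run --name wp --link wp-db:mysql -p 8000:80 \
  -v /mnt/wp-uploads:/var/www/html/wp-content/uploads -d wordpress
```

This way every developer’s container serves the same uploads directory, while their code checkouts stay untouched.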
   

  All of the above works well on Unix and Mac, but not with Docker Machine on Windows. We need a dedicated solution here, such as SFTP and EldoS. Note that here our host is not the Windows OS, but the VM (Oracle VirtualBox) on which the Docker engine runs. This can result in empty folders inside your containers even though they exist on your local file system. Also, replace the local path like this:
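On Windows, Docker Machine shares only C:\Users into the VirtualBox VM (as /c/Users), so the host side of the -v flag has to use that path form — the folder names below are hypothetical:

```shell
docker run --name wp --link wp-db:mysql -p 8000:80 \
  -v /c/Users/<you>/wp-uploads:/var/www/html/wp-content/uploads -d wordpress
```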

    
