
Building a Modern Deployment Runway

For many years, most websites and applications were "set live" by simply taking a handful of files from one machine and uploading them to a server via FTP. When updates were needed, those files would be downloaded to a local machine, an Apache web server would be spun up – using something like MAMP – and the developer would step in to make the required changes. The files were then re-uploaded to the production server and fingers were perpetually crossed. The developer's local environment settings were slightly different from what the production environment was using. On top of that, another developer working on the project may have updated the same file, putting everyone on a collision course to overwrite each other's changes.

This scenario would lead the developer down a road of "hoping for the best". These practices would end in downtime on the production (live) environment, because the developer's local environment had different settings from the live one. "It worked on my machine" would ring in everyone's ears. A scramble to find a solution would ensue while production lay out of commission. Joe would overwrite John's work. Hours would be wasted as they toiled over the source code, manually merging each other's changes. Enemies would be made. It would all culminate in them uploading these "fixes" back to the production environment and crossing their fingers once again.

This was a wild world. A wild, west world. This is the world of the Cowboy Coder.


Things have changed quite a bit since those dark days – at least they should have for any serious web developer or web development company. There are now numerous options available that will prevent most of these types of conflicts. Over the next few blog posts, I'll be diving into these options in depth. I'll be speaking both to those who are new to the concept of Deployment Runways and to those who know the basic concepts but don't know which options best apply to their circumstances.

In this first post, we're going to dissect the scenario outlined in the opening paragraph, talk about the options that move us away from that past, and dive into a bit of the history behind each.

FTP – Stop all the downloading (and uploading)


FTP, or File Transfer Protocol, is a method of transferring files between a local computer and a remote server. The protocol has been around since the early 1970s, was heavily used for years, and runs its control connection on port 21 – more on this later. It's a user-friendly way to transfer files, and plenty of graphical clients have been built around it, for example Panic's Transmit (https://panic.com/transmit/). The major problem is that FTP sends everything – credentials included – in plain text, which makes it a completely insecure protocol and a serious security risk to keep using.

SSH – Secure Shell

The years passed and the dust settled. And out of the Innovation Saloon stepped the spurs of Tatu Ylönen – a security and software engineer from Finland. Tatu saw a clear issue with transferring data in plain text (as FTP does): a breach was bound to be more of a "when it happens" than an "if it happens" scenario. From that realization, in 1995, came a new protocol focused on encrypting data in transit, appropriately named Secure Shell – better known as SSH. SSH is the protocol we will be exploring in the chapters to come – for now, here's a quick taste of what a secure transfer looks like in practice.
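Below is a minimal sketch of that kind of transfer, using Python's Paramiko library to upload a file over SFTP (which runs on top of SSH). The host name, username, key path, and file paths are placeholders, and key-based authentication is assumed:

```python
# A minimal sketch: uploading a file over SFTP (SSH) instead of plain FTP.
# Host, username, key path, and file paths below are placeholders.
import os
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a sketch; verify host keys in real use

client.connect(
    "example.com",                                    # placeholder host
    username="deploy",                                # placeholder user
    key_filename=os.path.expanduser("~/.ssh/id_rsa")  # key-based auth instead of a plain-text password
)

sftp = client.open_sftp()
sftp.put("index.php", "/var/www/html/index.php")      # the file travels over an encrypted channel
sftp.close()
client.close()
```

Compare that to plain FTP, where the same credentials and file contents would cross the wire unencrypted.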

Perfect Parity

With the proliferation of a more secure transfer protocol (SSH), we solved one of the major issues with the antiquated methods described in our opening scenario. That still left much to be desired when it came to simply knowing that our code would work on the Staging or Production servers the way it did in Local Development. We're jumping many years ahead here and zeroing in on Vagrant's role in this showdown.

Vagrant states that it makes it possible to "Create and configure lightweight, reproducible, and portable development environments". The keyword in that proclamation is "reproducible". For code written locally, we need to know that it will work when we deploy it to our Staging or Production environments. This is where Vagrant steps in and says, "Hey, let me create a mini, isolated server on your local machine. I'll provision that machine with specific settings that are isolated from your computer's own settings. Then, once you're ready to provision your Staging environment, let me provision that server with the exact same settings." This is the New World, the world where it just works.
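As a rough illustration, here's what a minimal Vagrantfile might look like. The box name, IP address, and packages below are placeholder choices, not a prescribed setup – the point is that the same file produces the same environment on every machine that runs `vagrant up`:

```ruby
# A minimal Vagrantfile sketch; box, IP, and packages are placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"                          # same base image for every developer
  config.vm.network "private_network", ip: "192.168.56.10"  # reach the VM at a predictable address
  config.vm.synced_folder ".", "/var/www/html"              # project files shared into the VM

  # Provisioning is described in code, so every environment
  # is built from the same recipe with identical settings.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y apache2 php libapache2-mod-php
  SHELL
end
```

Because the provisioning lives in the file rather than in someone's memory, the Staging server can be built from the same recipe – which is exactly the parity we were after.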

Version Control – All Your Base Are Belong to Us


Okay, two of the three conflicts portrayed in our opening scenario have now been resolved: secure file transfer (using SSH) and a deployment runway with perfect parity (using Vagrant for server provisioning). Now we just need to figure out how to keep from stepping on each other's bootstraps.

This is where Version Control comes into play. The first version control system appeared all the way back in 1972 at Bell Labs – the same place UNIX was created. That system, known as SCCS, only worked in Unix-specific environments, which is no good for today's wide array of development environment options. Then came RCS, which was cross-platform but restricted to text files. Yet even as these early version control systems iterated, they could only be run from a Local Development environment and weren't designed for code collaboration. In came the Sheriff known as Centralized Version Control in 1986, in the form of the Concurrent Versions System (CVS) – the first to allow a centralized repository. This opened the flood gates for collaborative coding peace of mind. That shared-repository workflow is what we work with today, and the front runner is Git (created by the godfather of Linux, Linus Torvalds). Strictly speaking, Git is a distributed version control system – every developer holds a full copy of the history – but teams typically collaborate through a central hosted repository.

Git is usually associated with GitHub, the repository hosting service with free and paid plans. Whether you are using GitHub or an alternative like Bitbucket, you are on the right track. With Git, you "commit" your local changes and "push" them to the shared repository, and you "pull" from that repository before starting your next round of code modification. By following this simple pull-commit-push procedure, you ensure that any potential "merge conflicts" are caught and resolved locally before your changes reach the shared repository.
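As a rough sketch of that loop – scripted here with the GitPython library rather than the git command line, with the repository path and commit message as placeholders – the daily rhythm looks like this:

```python
# A sketch of the pull -> commit -> push loop using GitPython.
# Repository path, remote, and commit message are placeholders.
from git import Repo

repo = Repo("/path/to/project")
origin = repo.remotes.origin

origin.pull()                              # pull first: surface any merge conflicts locally
repo.git.add(A=True)                       # stage your changes
repo.index.commit("Update homepage copy")  # record them locally
origin.push()                              # share the result with the rest of the team
```

Pulling first means a conflicting change surfaces on your machine, where you can resolve it calmly, instead of silently clobbering a teammate's work.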

Sunsetting

There you have it, folks. You now know what it takes to establish a proper deployment runway. For the sake of your sanity, your client's security, and everyone who may have to work with you, please, pretty-please, stick around to learn more about how the west can be won. In the coming chapters of this series we will break down exactly how Newbird employs the above concepts, bit by bit.

If you have any questions in the meantime, fire away.
