
Zachary Flower

Zachary Flower (@zachflower) is a Fixate IO Contributor and lead developer at Emerson Stone, a Boulder-based design and branding agency. He has an eye for simplicity and usability, and strives to build products with both the end user and business goals in mind. From building projects for the NSA to creating features for companies like Name.com and Buffer, Zach has always taken a strong stand against needlessly reinventing the wheel, often advocating for the use of well-established third-party and open source services and solutions to improve the efficiency and reliability of a development project.

Posts by Zachary Flower


Docker Logging Example

Docker is hard. Don't get me wrong, it's not the technology itself that is difficult...it's the learning curve. Committing to a Docker-based infrastructure means committing to a new way of thinking, which can be a harsh adjustment from the traditional thinking behind bare metal and virtualized servers. Because of Docker's role-based container methodology, simple things like log management can seem like a bear to integrate. Thankfully, as with most things in tech, once you wrap your head around the basics, finding the solution is simply a matter of perspective and experience.

Collecting Logs

When it comes to aggregating Docker logs in Sumo Logic, the process starts much like any other: add a Collector. To do this, open the Sumo Logic Collection dashboard and launch the Setup Wizard. Because we will be aggregating logs from a running Docker container, rather than uploading pre-collected logs, select the Set Up Streaming Data option in the Setup Wizard when prompted. Next, it is time to select the data type. While Docker images can be based on just about any operating system, the most common base image (and the one used for this demonstration) is Linux-based. After selecting the Linux data type, it's time to get into the meat of things. At this point, the Setup Wizard presents a script that can be used to install a Collector on a Linux system.

The Dockerfile

While copying and pasting the provided script is generally all that is required for a traditional Linux server, a few extra steps are needed to translate it into a Docker-friendly environment. To accomplish this, let's take a look at the following Dockerfile:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y wget nginx
CMD /etc/init.d/nginx start && tail -f /var/log/nginx/access.log

This Dockerfile creates a new container from the Ubuntu base image, installs NGINX, and then tails the NGINX access log to stdout (which keeps our Docker container long-running). In order to add log aggregation to this image, we need to convert the provided Linux Collector script into Docker-ese. By replacing the sudo calls and && chaining with individual RUN instructions, you'll end up with something like this:

RUN wget "https://collectors.us2.sumologic.com/rest/download/linux/64" -O SumoCollector.sh
RUN chmod +x SumoCollector.sh
RUN ./SumoCollector.sh -q -Vsumo.token_and_url=b2FkZlpQSjhhcm9FMzdiaVhBTHJUQ1ZLaWhTcXVIYjhodHRwczovL2NvbGxlY3RvcnMudXMyLnN1bW9sb2dpYy5jb20=

Note that while this installs the Sumo Logic Linux Collector, it does not start the Collector daemon. The reason for this goes back to Docker's "one process per container" methodology, which keeps containers as lightweight and targeted as possible. While that is the "proper" approach in larger production environments, in most cases starting the Collector daemon alongside the container's intended process is enough to get the job done in a straightforward way. To do this, all we have to do is prefix the /etc/init.d/nginx start command with /etc/init.d/collector start &&. Put together, our Dockerfile should look like this:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y wget nginx
RUN wget "https://collectors.us2.sumologic.com/rest/download/linux/64" -O SumoCollector.sh
RUN chmod +x SumoCollector.sh
RUN ./SumoCollector.sh -q -Vsumo.token_and_url=b2FkZlpQSjhhcm9FMzdiaVhBTHJUQ1ZLaWhTcXVIYjhodHRwczovL2NvbGxlY3RvcnMudXMyLnN1bW9sb2dpYy5jb20=
CMD /etc/init.d/collector start && /etc/init.d/nginx start && tail -f /var/log/nginx/access.log

Build It

If you've been following along in real time, you may have noticed that the Set Up Collection page hasn't yet allowed you to continue to the next step. The reason is that Sumo Logic is waiting for the Collector to be installed. Triggering the "installed" status is as simple as running a standard docker build command:

docker build -t sumologic_demo .

Run It

Next, we need to run our container. This is a crucial step, because the Setup Wizard process will fail unless the Collector is running.

docker run -p 8080:80 sumologic_demo

Configure the Source

With our container running, we can now configure the logging source. In most cases, the logs for the running process are piped to stdout, so unless you take special steps to pipe container logs directly to the syslog, you can generally select any log source here. /var/log/syslog is a safe choice.

Targeted Collection

Now that we have our Linux Collector set up, let's actually send some data up to Sumo Logic with it. In our current example, we've set up a basic NGINX container, so the easiest choice is to set up an NGINX Collector using the same Setup Wizard as above. When presented with the choice of where to set up the Collection, choose the existing Collector we configured in the previous step.

Viewing Metrics

Once the Collectors are set up, all that's left is to wait for the data to start trickling in. To view your metrics, head to your Sumo Logic dashboard and click on the Collector you've created. This will open up a real-time graph that displays data as it comes in, allowing you to compare and reduce the data as needed in order to identify trends within your running container.

Next Steps

While this is a relatively simple example, it demonstrates the potential for creating incredibly complex workflows for aggregating logs across Docker containers. As I mentioned above, the inline Collector method is great for aggregating logs from fairly basic Docker containers, but it isn't the only (or best) method available. Another, more stable option (which is out of the scope of this article) is a dedicated Sumo Logic Collector container that is shared by multiple containers within a cluster. That said, this tutorial hopefully provides the tools necessary to get started with log aggregation and monitoring across existing container infrastructure.
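To make that dedicated-Collector idea a little more concrete, here is a minimal sketch of what it could look like using Docker's built-in syslog logging driver. The Collector image name, container names, and syslog port are assumptions for illustration; the actual image and source configuration will depend on your Sumo Logic setup.

# Run one shared Collector container with a syslog source listening on port 514.
# ("sumologic/collector" and the port are placeholders -- adjust to your account.)
docker run -d --name sumo-collector -p 514:514/udp sumologic/collector

# Run the application container from the example above, forwarding its stdout
# to the shared Collector via Docker's syslog logging driver instead of
# installing a Collector inside the image.
docker run -d -p 8080:80 \
  --log-driver=syslog \
  --log-opt syslog-address=udp://127.0.0.1:514 \
  --log-opt tag=nginx-demo \
  sumologic_demo

With this pattern, the application image stays single-purpose, and one Collector can aggregate logs from every container on the host.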


Apache Logs vs. NGINX Logs


How to Deploy and Manage a Container on Azure Container Service


What Are Isomorphic Applications?

Having been a backend developer for my entire career, isomorphic applications are still a very new concept to me. When I first heard about them, it was a little difficult for me to understand. Why would you give that much control to the front-end? My brain started listing off reasons why that is a terrible idea: security, debugging, and complexity are just a few of the problems I saw. But, once I got past my initial flurry of closed-mindedness, I started to look into the solutions to those problems. While I am not necessarily a true believer, I can definitely see the power of isomorphic apps.

So, what exactly is an isomorphic application? Well, in a nutshell, an isomorphic application (typically written in JavaScript) tries to mix the best parts of the front-end ("the client") and the backend ("the server") by allowing the same code to run on both sides. In this configuration, the first request made by the web browser is processed by the server, while all remaining requests are processed by the client. The biggest advantage to this is that both the client and server are capable of performing the same operations, making the first request fast and all subsequent requests faster. A secondary (at least from my perspective) benefit is the SEO advantage that isomorphic applications provide. Because the first request is handled by the server, raw HTML is rendered the first time, rather than making an AJAX call and processing the response via JavaScript. This allows search engine crawlers that don't have JavaScript support to properly read the data on the page, as opposed to receiving a blank page with no text.

While isomorphic applications speed things up and prevent duplication of functionality between the client and the server, there are still some risks associated with using them. A big potential problem with isomorphic apps is security. Because the client and server share so much code, you have to be especially careful not to expose API keys or application passwords in the client. Another issue, as I mentioned above, is that debugging can be significantly more difficult. This is because, instead of debugging JavaScript in the browser and PHP on the server (as an example), you are now debugging the same set of code in potentially two places (depending on when and where the issue occurred). If you are using an isomorphic JavaScript library, such as Facebook's React.js, tools like the React Developer Tools Chrome extension can be invaluable for debugging issues on the client side.

The biggest concern I have with isomorphic apps isn't necessarily a universal one: complexity. While the concept is incredibly clever, the learning curve feels like it could be pretty big. Like most frameworks and libraries, it does take practice, but because this is such a new way to structure web apps, there is a high potential for making mistakes and doing things the "wrong" way. Ultimately, I think isomorphic JavaScript has a ton of potential to make some great web applications. While it may not be perfect for every project, I think that the benefits definitely outweigh the risks.


What is Full Stack Deployment?

Full stack deployments are a relatively new concept to me. At first, I was confused as to why you would redeploy the entire stack every time, rather than just the code. It seems silly, right? My brain was stuck a little in the past, as if you were rebuilding a server from scratch on every deployment. Stupid. But what exactly is full stack deployment, and why is it better than "traditional" code-only deployment?

Traditionally, deployments involve moving code from a source code repository into a production environment. I know this is a simplistic explanation, but I don't really want to get into unit testing, continuous integration, migrations, and all the other popular buzzwords that inhabit the release engineering ethos. Code moves from Point A to Point B, where it ends up in the hands of the end user. Not much else changes along that path. The machine, operating system, and configurations all stay the same (for the most part).

With full stack deployments, everything is redeployed. The machine (or, more accurately, the virtual machine) is replaced with a fresh one, the operating system is reprovisioned, and any dependent services are recreated or reconfigured. These deployments are often handled in the form of a freshly configured server image that is uploaded and then spun up, rather than starting and provisioning a server remotely. While this might sound like overkill, it is actually an incredibly valuable way to keep your entire application healthy and clean. Imagine if you were serving someone a sandwich for lunch two days in a row. You would use a different plate to put today's sandwich on than you used for yesterday's sandwich, wouldn't you? Maintaining a consistent environment from development to production is also a great way to reduce the number of production bugs that can't be reproduced in a development environment.

With the rise of scalable micro-hosting services like Amazon EC2, this mindset has already taken hold a bit to facilitate increased server and network load. As a site requires more resources, identical server images are loaded onto EC2 instances to handle the additional load, and then powered back down when they're no longer needed. This practice is also incredibly valuable for preventing issues that can crop up with long-running applications, especially across deployments. Technologies like Docker do a good job of encouraging isolation of different pieces of an application by their function, allowing them to be deployed as needed as individual server images. As Docker and other similar services gain support, I think we will start to see a change in the way we view applications. Rather than being defined as just code, applications will be defined as a collection of isolated services.
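To put the difference in concrete terms, here is a minimal sketch of a full stack deployment using Docker; the image name, tag, and container name (myapp, v42, myapp_live) are placeholders, and the port mapping is just an example.

# A code-only deployment updates files on a long-lived server. A full stack
# deployment instead bakes a brand new image and replaces the running
# instance wholesale, plate and all.
docker build -t myapp:v42 .                            # rebuild OS layers, dependencies, and code together
docker stop myapp_live && docker rm myapp_live         # retire yesterday's "plate"
docker run -d --name myapp_live -p 80:8080 myapp:v42   # serve today's sandwich on a fresh one

Because every deployment starts from the same image, the environment that ran in development is the environment that runs in production.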


Leveraging AWS Spot Instances for Continuous Integration

Automated Continuous Integration (CI), at a high level, is a development process in which changes submitted to a central version control repository by developers are automatically built and run through a test suite. As builds and tests succeed or fail, the development team is aware of the state of the codebase at a much more granular level, providing more confidence in deployments. While CI is often used primarily on production-ready branches, many implementations run builds and tests on all branches of a version control system, giving developers and managers a high-level view of the status of each project in development.

The trouble with automated CI is that it typically requires an always-on server or an expensive SaaS product in order to run builds and tests at any given time. In a large development team, this can be prohibitively expensive, as a large backlog of commits requires more resources to test in a reasonable amount of time. One way that organizations can save money is by utilizing Amazon Web Services (AWS) EC2 On-Demand Instances. These instances let you pay by the hour with no long-term commitments. You just spin up an instance when you need one, and shut it down when you're done. This is incredibly useful: in the times between builds, you aren't paying for servers, and conversely, your CI environment can scale to the needs of your team as the demand for test builds increases.

While more volatile, AWS Spot Instances are even more cost-effective than On-Demand Instances. Spot Instances are spare On-Demand Instances that Amazon auctions off at up to 90% off the regular price, and they run as long as your bid exceeds the current Spot Price, which fluctuates based on supply and demand. The trouble with Spot Instances is that they can disappear at any point, so applications running on them need to handle this unpredictability gracefully. This requirement puts us at a bit of a disadvantage when configuring a CI server, as it will need to be capable of going down in an instant without reporting false build failures.

Sample Reference Architecture (Source: aws.amazon.com)

Finding a CI server that can safely handle the volatility of AWS Spot Instances can be tough, and configuration and maintenance are a bit more difficult as well, but luckily Jenkins, one of the most popular open source CI servers, has an extensive plugin library that provides support for most use cases, including Amazon EC2. The Amazon EC2 plugin, which is under active development, gives Jenkins the ability to start slaves on EC2 on demand. This allows you to run a lower-powered Jenkins server and spin up slaves for more resources as they are needed. The EC2 plugin also provides solid Spot Instance support, giving you the ability to configure persistent bid prices and monitor slaves more accurately. In the event a Spot slave is terminated, a build will be marked as failed, but the error messaging will reflect the reason appropriately.

While the barrier to entry of configuring your CI environment to utilize AWS Spot Instances can be high, the potential savings can be even bigger. SaaS products are often a great way to offload the time and expertise needed to manage services like this, but CI services can be unnecessarily expensive. The same can be said for always-on servers in large organizations with a backlog of commits that need to be built and tested.
Spot Instances are a great way to dynamically allocate only the resources that an organization needs when they need them, while at the same time reducing operating costs and build wait times.
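To give a feel for the mechanics underneath the Jenkins plugin, here is a minimal sketch of requesting a single Spot Instance for a build agent with the AWS CLI. The bid price, AMI ID, instance type, and key name are placeholders, and in practice the EC2 plugin submits and manages these requests for you.

# Bid on one short-lived Spot Instance to act as a CI build agent.
aws ec2 request-spot-instances \
  --spot-price "0.10" \
  --instance-count 1 \
  --type "one-time" \
  --launch-specification '{
    "ImageId": "ami-12345678",
    "InstanceType": "c4.large",
    "KeyName": "ci-builder"
  }'

# Check whether the bid has been fulfilled and an instance is running.
aws ec2 describe-spot-instance-requests

If the Spot Price rises above the bid, the instance is reclaimed, which is exactly the interruption the Jenkins EC2 plugin is built to tolerate.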


API Design - A Documentation-first Approach

What exactly makes a "good" API? That is a question a lot of developers ask when designing their first API. While there are hundreds of resources online, all with differing opinions about what defines "good," the majority of them share some similar themes. Logical endpoint naming conventions, clear error messaging, accessibility, and predictability are all crucial pieces of any well-designed API. Most importantly, every good API I've ever worked with has had clearly written and easily understandable documentation. On the flip side, poor documentation is one of my biggest frustrations with any API I use.

A great example of a good API with excellent documentation is Stripe. Any developer who has worked with it can attest to how well written it is. With clearly defined endpoints, transparent error messages, usable examples, a slew of great SDKs, and standards-compliant methodology, Stripe is often used as a reference point for API development. Even their documentation is used as a source of inspiration for many freely and commercially available website templates.

When designing an API, it is often tempting to take a "build first" approach, especially when utilizing the architecture of a pre-existing product. Unfortunately, this mindset doesn't follow the standard usability practices that we follow when building apps with graphical interfaces. It is extremely important that we take a user-centric approach to API design, because we are developing a product to be consumed by other developers. If we can't empathize with their needs and frustrations, then who can we empathize with? This user-centric focus is an important reason to start by writing your API documentation, rather than just designing it. When you create good documentation, good design follows, but the reverse isn't necessarily true.

Designing an API before you write any code can be difficult when you are working with pre-existing architecture. Your preconceived notions about how a system works will influence your design, and may result in a less-than-logical API. Starting with the documentation first will force you to design an unopinionated API. If you write documentation that you as a user would want to read, there is a good chance that your own users will appreciate it as well.

Almost as important as the content of your documentation is how easy it is to read. There are tons of services out there that make this incredibly easy, and some even go so far as to generate dynamic documentation along with beautiful templates. A great way to start is an open source API documentation library called API Blueprint. API Blueprint allows you to build out API documentation using Markdown and export it as a well-structured JSON file for importing into a number of services. Apiary and Gelato.io are two services that allow you to import API Blueprint files for hosting, and even for testing your API design prior to writing any code.

Remember that, while writing your API documentation, the most important factor to consider is how valuable it is to a user. To quote Apiary, "An API Is Only As Good As Its Documentation." It may be tough, at times, to separate the perfect structure of your API from the current structure of an existing system, but it is important to remember that an API is just another user interface. While the backend architecture of a current application has some influence on the frontend, we often have to find creative solutions in order to provide the best possible experience for our users.
The only difference between that and API design is that our target users are much more tech savvy, and thus potentially less forgiving when something doesn’t make sense.
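As a concrete (and entirely hypothetical) illustration of what documentation-first design tends to produce, consider the kind of request and error response you would write into the docs before any code exists; the endpoint, parameters, and error body below are invented for this example.

# The documented "happy path": a predictable, logically named endpoint.
curl -s -X POST https://api.example.com/v1/customers \
  -H "Authorization: Bearer sk_test_placeholder" \
  -d email="jane@example.com"

# The documented failure mode: a meaningful HTTP status plus a
# machine-readable code and a human-readable message.
#   HTTP/1.1 400 Bad Request
#   {"error": {"code": "parameter_missing", "message": "The 'email' parameter is required."}}

Writing these examples down first forces decisions about naming, error codes, and payload shape before any implementation details can get in the way.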


Who Controls Docker Containers?

It's no secret that Development ("Dev") and Operations ("Ops") departments have a tendency to butt heads. The most common point of contention between these two departments is ownership. Traditionally, Ops owns and manages everything that isn't direct development, such as systems administration, systems engineering, database administration, security, networking, and various other subdisciplines. On the flip side of the coin, Dev is responsible for product development and quality assurance. The conflict between the two departments happens in the overlap of duties, especially in the case of managing development resources. When it comes to Docker containers, there is often disagreement as to which department actually owns them, because the same container can be used in both development and production environments. If you were to ask me, I would say without hesitation that Dev owns Docker containers, but thanks to the obvious bias I have as a developer, that is probably an overly simplistic viewpoint.

In my personal experience, getting development-related resources from Ops can be tough. Now don't get me wrong, I'm under no impression that this is because of some Shakespearean blood feud; Ops just has different priorities than Dev, and spinning up yet another test server happens to land a little further down the list. When it comes to managing development resources, I think it is a no-brainer that they should fall under the Dev umbrella. Empowering Dev to manage their own resources reduces tension between the departments, manages time and priorities more appropriately, and keeps things running smoothly.

Source: www.docker.com

On the flip side, Docker containers that aren't used directly for development should fall under the purview of Ops. Database containers are good examples of this type of separation. While a MySQL container may be used by Dev, no development needs to be done directly on it, which makes the separation pretty clear. But what about more specialized containers that developers work on directly, like workers or even (in some instances) web servers? It doesn't really make sense for either department to have full control over these containers, as developers may need to make changes to the containers themselves (for the sake of development) that would normally fall under the Ops umbrella if there were clear separation between development and production environments.

The best solution I can think of to this particular problem is joint custody of ambiguous containers. I think the reason this would work well is that it would require clear documentation and communication between Dev and Ops as to how these types of containers are maintained, which would in turn keep everybody happy and on the same page. A process that could work well would be for Ops to be responsible for provisioning base containers, with the understanding that the high-level configuration of these containers would be manageable by Dev. Because Ops typically handles releases, it would then be back on Ops to approve any changes made by Dev to these containers before deploying. This type of checks-and-balances system would provide a high level of transparency between the two departments, and also maintain a healthy partnership between them.


Choosing the Right Development Environment For You

When starting a project, working as an individual developer provides a level of development freedom that can quickly get complicated when it is time to grow the team. Once you expand to multiple developers, it is critical to maintain a well-documented and structured development environment. In a poorly architected environment, team members will have different experiences and ideas about software development, which can lead to friction amongst developers. The lack of consistency between the different environments makes fixing bugs and developing features a frustrating experience, and leads to the commonly used "works on my machine" excuse. By contrast, a properly structured and documented development environment keeps everyone on the same page and focused on the product instead of constantly trying to get things to work. In addition to a more efficient development team, a structured development environment can drastically decrease the time it takes to onboard a new developer. (I've personally worked at a company where the development environment took the entirety of my first week to set up and configure properly, because it wasn't properly documented or managed.)

While there is no "one size fits all" development environment, the majority of solutions to the problem are centered around consistency. This is almost always handled through the use of virtual machines; however, how and where they are set up can differ wildly. When it comes down to it, there are two options that determine how these machines work: local or remote. In a local development environment, devs run an instance of the code locally on their own machines, which allows them to work independently and without having to rely on a centralized server. Unfortunately, local environments can be limited when trying to replicate more complex production environments, especially if developers are working on underpowered machines. In a remote environment, developers work directly off of a remotely hosted server, rather than locally. This has the added benefit of offering perfect parity with the production environment, but requires developers to have a high-speed internet connection to write and test even the most trivial of changes to the codebase.

The most popular structured local development environment that I have seen is Vagrant. Vagrant, at its core, is a cross-platform virtual machine management tool that allows you to configure and provision virtual machines in a reproducible way. With built-in Puppet and Chef support, Vagrant is an excellent way to set up a brand new development environment with just one command. What makes Vagrant so great is that, rather than passing around machine images, the configuration files are committed directly into a project's version control system and a base machine is built and configured on the fly, meaning any developer who can clone the codebase is also instantly given the ability to spin up a virtual machine. Because Vagrant runs within a virtual machine that mounts the directory it is configured in, developers can also use any IDE that they are comfortable with, which allows them to spend less time learning new tools and more time focusing on the product.

Often, it is desirable for developers to work off of a centralized server (or set of servers), rather than their local machines. Remote development environments are an interesting case in that they can provide a lot more power than local environments and reduce the amount of setup required for new developers to almost nothing.
Depending on the size of the organization, these environments can be set up by someone on either the development or operations team, and can be hosted almost anywhere. The two most common setups I have seen for remote development environments are shared servers and private servers. In a shared server environment, every developer shares the same machine with their own distinct logins and subdomains. This is a good solution for organizations with limited resources that self-host their servers, as it may not be feasible to have a dedicated private server for each developer. When available, private servers are the perfect solution for remote development environments because they can provide 100% parity with production environments and, much like Vagrant, can be spun up at the click of a button.

The biggest problem with private servers, however, is that an internet connection is required to use them. In a local environment, developers could theoretically work off the grid, but in a remote environment, a high-speed internet connection is always required to get work done. Another, smaller issue is that remote access doesn't always play well with every IDE. Many don't provide great remote access functionality, if they provide any at all, requiring developers to either use a new IDE or cook up hacky solutions to keep using the IDE they're used to.

In a perfect world in which developers have access to sufficiently powerful machines, I would recommend Vagrant 100% of the time. Because it is cross-platform, organizations can take an OS-agnostic approach to personal computers, and the automated, simple setup allows for quicker developer onboarding. While Vagrant can have some speed drawbacks, the lack of an internet requirement is a huge bonus, removing "my internet is down" from the list of things that can go wrong in a project.
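As a rough sketch of the Vagrant workflow described above, the day-to-day commands for a new developer might look something like this (the repository URL is a placeholder, and the project's Vagrantfile is assumed to already be committed to version control):

# Clone the project; the Vagrantfile comes along with the code.
git clone git@example.com:acme/webapp.git && cd webapp

vagrant up        # build and provision the VM from the committed Vagrantfile
vagrant ssh       # drop into the running development environment
# ...edit code locally in any IDE; the project directory is mounted inside the VM...
vagrant halt      # shut the VM down at the end of the day
vagrant destroy   # or throw the environment away and rebuild it from scratch later

Because the whole environment is described in version control, "spin up a dev environment" collapses to a single vagrant up, which is exactly the onboarding win described above.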