You may have heard the terms “continuous integration,” “continuous deployment,” or even “continuous delivery” tossed about in your department as catch-all phrases for “we need to ship code quickly and constantly.” It’s true that a well-honed continuous integration (CI) program can result in rapid, hyper-agile delivery of software, but to reap the philosophy’s rewards you have to establish and adhere to a disciplined protocol based on a true understanding of what CI actually is.

To understand CI, let’s look at the way software used to be shipped in the years leading up to the golden age of Agile, say, the aughts (2000 to 2010-ish). During those years, even the smallest feature change to an in-place application was a major undertaking. Budgets had to be approved, designs made, code written and tested, bugs fixed, and user acceptance granted before a big monolithic chunk of code was released as a new version of the software. Because making a release was such a big undertaking, the needs of most business stakeholders fell by the wayside; there simply wasn’t enough time or budget to cater to all of them.

Things began to change, however, as we entered the teens (2013 to present). Web and native apps intended for consumption on smartphones exploded, filling more and more niche needs, and the typical business stakeholder became ever savvier in all things software as they grew accustomed to having myriad software features to solve problems in their personal lives. This created a demand that spilled over into the workplace and became common in just about every conference room around the world: “Amazon sends me updates about the location of my order every step of the way! Why can’t we do that with our replacement part orders?” or “Searching for information on Google is so intuitive; it should be the same when we search our inventory” or “We should make a mini-game like Angry Birds to promote this new ad campaign.” Overnight, software delivery professionals, from developers to quality assurance to analysts, were overwhelmed with requests and outnumbered by throngs of stakeholders with wish lists a mile long. The age of carefully planned, waterfall-like software release schedules was over; the age of “I want it all and I want it now” Agile methodology had begun.


In the years since that critical inflection point in the art of software delivery, Agile, a stream-of-consciousness approach to software delivery, has proliferated in response to stakeholder demand and impatience. This, in turn, has given rise to CI: the process of streamlining and automating requirement specification, development, quality assurance, testing, user acceptance, and, finally, production deployment.


In a CI world, a stakeholder may express a desire for a new feature in the company intranet during the Monday morning meeting. By lunchtime, the business analyst has gathered detailed requirements and entered them into a ticketing system such as Visual Studio Team Services or Jira. This alerts the dev team automatically so that they can step away from the foosball table and get back to their workstations. By Monday afternoon, a developer has accepted the ticket and used its automatic integration with the source control repository to create a new “branch” of the code. The developer’s job is done within the hour, and checking in her code triggers an automatic run of unit and end-to-end tests, then an automatic build to the QA environment and a Slack notification to the QA testers. Once the QA staff has approved the build, the CI pipeline takes over once more and automatically promotes the new branch to the production environment while maintaining the ability to easily roll back to the previous build if necessary. By Tuesday morning, the stakeholder is happily using the feature he requested during the previous day’s morning meeting. None of this would be possible without an established CI program in the organization.

A well-developed CI program isn’t just for the benefit of the stakeholders; it has plenty of deep technical advantages as well. For example, most projects have multiple developers working in isolation. Adherence to a CI protocol forces a degree of work atomization that keeps discrete tasks from growing too large. This means more frequent code check-ins and integrations, which in turn means fewer nasty merge conflicts and bugs.

By now you probably get what CI does, but you may be asking yourself what exactly it is. Is it a tool? A platform? A philosophy? In reality, it’s a little bit of everything. DevOps professionals create “build definitions” using popular build engines like Visual Studio Team Services, TeamCity by JetBrains, Jenkins, or Octopus. You can think of these definitions as scripts that have hooks into both the source control repository where your application’s code resides and the environments (servers) that run the working code. In a sense, these build definitions are a collection of IF THIS THEN THAT statements: “If a new ticket is added, then create a new branch and email the dev team”; “If a developer checks in their code, then run unit tests”; and, “If all unit tests pass, then deploy to the QA environment and email the testers.”
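Those IF THIS THEN THAT rules can be sketched in a few lines of code. The sketch below is purely illustrative; the event names, actions, and tiny dispatcher are hypothetical and do not reflect the syntax of any particular build engine.

```python
# A minimal sketch of a build definition as "if this, then that" rules.
# Every name here is a hypothetical illustration, not real build-engine syntax.

actions_log = []  # records what the pipeline "did", for illustration

def create_branch(name):
    actions_log.append(f"branch created: {name}")

def run_unit_tests(branch):
    actions_log.append(f"unit tests run on: {branch}")
    return True  # pretend the tests passed

def deploy(branch, environment):
    actions_log.append(f"deployed {branch} to {environment}")

def notify(team, message):
    actions_log.append(f"notified {team}: {message}")

# Rule 1: if a new ticket is added, create a branch and notify the devs.
def on_ticket_created(ticket_id):
    create_branch(f"feature/{ticket_id}")
    notify("dev-team", f"new branch for ticket {ticket_id}")

# Rules 2 and 3: if a developer checks in code, run the unit tests;
# if the tests pass, deploy to QA and notify the testers.
def on_code_checked_in(branch):
    if run_unit_tests(branch):
        deploy(branch, environment="qa")
        notify("qa-team", f"build from {branch} is ready in QA")

on_ticket_created("INTRANET-42")
on_code_checked_in("feature/INTRANET-42")
```

A real build definition expresses the same chain of triggers and actions, only against live services (the repository, the test runner, the target servers) instead of an in-memory log.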

Different build engines have different strengths, and it is possible that your organization uses more than one of them. In fact, we developed our NoOps/digital developer product, Catapult, to help abstract away the stress of managing multiple build servers and other resources, further streamlining the continuous integration process.

The other aspect of a solid CI program is a protocol that all team members follow. This is very important: if the team does not use the correct toolchain, the CI program won’t work and its benefits are lost. For example, if the stakeholder from the Monday morning meeting had simply emailed his request directly to the developer, that developer, eager to please, might have coded the feature and checked it directly into the source control repository without following the proper branching protocol. This could cause merge conflicts that require manual review, and it might fail to trigger the automatic tests, creating the very real possibility that bugs slip through to production and force site downtime. The good news is that there are some great tools out there to make adherence to a CI protocol easy. We are, of course, partial to Catapult, but regular ol’ VSTS or Jira works well too.
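To make the branching protocol concrete, here is a minimal sketch of the kind of check a pipeline can run before accepting a check-in. The naming convention (“feature/&lt;ticket-id&gt;”) and the check itself are hypothetical illustrations, not any specific tool’s rules.

```python
# A hedged sketch of protocol enforcement: reject check-ins made directly
# against the main branch instead of through a ticket-linked feature branch.
# The "feature/<ticket-id>" convention is a hypothetical example.

def branch_allowed(branch_name):
    """Return True only for branches that follow the team's protocol."""
    if branch_name in ("main", "master"):
        return False  # a direct check-in here would bypass the CI pipeline
    return branch_name.startswith("feature/")

assert branch_allowed("feature/INTRANET-42")
assert not branch_allowed("main")
```

In practice a rule like this lives in the build engine or a repository hook, so the eager-to-please developer physically cannot skip the protocol.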

If you are interested in instituting a CI program at your enterprise but don’t know where to start, please feel free to contact us for help. We are experts in the field of CI and we can either help you design and roll out a custom CI program or implement a licensed instance of Catapult to make CI (and DevOps in general) feel like magic.

If you haven’t already heard the term “NoOps” as it pertains to enterprise software development and delivery you probably will soon. NoOps is an emerging movement that seeks to relieve a bottleneck created by traditional IT operations and on-premise application hosting by utilizing solutions rooted in automation and cloud-based infrastructure. At Polyrific, we have developed an outstanding NoOps solution called Catapult and we offer this article in hopes that it helps you better understand why Catapult is such a big deal.

From DevOps to NoOps

Perhaps the best way to begin understanding the NoOps movement is to first understand the DevOps movement. The term “DevOps” is an amalgamation of “Development” and “Operations” and refers to the interplay between software developers and IT operations during the process of deploying applications to the world. In every enterprise, it is necessary for these two departments to stay close to one another in order to best serve the needs of the business.

At most enterprises, responsibilities for developers generally include the following:

  • Work with stakeholders to understand the needs of the business
  • Distill those needs into requirements and specifications
  • Develop applications that fulfill said requirements

By contrast, IT operations are generally responsible for interfacing with network hardware:

  • Allocation & management of server resources
  • Fault planning & monitoring
  • Security & compliance
  • Device management

Obviously, applications that are developed to suit the needs of the business have to be deployed somewhere so that they can be consumed, and this is where the interplay between developers and IT operations managers comes in: they must work together to take the developers’ work and deploy it to the world on their enterprise’s resources. This would make perfect sense if the picture were that simple, but, as we will see in the next section, the reality is a bit more complicated.

Agile & Continuous Deployment

In the early days of enterprise software solutions, very few enterprises created custom software solutions or applications of their own. However, as workplace environments have become more dynamic and reliant on smart hardware and software solutions, the demand for the rapid release of custom software applications has grown dramatically. The Agile movement was largely a response to this exponential growth in application demand, and it is founded on principles inspired by the Silicon Valley “fail fast & fail early” philosophy. Gone are the days of months of planning, tedious software architecture design, and waterfall release schedules culminating in a deployment phase given equal weight by the IT operations team. Today’s software development teams are expected to respond immediately to a seemingly never-ending stream of features and demands requested by the business.

Often, projects are started as bare-bones applications that are immediately thrust into production environments, where they will be constantly updated and expanded upon as the business requirements evolve. This sounds great, but it presents a few challenges to software development and IT operations teams, especially with regard to the quality of the end-user experience and application uptime. To counter this, the development and ops teams employ a set of automation tools and checkpoints, collectively referred to as “Continuous Integration” or “Continuous Deployment,” that smooth out the problems caused by rapid iterations in the software development life cycle. For example, when properly configured, a CI pipeline can trigger a series of automated tests whenever a developer checks in new code to ensure that the new code does not break anything or cause “regression” bugs.
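As a toy illustration of the kind of regression test a pipeline runs on every check-in, consider the sketch below. The shipping-cost function and its pricing rules are hypothetical; the point is that each assertion pins down behavior the business already relies on, so a careless change fails the build before it reaches QA.

```python
# A toy regression test of the kind a CI pipeline might run automatically.
# The shipping_cost function and its pricing rules are hypothetical.

def shipping_cost(weight_kg, express=False):
    """Flat rate plus a per-kilogram charge; express doubles the total."""
    base = 5.00 + 1.50 * weight_kg
    return base * 2 if express else base

def test_shipping_cost():
    # Each assertion locks in behavior users already depend on, so a
    # refactor that silently changes it will fail the automated build.
    assert shipping_cost(0) == 5.00
    assert shipping_cost(2) == 8.00
    assert shipping_cost(2, express=True) == 16.00

test_shipping_cost()
```

When a check-in breaks one of these assertions, the pipeline stops the deployment and notifies the developer, which is exactly the safety net that makes constant iteration in production tolerable.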


The (Traditional) IT Bottleneck

IT operations experts are fantastic but, in our view, their role is best executed when the evidence of their work is everywhere and their presence is not so apparent. A good server at a restaurant will keep your glass full and your food coming without you noticing them much at all, and it should be the same with IT operations managers. Sometimes, often through no fault of their own, this is not the case. Without considerable depth of automation in your software development life cycle (SDLC), the development team must spend significantly more time with the IT operations team to coordinate downtime, deployments, rollbacks, and so forth. This is especially true in the case of on-premise deployments. This close coupling between IT ops folks and developers is bad for at least three reasons:

  1. It takes the developer’s focus away from understanding the needs of the business stakeholders
  2. It cuts into development time
  3. It can influence the engineering and delivery schedule of the application

Given the above, you can probably start to see where this is headed: interaction between development and IT operations should be automated so that the software engineers can remain focused on what they do best: delivering application-based solutions that serve the immediate needs of the business.

NoOps Produces Better Outcomes

To respond to the ever-changing demands of the business, development teams must be capable of quickly organizing stakeholders’ needs into business requirements and then parlaying those requirements into working code that is tested, quality-assured, accepted by the end user, and deployed into the production environment on a frequent and recurring basis, all without being slowed down or distracted by hardware and deployment challenges on the IT ops side of things. Does this mean that IT operations professionals must be removed from the SDLC? Of course not. What it does mean is that IT operations personnel should join forces with developers to implement game-changing solutions that automate the business of getting a developer’s changes into production with very little interfacing required between development and operations.

In a NoOps world, developers don’t check with IT operations before deploying code or to schedule downtime. In fact, they don’t deploy code at all; they simply check their changes into source control and the rest happens automatically, behind the scenes, just like the server who always keeps your drink full without you noticing they were there at all. Similarly, developers do not need to request the allocation of new resources from the IT department. They can, in theory, “spin up” a new ecosystem of server and database environments for a special-purpose app while they sit with the stakeholder during a requirements gathering session.

The Catapult Digital Developer & NoOps Solution

As previously mentioned, we have developed a software solution called Catapult that takes automation of enterprise software delivery to the extreme. Using Catapult, even non-technical stakeholders can create new application projects on a meta-level that immediately spin up server resources using popular cloud platforms such as Azure and AWS. Catapult then allows the entry of high-level data models to populate databases (or it can connect to existing ones) and to generate and deploy comprehensive codebases, all without the user knowing how to write even the simplest SQL query.

Like the restaurant server that deftly keeps your needs satisfied without making his or her presence known, Catapult allocates hardware resources, creates codebases, sets up source control repositories, allows stakeholders to manage content and seed test data, manages branching strategies, communicates with the engineering team members to let them know of code changes, and pretty much anything else a competent developer and IT operations professional on your team would do. That is why we refer to Catapult as the “enterprise digital developer”.

If you’d like to learn more about Catapult or any of our other software development solutions, please contact us or call us at 833-POLYRIFIC.


The story of Polyrific began back in 2011 when company founder Matt Cashatt was thinking of a name for a polymorphic database concept and landed on the portmanteau “Polyrific” as a great way to describe a product that could make many different facets of enterprise data management faster and easier. It didn’t take long for Matt to decide that the name, and the concept behind it, was bigger than any single product: so many different facets of enterprise software creation and management need to be made faster and easier. And with that, a brand was born.

Since those early days, we have grown into an enterprise-focused technology company that specializes in software development, machine learning, and DevOps. Our original vision is woven into everything we do: we constantly streamline and perfect the way custom software is designed and delivered so that the process becomes faster, easier, and more economical with each project. Our imperative is to stay close to our clients and understand their needs clearly while continuing to develop the game-changing technologies that delight them.

This latest website of ours was designed to give our clients, colleagues, and friends insight into contemporary technology topics that today’s enterprises must embrace if they hope to stay relevant in the marketplace, and to stimulate ideas related to these technologies. Here you will find engaging articles intended to quickly get you up to speed on these topics, as well as on the ways in which Polyrific can help guide your enterprise into territory that, for many, may be unfamiliar. We have also created high-level pages to help our new guests understand the types of services that Polyrific can offer them, such as custom software development, general technology consulting, and on-premise DevOps automation.

Perhaps our most important corporate value is that “we go farther together.” This value is meant not only for our internal team members but for our clients and friends as well. We hope to be a catalyst for positive and impactful change that helps your enterprise soar to new heights by aggressively growing our expertise and offerings in machine learning, data science, bots, personal assistants, and new form factors such as the Amazon Echo Show, which we believe will have far-reaching uses in the enterprise environment. We’ll bring the knowledge, the expertise, and even some good ideas to the table. You bring the desire, imagination, and vision for an incredible future.

We are glad you are here and hope to see you back often. We would like to hear your feedback about our new website and hope you will share your thoughts and suggestions about any section you find interesting.