Way back in 1947, Grace “Amazing Grace” Hopper, who would later retire as a Rear Admiral, documented the first bug in programming. More specifically, it was a moth that had decided to party between two relay contacts, shorting out an electro-mechanical relay at the Harvard Computation Lab. Amazing Grace, also known as “Grandma COBOL,” didn’t have Trello, Asana, or Jira back then, so she documented the bug by taping the offending moth onto a page of the team’s logbook:


A lot has changed since Amazing Grace’s day, and bugs, of the digital variety, are far more common because software is exponentially more complex and touches every aspect of our daily lives. Bugs are viewed differently by developers, project managers, product owners, and end-users, and being mindful of those different viewpoints is critical to stopping any bug infestation. If you find that bugs, like the ants on your kitchen counter, are a bit too abundant for comfort, here are five ways you can manage:

1. Have a requirements document

A requirements document, also known as a functional specification, is critical to the success of any project. In their own headspace, product owners and stakeholders usually have a crystal clear vision of their product. But that vision can lack practical details about how the app should work in the real world. When the development team makes a good-faith effort to bring that vision to life, the product owner may be confronted with details that don’t fit the way they think the app should work, and that is frustrating from their perspective.

It is the job of a business analyst and/or the project manager to tease out of the stakeholder a complete map of their vision–including aspects that they may not have considered–and to document that vision in the form of a detailed functional specification that tells the developers exactly how the app should operate.

This may seem obvious, but in our hyper-agile world, this step of creating a blueprint for the project is skipped more often than you might think. Too frequently, the developers get blamed for not using their own “common sense” when the real problem was a disconnect between their vision and that of the product owner.

2. Know the difference between enhancements and bugs

It is very difficult for stakeholders to articulate the optimal way in which an app should behave when they are working with their imagination alone. Often, it takes iterations of development, then feedback, then further development before a final release. This is perfectly normal–the stakeholders need to see the app, experience it, play with it a bit, before they can say, “this registration path is a bit more cumbersome than I intended”. The problem in this scenario is that sometimes a project manager will take such feedback from the stakeholder and label it as a “bug” when in fact it is an enhancement.

As a rule of thumb, if the issue being documented isn’t breaking the app and the end-user can still complete the user journey (e.g. successfully register, successfully add a product to their cart, etc), then the issue is not a bug, it is an enhancement and should be labeled as such.
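The rule of thumb above can be sketched as a tiny triage helper. This is an illustration only; the field names are hypothetical, not from any real ticketing system:

```python
def triage(issue):
    """Classify an issue as a 'bug' or an 'enhancement' using the
    rule of thumb above. Field names here are hypothetical."""
    # A true bug breaks the app or prevents the end-user from
    # completing the user journey (registering, checking out, etc.).
    if issue["breaks_app"] or not issue["user_journey_completes"]:
        return "bug"
    # Everything else, including behavior that merely differs from an
    # evolving vision, is an enhancement for the backlog.
    return "enhancement"

# A cumbersome-but-working registration path is an enhancement:
print(triage({"breaks_app": False, "user_journey_completes": True}))
```

The point of encoding the rule this way is that it forces the question to be binary: either the journey completes or it doesn’t; personal taste never enters the classification.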

Some developers interpret the term “bug” as “you made a mistake here” and rightfully so: developers are often the only party blamed when things go wrong even though they are only one part of the software development team. With this in mind, you can see how being told that they made a mistake because they didn’t anticipate how a stakeholder’s vision would evolve over time can be a bit irritating. 

As a PM or stakeholder, the best way to alleviate this situation when documenting new issues is to substitute (in your head) the term “bug” with “mistake”. In fact, before you log it, think of the developer who will get the ticket and say to yourself, “Johnny, you made a mistake because _________.” If the sentence sounds preposterous in your head, then it will sound preposterous to your developers as well; label it an “enhancement”. On the other hand, if the sentence is not ridiculous (“Johnny, you made a mistake because you didn’t add a logout button when it was clearly in the functional spec and wireframes. Now the users cannot log out.”) then you have a true bug on your hands.

3. You know what they say about ASSumptions. . .

We recently had a project that included a form with a text field for entering a date. The PM and stakeholders assumed that this field would have a date-picker (calendar control) fly out when gaining focus, but this was never documented in the functional specs or shown in the wireframes. As a consequence the developers simply made it a text box that validated for a date and moved on. The QA team signed off on this because, after all, it met the requirements of the functional spec.

During UAT, the stakeholders were visibly irritated that there was no calendar control for this field, and the issue got bounced back to development as a bug. Was it a bug? No. It was an undocumented assumption about which design pattern the development team should select.

If a requirement is important, the PM should be certain that it has been included in either the functional or non-functional specs. If that doesn’t happen, the issue should simply be labeled as an “enhancement” and prioritized in the backlog.

4. Not all bugs are in the code

Occasionally, an app that was working fine will suddenly display several prominent bugs, causing a five-alarm fire among the maintenance engineering team. This is rare, but it does happen. Almost without fail, we have found that the cause is a change in the code’s dependencies rather than a defect in the code itself. A few examples:

  • A DBA makes a change to the schema of a production database. The DBA thought this would be of no consequence; they simply changed the datatype of a column to save space, or dropped a column that contained no data. Unfortunately, this type of change can have an effect on the code and should always be discussed with the development team before it is made.
  • A third-party API on which the code depends has gone down. For example, there are APIs that calculate sales tax for eCommerce apps. If that API is having an outage, or has bugs of its own, it can break the app that consumes the service.
  • A maintenance engineer updates dependencies or frameworks without consulting the dev team. The app breaks. Again, any changes to software connected to the app should be thoroughly discussed with the engineers before they are implemented.
  • The app is migrated to a new server that does not have all of the necessary dependencies installed.

Essentially, this one is all about communication. If any changes are to be made that might affect the code, they should be discussed beforehand. The primary engineering team should include a dependency profile with their turnover documentation to provide greater visibility into potential issues for the maintenance engineers. Also, there should always be a backout plan when making such changes.
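Part of that dependency profile can even be checked mechanically. Below is a minimal, hypothetical sketch (the package names and versions are illustrative, not from any real project) of comparing the running environment against the profile shipped with the turnover documentation:

```python
# Hypothetical "dependency profile" from the turnover documentation.
expected = {"requests": "2.31.0", "sqlalchemy": "2.0.25"}

def drift(expected, installed):
    """Return packages whose installed version differs from the
    profile, or which are missing from the environment entirely."""
    report = {}
    for name, version in expected.items():
        actual = installed.get(name)  # None if the package is absent
        if actual != version:
            report[name] = {"expected": version, "installed": actual}
    return report

# e.g. after an unreviewed framework upgrade and a botched migration:
installed = {"requests": "2.32.0"}
print(drift(expected, installed))
```

Running a check like this before and after a migration or upgrade gives the maintenance engineers a concrete diff to discuss with the dev team, instead of a vague “something changed.”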

5. Software can be hard

This may sound simple and silly but it is valuable to remember. The software game is full of risks–that’s why the financial rewards can be so high. It is a complex game with layers upon layers of systems and considerations. You should always allow time in your plan for things to go wrong so that when they do, you can have enough time and budget to step back, carefully consider whether what you are seeing is a technical defect or just the product of an evolving vision, and then properly label the issue.

At Polyrific, we take pride in the fact that we invest ourselves fully in our clients’ projects. We know how this game is played, and we know how to shepherd our clients through the process precisely because we plan for, and deal with, the issues, like bugs, that will inevitably come up.

If you’d like to discuss your vision for a software product, please do not hesitate to contact us.

You may have heard the term “continuous integration” or “continuous deployment” or even “continuous delivery” tossed about in your department as a catch-all phrase for “we need to ship code quickly and constantly”. It’s true that a well-honed continuous integration (CI) program can result in rapid, hyper-agile delivery of software but in order to reap the philosophy’s rewards you have to establish and adhere to a disciplined protocol that is based on a true understanding of what CI actually is.

In order to understand CI, let’s look at the way software used to be shipped in the years leading up to the golden age of Agile, say, the aughts (2000 to 2010-ish). During these years, even the smallest feature change to an in-place application was a major undertaking. Budgets had to be approved, designs made, code written and tested, bugs fixed, user acceptance granted, and then a big monolithic chunk of code was released as a new version of the software. Because making a release was such a big undertaking, the needs of most stakeholders from the business fell by the wayside; there simply wasn’t enough time or budget to cater to all of them.

Things began to change, however, as we entered the teens (2013 to present). Web and native apps intended for consumption on smartphones exploded, filling more and more niche needs, and the typical business stakeholder became ever savvier in all things software as they grew accustomed to having myriad software features to solve problems in their personal lives. This created a demand that spilled over into the workplace and became common in just about every conference room around the world: “Amazon sends me updates about the location of my order every step of the way! Why can’t we do that with our replacement part orders?” or “Searching for information on Google is so intuitive–it should be the same when we search our inventory” or how about, “We should make a mini-game like Angry Birds to promote this new ad campaign”. Overnight, software delivery professionals–from developers to quality assurance to analysts–were overwhelmed with requests and outnumbered by throngs of stakeholders with wishlists a mile long. The age of carefully planned, waterfall-like software release schedules was over; the age of “I want it all and I want it now” Agile methodology had begun.

In the years since that critical inflection point in the art of software delivery, Agile, a stream-of-consciousness approach to shipping software, has proliferated in response to stakeholder demand (and impatience). This, in turn, has given rise to CI: the process of streamlining and automating the business of requirement specification, development, quality assurance, testing, user acceptance, and, finally, production deployment.

In a CI world, a stakeholder may express a desire for a new feature in the company intranet during the Monday morning meeting. By lunchtime, the business analyst has gathered detailed requirements and placed them into a ticketing system such as Visual Studio Team Services or Jira. This alerts the dev team automatically so that they can step away from the foosball table and get back to their workstations. By Monday afternoon, the developers have accepted the ticket and used its automatic integration with the source control repository to create a new “branch” of the code. The developer’s job is done within the hour, and her checked-in code triggers an automatic execution of unit and end-to-end tests, then an automatic build to the QA environment and a Slack notification to the QA testers. Once the QA staff has approved the build, the CI pipeline takes over once more and automatically handles the placement of the new branch into the production environment while maintaining the ability to easily roll back to the previous build if necessary. By Tuesday morning, the stakeholder is happily using the feature he requested during the previous day’s morning meeting. This would never be possible without an established CI program in the organization.

A well-developed CI program isn’t just for the benefit of the stakeholders; it has plenty of deep technical advantages as well. For example, most projects have multiple developers working in isolation. Adherence to a CI protocol forces a degree of work atomization that keeps discrete tasks from growing too large. This means more frequent code check-ins and more frequent integration of everyone’s work, which means fewer nasty merge conflicts and bugs.

By now you probably get what CI does, but you may be asking yourself what exactly it is. Is it a tool? A platform? A philosophy? In reality, it’s a little bit of everything. DevOps professionals create “build definitions” using popular build engines like Visual Studio Team Services, TeamCity by JetBrains, Jenkins, or Octopus Deploy. You can think of these definitions as scripts that have hooks into both the source control repository where your application’s code resides as well as into the environments (servers) that run the working code. In a sense, these build definitions are a collection of IF THIS THEN THAT statements: “If a new ticket is added, then create a new branch and email the dev team”, “If a developer checks in their code, then run unit tests”, and then, “If all unit tests pass, then deploy to the QA environment and email the testers”.
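As an illustration only (this is a toy, not any real build engine’s API), a build definition boils down to matching triggers to actions:

```python
def run_pipeline(event, rules):
    """Fire every action whose trigger matches the incoming event.
    In a real build engine the actions would be shell steps or
    deployments; here they are just descriptive strings."""
    return [action for trigger, action in rules if trigger == event]

# The IF-THIS-THEN-THAT statements from the paragraph above:
rules = [
    ("ticket_created", "create a new branch and email the dev team"),
    ("code_checked_in", "run unit tests"),
    ("unit_tests_passed", "deploy to QA and email the testers"),
]

print(run_pipeline("code_checked_in", rules))
```

A real build definition adds conditions, credentials, and environments on top of this, but the trigger-to-action skeleton is the core idea.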

Different build engines have different strengths and it is possible that your organization uses more than one of them. In fact, we developed our NoOps/digital developer product called Catapult to help abstract and alleviate the stress of managing multiple build servers and other resources in order to further streamline the continuous integration process.

The other aspect of a solid CI program is a protocol followed by all team members. This is very important because if the team does not use the correct toolchain, the CI program won’t work and its benefits are lost. For example, if the stakeholder from the Monday morning meeting had simply emailed his request directly to the developer, that developer, eager to please, might have coded the feature and checked it directly into the source control repository without following the proper branching protocol. This could cause merge conflicts requiring manual review, and might fail to trigger the automatic tests, creating the very real possibility that bugs slip through to production and force site downtime. The good news is that there are some pretty great tools out there to make adherence to a CI protocol easy. We are, of course, partial to Catapult, but regular ol’ VSTS or Jira are pretty good as well.

If you are interested in instituting a CI program at your enterprise but don’t know where to start, please feel free to contact us for help. We are experts in the field of CI and we can either help you design and roll out a custom CI program or implement a licensed instance of Catapult to make CI (and DevOps in general) feel like magic.

If you haven’t already heard the term “NoOps” as it pertains to enterprise software development and delivery you probably will soon. NoOps is an emerging movement that seeks to relieve a bottleneck created by traditional IT operations and on-premise application hosting by utilizing solutions rooted in automation and cloud-based infrastructure. At Polyrific, we have developed an outstanding NoOps solution called Catapult and we offer this article in hopes that it helps you better understand why Catapult is such a big deal.

From DevOps to NoOps

Perhaps the best way to begin understanding the NoOps movement is to first understand the DevOps movement. The term “DevOps” is an amalgamation of “Development” and “Operations” and refers to the interplay between software developers and IT operations during the process of deploying applications to the world. In every enterprise, it is necessary for these two departments to stay close to one another in order to best serve the needs of the business.

At most enterprises, responsibilities for developers generally include the following:

  • Work with stakeholders to understand the needs of the business
  • Distill those needs into requirements and specifications
  • Develop applications that fulfill said requirements

By contrast, IT operations are generally responsible for interfacing with network hardware:

  • Allocation & management of server resources
  • Fault planning & monitoring
  • Security & compliance
  • Device management

Obviously, applications that are developed to suit the needs of the business have to be deployed somewhere so that they can be consumed, and this is where the interplay between the developers and IT operations managers comes in: they must work together to take the developers’ work and deploy it to the world on their enterprise’s resources. This would make perfect sense if the picture were that simple but, as we will see in the next section, the reality is a bit more complicated.

Agile & Continuous Deployment

In the early days of enterprise software solutions, very few enterprises created custom software solutions or applications of their own. However, as workplace environments have become more dynamic and reliant on smart hardware and software solutions, the demand for the rapid release of custom software applications has grown dramatically. The Agile movement was largely a response to this exponential growth in application demand, and it is founded on principles inspired by the Silicon Valley “fail fast & fail early” philosophy. Gone are the days of months of planning, tedious software architecture design, and release schedules that follow a waterfall model into a deployment phase given equal weight by the IT operations team. Today’s software development teams are expected to respond immediately to a seemingly never-ending stream of features and demands requested by the business.

Often, projects are started as bare-bones applications that are immediately thrust into production environments where they will be constantly updated and expanded upon as the business requirements evolve. This sounds great, but it presents a few challenges to software development and IT operations teams, especially with regards to the quality of the end-user experience and application uptime. To counter this, the development and ops teams employ a set of automation tools and checkpoints, collectively referred to as “Continuous Integration” or “Continuous Deployment,” that smooth out the problems caused by rapid iterations in the software development life cycle. For example, when properly configured, a CI pipeline can trigger a series of automated tests whenever a developer checks in new code to ensure that the new code does not break anything or cause “regression” bugs.

The (Traditional) IT Bottleneck

IT operations experts are fantastic but, in our view, their role is best executed when the evidence of their work is everywhere while their presence is not so apparent. A good server at a restaurant will keep your glass full and your food coming without you noticing them much at all, and it should be the same with IT operations managers. Sometimes, though, often through no fault of their own, this is not the case. Without considerable depth of automation in your software development life cycle (SDLC), it becomes necessary for the development team to spend significantly more time with the IT operations team in order to coordinate downtime, deployments, rollbacks, and so forth. This is especially true in the case of on-premise deployments. This close coupling between IT ops folk and the developers is bad for at least three reasons:

  1. It takes the developer’s focus away from understanding the needs of the business stakeholders
  2. It cuts into development time
  3. It can influence the engineering and delivery schedule of the application

Given the above, you can probably start to see where this is headed: interaction between development and IT operations should be automated so that the software engineers can remain focused on what they do best: delivering application-based solutions that serve the immediate needs of the business.

NoOps Produces Better Outcomes

So, in order to respond to the ever-changing demands of the business, development teams must be capable of quickly organizing the stakeholders’ needs into business requirements and then parlaying those requirements into working code that is tested, quality-assured, accepted by the end-user, and deployed into the production environment on a frequent and recurring basis, all without being slowed down or distracted by hardware and deployment challenges on the IT ops side of things. Does this mean that IT operations professionals must be removed from the SDLC? Of course not. What it does mean is that IT operations personnel should join forces with the developers to implement game-changing solutions that automate the business of getting the developers’ changes into production with very little interfacing required between development and operations.

In a NoOps world, developers don’t check with IT operations before deploying code or to schedule downtime. In fact, they don’t deploy code at all–they simply check their changes into source control and the rest happens automatically, behind-the-scenes, just like the server who always keeps your drink full without your noticing they were there at all. Similarly, developers do not need to request the allocation of new resources from the IT department. They can, in theory, “spin up” a new ecosystem of server and database environments for a special purpose app while they sit with the stakeholder during a requirements gathering session.

The Catapult Digital Developer & NoOps Solution

As previously mentioned, we have developed a software solution called Catapult that takes automation of enterprise software delivery to the extreme. Using Catapult, even non-technical stakeholders can create new application projects on a meta-level that immediately spin up server resources using popular cloud platforms such as Azure and AWS. Catapult then allows the entry of high-level data models in order to populate databases (or it can connect to existing ones) and to generate and deploy comprehensive codebases, all without the user knowing how to write the simplest of SQL queries.

Like the restaurant server that deftly keeps your needs satisfied without making his or her presence known, Catapult allocates hardware resources, creates codebases, sets up source control repositories, allows stakeholders to manage content and seed test data, manages branching strategies, communicates with the engineering team members to let them know of code changes, and pretty much anything else a competent developer and IT operations professional on your team would do. That is why we refer to Catapult as the “enterprise digital developer”.

If you’d like to learn more about Catapult or any of our other software development solutions, please contact us or call us at 833-POLYRIFIC.

The 2018 Consumer Electronics Show is now underway in Las Vegas, Nevada. Each year, CES brings to the world stage emerging technologies that will soon power the way we live, work, and play. Here are the buzz-worthy technology trends at CES this year:


5G

Of notable buzz is the expansion of 5G New Radio (NR) cellular data transfer and millimeter-wave technology. Five years ago, the upgrade to 4G felt like a big deal, but 5G is like nothing we have seen before. Whereas 4G can transfer data at 100 Mbps, by 2020 5G will transfer data at a searing 10 Gbps. To put this into perspective, 10 Gbps data rates will allow you to download a two-hour-long high-definition movie to your smart device in about three seconds.
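The back-of-the-envelope math behind that claim, assuming a two-hour HD movie weighs in at roughly 4 GB (an assumption; actual file sizes vary with codec and bitrate):

```python
link_gbps = 10                    # 5G peak data rate, gigabits per second
movie_gb = 4                      # assumed movie size, gigabytes
movie_gigabits = movie_gb * 8     # 1 byte = 8 bits, so 32 gigabits
seconds = movie_gigabits / link_gbps
print(seconds)                    # 3.2
```

At 4G’s 100 Mbps, the same arithmetic gives over five minutes, which is why the jump feels so dramatic.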

Such high data transfer rates should catch the US up to other areas of the world that have newer (and therefore faster) data infrastructure like South Korea, Japan, and Singapore. The importance of 5G speed isn’t in the fact that we can download more media in less time–5G is important because of the industries it will enable such as streaming 8K video for digital medicine, data streaming for self-driving cars, mega-encryption for the Internet of Things, and so forth.

AI (again)

Be prepared to hear more about AI now and for the next several CES conferences. Specific intelligence, that is intelligence trained for a very specific purpose, is now a mature technology and one that you most likely already use on a daily basis. There is a heavy focus this year on the application of AI to building better and more conversational digital assistants like Alexa, Siri, Cortana, and “Hey Google” (seems Google dropped the additional syllable in “OK Google”). 

As AI goes from specific to general (a process that will take many more years), conversational interfaces become more, well, conversational. For example, instead of “Hey Google, find Italian restaurants”, we would have, “Hey Google, I want to go out tonight. The weather is going to be bad so I don’t want to travel far from home. Just go ahead and make a reservation somewhere close–you know I love Italian food but Mexican is fine as well”.


Robotics

AI and robotics are the peanut butter and jelly of the tech world. You can’t have efficacious robots without strong AI. AI has come a long way in the last few years, and this is giving rise to a whole new family of robotics here at CES this year. There have already been unveiling events for several humanoid robots which, like their predecessors, have been clunky and prone to errors; however, the more purpose-built robots geared toward specific industrial or practical purposes are faring much better. Among such technologies are “smart baggage” and self-driving vehicles. Check back for more detailed articles on such robotics in the future.

Virtual & Augmented Reality

Virtual reality is still limping its way to mass adoption, with Sony announcing that just under 3 million PlayStation VR units have shipped as of the holiday 2017 season. Many of the big names, such as Oculus and HTC, have announced lower-cost and self-contained VR units in a move to catch up with Sony, who currently dominates the space. In our view, VR is still a ways off in terms of mass commercial adoption; however, there are interesting applications, such as therapy for post-traumatic stress disorder, that we believe will be useful in the near term.

By contrast, augmented reality technology is just beginning to sprint toward mass commercial adoption. When you think of augmented reality, think about viewing the world through the window that is your smartphone rather than through special glasses (though both are happening). What we are seeing here at CES are several applications wherein ordinary smartphone owners can use the phone to overlay useful information onto the real world, like where the nearest restroom is. We will be adding more articles about augmented reality in the coming weeks.

Digital Therapeutics

Digital therapy is another big topic at CES 2018. The term “digital therapeutics” encompasses all types of sensor-based diagnostics that enable virtual medicine. At Polyrific, we view emerging technologies in digital therapeutics and virtual medicine as essential for the well-being of US citizens in our changing healthcare landscape. We will be publishing articles on digital therapy in the future, but essentially this topic involves gathering personal health data from a variety of sensors in our smart devices and checking that information against oceans of data to indicate trends and perhaps even make a diagnosis. Additionally, with your permission, digital therapy enables doctors from across the world to review your medical history and deliver a consultation which, depending on your healthcare situation, might be critical to your well-being.

Internet of Things (IoT)

The Internet of Things is nothing new to CES and is prevalent once again this year as it continues to expand and serve as the world’s digital nervous system. Of particular focus this year are the IoT implementations that drive smart cities and energy conservation.

Various Improvements to Consumer Electronics

As you might imagine, there are many fun updates to consumer technology being announced at CES 2018. We won’t go too deep into these areas but a few highlights include 8k video, thinner, lighter, and more powerful laptops, hand-held mini-camcorders with built-in stabilization gimbals, and new ways to enjoy sports in virtual reality.

So these are the primary trends driving CES 2018! Stay tuned throughout the week and follow @Polyrific on Twitter for more CES coverage.

The story of Polyrific began back in 2011 when company founder Matt Cashatt was thinking of a name for a polymorphic database concept and landed on the portmanteau “Polyrific” as a great way to describe a product that could make many different facets of enterprise data management faster and easier. It didn’t take long for Matt to decide that the name, and the concept behind it, was bigger than any single product: so many different facets of enterprise software creation and management need to be made faster and easier. And with that, a brand was born.

Since those early days, we have grown into an enterprise-focused technology company that specializes in software development, machine learning, and DevOps. Our original vision is woven into everything we do: we constantly streamline and perfect the way custom software is designed and delivered so that the process becomes faster, easier, and more economical with each project. Our imperative is to stay close to our clients and understand their needs clearly while continuing to develop the game-changing technologies that delight them.

This latest website of ours was designed to give our clients, colleagues, and friends insight into contemporary technology topics that today’s enterprises must embrace if they hope to stay relevant in the marketplace as well as to stimulate ideas related to these technologies. Here you will find engaging articles intended to quickly get you up-to-speed on such topics, as well as the ways in which Polyrific can help guide your enterprise into territory that, for many, may be unfamiliar. We have also created high-level pages to help our new guests understand the types of services that Polyrific can offer them such as custom software development, general technology consulting, and on-premise DevOps automation.

Perhaps our most important corporate value is that “we go farther together”.  This value is meant for not only our internal team members but for our clients and friends as well. We hope to be a catalyst for positive and impactful change that helps your enterprise soar to new heights by aggressively growing our expertise and offerings in machine learning, data science, bots, personal assistants, and new form factors such as the Amazon Echo Show, which we believe will have far-reaching uses in the enterprise environment. We’ll bring to the table the knowledge, expertise, and even some good ideas. You bring the desire, imagination, and vision for an incredible future.

We are glad you are here and hope to see you back often. We would like to hear your feedback about our new website and hope you will share your thoughts and suggestions about any section you find interesting.