As 2016 comes to a close and we begin to look toward 2017, our survey of the current digital landscape indicates that the new year will bring new challenges as technology continues to evolve at a breakneck pace.
Alongside Agile's continued reign as the dominant development methodology, we see rapid adoption of microservices, increased use of services such as GitHub and Docker, and implementation of continuous delivery (CD) practices. These innovations have left traditional performance management practices unable to keep pace with the needs of today's technologies. Many organizations that have gone digital to move faster and reach a global audience have seen their product quality deteriorate despite their best efforts to prevent it.
Given this, today’s blog post will address some of the changes your technology team will need to make to be successful in 2017.
Reevaluate Your Technology Stack
When setting up their monitoring systems, many organizations begin by focusing on their infrastructure. As financial resources expand, they then turn their attention to their applications, the transactions between end users and their applications, and finally, their end users’ experiences.
However, infrastructure is rarely the bottleneck today. Many companies now rely on third parties to meet their hardware needs, and these vendors can easily scale resource availability up as their clients require. Additionally, the content shown to visitors is less and less centralized in today's digital world: content delivery increasingly depends on both first and third parties, each with its own systems in place. A given infrastructure provider is therefore responsible for hosting less and less, and it makes correspondingly less sense to devote the majority of your resources to this area.
In today’s digital world, inverting the traditional way of spending on monitoring yields the most bang for your buck, so to speak. By beginning with monitoring of the end user experience and then moving down to transactions between the user and the application, the application itself, and, finally, infrastructure, you can maximize attention on areas where problems are most likely to pop up and cause issues.
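As an illustration, this top-down approach can be sketched as a synthetic check that times a full page fetch (the end-user view) and only points you at lower layers when the user-facing budget is missed. This is a minimal sketch: the URL, budget, and layer names are illustrative, not any specific product's API.

```python
# A minimal sketch of top-down monitoring: measure the end-user
# experience first, and drill into lower layers only on failure.
import time
import urllib.request

# Layers ordered from most to least user-facing.
LAYERS = ["end user experience", "transactions", "application", "infrastructure"]

def check_page(url: str, budget_s: float = 2.0) -> dict:
    """Fetch a page and report whether it met the user-facing time budget."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    elapsed = time.monotonic() - start
    return {"url": url, "elapsed_s": round(elapsed, 3), "ok": elapsed <= budget_s}

def next_layer_to_inspect(result: dict):
    """If the top-level check fails, the next layer down is the first
    place to look; if it passes, nothing deeper needs attention yet."""
    return None if result["ok"] else LAYERS[1]
```

Real synthetic monitoring tools measure far more than total fetch time, but the ordering of attention is the point: start from what the user actually sees.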
Move Your Performance Testing and Monitoring Processes to an Earlier Point in Your Software Development Life Cycle
[Chart: The cost of a software defect by the SDLC phase in which it is identified. Source: IBM Systems Sciences Institute]
Most of a software engineer’s time is spent fixing defects. Each minute spent on bug fixes is a minute not spent building new features and improving your digital application. However, this view doesn’t fully address the differing cost of bugs found at various parts of the software development life cycle (SDLC). Generally speaking, the earlier a bug is discovered and rectified, the cheaper it is to do so. According to IBM’s Systems Sciences Institute:
An error found after product release costs four to five times as much to fix as one found during testing, and an error identified during the maintenance phase can cost up to 100 times more than one uncovered during design.
In today’s competitive digital landscape, your profit margins narrow as new competitors enter the field. As such, it is imperative for you to keep costs as low as possible to maximize your profits and remain competitive. Given the amount of time spent on software engineering and the frequency with which defects are introduced into code, you can reduce the cost of development and improve your product by implementing testing and monitoring processes at the earliest possible point in your SDLC.
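To make the cost difference concrete, here is a small illustrative calculation using relative cost multipliers commonly attributed to the IBM figures above. The exact multipliers vary by study, so treat these numbers as assumptions, not data from this post.

```python
# Illustrative relative cost multipliers by SDLC phase (assumed values,
# loosely based on figures often attributed to IBM Systems Sciences Institute).
RELATIVE_COST = {"design": 1, "implementation": 6.5, "testing": 15, "maintenance": 100}

def fix_cost(phase: str, base_cost: float = 100.0) -> float:
    """Estimated cost of fixing one defect, given the phase it is found in.

    base_cost is the (hypothetical) cost of a fix made during design.
    """
    return base_cost * RELATIVE_COST[phase]
```

Under these assumptions, a $100 design-phase fix balloons to $10,000 if the same defect survives into maintenance, which is the whole argument for shifting testing left.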
Consider Performance Needs as Part of the Software Development Process
Today, performance is paramount, and users are becoming more and more demanding. To remain competitive, you must deliver your product in the most performant manner possible. One way to ensure this area receives the attention it needs and deserves is to incorporate performance considerations into the software development process itself, rather than treating them as a separate entity or assigning them to another team. This is part and parcel of moving testing and monitoring to an earlier part of your SDLC.
To be sure, optimizing performance requires “specialist knowledge in order to understand the issues and tooling that are at the source of the issues,” but, according to Andy Still:
Performance…is a consequence of legitimate activity and once resolved should stay resolved. To that end, once the methodologies for addressing performance issues are understood they can be applied in multiple situations. These elements are not beyond the capabilities of a good developer, given time and space to do so.
Many companies have already incorporated this view into their development culture; best practices have become standard practices, yielding better-performing products. To remain competitive, you should consider phasing in the same changes if you have not already done so, especially with regard to automation. Just as you likely have automated testing that keeps bugs and regressions out of your deliverables, you can (and should!) automate testing of your site performance so that the build fails if a release misses the metrics you specify. Strive to give your developers as much information as possible so that site optimization becomes an actionable item for them.
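A performance gate of this kind can be sketched in a few lines. The metric names and budget values below are invented for illustration, and a real pipeline would feed in measurements from your testing or monitoring tool.

```python
# A sketch of a performance-budget gate for CI. Metric names and
# thresholds are hypothetical; substitute the ones your tool reports.
BUDGETS = {"load_time_ms": 2000, "page_weight_kb": 1500, "requests": 80}

def gate(metrics: dict) -> list:
    """Return human-readable budget violations; an empty list means pass.

    Metrics absent from the report are treated as 0 (i.e., not flagged).
    """
    return [
        f"{name}: {metrics.get(name)} exceeds budget {limit}"
        for name, limit in BUDGETS.items()
        if metrics.get(name, 0) > limit
    ]
```

In CI, you would call `sys.exit(1)` whenever the returned list is non-empty, causing the build to fail and keeping the regression out of the release.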
One way to do this is using the Rigor Optimization plugin for Jenkins, which tests your site’s performance as a discrete part of the build process. In addition to failing a Jenkins build if the metrics you specify aren’t met, the plugin tags all tests and snapshots in your account with details about the build project, build number, and specific details about the test’s pass/fail status.
Use Monitoring/Development Tools that Support Open APIs
Context matters when it comes to monitoring your applications, and to capture that context, the technologies you choose should support open APIs. Without them, you face limitations that prevent your tools from collecting and sharing relevant event-related information. For example, you want the monitoring and development products you use to collect information from other tools, including logs, and to communicate things like configuration settings. This allows you to fully automate your development workflow, which becomes more and more important as you incorporate monitoring into the earlier stages of the SDLC.
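As a sketch of what sharing context over an open API might look like, the snippet below assembles a deployment event and POSTs it as JSON. The endpoint and payload shape are hypothetical; any real monitoring tool defines its own event schema, so check its API documentation.

```python
# A sketch of pushing deployment context to a monitoring tool over an
# open HTTP API. Endpoint URL and payload fields are hypothetical.
import json
import urllib.request

def build_deploy_event(service: str, version: str, commit: str) -> dict:
    """Assemble the context a monitoring tool can use to annotate its data."""
    return {"event": "deploy", "service": service,
            "version": version, "commit": commit}

def post_event(endpoint: str, event: dict) -> None:
    """POST the event as JSON to the (hypothetical) events endpoint."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

With an event like this recorded, a spike on a dashboard can be correlated with the exact deploy and commit that preceded it.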
Remember that Less is More
While you may be tempted to monitor every aspect of your technology stack, remember that less is more. Your resources are finite, so focus them on collecting data that has the greatest impact on your business and feeds into the loop you use to make development decisions. Generally speaking, this data should meet one or more of the following criteria:
- Actionable by your team
- Influential on your development decisions
- Supportive of your business’ key performance indicators (KPIs)
There will, of course, be exceptions to these guidelines within your company, but the idea is to spend some time evaluating what needs to be monitored and what doesn't, so that your resources are spent optimally.
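One lightweight way to enforce this discipline is an explicit allowlist of metrics that have already passed your evaluation against the criteria above. The metric names here are invented for illustration.

```python
# A sketch of pruning a metric catalog with an allowlist so that only
# data tied to decisions or KPIs is collected. Names are invented.
KEEP = {"checkout_latency_ms", "error_rate", "conversion_rate"}

def prune(catalog):
    """Return only the metrics your team has agreed are worth collecting."""
    return [metric for metric in catalog if metric in KEEP]
```

Reviewing the allowlist periodically keeps the set honest as your KPIs and development decisions change.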
To remain competitive in 2017’s digital landscape, your technology team may need to make some significant changes, as traditional performance management practices become less and less able to keep pace with today’s innovations, such as adoption of microservices, mobile-first development, use of continuous delivery methodologies, and so on.
For customized information on the performance of your technology stack and how you can implement a continuous performance management strategy that works in today's digital world, reach out to us.