Best Practices in Reporting Information


Best Practices in Reporting Information: Software Engineering

Number of change requests resulting from changing requirements, number of person-hours required to complete a change, number of commercial components used, number of reusable components developed, and number of bugs uncovered are some metrics used in measuring financial and non-financial performance for software engineering in North America.


After reviewing numerous magazine articles, research papers, and course outlines from accredited universities, we found common agreement among published experts that the practices listed in the findings are the best practices in software engineering. We selected the ‘best’ practices by comparing sources and choosing those that were common across them. We have attached relevant metrics to each of the best practices identified. We included metrics and best practices that apply to North America (the US and Canada), with a focus on small and mid-size businesses within the software engineering industry.


Understanding a software engineer’s best practices requires an understanding of software architecture. Just as an architect draws the design of a house before the engineers draw the detailed plans, a software architect draws the software models that the engineer will use to design the details. Software architects are responsible for high-level decision-making with regard to technical standards, including software coding standards, platforms, and tools. They are concerned with aspects including aesthetics, functionality, maintainability, performance, resilience, reuse, and usage.


A model is a simplification of reality. Software architects model because of the difficulty of comprehending the complexity of a system in its entirety. Models help in the visualization, specification, construction, and documentation of the behavior and structure of a system's architecture. A model describes a system from a specific perspective.


Software engineering starts with the architect’s models and covers the design, development, maintenance, testing, and evaluation of computer software. Software engineers are responsible for solving customers’ problems through the systematic development and evolution of large, high-quality software systems. They have to work within specific timelines and budgetary allocations. An engineering process involves the application of well-understood techniques in an organized and disciplined way. Many well-accepted practices are standardized, such as those from the IEEE or ISO. Most development work is iterative.


Software engineering practices are applied to enable a better understanding of large systems. Collaboration and coordination are required in the process of managing large, high-quality software systems. The key challenge for engineering is distributing the work and ensuring that the parts of the system work together accurately.


Because resources are finite, resource estimates must be as accurate as possible. Many projects have failed due to inaccurate estimates of cost and time.


Best practices in software development are a set of empirically proven approaches that are combined to address the underlying causes of software development problems. They are the commonly applied tactics and solutions in the industry by successful organizations.
Iterations are time-boxed, and the first few iterations provide the information necessary for scheduling further iterations. Iterative development allows critical risks to be resolved before large investments are made and enables users to provide early feedback, so miscommunications and problems are solved early. In iterative development, testing and integration are done concurrently, and errors are caught early. The software is rolled out in ongoing implementations, which give the team objective short-term goals and milestones. The metric for this process should be the number of person-hours for each category of employee (including, but not restricted to, architect, engineer, developer, and project manager) per iteration.
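As an illustration of the person-hours-per-iteration metric, hours could be tallied from timesheet rows; the row format and role names below are our own assumptions, not part of any cited practice:

```python
from collections import defaultdict

def hours_per_iteration(timesheet):
    """Sum person-hours per (iteration, role) from rows of (iteration, role, hours)."""
    totals = defaultdict(float)
    for iteration, role, hours in timesheet:
        totals[(iteration, role)] += hours
    return dict(totals)

rows = [
    (1, "architect", 20), (1, "engineer", 120),
    (1, "engineer", 40), (2, "engineer", 90),
]
totals = hours_per_iteration(rows)
# {(1, 'architect'): 20.0, (1, 'engineer'): 160.0, (2, 'engineer'): 90.0}
```

A tally like this, kept per iteration, gives each role's hours an objective record to compare across iterations.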

Requirements management is the continuous process that involves the documentation, analysis, tracing, prioritization, and agreements on requirements, and then control of change and communication with the relevant stakeholders. Requirements are dynamic and should be expected to change during software development. As users' understanding and visualization of the system grows, their requests evolve. Part of customer satisfaction is the ability to respond to these requests. Software engineers are responsible for maintaining forward and backward traceability of requirements. The two metrics for this process are the number of change requests resulting from changing requirements and the number of person-hours required to complete the change.
Using components permits reuse of common system processes. Enhancing that value is the fact that there are thousands of commercially-available components. Also, the use of components/services improves maintainability and extensibility and allows for a clear division of work among teams of developers. The number of commercial components used and the number of reusable components developed are the two metrics for this process.
When a software engineer models the system, the result is a detailed representation of a system, module, or function from a specific standpoint. This makes analysis more comprehensive, faster, and more productive. Models can also convey the same information as written specifications in a more compact and compressed form. The models visualize the requirements, contributing to better and faster comprehension by readers. Visual modeling improves the team’s ability to manage software complexity. Metrics for this practice could include a survey of both team and clients to determine the success of the models in communicating concepts.
The definition of quality must be documented and agreed upon with the client. It is defined as the delivery of a product that meets or exceeds agreed upon requirements through specific objective measures. Software problems are expensive to identify and repair after deployment. Part of this process for the software engineers is to develop test suites for each iteration and ensure these tests check for functionality, reliability, and performance. Metrics for this practice include the number of bugs uncovered (which assesses coding accuracy) and severity of the bugs.
Precise control is vital to ensuring that parallel developments do not degrade into chaos. A software engineer establishes an enforceable change control mechanism in which change requests are prioritized and the impact of each change request is assessed. A clearly understood approval process for introducing change within an iteration is also put in place. Metrics for this practice require a tracking process that includes the number of changes, the cost of each change, and tracking of those costs against the amount budgeted for change control.
Each of the best practices should be applied as reinforcement for others. Although it is possible to use one best practice without the others, it is recommended that they are applied together. The whole is much greater than the sum of the parts.

Best Practices in Reporting Information: Advertising

The key metrics used by small to medium-sized advertising companies to measure financial and non-financial performance in North America were found to be the following: customer acquisition cost (CAC), retention rate, customer lifetime revenue (CLR), return on advertising spending (ROAS), margin, employee turnover rate, and innovation measure.


We started our research by looking for directly available lists of the top metrics or key performance indicators (KPIs) that small to medium-sized advertising companies use to measure financial and non-financial performance in North America. We searched various sources: digital marketing industry reports such as those from Digital Agency Network, Moat, Strategy Online, Ad Weekly, Adage, Martech Today, and other related sites; business publications such as Forbes, Business Insider, and others; media outlets such as CNBC, The Globe and Mail, CNN, and others; and consulting sites like Deloitte, PWC, and other relevant sources. Based on this search approach, we were not able to find directly available information on the top metrics or KPIs that small to medium-sized advertising companies use to measure financial and non-financial performance in North America. What we found were scattered pieces of information on several metrics or KPIs that are used across industries and for all company sizes. We also found a few metrics for small to medium-sized companies in general. Some reports we came across also discussed the metrics used by advertising and marketing companies in general. Furthermore, the reports found were mostly for application in the United States, and only a few were for Canada. Meanwhile, the majority of the metrics found were financial-related; only very few reports pertained to a limited number of non-financial KPIs.
Since we were not able to find several reports on KPIs meeting all the selected criteria, we tried to find the actual KPIs used by several small to medium advertising businesses in Canada and the US, such as Ruckus, Flightpath, Springboard Advertising, BeCreative, Studiothink, Original Ginger, Tugboat, and other similar companies. Based on this search approach, we were not able to obtain any information on the KPIs used by these companies. This may be due to the internal nature of this information.
We also looked for interview snippets, statements, and surveys to determine whether company executives, marketing experts, or other professionals have given information on the KPIs used by small to medium-sized advertising agencies in North America. Based on our search, we were not able to locate surveys or interviews that provide conclusive information on digital advertising agencies' top KPIs in the region. Given the limited number of reports that consistently show the top KPIs used by advertising agencies in North America, particularly the non-financial metrics, we based our choices on the recurring KPIs that appear in several reports for digital agencies, for specific business sizes, and for North America. We then inferred that since these KPIs were also vouched for by CEOs, founders, and key company executives, they can approximately represent the actual key metrics used by small to medium advertising agencies in North America.
The following are the KPIs chosen based on the strategies above:

Financial KPIs

1. Customer Acquisition Cost (CAC)

CAC represents the cost of onboarding new customers. It can be calculated by selecting a specific time period and dividing the cost of marketing and sales within that period by the number of customers acquired during it. The lower the CAC value, the healthier the business. This metric cannot be evaluated by itself, though, to gauge the actual health of the business; it needs to be assessed together with the retention rate and the customer lifetime revenue explained below.
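A minimal sketch of that calculation (the function name and figures are illustrative, not from any source):

```python
def customer_acquisition_cost(marketing_cost, sales_cost, customers_acquired):
    """CAC: total marketing and sales spend for a period divided by
    the number of customers acquired in that same period."""
    if customers_acquired == 0:
        raise ValueError("no customers acquired in the period")
    return (marketing_cost + sales_cost) / customers_acquired

# Example: $40,000 marketing + $20,000 sales spend, 120 new customers
cac = customer_acquisition_cost(40_000, 20_000, 120)  # 500.0 per customer
```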

2. Retention Rate

This metric is the inverse of the churn rate: it measures the percentage of customers who are still supporting the business after a specific time period, while churn measures those who left. The rate can be obtained by subtracting the number of new customers gained from the total number of customers the business has at the end of the period, and then dividing the result by the number of customers at the start of the period. The target is to keep this rate very high to ensure that the business is not bleeding customers.
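A sketch of the calculation described above (the numbers are illustrative):

```python
def retention_rate(start_customers, end_customers, new_customers):
    """Percentage of period-start customers still with the business:
    subtract new customers gained from the period-end total, then
    divide by the period-start count."""
    return (end_customers - new_customers) / start_customers * 100

# Example: 200 customers at the start, 210 at the end, 30 of them new
rate = retention_rate(200, 210, 30)  # 90.0 -> 180 of the original 200 stayed
```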

3. Customer Lifetime Revenue (CLR)

CLR is also referred to as Customer Lifetime Value. This metric represents the revenue from recurring customers. This KPI is difficult to estimate during the early phase of the business; however, once a substantial amount of data is available, certain conclusions can be drawn from it. Obtaining the CLR is crucial since it predicts how low the business can go on the CAC mentioned above. This data can also give some picture of the level of customer service a business provides. Some customers typically opt out due to poor customer service; with this metric, companies can tell whether they need to address customer service concerns.
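One common approximation (our sketch only; real CLR models can be considerably more involved) multiplies average revenue per period by the average retention length:

```python
def customer_lifetime_revenue(avg_revenue_per_period, avg_periods_retained):
    """Rough CLR estimate: average revenue per customer per period
    times the average number of periods a customer stays."""
    return avg_revenue_per_period * avg_periods_retained

# Example: $250/month in recurring revenue, 24-month average lifespan
clr = customer_lifetime_revenue(250, 24)  # 6000
```

Comparing this figure against CAC shows how much acquisition spend a customer can justify.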

4. Return on Advertising Spending (ROAS)

For small to medium agencies that are still starting out, advertising budgets are considered investments and should yield some return. The ROAS metric represents that return. ROAS can be calculated by dividing the sales amount by the advertising spend. This metric should be evaluated per channel, though, in order to determine the channels where most sales are generated.
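A per-channel sketch of the calculation (the channel names and figures are made up):

```python
def roas_by_channel(sales, spend):
    """Return on advertising spend per channel: sales divided by spend."""
    return {channel: sales[channel] / spend[channel] for channel in spend}

result = roas_by_channel(
    {"search": 12_000, "social": 4_500},
    {"search": 3_000, "social": 2_250},
)
# {'search': 4.0, 'social': 2.0} -> search yields the most sales per dollar
```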

5. Margin

Margin represents the most important financial metric: it is the business' bottom line. Before a company can expand, the business margin should be evaluated first. There are many methods that can be employed to calculate the margin. In general, the revenue of the business should be greater than the sum of the cost of goods sold and the operating expenses such as wages, rent, and fixed costs. If the revenue is lower, the business should not be adding more expenses, as this will result in a negative bottom line.
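Following the general method described above, a sketch (figures illustrative):

```python
def margin(revenue, cost_of_goods_sold, operating_expenses):
    """Bottom line: revenue minus COGS and operating expenses
    (wages, rent, and other fixed costs)."""
    return revenue - (cost_of_goods_sold + operating_expenses)

m = margin(500_000, 180_000, 260_000)  # 60_000: a positive bottom line
```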

Non-Financial KPIs

6. Employee Turnover Rate (ETR)

ETR is determined by dividing the number of employees who departed from the company by the average number of employees still with the company. If the ETR is high, a business should determine whether there are concerns among its workforce and address any issues with culture, compensation, or other matters.
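A minimal sketch of that ratio (numbers illustrative):

```python
def employee_turnover_rate(departures, average_headcount):
    """Employees who left divided by average headcount, as a percentage."""
    return departures / average_headcount * 100

etr = employee_turnover_rate(6, 40)  # 15.0 percent
```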

7. Innovation Measure

This metric indicates how involved the business is in coming up with innovative initiatives, whether in the form of new products or new ideas executed to grow the business. It can be measured by determining the return on investment of these innovative ideas.

Best Practices in Reporting Information: Software Development

Commonly used metrics for software development can be divided into three major categories: delivery, maintenance, and requirement effectiveness. Software delivery metrics include sprint burndown, team velocity, throughput, and cycle time. Maintenance metrics include lead time, mean time to repair, code coverage, and bug rates. Requirement effectiveness can be measured by comparing task volume to averages and by tracking rework.


We started by determining the top ten practices by searching through numerous articles, textbooks, and reports. Through this search, we listed various recommended metrics. We assumed that no mature metric-gathering operation is in place; therefore, we removed the more complex practices and ranked the remaining ones by number of mentions and importance to the process. From there, we categorized these metrics into delivery, maintenance, and requirement effectiveness. These categories are used to map or measure the financial metrics for software development, which are included at the bottom of each category. Finally, non-financial metrics are included along with a quote from one of the articles describing the best way to use those metrics.



In order to be effective in using metrics to determine a company’s abilities, it is crucial that these metrics be designed with the team, not for them. Each of the ten best practice metrics listed below can have a financial component by calculating the resources required to complete the metric.
Additionally, we have provided two pieces of information which drill down on the metric provided and we believe will be of value if a company decides to implement these best practices.



Sprint burndown is one of the key metrics for software development. A burndown report communicates the character and complexity of work throughout the sprint based on story points (often defined as a single user-defined process). The goal of the team is to consistently complete all work according to the project plan.


Team velocity tells the “amount” of software the team has completed during a sprint. It can be measured in business processes or hours, and this metric can be used for estimation and planning.
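A sketch of how velocity might be computed from past sprints (a simple rolling average; the numbers are illustrative):

```python
def team_velocity(points_per_sprint, window=3):
    """Average story points completed over the last `window` sprints,
    usable for estimating how much the team can take on next."""
    recent = points_per_sprint[-window:]
    return sum(recent) / len(recent)

velocity = team_velocity([21, 25, 23, 27])  # (25 + 23 + 27) / 3 = 25.0
```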


In software development, throughput means the number of features, tasks or chores, and bugs completed within a period and ready to ship or ready to test. Traditionally, this is where software metrics align with current business goals.
While Todd DeCapua, Chief Technology Evangelist for application development management (ADM), was at HP, his teams managed to achieve a 25% annual increase in code quality and a 100% increase in throughput by redefining software quality based on metrics such as code integrity, customer and operational impact of defects, date of delivery, quality of communication, the system's ability to meet service levels, and canceled defects that eliminated wasted QA time.
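A sketch of the throughput count itself; we assume items carry ISO-format completion dates, which compare correctly as strings:

```python
def throughput(completion_dates, period_start, period_end):
    """Count items (features, tasks, bugs) completed within the period,
    inclusive of both endpoints."""
    return sum(1 for done in completion_dates if period_start <= done <= period_end)

done = ["2023-05-02", "2023-05-10", "2023-06-01"]
n = throughput(done, "2023-05-01", "2023-05-31")  # 2 items shipped in May
```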


Cycle time is the total time that elapses from the moment work is started on an item (e.g., ticket, bug, task) until its completion. The value of tracking this metric includes:
  • To estimate how fast the team can deliver new features to users.
  • It can also be used, by breaking down tasks or items more finely, to gauge the team’s current speed for different kinds of tasks. This analysis allows a manager to identify the exact bottlenecks affecting the team.
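A minimal sketch of the elapsed-time calculation (the date format is our assumption):

```python
from datetime import datetime

def cycle_time_days(started, completed, fmt="%Y-%m-%d"):
    """Days elapsed from the moment work started on an item until its completion."""
    delta = datetime.strptime(completed, fmt) - datetime.strptime(started, fmt)
    return delta.days

days = cycle_time_days("2023-03-01", "2023-03-06")  # 5
```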


The financial component of delivery includes the time required from the development team, management, and administrative staff. Therefore, each metric above should have an average and an actual cost assigned. Any costs associated with tools, technology, or other non-human resources should also be factored in.



Lead time is the time between the request for a (new) feature and its availability to the user.


Mean time to repair measures how fast a company can respond to bugs and deploy fixes to users.
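A sketch of the average over a set of incidents (names and figures are illustrative):

```python
def mean_time_to_repair(repair_hours):
    """Average hours from bug report to deployed fix across incidents."""
    return sum(repair_hours) / len(repair_hours)

mttr = mean_time_to_repair([4, 10, 7])  # 7.0 hours
```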


Code coverage is the amount of code, measured in LOC, that is covered by unit tests. The value of tracking this metric includes:
  • A caution — while code coverage is a good metric of how much testing of software is taking place, it is not necessarily a good metric of how well the software is being tested.
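The LOC-based ratio itself is straightforward (a sketch; in practice these counts would come from a coverage tool):

```python
def coverage_percent(covered_loc, total_loc):
    """Share of lines of code executed by the unit-test suite."""
    return covered_loc / total_loc * 100

pct = coverage_percent(7_500, 10_000)  # 75.0
```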


The bug rate is the average number of bugs generated as a new product or new features are deployed.
Some companies drill down on bug rates using more detailed categories: error count, CPU/memory utilization, response times, transactions, disk space, garbage collection, and thread counts. Each company should determine on its own whether these detailed metrics are necessary.
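A minimal sketch of the top-level rate (figures illustrative):

```python
def bug_rate(bugs_found, features_deployed):
    """Average number of bugs uncovered per newly deployed feature."""
    return bugs_found / features_deployed

rate = bug_rate(18, 6)  # 3.0 bugs per feature
```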


Unfortunately, every software product developed will require corrections, adaptations, or technically required maintenance tasks, with their associated personnel costs. The better the metrics such as lead time, mean time to repair, bug rates, and code coverage, the lower the related costs will be.


Defining effective requirements is the hardest part of the software development cycle, and miscommunication between users and developers is common. To measure how effective the requirements-gathering team is at its job, there are two metrics the company can use:

1. Task Volume + average estimates

“The number of tasks your team can complete in the face of change, compared against the average estimates” will help the company to understand how good the requirements are.

2. Recidivism

A high amount of rework on a piece of code means someone in the workflow didn’t have the same understanding as someone downstream. This is a good indicator of incomplete or inconsistent requirements.


Non-financial metrics for soft issues such as innovation level, employee retention/satisfaction, and customer satisfaction can be defined using some of the categories and sub-categories used above. For example, innovation level can be determined by tracking new requirements from users as well as new ideas from development teams.
Employee retention in software development is affected by many of the metrics above. For example, one of the pieces of advice for retaining developers is not to “wait until the exit interview to ask what’s wrong.”
Finally, measuring customer satisfaction is crucial. This is usually captured in surveys and reported in percentages. It can be helpful to communicate in advance some development and maintenance metrics above to customers before asking them to complete the survey.


Because deciding what to measure is as important as the measurement itself, and because maintaining focus and flexibility is crucial, we have compiled a list of suggestions for adopting measurement:
  • Although metrics are important, they cannot tell the whole story; "only the team can do that."
  • Almost anything can be measured but it is difficult to pay attention to everything.
  • Software improvements are driven by business success metrics, but not the other way around.
  • It is a waste to compare snowflakes.
  • Every feature has its own importance towards adding value whether we measure it or not.
  • Focus on measuring what currently matters.