
Integrated Analytics Playbook: Process

June 1, 2021

Process

Most organizations that build software are familiar with product management, CI/CD, and DevOps. They have likely started to standardize processes around gathering requirements, building MVPs, designing the user experience, writing automated tests, deploying to staging environments, going through QA, and releasing new versions into production. Even so, many large companies are still working to internalize these concepts and ensure their development teams follow best practices. Because of the added complexity analytics brings, this process is typically not well defined for building advanced analytics or business intelligence dashboards.

Understanding the quality of analytics is an ongoing effort. The trouble usually starts with defining what a good analytics solution looks like. When it comes to analytics being “good enough” to use, most people have difficulty articulating what that means, and it will mean something different depending on who you are talking to. This dilemma of an inconsistent definition of quality is where product management concepts become extremely useful. We must think of analytics initiatives as products that serve users, accomplish specific tasks, and are long-lived, with a likelihood of becoming less useful over time as needs shift or data changes. If we think about these requirements upfront, before building and designing solutions, we have a better chance of defining “good,” building to meet those criteria, and maintaining the solution over time.

It is tempting to build quickly, get things out the door, and show progress. These solutions should be built iteratively, but with an understanding of the objective and intended impact. This process can be extremely complex, and it is often challenging to communicate that complexity to the leadership team without getting too deep into the details. But building and using analytics is exactly when the details matter. Organizations should pause and ask, “What could happen if this is wrong or used in the wrong context? What kind of damage could be done?” The answers to these questions might get the leadership team’s attention.

The first step to addressing this problem is taking a look at the typical process to gather requirements, design, build, test, deploy, manage, and monitor analytics within your organization. Let’s start with gathering requirements.

Gathering Requirements

It is vital to have a process to gather and document the answers to the following questions, get sign-off from stakeholders, and associate them with each analytics initiative:

  • Who are the users of the model or dashboard?
  • What exact information needs to be delivered? 
  • What decisions are being made with this information?
  • How often are those decisions made?
  • How do you know if the decision was the right one?
  • How can you track that the information provided influenced a decision?
  • Do you need to be able to explain how the information was generated?
  • How often does the information presented cause an incorrect decision?
  • What is the implication if a wrong decision is made?
  • How should the information be consumed, by whom or by what system, and how often?

These questions often trigger in-depth conversations with stakeholders and users to ensure everyone is aligned and clear on the objectives. This exercise may result in educational discussions where analysts learn more about the business or executives learn more about the analytics process.
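
One way to make this documentation repeatable is to capture the answers in a structured template that travels with each initiative and records sign-off. The sketch below is purely illustrative; the field names and example values are assumptions, not a prescribed format.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AnalyticsRequirements:
        """Documented answers to the requirements questions for one initiative."""
        initiative: str
        users: List[str]                 # who uses the model or dashboard
        information_needed: str          # what exact information is delivered
        decisions_supported: List[str]   # decisions made with this information
        decision_cadence: str            # how often those decisions are made
        success_measure: str             # how we know a decision was the right one
        explainability_required: bool    # must we explain how the information was generated?
        wrong_decision_impact: str       # implication if a wrong decision is made
        delivery_channel: str            # how the information is consumed, to whom or what, how often
        signed_off_by: List[str] = field(default_factory=list)

    # Hypothetical example of a completed template
    requirements = AnalyticsRequirements(
        initiative="churn-dashboard",
        users=["Customer Success leads"],
        information_needed="Accounts with a high predicted risk of churning this quarter",
        decisions_supported=["Which accounts receive proactive outreach"],
        decision_cadence="Weekly",
        success_measure="Retention rate of contacted high-risk accounts",
        explainability_required=True,
        wrong_decision_impact="Wasted outreach effort or missed at-risk accounts",
        delivery_channel="Interactive dashboard reviewed in the weekly pipeline meeting",
        signed_off_by=["VP Customer Success"],
    )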

Design

The answers to the questions above will drive most of your team’s decisions around designing the analytics solution. Should it be a static report? An alert? A predictive model? An interactive dashboard? Understanding these requirements will save your team a lot of time and pain when building and testing your solutions. 

Some of the design elements these requirements may drive:

  • Information Delivery - the mechanism by which information is delivered will largely depend on the business workflow it is being inserted into
  • Accessibility - controls around who can access what and when
  • Automation - depending on the cadence of updates or refreshes, it may not be feasible to run anything manually, and some data pipelines may need to be built and automated
  • Technique - if explainability is required, that rules out some machine learning techniques
  • Monitoring and Alerting - what data needs to be captured to understand impact or correctness? What alerts need to be set up to know when something is not working as intended?
  • Infrastructure - some decisions require large volumes of data, which means the right systems and infrastructure must be in place to run the solution

DevOps for Analytics

Build and Test

Building and testing analytics solutions will depend on what type of solutions are being created (BI dashboard, models, etc.). Regardless, each will require some sort of tool or platform and a set of tests that it must pass before being used. There will likely be many environments to manage, access to specific data sets in each environment, some level of running tests, and a promotion process. Below is a simplified example of what these environments might look like.
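
As a rough sketch, assuming three environments with illustrative data access and promotion gates (the names and gates are assumptions, not a prescribed layout):

    # Illustrative environment layout; adapt the names, data access, and gates to your organization.
    ENVIRONMENTS = {
        "development": {
            "data_access": "sampled or synthetic data",
            "gates": ["unit tests pass", "peer review"],
            "promotes_to": "staging",
        },
        "staging": {
            "data_access": "masked copy of production data",
            "gates": ["integration tests pass", "QA sign-off", "performance check"],
            "promotes_to": "production",
        },
        "production": {
            "data_access": "live production data",
            "gates": [],          # end of the line; monitored rather than promoted
            "promotes_to": None,
        },
    }

    def ready_to_promote(env: str, passed_gates: set) -> bool:
        """A solution moves up only when every gate for its current environment has passed."""
        return set(ENVIRONMENTS[env]["gates"]) <= passed_gates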

The promotion process should connect the environments and establish stage gates that the solution must go through to make it to the next environment. Having an established, consistent, and enforceable process for promotion and testing will enable the organization to build higher quality solutions.

Deploy and Manage

Deploying or publishing an analytic solution means that it will operate and execute within the organization’s production-level standards. The deployment process likely has some technical complexity to ensure everything works as expected in a different environment and with different data sources. Some significant differences between a development environment and a production environment might include the data that is available and its format, the level of access, the automation required to continuously run the solution, the configuration and availability of the infrastructure, and the other applications the solution must integrate with. Because the business or organization runs on the production environments, it is best practice to ensure that the environment and solutions are always available, raise alerts when errors occur, have an operations team and ownership model, and have mechanisms in place to quickly fix errors.

If these aspects of the deployment process are not thought through during development and testing, there could be a significant number of issues during the promotion of the solution that will be challenging to triage. 

Typical errors with deploying machine learning models include:

  • Data schema changes in different environments
  • Lack of support for dependencies in higher-level environments
  • Incomplete monitoring coverage for the model
  • Issues at system integration points
  • Latency issues
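
Many of these failure modes can be caught before promotion with a small pre-deployment check that compares what the model expects with what the target environment actually provides. The sketch below is a hypothetical example; the expected schema and required modules are assumptions.

    import importlib.util
    import pandas as pd

    # Hypothetical expectations for one model; adjust to your own solution.
    EXPECTED_COLUMNS = {"account_id": "int64", "tenure_months": "int64", "monthly_spend": "float64"}
    REQUIRED_MODULES = ["pandas", "sklearn"]  # import names that must exist in the target environment

    def check_schema(sample: pd.DataFrame) -> list:
        """Compare a sample of the target environment's data against the columns and types the model expects."""
        problems = []
        for column, dtype in EXPECTED_COLUMNS.items():
            if column not in sample.columns:
                problems.append(f"missing column: {column}")
            elif str(sample[column].dtype) != dtype:
                problems.append(f"{column}: expected {dtype}, got {sample[column].dtype}")
        return problems

    def check_dependencies() -> list:
        """List required modules that are not installed in this environment."""
        return [m for m in REQUIRED_MODULES if importlib.util.find_spec(m) is None]

    # Run as a stage gate before promotion; any findings should block the deployment.
    sample = pd.DataFrame({"account_id": [1], "tenure_months": [12]})  # monthly_spend missing
    print(check_schema(sample) + check_dependencies())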

Monitoring Analytics Solutions

Monitor

Once deployed, the quality of the analytic solution needs to be continuously monitored to ensure that the predictions and insights it provides are reliable, available, and useful. This is a crucial capability your team will need in order to operationalize analytic solutions effectively. Monitoring can be seen through three lenses: as a model, as a software asset, and as a product.

As a Model

Traditional software assets implement finite and rigid logic. Analytic solutions predict the future and analyze the past. Therefore, systems and mechanisms need to be in place to ensure the solution can react and evolve as the data, and the underlying phenomenon the data represents, change over time.

Machine learning models and other advanced models require this capability in particular. Data scientists and analysts define the performance metrics (confusion matrices, F1 scores, etc.) and need a way to periodically calculate and monitor those metrics over time. They need a process and tooling to hand monitoring definitions to the operations teams for implementation. This lets data scientists and analysts focus more on creating analytic solutions and less on maintaining them.
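
As a rough illustration, that hand-off might be a small job the operations team can schedule: it recomputes the agreed metric on recently labeled data and raises an alert when the metric falls below a threshold. The metric choice, threshold, and data below are assumptions for the sketch.

    from sklearn.metrics import f1_score

    F1_ALERT_THRESHOLD = 0.75  # illustrative value, agreed between data science and operations

    def evaluate_recent_predictions(y_true, y_pred) -> dict:
        """Recompute the agreed performance metric on recently labeled data."""
        score = f1_score(y_true, y_pred)
        return {"f1": score, "alert": score < F1_ALERT_THRESHOLD}

    # Run on a schedule (daily, weekly, ...) and route the result to the monitoring system.
    result = evaluate_recent_predictions(
        y_true=[1, 0, 1, 1, 0, 1],   # outcomes observed after the fact
        y_pred=[1, 1, 0, 1, 0, 0],   # what the model predicted at the time
    )
    if result["alert"]:
        print(f"Model F1 dropped to {result['f1']:.2f}; investigate or retrain.")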

Concept Drift

When a model is trained, it is built with a snapshot of the dataset’s statistical properties. Over time, the data can evolve and change. The model needs to be updated or retrained to continue providing quality and actionable insights. This is a unique characteristic of models in comparison to traditional software.
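
A simple way to watch for this in practice is to compare the statistical properties of the data the model was trained on against recent production data, for example with a two-sample Kolmogorov-Smirnov test on an important feature. The feature values and significance level below are illustrative assumptions.

    from scipy.stats import ks_2samp

    def feature_has_drifted(training_values, recent_values, alpha: float = 0.01) -> bool:
        """Flag drift when recent data no longer looks like the data the model was trained on."""
        statistic, p_value = ks_2samp(training_values, recent_values)
        return p_value < alpha  # a small p-value suggests the two samples differ

    # Example: a periodic check on a single feature (values are made up for illustration).
    trained_on = [12.1, 13.4, 11.8, 12.9, 13.1, 12.5]
    seen_lately = [18.2, 19.1, 17.8, 18.7, 19.4, 18.9]
    if feature_has_drifted(trained_on, seen_lately):
        print("Drift suspected; review the model and consider retraining.")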

Data Pipeline Quality 

Models rely on the data pipeline to produce insights. This crucial interface needs to be monitored to ensure performance. Data scientists/analysts need to define the schema of the records the model expects. The pipeline must be monitored to ensure it is delivering records that align with the specified schema.
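
One way to do this, sketched below with assumed field names and types, is to validate each incoming record against the schema the model expects and route violations to an alert instead of into the model.

    # Schema the model expects from the pipeline; field names and types are illustrative.
    EXPECTED_SCHEMA = {"account_id": int, "tenure_months": int, "monthly_spend": float}

    def validate_record(record: dict) -> list:
        """Return a list of schema violations for one incoming record."""
        violations = []
        for name, expected_type in EXPECTED_SCHEMA.items():
            if name not in record:
                violations.append(f"missing field: {name}")
            elif not isinstance(record[name], expected_type):
                violations.append(f"{name}: expected {expected_type.__name__}, got {type(record[name]).__name__}")
        return violations

    bad_record = {"account_id": "A-1042", "tenure_months": 14}  # wrong type, missing field
    problems = validate_record(bad_record)
    if problems:
        print("Pipeline schema alert:", problems)  # alert rather than feeding the model bad records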

As an Asset

Analytic solutions are assets to the business. It can be easy to overlook that these solutions need to be managed as first-class citizens with the same rigor and standards as traditional software. 

Performance SLAs and Latency

As a software asset, the technical performance also needs to be taken into account and monitored. Applications and users expect, and deserve, a certain level of performance and latency in order to use the solution effectively. Therefore, latency targets need to be defined and the solution optimized against those specifications.
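
In practice this usually means agreeing on a target, such as a 95th-percentile response time, and checking measured latencies against it. The numbers in the sketch below are illustrative assumptions.

    LATENCY_SLA_MS = 500  # illustrative target: 95% of requests answered within 500 ms

    def p95(latencies_ms):
        """Approximate 95th-percentile latency from a batch of observed response times."""
        ordered = sorted(latencies_ms)
        return ordered[int(0.95 * (len(ordered) - 1))]  # nearest-rank style; fine for a sketch

    observed = [120, 180, 150, 220, 480, 610, 140, 200, 175, 720]  # milliseconds
    if p95(observed) > LATENCY_SLA_MS:
        print(f"p95 latency {p95(observed)} ms exceeds the {LATENCY_SLA_MS} ms SLA.")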

As a Product

Impact - Usage & Feedback

Analytic solutions are critical assets and sometimes require significant investment. Therefore, they must be improved over time to increase their value to the organization. Tracking and understanding usage, feedback, and business impact will guide the evolution and iteration of the solution throughout its life.
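
Even a small amount of instrumentation goes a long way here. The sketch below logs one usage or feedback event; the event fields and destination are assumptions about what is worth capturing.

    import json
    import time

    def log_usage_event(solution: str, user: str, action: str, feedback: str = "") -> dict:
        """Record one usage or feedback event so adoption and impact can be reviewed later."""
        event = {
            "timestamp": time.time(),
            "solution": solution,   # which dashboard or model
            "user": user,           # who used it, or which system called it
            "action": action,       # e.g. "viewed", "exported", "acted_on_recommendation"
            "feedback": feedback,   # optional free-text feedback
        }
        print(json.dumps(event))    # in practice, ship this to your event store or analytics tool
        return event

    log_usage_event("churn-dashboard", "cs-lead-42", "acted_on_recommendation",
                    feedback="The list matched accounts we were already worried about")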

Conclusion

Having the process in place is the first step. If your organization is not thinking through these concepts every time it builds analytics, start there. This might sound like overkill or too much to implement with each solution, but it will end up saving your organization a ton of time! The quality of the analytic solutions will be much higher, you will understand much faster what is working and what is not, and usage of your solutions will go up. This process will transform your team from building single-use, ad hoc solutions to delivering highly utilized, self-service products. After defining these processes, the next step is to find and evaluate tools to make the process easier.