Recently, our team was on a call with a client who was trying to consolidate dozens of transactional systems into a single model to support a more effective reporting paradigm. The envisioned solution focused on self-service visual analytics while also supporting more traditional reporting.
This client’s challenges were similar to what many other businesses face today. They wanted:
- Quicker time to insight
- Empowered end users
- Lessened dependency on IT
- Reduced time spent reconciling reports
Sound familiar?
The client wasn’t questioning whether there was value in the project ahead. Their questions focused on the best approach: Should we pursue a big bang approach or something more agile in nature?
Upon further discussion and reflection, the program’s objectives made it a perfect case for agile. Let’s talk about why.
Iterative selling of value
While the client knew the value of the project, we discussed how, in reality, data projects can die on the vine when the value isn’t apparent to the business funding the initiative or to the IT executives who need to demonstrate their operational ROI.
As such, the ability to demonstrate value early and often becomes critical to building and keeping the momentum necessary to drive projects and programs across the finish line.
Project sponsors need to constantly sell the value up to their management and across to the ultimate customer. Iterative wins become selling points that allow them to do so.
Know your team’s delivery capability
Truly understanding what can be delivered (and by when) means accurately assessing how much work is in front of you and how quickly your team can deliver it with quality.
In this case, the project was as new as the client’s team. For them, the most logical approach was to start doing the work in order to learn about both the work itself and the team. After a few iterations, the answers to the following questions become clearer (a rough sketch of the estimating arithmetic follows the list):
- Parametric estimating – How do I estimate work of differing complexity across types of work or data sources? How do I define the “buckets” of work and associate an estimate with each? What values do I assign to each bucket?
- Velocity – How quickly can my team deliver with each iteration? How much work can they reliably design, build, and test?
- Throttling – What factors can I adjust to predictably affect velocity without compromising quality or adversely affecting communication?
- Continuous improvement – Fail fast, learn fast, adapt. Do I understand which impediments to progress I can influence? What are we learning about how we accomplish the work so we can improve going forward? How do we get better at estimating?
- Team optimization – Do I have the right players on the team? Are they in the right roles? How does the team need to evolve as the work evolves?
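To make the parametric estimating and velocity questions concrete, here is a minimal sketch of how the arithmetic might work. The bucket names, point values, and velocity figures are hypothetical illustrations, not numbers from the client project.

```python
# A minimal sketch of parametric estimating and velocity forecasting.
# All bucket names, point values, and velocities are hypothetical.

import math

# Parametric estimating: assign a point value to each "bucket" of work,
# then multiply by how many backlog items fall into that bucket.
bucket_points = {"simple_source": 3, "moderate_source": 8, "complex_source": 20}
backlog_counts = {"simple_source": 12, "moderate_source": 7, "complex_source": 4}

total_points = sum(bucket_points[b] * n for b, n in backlog_counts.items())

# Velocity: points the team reliably designs, builds, and tests per
# iteration, observed over the first few iterations.
observed_velocities = [18, 22, 20]
avg_velocity = sum(observed_velocities) / len(observed_velocities)

# Forecast: iterations remaining, using the best and worst observed
# velocities as a rough confidence band.
best = math.ceil(total_points / max(observed_velocities))
worst = math.ceil(total_points / min(observed_velocities))

print(f"Backlog: {total_points} points")
print(f"Average velocity: {avg_velocity:.1f} points/iteration")
print(f"Forecast: {best}-{worst} iterations remaining")
```

The point of a sketch like this isn’t precision; it’s that after a few iterations you have observed data to plug in, which turns “when will it be done?” from a guess into a forecast you can refine each sprint.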
Foster trust – ensure adoption
Anyone who relies on data, whether in the business or in IT, has go-to sources they trust. Getting someone to embrace a new source for all of their information and reporting needs requires that the new source be intuitive, performant, and above all, trustworthy.
As with any new solution, there will be skepticism within the user community and, whether conscious or not, an unspoken desire to find fault with the new solution and thereby justify the status quo. Data quality and reliability are often the biggest factors that adversely impact adoption of a new data solution.
By taking an agile, iterative development approach, you expose the new solution to a small group initially, work through any issues, then incrementally build and expose the solution to larger and larger groups. With each iteration, you build trust and buy-in to steadily drive adoption.
Generate excitement
Rolling out iteratively to an ever-expanding audience fosters genuine excitement about the new solution. As use expands, adoption becomes more the result of contagious enthusiasm than of a forced, orchestrated, planned activity.
Tableau’s mantra for many years has been “land and expand” — don’t try to deploy a solution all at one time. Once people see a solution and get excited about it, word will spread, and adoption will be organic.
Eliminate the unnecessary
While there are many legitimate use cases for staging all “raw” data in a data lake, concentrating on the right data is the appropriate focus for self-service BI. The right data keeps the semantic model performant and presents the business user with a model uncluttered by unnecessary fields.
Agile’s focus on a prioritized set of user stories will, by definition, de-prioritize and ultimately eliminate the need to incorporate low-priority or unnecessary data. The result: less wasted migration time and effort, fewer model perspectives to create and maintain, and ultimately quicker time to insight and value.
Adjust to changing requirements and priorities
Finally, it’s important to understand that data projects and programs aimed at enhancing or completely changing a reporting paradigm take time to implement, often months. Over that period, priorities will likely change. An agile approach lets you reprioritize with each iteration, giving you the opportunity to “adjust fire” and ensure you’re still working on the end users’ most important needs.
Ready to roll out a successful self-service business intelligence program but not sure where to start? We’re here to help.