Who's going maverick with your data? It’s probably not the first Top Gun reference you’ve seen today, but when we think about the perils of Shadow IT for data governance and the importance of handling data democratization well in your organization, we couldn’t think of a better analogy.

Today’s knowledge workers need data, and they need it at Mach speed. If your organization funnels requests for data insights through your IT department and they’re experiencing significant flight delays or are overbooked, you can’t blame the business for turning to shadow sources for quicker results. Many organizations turn to self-service BI solutions to manage this problem, but if you don’t manage governance with solid workflows, you can quickly stall out or fall into a tailspin. (We promise we’re going to stop with the flying metaphors. Soon.)

We’re finding that a lot of companies aren’t sure exactly where they stand when it comes to managing data democratization. You might need a complete governance overhaul, but, more likely, a quick assessment and game plan could get you back on track. Whether you need an audit, a stakeholder workshop, or a quick call to level set, we’re here to help. Shadow IT, data governance, that AI they built to deliver Val Kilmer’s line… let us know what’s on your mind.

Get smart: Not to be painfully obvious, but the new Top Gun: Maverick sequel is out in theaters nationwide. The original is on Amazon Prime as of June 1. And if you really want to geek out, you can read the 1983 article that inspired the franchise.
Articles about BI & Analytics
Almost 20 years ago, Capital One recognized the need for one person to oversee their data security, quality, and privacy, and the role of the Chief Data Officer was born. Now reports show that 68% of organizations have a CDO (Harvard Business Review, 2020). And while the role has become more common and has significantly evolved, many data executives are still struggling to get a seat at the table or bring data to the forefront of their organization. In fact, in a recent survey, only 28% of respondents agreed that the role was successful and established.

Company leaders agree that there needs to be a single point of accountability to manage the various dimensions of data inside and outside of the enterprise, including the quality and availability of that data. But now we are at a crossroads — what is the best way to align the work that the CDO does with the strategy of the business as a whole? The reality is that CDOs often struggle to find the internal and external support and resources needed to educate others to align with the organization’s goals. Implementing enterprise data governance, data architecture, data asset development, data science, and advanced analytics capabilities — such as machine learning and video analytics — at scale is not an easy task.

To be successful, data executives need support, resources, and communities focused on the elevation of data. We are proud to continue to help these communities come to life for the benefit of our colleagues and clients, establishing local resources here in the Midwest with global scale, reach, and impact. Read on as Mark Johnson, our Executive Leader for Data Management and Analytics, provides insight on the current state of data and the CDO, and shares details on multiple opportunities for data leaders of different levels to get more involved in the data community.

Q: How has the role of data changed/evolved for organizations?

The reality is that information is everything. This global pandemic proved that to many organizations. For some, it showed that their digital network was ready, and they were aptly prepared to take on COVID. For others, it has forced them to recognize their own immaturity with data and analytics. On its own, managing data is not exciting — the information just sort of exists. To give data value, you have to put it to use. And so, I think we are going to see the Chief Data Officer and/or Chief Data Analytics Officer really come into their own in the coming years. It’s time for their seat at the table. The C-suite is now asking questions that can only be answered with data, and now they truly understand both the value and consequences of the data game.

Q: What do you think are the biggest challenges facing CDOs/data leaders today?

I think that the biggest challenge for data executives today is the acquisition of talent that is seasoned and experienced where you need them to be for your organization. Higher education hasn’t necessarily kept up with the data world, and oftentimes it takes additional training to reach the right levels. The reality is that right now the talent is manufactured in the real world. Data executives have to be connected and equipped to mentor, train, and keep the right people.

Q: You’ve mentioned that data leaders need to connect with each other. What value can people expect from these data communities?

I think there is tremendous value.
As we are seeing the power of data evolve in organizations, and the role of data leaders evolve as well, I think coming together to collaborate and share elevates the leader, the organization, and the view of data as a whole. These communities give people a safe space to talk about how they are doing, what they are doing, what their biggest challenges are, and what solutions are working for them. They have truly become both a learning laboratory and an accelerator for data.

Q: As a big proponent of connecting data leaders, you have been involved in creating different opportunities for people to get together. What groups/events would you recommend, and how can people get involved?

I personally have been involved with the MIT Chief Data Officer and Information Quality Symposium (MIT CDOIQ), which is such a great opportunity to start with for connection. It has developed into additional opportunities for data leaders at all levels to get involved and create the kind of community we need to truly elevate the value of data. Organizations like CDO Magazine, the creation of CDO roundtables across the nation, and the International Society of Chief Data Officers (isCDO) all evolved from connecting data leaders and identifying common challenges.

MIT CDOIQ

The International MIT Chief Data Officer and Information Quality Symposium (MIT CDOIQ) is one of the key events for sharing and exchanging cutting-edge ideas and creating a space for discussion between data executives across industries. While resolving data issues at the Department of Defense, the symposium founder, Dr. Wang, recognized the need to bring data people together. Now in its 15th year, MIT CDOIQ is a premier event designed to advance knowledge, accelerate the adoption of the role of the Chief Data Officer, and change how data is leveraged in organizations across industries and geographies. Fusion has been a sponsor of this symposium for seven years now, and we are so excited to see how the event has grown. Designed for the CDO or top data executive in your organization, this is a space to really connect with other top industry leaders.

CDO Roundtables

Fusion has always been focused on building community and connecting people. And when one of our clients, a Fortune 500 retailer, mentioned wanting to talk with other data leaders from similar corporations, we realized that there was a big gap here — there was no space where data leaders could informally come together, without sales pitches and vendor influence, and simply talk. That’s how the CDO roundtables were born — a place that allows data leaders to get to know each other, collaborate, accelerate knowledge growth, and problem solve. We started just two years ago in Cincinnati, but now we’ve expanded to multiple markets, including Indianapolis, Columbus, Cleveland, Chicago, and Miami. These groups are designed for your CDO/CDAO and truly create an environment for unfiltered peer-to-peer discussion that helps solve data leadership challenges across industries. If you’re interested in joining one of these roundtables or starting one in your market, email me or message me on LinkedIn. I’m here and ready to get these roundtables started with executives in as many communities as I can. The more communities we have, the more data leaders and organizations we can serve.

International Society of Chief Data Officers (isCDO)

Launched out of the MIT CDOIQ symposium, the isCDO is a vendor-neutral organization designed to promote data leadership.
I am excited to be a founding member of this organization, along with our Vice President of Strategy, David Levine. Our ultimate goal is to create a space that serves as a peer-advisory resource and enables enterprises to truly realize the value of data-driven decision making. With multiple membership options available, isCDO is the perfect opportunity for data leaders looking to connect with their peers and gain a competitive advantage by focusing on high-quality data and analytics.

CDO Magazine

I am really proud to be a founder of CDO Magazine, as it really is a resource for all business leaders, not just the CDO. We designed the magazine to be a resource for C-suite leaders — to educate and inform on the value proposition, strategies, and best practices that optimize long-term business value from investments in enterprise data management and analytics capabilities. Check out the publication here. And if you’re interested in contributing content or being interviewed, let me know at email@example.com.

Closing

The role of the CDO is integral to organizations, but it’s still evolving. Now more than ever, it is important that data leaders come together to collaborate and problem-solve. Fusion is excited to be a part of each of these initiatives, and we are committed to being an agent of change in the communities we serve and beyond. By connecting global thought leaders, we believe that organizations will realize the value of data to power their digital transformation. If you’re interested in joining any of these data communities or just have questions, feel free to reach out to Mark via email or on LinkedIn.
Return on investment (ROI) is top of mind for everyone. With so many competing priorities, how you spend your time and money, and what you get for it, matters more than ever. The focus used to be entirely on the level of your investment, but the paradigm is shifting because of the data capabilities that now exist. In this article, we’ll explore how the definition of ROI has changed because of modern technology and approaches, where your data ROI comes from, and how to accelerate it.

Setting goals for your data analytics efforts

Data without analytics is ultimately an investment without return. Most organizations sit on troves of data but can’t do anything with it. But analytics is a progression. Each level of the data analytics maturity model represents a question you can begin to answer. To go from nothing to cognitive takes a lot — your investment increases substantially.

- Descriptive: What happened?
- Diagnostic: Why did it happen?
- Predictive: What will happen?
- Prescriptive: How can it happen?
- Cognitive: What can be suggested?

With each step up the model, you add more information and complexity. For example, the descriptive level of maturity can be answered with a look at history. As you progress, you will need more information and stronger data relationships to better understand the “why.” Your data quality and integrity are also important. When you get to the cognitive step, you’re expanding outside your universe of data, and the contextual element of what you’re doing gets broader. For this step, consider Microsoft’s Cortana or IBM’s Watson.

But in the modern data world, there doesn’t have to be a huge upfront investment. Shifting your focus from a return on investment to a return on insights can drastically impact how you invest in your data and your results.

Calculating the ROI of data and analytics projects

Ultimately, ROI is realized from leveraging effective data management to enable access to:

- More and better data
- Maximized visualizations
- Advanced analytics
- Actionable insights for outcomes

For data management, that means:

- Improved quality and completeness
- Confidence and trust
- Accountability through governance
- Improved stewardship
- Advancing culture change to help stakeholders understand the importance and value proposition

By improving your data management, the insights from your data become better and more actionable, including:

- Access to more data and the inclusion of new sources
- Faster and easier access to data
- Greater integration of disparate data
- Easier standup and use of analytics technology

In years past, insights from data analytics might have been limited to data scientists or experts in the field. But now, with analytics tools and technologies, data insights are useful to — and actionable for — people across the entire organization.

The larger the investment in time and money, the more emphasis on ROI, how quickly it can be realized, and the amount of trailing value. Data leaders have to work with their organizations to understand what the best strategy is, whether that be a smaller investment with a slower return or a big investment that allows you to realize your ROI sooner. It is critical to evaluate your organization’s needs, expectations, and goals before making decisions on strategic data management.

Understanding the classic data ecosystem

In a classic data ecosystem, the setup might look something like this.
A classic data ecosystem requires deep analysis to understand sources and the definition of data, and a considerable amount of time and effort to reach the gold standard you need for your data to be utilized for BI and analytics. Investments are required at all layers; there is no real way to invest in one aspect of your data and analytics and still find value. There is also significant effort required to ensure that as you introduce new systems, you don’t break legacy systems and processes already in place. Quite often, work must be done upfront to ensure that changes (even upgrades) will not cause disruption.

In addition, significant “time-to-market” factors need to be considered with classic data ecosystems. Often, the slow delivery of data and features forces businesses to make incremental changes without undertaking any kind of larger project. Doing so might be helpful at first but can cause issues later. With a slow delivery of data, many organizations using a classic data ecosystem find that they are unable to keep up with the pace of business today. Classic data ecosystems are often built to meet reporting needs, not analytical needs, and the analytics piece is a one-off project. The deployment and incorporation of analytical models into production in a classic setup requires a considerable amount of time and customization.

Building a modern data ecosystem

A modern data ecosystem takes a more layered approach. Now it is much easier to gather, ingest, and integrate the data and bridge gaps between systems, along with including new concepts like data lakes that can hold data at the bronze, silver, and gold levels (a minimal sketch of this layered pattern appears at the end of this article). You don’t have to invest fully in all of the layers; you can invest where you need to. You still have BI and analytics capabilities, but you have more of an application integration framework that serves additional needs. And then there is the trust of the data. This setup allows for more flexibility and customization for all parts of the organization.

Read more: How to build your data analytics capabilities

The value of incremental change

Your investment doesn’t have to be an all-or-nothing proposition. You can incrementally build out components and capabilities and make data available for exploration without the deep upfront analysis that often slows everything down. Additionally, you can control the degree of your investment in a significant way. Because a flexible architecture doesn’t force you to push data through all the layers to make it useful, you don’t have to make a significant investment and change to make it worth it. You can also leverage external tools in the interim. Service- and subscription-based features allow for fast initiation, and exploratory efforts can be stood up and torn down easily and quickly. New technologies and design/development paradigms enable faster adoption overall. And now, more user groups are able to access data and analytics, create more use cases, and make business decisions based on the insights.

Ultimately, it is time to shift your thinking on ROI: leverage modern data technology and tools, and focus on the return on insights, intelligence, and innovation.

For more information on data, analytics, and assigning ROI, Fusion’s Vice President of Data, Saj Patel, recently spoke at the CDO Summit. His presentation details how to accelerate ROI and gain buy-in across your organization. Watch the recording here and connect with us if you have any specific questions.

Learn more about Strategic Data Management here.
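To make the bronze/silver/gold idea concrete, here is a minimal sketch, assuming pandas with a Parquet engine such as pyarrow. The file paths, table, and column names are hypothetical illustrations, not a prescribed architecture.

```python
import pandas as pd

# Bronze: land the raw extract as-is; no upfront modeling required.
bronze = pd.read_csv("landing/orders_raw.csv", dtype=str)
bronze.to_parquet("lake/bronze/orders.parquet")

# Silver: apply just enough cleansing to make the data trustworthy.
silver = (
    bronze.dropna(subset=["order_id"])         # drop unusable rows
    .drop_duplicates(subset=["order_id"])      # de-duplicate on the key
    .assign(
        order_date=lambda d: pd.to_datetime(d["order_date"], errors="coerce"),
        amount=lambda d: pd.to_numeric(d["amount"], errors="coerce"),
    )
)
silver.to_parquet("lake/silver/orders.parquet")

# Gold: a business-ready aggregate that BI tools can query directly.
gold = (
    silver.assign(month=lambda d: d["order_date"].dt.to_period("M").astype(str))
    .groupby("month", as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "monthly_revenue"})
)
gold.to_parquet("lake/gold/monthly_revenue.parquet")
```

The point is the incremental investment the article describes: landing bronze data costs almost nothing, and each further layer is refined only where the business needs it.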
While every organization’s journey to digital transformation looks different, one thing remains the same — the importance of data. Tackling your data systems and processes is vital to fully transform. However, the reality is that most organizations are overwhelmed with data about their customers. And these troves of information are completely useless unless companies know that the data they have is accurate and how to analyze it to make the right business decisions.

In today’s world, organizations have been forced to pivot and have realized the value data can bring to drive insight and empower their decision-making. However, many organizations have also recognized their data immaturity. So how do you move forward?

The role of data in digital transformation

Data can be your organization’s biggest asset, but only if it is used correctly. And things have changed. A lot of organizations have completed the first steps in their digital transformation, but now they are stuck — they aren’t getting the results they expected. Why? They haven’t truly leveraged their data. According to Forrester, “Firms make fewer than 50% of their decisions on quantitative information as opposed to gut feelings, experiences, or opinions.” The same survey also showed that while 85% of those respondents wanted to improve their use of data insights, 91% found it challenging to do so. So, now that you’ve got the data, how can you make it more valuable?

Data strategy is key to your digital transformation

With so many systems and devices connected, the right information and reporting are critical. But first, you have to make sure you have the right technology in place.

Utilizing big data

Although you might feel inundated with the amount of data you have coming in, big data analytics can bring significant value to your digital transformation. Through big data analytics, you can get to a granular level and create an unprecedented customer experience. With information about what customers buy, when they buy it, how often they buy it, and so on, you can meet their future needs. It enables both digitization and automation to improve efficiency and business processes.

Optimizing your legacy systems

Legacy systems are critical to your everyday business but can be slow to change. Why fix what’s not necessarily broken? But just because systems are functioning correctly doesn’t mean they’re functioning at the level you need them to — a level that is conducive to achieving your data and digital transformation goals. This doesn’t have to mean an entire overhaul. You’ve likely invested a lot into your legacy systems. One key to a good data strategy is understanding how to leverage your legacy systems to make them a part of (instead of a roadblock to) your digital transformation. With the enormous scale of data so closely tied to applications, coding and deployment can often make this stage of your digital transformation feel overwhelming. Sometimes DevOps tooling and processes are incompatible with these systems, so they are unable to benefit from Agile techniques, continuous integration, and delivery tooling. But it doesn’t have to feel impossible — you just need the right plan and the right technology.

Focusing on your data quality

Even with the right plan and technology, you have to have the right data. Bad data can have huge consequences for an organization and can lead to business decisions made on inaccurate analytics.
Ultimately, good data needs to meet five criteria: accuracy, relevancy, completeness, timeliness, and consistency. With these criteria in place, you will be in the right position to use your data to achieve your digital transformation goals. (A minimal sketch of what checking these criteria can look like appears at the end of this article.)

Implementing a data strategy with digital transformation in mind

So how do you implement your data strategy? Start by tackling your data engineering and data analytics. The more you can trust your data, the more possibilities you have. By solving your data quality problem, you can achieve trust in your data analytics. And then, the more data you have on your customers, the more effective you can make your customer experience. But this all requires a comprehensive data strategy that allows your quality data to be compiled and analyzed so you can use it to create actionable insights. The biggest tools to help here: AI and machine learning.

The benefits of a data-driven digital transformation

The benefits of investing in your data are clear, including increased speed to market, faster incremental returns, extended capabilities, and easier access to and integration of data. Discover more about the different ways you can invest in your data and improve and accelerate ROI for your organization.

Ultimately, your goal is to elevate how you deliver value to your customers. Digital transformation is the key to understanding your customers better and providing a personalized customer experience for them. Leveraging your data can make all the difference between you and your competitors. And we’re here to help. Learn more about how some of our clients have benefited from investing in their data and digital transformation.
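As a rough illustration of four of the five criteria (relevancy is a judgment call rather than a mechanical check), here is a minimal pandas sketch; the customer file, columns, and thresholds are hypothetical assumptions.

```python
import pandas as pd

# Hypothetical customer table; columns exist only for illustration.
customers = pd.read_csv("customers.csv", parse_dates=["last_updated"])

report = {
    # Completeness: share of non-null values per column.
    "completeness": customers.notna().mean().round(3).to_dict(),
    # Consistency: records that disagree on what should be a unique key.
    "duplicate_ids": int(customers["customer_id"].duplicated().sum()),
    # Timeliness: records not refreshed within the last 90 days.
    "stale_records": int(
        (customers["last_updated"] < pd.Timestamp.now() - pd.Timedelta(days=90)).sum()
    ),
    # Accuracy (one simple proxy): values outside a known-valid range.
    "invalid_ages": int((~customers["age"].between(18, 120)).sum()),
}
print(report)
```

Even lightweight checks like these, run on every load, surface the bad data before it reaches the analytics that decisions are made on.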
Executive summary

The credit card industry is becoming more complex. Advanced loyalty programs, targeted offerings, unclear rate conditions, and many other factors can often make it difficult for banks to identify the right customer. Ultimately, the financial services firms that succeed in this environment will engage the right customers with the right message at the right time. Market leaders will be those who can accurately forecast the revenue and risk for each prospective and existing customer.

While the credit card environment has changed, the analytics and modeling techniques have largely remained the same. These models are highly valuable but do not offer the flexibility to evaluate the granular and complex customer behaviors embedded in a financial services firm’s data and other public and private data sets. Machine learning and deep learning (collectively, machine learning) change the paradigm for predictive analytics. In lieu of complex, expensive, and difficult-to-maintain traditional models, machine learning relies on statistical and artificial intelligence approaches to infer patterns in data, spanning potentially billions of available patterns. These insights, not discoverable with traditional analytics, may empower the financial industry to make higher-value, lower-risk decisions. In this brief article, we discuss three potential opportunities that Fusion expects should add high value to the financial services industry.

Advanced analytics for banking

Machine learning uncovers patterns in complex data to drive a predictive outcome. This is a natural fit for the banking industry, as firms are often working with imperfect information to determine the value of incoming customers.

How it works: Traditional models vs. machine learning

Credit scorecards represent the basis of most credit card issuance decision making, and whether a firm leverages off-the-shelf models or applies bespoke modeling, Fusion expects most scorecards to follow a similar structure. In the aggregate, these models are highly valuable. But on a per-applicant basis, patterns and details are lost. With machine learning, we can explore detailed and expansive public and private data about segmented applicants for marketing purposes in real time. For example, we can supplement our existing models with data that can be used to segment potential customers, such as:

- Regional FICO trends
- Educational attainment
- Social media sentiment analysis
- Mortgage and equity analysis
- Much, much more

Machine learning can apply artificial neural networks to uncover patterns in your applicants’ history across millions of data points and hundreds of statistical training generations. When detecting these patterns, machine learning models can uncover risk in approved applicants and value in sub-prime applications. For example, by exploring existing customers, machine learning could potentially reveal that applicants with low FICOs but high educational attainment in a specific city suburb have historically resulted in minimal write-offs. Conversely, a high-FICO applicant may have recently moved into a higher-net-worth neighborhood, requiring high expenditure on a financial institution’s credit lines and resulting in repayment risk. Ultimately, your customer data can tell a far richer story about your customers’ behavior than simple payment history.
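To illustrate the contrast, here is a minimal scikit-learn sketch that compares a scorecard-style logistic regression on bureau attributes against a model that can also absorb alternative data. We use gradient boosting here as a stand-in for the neural-network approaches mentioned above; the training file, column names, and features are hypothetical assumptions.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

applicants = pd.read_csv("applicants.csv")   # hypothetical training extract
y = applicants["wrote_off"]                  # 1 = account eventually written off

# Traditional scorecard territory: a handful of bureau attributes.
bureau = applicants[["fico", "utilization", "inquiries_6m"]]

# Supplemented with alternative signals like those listed above.
alternative = applicants[
    ["fico", "utilization", "inquiries_6m",
     "regional_fico_trend", "education_years", "rent_payment_streak"]
]

for name, X, model in [
    ("scorecard-style", bureau, LogisticRegression(max_iter=1000)),
    ("ML + alternative data", alternative, GradientBoostingClassifier()),
]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

The comparison makes the per-applicant point concrete: the richer feature set lets the model separate a low-FICO, low-risk applicant from a low-FICO, high-risk one, which a points-based scorecard averages away.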
Machine learning opportunities

Financial services firms can gain more insight and capitalize on the benefits of machine learning by applying their marketing dollars toward customers who are more likely to fit within their desired financial portfolio.

Lifetime customer value for customers with limited credit data

Currently, credit scores are determined based on traditional data methods. Traditional data typically means data from a credit bureau, a credit application, or a lender’s own files on an existing customer. One in 10 American consumers has no credit history, according to a 2015 study by the Consumer Financial Protection Bureau (Data Point: Credit Invisibles). The research found that about 26 million American adults have no history with national credit reporting agencies, such as Equifax, Experian, and TransUnion. In addition to those so-called credit invisibles, another 19 million have credit reports so limited or out-of-date that they are unscorable. In other words, 45 million American consumers do not have credit scores.

Through machine learning models and alternative data (any data that is not directly related to the consumer’s credit behavior), lenders can now directly implement algorithms that assess whether a banking firm should market to a customer segment, assigning risk and scores even to credit invisibles (thin-file or no-file customers). Let’s look at a few sources of alternative data and how useful they are for credit decisions:

- Telecom/utility/rental data
- Survey/questionnaire data
- School transcript data
- Transaction data – This is typically data on how customers use their credit or debit cards. It can be used to generate a wide range of predictive characteristics.
- Clickstream data – How a customer moves through your website, where they click, and how long they spend on a page.
- Social network analysis – New technology enables us to map a consumer’s network in two important ways. First, this technology can be used to identify all the files and accounts for a single customer, even if the files have slightly different names or different addresses. This gives you a better understanding of the consumer and their risk. Second, we can identify the individual’s connections with others, such as people in their household. When evaluating a new credit applicant with little or no credit history, the credit ratings of the applicant’s network provide useful information.

Whether a bank wants to more efficiently manage current credit customers or take a closer look at the millions of consumers considered unscorable, alternative data sources can provide a 360° view that offers far greater value than traditional credit scoring. Alternative data sets can reveal consumer information that increases the predictive accuracy of the credit scores of millions of credit prospects. This allows companies to target consumers who may not appear to be desirable because they have been invisible to lenders before, which can lead to a commanding competitive advantage.

ON-DEMAND WEBINAR: Learn how to turn data into insights that drive cross-sell revenue

Optimizing marketing dollars to target customers

Traditional marketing plans for credit card issuers call for onboarding as many prime customers as possible who meet the risk profile of the bank. However, new customer acquisition is only one piece of the puzzle. To drive maximum possible profitability, banks can consider not only the volume of customers but also the overall profitability of a customer segment.
Once these high-value customer segments are identified, credit card marketers can tailor specific products to these segments to deliver high value. Machine learning can assist both in the prediction of total customer value and in the clustering of customers based on patterns and behaviors.

Identifying high-risk credit card transactions in real time

Payments are the most digitalized part of the financial industry, which makes them particularly vulnerable to digital fraud. The rise of mobile payments and the competition for the best customer experience push banks to reduce the number of verification stages, which lowers the efficiency of rule-based approaches. The machine learning approach to fraud detection has received a lot of publicity in recent years and has shifted industry interest from rule-based fraud detection systems to machine-learning-based solutions. There are also understated and hidden events in user behavior that may not be evident but still signal possible fraud. Machine learning allows for creating algorithms that process large datasets with many variables and helps find these hidden correlations between user behavior and the likelihood of fraudulent actions. Another strength of machine learning systems compared to rule-based ones is faster data processing and less manual work. Machine learning can be used in a few different areas (a minimal anomaly-detection sketch appears at the end of this article):

- Data credibility assessment – Gap analytics help identify missing values in sequences of transactions. Machine learning algorithms can reconcile paper documents and system data, eliminating the human factor. This ensures data credibility by finding gaps in it and verifying personal details via public sources and transaction history.
- Duplicate transaction identification – Rule-based systems in use today constantly fail to distinguish errors or unusual transactions from real fraud. For example, a customer can accidentally push a submission button twice or simply decide to buy twice as many goods. The system should differentiate suspicious duplicates from human error. While duplicate testing can be implemented by conventional methods, machine learning approaches will increase accuracy in distinguishing erroneous duplicates from fraud attempts.
- Identification of account theft and unusual transactions – As the pace of commerce grows, it’s very important to have a lightning-fast solution to identify fraud. Merchants want results immediately, in microseconds. We can leverage machine learning techniques to achieve that goal with the confidence level needed to approve or decline a transaction. Machine learning can evaluate vast numbers of transactions in real time, continuously analyzing and processing new data. Moreover, advanced machine learning models, such as neural networks, autonomously update themselves to reflect the latest trends, which is much more effective in detecting fraudulent transactions.

Summary

Bottom line: machine learning can leverage your data to develop patterns and predictions about your customers and applicants. These machine learning models are typically simpler to develop and deploy, and may be more efficacious, than traditional financial services modeling. They also enable a more detailed forecast about your customers, allowing you to reduce risk while targeting more profitable customers throughout their lifetime with your credit card services.
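For a flavor of the anomaly-based screening described above, here is a minimal sketch using scikit-learn's IsolationForest, a simpler technique than the neural networks the article mentions. The transaction file, features, contamination rate, and review threshold are illustrative assumptions, not a production fraud model.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

txns = pd.read_csv("transactions.csv")  # hypothetical transaction export

# Behavioral features of the kind described above: amount, tempo, novelty.
features = txns[["amount", "seconds_since_last_txn",
                 "merchant_first_seen", "distance_from_home_km"]]

# IsolationForest flags points that are easy to isolate, i.e., unusual.
model = IsolationForest(contamination=0.001, random_state=42)
txns["anomaly_score"] = -model.fit(features).score_samples(features)

# Route only the most anomalous transactions to step-up verification,
# keeping the fast path fast for everyone else.
review_queue = txns.nlargest(50, "anomaly_score")
print(review_queue[["transaction_id", "amount", "anomaly_score"]])
```

Because the model learns what normal behavior looks like from the data itself, it can surface the "understated and hidden" patterns that a static rule set never encodes.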
Related resources

- Case study: Machine learning predicts outcomes in financial services
- Case study: How Donatos uses machine learning to retain customers
- 5 tips to keep the wealth in your company

Fusion Alliance has extensive experience in the financial services industry and serves as a preferred solutions provider for many prominent financial services institutions, including Fortune 500 firms. If you’d like to discuss your organization, let us know.
Recently, our team was on a call with a client who was trying to consolidate dozens of transactional systems into a single model to support a more effective reporting paradigm. The envisioned solution focused on self-service, visual analytics, while also supporting more traditional reporting. This client’s challenges were similar to what many other businesses face today. They wanted:

- Quicker time to insight
- Empowered end users
- Lessened dependency on IT
- Reduced reconciliation of reports, etc.

Sound familiar? The client wasn’t questioning whether or not there was value in the project ahead. Their questions were focused on the best approach: do we pursue a big bang approach or something more agile in nature? Upon further discussion and reflection, the objectives of the program seemed to be a perfect case for agile. Let’s talk about why.

Iterative selling of value

While the client knew the value of the project, we discussed how, in reality, data projects can die on the vine when the value isn’t apparent to the business funding the initiative or to the IT executives who need to demonstrate their operational ROI. As such, the ability to demonstrate value early and often becomes critical to building and keeping the momentum necessary to drive projects and programs across the finish line. Project sponsors need to constantly sell the value up to their management and across to the ultimate customer. Iterative wins become the selling points that allow them to do so.

Know your team’s delivery capability

To truly understand what can be delivered (and by when) means accurately assessing how much work is in front of you and how quickly your team can deliver with quality. This example project was as new as the client’s team. For them, the most logical approach was to start doing the work to learn more about the work itself as well as the team. After a few iterations, the answers to the following questions become clearer (a back-of-the-envelope example appears at the end of this article):

- Parametric estimating – How do I estimate different complexities of types of work or data sources? How do I define the “buckets” of work and associate an estimate with each? What values do I assign to each of these buckets?
- Velocity – How quickly can my team deliver with each iteration? How much work can they reliably design, build, and test?
- Throttling – What factors can I adjust to predictably affect velocity without compromising quality or adversely affecting communication?
- Continuous improvement – Fail fast, learn fast, adapt. Do I understand what factors are impeding progress that I can influence? What are we learning about how we accomplish the work so we can improve going forward? How do we get better at estimating?
- Team optimization – Do I have the right players on the team? Are they in the right roles? How does the team need to evolve as the work evolves?

Foster trust – ensure adoption

Anyone who relies on data, whether they are in business or IT, has their go-to sources. Getting an individual to embrace a new source for all of their information and reporting needs requires that the new source be intuitive to use, performant, and, above all, trustworthy. As with any new solution, there will be skepticism within the user community and, whether conscious or not, an unspoken desire to find fault in the new solution, thereby justifying staying with the status quo. Data quality and reliability can be the biggest factors that adversely impact adoption of a new data solution.
By taking an agile, iterative development approach, you expose the new solution to a small group initially, work through any issues, then incrementally build and expose the solution to larger and larger groups. With each iteration, you build trust and buy-in to steadily drive adoption.

Generate excitement

An iteratively expansive rollout also fosters genuine excitement about the new solution. As use expands, adoption becomes more a result of contagious enthusiasm than of a forced, orchestrated, planned activity. Tableau’s mantra for many years has been “land and expand” — don’t try to deploy a solution all at once. Once people see a solution and get excited about it, word will spread, and adoption will be organic.

Eliminate the unnecessary

While there are many legitimate use cases for staging all “raw” data in a data lake, concentrating on the right data is the appropriate focus for self-service BI. The right data is important for ensuring the performance of the semantic model, and it’s important for presenting the business user with a model that remains uncluttered by unnecessary data. Agile’s focus on a prioritized set of user stories will, by definition, de-prioritize and ultimately eliminate the need to incorporate low-priority or unnecessary data. The result is the elimination of wasted migration time and effort, a reduced need for the creation and maintenance of various model perspectives, and ultimately quicker time to insight and value.

Adjust to changing requirements and priorities

Finally, it’s important to understand that data projects and programs focused on enabling enhanced or completely changed reporting paradigms take time to implement, often months. Over that time period, priorities will likely change. An agile approach allows you to reprioritize with each iteration, giving you the opportunity to “adjust fire” and ensure you’re still working on the most important needs of the end users.

Ready to roll out a successful self-service business intelligence program and not sure where to start? If you’re ready to take the next step, we’re here to help.
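As promised above, here is a back-of-the-envelope sketch of the parametric estimating and velocity questions. The bucket sizes, backlog mix, and velocity are invented numbers; the arithmetic is the point.

```python
# Parametric estimating: assign a point value to each "bucket" of work.
bucket_points = {"simple_source": 3, "moderate_source": 8, "complex_source": 20}

# The backlog, expressed as a count of items in each bucket.
backlog = {"simple_source": 12, "moderate_source": 7, "complex_source": 3}

total_points = sum(bucket_points[k] * n for k, n in backlog.items())  # 152

# Velocity: points the team reliably designs, builds, and tests per iteration,
# observed over the first few iterations rather than guessed upfront.
velocity = 18
iterations = -(-total_points // velocity)  # ceiling division -> 9

print(f"Backlog of {total_points} points -> about {iterations} iterations at velocity {velocity}")
```

After a few iterations, replacing the assumed velocity with the team's measured one turns this from a guess into a forecast you can defend to sponsors.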
Data science is an important field of study as a means of analyzing big data. The success stories are true: data science and machine learning provide organizations with new insights that grow customer service, productivity, and profitability by leaps and bounds. And the initial steps for integrating data science into your organization need not be costly. The focus is often on finding a “data scientist” who will find ways to provide immediate insight into your data, but a more thoughtful, measured approach to incorporating data science into your organization may be more efficient and effective.

What is data science?

It’s not easy to pin down the definition of data science. Depending on whom you talk to, the meanings can be radically different. A strong definition was offered by Jeff Leek in 2016: “Data science is the process of formulating a quantitative question that can be answered with data, collecting and cleaning the data, analyzing the data, and communicating the answer to the question to the relevant audience.”

Leek’s definition is pertinent because it avoids relying on specific concepts, such as big data and machine learning, or specific tools, such as Hadoop, R, and Python. Data science can be performed using any number of tools and on many types of data, regardless of size. The classic data sets used to develop and test statistical processes are actually very small. While big data is often a wonderful resource, in reality, one of the first steps will always be to aggregate and/or reduce the data to a smaller size in order to be useful. The specific types of statistical modeling tools and algorithms are many and varied, and new ones continue to be developed all the time. The most important consideration is not which tool or algorithm is used, but that the correct solution is applied to the problem. Many business questions can be answered through simple summaries, counts, or percentages. The trick is understanding the data well enough to decide the best approach and having the skill sets and tools available to implement that approach. This is heavily dependent on process, which brings us to a second reason Leek’s definition is so apt — it emphasizes that data science is a process with multiple parts that all need to work together.

Data science as a process

The process of data science can be broken down into five parts (a compressed end-to-end sketch appears at the end of this article).

1. Know your use case

Knowing your use case delivers actionable information about the core needs of the organization, and it’s absolutely key to driving the entire process. Your use case defines what data is required, how it will be gathered, how it will be looked at, and how the results need to be reported. Data science works best when there is a question or hypothesis that needs to be answered or proven.

2. Acquire and clean the data

The data you acquire can come from inside the company and/or outside the company (public domain data sets, social media feeds, etc.), but it must be driven by the needs of the use case question. Acquiring and cleaning the data is often time-consuming and resource-intensive, but it is the most important part of the process. Surveys of data and statistical analysts often state that this step consumes 80-90% of their time, leaving only 10-20% for the actual statistical analysis, and it is absolutely critical that this part of the process is done with great care. Accuracy of analysis is tightly related to the quality of the initial data sources.
3. Understand the data

Once you have the data, you need to understand what you have. This includes:

- What it is and what it is not
- What it contains that is useful
- What it contains that might be problematic or misleading

Exploratory analysis of the data, i.e., learning the properties within the data that relate and can be applied to the use case question at hand, is important. And information about the source of the data and how it was processed is critical in assessing its usefulness. Spending time sampling and profiling your data pays great dividends in two key areas: using the data in analysis and being able to assess the validity of the results.

4. Use the data to answer the question

This is the step where the high-end skill sets of a statistical analyst are applied, and it is often the quickest and seemingly easiest part of the data science process. Once the data has been ingested into an environment by a load process, deep analysis begins. This includes using statistical modeling, machine-learning algorithms, clustering techniques, and other appropriate tools to see if the question can be answered. If the previous steps have all been done well (a clear question exists, and the data was properly cleansed and is fully understood), then selecting and implementing the analysis can be fairly straightforward for the skilled statistician.

5. Communicate the results

It is vital to make the results of this seemingly arcane and mathematically dense process understood at the business level. Interesting and actionable results are of no use if no one knows about them or can understand them.

Resourcing data science as an organization

Looking at the process outlined above, it’s clear that finding a single technologist, engineer, or mathematician who can accomplish all the steps is not likely. Rather, a data science team of several people who cover all of the necessary skill sets is the most viable solution. Building such a team is not difficult; most organizations already have employees with many of the required abilities.

1. Know your use case: Business analysts and subject matter experts

The business analysts (BAs) and subject matter experts (SMEs) will hopefully already have a firm grasp of the organization’s internal data and know the current use case questions being asked by the business. The key here will be for them to expand their horizons to other data sources and wider questions. They will need to start looking beyond internal systems to other externally available data sources and consider how these might be used to gain new insights into how the organization is relating to the outside world. Thinking creatively about what other information may be available and how it might be used can lead to even more intriguing use case questions.

2. Getting and cleaning data: Database/data warehouse architects and ETL programmers

Like BAs and SMEs, architects and programmers will need to expand their activities to include both external and highly unstructured data. They will also need to understand the more specific requirements of how a statistical analyst needs the data formatted and delivered. Fortunately, getting and cleaning data is generally part of these architects’ and programmers’ everyday lives, and leveraging their knowledge and skills will be critical to providing the analysts with the information they need.
3. Understand the data and communicate the results: Data analysts, data stewards, report developers

Data analysts, data stewards, and report developers should already have a good handle on the organization’s internal data. Like BAs and SMEs, the analysts, stewards, and report developers will need to expand their horizons to other data sources. They will already have a history of bridging the communications gap between IT and the business, and that will help the statistical analyst understand the data and the business understand the results.

4. Use the data to answer the question: Statistical analyst/data scientist

Unless the organization already employs statisticians, the skill set of a statistical analyst or data scientist will most likely need to be added, either by bringing in an outside resource or by developing the skill sets internally. Do not discount your existing data analysts when looking to fill this role. Their current knowledge of the data is a huge running start, and an intermediate level of statistical training will provide them with a variety of new tools to utilize. It will not make them Ph.D.-statistician rock stars, and they might not fully understand the underlying theories, but not all use case questions require deep statistics to answer, and the practical application of regression modeling and machine learning tools can go a long way.

5. Repeat step 3: Understand the data and communicate the results: Data analysts, data stewards, report developers

Conclusion

Data science can provide an organization with new and surprising insights into both internal processes and interactions with the outside world. Take time to build the correct structure and resources to implement data science so it can become an integral and productive asset to the organization.
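As referenced above, here is a compressed sketch of steps 2 through 5 of the process, assuming pandas and scikit-learn. The churn question, the customer file, and its columns are hypothetical; real projects spend far longer on the cleaning step than these two lines suggest.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Step 2: acquire and clean (the 80-90% of the work, compressed here).
df = pd.read_csv("customers.csv").dropna(
    subset=["tenure_months", "support_tickets", "monthly_spend", "churned"]
)

# Step 3: understand the data - profile it before modeling.
print(df.describe())
print(df["churned"].value_counts(normalize=True))  # churned assumed 0/1

# Step 4: use the data to answer the question "what predicts churn?"
X = df[["tenure_months", "support_tickets", "monthly_spend"]]
y = df["churned"]
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      scoring="roc_auc", cv=5)

# Step 5: communicate the result in business terms, not model terms.
print(f"We can rank customers by churn risk with AUC {auc.mean():.2f} "
      f"(+/- {auc.std():.2f}).")
```

Note that the modeling itself is a few lines, which matches the article's point: step 4 is the quickest part when steps 1 through 3 are done well.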
With increasing customer demands and competition a click away, access to data-driven responses in real time has become a necessity for business users and marketers. Today, the market is full of digital analytics tools for measuring your customer experience across web and mobile applications, customer relationship management (CRM) systems, and point of sale (POS). When used correctly, these digital analytics tools can provide businesses with a wealth of insights into the performance of their digital platforms.

To best leverage digital analytics, you will first need to set clear business objectives and define how your organization intends to measure success on your digital platforms. An in-depth look into your current measurement strategy, if one exists, including metrics and key performance indicators (KPIs), will reveal whether digital analytics are providing the data and insights necessary to ensure business and customer needs are met.

Identifying metrics

Organizations often make the mistake of using out-of-the-box metrics like page views, sessions, bounce rates, and session duration as KPIs. These basic metrics are not representative of actual business objectives and can prove useless without the right context. For example, if a marketer wanted to understand the value of a landing page, they would want to look at the number of leads generated by the page or the long-term business impact of the customers who came to the site through that page. Instead, reports focus on the number of people who saw the page or the bounce rate for the content. While this is helpful information, it doesn’t mean anything if you can’t tie the analysis to ultimate business success. Regardless of industry, website visits and page views do not increase bottom lines, nor should they be used as KPIs. If these are the types of metrics you’re seeing in reports or using in your analysis instead of KPIs like leads, transactions, revenue, or conversions, then it may be time to re-examine your digital measurement strategy.

Developing a measurement strategy

Successfully integrating digital analytics into business processes requires a clear measurement strategy. A measurement strategy outlines business objectives, what should be tracked on the website or mobile app to inform those objectives, the types of reporting that will be available, and to whom it will be exposed once the implementation is complete. Depending on current processes, analytics tools, technology, and available resources, the process of uncovering this information can take several months, but it is a vital step that should not be overlooked or rushed. The end result is a digital measurement model that provides the framework to align digital analytics with business strategy.

The digital measurement model

A digital measurement model is a high-level, visual summary that links your core business objectives, such as increasing brand awareness, customer acquisition, or increasing sales, to the digital strategies used to achieve these objectives and their requisite goals. From there, specific KPIs and targets are identified for each digital strategy, helping business and marketing stakeholders understand whether their efforts are trending in the right direction. These elements should be captured in a matrix that can be used to inform the tracking strategy, reporting development, and, ultimately, gauging the health of your digital practice.
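To make the matrix tangible, here is a small illustrative sketch of a measurement model expressed as data. The objectives, strategies, KPIs, and targets are invented examples of the mapping described above, not a recommended set.

```python
# A hypothetical measurement-model matrix: objective -> strategy -> KPIs/targets.
measurement_model = [
    {
        "objective": "Grow customer acquisition",
        "strategy": "Capture leads via gated content",
        "kpis": {"leads_per_month": 400, "cost_per_lead": 25.00},
    },
    {
        "objective": "Increase online sales",
        "strategy": "Optimize the checkout funnel",
        "kpis": {"conversion_rate": 0.031, "cart_abandonment": 0.55},
    },
]

for row in measurement_model:
    print(f"{row['objective']} -> {row['strategy']}: {row['kpis']}")
```

Notice that every KPI traces back to a business objective; page views and sessions appear nowhere in the matrix.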
Benefits of a measurement strategy

Creating a measurement strategy that aligns your business goals with the activities of the digital teams can have a significant impact on how the business operates. With clearly defined objectives and KPIs for measuring digital outcomes, digital teams can focus their efforts on producing measurable value, instead of opting for a shotgun approach that hopes some portion of their efforts will drive outcomes.

A well-defined digital measurement strategy encourages an environment of accountability. With KPIs to measure the gap between real-time digital outcomes and targets, executives gain greater visibility into the progress (or lack thereof) being made toward business objectives. It also creates a baseline of expectations, helping digital team members to better prioritize work to produce measurable value.

Most importantly, developing a digital measurement strategy gets people talking. Shaping strategy to reflect business objectives encourages collaboration among business operatives and leaders across the board, from marketing analysts to the CMO. Gaining alignment on what matters most helps an organization instill confidence in teams and helps team members gain a better understanding of how their day-to-day work contributes to the overall mission of the company.

Final thoughts

All too often, digital analytics are completely overlooked within marketing teams. This could be due to a lack of expertise around robust measurement implementations, or because analytics has been under-prioritized in favor of more tactical activities. Whichever the case, overcoming these hurdles to generate actionable business insights from your digital platforms is vital to the health of your digital practice and the needs of your customers. To successfully leverage digital analytics, organizations need to take a deep look into their current measurement strategy and reframe it as needed to align their implementations with their established business strategy. Ultimately, having a clearly defined digital measurement strategy paves the way for lasting, meaningful insights from your digital platforms and provides a system of accountability for team members and leadership to unite around.
When digital analytics do not produce useful outcomes to inform business decisions, digital teams often point to reporting and analysis as the culprit. But an in-depth investigation of the measurement strategy and digital analytics implementation often reveals a much different truth: digital marketing teams often don’t fully understand the capabilities of the tools at their disposal. This common issue is born out of inconsistent technical implementations, a lack of analytics expertise within the team, or a general misunderstanding of the types of metrics and data that the business should collect.

As long as digital teams maintain that reporting and analysis are at the heart of the issue, organizations will not be able to leverage the full feature set available with analytics tools like Google Analytics or Adobe Analytics. This leaves digital leaders questioning whether they should invest in more expensive or specialized tools to get the “right” data that will create new and incremental value. The often surprising reality is that an updated implementation with more focused tracking would suffice to provide digital teams with the valuable data they seek across all digital platforms (e.g., mobile app, CRM, and point of sale). With the recent addition of tools like Google Data Studio (a lightweight BI dashboard tool) and Google Optimize (an optimization experimentation tool for the free Google Analytics suite), the vast majority of businesses don’t need a paid analytics solution. You can invest the money usually spent on expensive data management tools into analysts or other digital marketing efforts. In most cases, the free versions of tools like Google Analytics and Google Tag Manager are more than sufficient for the needs of an organization, but teams don’t have a true understanding of the tools’ capabilities or don’t implement their tracking in a way that works within the limitations of the free toolsets.

9 questions that can clarify your digital analytics capabilities

If you wonder whether your digital analytics toolset is up to par, start by asking some basic questions. The answers will divulge the true extent of your digital analytics capabilities and identify areas for improvement from both technical and expert standpoints.

1. Are we combining data that we already have about our customers with their on-site activities?

Many digital teams who use only the out-of-the-box versions of tools like Google Analytics are unaware that the tools come with powerful custom features. Custom dimensions, for example, provide valuable context to the information being collected. You might have data, including gender, zip code, customer segment, persona type, etc., about a specific user based on their digital profile. Populate these values into the analytics code, along with everything that is tracked, using custom dimensions in order to create meaningful user segments that provide insights as opposed to just metrics.

2. How do specific sections of the website compare to others?

Many teams stop their measurement at the page level. This is a natural inclination, considering every interaction is recorded with the single page associated with it. However, there are features, such as content groups and custom dimensions, that allow you to combine the data from specific pages into predefined site sections or groups. You can then compare these page groupings to each other to understand how they impact conversion and acquisition.

3. How does a specific content type impact conversion?
By using the aforementioned features alongside conversions and goal tracking, analysts can pinpoint which content types have the greatest impact on conversion, user drop-off, and other key performance indicators (KPIs) reflecting business objectives. Organizations can then use this information to better allocate resources. For instance, if you learn that zero percent of your blog traffic converts on the site, you need to shift resources toward a more conversion-friendly design, more engaging content, or traffic sources that are converting.

4. Can your analysts easily set up event tracking or conversion tracking without developers?

Teams should use tools like Google Tag Manager or Dynamic Tag Management to implement their analytics platforms whenever possible. These platforms allow marketers to control what is tracked and how that data is expressed in the analytics tool. Many companies do not use these implementation tools. Instead, they still rely on developers to add code, adding significant time to the tagging process and deterring analysts from tracking in some cases.

5. What are our most efficient sources of traffic for conversion?

Analysts should be able to relay which traffic sources generate the most conversions and which sources are the most efficient at doing so. An SEM program may generate the most conversions, while organic search might have a drastically higher conversion rate. In these instances, it’s worth exploring what an investment in the organic search channel could do versus making a similar investment in SEM or social media. (A minimal sketch of this kind of source-efficiency analysis appears at the end of this article.)

6. How are key segments of customers converting against other key segments on the website?

Analysts can create meaningful segments within the analytics tool to understand how different types of customers utilize the website. These segments can be developed using existing customer data or even basic demographic data, like age and gender. Segmentation yields the data necessary to understand how different types of users are being impacted online and to help identify areas for improvement.

7. Can we run A/B or multivariate tests on the website today?

A key part of optimizing your digital strategy should be conducting experiments with your website or app content. Many analysts don’t invest in the development of a testing program because of time constraints or simply because they aren’t aware of how easy it can be to run A/B or multivariate tests. Tools like Google Optimize are free and provide a robust feature set that integrates with Google Analytics and Google Tag Manager.

8. What are the primary drop-off points for customers prior to conversion?

Analysts should have a clear understanding of what keeps users from converting on the website, no matter the type of site or conversion. With goal funnels or some elbow grease and expertise, analysts can identify where users drop off the site and what might be impacting their experience. With this visibility into customer needs, you can optimize the user experience and generate more conversions.

9. What are the main reasons users come to our site?

Often, teams don’t take time to understand the specific intentions of users who come to the website because team members assume they know the reasons. However, their assumptions are often based on their own internal knowledge of the business. For example, you may see a significant increase in traffic to the website, only to find users are going to the careers page or reading a specific piece of content that doesn’t necessarily impact conversion.
By understanding the keywords, traffic sources, and landing pages that drive users to certain parts of your site, analysts can create user segments based on intent.

Understanding your digital analytics capabilities is the first step

Uncovering your digital analytics capabilities can be the difference between a measurement strategy that ensures the continual improvement of online customer experiences and an expensive data tool that produces outcomes unrepresentative of business objectives. Before deciding to invest in a data management tool, assess whether your digital team has the expertise to leverage your current analytics tools. In most cases, we find the free versions of tools like Google Analytics and Google Tag Manager are sufficient for the needs of an organization. By getting answers to the right questions, you can discover your organization’s hidden capabilities and begin to leverage digital analytics to meet customer and business needs.

Want to explore your organization’s digital analytics capabilities or dive deeper? Let us know.
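As referenced in question 5 above, here is a minimal pandas sketch of a source-efficiency analysis run on an exported sessions table. The export file and its columns (session_id, traffic_source, and a 0/1 converted flag) are hypothetical; analytics tools can produce equivalent reports natively, but doing it on raw exports makes the arithmetic explicit.

```python
import pandas as pd

sessions = pd.read_csv("sessions_export.csv")  # one row per session, hypothetical

by_source = (
    sessions
    .groupby("traffic_source")
    .agg(sessions=("session_id", "count"), conversions=("converted", "sum"))
    .assign(conv_rate=lambda d: (d["conversions"] / d["sessions"]).round(4))
    .sort_values("conv_rate", ascending=False)
)
print(by_source)

# An SEM channel may top raw conversions while organic search tops conv_rate;
# that gap between volume and efficiency is the investment signal to explore.
```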