Articles about Data
Predictive models tell you more than any tarot reading

“Call me now for your free tarot reading!” If you were ever up late watching television in the late ’90s, you remember Miss Cleo, the woman who claimed she could see the future and tell your fortune with an over-the-phone tarot reading. You probably read that opening line in her dramatic Jamaican accent and pictured her with her crystal ball. Of course, it was “for entertainment only,” but that didn’t stop thousands of people from calling in, hoping for a glimpse of the future to help them make tough decisions.

Wouldn’t it be great if you could see what the future holds for your business so you can make smart decisions and build strategies that will support upcoming trends and mitigate risk? But if you can’t call Miss Cleo, what can you do? You look into the past. And no, we’re not talking about holding a séance. We’re talking about leveraging your data with machine learning. Machine learning analyzes your data for trends and patterns that can be used to develop predictive models to forecast economic trends, customer behavior, and marketing campaign success.

Learn More: Modern data platforms support machine learning to develop predictive models

Get Smart: Want to learn more about the rise and fall of the self-proclaimed psychic? Check out Call Me Miss Cleo, a new documentary on HBO Max that provides an illuminating, in-depth look at the woman, the empire that was built around her, and her eventual downfall.
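For the curious, here is what that "learn from the past to read the future" idea looks like in a minimal sketch, assuming scikit-learn and entirely invented numbers: fit a simple model on historical campaign spend and conversions, then forecast the outcome of a planned campaign.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: monthly ad spend (in $k) and conversions observed.
monthly_spend = np.array([[10], [12], [15], [18], [22]])
conversions = np.array([110, 128, 160, 185, 230])

# Fit a simple trend model on the past...
model = LinearRegression().fit(monthly_spend, conversions)

# ...and "read the future" for a planned $25k campaign.
print(model.predict(np.array([[25]])))
```

Real forecasting work involves far more features, validation, and domain nuance, but the pattern is the same: the model learns only from what has already happened.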
There’s a rapidly growing space in tech that doesn’t have a name, but you know it when you see it. We call it “tech nobody asked for.”

Tech nobody asked for is typically modern technology, such as a mobile app, device, or software, that doesn’t solve a problem or improve an experience. Often, it creates additional steps or complications. Here are a few examples:

- An internet-connected hairbrush that “listens” to hair and recommends products
- A Bluetooth water bottle that syncs to a “hydration app”
- A smart egg tray that syncs to your phone to tell you how many eggs you have and how fresh they are

These may be extreme cases, but the examples are nearly endless. Have you been to a restaurant that got rid of printed menus and now requires you to scan a QR code or download a mobile app to see the menu? No one wants a QR code menu.

Your business wants to solve customer problems and offer the best experience. Providing a mobile app or other new tech solution seems like the right answer, but you want to avoid investing in and launching tech nobody asked for. That’s where our technology and marketing teams can help. We identify customer concerns, including obstacles in the buyer journey or gaps in customer experience, and help you build out tech solutions to overcome those issues, including mobile applications or website improvements. Check out this case study to see how Fusion did exactly that. >>

If you want to learn more about custom, customer-friendly technology solutions, we can help you get started.

Want to see more examples of tech nobody asked for? MIT Technology Review released its picks for the 10 worst technologies of the 21st century. Is there anything on the list you disagree with?
Are modern data platforms on your gratitude list?

When you’re dealing with an inflexible, monolithic technical architecture, getting the right information at the right time is like trying to cook a traditional Thanksgiving feast in a microwave. You need better tools for the job.

Thankfully, modern data solutions like data mesh frameworks can help. Using a data mesh distributes information across autonomous domains, allowing business users to own, manage, and share their data in a separate virtual environment, while governance remains centralized. It’s the data equivalent of asking your cousins to make the side dishes while you handle the turkey and set the table.

Interested in figuring out your options for modernizing and democratizing your data frameworks? Check out our latest deep dive on modern data solutions. Your data consumers will be grateful!

Get your extra helping of modern data trends >>
Accessing data at the speed of business is critical to remaining competitive in a digital-first world. But if you’re relying on outdated architecture where your data is trapped in silos or lost in a data lake, access to the functional data you need is seriously limited. When your existing framework is no longer serving your business, it makes sense to transition to a modern data platform, but you may have hesitations about whether it can help you succeed. To help you better understand this solution and what you stand to gain from it, we are looking at data platform capabilities and sharing five modern data platform imperatives that will help you achieve a more logical data management system.

What is a modern data platform?

With so many emerging data solutions, we understand that the data landscape is complicated, so we want to start by clearly defining what a modern data platform is and what it can do. A modern data platform is a flexible, cloud-based, end-to-end data architecture that supports collecting, processing, analyzing, and delivering data to the end user in a way that is aligned and responsive to the needs of the business.

On the surface, aside from being cloud-based rather than on-premises, modern data platform capabilities aren’t fundamentally different from those of traditional data architecture. The difference is in how new technologies have expanded those capabilities. Here are some of the ways modern data platforms can deliver more for your organization:

Data ingestion
Bringing new data into the environment is the first step in managing data, and in a legacy architecture, that is mainly done through batch processing, which collects and processes data at set intervals. By leveraging the higher computing capacity of a cloud-based architecture, data can instead be streamed in real time to data storage, eliminating bottlenecks and delays and keeping data moving through the system in a more fluid manner.

Quality and governance
With AI integrated into the architecture, data quality and governance tools can be automated, speeding up how new data sources are analyzed, categorized, and assessed for security concerns.

Security
Security measures can be integrated at the base level for new data products, providing inherent encryption whether data is at rest or in transit. Within a modern data platform, security measures dynamically filter and obscure data as needed to support your organization’s security policies.

Storage
Cloud-based architecture offers nearly unlimited storage on a pay-as-you-go model, so you only invest in the volume of storage you need today. As your data storage needs increase, you can add and seamlessly integrate additional space without creating silos for new data.

Transformation
In legacy architecture, transformations such as quality adjustments and business logic need to be applied in the early stages of the data flow, during large batch processing. While this makes downstream usage of the data more performant, it also locks the business rules in place, which removes flexibility in how the business looks at and interacts with the data. The expanded computing power and advanced tools of a modern data platform offer a more flexible timeline for transforming data: business rules and logic can be applied later in the data flow and adapted to suit changing needs.
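As a rough illustration of that late-binding approach, consider this minimal sketch, assuming pandas and invented table and rule names: raw records land untransformed, and business rules are applied as a function at read time, so changing a rule doesn't require re-ingesting anything.

```python
import pandas as pd

# Raw orders land exactly as received (store first, shape later).
raw_orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "status": ["shipped", "canceled", "delivered"],
    "quantity": [2, 1, 5],
    "unit_price": [19.99, 5.00, 3.50],
})

def apply_business_rules(df, billable_statuses=("shipped", "delivered")):
    """Late-binding transformation: rules live here, not in the load job,
    so they can change without touching the stored raw data."""
    out = df[df["status"].isin(billable_statuses)].copy()
    out["revenue"] = out["quantity"] * out["unit_price"]
    return out

print(apply_business_rules(raw_orders))
```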
Discovery
Data discovery is streamlined through integrated tools within a modern data platform that automatically scan, categorize, and organize metadata so the most appropriate data can be accessed more easily and quickly.

Delivery
In a legacy architecture, delivery and visualization tools required data to be specifically structured prior to business usage, whether for reporting, data extracts, or API access. Now, visualization tools have advanced features that support access to semi-structured and unstructured data without the need for intensive (and expensive) data processing. Integrated tools simplify both data extraction and data sharing and have built-in security and monetization features.

DevOps and DataOps
In a modern data platform, DevOps/DataOps support multiple platforms and languages, which makes it easier and faster to coordinate development and release implementation tasks when architectures are built using multiple tools.

5 modern data platform imperatives

The overall framework, capabilities, and patterns of managing data are universal within a modern data platform. However, no two platforms are the same. Each one is highly customized to support the data and data needs of the organization and requires different combinations of tools or features to achieve specific functionality and cover the needed capabilities. You still need to ensure your platform manages data in a way that aligns with your organization’s unique needs, and that means meeting five modern data platform imperatives.

1. Greater flexibility
The greatest challenge of legacy data architecture is its lack of flexibility. Physical servers can’t be added to or modified easily to meet the changing data needs of your organization, so they have to be built with capacity for future data needs. That is easier said than done given the rapidly changing landscape and the sheer volume of data you’re taking in.

A modern data platform is incredibly flexible. It allows you to consider your data needs today and budget accordingly, rather than trying to predict your future data needs, which requires a significantly larger investment. As you need to increase data storage, adopt automation, or pivot in your data needs, these updates can be integrated seamlessly into the platform.

2. Improved access
The people and applications accessing data need it in real time and in the proper format, but the needs of your data science team vary greatly from the needs of your business intelligence team. A modern data platform must support a faster time to market for data assets, and one way it does this is through a medallion architecture, which creates a multi-layered framework within the platform to move data through a pipeline to the end user:

- Bronze layer: Raw data is collected directly from the source systems with little to no transformation and stored here to provide a base layer of full history for additional processing.
- Silver layer: Data from multiple sources is curated, enriched, integrated, and organized in a structure that reflects the data domains of the organization.
- Gold layer: Data needed to support specific business drivers is aggregated and organized so it can be used for dashboard creation and self-service analysis of current states and trends.

This architecture allows a diverse user base to access the data in the form that best suits their needs, as the sketch below illustrates.
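Here is a minimal, purely illustrative sketch of that bronze-to-silver-to-gold flow, assuming pandas and invented column names; production medallion pipelines typically run on Spark, Delta Lake, or similar rather than in-memory frames.

```python
import pandas as pd

def to_bronze(source_records):
    # Bronze: land raw records as-is, preserving full history.
    return pd.DataFrame(source_records)

def to_silver(bronze):
    # Silver: cleanse and conform into domain-shaped data.
    silver = bronze.dropna(subset=["customer_id"]).copy()
    silver["order_date"] = pd.to_datetime(silver["order_date"])
    return silver

def to_gold(silver):
    # Gold: aggregate to answer one specific business question.
    monthly = silver.groupby(silver["order_date"].dt.to_period("M"))
    return monthly["amount"].sum().rename("monthly_revenue").reset_index()

bronze = to_bronze([
    {"customer_id": "c1", "order_date": "2023-01-05", "amount": 120.0},
    {"customer_id": None, "order_date": "2023-01-06", "amount": 80.0},
    {"customer_id": "c2", "order_date": "2023-02-01", "amount": 45.5},
])
print(to_gold(to_silver(bronze)))
```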
In practice, data scientists can access raw data from the bronze layer to identify new and emerging patterns, business applications can access data in the silver layer to produce data products, and business users can access the gold layer to perform analytics and create dashboards.

3. Incremental implementation
Rather than transitioning to a modern data platform in a single, giant step, we recommend an incremental move. This makes it significantly easier and faster to focus on the current data products your organization needs, like reports and dashboards, while you are starting to build out the initial infrastructure.

An incremental implementation lets you take a clear, informed look at the data you need, how you need it, and how it aligns with your business drivers. You can then choose to add, adjust, or stop processing certain data to put more focus on the data that will answer pivotal business questions. By building only what you need when it’s needed, an incremental implementation also saves money and avoids bringing over old data that no longer serves your business.

4. Better communication between IT and business users
A modern data platform needs to support improved communication between your IT or data engineering teams and your business users. When data flows through the framework and reaches end users in the language they speak, those users gain greater clarity. For business users, this may mean spotting gaps where the existing data doesn’t directly answer their questions and finding different ways to utilize the data. For data engineers, it may mean spotting opportunities to filter out aberrations in the data and improve the aggregated results. This clarity allows the teams to work together on solutions that cover existing and emerging needs.

5. Refocus valuable resources
Once the initial data set is built, we apply repeatable patterns to the mechanics controlling data ingestion, storage, and delivery. Having a proven framework that can be applied to unlimited data sets saves time and reduces the cost of building, operating, and maintaining the platform. Your data team can refocus their time on higher-level tasks, including improving data quality and speeding up delivery.

Whether you have questions about data platform capabilities and functionality or you’re ready to make the shift to a modern data platform, we’re here to help! Set up a call to talk to an expert or visit our modern data platform hub to learn more.

Ask us your questions >>
Learn more about modern data platforms >>
In a perfect world, all your data would be stored in an updated, organized database or data warehouse where your business intelligence and analytics teams could keep your company ahead of the competition by accessing the precise data they need in real time. In reality, as your organization has grown, your data has probably been stretched across multiple locations, including outdated databases, localized spreadsheets, cloud-based platforms, and business apps like Salesforce. This not only causes costly delays in accessing information, but also impacts your teams’ ability to make informed, data-driven decisions about both day-to-day operations and the long-term future of your organization.

So, how do you improve access to your data when it’s siloed in multiple areas? Data virtualization, while still fairly new, is an efficient, effective data delivery solution that offers real-time access to the data your teams need, and it is rapidly growing in popularity among large to enterprise-level organizations. The market was estimated at $1.84 billion in 2020, and at a 20.9 percent CAGR, it is projected to exceed $8 billion by 2028, according to a 2022 Verified Market Research report.

To help you determine if data virtualization solutions are the best option for your company, we’ll take a look at what data virtualization is, how it can solve your greatest data challenges, and how it stacks up against other data integration solutions.

Understanding data virtualization

First, what is data virtualization? When you have data housed across multiple locations and in various states and forms, data virtualization integrates these sources into a single layer of information, regardless of location or format, without replicating your information into new locations. This layer of data is highly secure and easily managed within governance best practices, and it allows the data consumers within your organization to access the information they need in real time, bypassing the need to sift and search through a variety of disparate sources.

Data virtualization supports your existing architecture

Data virtualization does not replace your existing data architecture. Instead, it’s a single component in a larger data strategy, but it is often essential to executing that strategy successfully and meeting the goals of your organization.

Think of your current data architecture as an old library where your data is kept on a variety of shelves, over multiple floors, with some of it stored in boxes in the basement. When you are looking for specific information, you have to go on an exhaustive, lengthy search, and you may not even find what you need. Data virtualization acts as the librarian who understands the organizational system, knows exactly where everything is located, and can provide you with the information you need immediately.

Choosing data virtualization vs an ETL solution

When reporting is delayed, analytics are inaccurate, and strategic planning is compromised due to bottlenecks, it’s essential that your organization prioritizes how data is integrated and accessed. Traditionally, the only choice was Extract, Transform, and Load (ETL), an intensive process in which all your data is duplicated from the original sources and moved into a data warehouse, database, or other storage. While ETL can bring your data together, there are two key problems with this method. The first is cost: moving and relocating data is often the chief concern organizations have.
The second is connection: while ETL improves the collection of your data by consolidating it in one location, it doesn’t improve your connection to the analyzable data needed to improve day-to-day operations.

Data virtualization solutions, on the other hand, streamline how you access and connect to your data. Your business users submit a query, and the Denodo data virtualization platform pulls the data from across locations, extracts the relevant information, and delivers it in real time in the needed format so it’s ready to analyze and use. The result? Increased productivity, reduced operational costs, and improved agility among business users, while your architects and IT teams have greater control over governance and security.

Take a deeper dive into data virtualization solutions

Ready to dig deeper into data virtualization? We partnered with data management leader Denodo Technologies to put together Modernizing Integration with Data Virtualization, a highly informative webinar to help you learn how data virtualization helps your company save time, reduce costs, and gain better insight into your greatest asset.

To learn how Fusion Alliance can create custom data virtualization solutions to scale your data management and improve access, reach out to our team. Ask us any questions or set up a quick call to explore your options.

Learn more about modern data platforms >>
Whether your data is housed in a monolithic data architecture or across multiple, disparate sources such as databases, cloud platforms, and business applications, accessing the specific information you need when you need it probably presents a huge challenge. The length of time it takes to find data may have you or your analytics teams constantly relying on outdated information to run reports, develop strategies, and make decisions for your organization.

If you’re exploring data solutions that will improve time to market while simplifying governance and increasing security, you’ve probably come across the terms “data fabric” and “data mesh,” but you may not know how to apply them to your business. To help you better understand these emerging trends in data architecture, we’re digging into what data fabric and data mesh are and the specific benefits they bring to large and enterprise-level organizations. This will give you the foundational knowledge to choose between data fabric and data mesh, or to determine how both may serve your organization.

What is data fabric?

When you think of every bit of data in your organization as an individual thread, it makes sense that it takes so long to access specific information. If thousands of individual threads are stored together in a bin, as in a monolithic architecture, or separated across hundreds of individual boxes with little to no organizational method, as in a distributed architecture, how long would it take to find the single thread you’re looking for and get it untangled so you can use it?

A logical data fabric solves this problem by weaving all the threads of data together into an integrated, holistic layer that sits above the disparate sources in an end-to-end solution. Within the layer, multiple technologies work together to catalog and organize the data, while machine learning and artificial intelligence improve how new and existing data are integrated into the fabric as well as how data consumers access it.

Are data virtualization and data fabric the same?

A common misconception is that data virtualization and data fabric are the same. On the surface, both support data management through the creation of a single, integrated layer of processed data atop distributed or unstructured data. Data virtualization is an integrated abstraction layer that speeds up access to data and provides real-time data returns, and this technology is a key component within the data fabric. However, data virtualization is only one of the multiple technologies that make up a data fabric, which is a more comprehensive data management architecture.

Benefits of data fabric

Now that you have a better understanding of what data fabric is, let’s consider the problems it solves and why it may be right for your organization.

Access your data faster
When your data is in multiple formats and housed in a variety of locations, gaining access to the specific details you need can take hours, days, or even weeks, depending on your architecture. A logical data fabric leverages metadata, semantics, and machine learning to quickly return the needed data from across multiple sources, whether it’s a large amount of historic information or highly specific data used to drill down into a report.

Democratize your data
Data fabric uses advanced semantics, so the data is accessible in the language of business users, such as BI and analytics teams.
Data consumers within the organization can access what they need without having to go through data engineers or the IT department, eliminating bottlenecks and sharing ownership of data.

Improve governance
Because of the automation capabilities of data fabric, you can implement a governance layer within the fabric. This applies global policies and regulations to data while allowing local metadata management to reduce risk and ensure compliance.

What is data mesh?

Monolithic data architecture keeps data in one centralized location. On paper, this seems like a more cost-effective, efficient option compared to a distributed architecture, but it still brings several challenges. Consider that in many large organizations relying on a monolithic architecture, massive volumes of unstructured data are stored in a data lake. For information to get into the hands of data consumers, or before productization can occur, the data must be accessed and processed through the IT department, creating significant bottlenecks and bringing time to market to a crawl.

A data mesh can solve this challenge. Data mesh is a newer architectural approach, first proposed in 2019 by Zhamak Dehghani of Thoughtworks, that shifts data from a monolithic architecture to a decentralized one. More specifically, the data is distributed across autonomous business domains whose data consumers own, manage, and share their own data as they see fit. While the domains are given a separate virtual schema and server so they can have full ownership over data productization, governance, security, and compliance remain centrally unified.

Benefits of data mesh

The challenges of centralized data ownership include latency; the added costs of storage, software, and replication; and a lack of practical access for consumers. Implementing a data mesh can solve these.

Eliminate IT bottlenecks
When all data is forced to go through the IT department before being distributed to the individuals or teams requesting it, bottlenecks occur and slow the flow of data. A data mesh lets data bypass the IT department and flow directly to the teams that need it.

Improve flexibility and agility
Finding specific information within the massive volume of unstructured, undefined data stored in a data lake requires increasingly complicated queries. A data mesh gives ownership of datasets to individual teams or business owners, simplifying access and offering real-time results through scalable, automated analytics.

Increase connection to data
By transferring data ownership to the data consumers, those who use the data directly have a greater connection to it. The data is available in the language of business, and it can be shared across teams with greater ease and transparency.

Choosing data fabric vs data mesh

Data fabric and data mesh both support data democratization, improve access, eliminate bottlenecks, and simplify governance. While data fabric is built on a technology-agnostic framework to connect data across multiple sources, data mesh is an API-driven, organizational framework that puts data ownership back in the hands of specific domains.

So, which is better in the debate between data fabric and data mesh? The simple answer is that neither one is better than the other; the right option is determined by the use case.
If the goal of your organization is to streamline data and metadata to improve connection and get real-time results across multiple teams, a data fabric built on a data virtualization platform can help you meet your goals. On the other hand, if you need to improve data productization and decentralize your data, a data mesh may be the best option.

But the real answer is that, contrary to popular belief, the two are not mutually exclusive, and most businesses succeed by implementing both. Data fabric and data mesh are complementary solutions that can work together to solve the challenges of your existing architecture.

Learn more about data fabric and data mesh

Want to gain further insight into choosing data fabric or data mesh? We partnered with data management leader Denodo Technologies for a recorded webinar, Logical Data Fabric vs Data Mesh: Does It Matter?, in which we provide an in-depth look at monolithic and distributed data architecture, the challenges they bring, and how both data fabric and data mesh can improve agility, reduce costs, and elevate the quality of your data.

To ask additional questions or learn how Fusion Alliance can help you create and implement a successful data strategy to meet your unique challenges and goals, connect with our team today.

Learn more about modern data platforms >>
The importance of data classification

Often presented as a click-bait internet poll, the question “Is cereal a soup?” is only baffling until you realize that the answer hinges on how you define the term. Merriam-Webster contends that soup is a liquid sustenance often containing pieces of solid food. Therefore, as one respondent said, cereal is a soup “technically, though not existentially.”

Proper definition of terms is also critical when it comes to classifying your data. To get the most from your data assets, you’ll need a strong data strategy, supported by definitions like:

- How information is grouped, weighted, and prioritized
- How common dimensions will be conformed
- How data will be standardized, cleansed, and tagged

Your data use cases, sources, and architecture are unique. How you define your data strategy should be, too. Fusion’s team of data, technology, and digital experts can help you architect and implement a comprehensive data strategy, offer insights and best practices to support a growing data culture, or step in to solve a particular problem.

Don’t let data eat your business for breakfast. Learn more about defining your data terms or get in touch for a quick consultation.
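As a small illustration of what "tagged" can mean in practice, here is a toy classification rule set; the patterns and tags are invented, and a real program would tie rules like these to your governance policies.

```python
import re

# Toy rules mapping field-name patterns to classification tags.
CLASSIFICATION_RULES = [
    (re.compile(r"ssn|tax_id", re.I), "restricted"),
    (re.compile(r"email|phone|address", re.I), "pii"),
    (re.compile(r"revenue|price|cost", re.I), "financial"),
]

def classify_field(field_name, default="internal"):
    """Return the first matching classification tag for a field name."""
    for pattern, tag in CLASSIFICATION_RULES:
        if pattern.search(field_name):
            return tag
    return default

print(classify_field("customer_email"))   # -> pii
print(classify_field("cereal_is_soup"))   # -> internal
```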
Today’s businesses collect more data than ever before, but many don’t have the architecture in place to store, process, and recall the data in real time. Whether an enterprise-level organization stores all its data in a single data lake or relies on multiple, disparate sources, both options cause significant delays in finding the specific information you’re looking for.

Traditionally, if your organization wanted to update and upgrade its existing architecture, the only option was to extract, transform, and load (ETL) the data into a new framework. Implementing a logical data fabric offers a better alternative, giving companies a cost-effective, efficient way to collect and integrate data while building a stronger framework across the organization.

At a recent CDO Data Summit, Mark Johnson, Fusion Alliance Executive Vice President and editorial board chair for CDO magazine, sat down with thought leaders in the data industry to discuss why logical data fabric is essential to accelerating time to value.

What is a logical data fabric?

When you have multiple disparate data sources, a data fabric acts like a net cast over the top, pulling individual information sets together in an end-to-end solution. Data fabric is a technology-driven framework that lies within the existing architecture, unlike a data mesh, which is a methodology for how data should be distributed among data owners and consumers. In a logical data fabric, multiple technologies are implemented to catalog and organize existing data and integrate new data into the fabric. Data virtualization is the central technology deployed within this framework, creating an abstracted layer of unified data that is more secure and easily accessible.

What challenges are solved by a data fabric architecture?

Logical data fabric architecture offers a solution to the challenges faced by organizations relying on numerous data storage solutions or repositories of structured and unstructured data:

Overcome slow data delivery
By consolidating data into an integrated semantic layer, common business applications can process, analyze, and return the data in real time, in the language of the data consumer. This improves accessibility and significantly reduces the latency that comes from applications having to search across multiple sources to return information.

Simplify governance
If every data warehouse, database, and cloud-based platform within your organization relies on separate governance, you are dealing with significant inconsistencies. By stitching the data together in a logical data fabric, centralized governance can be applied across all data and automated to maintain and streamline the process.

Reduce IT bottlenecks
Data fabric automates how data is processed, integrated, governed, and utilized, enabling real-time analytics and reporting. This puts data in the hands of your BI and analytics teams more quickly while removing bottlenecks from your IT department.

With a logical data fabric architecture, your business can respond to trends and changes within your industry more quickly, helping you evolve both short- and long-term strategies to reflect what your data is telling you in real time.

Is a logical data fabric the right solution for your organization? Learn more about data fabric architecture from the CDO Data Summit’s round table discussion.
Mark Johnson is joined by:

- Baz Khauti, President at Modak USA
- Richie Bachala, Principal, Data Engineering at Yugabyte
- Ravi Shankar, SVP and Chief Marketing Officer at Denodo
- Saj Patel, VP of Data Solutions at Fusion Alliance

This panel addresses critical questions about data in today’s business to help you solve your unique data challenges, including:

- Is the fabric of data virtual, physical, or both?
- How do we get value out of our data? Do we take a connect or collect approach?
- How comprehensive do we need our data approach to be? Are we optimizing for agility or for flexibility?
- How do we deliver unified data?
- Is the organization in agreement with what we are looking for out of its data?
- What AI/ML techniques do we want to employ, if any?

If you have specific questions or are ready to take the next step and learn how we can help you create custom data solutions for your organization, reach out to us today for a quick chat!

Learn more about modern data platforms >>
In the golden era of Universal Analytics (UA), Google pre-packaged a comforting array of reporting right out of the box. But as companies transition to Google Analytics 4 (GA4) in preparation for UA’s planned sunset in 2023, marketers have been surprised to find far fewer of those pre-set analysis tools, and many are scrambling to rebuild the reports they rely on for key metrics. For example, if you check your UA account for acquisition, you’ll find roughly 25 different reports you can tap into right away. If you check acquisition in GA4, on the other hand, you’ll see an overview screen and… two reports.

But there’s no need to panic. While switching from UA means giving up those pre-packaged reports, what you gain from GA4 is the opportunity to collect data and analyze it in ways that make the most sense for your business. In this article, we’ll point you to places where you can find the UA reports you’re used to in GA4, and then we’ll show you how to build GA4 custom reports that fit your business needs. Forget the good old days. The best is yet to come.

Find your favorite UA reports in GA4

While you might not find a one-to-one match for everything you’re used to in UA, GA4 does offer some reasonable facsimiles, although the naming may differ.

Acquisition Reporting → Traffic Acquisition
If you use UA’s Acquisition Reporting to answer questions about website traffic, you can find some similar metrics in GA4’s Traffic Acquisition. You’ll notice that Traffic Acquisition is set up in a similar format, but — and this is a big hurdle — you won’t be able to drill down into the data with a few quick clicks in GA4 like you can in UA. In GA4, instead of clicking around to find information, you use the plus sign (+) to set up secondary dimensions when you want to drill down into information. As you set up secondary dimensions, you’ll be able to search and narrow down the data to determine the best way to answer the questions your business is asking. In this case, GA4 shows you the same information you found in UA, but in a more targeted, deliberate format.

Bounce Rate → Engagement Rate
At first, it seemed bounce rate had bounced out of the analytics arsenal entirely with GA4, but the metric most responsible for marketing panic attacks is back, albeit in a slightly different form. Bounce rate does exist in GA4, but it’s calculated a bit differently because of GA4’s different data model. So, if you compare your current UA bounce rate to GA4’s, you will see a difference, and you’ll need to set new benchmarks.

GA4 also introduced a new metric to try to give us better information about how visitors use our websites: engagement rate. Unlike UA’s bounce rate, GA4’s engagement rate measures people who stay on your site and actually stay engaged. It’s a bit more dimensional than bounce rate, but also a little more difficult to manipulate. You can export engagement rate data into Excel if you’re up for doing a little more digging, but this report is one that might benefit from customization.

Audience Overview → Demographics Overview
Similar name, same functionality! As in UA, GA4’s Demographics Overview gives you a quick snapshot of your users, including:

- New vs returning users
- Demographic data
- Browser and operating system access

Content Drill-Down → Pages & Screens Report
In UA, the Content Drill-Down report gives a view of how site content performs at a hierarchical level within the URL structure.
In GA4’s Pages & Screens Report, on the other hand, you see your content by page title, but not by section. You can change the GA4 report view to page path, which allows a little more clarity, but the interface doesn’t support clicking through different sections and paths. A few workarounds may help:

- Use the search function to look up different sections of your website, like “blog,” “about,” “services,” and so forth
- Export content to Excel to group and compare different sections against each other
- Use Explorations rather than Pages & Screens to dig into specific content performance questions

Explorations

When you can’t find a 1:1 match for a UA report you used to rely on, you could use GA4’s Explorations function to rebuild an exact match, but you could also take the opportunity to fine-tune the report to answer questions in an even better way. Within the GA4 Explore tab, users can build their own detailed reports, called Explorations, from a gallery of templates. We expect this library to continue to grow, but the baseline options are already quite useful.

Of course, as you build out your own custom GA4 reports, you’ll want to start from a list of defined questions that serve your own internal goals, KPIs, and requirements. But to help you get the hang of creating your own Explorations, we’ll outline a few examples here, based on UA reports you may be used to using.

Behavior Flow → Path Exploration
This report delivers a segmented view of website traffic. For example, to find out how many site visitors get to your contact page via organic search, you can create a path exploration by:

1. Navigating to the GA4 Explore tab
2. Choosing the path exploration template
3. Clicking the organic search sector
4. Clicking through the path to find how many visitors in this segment visited the contact page

Behavior Flow → Funnel Report
You can create a similar version of the path exploration report with a funnel report, adding more detail about the steps you want to analyze. You can set this report up by:

1. Navigating to the GA4 Explore tab
2. Choosing the funnel exploration template
3. Clicking the organic search sector
4. Clicking through the path to find how many visitors in this segment visited the contact page
5. Adding form submission as a requirement
6. Editing and adding steps to change the desired action

This report can give you a better idea of common visitor behavior flows on your website. It’s a highly customizable GA4 report, both in its steps and in the dimensions of behavior you can track across those journeys.

Exit Pages → Free Form Explorations
Although you can’t find exit pages and exit page percentages out of the box in GA4 as you can in UA, you can use free-form explorations to create a custom GA4 report that gets you that data. Here’s the setup process at a high level:

1. Navigate to the GA4 Explore tab
2. Select the Free Form Exploration template
3. Set up the page path you’re tracking, including exits and sessions
4. Compare which pages have the most exits to total sessions to get the percentage

While you can’t get the percentage in a calculated column, this report is a helpful replacement if you need to find this data quickly. Using Free Form Explorations, you can take any of your metrics and add any dimensions to home in on your data at a very close level.

More options for building GA4 custom reports

As you dig into GA4, you may find that you can add new dimensions and metrics to the tables for some of GA4’s limited out-of-the-box reports, but you may find that the results lack the depth you need.
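When the built-in tables don't go deep enough, one more option is to pull your GA4 data programmatically and shape it however you like. Here is a minimal sketch assuming Google's google-analytics-data Python client; the property ID is a placeholder, and the metric and dimension names follow the GA4 Data API schema.

```python
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

# Auth comes from GOOGLE_APPLICATION_CREDENTIALS; the property ID is made up.
client = BetaAnalyticsDataClient()

request = RunReportRequest(
    property="properties/123456789",
    dimensions=[Dimension(name="pagePath")],
    metrics=[Metric(name="engagementRate"), Metric(name="sessions")],
    date_ranges=[DateRange(start_date="28daysAgo", end_date="yesterday")],
)

# Engagement rate and sessions per page, ready for Excel or Looker Studio.
for row in client.run_report(request).rows:
    print(row.dimension_values[0].value,
          row.metric_values[0].value,
          row.metric_values[1].value)
```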
Depending on the types of reporting you need, you may also find that Explorations give you enough common views to replace most of what you find in UA. However, most marketers will need to go a step further in their GA4 reports before completely moving away from UA. With GA4 still in flux and new features and functionality shifting, Looker Studio (formerly Google Data Studio) may offer your team a way to find consistency and recreate some of the views you were used to in UA with your GA4 data.

Shifting from UA outputs to GA4 custom reports isn’t easy

To make the switch from UA to GA4 seamless, you might need to call in reinforcements. Check out our GA4 resources or let us know if you have a specific question. Our team is helping mid-size businesses and enterprise-level organizations handle every aspect of the GA4 transition, and we’re happy to help you get what you need to be successful.

- [Match-up] Understand the differences between Universal Analytics and Google Analytics 4
- [Path] What you need to know to upgrade to GA4
- [Plan] 6 steps to make your Google Analytics 4 transition easier
- [Video] Meet the new Google Analytics 4
- [Contact] Ask us anything
Data has always been integral to organizations. However, as customer expectations continue to evolve, data-driven insights have proven essential to optimizing customer relationships. As a result, data leaders are not only integral to transformation, they are often leading it. From finding, acquiring, serving, and retaining customers to predicting and delivering the customer moments that matter, data and analytics have emerged as essential enterprise competencies.

At this year’s MITCDOIQ Symposium, our Regional Vice President and Executive Data Leader Mark Johnson sat down with a panel of experts:

- Todd James, Chief Data and Technology Officer at 84.51
- Chris Tambos, VP of Data & Analytics at Fortune Brands Water Innovations
- Eric Wiegand, Industry Expert
- David Levine, VP of Solution Sales at Fusion Alliance
- Saj Patel, VP of Data Solutions at Fusion Alliance

Together, these industry leaders illuminate the outcomes and successes your business can see at the intersection of digital, data, and analytics, and present emerging best practices that will help ensure your success. As business models pivot to meet the ever-evolving needs of customers and organizations, these experts explain how their organizations answered some of their biggest challenges, including dealing with pandemic-related changes, using data and analytics.

This panel also covers how data fits into your bigger business strategy. Marketing has traditionally been the biggest consumer of customer data, but now we are seeing how important data is to all areas of the organization. With a wealth of knowledge between these great data minds, this panel provides insights you won’t want to miss that can help you make the right choices and provide the ultimate customer experience.
Although still gaining momentum, data virtualization is on a fast track to address the challenges of traditional integration solutions by offering faster time to market for data and business capabilities, access to a broader range of data across your data ecosystem, and an integrated solution for management and governance. Data virtualization enables access to data without replicating or storing any of it. It’s essentially a virtual layer of information that allows you to access and integrate data from various sources and systems seamlessly.

But what are the typical use cases of data virtualization, and what are some of the challenges businesses encounter when trying to put it in place? Here we’ll dive into both questions, along with the potential opportunities data virtualization provides.

Common uses of data virtualization

Introducing data virtualization into an organization is generally use-case-driven. If your company fits one of the following three use cases, it may benefit from implementing this type of holistic strategy.

You have data in numerous locations
The primary use case for data virtualization occurs when companies have data in multiple locations. For instance, if your business has migrated data to the cloud, or to multiple cloud locations, but still has data on-premises, virtualization can pull all that data into one access point. Virtualization is a great candidate for making siloed information look united to the business, even when the data lives in separate environments.

But virtualization doesn’t just affect appearances: it also makes data from disparate sources simpler to access, which benefits users. For instance, many companies collect and store customer data in multiple platforms, which can make it difficult for the organization to discern a true 360° view of the customer. Data virtualization can seamlessly integrate the data across platforms to present a single, unified view — saving time over manual analysis and reducing the risk of key data points falling through the cracks.

Learn more about customer data strategy >>

You’re trying to migrate to the cloud
At this point, most companies are trying to modernize and move to the cloud to save money and time. But not all companies can move their data quickly and completely abandon the legacy system. Instead, they migrate data a little at a time, which can be a tedious and lengthy process. Data migration projects can take months, and during that time, business users can spend significant time finding, reconciling, and analyzing data manually. The result is a considerable loss of business opportunity, as teams are unable to respond to the business’s needs for data. Virtualization bridges the gap during this transition period, streamlining effort and boosting efficiency in the short term, so data can be migrated over time without negatively impacting business users.

Learn more about cloud migration strategy >>

You want to move from a DWH to a DaaS model
Some companies prefer to bypass putting data into their traditional data warehouse (DWH) in favor of a data-as-a-service (DaaS) solution. For these organizations, the time-to-value savings of getting data into the hands of users more quickly overrides the case for standing up and maintaining their own DWH. Data virtualization enables companies to bypass the need to create ETL processes entirely and serve up unified data views from any combination of DaaS and legacy sources.
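To make the "unified view, no copies" idea concrete, here is a deliberately simplified sketch in which pandas and SQLite stand in for real sources; a production platform such as Denodo handles query federation, optimization, and security for you. The point is that the unified view is assembled at query time, and nothing is replicated ahead of time.

```python
import sqlite3
import pandas as pd

# Each source is queried only when the view is asked for (no replication).
def read_crm():
    with sqlite3.connect("crm.db") as conn:          # stand-in source 1
        return pd.read_sql("SELECT customer_id, email FROM customers", conn)

def read_billing():
    with sqlite3.connect("billing.db") as conn:      # stand-in source 2
        return pd.read_sql("SELECT customer_id, balance FROM accounts", conn)

def customer_360():
    """A 'virtual view': assembled on demand from the live sources."""
    return read_crm().merge(read_billing(), on="customer_id", how="left")

print(customer_360().head())
```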
As long as your organization has thoughtful governance in place and has considered the potential privacy and security impact, data virtualization can quickly harmonize a DaaS strategy.

Learn best practices for evaluating your data storage options >>

Common roadblocks to data virtualization

If you’re considering data virtualization for your business, there are some potential constraints to consider.

- Incomplete MDM framework. If your company has outstanding master data challenges to address, it is best to have a master data strategy defined, with solutioning options incorporated into the data architecture, before data virtualization can take full shape. Mastering data is often a process and organizational change management problem to solve.
- Dispersed subject matter expertise. Creating a data virtualization solution requires thorough knowledge of your data, the business rules surrounding your data, and a strong understanding of the business needs for using that data. Since data virtualization brings disparate data together, the subject matter expertise on the various data domains can be spread throughout the organization. Identifying these SMEs and ensuring their engagement is a key enabler of success with data virtualization.
- Governance issues. You never want to overlook governance in a rush to meet business requirements. Accountability and ownership of data are essential tenets of a successful data management framework. Before implementing a data virtualization project, be sure you have a solid governance operating model in place to ensure security, compliance, and data quality.

Although data virtualization can be a transformative solution for many companies, it’s not your only option. Sometimes the use case isn’t quite there, or privacy and governance concerns outweigh the potential value of a data virtualization effort. Fortunately, there are multiple ways to realize the value of your data.

Explore your data integration & architecture options >>

Setting yourself up for data virtualization success

Data virtualization can be an excellent solution for businesses struggling with integration challenges that limit the speed and scale of business growth. Collecting data from multiple platforms and presenting it in one unified view for business users streamlines workflows and makes data easier to use and digest across the organization. It’s not a one-size-fits-all solution, but for certain use cases, data virtualization offers significant value.

How do you determine if data virtualization is a good fit for your business? How can you define and evaluate potential use cases to understand the potential pitfalls and weigh them against an accurate projection of benefits? In our Data Virtualization Discovery Workshops, expert teams walk you and your key stakeholders through your unique constraints and opportunities, identifying the right next steps to advance your data management strategy. Starting with your current-state architecture and building toward a true 360° view of your data, we’ll work with you to determine if data virtualization is a good fit, and which use cases will help you realize its value for solving core business problems.

Have a question about data virtualization? Ask us anything >>
Ready to get started? Explore our Data Virtualization Discovery Workshops >>
When it comes to understanding how wearables are changing healthcare, consumer brands serve as a solid leading indicator. Popularized by brands like Apple Watch, Fitbit, and Garmin, the global wearable healthcare market was estimated at $16.2 billion in 2021 and is projected to double in the next five years.

Healthcare wearables in daily life

Although most users rely on healthcare wearables to check texts during spin class or crush their friends’ daily step records, an increasing number rely on smartwatches and other medical wearables for life-saving medical information. As the technology evolves, healthcare wearables can now give minute-by-minute EKG readings, monitor blood sugar, check oxygenation levels, and help people use real-time data to manage their health while they go about their regular activities.

Wearables also deliver oversight and peace of mind to caregivers, as when diabetic children wear devices that monitor insulin and food intake and link to mobile apps monitored by their parents. These breakthroughs allow patients of all ages more autonomy while reassuring caregivers that the person is safe.

Learn more about what wearable devices make possible >>

Healthcare wearables in long-term care settings

Long-term care presents a gap between that at-home monitoring scenario and the tech-saturated acute care space of a hospital or clinic. Historically understaffed, nursing homes and long-term care facilities struggle with high turnover, increasing rates of preventable errors, and unnecessary escalation of avoidable medical events. In addition to the impact on patients and their loved ones, these realities affect the facility itself through lower reimbursement rates and increased cost of care.

A 2021 American Health Care Association and National Center for Assisted Living survey on staffing in these facilities showed that 99% of nursing homes and 96% of assisted-living facilities face a staffing shortage. Harvard University professor David Grabowski says the pandemic only worsened that already critical situation. He notes, “We’ve overlooked and undervalued this workforce for a long time and now we’re at a full-blown crisis… We’re in a crisis on top of a crisis.”

Ensuring the right level of care for high-risk and elderly patients amid staffing constraints formed a critical use case for transformation. Healthcare wearables emerged as a leading option that would give staff the ability to monitor more patients, get notifications when care is needed, and escalate when necessary.

Overcoming obstacles to adoption

Implementing a program for wearable devices in nursing homes introduced more stringent requirements than consumer wearables, including:

- Privacy protection: Patient medical information, covered under HIPAA, requires more protection than off-the-shelf iOS and Android systems offer.
- Usability concerns: Patients in nursing homes and long-term care facilities often lack experience with technology and/or the dexterity to manage new devices.
- Cost considerations: In addition to the cost of patient wearables and of devices allowing nursing staff to monitor and communicate alerts, facilities must also invest in secure data infrastructure and information architecture beyond the standard integrations in market-ready smartwatches.

Creating a targeted solution

Realizing that nursing homes and long-term care facilities faced unique barriers to implementing wearable devices, BioLink Systems set out to create a solution.
Initially, the company devised a device that could be attached to an adult brief to monitor urination levels and body position. However, early issues with the prototype limited production scalability. Fusion worked with BioLink to architect a cloud-based IoT solution that uses machine learning to exceed the company’s initial vision. Designed with a minimalist aesthetic and a user experience to fit the target demographic, the BioLink bracelet and adult brief wearables:

- Meet HIPAA requirements
- Monitor patient fluids
- Track patient vital signs
- Alert nursing staff when patient vitals fall outside their customized range
- Escalate alerts if patients are not attended to within an allotted timeframe

Initial testing and rollouts in nursing homes delivered immediate results, including:

- Improved patient care
- Decreased response times
- Fewer avoidable events, such as medication errors
- Decreased escalation of care level, including hospitalizations
- Improved oversight
- Increased compliance with state, federal, and agency regulations
- Better experiences for patients and their loved ones

Learn more about how BioLink’s wearables are changing healthcare >>

What’s next for wearable healthcare devices

As facilities gather more data from using these devices, the machine learning algorithm BioLink and Fusion designed will continue to refine unique vital-sign ranges for each patient, resulting in more targeted care. Future iterations of the BioLink device will integrate that information with the patient’s electronic medical record, enabling further customization of care. While each device starts with a baseline for normal on each of these vital signs, the more data that is collected, the better the facility can care for the patient. For example, if a patient’s oxygen level is consistently high, the device eventually creates a new threshold for that patient’s vitals and only sends notifications accordingly.

Especially within the elderly population, there are many people who can’t communicate what they need or when they feel a certain way. There are endless possibilities for providing better care under these circumstances. With options like dehydration sensors, nursing care staff is better able not only to bring water to patients but to ensure that they are actually consuming it.

The more variables, the better

Ultimately, the more variables, the better the information — resulting in better care and better outcomes. The correlation and combination of all the data from a patient can detect changes and allow for more timely, preventative care. And the more information included, the better the insights from the algorithm. With the right information, staff can prevent medical events by predicting problems and eventually creating better remedies and treatments to avoid costly medical interventions or catastrophic incidents.

As industry leaders and healthcare facilities see the impact of devices like BioLink’s bracelets, we expect to see greater adoption of healthcare wearables to elevate patient care, reduce facility costs, and find operational efficiencies even during times of staffing crises. With the right technology and innovation, we can change outcomes and save lives — with a wristband.
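To illustrate the kind of personalized-threshold logic described above, here is a deliberately simplified sketch; it is not BioLink's actual algorithm, and the window size, tolerance, and starting range are all invented. Once enough readings accumulate, the alert range is recentered on the patient's own rolling statistics.

```python
from collections import deque
from statistics import mean, pstdev

class VitalMonitor:
    """Toy per-patient adaptive alert range for a single vital sign."""

    def __init__(self, low, high, window=200, tolerance=2.0, min_history=30):
        self.low, self.high = low, high          # generic starting range
        self.readings = deque(maxlen=window)     # rolling patient history
        self.tolerance = tolerance               # std devs around baseline
        self.min_history = min_history

    def record(self, value):
        """Store a reading; return True if in range, False to alert staff."""
        self.readings.append(value)
        if len(self.readings) >= self.min_history:
            # Recenter the range on this patient's own baseline.
            mu, sigma = mean(self.readings), pstdev(self.readings)
            self.low = mu - self.tolerance * sigma
            self.high = mu + self.tolerance * sigma
        return self.low <= value <= self.high

monitor = VitalMonitor(low=92, high=100)         # e.g., SpO2 percent
for reading in (96, 95, 97, 88):
    if not monitor.record(reading):
        print(f"Alert nursing staff: reading {reading} out of range")
```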
From online transactions to mobile payment apps like Venmo, today’s consumers increasingly look for digital access to funds — and they expect a seamless experience. As customer expectations and the economy continue to evolve, digital transformation in finance and banking needs to keep pace.

Banking culture hasn’t always kept up with what digital customers are actually looking for. The World Retail Banking Report 2021 found that, “Despite being vocal about improving the customer experience, the banking industry’s delivery of the key components of a strong customer experience, such as improving transparency and social responsibility, improving customer support, and reducing the cost of services, falls far short of customer expectations.”

Three common barriers to digital transformation in banking

There are several common challenges that keep banks from pivoting quickly to meet customer expectations.

Roadblock #1: Technical debt
As a highly regulated industry, traditional banking relies on complex and siloed legacy technologies that are often expensive to maintain. Over time, technical investments compound, making it increasingly difficult to find the time or resources to shift to more modern, scalable platforms. When banks grow through mergers and acquisitions, attempting to integrate additional legacy systems adds to that technical debt.

At the same time, banks face increasing competition from the fintech sector — online-first financial institutions that aren’t encumbered by aging platforms. Traditional banks saddled with technical debt may feel that they lack the time or resources to fully integrate, modernize, or replace their legacy technologies. But the longer this debt persists, the harder it is to compete with digital natives, leaving banks less agile in the marketplace.

How platform modernization helped make an annuity organization more competitive >>

Roadblock #2: Organization size
Like many enterprise-level organizations, larger banks often create internal digital teams that combine business, IT, and marketing capabilities and develop expertise in their own technologies, systems, and processes. Faced with competing internal priorities and hampered by regulatory constraints, these internal teams may struggle to get alignment and prioritization for a banking digital transformation strategy, and they may lack the breadth of expertise necessary to implement a comprehensive modernization effort. Smaller banks, on the other hand, may be more nimble and successful at shifting internal priorities, but they may not have the resources to staff dedicated teams.

While organization size is often called out as a hindrance to effective digital transformation in banking, the underlying problem may not actually be a headcount issue. Regardless of size or industry, most companies miss their digital transformation goals due to a lack of clarity and strategy. “Digital transformation” in finance or any sector can be hard to define, implement, and measure. A more strategic approach starts with identifying concrete problems or issues, understanding customer needs, and developing solutions that bridge the gap with action steps that are clear, dynamic, and measurable.

How technology strategy comes to life >>

Roadblock #3: Relying on assumptions about customer needs and wants
Understanding customers’ needs, pain points, and experiences can be difficult, and as users adapt to technology, their preferences continue to change.
This makes audience research even more critical when defining your bank’s digital transformation strategy. After a surge in remote work due to Covid, comfort levels with technology are at an all-time high. Research from McKinsey found that 75% of people who used digital channels for the first time during the pandemic indicated that they will continue to use them when things return to “normal.”

Not only are customers more comfortable with banking technology, but it has also become an important factor in choosing which bank to use. According to Mobiquity’s 2021 digital banking report, 40% of respondents agreed that they are likely to switch accounts to get better digital tools.

Investing in both qualitative and quantitative data can dispel assumptions about your audience while also revealing specific ways to improve the customer experience. As those opportunities are identified, banks can prioritize the technology and services that will have the biggest impact.

What goes into a successful customer experience strategy >>

How to approach a digital transformation strategy in banking

Given these challenges and the continuous evolution of customer expectations, several technologies offer significant potential gains and can help financial institutions stay competitive.

Mobile app enhancements

Mobile banking apps typically offer the ability to check balances, transfer funds, pay bills, and chat online with a bank representative. By building applications that go beyond these basic services, banks can grow their customer base while improving customer retention and lifetime value. Leaders in the banking space now include peer-to-peer payments, lending inquiries, and chatbots as part of their applications.

However, in addition to monitoring what competitors are doing, it’s important to implement a robust discovery process to see what the target audience wants from a banking app. This could include developing target personas and performing pain point analysis to find unique solutions and services that better address customers’ needs. From there, financial institutions are better poised to tackle the next layer of technology for the app space — personalization. Many banks are investing in personal financial management tools and customized product offerings in their apps, making banking more accessible and valuable than ever. These user-friendly applications and their customization capabilities are an integral part of digital transformation in banking.

Refine your mobile applications and provide a better customer experience >>

Machine Learning

Historically, machine learning engagements have required substantial investments in data science and model training. But major ML platforms have evolved, lowering the barrier to entry for these projects. Now, midsize and even smaller banks can use machine learning models to better understand their customers and drive a more personalized experience.

And machine learning isn’t just valuable for deepening current relationships; it can also help banks target and acquire new business by identifying trends and opportunities. This means higher quality leads, improved retention, and an increase in business with more potential for high lifetime value.

[On-Demand] Reimagining customer insights, risks, & relationships through machine learning >>

Data management strategy

Traditional lending institutions underwrite loans by using a system of credit reporting.
Banks that process loan applications evaluate risk by looking at credit scores, homeownership status, and debt-to-income ratios. Today, three major credit bureaus provide this information. But these reports can contain erroneous information, and the data comes at a high cost since it can only be found in three places. And while banks often collect their own internal data, if that data is incomplete or disorganized, it cannot offer useful insight.

With structured data management strategies, financial institutions can mitigate losses by generating more data and using it to recognize trends and potential liabilities.

See how one bank improved ROI by 1054% through strategic data management >>

Robotic Process Automation (RPA)

Some banking processes are still highly manual. Consider routine tasks like opening an account or reporting a stolen credit card — it takes time to get through the questions, and it usually requires a phone call from the customer. With robotic process automation (RPA), in the case of a stolen credit card, the workflow process can automatically cancel the old card, issue a new card, and confirm the mailing address for the new card. RPA can also identify bots or theft with greater accuracy than a human analyst.

RPA even has the potential to assist with workload transformation. In addition to streamlining and automating internal processes, RPA can be used to manage the cloud technologies that institutions rely on for their everyday tasks. This leads to more refined workload placement — and therefore a more productive workforce.

The bottom line on digital transformation in banking

From highly personalized service offerings to easy-to-use applications, consumer expectations are high in the banking sphere. To keep up with these expectations, banks must position themselves to adapt quickly.

Traditional banks are often at a disadvantage to digital-only competitors, newcomers that operate without the burden of legacy systems and outdated business models. But a digital-first attitude can help financial companies effectively implement the technologies that enable digital transformation in banking.

Find out how one financial services firm successfully handled digital transformation >>

Ready to boost your productivity and customer engagement? Let us know your questions and find out how a strategic approach to digital transformation can help your bank thrive in a digital-first world.
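To sketch what the stolen-card automation described above might look like under the hood, here is a minimal workflow runner. The step functions are hypothetical stand-ins for calls into core banking and card-issuing systems, not any specific RPA product’s API.

```python
# Each step below is a placeholder for an integration an RPA bot would
# perform against real systems; the data and names are invented.

def cancel_card(account):
    account["card_status"] = "canceled"

def issue_replacement(account):
    account["pending_card"] = "replacement-requested"

def confirm_mailing_address(account):
    # A real workflow might trigger an SMS or in-app confirmation
    # rather than assuming the address on file is current.
    return account["address_on_file"]

def stolen_card_workflow(account):
    """Run each step in order, keeping an audit trail; a failure stops
    the flow so a human agent can take over."""
    audit = []
    for step in (cancel_card, issue_replacement, confirm_mailing_address):
        try:
            result = step(account)
            audit.append((step.__name__, "ok", result))
        except Exception as exc:
            audit.append((step.__name__, "failed", str(exc)))
            break  # stop and escalate rather than continue blindly
    return audit

account = {"card_status": "active", "address_on_file": "100 Main St"}
for entry in stolen_card_workflow(account):
    print(entry)
```

The design point is the audit trail: automation earns trust in regulated settings by recording every step it takes and escalating cleanly when something fails.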
Unleash the force of use cases

If you keep up with the Star Wars canon, you might have some thoughts on droids. In a galaxy far, far away, these robot characters can be useful – flying your spacecraft, babysitting your large-eared infant, loading heavy objects without talking back – but they can also easily be misapplied. One minute you have a friendly new toy, and the next thing you know it’s gone over to the dark side and cut the power to your retractable space dome.

Perhaps this reminds you of data solutions you’ve tried. It’s easy to get on board with a new technology idea. Data mesh, data fabric, and data democratization are heady concepts and terrific solutions – in the right context. So how do you harness the power of your data without inadvertently triggering a galactic crisis? You could get Jedi-master-level at wielding a lightsaber. But it might be faster (and, yes, safer and more realistic) to start with use cases.

- Maybe you need to connect information across multiple platforms to create a 360° view of your customer
- Maybe your business users need real-time data integration to gain the operational intelligence that drives better predictions
- Maybe you’re looking for a data-as-a-service solution to provision your suite of applications

In these situations, data virtualization might be part of the droid solution you’ve been looking for. As a technology solution, data virtualization doesn’t stand alone. If you don’t have master data management, governance, and quality processes fully locked in, implementing data virtualization could put your whole operation at risk.

If you need help identifying or prioritizing use cases, or you aren’t sure what steps to take next with your data, let us know. We don’t do robot companions (yet), but our customized data jumpstarts can give you a hyperdrive boost on your path to data maturity.

Get smart: If Obi Wan Kenobi (currently streaming on Disney+) hasn’t given you enough time-warp whiplash, you could try reading The Kingdoms by Natasha Pulley. If the book doesn’t underline the importance of empowering your decision-makers with integrated data views, we don’t know what will.
As the pace of technological change continues to increase, digital transformation in healthcare often struggles to keep up. Challenges like integrating aging legacy systems, maintaining patient privacy, and turning disparate data sources into actionable insights loom large in healthcare, where time and resources are often at a premium.

But the same circumstances that make digital transformation in healthcare more difficult are the very things that underline its importance. When patient lives are on the line, digital transformation isn’t just a “nice to have.” Healthcare systems that achieve their digital transformation goals see immediate improvements in patient experience, quality of care, and patient outcomes. From that standpoint, digital transformation in healthcare isn’t just about adding technology; it’s about revolutionizing the processes and systems that drive the health and well-being of the population as a whole.

Case study: Life-saving technology in diabetes long-term care >>

Putting patients first

While individual healthcare providers commonly put their patients’ needs front and center, the system as a whole did not evolve with that mentality. Due to a variety of factors, including payer systems, consolidation, and the regulatory environment, healthcare systems developed a reputation for siloed information, duplicate workflows, lack of clarity, and confusion. As healthcare organizations seek to modernize, smart health systems are taking a consumer-centric approach — redesigning patient experiences and pathways while improving care delivery and outcomes using digital technology.

Article: Transforming customer engagement in the digital age >>

Planning the future of digital transformation in healthcare

During the pandemic, industries accelerated digital transformation efforts across the board, and healthcare was no exception. Out of necessity, more medical touchpoints and interactions moved online, from virtual office visits to automated triage to digital paperwork. Now, two years into the new normal, healthcare organizations are taking stock of their progress, appreciating the speed and scale of their efforts, and mapping opportunities for the future.

A recent Deloitte study found that 60% of health systems say they are about halfway through their digital transformation journey. In our experience working with technology innovators and leaders across industries, this is exactly where things can get messy. Digital transformation is a long game, and organizations often get bogged down at the halfway mark. To keep moving forward and avoid costly wrong turns, healthcare leaders need a fresh vision and a renewed roadmap.

Evolving digital transformation in healthcare to meet the changing expectations of patients and providers requires a commitment to a digital-first, people-centric approach, but it offers great opportunities for continued growth in connection, innovation, and successful outcomes. Based on our experience, we see five key areas where focused efforts can deliver outsized returns for healthcare systems that are midway through their digital transformations:

1. Modernize legacy systems to give providers and patients more options

While the vast majority of individual healthcare providers and healthcare organizations use an electronic health records (EHR) system, relatively few integrate seamlessly with patient portals.
A recent PEW Health Information Technology (HIT) survey found that almost 80% of respondents wanted to access and view their electronic health records through a website, an online portal, a mobile app, or electronically in some other way. Moreover, the same survey highlights a strong desire for doctors to share information about the patient’s health status.

For most healthcare organizations, integrating patient records across practices and within portals is a headache at best. Adding in the other digital interactions that today’s consumers expect — such as automated appointment and prescription workflows, chatbots, pre-filled forms, and instant answers — might seem impossible.

Delivering a better patient experience and giving providers greater flexibility with their tools often takes a more strategic view. Rather than layering in more and more technology solutions, smart healthcare organizations take a holistic approach to modernization, creating flexible, modular solutions that give patients and providers more options in the near term while also making future enhancements easier.

Case Study: How an AI healthcare company optimized its digital experience >>
Article: Modernization challenges and the path forward >>

2. Mitigate risk to build patient trust

In addition to technology lag, healthcare systems also struggle to connect patient health information due to regulatory constraints. To maintain HIPAA compliance in the US and GDPR compliance for EU patients, healthcare organizations sometimes limit the very information sharing that would result in higher quality care. To meet patient expectations of data privacy and personal health data security while also delivering on modern expectations for functionality and connectivity, health organizations need to build best practices for security and governance into their technology architecture throughout. While there are myriad ways to approach this issue, a few key options deserve consideration:

BYOD policies

A 2019 study found that 63% of healthcare organizations sustained a security incident related to unmanaged and IoT devices. Given the rapid acceleration of digital transformation in healthcare since 2020, we suspect that number is much higher today. As healthcare organizations modernize systems and integrate more virtual and IoT solutions into their technology spaces, having a robust, updated BYOD policy becomes more important. Developing a compliant, enforceable strategy is a critical step in your modernization efforts.

Case study: Navigating BYOD in a highly regulated industry >>

Containerization

One way to mitigate risk is to containerize data, workflows, and applications in the cloud. Although the cloud can sometimes get a bad rap for security, a carefully designed strategy puts security first and can prevent any breach from spilling over too far into other parts of your architecture.

Article: Maintaining a composable enterprise >>

Blockchain

Best known in the context of cryptocurrency, blockchain uses a computerized database of transactions to allow secure information exchange without the need for a third party. Applying blockchain technology to the healthcare industry could improve information security management; healthcare data can be communicated and analyzed while preserving privacy and security. Countries like Australia and the UK have started experimenting with blockchain technology to manage medical records and transactions among patients, healthcare providers, and insurance companies. In both examples, decentralized networks of computers handle the blockchain and simultaneously register every transaction to detect conflicting information, keeping records accurate and making them more difficult to hack.
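The tamper-evidence property that makes blockchain attractive for health records can be illustrated in a few lines. This is a toy hash chain using only Python’s standard library, not a production medical-records ledger; record fields and IDs are invented.

```python
import hashlib, json

def block_hash(record, prev_hash):
    # Each block's hash covers both its contents and the previous block's
    # hash, so altering any record breaks every later link in the chain.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"record": record, "prev": prev,
                  "hash": block_hash(record, prev)})

def verify(chain):
    prev = "genesis"
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True

chain = []
append(chain, {"patient": "A-102", "event": "rx issued", "by": "provider-7"})
append(chain, {"patient": "A-102", "event": "claim filed", "by": "insurer-3"})
print(verify(chain))                     # True
chain[0]["record"]["by"] = "provider-9"  # tamper with an earlier entry
print(verify(chain))                     # False: the chain detects the edit
```

In the decentralized systems described above, many independent parties hold copies of the chain and run exactly this kind of verification, which is what makes conflicting or altered records easy to detect.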
Article: Building trust in your data privacy compliance >>

3. Use voice and wearables to enhance patient experience and outcomes

Wearable devices and IoT-based health sensors can track a patient’s conditions and activities remotely, from vital signs and hydration to the onset of a medical crisis event. The data collected can help healthcare providers better guide patient care. Healthcare providers use IoT and wearable data for remote monitoring and preventative care, providing more specific, personalized connections even with lower staff coverage.

Machine learning also drives AI-based natural language processing technology in the healthcare space. As more patients become familiar with voice assistants like Alexa, Siri, and Google Home, healthcare organizations see potential to deploy the technology for tasks like triage and treatment reminders. For example, the UK’s NHS uses voice technology to field common questions, deliver health information, and remind patients to take medication.

Case study: Using wearables to improve patient care >>

4. Put data to work for predictive and preventative care

Healthcare organizations collect volumes of data but traditionally haven’t used advanced analytics to translate that information into actionable insights. Today’s leading provider systems are exploring how real-time business analytics, predictive analytics, and AI can transform the patient experience and how care is delivered.

In much the same way that businesses use data analysis to spot trends, forecast consumer behavior, and drive purchasing decisions, healthcare organizations can use the information they collect to understand patient expectations, discover areas of dissatisfaction or waste, and identify opportunities to enhance patients’ overall experience with their facilities. Likewise, providers can use patient data to understand how a unique individual responds to treatment, spot key diagnostic markers, and even predict potential outcomes so that doctors and patients can work together to minimize risk.

Article: Data analytics in healthcare settings >>

5. Automate administrative tasks to focus on patient care

The growing number of administrative tasks imposed on physicians, their practices, and, by extension, their patients adds unnecessary costs to the healthcare system. Excessive administrative tasks also divert time and focus away from providing actual care to patients. Tools like Robotic Process Automation (RPA) can help healthcare systems save time and resources in areas such as administration, billing, and human resources — freeing up more time for face-to-face interaction with patients.

When it comes to finding the right applications for automation in healthcare, it’s important to keep patient experience at the center of your strategy. Developing a customer-first automation strategy can help create the right blend of automated and human interactions — one that meets today’s expectations and delights patients rather than frustrating them.
Article: Finding the right use cases for automation >>

Evolving patient care through digital transformation in healthcare

As the digital tools, apps, and resources pioneered during the pandemic continue to evolve, healthcare leaders must continue to push ahead with digital-first, patient-centric investments in technology, integrations, and solutions. Finding the right balance between patient and provider expectations, maintaining compliance, and enhancing patient care requires a mindset that values the patient’s perspective.

Ready to take the next step?

Get a machine learning jumpstart >>
Get a better view of your data analytics maturity >>
Refresh your digital transformation roadmap >>

Wherever you are on your digital transformation journey, our team of digital, data, and technology experts can help.

Ask us your questions about digital transformation in healthcare >>
This article originally appeared in CDO magazine.

Data and analytics have long held promise in helping organizations deliver greater value across the entire stakeholder landscape, including customers, associates, and partners. However, since the beginning of the data warehousing and BI movement, achieving business value rapidly — in alignment with windows of opportunity — has proven elusive.

For an organization to be competitive in the era of digital transformation, data must be front and center — and accessible in near real time. But many organizations are struggling with data that is deeply buried, complex to access, difficult to integrate, and inaccessible to business users. Problems like these diminish the value of your data and its ability to inform decision-making at all levels.

For most organizations, it’s hard to produce value from data quickly

The main challenge has been the distributed and siloed nature of the data subjects that need to be integrated to achieve business-relevant insights. Data subjects — customers, products, orders, warehouses, etc. — typically reside in different systems and databases, requiring extraction, transformation, and loading into a common database where analytics can be applied. Often, data delivery solutions like data warehouses, self-service BI, and data lakes are used to try to unlock these data silos; however, each of these solutions presents drawbacks in terms of effort, complexity, cost, and time to market. That is where data virtualization comes in, delivering a holistic view of information to business users across all source systems.

So what exactly is data virtualization?

In its simplest form, data virtualization allows an organization to attach to its data subjects where they reside, in real time. It presents disparate data subjects through a semantic layer that enables them to be integrated on the fly to support query and analytic use cases. By eliminating the need to design and build complex routines that move data from multiple source locations into a single integrated data warehouse, products like Denodo enable organizations to compress weeks to months of data preparation time out of the idea-to-execution value stream. As a result, value delivery is significantly accelerated.

Learn more about how Fusion & Denodo can help you streamline data access to support your most critical business needs >>

Optimization with data fabric

While data virtualization integrates data from different sources into one layer to provide real-time access, data fabric is an end-to-end architectural approach that allows organizations to manage massive amounts of data in different places and automates the integration process. The thing about data fabric is that it has a huge job to do, and it must have a robust integration backbone to do it. A data fabric must support many data sources, be compatible with several data pipeline workflows, support automated data orchestration, empower various kinds of data consumers, and more. To do this successfully, a data fabric requires powerful technologies and a solid data integration layer to access all data assets.

Many in the data community believe that you must choose data virtualization OR data fabric, but that is not the case — and that solid data integration layer is an example of why. The reality is that data fabric can be operationalized through data virtualization, optimizing your modern data architecture and allowing you to move at the speed of your business.
By building a model that utilizes both concepts, businesses make finding, interpreting, and using their data nearly seamless.

Technology by itself isn’t the answer

Even with the proven results of this class of technologies, many organizations continue to struggle with traditional data management and analytic architectures and solutions. This inability to adopt new approaches for data management and analytics only serves to deprive decision makers of the rapid access to insights necessary to support agility in a pandemic-induced, rapidly transforming digital and global economy.

The solution is not just found in technology. Instead, it is found in the minds of the humans responsible for delivering data management and analytic capabilities. It is a human change management problem we face. Remember the adage: people, process, data, technology? The next frontier to be conquered is optimizing the thinking and innovation risk tolerance of the stewards of data management and analytics solutions within organizations.

What do you think? Is your organization facing any of these issues or trying to tackle how to deliver significant value — better, faster, cheaper, smarter? I’m happy to chat about where you are and how to get where you would like to be. If you want to talk, send me a note.
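As a deliberately tiny illustration of the virtualization idea above (integrating sources on the fly rather than copying them into a warehouse first), here is a toy “customer 360” view. The source names and fields are invented for this sketch; a commercial platform like Denodo does this at enterprise scale with real connectors, caching, and query optimization.

```python
# Two "live" sources, stood in for here by in-memory lists. The semantic
# layer below resolves a unified view at query time; nothing is copied
# into a separate warehouse.

CRM = [  # pretend this is a live CRM system
    {"customer_id": 1, "name": "Acme Co", "segment": "enterprise"},
]
ORDERS = [  # pretend this is a live order database
    {"customer_id": 1, "order_id": "A-17", "total": 1200.0},
    {"customer_id": 1, "order_id": "A-23", "total": 450.0},
]

def customer_360(customer_id):
    """Join the sources on the fly and derive a business-friendly view."""
    profile = next(c for c in CRM if c["customer_id"] == customer_id)
    orders = [o for o in ORDERS if o["customer_id"] == customer_id]
    return {**profile,
            "orders": orders,
            "lifetime_value": sum(o["total"] for o in orders)}

print(customer_360(1))
```

The point of the sketch is the shape of the architecture: the view is defined once in a semantic layer and computed against the systems of record at query time, which is what removes the weeks of ETL the article describes.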
The pace of change and unpredictable circumstances of the past couple of years have led many companies to rethink their just-in-time approaches to resourcing tangible goods and materials. But why stop there? To scale and adapt fast, companies also need a new approach to how they resource skillsets.

One of our clients, PRECISIONxtract, did just that. By taking a just-in-time approach to their shifting skillset needs, the company was able to scale up fast — and minimize risk — in a changing business environment.

A right-fit-first approach

PRECISIONxtract’s transformative healthcare market access solutions offer patients and providers unprecedented connection to the right medication and resources in clinical settings. To bring that vision to life, PRECISION could have found a series of single-skill vendors or taken the time to recruit and onboard new employees. Instead, they looked for a cross-functional partner that would be a seamless fit with their company culture and that had the right mix of scalable skills.

They found that fit with Fusion Alliance. Fusion quickly became an integral part of PRECISION’s team, assembling a group of more than 20 strategy, data, and technology experts to deliver responsive support for a growing set of initiatives.

Boosting surge capacity across disciplines

Knowing that their flagship product, Access Genius, needed design and functionality upgrades, PRECISION called on Fusion to assess and modernize the application without disrupting the existing business. To avoid downtime and increase speed to market, our team used an Agile process and model-driven design, in which models from the source code informed modernization efforts. Streamlining the overall architecture not only saved development time but also made Access Genius easier to deploy to PRECISION’s clients. And, to make the product easier to maintain and cheaper to run, we applied containerization through a microservices model and moved Access Genius to a distributed cloud hosting framework. Our solution provided real-time customer insights delivered across a variety of digital channels, in lieu of a people-driven process.

This helped take Access Genius:

- From a complex, cumbersome legacy monolith to a lightning-fast, distributed, cost-effective, cloud-native solution
- From a user-driven, database-centric format to a distributed, API-based framework, enabling immediate data updates for important cost and coverage changes
- From a time-intensive customer engagement portal to an intuitive, streamlined, automated process

Equipped with a modern, stable, extensible platform, PRECISION was free to explore opportunities for more radical innovation.

Disrupting the market with frictionless access to timely data

Although Access Genius successfully broke down barriers with data, the solution’s interface required users to navigate a complex dashboard with manual clicks and drop-downs. For pharma teams with limited time to connect doctors to information, seconds count. Working with PRECISION’s product team, Fusion technology experts analyzed the friction point of manual navigation and explored ways to make Access Genius more seamless for the user. Drawing on deep expertise deploying cutting-edge technologies into highly regulated spaces, Fusion suggested exploring a shift away from a traditional web-based interface to AI-enabled voice functionality that would connect users to the most relevant data and messaging right in the flow of conversation.
Changing the way pharma enablement tools go to market

At the same time, other Fusion consultants were hard at work rethinking the way PRECISION’s products reached, empowered, and retained customers. We brought in a range of specialists to bring new strategies to life:

- Instructional designers and training developers created an interactive training platform to equip pharma sales reps with greater confidence in provider interactions by deepening their understanding of the Access Genius tool.
RESULT: Access Genius IQ, a new training tool that helps PRECISION customers see faster ROI for their Access Genius investment

- Brand experts, visual designers, content strategists, and web developers elevated visual brand elements and created websites, editorial content, and outreach campaigns.
RESULT: New website architecture, design, and content; long-form lead generation content; prospect cultivation email marketing

- Digital marketing strategists, creative designers, and ad teams implemented innovative ad campaigns in rapid succession as PRECISION had more time to develop and roll out new products.
RESULT: LinkedIn ad campaigns generating 3X leads, including 100 qualified leads in the first 90 days

Read more about the success of Fusion’s marketing partnership with PRECISION >>

Reimagining the skillset supply chain

Partnering with Fusion gives PRECISION access to a huge team of experienced consultants with a wide range of skillsets — allowing the company to surge and scale as their business needs and market realities shift. With Fusion bringing in the right people at just the right time, PRECISION saves valuable time and resources, enabling them to be more innovative, more agile, and more impactful for their customers, healthcare providers, and patients.

Ready to explore how Fusion skillsets can help your team succeed?

Our ongoing work with PRECISIONxtract is just one example of how we help companies build momentum for a digital-first world. We bring big-picture thinkers, technology-minded creatives, data scientists, and technical experts to work alongside our clients, providing a force-multiplying effect that leads to scalable, future-focused solutions for the most complex challenges. Ready to get started? Let’s talk.
On the journey from data to analysis to insight, companies are shifting from a traditional approach and leaping forward into new ways of delivering actionable business intelligence. While the core goals remain the same — enabling data-driven decisions, optimizing cost efficiencies, and driving revenue growth — new tactics demand new skills. The pivot to a use-case model and good governance throughout the data lifecycle meets those challenges while also delivering faster time to insight.

Connecting the business to fit-for-purpose data

The new data mindset is purpose-driven. Based on specific use cases generated by the business, today’s data teams build, deploy, and configure purpose-built data assets that meet the organization’s needs fast. This process represents a significant shift from the status quo for traditional data teams, but the streamlined workflow pays off. To generate fit-for-purpose data, start here:

1. Solicit use cases from the business
2. Understand and analyze the characteristics and dynamics of the use case
3. Assess your existing data portfolio and identify information that might meet the need
4. Consider the appropriate technology to synthesize data sets and deliver actionable insights

Establishing a data asset creation workflow pays off in efficiency and value for IT and the business units involved.

Learn more about how to develop data as an asset >>

Speeding time to insight

Traditional data warehousing models impose a high cost for integrating disparate data sets. A legacy workflow might include:

- Amending the data architecture
- Creating a semantic model
- Running time-consuming extract, transform, and load (ETL) processes for all data sets involved
- Preparing the data
- Making the data available for analysis

Today’s businesses don’t have that long to wait for insights. Modern data technologies like Hadoop make it possible to stage data in a platform for immediate access. To structure your data and technology architecture toward a use-case-driven model that fosters speed to insight, key considerations include:

- A prioritized list of problems that need solutions
- Any characteristics or constraints that might impact time to value
- Available data assets and technologies, like data virtualization, that would enable you to access and analyze data in place

Once you adopt methods for analyzing data in place, your team can deliver value on a much shorter timeline.

Learn more about data architecture and integration >>

Improving data literacy

The demand for data-driven insights continues to accelerate. Companies at the forefront of the shift from volume to velocity use analytics pervasively throughout their organization and have the technology and agility to act on insights quickly. To become a competitive, speed-driven organization, your business must excel throughout the analytics lifecycle:

- Acquire: Harvest data quickly by exploring evolving big data technologies and optimizing first-party data strategies
- Analyze: Identify the most impactful insights
- Act: Implement the insights iteratively and strategically

That final step involves your organization’s data literacy. Providing insights is one thing, but training your people to take the next right action on the data they see might require new skills. Upskilling the workforce to better understand and use data pays off richly in transformative accuracy, speed, and confidence.
Learn more about building data literacy within your organization >>

Implementing harvest-to-delivery data governance

As the volume of available data continues to increase, businesses are building complementary abilities to understand and use it. But implementing tools and technology to harvest, integrate, and analyze data without robust governance frameworks opens companies up to significant risk. Building strong governance into your data asset creation and management workflows from the start can help.

Learn more about how to implement good data governance >>

Elevating data leaders

As the world becomes more digital and more customer behaviors move to a mobile context, businesses are changing to meet and match digital footprints with geospatial dimensions. Leadership must keep pace. The need for a Chief Data Officer (CDO) at the table isn’t really a question anymore. Today, leading companies are asking where analytics and digital belong in the leadership playbook. To get the most value out of your data management, having the right team members — with the right support and authority in place — could not be more important.

Learn more about the importance of empowering your CDO >>

Data management can be complex. A strategic viewpoint can help. Find out more about Fusion’s approach to strategic data management, or ask us your questions. Wherever you are on your data journey, we can help you keep moving forward.
Every few weeks, we share insights with our Fuse subscribers along with news and trends we’re following across the web, including book recommendations. Here’s a compilation of some of our key insights from the last six weeks. If you want content like this delivered directly to your inbox, we’ve got you covered. Subscribe to the Fuse here.

Data is the Holy Grail

In the classic film Monty Python and the Holy Grail, viewers hear King Arthur and his trusty servant Patsy approaching with a trademark “clip-clop, clip-clop” sound. When the duo emerges from the primordial mist, you see (spoiler alert) that the source of all this noise is not, as might be supposed, a horse. Rather, Patsy is banging two coconut shells together as the king trots about on his own two legs. The duo is getting from point A to point B in their quest, but not in the most efficient or effective way possible.

Many companies follow that script. Equipped with buzzword mandates like process optimization and data-driven decision making, it’s all too easy to make small adjustments that sound like you’re headed in the right direction but aren’t necessarily getting you there any faster. How do you drop the coconuts and get on the horse (metaphorically speaking)? What does it look like to use data to drive optimization in real terms?

We’ve got our eye on digital twins. Before you run away (how’s that for a deep-cut Monty Python reference?) from yet another data buzzword, it’s worth another look at this practical application of machine learning and data analytics. Digital twins are most often used to optimize physical assets and processes like manufacturing, warehousing, and logistics. Using sensors to collect data on a product, machine, or physical process, the digital twin feeds real-time data to a machine learning algorithm to test variables and scenarios faster — ultimately leading to actionable process improvement insights. These days, we’re starting to see more businesses use digital twin frameworks to optimize and innovate non-physical business processes like accounting, HR, and marketing as well. A digital twin simulation can help you surface interdependencies and inefficiencies that might otherwise be blind spots, especially if they’re baked into your business culture as “the way we’ve always done it.”

In the quest for digital transformation, don’t settle for coconuts. Instead, let’s talk about the ways your data can carry more of the weight for you.

Get smart: If all this talk of Monty Python and the Holy Grail puts you in the mood for an old-school movie night, good news: it’s available on Netflix. And if you’re looking for a more literary scratch for your Middle Ages (ish) itch, we’re reading Cathedral by Ben Hopkins. It’s a fascinating look at the complex processes involved in constructing architectural marvels in the days before edge computing. We may handle optimization differently now, but human nature stays the same.

Read the full Fuse: Data for April here.
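To make the digital-twin idea above a little more concrete, here is a minimal sketch: a simulated model of a fulfillment line is fed observed “sensor” data, then used to test a staffing scenario virtually before anyone touches the real process. All numbers and names are invented for illustration.

```python
import random

random.seed(7)
# Arrival data "observed" from the real process, e.g. orders per hour.
observed_arrivals = [random.randint(8, 14) for _ in range(30)]

def simulate(arrivals, capacity_per_hour):
    """Replay the observed demand against a hypothetical capacity and
    report the average backlog, measured in hours of queued work."""
    backlog, waits = 0, []
    for a in arrivals:
        backlog = max(0, backlog + a - capacity_per_hour)
        waits.append(backlog / capacity_per_hour)
    return sum(waits) / len(waits)

baseline = simulate(observed_arrivals, capacity_per_hour=10)
scenario = simulate(observed_arrivals, capacity_per_hour=12)
print(f"avg queue: {baseline:.2f}h now vs {scenario:.2f}h with added capacity")
```

A production digital twin replaces this toy queue with a calibrated model and streams live sensor data into it, but the payoff is the same: you test the variable in the twin, not on the factory floor.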
A horse is a horse, and other martech myths

Martech is a crowded field, and a lot of the voices weighing in on your options have a horse in the race.* No one is out to skew the odds on purpose, but your organization is unique. Just because one solution is a front-runner doesn’t necessarily mean it’s a great fit for your business. So how do you sort the facts from the hype and decide where to place your bets? We rounded up a few martech myths as a starting point.

Myth: One CDP is as good as another.
Fact Check: Finding the right CDP (or CRM, or DMP, or any other solution you can think of) isn’t a simple box to check. And, once you make your decision, integrating and customizing your platform will also take time and attention.

Myth: Everyone needs a CDP.
Fact Check: Depending on your use cases, you might be able to do everything you need to do within your current tech stack.

Myth: You should pick a platform and go all in.
Fact Check: It probably goes without saying, but when it comes to technology there are no one-size-fits-all solutions. One product might be a great fit for your needs, but that doesn’t mean that vendor should supply your entire tech stack.

Myth: Data and tech silos are just the way business works.
Fact Check: Regardless of size, scope, or industry, today’s businesses can’t afford to be siloed. When you’re evaluating a tech solution or rethinking your entire customer data strategy, prioritizing integration is always a safe bet.

On your mark. Ready to put your martech through its paces? Read on to find resources to help you optimize your stack and get your customer data strategy across the finish line.

*Full disclosure: one of our team members won $100 when Rich Strike won the Kentucky Derby, but for the most part we are platform- and livestock-agnostic.

Get Smart: We try not to be too on the nose with our book recommendations but couldn’t help ourselves this time. In Data Strategy, Bernard Marr collects a solid primer on the data landscape and how your organization can use it (legally and ethically) to advance your goals. Spoiler alert, though probably not surprising given the title: strategy turns out to be the foundational driver for effective data use. Whether you read the book, our Ultimate Guide to Customer Data Strategy, or just want to get a sense of potential next steps, we’d love to chat about customer data. Grab a time that works for you.

Read the full Fuse: Marketing for May here.

Three technology strategies walk into a bar

If you’ve got a monolithic legacy system on your hands, sticking with the status quo isn’t a fun choice. But going nuclear and building back from scratch probably isn’t realistic. Wouldn’t it be great to find a middle ground?

Meet the composable enterprise. It’s an iterative path toward digital transformation, with applications repackaged into components that can be used to build new solutions across the business. Piece by modular piece, you rebuild your technology ecosystem — becoming more efficient, effective, and scalable as you go.

As the glue that holds those components together, APIs are key to building a composable business. And developing secure API solutions that accommodate shifting capacity demands and amplify your technology takes a hefty dose of strategy and expertise. That’s what we love about it! If APIs are your jam, too, or if you’re wondering whether a composable system makes sense for your business, let’s talk. We’re talking APIs over IPAs in Cincinnati on June 16 and you’re invited. It’ll be fun!

Get smart: You’ve probably spent the day wondering how speculative/sci-fi/literary fiction relates to API strategy and microservices (or maybe that’s just us). But, we’d guess, the same type of mind that enjoys transforming legacy monoliths into composable enterprises would also really track with a book like How High We Go in the Dark by Sequoia Nagamatsu. Modular pieces linked together by strong bonds leading to an intricate and ever-expanding whole? We’re here for it (the book and the technology strategy).
Read the full Fuse: Technology for May here.
Traditionally, IT experts created data assets for business users upon request. Of course, people still went rogue, capturing, analyzing, and visualizing data in Excel “spreadmarts” outside of IT, but their potential for damage was limited. Today, as next-generation business intelligence (BI) tools become increasingly powerful and self-service enabled, and as global privacy laws and regulatory requirements increase, businesses without strong data management and governance programs face much greater risk.

What is data democratization, and how can your business ensure that self-service data asset development doesn’t trigger chaotic — and costly — consequences? Data management best practices can help you:

- Keep up with the pace of information needs outside of IT without spawning ungoverned “shadow IT” practices
- Manage existing shadow IT practices, particularly if your organization adds substantially more powerful BI tools to the mix
- Develop a more open data culture while also valuing privacy, security, and good governance

The solution lies in finding the right balance between increasing demands for data governance and the rapidly escalating need for data access.

What causes shadow IT — and why it can be dangerous

Growing demand for data-driven insights accelerates the need for knowledge workers to get information when they need it and in a format they can use. As these requests for data insights balloon, IT departments quickly get backlogged. To solve the problem, businesses sometimes turn to self-service data tools, particularly in the BI space. These tools reduce repetitive demands on IT time while enabling users to personalize how they access and view data in their own channels. Tools like Tableau and Alteryx provide rich data visualization, which further speeds time to insight.

Learn more about business intelligence (BI) options >>

While data preparation used to require highly technical skills and toolsets to extract, transform, and load information and generate reporting, data democratization puts significantly more power in the hands of average business users. Business users can now do work that the savviest Excel-wielding shadow IT veteran never dreamed of: flattening XML, prepping custom geospatial polygons, blending and cleansing data, and building predictive models (regressions, neural networks, Naïve Bayes classifiers) without any traditional IT development knowledge.

But data democratization has a dark side. Businesses can get into trouble when data democratization isn’t closely paired with data governance and management. Without a carefully cultivated data culture that understands both, this supercharged version of shadow IT puts businesses at risk.

Data management best practices for risk mitigation

As data democratization becomes more of a reality in your organization, data management migrates from your IT and security teams to every business unit. Implementing data management across the business requires clear communication and leadership commitment.

Audit your information ecosystem

First, take stock of your current state in terms of data intake, preparation, access, and use. Take a fair and honest look at your data management practices and acknowledge where pockets of shadow IT exist. While Excel is ubiquitous, understanding who has licenses for some of the newer tools, like Alteryx, may be a good place to start. When pockets are identified, ask some fundamental questions, like:

- What information is the business craving?
- Which tools or solutions have they tried?
- How are these tools being used?
- Is this the best tool or multi-tool solution for the job?
- Is there any overlap or duplication of assets across the business?
- What assets have they developed that could benefit a larger group or even the enterprise?

Shift your data management mindset

Then, resist the temptation to scold. The historical data management mindset toward those who created these one-off information stores needs to be turned on its head to focus on problems and solutions rather than reprimands. In light of more useful one-off data stores, you may find it hard to rationalize all of your current IT-generated assets. The cost to maintain them, particularly if they’re not actually being used, makes them liabilities, not assets.

A far healthier approach is to take the time to define what the business needs, then collaborate on the process, information requirements, tooling, and, potentially, the infrastructure and architecture solutions that would best meet and scale to fit those requirements. Then your company not only creates a self-service machine that can keep pace with demand, but also goes a long way toward building a healthy data culture.

How to build a strong data culture >>

Get clear on good governance

The term governance gets thrown around a lot, but does your organization have a clear idea of what you mean by it when it comes to your data? It’s not enough for IT to have documented policies and controls. A mature governance program must be seated in the business. Once again, effective processes begin with business requirements. While IT may bear responsibility for implementing the actual controls to provide row-level security or perspectives, the business must provide the definitions, quality rules, lineage, and information needed to inform and support governed access. In this sense, IT becomes the steward responsible for ensuring those business-driven governance requirements are met.

As your organization progresses toward data democratization, keep the following best practices in mind:

- Establish processes and workflows to bring democratized data and data assets under governance efficiently
- Co-create governance rules and standards with business units, and be sure they are communicated clearly to all data users
- Maintain governance requirements, quality rules, and access architectures that make data and data assets suitable and consumable by others within the organization

How data governance fits into strategic data management >>

Build a bridge between democracy and governance

Bringing the creation and persistence of data assets into the controlled IT fold is critical for good governance. At the same time, allowing the business to quickly and freely blend, experiment, and discover the most effective fit-for-purpose data sets for their information needs takes the burden off of IT to figure out what the business needs. How do mature data organizations bridge the gap between democratized data and good governance? Workflows.

Workflows bring democratized asset development and IT-implemented controls together. A strong data workflow, including how requests are processed, prioritized, reviewed, and either approved or rejected, is the critical gatekeeper that prevents democratization from turning into chaos. Your workflow should address:

- Data submission: Establish the process by which data assets are submitted for enterprise or departmental consideration as governed assets and persisted according to IT’s self-service standards.
Identifying the roles, the process (inputs, outputs, gates), and the relevant governance structure is fundamental to getting a meaningful workflow in place.

- Data request backlog: Not every one-off dataset is an asset. The validity of the data produced must be verified by examining its lineage and any transformation logic (e.g., joins, calculations) used in its creation.

- Data scoring: The usefulness of the data must be scored or assessed in some objective way to determine whether it should be published, and to whom.

- Data access and security: The workflow process should also address access and security requirements.

By streamlining the information demand management process and making it more efficient, your IT team can shift focus to providing higher-value data and information for the business, while potentially driving down costs by retiring the production of lower-value reports or marts.

Learn more about how to manage data as an asset >>

Manage change well

Shadow IT is called that for a reason. Getting those datasets — and those who create them — to willingly step into the light is a culture shift that requires effective change management and clear communication. Creating an environment that encourages self-service, democratized data asset development by the business is important, but, left unchecked, it can result in the proliferation of potentially redundant or conflicting data sources, none of which are under IT’s purview. Responsible development and management of all data assets within the organization requires balance, oversight, and commitment to change.

Democratizing data holds huge potential for your business when it’s well managed and governed. Not sure where your company stands? Maybe a quick assessment could help. Our team of data experts can help you get clarity with a customized consultation, workshop, or audit designed to fit your needs.

Let us know what’s on your mind >>

Learn more about data strategy and how to get started >>
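Here is a minimal sketch of how the submission-to-publication workflow above might be modeled in code. The states, scoring rule, and publish threshold are illustrative assumptions, not a prescription for any particular governance tool.

```python
from dataclasses import dataclass, field

@dataclass
class DataAssetRequest:
    name: str
    owner: str
    lineage_documented: bool           # can we trace sources and transforms?
    usefulness_score: int = 0          # filled in at the scoring gate
    state: str = "submitted"
    history: list = field(default_factory=list)

    def advance(self, to_state, note=""):
        self.history.append((self.state, to_state, note))
        self.state = to_state

def process(request, score):
    """Walk a request through the gates: verify lineage, score usefulness,
    then approve or reject. Only approved assets would be published."""
    if not request.lineage_documented:
        request.advance("rejected", "lineage and transformation logic not verifiable")
        return request
    request.advance("lineage_verified")
    request.usefulness_score = score
    request.advance("scored")
    request.advance("approved" if score >= 7 else "rejected",
                    f"score={score} against publish threshold of 7")
    return request

req = process(DataAssetRequest("regional_sales_blend", "marketing",
                               lineage_documented=True), score=8)
print(req.state, req.history)
```

Even a toy like this makes the gatekeeping explicit: every asset carries its own audit history, and nothing reaches consumers without passing the lineage and scoring gates the article describes.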
While artificial intelligence (AI) continues to linger in the popular imagination in the form of humanoid robots, in real life AI more often exists as a process enabler. Over the past several years, as falling costs democratized the technology, AI and related emerging technologies like machine learning (ML) and deep learning (DL) became more accessible to mid-market companies. Today, most businesses use AI in one capacity or another — streamlining work, minimizing risk, and gaining competitive insights.

These innovations are more than buzzwords. They have powerful potential to revolutionize the way your business collects, processes, and acts on data to solve the real problems facing your business.

AI, ML, and DL in the business context

To find the right AI applications for your business, it helps to understand your options.

Artificial intelligence (AI)
- Definition: Machines programmed to be “smart”
- Common examples: Smartphones, chatbots, virtual assistants
- Example use case: Configuring a CMS to deliver personalized website experiences using available data points
- Limitations: The machine can only act on the specific rules provided

Machine learning (ML)
- Definition: Machines that learn from experience provided by data and algorithms
- Common examples: Spam filters, online purchasing recommendations
- Example use case: Discovering patterns in data, such as “customers who buy X also buy Y”; purchasing cart analysis
- Limitations: Humans must input data parameters as a starting point

Deep learning (DL)
- Definition: ML applied to larger data sets using multi-layered artificial neural networks
- Common examples: Alexa, Google Translate, facial recognition, self-driving cars
- Example use case: Processing a large volume of unstructured data, such as images or voice recordings, to generate insights
- Limitations: Requires very powerful – and expensive – computational resources

How machine learning differs from AI

“ML is the science of getting computers to act without being explicitly programmed.” (Stanford University)

Machine learning takes a different approach to developing artificial intelligence. Instead of hand-coding a specific set of rules to accomplish a particular task, ML trains the machine using large amounts of data and algorithms that give it the ability to learn how to perform the task. Over the years, algorithmic approaches within ML have evolved to include decision tree learning, inductive logic programming, linear and logistic regression, clustering, reinforcement learning, and Bayesian networks. Currently, machine learning uses three general models:

- Supervised learning: Humans supply factors until the machine can accurately apply the distinctions (for example, defining what counts as spam to a filter).
- Unsupervised learning: The system trains itself on provided data, which is used to surface unknown patterns, as in clustering and association. Clustering looks for patterns of demographics in data and how they predict one another, as in targeting groups of customers with products they will likely need. Association uncovers rules that describe data, as in online book or movie recommendations based on previous purchases and purchasing-cart predictions.
- Reinforcement learning: Using complex algorithms, the system learns through trial and error toward a defined “reward” of success. Cycling quickly through mistakes or near mistakes, the machine adjusts the weight of the previous results against the desired outcome.
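A small supervised-learning example, in the spam-filter spirit described above, can make the distinction concrete. This sketch assumes scikit-learn is installed; the messages and labels are invented. Humans supply the labels, and the model learns the distinction from examples rather than from hand-coded rules.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "limited offer click here",
    "meeting moved to 3pm", "please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]  # the human-supplied factors

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)   # text -> word-count features
model = MultinomialNB().fit(X, labels)   # learn from labeled examples

test = vectorizer.transform(["click here to win a prize"])
print(model.predict(test))  # ['spam'] -- learned, not hard-coded
```

A real filter would train on thousands of labeled messages, but the workflow is the same: features in, labels in, and a learned rule out.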
How deep learning works

As another method of statistical learning that extracts features or attributes from raw data sets, deep learning builds on ML frameworks. While ML requires humans to provide the desired features manually, DL uses even more complex algorithms and achieves more sophisticated results without that human input. Deep learning algorithms automatically extract features for classification. This ability requires a huge amount of data to train the algorithms and ensure accurate results. To process this volume of data, DL requires specially designed, usually cloud-based computers with high-performance CPUs or GPUs.

Using multi-layered artificial neural networks inspired by the biology of the human brain — specifically the organic interconnections between neurons — deep learning trains artificial neurons to identify patterns in information and produce the desired output. Unlike the human brain, artificial neural networks operate via discrete layers, connections, and directions of data propagation. Three common types of artificial neural networks and DL processing applications are:

- Convolutional neural networks (CNN) are deep artificial neural networks used to classify images, cluster them by similarity, and perform object recognition. These algorithms navigate self-driving cars and enable facial recognition, but they are also used in leading-edge medical applications such as identifying tumor types.
- Generative adversarial networks (GAN) are composed of two neural networks: a generative network and a discriminative network. While GANs can be used negatively, as in the creation of “deep fake” photos and video, organizations can also use GANs to create privacy-safe data pools for ML.
- Natural language processing (NLP) is the ability to analyze, understand, and generate human language, whether text or speech. Alexa, Siri, Cortana, and Google Assistant all use NLP engines, and many businesses are exploring ways to incorporate voice into their proprietary applications and digital solutions.

Make smart decisions about AI

New Era Technology provides cloud infrastructure and emerging technology solutions that accelerate your digital transformation. Our teams help businesses across a wide variety of industries uncover the best use cases for AI and the right emerging technology solutions to meet your goals. We can help you source, clean, and integrate your data, build and train machine learning models, and iteratively test and improve your solution to maximize results.

Not sure how this might work for your business? Check out these real-world examples:

Find out how machine learning helps a national pizza chain retain customers >>
Discover how AI transforms business processes >>
Explore the future of wearables and mobile ML technology >>
Learn how ML can help businesses predict sales pipelines >>
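For a concrete sense of the “discrete layers and connections” described above, here is a bare-bones forward pass through a two-layer network, assuming NumPy is installed. The weights are random, so the output is meaningless until trained; real deep learning frameworks add training, many more layers, and specialized hardware support.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common activation: passes positives through, zeroes out negatives.
    return np.maximum(0, x)

x = rng.normal(size=(1, 4))                      # one input with 4 features
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)    # hidden layer -> output layer

hidden = relu(x @ W1 + b1)   # each layer transforms the previous layer's output
logits = hidden @ W2 + b2
probs = np.exp(logits) / np.exp(logits).sum()    # softmax over 2 output classes
print(probs)
```

Training a network means adjusting W1, W2, b1, and b2 so the final probabilities match labeled examples; stacking many such layers is what puts the “deep” in deep learning.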
Developing a culture of, and commitment to, viewing data as an asset within your organization not only ensures good governance — and compliance with evolving privacy regulations — it also gives your business the insights needed to thrive in a rapidly changing digital world.

Understand the lifecycle of data as an asset

To encourage good data management processes, it’s important to understand the lifecycle of a data asset originating outside of IT. In these cases, data from multiple sources is blended and prepped for consumption, which typically includes steps to validate, cleanse, and optimize the data based on the consumption need — and because these processes happen outside of IT, be on the lookout for potential security or governance gaps. While individual circumstances vary, from a big-picture perspective the data asset development lifecycle generally follows these steps:

Intake: Data assets can only be created or derived from other datasets to which the end user already has access. While this traditionally focused on internal datasets, blending with external data, such as market, weather, or social data, is now more common.
Ask: How are new requests for information captured? Once captured, how are they reviewed and validated? How is the information grouped or consolidated? How is the information prioritized?

Design: Once the initial grouping takes place, seeing data as an asset requires thoughtful design that fits in with the structure of other data sets across the organization.
Ask: How will new datasets be rationalized against existing sets? How will common dimensions be conformed? How does the consumption architecture affect the homogeneity of the data sets being created?

Curation: Depending on the source, data might be more or less reliable, but even lower-confidence information can be extremely valuable in aggregate, as we’ve seen historically with third-party cookies. The more varied the sources contributing to a data asset, the greater the need for curation, cleansing, and scoring.
Ask: How will the data be cleansed and groomed based on the consumer’s requirements? Will different “quality” or certification levels of the data be needed?

Output: Organizations that view data as an asset prioritize sharing across business units and between tools. Consider implementing standards for data asset creation that take connectivity and interoperability into account.
Ask: How will data be delivered? Will it include a semantic layer that can be consumed by visualization tools? Will the data asset feed into a more modern data marketplace where customers (end users) can shop for the data they need?

Understanding: As a shared resource, data assets require standardized tagging to ensure maximum utility.
Ask: How will metadata (technical and business) be managed and made available to consumers of these sets? How is the business glossary populated and managed?

Access: To maintain legal and regulatory compliance and avoid costly mistakes, good governance requires access management.
Ask: Who will have access to various delivered assets? Will control require row- or column-level security, and if so, what’s the most efficient and secure way to implement those controls?
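One way to picture the lifecycle above is as the metadata a cataloged asset accumulates at each step. Here is a minimal sketch; the field names are illustrative, not any particular catalog product’s schema.

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    sources: list               # Intake: where the data came from
    conformed_dimensions: list  # Design: shared keys with other sets
    quality_level: str          # Curation: e.g. "raw" or "certified"
    glossary_tags: list         # Understanding: business metadata
    allowed_roles: set          # Access: who may consume it
    row_filter: str = ""        # Access: optional row-level rule

    def readable_by(self, role):
        return role in self.allowed_roles

asset = DataAsset(
    name="campaign_response_blend",
    sources=["crm.contacts", "vendor.weather_daily"],
    conformed_dimensions=["customer_id", "date"],
    quality_level="certified",
    glossary_tags=["marketing", "response-rate"],
    allowed_roles={"marketing_analyst", "data_steward"},
    row_filter="region = user.region",  # enforced by the serving layer
)
print(asset.readable_by("marketing_analyst"), asset.readable_by("intern"))
```

The value of treating each lifecycle decision as a recorded attribute is that downstream consumers, and auditors, can see at a glance how an asset was sourced, conformed, certified, and secured.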
With the advent of the data lake, the overall reference architecture for most companies now includes a “marshaling” or staging sector that allows companies to land vast amounts of data — structured, unstructured, semi-structured, what some have labeled collectively as “multi-structured” or “n-structured” — in a single region for retrieval at a later time. Data may later be consumed in its raw form, slightly curated to apply additional structure or transformation, or groomed into highly structured, validated, fit-for-purpose, more traditional structures.

Podium Data developed a useful metaphor for these three levels of data asset creation. “Bronze” refers to raw data ingested with no curation, cleansing, or transformations. “Silver” refers to data that has been groomed in some way to make it analytics-ready. “Gold” refers to data that has been highly curated, schematized, and transformed so that it’s suitable to be loaded into a more traditional data mart or enterprise data warehouse (EDW) on top of a traditional relational database management system.

To streamline the creation of assets at each of those levels, many organizations adopt self-service tools to ensure standard processes while democratizing asset creation. While the vendor landscape is wide in this area, the following three examples represent key functionality:

Podium, like Microsoft and others, adopted a “marketplace” paradigm to describe developing data assets for consumption in a common portal where consumers can “shop” for the data they need. Podium provides its “Prepare” functionality to schematize and transform data residing in Hadoop for a marketplace type of consumption.

AtScale is another Hadoop-based platform for the preparation of data. It enables the design of semantic models, meaningful to the business, for consumption by tools like Tableau. Unlike traditional OLAP semantic modeling tools, a separate copy of the data is not persisted in an instantiated cube. Rather, AtScale embraces OLAP as a conceptual metaphor. For example, when Tableau interacts with a model created in AtScale on top of Hadoop, the behind-the-scenes VizQL (Tableau’s proprietary query language) is translated in real time to SQL on Hadoop, making the storage of the data in a separate instance unnecessary.

Alteryx is also a powerful tool for extracting data from Hadoop, manipulating it, then pushing it back into Hadoop for consumption.

Keep security in mind

It is worth noting that many self-service tools have a server component in their overall architecture that is used to implement governance controls. Both row-level security (RLS) and column-level security (sometimes referred to as perspectives) can be put in place, and that security can often be implemented in more than one way. Many of these tools can leverage the group-level permissions and security that already exist in your ecosystem today. Work with a consulting services partner or the vendors themselves to understand recommended best practices for configuring the tools you have selected in your environment.

Whether you’re evaluating self-service data tools or looking for ways to shift your organization’s culture toward seeing data as an asset, we can help. Fusion’s team of data, technology, and digital experts can help you architect and implement a comprehensive data strategy, or help you get unstuck with a short call, workshop, or the right resources to reframe the questions at hand.
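To make the bronze, silver, and gold levels concrete, here is a minimal pipeline sketch in Python with pandas. The column names and cleansing rules are our own illustrative assumptions; in practice this work typically runs on Hadoop, Spark, or a cloud warehouse at far larger scale.

```python
# A minimal bronze/silver/gold sketch with invented columns and rules.
import pandas as pd

# Bronze: raw data landed as-is, with no curation or cleansing.
bronze = pd.DataFrame({
    "customer_id": ["001", "002", "002", None],
    "order_total": ["150.00", "75.5", "75.5", "20"],
    "order_date": ["2023-01-05", "2023-01-06", "2023-01-06", "2023-01-07"],
})

# Silver: groomed to be analytics-ready (types fixed, duplicates and
# incomplete records dropped).
silver = (
    bronze.dropna(subset=["customer_id"])
          .drop_duplicates()
          .assign(order_total=lambda d: d["order_total"].astype(float),
                  order_date=lambda d: pd.to_datetime(d["order_date"]))
)

# Gold: a highly curated, schematized, fit-for-purpose summary ready to
# load into a data mart or EDW (here, revenue per customer).
gold = silver.groupby("customer_id", as_index=False)["order_total"].sum()
print(gold)
```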
Read about key considerations for data democratization >>
Learn more about data strategy and how to get started >>
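One last concrete note before moving on: here is a toy illustration, in Python, of the row- and column-level security controls mentioned above. The permissions model is an invented simplification; real self-service tools implement these controls server-side, often by reusing your existing group-level permissions.

```python
# A toy row- and column-level security filter over an invented dataset.
import pandas as pd

sales = pd.DataFrame({
    "region": ["East", "West", "East"],
    "customer": ["Acme", "Globex", "Initech"],
    "revenue": [120_000, 95_000, 40_000],
    "tax_id": ["000-00-0001", "000-00-0002", "000-00-0003"],  # sensitive (dummy values)
})

def secured_view(df, user_regions, allowed_columns):
    """Row-level security: filter rows to the user's regions.
    Column-level security: drop columns the user may not see."""
    rows = df[df["region"].isin(user_regions)]
    return rows[[c for c in allowed_columns if c in df.columns]]

# An East-region analyst who may not view the sensitive column:
print(secured_view(sales, {"East"}, ["region", "customer", "revenue"]))
```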
Organizations collect data from a wide range of sources and store it in any number of solutions, which can be spread across the business. For marketing departments trying to deliver personalized customer experiences, those silos present a problem, and a customer data platform (CDP) can seem like an easy answer. That could be the right conclusion, but it also might be premature. If you’re wondering how to choose a CDP, it makes sense to start with the basics.

How to choose a CDP:
1. Understand the benefits and limitations of CDP solutions
2. Get internal alignment around the need and timing for a CDP
3. Surface your requirements, and prioritize your top 3-5
4. Map your core requirements to features you need in a CDP
5. Get stakeholder buy-in and secure a budget
6. Create a CDP selection matrix
7. Get external help if you need it

1. Understand the benefits and limitations of CDP solutions

The CDP Institute defines a customer data platform as “packaged software that creates a persistent, unified customer database that is accessible to other systems.” This simple definition goes a long way toward unpacking the pros and cons of CDPs.

A CDP is packaged software.
PRO: A CDP is prebuilt and doesn’t require as much help from IT to implement and maintain as other solutions like custom platforms and data warehouses.
CON: Because it’s packaged software and not a platform, a CDP will be less customizable and less efficient than adding customized functionality to an existing architecture.

A CDP must create a persistent, unified customer database.
PRO: A CDP ingests data from multiple sources across the business, standardizes it, and uses it to build an identity graph to enable real-time customer identification that fuels targeted multi-channel marketing efforts.
CON: A CDP overcomes some of the difficulty of identity resolution but obscures the methodology. You get a unified customer view, but not a lot of certainty that it’s accurate.

A CDP must be accessible to other systems.
PRO: You can use the aggregated data and outputs from a CDP with your downstream systems.
CON: The CDP you select must either come with the API connections you need for the rest of your martech and enterprise tech stacks, or you’ll need to factor in costs for custom APIs.

2. Get internal alignment around the need and timing for a CDP

Today’s martech stacks are growing fast, and it’s not always clear whether the functionality your marketing team needs already exists in the broader enterprise architecture. Overlapping features and blind spots are a problem — but also an opportunity. Gathering a cross-functional group that includes marketing, IT, data, and product teams can help you figure out how to get the most from your technology investments across the business.

If you’re wondering how to choose a CDP, you might be surprised to find that many of the core functions you’re looking for in a CDP are already available in your CRM, MDM, data warehouse, analytics, and BI solutions. For example:

Some analytical CRMs can track real-time online events like website browsing, adding to cart, and the like, much like CDPs.
Your data warehouse may allow for an identity graph overlay and machine learning algorithms that can play a key role in enterprise-wide customer identity resolution.
Your IT and data teams may already have identity and access management processes in place, and that single source of customer data truth could be integrated with your existing martech stack.

So, how do you know if your organization needs a CDP?
Your cross-functional group can explore potential use cases in light of departmental needs, budget, and existing functionality. While every organization is different, here are some ways to frame the conversation.

You might need a CDP if:
Your organization has a large volume of customer data stored in multiple places, and you either can’t or haven’t been able to integrate it into a single, real-time view
Your marketing team can’t access customer data or perform data tasks without help from data and IT teams
You can’t unify your online metrics, CRM data, and offline touchpoint and transactional data, making it hard to build a 360-degree view of your customer
You’re moving to a first-party data strategy, but you don’t have systems in place to use your data to inform audience segmentation and personalized campaigns
You have functionality gaps in your current martech stack that match up to CDP features

You might NOT need a CDP if:
You have a minimal and well architected martech stack
Your customer data is simple or straightforward enough to analyze easily without additional tools
Your marketing plan doesn’t require a lot of personalization, either because your products and services don’t require it, or because your roadmap doesn’t call for it in the short term
Your customer data strategy has already mapped your needs to existing solutions and your roadmap doesn’t include a CDP
Your budget doesn’t allow for duplicating storage costs, building and operating data ingestion processes, or keeping up with the steep total cost of CDP ownership
Your security requirements don’t allow for third-party customer data storage
Your internal IT and data teams find more affordable and secure means to implement identity and access management, democratize data access, and connect data storage with martech tools

3. Surface your requirements, and prioritize your top 3-5

As you meet with your cross-functional team and discuss your need for a CDP, you can also surface requirements and use cases for a CDP solution. From your larger list, choose 3-5 top priorities to help you choose a CDP. Some common examples of CDP use cases include:

Streamlining identity resolution, and making those outputs more accessible and actionable
Combining online and offline data
Creating more personalized content experiences on your website
Using more strategic targeting in your multichannel campaigns
Integrating and standardizing data across systems and making it easier to use those outputs in omnichannel marketing efforts

4. Map your core requirements to features you need in a CDP

After you identify your top 3-5 use cases, map those core requirements to features you need in a CDP. For example, if one of your core requirements is enabling more targeted multichannel marketing campaigns in the EU, you might need to look for a CDP that offers GDPR-compliant identity resolution processes.

Be sure to note which systems and solutions you’ll need to connect to your CDP, both in terms of data ingestion and output sharing with downstream systems. Your CDP will need integrations and APIs to enable those connections. Common CDP integrations include CRMs, analytics tools and dashboards, advertising platforms, BI tools, data warehouses, and data lakes.

Meet with your cross-functional group again to confirm your conclusions and assumptions. This meeting is a good time to clarify any security and data governance implications for your CDP selection as well.
5. Get stakeholder buy-in and secure a budget

Your next step is to get stakeholders on board for the CDP acquisition and to establish funding to make the purchase. Leadership needs to know the full report from your cross-functional team, but they’ll also have questions about what kind of return the organization can expect from the investment.

As you frame up the value story, consider thinking about what not having a CDP costs your company:

Is your marketing team using a significant amount of IT or data team hours looking for information?
If you were to solve your data connectivity issues with custom-built APIs, how much would that cost?
How much are you losing in customer value by not providing a customized experience?
How much time is your marketing team spending manually moving data from one system to another?

6. Create a CDP selection matrix

The vendor selection process doesn’t have to be painful. Your team can feel more confident about choosing a CDP because you’re going into the process with defined use cases and requirements, as well as support from the organization. That said, with an ever-growing CDP vendor landscape, it helps to have a process for narrowing down your choices.

To accomplish that task, many companies use an internally framed selection matrix or decision tree, customized to fit CDP-specific needs and requirements. These tools can help you think through solution options using your own criteria. (For a sense of what a simple selection matrix might look like, see the sketch at the end of this article.) However you handle the vendor review, plan to bring a short list of possibilities back to your cross-functional team for a final discussion before making a purchasing decision.

7. Get external help if you need it

Figuring out how to choose a CDP and where the solution fits into your larger customer data strategy isn’t easy. Fusion Alliance helps companies navigate changing customer data environments with a unique methodology that fosters collaboration, transparency, and shared ownership of digital transformations. Whether you’re just getting started or stuck in the messy middle of a CDP selection process, we help you unpack your processes and partnerships to identify risks and opportunities so you can take the right next steps toward a future-focused customer data strategy. Let’s talk.

Keep reading: The Ultimate Guide to Creating Your Customer Data Strategy
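As promised above, here is a hypothetical, minimal version of the selection matrix step: score each candidate against your prioritized requirements and weight each requirement by importance. The vendors, criteria, and numbers are invented purely for illustration.

```python
# A hypothetical weighted CDP selection matrix. All names and scores
# below are invented; substitute your own prioritized requirements.
weights = {"identity_resolution": 5, "gdpr_compliance": 4,
           "native_integrations": 3, "total_cost": 3}

vendors = {
    "Vendor A": {"identity_resolution": 4, "gdpr_compliance": 5,
                 "native_integrations": 3, "total_cost": 2},
    "Vendor B": {"identity_resolution": 3, "gdpr_compliance": 3,
                 "native_integrations": 5, "total_cost": 4},
}

# Weighted total per vendor: higher is a better fit for your criteria.
for name, scores in vendors.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: {total}")

# Bring the top scorers back to your cross-functional team for review.
```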
Building trust into your customer data strategy

In the rapidly changing regulatory environment around customer data privacy and security, it’s easy to get so caught up in specifics that you miss the big picture opportunity to build trust with your customers. Fundamentally, every box you check with your data (consent, collection, storage, use, sharing, and more) reflects your respect for your customer. After all, how you treat their data tells customers a lot about how you’ll treat them.

To take a customer-centric view of your data privacy compliance policies and procedures, we recommend a series of concrete steps applicable to any business or organization that deals with customer data in any form. While not exhaustive, this list forms a solid foundation for building trust in your data strategy.

Know where you’re vulnerable

One reason businesses are uniquely susceptible to data breaches and privacy omissions is the ubiquity of customer data across business units. While different departments may source, store, and use data in different ways, its presence demands compliance. The best way to get a handle on the issue is to create a data map. This unified view of the information you have, where it’s stored, and how it flows within your organization is not only helpful from a security and privacy compliance perspective, but it can also be a helpful resource for the business. (A bare-bones example of what a starter data map might capture follows below.)

Build a comprehensive data map in our Customer Data Strategy workshop >>

Set guardrails

When it comes to collecting customer data, just because you can doesn’t mean you should. Using your data map, think through your customer data requirements, and only plan to collect, process, and store what you really need to meet your goals. Many organizations find it helpful to spell out this less-is-more approach to data collection in a formal policy, and then make it part of their data culture and training.

Pro Tip: Having written policies around customer data collection, use, storage, transmission, and sharing is important, but not sufficient. Building policies into your corporate culture takes focused planning and effort, but it pays off in compliance.

Understand the current landscape

Depending on your industry, location, and customer base, your company may be subject to different privacy laws, regulations, and requirements. Some common standards include GDPR, CCPA, and HIPAA. Your legal team probably already has a good handle on which of these apply in your current context, but do your security, IT, data, and business units have a complete understanding of those implications? Consider cross-referencing relevant requirements to your data map to be sure your organization is fully compliant.

Build an agile data privacy program

What if your company is not currently subject to those regulations? As you build your data privacy program, it may be wise to look into today’s privacy standards as a near-future view. Legislatures and courts continue to support customer privacy, and policy changes at Apple, Google, and other large tech and search companies forecast trends toward greater restrictions on customer data collection and use. Using current standards to anticipate future changes makes sense. Keeping your data privacy program agile and flexible not only ensures that you’ll stay ahead of costly changes in the future, but also builds customer trust — a valuable goal regardless of external factors.
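As referenced above, here is one lightweight, hypothetical way to start a data map: a simple inventory of what customer data you hold, where it lives, and where it flows. The systems and fields are invented examples; a real data map would cover far more ground.

```python
# A starter data-map inventory. All entries below are hypothetical.
data_map = [
    {"data": "email, name",         "source": "web signup form",
     "stored_in": "CRM",            "flows_to": ["email platform"],
     "consent_basis": "opt-in"},
    {"data": "purchase history",    "source": "e-commerce platform",
     "stored_in": "data warehouse", "flows_to": ["BI dashboards"],
     "consent_basis": "contract"},
]

# Even a simple inventory lets you answer audit questions quickly:
for record in data_map:
    print(f"{record['data']}: {record['stored_in']} -> {record['flows_to']}")
```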
Establish a security and governance framework

Depending on your industry, this may be a formal, full-time responsibility for an individual, team, or an entire department. Or you might opt for clear policies and procedures shared across business units. In any case, your customer data security and governance framework should include, at minimum:

What types of customer data are collected
How customer data is protected during transmission, in use, and at rest
What standards you follow for data security, quality, access, and retention
Which protection measures you use when data is transferred, stored, or used, such as data masking, tokenization, format-preserving encryption, or keys

(A toy sketch of two of these measures appears at the end of this article.)

Improve the user’s experience

We commonly think of user experience on digital properties in terms of how a website or landing page looks and functions. But how you interact with your customers about their data is also part of their experience of your brand. Commit to clarity: rather than burying consent information in lengthy legalese, be up-front, clear, and simple in how you structure and format your consent management and opt-in features.

As you think about the ways your customers experience your privacy program, some topics to consider clarifying include:

How individuals can consent to (or opt out of) your company collecting and processing their data
Why customers might want to share their data — that is, what they get out of the exchange in terms of improved experience, personalized discounts, and the like
How you’ll establish that a user is over 16 or over 18, if your industry or topic requires that distinction
How a customer can request to have their data deleted, and how your organization will comply

Put the customer first

Data privacy protections show no sign of slowing down, but companies with strong customer data strategies don’t need to worry. Whatever the future holds, building a customer-first approach to collecting, storing, and using data pays off in terms of strengthened relationships across the buyer’s journey and throughout the customer lifecycle.

Not sure how to get started? We can help. Our team of digital, data, and technology experts partners with you to get your customer data strategy going — or back on track.

Let’s talk >>
Learn more about customer data strategy >>
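To illustrate two of the protection measures named above, here is a toy sketch of data masking and tokenization in Python. It is a concept demo only; real implementations rely on vetted cryptographic libraries, format-preserving encryption schemes, and secured token vaults.

```python
# Concept-only demos of masking and tokenization. Do not use in production.
import secrets

def mask_email(email: str) -> str:
    """Data masking: hide most of the value, keep enough for recognition."""
    local, _, domain = email.partition("@")
    return f"{local[0]}***@{domain}"

_token_vault: dict[str, str] = {}  # in practice: a secured, audited vault

def tokenize(value: str) -> str:
    """Tokenization: replace the real value with a random stand-in,
    keeping the mapping only in a protected store."""
    return _token_vault.setdefault(value, f"tok_{secrets.token_hex(8)}")

print(mask_email("ana.lopez@example.com"))  # a***@example.com
print(tokenize("4111-1111-1111-1111"))      # tok_<random hex>
```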
In the crowded field of martech solutions, finding the right tools can be challenging. Businesses not only need to identify the right customer data strategy to fit their goals, but then source or upgrade the right software and systems to bring that strategy to life. In this quick comparison, we’ll define two commonly misunderstood tools, help you sort through the CDP vs DMP conundrum, and explore how they might fit into your technology stack.

What is a Customer Data Platform (CDP) and what does it do?

A CDP is not technically a platform; it’s a software solution that collects and streamlines customer data primarily from first-party sources to improve marketing operations. Because they are designed in support of long-term customer engagement, CDPs store data longer and can provide a single source of truth for customer records.

Learn more about how CDPs fit into a customer data strategy >>

What is a Data Management Platform (DMP) and what does it do?

A DMP is a data warehouse that collects, segments, analyzes, and stores primarily third-party customer data for use in advertising campaigns. This adtech component plays a critical role in targeting and retargeting for short-term leads and customer conversions, but is not set up to support historical analysis.

Learn how third-party cookie deprecation is impacting adtech >>

How CDP and DMP solutions can work together

A CDP and DMP can work together in a modern martech stack. A DMP can be one source of data for a CDP, and the CDP can also share information back to the DMP. When approached strategically, the question isn’t CDP vs DMP, but how the two systems can support each other. With the right processes in place, a DMP can help bring in new prospects, a CDP can help brands connect and engage, and retargeting and customer cultivation can continue in a seamless loop.

More resources for your martech stack

Also wondering about the CDP vs CRM debate? We’ve got five factors to consider >>
Wondering how to choose a CDP? Check out our approach >>
Need to get a big picture view? Get the Ultimate Guide to Creating a Customer Data Strategy >>
After investing in martech solutions — often layering in new platforms and software over time — many organizations find themselves stuck. Whether the root issue is technology, processes, or capabilities, teams get frustrated when their tools don’t deliver.

If you’re in a similar position, the best plan is often to step back and review your customer data strategy. It might be time to re-evaluate in light of changing circumstances and shifting organizational goals. You might need a new roadmap to accommodate new privacy regulations. Or you might need a fresh take on how your martech stack fits into your enterprise architecture. Customer data strategies come to life in different ways, but smart implementations always start with well-aligned use cases and clear expectations. In this article, we’ll look at three real-life examples of how organizations we work with got unstuck by creating or refreshing their customer data strategies.

Transformation 1: From scattered data to always-on marketing

Our client managed customer data across multiple platforms, with no connectivity between digital and on-premises touchpoints. Lacking a unified view of customer behavior, the client defaulted to scatter-shot marketing, with disappointing results. As part of a customer data strategy engagement, Fusion helped this client:

Define what wasn’t working and identify root causes
Align business objectives, technical requirements, and key use cases
Recommend near-term remediation and future-state strategies
Establish a roadmap with incremental steps toward the solution

Then, we worked with the client to implement, test, and refine the customer data strategy, bringing the new solution to life in a way that fit the company’s culture and environment, including:

Developing a Master Data Platform
Customizing multiple platform APIs to unify customer engagement data
Integrating multiple digital platforms
Implementing PowerBI for data visualization

As a result, the client now has a consolidated view of real-time customer behavior and multi-channel marketing activities, which enables an “always on” approach to customer engagement.

Transformation 2: From customer churn to customer retention

Another client was experiencing high rates of customer turnover, but because they couldn’t discover the cause, they couldn’t develop a strategic plan for turning the trend around. Our team suspected that the key was in the client’s customer data. To identify root causes for the customer churn, we:

Assessed the client’s customer data, which was housed in various locations and at different levels of quality across the organization
Implemented a centralized data platform to reconcile and unify customer data from different systems of record
Consolidated and cleansed the customer data, making it easier to use and analyze
Designed machine learning models to test high-value use cases like identifying warning signs of customer churn and flagging high-risk customers that fit the indicators

As a result of centralizing and standardizing customer data, and using machine learning to quickly analyze significant current and historical information, our team helped the client flag customers likely to leave and put retention strategies into action to reduce the churn rate. (For a miniature illustration of this kind of model, see the sketch at the end of this article.)

Transformation 3: From disconnect to martech maturity

Another client we worked with had invested in powerhouse martech tools but wasn’t seeing the return they had expected.
Overwhelmed by the disconnect between expectations and results, the organization asked us to help sort out what had gone wrong. Our team helped the client re-evaluate their customer data strategy to determine the best path forward. Some of our work included:

In-depth analysis of existing technology platforms, software, and services
Clarifying the customer journey and identifying friction points for both internal and external users
Optimizing technology configuration and integrations, including key architectural changes
Cleansing data to remove duplicate information and give the client greater confidence in the quality and reliability of the data they collected
Implementing process and governance improvements

As a result, the client’s marketing team now works faster and more independently of IT, confidently using customer data to automate and personalize marketing touchpoints, and speeding up time to execution for their outreach and campaigns.

Get your transformation back on track

Ready to do more with your customer data and martech solutions? Defining a customer data strategy and bringing it to life doesn’t have to be so daunting. Whether you need a quick consultation or an in-depth engagement, our team can help you identify opportunities, outline a path forward, and put you on track to optimize the ways you collect, store, and use your customer data.

Let’s talk >>
Get the Ultimate Guide to Creating a Customer Data Strategy >>
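For a sense of what the churn-flagging models in Transformation 2 might look like in miniature, here is an illustrative sketch in Python with scikit-learn. The features, synthetic data, and threshold are invented for demonstration; a real engagement trains on the client’s cleansed, centralized customer data.

```python
# A miniature, illustrative churn model on synthetic (invented) data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical warning-sign features: days since last purchase,
# support tickets filed, and change in monthly spend.
X = np.column_stack([
    rng.integers(0, 365, n),   # days_since_last_purchase
    rng.poisson(1.5, n),       # support_tickets
    rng.normal(0, 50, n),      # spend_change
])
# Synthetic label: customers idle longer and spending less churn more often.
churn_prob = 1 / (1 + np.exp(-(0.01 * X[:, 0] - 0.02 * X[:, 2] - 2)))
y = rng.random(n) < churn_prob

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Flag high-risk customers so the retention team can act.
risk = model.predict_proba(X_test)[:, 1]
print("High-risk customers flagged:", int((risk > 0.5).sum()))
print("Holdout accuracy:", model.score(X_test, y_test))
```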
How do I get customers to give me their data?
Three trust-building steps toward a first-party data strategy
You understand the implications of cookie deprecation and the importance of pivoting to a first-party data strategy. But, if you’re like many organizations, you might be wondering how to get customers to give you the data you need to make that strategy meaningful.

You know that collecting first-party data is a tough ask from customers because you are one yourself. In a digital-first world, where data breaches make the headlines every week and privacy laws continue to tighten, people are increasingly wary of sharing their data. And yet, those same circumstances make it more vital than ever for marketers to collect it. How do you overcome data sharing reluctance and build a foundation of trust for your first-party data strategy? Every business is different, but we’ve identified three key steps that any organization can take to get – or keep – their first-party data gathering on track.

Key 1: Be transparent

Everyone knows companies need first-party data, but before customers hand over their information, they want to know how you’re using it. Studies show that most people are comfortable sharing data with companies they trust, who use the data to meet customer needs — and if you plan on selling or sharing the information, they want to know upfront. Your best move is to explain how you store and use data in plain, uncomplicated language. Make the terms easy to find and the customer’s rights simple to understand. Then, highlight the ways you use the data you collect to benefit your customers. What benefits and experiences can they expect when they let you know about their interests and preferences?

Key 2: Deliver value

Don’t just talk a big game about personalized content. If you offer incentives in exchange for data, be sure it’s something that your customer actually wants. Your generic and sporadically published newsletter? Probably not it. A guide that helps the customer with a real issue they experience day-to-day? Probably more successful. What content meets the customer’s unique needs? What touchpoints build connections between your brand and the customer’s own values and personal identity? Smart companies build content and offers tailored to customer interests, needs, and buying stage, not broad filler work.

Key 3: Be consistent

Your customers expect a consistent experience every time they interact with your company, whether that’s on your website, chatting with customer service, or talking with sales. Rather than capturing these touchpoints in data silos, your technology needs to connect information across business units and channels, so the customer is tied back to a centralized profile and so that your organization can connect the dots. Whether you store the information all in one place or keep it dispersed, your analysis should be able to cross boundaries and deliver actionable insights that help you personalize content, marketing, and individual interactions so your customer has a seamless experience with your brand.

Take the next step

Building a first-party data strategy is no easy task, but the results are worth it. If you’re not sure how to get started or think you might have gotten off track, we can help. Fusion Alliance helps companies reimagine how they connect with their customers through strategic solutions at the intersection of digital, data, tech, and cloud.

Let us know how we can help >>
Get the Ultimate Guide to Creating a Customer Data Strategy >>
The difference between a customer data platform (CDP) and customer relationship management (CRM) solution may be difficult to determine at first, because both options collect, store, and put customer data to use in support of business goals. While their functions may overlap, the CDP vs CRM debate becomes easier when you get clarity about the people, processes, and use cases for each option.

How to make the CRM vs CDP decision:
1. What is a CDP?
2. What is a CRM?
3. What data is collected by a CRM vs CDP?
4. Who uses a CDP vs CRM and for what purpose?
5. What do we need: a CDP, CRM, or both?

1. What is a CDP?

A CDP unifies and standardizes large and detailed data sets from a wide variety of sources, resulting in robust customer profiles that enable real-time personalization. The CDP Institute defines a CDP as “packaged software that creates a persistent, unified customer database that is accessible to other systems.” Additionally, a CDP must have the following capabilities:

Ingest data from any source
Capture full detail of ingested data
Store ingested data indefinitely (subject to privacy constraints)
Create unified profiles of identified individuals
Share data with any system that needs it

Through the process of identity resolution, the CDP can match, merge, and deduplicate data into a single customer view that can be segmented and analyzed — by human analysts or with the assistance of machine learning. (A toy sketch of that match-and-merge step appears at the end of this article.)

2. What is a CRM?

A CRM and a CDP are both software solutions that handle customer data, but they differ in how, why, and for whom. The difference came about organically, as organizations adopted different use cases for their customer data over time.

“CRM solutions were often proposed to tackle customer data management problems. The idea was that you could get ‘all of your data in one place’ to use for sales, marketing, and customer service. The promise was they’d break down silos in enterprises and design a view of the customer that wasn’t specific to sales or marketing or customer service. That sounds familiar to the promise of CDPs, doesn’t it?” — Lizzy Foo Kune, senior director analyst at Gartner

A CRM helps organizations manage customer relationships by consolidating what is known about customers from one-to-one touchpoints and transactional details into a single database, giving sales and service teams personal and actionable insights. According to the Microsoft Dynamics 365 website, “CRM systems help you manage and maintain customer relationships, track sales leads, marketing, and pipeline, and deliver actionable data.”

Sound similar to a CDP? There’s a key difference: CRMs only apply to known customers and contacts. Moreover, they don’t cleanse, combine, standardize, or deduplicate the customer records, so they can’t give a business a “single customer view” across channels.

3. What data is collected by a CRM vs CDP?

That key difference reflects the two business silos that CRMs were developed to unite: marketing and sales.

Marketing needs a high volume of customer data across touchpoints in a single, unified view to understand your customers and their behavior. CDPs collect digital data automatically using integrations and code snippets embedded in digital touchpoints, gathering customer data from websites, laptops, mobile devices, apps, and even CRMs into one place. The CDP then cleans the data and produces consolidated customer profiles.

Sales needs customer data to help manage the customer relationship.
CRMs store historical data about customer interactions in order to inform future interactions. The data CRMs collect is usually entered manually, and its purpose is tightly focused on logging an interpersonal or transactional interaction — for example, notes from the latest sales call. The data inputs are simple, although difficult to standardize or automate, and are usually done manually by sales (and service) people to track the progress of the relationship.

4. Who uses a CDP vs CRM and for what purpose?

Your organization’s CDP vs CRM discussions may come down to who needs to use the system to accomplish critical business tasks. As we’ve said above, marketers need a unified view of the customer’s entire experience of the brand over time. A CDP’s ability to ingest, cleanse, manage, and analyze large volumes of data from many digital sources makes that task easier. But for sales and support teams, the key driver is managing customer relationships. In these customer-facing roles, contact management is critical, so a CRM’s ability to capture notes and manual inputs about one-to-one interactions facilitates that function.

5. What do we need: a CRM, CDP, or both?

While choosing between solutions isn’t easy, it’s not necessarily an either/or decision. You might find a both/and solution serves your business better. How do you make the call?

If your business primarily needs to manage customer relationships in a more detailed, efficient, and personalized way, you might choose a CRM. In fact, over the last few years, CRMs have been innovating and evolving to function more and more like CDPs, so it might be prudent to wait and/or choose vendors carefully. Gartner predicts that 70% of independent CDP vendors will be acquired by larger technology vendors or will diversify by 2023. “CRM systems have seen the competitive threat that CDPs brought to the table,” Gartner’s Foo Kune said. “As CRM technologies recognize that they need to update their aging databases to meet the needs of modern business functions, including marketing, augmenting your CRM with a CDP may be unnecessary.”

If your business primarily needs a broad view of who your customers are and how they engage with your business, you may opt for a CDP. “Companies seeking a new strategy to form personalized customer experiences through data will need a CDP as it offers the resources to create a comprehensive view of the customer across each platform they interact with in real-time — whether it’s social media, apps or mobile,” says Heidi Bullock of Tealium, a CDP provider. “CRMs, on the other hand, help manage sales-focused customer data rather than collecting data across different channels.”

And if your business is broad, you can choose both a CDP and a CRM. While CDPs and CRMs offer two different marketing and sales data management solutions with differing strengths, you don’t necessarily have to choose between them. “CDPs and CRMs can actually operate simultaneously, as they work to fulfill different business goals,” Tealium’s Bullock notes. It’s possible to use a CRM as an input and output channel to a CDP, and, in turn, use a CDP to provide a 360° customer view data set within the CRM. Choosing both a CDP and a CRM can deliver both an amazing customer experience and tremendous business value: achieving high marks in customer satisfaction and providing integrated tracking and engagement.

The CDP vs CRM choice depends on your roadmap.
Fusion works with clients to define a customer data strategy that fits each organization’s unique strategic objectives, operational needs, and timeline. From there, our team creates a tactical roadmap to define actionable steps toward those goals. Whether you’re just getting started or trying to get your digital transformation back on track, we can help.

Ask a question >>
Book a workshop >>
Learn more about customer data strategy >>
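As promised earlier, here is a toy sketch of the “match, merge, and deduplicate” step a CDP performs during identity resolution, written in Python with pandas. The field names and the single matching rule (a shared email address) are illustrative assumptions; production identity graphs use far richer signals and probabilistic matching.

```python
# A toy identity-resolution sketch: match on a shared identifier, then
# merge into one deduplicated profile per customer. Data is invented.
import pandas as pd

crm = pd.DataFrame({
    "email": ["ana@example.com", "ben@example.com"],
    "name": ["Ana Lopez", "Ben Ng"],
    "last_sales_call": ["2023-01-10", "2023-02-02"],
})
web = pd.DataFrame({
    "email": ["ana@example.com", "ana@example.com", "ben@example.com"],
    "page_viewed": ["/pricing", "/docs", "/pricing"],
})

# Match: collapse the event stream to one row per identifier.
events = (web.groupby("email")["page_viewed"]
             .apply(list)
             .rename("web_activity")
             .reset_index())

# Merge: one unified, deduplicated profile per customer.
profiles = crm.merge(events, on="email", how="left")
print(profiles)
```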
Every few weeks, we share insights with our Fuse subscribers along with news and trends we’re following across the web. Here’s a compilation of some of our key insights from last quarter. If you want content like this delivered directly to your inbox, we’ve got you covered. Subscribe to the Fuse here.

Data literacy: Food for thought

How do you get from buzzwords like “data literacy” and “data culture” to confidence that data is driving better decisions across the business? If Peter Drucker was right — and he’s Peter Drucker, so he probably was — culture eats strategy for breakfast. It’s not enough to build a business case for data. You need a business culture to support it.

In our experience, success starts with aligning people, processes, and business goals with purpose-built data and technology solutions. When people understand what data makes possible and how it impacts their job — where to find it, and how to read and interpret the data they need — convincing them to use it to drive better decision making is a much easier lift.

Easier said than done? You bet. We love a complicated algorithm or elegant data architecture, and we’re basically ninjas at selling business cases (if we do say so ourselves). But there’s a reason Fusion stakes a claim on being people-focused. Because we don’t just love data. We love when it works.

Get smart: If you’re looking for an overview of data culture and a baseline for building data literacy across your organization, we recommend Be Data Literate by Jordan Morrow. Although written as a primer for individuals, the book’s framework could easily be used as a springboard for helping your whole company level up its data acumen.

Read the full Fuse: Data for March.

A one-brain approach to B2B marketing

In AppleTV+’s bizarrely compelling drama Severance, employees’ brains are modified to separate their work memories from their off-work thoughts. Of course, what makes the show sci-fi is the fact that no one really has a “work self” and a “life self.” So, why does B2B marketing often seem to assume that consumers and business purchasers are different people?

Compare your IG feed to the LinkedIn ads you’re served. One platform shows you talking Australian lizards. The other shows you text about processing speeds. When you need insurance, you remember where to go. When it’s time to make a CMS platform decision you…probably should have made a note.

We want to believe that our B2B customers make purely rational decisions, but experience and data suggest otherwise. Whether it’s B2C or B2B, people predominantly buy from emotion, not stats and features. Creative marketers who are willing to push the envelope can capitalize on this idea to stand out in the sleepy B2B marketing landscape.

It’s hard to argue with results. One of our clients, a pharma sales enablement company, saw 3x lead growth when they pivoted from standard B2B ads to a brighter, more engaging campaign direction. Your B2B targets don’t come to work as a separate persona. Creative marketing captures attention with a whole-brain approach.

Ready to ditch the sinister work-life lobotomy assumptions? We’re always ready to talk about how to set your brand apart, whether it’s new creative or a streamlined martech stack. Let us know how we can help.

Get smart: Wondering how to sell creative marketing internally?
We’ve been reading The Human Element: Overcoming the Resistance that Awaits New Ideas and thinking through the authors’ framework for overcoming our natural resistance to change — especially as it applies to organizations. If you’re struggling through a shift, this book could be worth your time.

Read the full Fuse: Marketing for March.

Put your technology on a balanced diet

Tech creep is kind of like strolling the cereal aisle with a four-year-old (or a 34-year-old, no judgment) who begs for the choco-sugar-neon-behavior-bombs instead of the sensible-fiber-nut-loops you had planned. When it comes to building your tech stack or stocking your pantry, “it looked cool” isn’t really a strategy.

And yet, for many companies, an enterprise architecture hodge-podged out of whatever looked good at the time often gets the job done. Until it doesn’t. A move to the cloud, a new data privacy mandate, or even the increasing demand for speed and agility to stay competitive might expose the imbalance in your tech stack.

How do you get back to a more wholesome view? Realigning your solutions with your organizational goals and objectives is a great start. Regardless of how long you’ve been using it, does every piece of your technology still fit your plan? You might need to let go of sunk costs and admit that a tool has gotten a little soggy for your current needs. You might need to put your appetite for shiny new solutions on a diet.

At the risk of straining our balanced breakfast metaphor past the breaking point (too late?), we recommend putting a healthy strategy on the menu. As guidelines change and organizations shift to keep up, this is a great time to reassess your tools and processes. In its simplest form, a refreshed technology strategy includes a current state audit, an ideal state articulation, and a plan to bridge the gap. Whether your internal culture skews Team Sugar-Bombs or Team Fiber-Loops, we can help you take a strategic view and bring your technology stack back into balance.

Get smart: We get that it’s a little bit ironic for a bunch of tech consultants to recommend a book like Cal Newport’s Digital Minimalism. But hear us out. Newport’s approach to consumer technology – that tech and platforms should have to earn their place in your life by proving that they help you meet your goals and values – has some merit for the business world as well. We’ve all seen what Newport terms “maximalism” at play in sprawling, bolted-together legacy architectures. Maybe the time has come for a more minimalist, goal-driven tech stack. Whether you’re ready to start over or looking for ways to modernize what you have, we’re always happy to talk technology strategy.

Read the full Fuse: Technology for April.
Culture. It's what's for breakfast.

When it comes to implementing emerging technologies and advanced analytics, it’s easy to get caught up in building the business case. But we can all think of businesses that devoted significant time and resources to leading-edge solutions and still failed to see results. If Peter Drucker was right – and he’s Peter Drucker, so he probably was – culture eats strategy for breakfast.

This is not to say that your data initiatives are doomed to the frying pan (sorry, we couldn’t resist). Rather, to deliver value quickly, you can’t stop at data strategy, quality, and governance. It’s not enough to build a business case for data. You need a business culture to support it.

Where do you start? How do you get from buzzwords like “data literacy” and “data culture” to confidence that data is driving better decisions across the business? In our experience, success starts with aligning people, processes, and business goals with purpose-built data and technology solutions. When people understand what data makes possible and how it impacts their job, where to find it, and how to read and interpret the data they need, convincing them to use it to drive better decision making is a much easier lift.

Easier said than done? You bet. We love a complicated algorithm or elegant data architecture, and we’re basically ninjas at selling business cases (if we do say so ourselves). But there’s a reason Fusion stakes a claim on being people-focused. Because we don’t just love data. We love when it works.

Get smart: If you’re looking for an overview of data culture and a baseline for building data literacy across your organization, we recommend Be Data Literate by Jordan Morrow. Although written as a primer for individuals, the book’s framework could easily be used as a springboard for helping your whole company level up its data acumen.
Organizations today are inundated with data from different sources. This data can help you make better business decisions, improve customer interactions and retention, and create more intentional strategic plans. But none of that is possible if you can’t trust or access your data. Some of the common pain points we see around data in organizations are:

There is no single source of truth to use to make decisions
Managers don’t have the right data available in real time to accelerate decision-making
Time is wasted searching for and reconciling data
The data exists, but is not keeping up with business needs
Concerns about compliance
Lack of trust in data quality

That’s why we created our proprietary Catalyst Strategic Data Management & Analytics (SDM&A) Framework — a comprehensive and flexible framework that enables you to examine your business across all domains of data and analytics maturity.

Fusion’s Catalyst SDM&A

Without the right framework, your information is useless to your organization. We work to ensure that the following data components are accounted for in your strategy and that you’re able to execute in a way that moves your business goals forward. Based on our experience, we’ve found that companies need a holistic, 360° view of the SDM&A landscape in order to decide how to proceed most efficiently while delivering maximum business value. Using our framework, you can:

Understand your business’s current data maturity level across the seven critical domains
Identify gaps or deficiencies between your desired state and your current state, and walk away with a roadmap that gets you where you want to go
Align data and analytic investments with business strategy, goals, and objectives

Our Catalyst SDM&A Framework is completely customizable, so we are able to meet you wherever you are on your data journey and create a solution that meets your unique needs. Click here to learn more about our strategic data management services.
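As a purely hypothetical illustration of the gap-analysis idea, here is how a maturity assessment could be tabulated in Python. The domain names and scores below are invented; the actual Catalyst SDM&A Framework defines its own seven domains and its own scoring approach.

```python
# A hypothetical maturity gap tabulation. Domains and scores are invented.
current = {"governance": 2, "architecture": 3, "data_quality": 2,
           "analytics": 1, "security": 3}
desired = {"governance": 4, "architecture": 4, "data_quality": 4,
           "analytics": 3, "security": 4}

# Rank domains by gap size to help prioritize the roadmap.
gaps = sorted(((domain, desired[domain] - current[domain]) for domain in current),
              key=lambda item: item[1], reverse=True)
for domain, gap in gaps:
    print(f"{domain}: close a {gap}-level gap")
```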
Navigating the new data landscape

Today’s consumers expect increasingly personalized experiences. To deliver that customized experience and increase market share, business leaders rely on personalization. Ever wondered how ads seem to follow you around the internet? That’s personalization at work. And the engine that drives it is data. The changing data landscape makes it essential for companies to re-evaluate their customer data strategies — and fast. Let’s get started.

Where we are now

As part of building a direct relationship with your customers, you probably collect some form of data with their consent. This information, known as first-party data, can include purchase history, application and website interactions, opt-ins and subscriptions, and the like. A customer provides a business with first-party data with the understanding that the business — and only that business — will use that data to better serve them.

But not all data is collected through active consent. All your customer’s online activity and search history is currently tracked by cookies. That information, known as third-party data, can be sold from one business to another, without the customer’s knowledge, with the goal of getting more information about aggregated pools of consumers with similar behaviors.

Think about the last time you shopped for a new car. You probably looked up options online or Googled dealerships nearby. Then, BAM! Suddenly all the ads you saw on Facebook, YouTube, and other websites were from car dealerships, with information targeted to the very model you were thinking about. That’s third-party cookies at work! Using third-party cookies, advertisers pay to get one step closer to consumers’ brains, gaining access to the professional status, consumer preferences, and personal interests that are otherwise outside of their first-party scope.

Data in real life

While most organizations collect first-party data from their customers, many still rely heavily on third-party data and use a combination of the two to build their personalized customer experience. For example, using in-store Wi-Fi, Walmart collects a combination of first- and third-party data on close to 145 million Americans. Individual profiles include what customers buy, which stores they visit, and where they linger while they shop. Analyzing every clickable action the customer takes on Walmart.com, social media activity, and even weather data completes a full customer profile. By aggregating and analyzing all this data, Walmart makes inferences about highly personal circumstances, knowing that if you’re buying newborn diapers today, they should start hitting you with advertising for toddler sizes in two years. Even without a loyalty program, Walmart’s data and analytics ensure they know their customers and have what they need to create a personalized shopping experience.

Why these trends matter

While customers may enjoy the customization of their experience, they don’t necessarily appreciate the creepiness factor of how often their privacy is invaded. The amount of information that consumers provide — knowingly or unknowingly — on a regular basis, which is then sold to other companies, creates a huge risk of data breaches and identity theft. Many organizations and advertisers see the coming deprecation of third-party cookies as a win for privacy, but a loss for businesses. But it doesn’t have to be a one-sided victory.
Smart companies are figuring out how to use first-party data to achieve a personalized customer experience, turning the challenge into a win-win opportunity.

Leveraging your first-party data

To make up for the loss of third-party data, you need your customers to feel comfortable sharing their data with you. First-party data requires customers to be actively involved. Many customers are willing to give their information if they know it is being used responsibly by the business and that they will receive a better customer experience because of it. Your goal is to communicate how the information customers provide your company enables you to deliver a valuable, relevant experience that fits their needs and leaves them in control of their data.

Focus on customer retention over acquisition

When you deliver a relevant experience to your customers, they’re more likely to stay customers. First-party data allows you to deliver a more customized experience by gaining an understanding of each customer’s preferences for your products and services. You’re then able to re-engage through your owned channels and increase the lifespan of the customer relationship. And it doesn’t mean you have to leave acquisition behind. Just think … happy, loyal customers are likely to refer friends and family.

Offer value in exchange for data

Customer loyalty programs are an excellent example of successfully using first-party data, especially where there is a value-add such as accumulating points, getting a discount, early-bird notifications, and the like. In this case, customers are incentivized to provide their data, and you can build a unique customer profile that will allow you to customize their interaction with your brand in the future.

Create personalized content

First-party data helps you discover who your customers are and find out about their habits and preferences directly from the source. Insights gained through first-party data analysis allow you to create different targets and strategies for delivering content personalized to your most loyal customers.

It’s a trade-off

The reality is, by increasing consumer privacy, businesses lose access to aggregated information and the ability to mass personalize, and customers potentially lose the personalized experiences they’ve come to expect. The biggest question remains: can organizations fill the gap and start collecting and leveraging quality first-party data?

The time is now

Data and marketing teams must work together, and fast, to create first-party customer data strategies ahead of the cookie deprecation timeline. Building processes to collect quality data, keep it safe, analyze it correctly, and use it to create meaningful customer experiences won’t be easy, but laying that groundwork now can position your company for success in the new customer data landscape.

This article was originally published on CDO Magazine.
Third-party cookies are a staple in a marketer’s arsenal and have been for decades. So, now that they are soon to be fully eliminated, marketing teams are panicking. Are we truly ready for a future without this key source of third-party data?

While some companies have started to plan for a cookieless future, most still don’t have a baseline understanding of how their business will be affected — much less how to roll out a strategy to operate in this new world. It’s an important problem to solve. And not all solutions are created equal. What marketers choose to invest in now could make or break their marketing campaigns for years to come.

Why are third-party cookies being deprecated?

The reality is that this has been a long time coming — Mozilla started phasing out third-party cookies in 2013. But now, Google is expected to phase out this online tracking tool in 2023, and Apple has moved their mobile device ID (the Identifier for Advertisers, or IDFA) to opt-in only.

Data privacy is a growing concern for consumers, and businesses must keep up. As consumer data gets collected and passed around between countless third parties, there are benefits to targeted marketing, but there are also more possibilities for harmful data breaches. And with multiple pro-privacy laws coming to fruition in the U.S. — such as the California Consumer Privacy Act and similar laws in Colorado and Virginia — organizations are being held more accountable for the data they own and use.

Those who prepare for the change will stand out from the crowd by delivering relevant, timely, and insightful customer experiences compared to the one-size-fits-all experiences of the competition. Those who haven’t prepared will need to build that strategy quickly, all in an environment where there’s no clear replacement and simply less data available. So how do you do it? Here’s how you can still create a customized customer experience without third-party cookies.

Leverage the first-party data you already have

First-party data is often your best source for accurate and specific consumer information. Increasing first-party data has been a priority for marketers for years. According to a 2018 study, 85% of U.S. marketers said that increasing their first-party data is a high priority. And if you think about it, you’re probably already collecting data on your customers with different tools (e.g., email, social, etc.). But there are better ways you can leverage first-party data. For example, look at your CRM and sales tools. When utilized correctly, you can evaluate customer data based on interaction reports, performance metrics, conversion rates, and more. Now is the time to look at the tools your company is already using and leverage them to reach the right audience with the right content.

Personalize customer experiences with declared data

Declared data, a type of first-party data, is one of the richest sources of customer information you can get. This is the data a customer gives you themselves in one-on-one interactions, and it is the most accurate information about their desires and demographics. As Forbes explains, consumers are happy to share their information to get a more personalized experience. So don’t be afraid to ask customers for their data when it makes sense — it will be more important to have this declared data than it ever has been before.

Utilize email marketing

Although email marketing may not be new, it can still deliver ROIs of over 4,000%.
It is a great channel for driving sales, nurturing relationships, and understanding your customers. Most importantly, it allows you to collect customer data through opt-ins — and then that data can be segmented by location, company size, position, and more. You can deliver unique offers and messages to each group for the best response. Businesses can use tools like HubSpot to build lists, segment communications, and create automated journeys. (A small sketch of what list segmentation can look like follows at the end of this article.)

Email will become the go-to first-party data targeting solution as third-party cookies are phased out. Your business can market to customers when they’ve left a website and utilize this information to create more customized messaging and experiences. In addition, you can utilize email lists within advertising platforms like Google Ads, where subscribers can be retargeted and brought back to convert. A lot of what third-party cookies provided can be achieved with proper email marketing.

Examine your partnerships for customer data exchanges

You may be thinking about how you are handling third-party data, but how are your vendors preparing for the change? By leveraging the right technology, companies can safely and securely discover overlapping customers without exposing personally identifiable information (PII) or breaking data privacy laws. These insights alone can help you quickly assess the untapped potential of a collaboration. Now is the time to work with partners to begin exploring the safe exchange of data for the benefit of both parties.

Overhaul your data management strategy

This is a great opportunity to change the way you manage and leverage your customer data to develop targeting, execution, direct marketing, and customized experiences. This does require effort and investment from your organization (e.g., investing in a quality customer data platform (CDP) or a great CRM system). Ultimately, your investment results in better control, a more customized experience, and a greater ROI.

While CDPs and CRMs offer two different marketing and sales data management solutions with differing strengths, you don’t necessarily have to choose between them. It’s possible to use a CRM as an input and output channel to a CDP. And, in turn, use a CDP to provide a 360° customer view data set within the CRM. Choosing both a CDP and a CRM can deliver an amazing customer experience and tremendous business value: achieving high marks in customer satisfaction and providing integrated tracking and engagement. Learn more about the differences between a CDP and CRM, and what could work best for your organization.

By making the investment now in a new and improved data strategy, you can set yourself up for success in a world without third-party cookies.

Look forward to the cookieless future

The elimination of third-party cookies will fundamentally change digital marketing as we know it. But it also presents an opportunity to move away from an old standard and push your online marketing into the future. By maintaining a solid understanding of all the forces at play and updating your strategies to prepare for the transition, you can set your organization up for success and stay ahead of your competition. Start now with our risk assessment workshop.

Ready to see how this change is going to impact your organization? Register for our upcoming webinar, “Does cookie deprecation affect me? And 5 other key questions to ask before it’s too late.”
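As referenced above, here is a small sketch in Python with pandas of segmenting an opted-in email list. The subscriber fields and size buckets are hypothetical; in practice, the list comes from your email platform’s opt-in forms and your CRM.

```python
# A small, hypothetical email-list segmentation example.
import pandas as pd

subscribers = pd.DataFrame({
    "email": ["a@x.com", "b@y.com", "c@z.com", "d@w.com"],
    "location": ["OH", "CA", "OH", "NY"],
    "company_size": [50, 5000, 200, 40],
})

# Bucket company size, then group by location and size bucket so each
# segment can receive its own offer or message.
size_bucket = pd.cut(subscribers["company_size"],
                     bins=[0, 100, 1000, 10_000],
                     labels=["small", "mid-market", "enterprise"])
segments = (subscribers
            .groupby(["location", size_bucket], observed=True)["email"]
            .apply(list))
print(segments)
```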
Almost 20 years ago, Capital One recognized the need for one person to oversee their data security, quality, and privacy, and the role of the Chief Data Officer was born. Now reports show that 68% of organizations have a CDO (Harvard Business Review, 2020). And while the role has become more common and has significantly evolved, many data executives are still struggling to get a seat at the table or bring data to the forefront of their organization. In fact, in a recent survey, only 28% of respondents agreed that the role was successful and established. Company leaders agree that there needs to be a single point of accountability to manage the various dimensions of data inside and outside of the enterprise, including the quality and availability of that data. But now we are at a crossroads — what is the best way to align the work that the CDO does with the strategy of the business as a whole? The reality is that CDOs often struggle to find the internal and external support and resources needed to educate others to align with the organization’s goals. Implementing enterprise data governance, data architecture, data asset development, data science, and advanced analytics capabilities — such as machine learning and video analytics — at scale is not an easy task. To be successful, data executives need support, resources, and communities focused on the elevation of data. We are proud to continue to help these communities come to life for the benefit of our colleagues and clients, establishing local resources here in the Midwest with global scale, reach, and impact. Read on as Mark Johnson, our Executive Leader for Data Management and Analytics, provides insight on the current state of data and the CDO, along with details on multiple opportunities for data leaders of different levels to get more involved in the data community. Q: How has the role of data changed/evolved for organizations? The reality is that information is everything. This global pandemic proved that to many organizations. For some, it showed that their digital network was ready, and they were well prepared to take on COVID. For others, it has forced them to recognize their own immaturity with data and analytics. On its own, managing data is not exciting — the information just sort of exists. To give data value, you have to put it to use. And so, I think we are going to see the Chief Data Officer and/or Chief Data Analytics Officer really come into their own in the coming years. It’s time for their seat at the table. The C-suite is now asking questions that can only be answered with data, and now they truly understand both the value and consequences of the data game. Q: What do you think are the biggest challenges facing CDOs/data leaders today? I think that the biggest challenge for data executives today is the acquisition of talent that is seasoned and experienced where you need them to be for your organization. Higher education hasn’t necessarily kept up with the data world, and oftentimes it takes additional training to reach the right levels. The reality is that right now the talent is manufactured in the real world. Data executives have to be connected and equipped to mentor, train, and keep the right people. Q: You’ve mentioned that data leaders need to connect with each other. What value can people expect from these data communities? I think there is tremendous value.
As we are seeing the power of data evolve in organizations, and the role of data leaders evolve as well, I think coming together to collaborate and share elevates the leader, the organization, and the view of data as a whole. These communities give people a safe space to talk about how they are doing, what they are doing, what their biggest challenges are, and what solutions are working for them. These communities have truly become both a learning laboratory and an accelerator for data. Q: As a big proponent of connecting data leaders, you have been involved in creating different opportunities for people to get together. What groups/events would you recommend, and how can people get involved? I personally have been involved with the MIT Chief Data Officer and Information Quality Symposium (MIT CDOIQ), which is a great place to start for connection. It has developed into additional opportunities for data leaders at all levels to get involved and create the kind of community we need to truly elevate the value of data. Organizations like CDO Magazine, the creation of CDO roundtables across the nation, and the International Society of Chief Data Officers (isCDO) all evolved from connecting data leaders and identifying common challenges. MIT CDOIQ: The International MIT Chief Data Officer and Information Quality Symposium (MIT CDOIQ) is one of the key events for sharing and exchanging cutting-edge ideas and creating a space for discussion between data executives across industries. While resolving data issues at the Department of Defense, the symposium founder, Dr. Wang, recognized the need to bring data people together. Now in its 15th year, MIT CDOIQ is a premier event designed to advance knowledge, accelerate the adoption of the role of the Chief Data Officer, and change how data is leveraged in organizations across industries and geographies. Fusion has been a sponsor of this symposium for seven years now, and we are so excited to see how the event has grown. Designed for the CDO or top data executive in your organization, this is a space to really connect with other top industry leaders. CDO Roundtables Fusion has always been focused on building community and connecting people. And when one of our clients, a Fortune 500 retailer, mentioned wanting to talk with other data leaders from similar corporations, we realized that there was a big gap here — there was no space that existed where data leaders could informally come together, without sales pitches and vendor influence, and simply talk. That’s how the CDO roundtables were born — a place that allows data leaders to get to know each other, collaborate, accelerate knowledge growth, and problem solve. We started just two years ago in Cincinnati, but now we’ve expanded to multiple markets including Indianapolis, Columbus, Cleveland, Chicago, and Miami. These groups are designed for your CDO/CDAO and truly create an environment for unfiltered peer-to-peer discussion that helps solve data leadership challenges across industries. If you’re interested in joining one of these roundtables or starting one in your market, email me or message me on LinkedIn. I’m here and ready to get these roundtables started with executives in as many communities as I can. The more communities we have, the more data leaders and organizations we can serve. International Society of Chief Data Officers (isCDO) Launched out of the MIT CDOIQ symposium, the isCDO is a vendor-neutral organization designed to promote data leadership.
I am excited to be a founding member of this organization, along with our Vice President of Strategy, David Levine. Our ultimate goal is to create a space that serves as a peer-advisory resource and enables enterprises to truly realize the value of data-driven decision making. With multiple membership options available, isCDO is the perfect opportunity for data leaders looking to connect with their peers and gain a competitive advantage by focusing on high-quality data and analytics. CDO Magazine I am really proud to be a founder of CDO Magazine, as it is a resource for all business leaders, not just the CDO. We designed the magazine for C-suite leaders — to educate and inform on the value proposition, strategies, and best practices that optimize long-term business value from investments in enterprise data management and analytics capabilities. Check out the publication here. And if you’re interested in contributing content or being interviewed, let me know at email@example.com. Closing: The role of the CDO is integral to organizations, but it’s still evolving. Now more than ever, it is important that data leaders come together to collaborate and problem-solve. Fusion is excited to be a part of each of these initiatives, and we are committed to being an agent of change in the communities we serve and beyond. By connecting global thought leaders, we believe that organizations will realize the value of data to power their digital transformation. If you’re interested in joining any of these data communities or just have questions, feel free to reach out to Mark via email or on LinkedIn.
In a few short years, hyperautomation, or intelligent automation, has gone from a relatively unknown term to a word used across the technology spectrum. Gartner’s Strategic Technology Trends for 2020 named hyperautomation the #1 strategic technology trend for the year. Gartner also forecasted that the hyperautomation software market will reach nearly $600 billion by 2022. What’s fueling the investment? Organizations are trying to remain competitive by decreasing costs and increasing productivity. A focus on hyperautomation can address business challenges and improve operational efficiency, not to mention elevate the customer experience. “Hyperautomation has shifted from an option to a condition of survival,” said Fabrizio Biscotti, research vice president at Gartner, in a recent press release. “Organizations will require more IT and business process automation as they are forced to accelerate digital transformation plans in a post-COVID-19, digital-first world.” The foundation of hyperautomation With Robotic Process Automation (RPA) at its core, hyperautomation incorporates advanced technologies — including artificial intelligence (AI), machine learning (ML), natural language processing, optical character recognition (OCR), process mining, and others — to not only automate tasks typically completed by humans but also to build intelligence into the processes, as well as the information derived from those processes. By building on RPA, hyperautomation elevates workflow automation to make decisions previously made by people. It augments the power and value of what RPA provides with a proven path to applying AI to improve business operations. Hyperautomation and digital transformation Because of the level of automation that can be achieved, hyperautomation is commonly referred to as the next major phase of digital transformation. And it’s an intricate process. Organizations must implement automation simultaneously on multiple fronts to reach the end goal of hyperautomation. They often need to partner with digital innovation advisors and technology consultants to create a hyperautomation strategy from top to bottom and take all of the organization’s nuances into account. To achieve scalability, disparate automation technologies must work together. Careful planning, implementation, and improvement of processes are accomplished through intelligent business process management (BPM). BPM is a core component of hyperautomation and supports long-term sustainability and operational excellence. The combination of BPM solutions with low-code, RPA, AI, and ML has become a driving force for digital transformations, integrating essential data, connecting your workforce, and developing applications. It is up to technology leaders to create a clear strategy, set objectives, and prioritize actions across all business operations. Doing so ensures that the application of automation is efficient. Employees on the front lines are also in an excellent position to identify which processes would benefit most from automation. This can be supported by implementing a demand management solution. It can then be synchronized with the organization’s change management to ensure employees understand the changes and are prepared for more advanced processes, thus elevating the workforce. Organizations may be wary of the costs of change on such a large scale, but the process of integrating technologies does not always require creating a new infrastructure to replace manual operations.
Many RPA, AI, and ML solutions can be integrated into automation and technologies that already exist. The future of hyperautomation The next generation of hyperautomation includes support for more complex processes and long-running workflows. Software robots will be able to interact with business users across core business functions, directly impacting the customer experience. Hyperautomation represents the next step in intelligent automation and will transform how we work in the future. It allows businesses to protect their investments through a holistic approach to digital transformation. As hyperautomation becomes more prevalent, we will realize a seamless and equal blend of robotics, human employees, and existing systems, which will all work collaboratively in a way never seen before. No matter your industry, hyperautomation is worth consideration for its potential cost savings, intelligent processing, intelligence mining, employee efficiencies, and customer service improvements. Learn more about how hyperautomation technologies like ML and AI can benefit you.
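To make “building intelligence into a process” a little more concrete, here is a hedged sketch of a single hyperautomation step: OCR an incoming invoice image and route it based on a simple rule. The file name, field format, and approval threshold are assumptions for illustration; a production system would pair OCR with process mining, ML models, and BPM orchestration as described above.

```python
# A minimal, hypothetical sketch of one hyperautomation step:
# OCR an invoice image, extract the total, and route it for approval.
import re

import pytesseract          # wrapper around the Tesseract OCR engine
from PIL import Image

def route_invoice(path: str) -> str:
    text = pytesseract.image_to_string(Image.open(path))  # OCR step

    # Look for a line like "Total: $1,234.56" in the OCR output
    match = re.search(r"total[:\s]*\$?([\d,]+\.\d{2})", text, re.IGNORECASE)
    if not match:
        return "manual-review"  # no total found, so a human takes over

    amount = float(match.group(1).replace(",", ""))
    # Simple business rule standing in for a richer ML-driven decision
    return "auto-approve" if amount < 5000 else "manager-approval"

# The file name here is illustrative, not a real document
print(route_invoice("invoice_0042.png"))
```

The point of the sketch is the pattern: a previously manual step (reading a document) becomes an automated one, and the decision logic at the end is where RPA rules can gradually be replaced by trained models.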
Advances in mobile health technology have transformed the entire landscape of healthcare, including the ability of physician groups, employers, nursing home facilities, and pharmaceutical companies to capture data in the healthcare space. Gone are the days of paper tracking for your glucose levels and blood pressure. Instead, wearable devices like watches and trackers can seamlessly provide real-time data streams to applications and third parties. The data collected from wearables can be used for clinical research, patient monitoring, and wellness tracking, among other uses. Each data point collected can add complexity to your broader data set. Because of this volume and complexity, machine learning (ML) can help organizations leverage their data to identify patterns and make data-driven decisions. By applying machine learning techniques to wearable device data, we can now surface patterns in big data and make predictions about behavior. Machine learning enables healthcare-related industries to leverage wearable device data and identify trends, improve recommendations, and define research outcomes. Popularity of wearable devices Wearables are popular, and their adoption continues to grow. Globally, the wearable technology market is expected to grow from $69 billion in 2020 to $81.5 billion in 2021, an 18.1% increase, according to the latest forecast from Gartner. What’s fueling the growth? Demand for smart devices in the healthcare sector is rising, as is demand for Internet of Things (IoT) devices. Many devices are not fitness-specific, featuring text message notifications, push notifications for mobile apps, and the ability to pay for items by scanning a QR code with Google Pay or Apple Pay. As such, they have broad appeal. “As a result of the pandemic, we have seen wearable devices become much more than just activity trackers for sports enthusiasts. These devices are now capable of providing accurate measurements of your health vitals in real time. Improved measurement accuracy coupled with the latest advancements in ML make it possible to detect abnormalities before they lead to a major health event.” – Alex Matsukevich, Fusion Alliance Director of Mobile Solutions Types of data collected by wearable devices There are a variety of brands and categories of wearable devices, from mass-market consumer versions to highly specialized types created for niche uses. Apple, Fitbit, Google, Samsung, Garmin, LG, Sony, and Microsoft dominate the market. The concept of “wearables” tends to center on wristwatches, but exercise equipment, glasses, and textile sensors are also becoming more common. Wearable devices can measure:
- Sleeping patterns
- Heart rate
- Irregular heart rhythms
- Location/route during exercise
- Pace, stride, and distance while moving
- Blood oxygen levels
- Falls
Limitations of wearable device accuracy Wearables do have limitations, and accuracy is a concern. Healthcare decisions made using erroneous data could have outcomes detrimental to a patient’s overall health. A study from the University of Michigan reviewed 158 publications examining nine different commercial device brands. In laboratory-based settings, Fitbit, Apple Watch, and Samsung appeared to measure steps accurately. Heart rate measurement was more variable, with Apple Watch and Garmin being the most accurate, and Fitbit tending toward underestimation. But for energy expenditure (calories burned), no brand was deemed accurate.
This does not mean that the results are invalid, but that there is a significant difference between results from wearables and clinical results in a lab setting. Wearable devices are constantly upgraded and redesigned as technology improves. And data collected by wearables does not provide a clinical diagnosis. As such, this data is just part of the larger picture of health and can be used only in conjunction with other factors to evaluate your overall wellbeing. Overcoming the biggest challenge of wearable device data analysis Healthcare professionals are already using ML to analyze data for patients. Research published in the International Journal of Research and Analytical Reviews confirms that ML techniques are successful in predicting health conditions such as heart disease, diabetes, breast cancer, and thyroid cancer. The biggest hurdle to incorporating device data into broader data sets is the addition of new inputs, such as hours of sleep or total steps walked per day. Traditional data points such as total cholesterol or blood pressure readings are less frequent, so there is a smaller amount of data overall. The challenge is finding the best way to incorporate the new inputs into other data sets to create a more comprehensive picture of health. The future of wearable device data and machine learning We can glimpse into the future of wearable device data and machine learning with Microsoft’s recent patent filing. Their potential product aims to provide wellness recommendations based on biometric data, such as blood pressure and heart rate, pertaining to work events. To do this, Microsoft requests access to applications used by employees. Microsoft then tracks data points such as:
- Duration of time spent writing emails
- Number of times a user refreshes their inbox
- Time spent reading emails
- Number of corrections made when writing emails
- Recipient list for emails
- Number of meetings in a day
- Tone of language in emails
By combining this information with biometric data (from a secondary device such as a Fitbit or Apple Watch) and machine learning, Microsoft could begin to understand what work events trigger a response. For example, suppose an employee received an email from their manager. Microsoft might observe that the employee spent a higher-than-average amount of time reading the email and that the employee’s heart rate was also elevated during this time. Based on these insights, Microsoft could propose recommendations for helping employees manage stress levels, highlighting events that trigger anxiety. [Image: patent filing sample from Microsoft outlining tips and recommendations to improve employee wellness] With a broad user base using both Office and Teams already, Microsoft has a deep understanding of work-related events. As Facebook built their business making sense of our social lives, Microsoft has the potential to optimize our work lives. “Wearables combined with machine learning will become the new standard in personalized consumer electronics, rapidly increasing in popularity and scale every year. An integrated device of the future will be able to get a baseline of your health and will alert you to any abnormalities present. We already see this happening with the new Apple Watch, and it will be very soon that this technology becomes commonplace.” – Michael Vieck, Fusion Alliance Software Developer Wearable devices will transform healthcare experiences Data is the key to predicting, understanding, and improving health outcomes.
IBM Research anticipates that the average person will generate more than 1 million gigabytes of health-related data in their lifetime, equivalent to 300 million books. The sheer volume of data means that machine learning will be vital in making sense of it. Paired together, wearable devices and machine learning have the potential to transform healthcare experiences. Today’s applications and uses are only the beginning. Read more:
- Top 3 reasons to invest in machine learning for mobile
- Machine learning and wearable devices of the future
- Wearing Your Intelligence: How to Apply Artificial Intelligence in Wearables and IoT
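As a rough illustration of the abnormality detection mentioned above, the sketch below trains an off-the-shelf anomaly detector on simulated heart-rate readings. The data, model choice, and thresholds are synthetic assumptions for illustration only; real clinical use would require validated devices, far more context, and medical oversight.

```python
# A hedged sketch: flag unusual heart-rate readings from a wearable
# using scikit-learn's IsolationForest. All data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated resting heart rate (beats per minute), one reading per minute
normal = rng.normal(loc=68, scale=5, size=(1440, 1))   # a typical day
spikes = np.array([[140.0], [38.0], [155.0]])           # injected anomalies
readings = np.vstack([normal, spikes])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(readings)

flags = model.predict(readings)          # -1 = anomaly, 1 = normal
anomalies = readings[flags == -1]
print(f"Flagged {len(anomalies)} readings, e.g. {anomalies[:3].ravel()}")
```

The design point is that the model learns what “normal” looks like for one person’s stream, so the same approach personalizes automatically as the baseline shifts.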
Return on investment (ROI) is top of mind for everyone. With so many competing priorities, how you spend your time and money, and what you get for it, matters more than ever. The focus used to be entirely on the level of your investment. But the paradigm is shifting because of the data capabilities that now exist. In this article, we’ll explore how the definition of ROI has changed because of modern technology and approaches, where your data ROI comes from, and how to accelerate it. Setting goals for your data analytics efforts Data without analytics is ultimately an investment without return. Most organizations sit on troves of data, but can’t do anything with it. But analytics is a progression. Each level on the data analytics maturity model represents questions you can begin to answer. To go from nothing to cognitive takes a lot — your investment increases substantially.
- Descriptive: What happened?
- Diagnostic: Why did it happen?
- Predictive: What will happen?
- Prescriptive: How can it happen?
- Cognitive: What can be suggested?
With each step up the model, you add more information and complexity. For example, the descriptive level of maturity can be answered with a look at history. As you progress, you will need more information and stronger data relationships to better understand the “why.” Your data quality and integrity are also important. When you get to the cognitive step, you’re expanding outside your universe of data, and the contextual element of what you’re doing gets broader. For this step, consider Microsoft’s Cortana or IBM’s Watson. But in the modern data world, there doesn’t have to be a huge upfront investment. Shifting your focus from a return on investment to a return on insights can drastically impact how you invest in your data and your results. Calculating the ROI of data and analytics projects Ultimately, ROI is realized from leveraging effective data management to enable access to:
- more and better data
- maximized visualizations
- advanced analytics
- actionable insights for outcomes
For data management, that means:
- Improved quality and completeness
- Confidence and trust
- Accountability through governance
- Improved stewardship
- Advancing culture change to help stakeholders understand the importance and value proposition
By improving your data management, the insights from your data become better and more actionable, including:
- Access to more data and the inclusion of new sources
- Faster and easier access to data
- Greater integration of disparate data
- Easier standup and use of analytics technology
In years past, insights from data analytics might have been limited to data scientists or experts in the field. But now, with analytics tools and technologies, data insights are useful to — and actionable for — people across the entire organization. The larger the investment in time and money, the more emphasis on ROI, how quickly it can be realized, and the amount of trailing value. Data leaders have to work with their organizations to understand what the best strategy is – whether that be a smaller investment with a slower return or a big investment that allows you to realize your ROI sooner. It is critical to evaluate your organization’s needs, expectations, and goals before making decisions on strategic data management. Understanding the classic data ecosystem In a classic data ecosystem, the setup might look something like this.
A classic data ecosystem requires deep analysis to understand data sources and definitions, and a considerable amount of time and effort to reach the gold standard you need for your data to be used for BI and analytics. Investments are required on all layers. There is no real way to invest in one aspect of your data and analytics and still find value. There is also significant effort required to ensure that as you introduce new systems, you don’t break legacy systems and processes already in place. Quite often, work must be done upfront to ensure that changes (even upgrades) will not cause disruption. In addition, significant “time-to-market” factors need to be considered with classic data ecosystems. Often, the slow delivery of data and features forces businesses to make incremental changes without undertaking any kind of larger project. Doing so might be helpful at first, but can cause issues later. With a slow delivery of data, many organizations using a classic data ecosystem find that they are unable to keep up with the pace of business today. Classic data ecosystems are often built to meet reporting needs, not analytical needs, and the analytics piece is a one-off project. The deployment and incorporation of analytical models into production in a classic setup requires a considerable amount of time and customization. Building a modern data ecosystem In this more modern data ecosystem, there is a more layered approach. Now it is much easier to gather, ingest, and integrate the data and bridge gaps between systems, and to adopt new concepts like data lakes, which can organize data into bronze, silver, and gold layers. You don’t have to invest fully in all of the layers; you can invest where you need to. You still have BI & analytics capabilities, but you have more of an application integration framework that serves additional needs. And trust in the data improves as well. This setup allows for more flexibility and customization for all parts of the organization. Read more: How to build your data analytics capabilities The value of incremental change Your investment doesn’t have to be an all-or-nothing proposition. You can incrementally build out components and capabilities and can make data available for exploration without the deep upfront analysis that often slows everything down. Additionally, you can control the degree of your investment in a significant way. Because a flexible architecture means you don’t have to push data through all the layers to make it useful, you don’t have to make a significant upfront investment for the change to be worth it. You can also leverage external tools in the interim. Service and subscription-based features allow for fast initiation, and exploratory efforts can be stood up and torn down easily and quickly. New technologies and design/development paradigms enable faster adoption overall. And now, more user groups are able to access data and analytics, create more use cases, and make business decisions on the insights. Ultimately, it is time to shift your thinking on ROI, leverage modern data technology and tools, and focus on the return on insights, intelligence, and innovation. For more information on data, analytics, and assigning ROI, Fusion’s Vice President of Data, Saj Patel, recently spoke at the CDO Summit. His presentation details how to accelerate ROI and gain buy-in across your organization. Watch the recording here and connect with us if you have any specific questions. Learn more about Strategic Data Management here.
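To make the bronze/silver/gold layering mentioned above more tangible, here is a minimal PySpark sketch of data moving through those layers. The storage paths, schema, and column names are assumptions for illustration, not a prescribed architecture.

```python
# A hedged sketch of bronze/silver/gold layers in a data lake, using PySpark.
# Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw events landed as-is from source systems
bronze = spark.read.json("s3://lake/bronze/orders/")  # assumed landing path

# Silver: cleaned and de-duplicated, ready for exploration
silver = (
    bronze.filter(F.col("order_id").isNotNull())
          .dropDuplicates(["order_id"])
)
silver.write.mode("overwrite").parquet("s3://lake/silver/orders/")

# Gold: business-level aggregates consumed by BI and analytics
gold = silver.groupBy("customer_id").agg(
    F.count("order_id").alias("order_count"),
    F.sum("amount").alias("lifetime_value"),
)
gold.write.mode("overwrite").parquet("s3://lake/gold/customer_value/")
```

Note how the layering supports the incremental-investment point above: raw data can land in bronze and be explored immediately, while silver and gold refinement is added only where the business case justifies it.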
While every organization’s journey to digital transformation looks different, one thing remains the same — the importance of data. Tackling your data systems and processes is vital to fully transform. The reality, though, is that most organizations are overwhelmed with data about their customers, and these troves of information are useless unless companies know that the data they have is accurate and how to analyze it to make the right business decisions. In today’s world, organizations have been forced to pivot and have realized the value data can bring to drive insight and empower their decision-making. However, many organizations have also recognized their data immaturity. So how do you move forward? The role of data in digital transformation Data can be your organization’s biggest asset, but only if it is used correctly. And things have changed. A lot of organizations have completed the first steps in their digital transformation, but now they are stuck — they aren’t getting the results they expected. Why? They haven’t truly leveraged their data. According to Forrester, “Firms make fewer than 50% of their decisions on quantitative information as opposed to gut feelings, experiences, or opinions.” The same survey also showed that while 85% of those respondents wanted to improve their use of data insights, 91% found it challenging to do so. So, now that you’ve got the data, how can you make it more valuable? Data strategy is key to your digital transformation With so many systems and devices connected, the right information and reporting is critical. But first, you have to make sure you have the right technology in place. Utilizing big data Although you might feel inundated with the amount of data you have coming in, using big data analytics can bring significant value to your digital transformation. Through big data analytics, you can get to a granular level and create an unprecedented customer experience. With information about what customers buy, when they buy it, how often they buy it, etc., you can meet their future needs. It enables both digitization and automation to improve efficiency and business processes. Optimizing your legacy systems Legacy systems are critical to your everyday business, but can be slow to change. Why fix what’s not necessarily broken? But just because systems are functioning correctly doesn’t mean they’re functioning at the level you need them to — a level that is conducive to achieving your data and digital transformation goals. This doesn’t have to mean an entire overhaul. You’ve likely invested a lot into your legacy systems. One key to a good data strategy is understanding how to leverage your legacy systems to make them a part of (instead of a roadblock to) your digital transformation. With the enormous scale of data so closely tied to applications, coding and deployment can often make this stage of your digital transformation feel overwhelming. Sometimes DevOps tooling and processes are incompatible with these systems, leaving them unable to benefit from Agile techniques, continuous integration, and delivery tooling. But it doesn’t have to feel impossible — you just need the right plan and the right technology. Focusing on your data quality Even with the right plan and technology, you have to have the right data. Bad data can have huge consequences for an organization and can lead to business decisions made on inaccurate analytics.
Ultimately, good data needs to meet five criteria: accuracy, relevancy, completeness, timeliness, and consistency. With these criteria in place, you will be in the right position to use your data to achieve your digital transformation goals. Implementing a data strategy with digital transformation in mind So how do you implement your data strategy? You should start by tackling your data engineering and data analytics. The more you can trust your data, the more possibilities you have. By solving your data quality problem, you can achieve trust in your data analytics. And then, the more data you have on your customers, the more effective you can make your customer experience. But this all requires a comprehensive data strategy that allows your quality data to be compiled and analyzed so you can use it to create actionable insights. The biggest tools to help here are AI and machine learning. The benefits of a data-driven digital transformation The benefits of investing in your data are clear, including increased speed to market, faster incremental returns, extended capabilities, and easier access and integration of data. Discover more about the different ways you can invest in your data and improve and accelerate ROI for your organization. Ultimately, your goal is to elevate how you deliver value to your customers. Digital transformation is the key to understanding your customers better and providing a personalized customer experience for them. Leveraging your data can make all the difference between you and your competitors. And we’re here to help. Learn more about how some of our clients have benefited from investing in their data and digital transformation.
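As a practical footnote to the five criteria above, here is a hedged sketch of what automated checks might look like in Python with pandas. The file, columns, and thresholds are illustrative assumptions; relevancy, by contrast, is a business judgment that resists automation, so it is omitted here.

```python
# A hypothetical sketch: checking a customer data set against four of the
# five data quality criteria. Column names and thresholds are assumptions.
import pandas as pd

df = pd.read_csv("customers.csv", parse_dates=["last_updated"])  # assumed export

checks = {
    # Completeness: are critical fields populated?
    "completeness": df[["email", "postal_code"]].notna().mean().min() > 0.95,
    # Timeliness: has the data been refreshed recently?
    "timeliness": (pd.Timestamp.now() - df["last_updated"].max()).days <= 7,
    # Consistency: do values conform to an expected format?
    "consistency": df["email"].str.contains("@", na=False).all(),
    # Accuracy (proxy): are numeric values within a plausible range?
    "accuracy": df["age"].between(0, 120).all(),
}

for criterion, passed in checks.items():
    print(f"{criterion}: {'PASS' if passed else 'FAIL'}")
```

Checks like these are a starting point, not a substitute for governance; the value comes from running them continuously so bad data is caught before it feeds analytics.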
The pandemic made it clear that traditional banking is a thing of the past. Online banking had already been on the rise, but a 200% jump in new mobile banking registrations in April 2020 established that customers are able and willing to change. As more Americans bank virtually, banks are fighting to meet customer demand. And beyond the challenges set forth by the pandemic, “digital natives” like Rocket Mortgage, Venmo, Stripe, and Robinhood are all vying for business. These technology-forward organizations position themselves differently from traditional financial institutions and are attracting a younger user base for their services. But a traditional bank has advantages over these challengers:
- Familiarity and history: Your personal relationships mean that your bank often knows its customers and their history better. And customer questions can be answered in person instead of being routed through a call center.
- Deep and rich data: Historical data can prove invaluable for ML efforts. Customer deposit amounts, payments, and balance information can be used to predict future behavior.
- Preference for personal banking: Customers, especially those with a high net worth, may be wary of depending on digital channels for wealth management. A new brand might be a risk, and they could feel uncomfortable not having a specific person to call if something goes awry.
As we get back to our “new normal,” traditional banks can use the rich data and relationships they have with customers to their advantage. Forward-thinking leaders are reimagining what it looks like to do business, and they’re using machine learning to elevate the customer experience. Discover how you can use machine learning to create engaging and profitable relationships with your customers. Every bank can find value from machine learning Machine learning might sound like a type of data analysis useful to only the largest of organizations, but its concepts can scale to meet the needs of small and mid-sized banks too. When we use the term machine learning (ML), we are referring to machines and systems that can learn from “experience” supplied by data and algorithms. In banking specifically, ML algorithms can be used to identify patterns in data beyond what humans are capable of observing, and these learnings can be applied to new data sets. It is now possible to improve the customer experience using ML. By parsing customer transaction data, ML can identify clues and patterns ahead of time, even before the customer considers taking action. For example, the process of buying a home and obtaining a mortgage might begin with small savings accumulation or an increase in deposit amounts from wages. ML models can assess banking-specific data like credit patterns, risk tolerance, and price sensitivity, and can be coupled with demographic data like age, median income, and distance to branch. The goal of using ML data in this use case is to target prospective customers with offers most relevant to their situation and stay ahead of customer demands. Knowing where to begin — and where to focus efforts Machine learning has such a wide variety of applications, it can be difficult to know where to start. Identifying a use case for customer-focused ML expenditures is a good first step. In general, we have found that you can benefit from starting with a use case with low or medium relative complexity. Examples focused on improving the customer experience include:
- Predicting service line interest (HELOC, mortgage refinance, etc.)
- Streamlining loan approval processes
- Increasing lines of credit
- Improving fraud alert notifications
With so many use cases to choose from, it can be easy to get lost in the planning for each example. Instead, try focusing on one area at a time. Using your strengths, combined with ML concepts, you can deliver an optimal customer experience that digital challengers just can’t match. Need help getting started? Check out our Machine Learning Jumpstart program. Cross-selling across the relationship with machine learning You can leverage machine learning to determine not only which customers would be a good fit for a mortgage loan, but also the other products that customers might need. Staying with the mortgage example, a Home Equity Line of Credit (HELOC) might be a good match for a new homeowner. In any case, the message and product can be tailored to meet the customer’s specific needs. Another part of cross-selling is to personalize the offer based on the customer’s history and propensity to buy. Perhaps a favorable interest rate would be meaningful for one type of customer, while a waived application processing fee would entice another. For individuals identified as interested in high-revenue products, the marketing effort can be even more personalized, like a phone call or an in-person event invitation. Applying machine learning in real life The following illustration is an example of how an internal dashboard might appear to a banker or service representative. For any specific product, each person has a percentage likelihood that they will take action. Individual model scores are shown, along with next steps, such as outreach about an investment account, or mailing a promotion about mortgage rate refinancing. In this example, marketing inputs, like website data, are combined with transaction and deposit information. When a banker or service representative encounters a customer, either in person or on the phone, they can suggest specific next steps, or ask if the customer has questions. Having a dashboard with this information empowers banking employees to guide the conversation with data in real time. Related Case Study: Machine learning predicts outcomes at Primary Financial How does a financial services firm improve sales targeting to predict its clients’ desires to invest? Machine learning was the answer for PFC. Learn more. FAQs about machine learning and banking Does the machine learning process work fast enough to enable real-time benefits? For all but the most complex scenarios, yes! Normally, ML is fast enough to be integrated into real-time transactions. Does machine learning get in the way of compliance requirements? In general, no. By using existing data that you obtained or using your data in coordination with third-party data, you are not running afoul of privacy and compliance concerns. How do we ensure the machine learning use case we pick is right, given that there are so many to choose from? We recommend focusing initially on those low-cost, high-ROI use cases with a low-medium relative complexity. Given additional experience, context, pipelines, and an understanding of how advanced analytics programs operate, more complex initiatives can be undertaken. Data reliability can be a concern. Using low-quality data is not advised, but it is possible to start projects with small data sets. Engaging a third party to evaluate your situation can help in cases like this.
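For the technically curious, here is a hedged sketch of how a per-product propensity score, like the percentage likelihoods in the dashboard example above, might be produced with scikit-learn. The features, labels, and data are synthetic stand-ins, not a real bank’s schema or any particular production approach.

```python
# A hypothetical propensity-scoring sketch: predict the likelihood that a
# customer opens a HELOC, using synthetic features as stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=7)
n = 5000

# Synthetic features: deposit growth, savings balance, age, branch distance
X = np.column_stack([
    rng.normal(0.02, 0.05, n),    # monthly deposit growth rate
    rng.lognormal(8, 1, n),       # savings balance
    rng.integers(21, 75, n),      # age
    rng.exponential(5, n),        # miles to nearest branch
])
# Synthetic label: did the customer open a HELOC within 12 months?
y = (X[:, 0] + rng.normal(0, 0.05, n) > 0.05).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# The dashboard's "percentage likelihood" is just the predicted probability
scores = model.predict_proba(X_test)[:, 1]
print(f"Sample propensity scores: {np.round(scores[:5] * 100, 1)}%")
```

In practice, one model per product (mortgage, HELOC, investment account) would feed the dashboard, and the scores would be refreshed as new transaction data arrives.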
Reimagining customer insights & relationships Banks that employ machine learning will have more customers than ever positioned for a variety of banking products, delivered in a digital, personalized, and meaningful way. Now is the time to act and implement machine learning to meet customers where they are, using the contact methods that they desire and delivering the products and services to best meet their needs. Need help building your use case and plan? Access our Machine Learning Use Case Guide for Banks. Want to dig deeper? Check out our webinar on this topic.
Getting the right data to the right people at the right time is the name of the game in today’s demanding marketplace. Every company has to find a way to harness big data and use it to drive growth. And if your organization isn’t talking big data, you are at a competitive disadvantage. This article covers a top-level view of big data’s evolution and key components. It can help you understand the importance of big data and the technologies essential to the discussion. With this foundation, you can proceed to the next step — addressing what to do with your data and how. Just how much data exists? Every passing moment, the pace of data creation continues to compound. In the time it takes you to read these few paragraphs there will be:
- more than 200 million emails sent
- millions of dollars in e-commerce transacted
- 50 hours of YouTube videos uploaded
- millions of Google searches launched
- tens of millions of photos shared
Every few minutes, this cycle repeats and grows. In 2019, 90% of the world’s digital data had been created in the prior two years alone. By 2025, the global datasphere will grow to 175 zettabytes (up from 45 zettabytes in 2019). And nearly 30% of the world’s data will need real-time processing. Over the last decade, an entire ecosystem of technologies has emerged to meet the business demand for processing an unprecedented amount of consumer data. What is big data? Big data happens when there is more input than can be processed using current data management systems. The arrival of smartphones and tablets was the tipping point that led to big data. With the internet as the catalyst, data creation exploded with the ability to have music, documents, books, movies, conversations, images, text messages, announcements, and alerts readily accessible. Digital channels (websites, applications, social media) exist to entertain, inform, and add convenience to our lives. But their role goes beyond the consumer audience — accumulating invaluable data to inform business strategies. Digital technology that logs, aggregates, and integrates with open data sources enables organizations to get the most out of their data, and methodically improves bottom lines. Big data can be categorized into structured, unstructured, and semi-structured formats. The development of modern data architecture Until recently, businesses relied on basic technologies from select vendors. In the 1980s, Windows and the Mac OS debuted with integrated data management technology, and early versions of relational database engines began to become commercially viable. Then Linux came onto the scene in 1991, releasing a free operating system kernel. This paved the way for big data management. What is big data technology? Big data technologies refer to the software specifically designed to analyze, process, and extract information from complex data sets. There are different programs and systems that can do this. Distributed file systems In the early 2000s, Google proposed the Google file system, a technology for indexing and managing mounting data. A key tenet of the idea was using more low-cost machines to accomplish big tasks more efficiently and inexpensively than the hardware on a central server. Before the Information Age, data was transactional and structured. Today’s data is assorted and needs a file system that can ingest and sort massive influxes of unstructured data.
Open-source and commercial software tools automate the necessary actions to enable the new varieties of data, and its attendant metadata, to be readily available for analysis. Hadoop Inspired by the promise of distributing the processing load for the increasing volumes of data, Doug Cutting and Mike Cafarella created Hadoop in 2005. The Apache Software Foundation took the value of data to the next level with the release of Hadoop in Dec. 2011. Today, this open-source software technology is packaged with services and support from new vendors to manage companies’ most valuable asset: data. The Hadoop architecture relies on distributing workloads across numerous low-cost commodity servers. Each of these “pizza boxes” (so called because they are an inch high and less than 20 inches wide and deep) has a CPU, memory, and disk storage. They are simple servers with the ability to process immense amounts of various, unstructured data when running as nodes in a Hadoop cluster. A more powerful machine called the “name node” manages the distribution of incoming data across the nodes. By default, data is written to at least three nodes and might not exist in its entirety as a single file in any one node. Below is a simple diagram that illustrates the Hadoop architecture at work. Open source software The majority of enterprises today use open source software (OSS). From operating systems to utilities to data management software, OSS has become the standard fare for corporate software development groups. The Apache Software Foundation is a non-profit group of thousands of volunteers who contribute their time and skills to building useful software tools. As the creators, Apache continuously works to enhance Hadoop code — including its distributed file system called Hadoop Distributed File System (HDFS) — as well as the code distribution and execution features known as MapReduce. Within the past few years, Apache released nearly 50 related software systems and components for the Hadoop ecosystem. Several of these systems have counterparts in the commercial software industry. Vendors have packaged Apache’s Hadoop with user interfaces and extensions, while offering enterprise-class support for a service fee. In this segment of the OSS industry, Cloudera, Hortonworks, and Pivotal are leading firms serving big data environments. These systems are now developed so closely against the core Hadoop environment that no commercial vendor has attempted to replicate the functionality on its own. The range of OSS systems, tools, products, and extensions to Hadoop includes capabilities to import, query, secure, schedule, manage, and analyze data from various sources. Storage Corporate NAS and SAN technologies, cloud storage, and on-demand programmatic requests returning JSON, XML, or other structures are often secure repositories of ancillary data. The same applies to public datasets — freely available data covering economic activity by industry classification, weather, demographics, location, and thousands of other topics. Data at this scale demands storage. Distributed file systems greatly reduce storage costs while providing redundancy and high availability. Each node has its local storage. These drives don’t need the speed of solid-state drives (SSDs); they are inexpensive, high-capacity, pedestrian drives. Upon ingestion, each file is written to three drives by default.
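Processing in Hadoop follows the same distributed pattern as storage. To make the MapReduce idea mentioned above concrete, here is a minimal word-count sketch in the Hadoop Streaming style, where the mapper and reducer are ordinary programs reading from stdin and the framework handles distribution, sorting, and shuffling between them. The two roles would normally live in separate files passed to the streaming jar; they are combined here for brevity.

```python
# A minimal MapReduce word-count sketch in the Hadoop Streaming style.
# Locally, the same pipeline can be simulated with:
#   cat input.txt | python wordcount.py map | sort | python wordcount.py reduce
import sys

def mapper():
    # Emit "word<TAB>1" for every word on stdin; Hadoop sorts these by key
    for line in sys.stdin:
        for word in line.strip().lower().split():
            print(f"{word}\t1")

def reducer():
    # Receive sorted "word<TAB>count" pairs and sum the counts per word
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

The appeal of the model is that neither function knows anything about the cluster: the framework splits the input across nodes, runs many mapper copies in parallel, and routes each word’s counts to a single reducer.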
Hadoop’s management tools and the Name Node monitor each node’s activity and health so that poorly performing nodes can be bypassed or taken out of the distributed file system index for maintenance. The term “data lake” describes the vast storage of different types of data. These vastly different data sources arrive in at least a dozen file formats. Some are compressed or zipped. Some have associated machine data, as found in photos taken with any phone or digital camera. The date, camera settings, and often the location are available for analysis. For example, a query to the lake for text messages that included an image taken between 9 p.m. and 2 a.m. on Friday or Saturday nights in Orlando on an iPhone would probably show fireworks at Disney World in at least 25% of the images. Administration of big data initiatives Enterprise administration of applications — storage requirements, security granularity, compliance, and dependencies — required Hadoop distributions (like those from Cloudera and Hortonworks) to mature these capabilities in the course of becoming managed enterprise services. In the graphic above, you can see a view of Hadoop’s place among other software ecosystems. Note that popular analysis tools (below) are valuable in developing big data solutions:
- Excel and Tableau
- databases such as SQL Server and Oracle
- development platforms such as Java and Informatica Data Quality
Administration through Cisco and HP tools is common. Further development of big data Commercial software companies have begun connecting to Hadoop, offering functionality such as:
- data integration
- quality assessment
- context management
- visualization and analysis
from companies such as IBM, Microsoft, Informatica, SAP, Tableau, Experian, and other established vendors. Analytics and data science Analytics is the endgame for developing a big data environment. The rise of big data has given credence to a new resource classification, the data scientist — a person who embodies an analyst, technologist, and statistician all in one. Using several approaches, a data scientist might perform exploratory queries using Spark or Impala, or might use a programming language such as R or Python. As a free language, R is rapidly growing in popularity. It is approachable by anyone who is comfortable with macro languages such as those found in Excel. R and its libraries implement statistical and graphical techniques. Moving to the cloud Cloud computing is very different from owning server-class hardware and software. It involves cloud storage, multi-tenant shared hosts, and managed virtual servers that are not housed on a company’s premises. In cloud environments, an organization does not own equipment, nor does it employ the network and security technologists to manage the systems. Cloud computing provides a hosted experience, where services are fully remote and accessed with a browser. The investment to build a 10- or 20-node HDFS cluster in the cloud is relatively small compared to the cost of implementing a large-scale server cluster with conventional technologies. The initial build-out of redundant centers by Amazon, Microsoft, Google, IBM, Rackspace, and others has passed. We now have systems available at prices below the cost of a single technician. Today, cloud computing fees change rapidly, with pricing measured by various usage patterns. Conclusion: The rise of big data is evident Big data is not a fad or soon-to-fade trending hashtag.
Since you began reading this article, more than 100 million photos have been created, with a sizeable portion having a first-degree relationship to your industry. And the pace of data creation continues to increase. The distribution of computing processes can help organizations gain a 360-degree view of their customers through big data collection and analysis. And companies that embrace big data technologies and solutions will pull ahead of their competitors. Big data technologies are becoming an industry standard in finance, commerce, insurance, healthcare, and distribution. Embracing them is key to optimization and continued growth: companies that do can keep improving management and operational processes and create a competitive advantage to withstand an ever-evolving marketplace.
What will the future hold when it comes to digital transformation? We don’t have a Magic 8 Ball or special spidey sense, but our team does anticipate sizeable change. We asked a few of our team members what they thought based on their work and personal experiences. Here’s what they’re envisioning for 2021 and beyond. The cloud gains more ground I expect cloud-based business application platforms such as Dynamics 365, Salesforce, ServiceNow, and Workday to drive significant digital transformation within modern workplaces in the next year. Following on the heels of many core infrastructure services moving to the cloud — such as email, servers, files, and data — the next major lift for many organizations will be to modernize and automate their core business processes. I anticipate areas like finance, HR, production, and other critical business operations and workflows will be the next major shift to using cloud-based business application platforms. Moving away from legacy, on-premises solutions is not always a simple task, but in doing so, employees can then work remotely without being tethered to an office environment. Greg Deckler, Vice President, Cloud Services Connect with Greg on LinkedIn Remote workers collaborate differently The work-from-anywhere model has been proven to work, and it will continue. However, right now, a Zoom meeting is about the extent of what most people see as remote teamwork — and we all know those can be exhausting. I predict greater adoption of tools like Miro and Mural. These online workspaces allow for active collaborating and co-creating in real time. The need to move quickly and keep pace with digital transformation will require these types of tools, and those who know how to leverage them, to make the most of a remote team’s time together. Doug Scamahorn, Solution Director, UX Design & Innovation Connect with Doug on LinkedIn Cookie compensation I think 2021 will be the year when businesses and marketers confront the pending deprecation of the third-party cookie. Google is driving the industry towards new solutions for retargeting and attribution following the announcement that Chrome will cease to support third-party cookies in 2022. While industry players debate over a long-term replacement, expect to see a scramble to shore up first-party data in the meantime. At a tactical level, this will look like increased pushes for “registered” online experiences where users must explicitly identify themselves, as well as the integrations that power these points of data collection. In the background, businesses will be pushing to connect the dots between online and offline touchpoints using a variety of identifiers, from email to devices to data from “walled gardens” like Amazon, Facebook, and even Walmart and Target. Companies may opt for a CDP (consumer data platform) solution on top of their existing data stack to manage data points specifically for targeted marketing campaigns. When reporting on campaign success and attribution, analysts may need to adopt new tools and strategies for managing “fuzzier” readouts on customer behavior and journey identification. Amy Brown, Solutions Director Connect with Amy on LinkedIn Augmented reality becomes actual reality As mobile processing and bandwidth progress and mature, we can expect more augmented reality (AR) apps to provide visual assistance in a huge range of applications.
I fully expect to see heads-up displays in vehicles, smart glasses (remember Google Glass?), and other transparent displays adopted by more companies and, in turn, individuals. Visual processing in itself is gaining in popularity. Retailers like IKEA are already using AR with their IKEA Place app to enable customers to “see” furniture in their spaces. Microsoft’s recent HoloLens release is a good example of where we’re headed. Jeremy Keiper, Competency Lead Connect with Jeremy on LinkedIn B2B marketers will get more creative There’s always been an understanding that marketing is both an art and a science. Over the last decade, marketers have leaned into the science. Data provided marketers with information about customer behavior that was never available before. Even before the pandemic, B2B marketers were relying heavily on digital channels to engage customers. But pandemic office closures caused marketers to rely on channels like email, webinars, social media, and search engine marketing (SEM), in an attempt to reach prospective buyers who were now working from home. And they had to get creative. Marketers had to be willing to test new ideas and try things that haven’t been “proven,” and to think creatively about how they connect with and engage prospects and customers. I expect this to continue, and marketers will use customer data to make sure they understand consumer goals and motivations, then get creative about how to reach out and connect. Kristin Raikes, Sr. Director of Digital Strategy Connect with Kristin on LinkedIn Looking ahead Thinking about the year ahead, we do know that even after offices reopen and things get back to “normal,” the new “normal” will look different than it did before. If people continue to work from home or prefer to engage with brands virtually versus physically, then technology will have to adapt. Are there any major trends not listed above that you think will be key to digital transformation this year? If you have questions about specific trends, you can also connect with our team via their LinkedIn profiles above. Our consultants and team members work with clients to improve, streamline, and create actionable change. We create exceptional customer experiences by leveraging data insights, experience design, and technology to transform the way you connect with your customers. Interested in learning more? Let us know, or sign up for our newsletter to get to know us.
Everyone is talking about digital transformation, but there's a lot of confusion and misinformation about what that term means. Much more than a buzzword or doing things "digitally," digital transformation means reimagining your business and driving it forward in a better way. When done correctly, digital transformation can fundamentally change how you deliver value to your customers. That's what you need to focus on if you want to thrive in this market. Customers today have high expectations and a lot of options, so carefully considering your customer journey is critical to success.

Before you get started, here's an overview of what you need to know about digital transformation and how to continue to drive it forward:

What exactly is digital transformation?
The evolution of digital transformation
Cloud computing fuels digital transformation
More than just technology
Keys to successful digital transformation
The challenges of digital transformation
Where are you in your journey?

What exactly is digital transformation?

Digital transformation is the integration of technology into all areas of a business, fundamentally changing how you operate and deliver value to customers. It's often confused with digitization and digitalization, but the three are distinct. Digitization is the process of converting information from a physical format to a digital one, like typing paper notes into Microsoft Word or scanning them into a PDF. Digitalization, on the other hand, uses those digital files to make the processes already in place more efficient. It makes your processes faster but doesn't evolve the processes themselves. Digital transformation does more, enabling you to interact with your customers in a new way that constantly evolves to meet both their needs and your business needs. By ideating and implementing better business processes and technologies, you'll not only create an elevated customer experience that results in increased profitability, but you'll also save significant time and money in operating costs.

The evolution of digital transformation

Digital transformation might be a popular term today, but it was just as talked about from the late 1990s through the mid-2000s. It started with companies computerizing processes 30 years ago, and as the internet became established, websites started to connect companies with their customers. That's when Fusion Alliance got started, and we've been focused on the end goal of developing solutions for clients ever since.

Digital processes emerged to support customer interactions, from sending emails to managing online ordering. As digital ambitions grew, companies realized they needed dedicated digital teams to manage social and mobile channels. Connected to customers, suppliers, and other stakeholders, companies then recognized the need to bring all of these data silos together. Seeing the potential in connectivity, organizations focused on digital platforms connecting all systems. Then they started to experiment with new digital ways of doing business, leveraging data more effectively, and creating greater agility.

Today, we live in a world where customer expectations have never been greater. Customers demand personalized experiences in every interaction with a company's products and services. Because of this, companies must innovate quickly and deliver.
In addition, the pandemic has forced IT leaders to adapt yet again, with many adopting cloud software for video collaboration and building apps that let workers enter offices governed by social distancing practices and contact tracing. Technologies such as cloud computing, the Internet of Things, and artificial intelligence all power the innovation that delivers this value.

Technology is an important element of digital transformation. But often, transformation is more about doing away with outdated processes and legacy technology than it is about adopting new tech. It should also be about enabling innovation.

Cloud computing fuels digital transformation

Companies are increasingly moving toward a hybrid cloud infrastructure. From SaaS applications and on-premises solutions to a mix of public and private clouds, hybrid cloud strategies help companies find the right balance for their unique infrastructure needs. Over the past year, cloud providers like AWS, Azure, Google, IBM, and Oracle have made investments to support hybrid strategies. OEMs like HPE, Dell (VMware), and Cisco have also increased investments in tools that enable simpler connectivity between on-premises data centers and the cloud. These investments are all centered on meeting customers where they are in the moment.

Hybrid cloud adoption was already underway before 2020's pandemic, but the sudden disruption sped things up. Being agile and nimble was, and still is, a significant business advantage.

More than just technology

Although the focus of digital transformation is often on the emerging technologies moving the business forward, true transformation has to encompass much more. Business leaders at the top level must be involved in the change and must be willing to invest in and empower employees, as well as focus on building the culture.

As employees see changes occurring around them, they might begin to wonder how those changes will affect them and their coworkers. They may question their own position within the organization and try to figure out their best next steps. During any transition, leadership must communicate with employees to help them feel secure in their positions as well as the direction of the organization. Head off potential issues by making sure your employees understand what digital transformation is, what it's not, and how they can be a part of it. Additionally, communicate how rapidly things can change throughout the process. Being as transparent as possible and preparing your employees for future changes can make all the difference in ensuring your digital transformation is successful.

Keys to successful digital transformation

Data from McKinsey shows that companies that achieve transformation success are more likely to have digital-savvy leaders in place. Less than one-third of all respondents say their organizations have engaged a chief digital officer (CDO) to support their transformations, but those that do are 1.6 times more likely than others to report a successful digital transformation. Companies that have already invested in data and infrastructure to support technology efforts are also better positioned to succeed again.

The keys to finding success with digital transformation projects vary because no two companies are the same. However, we do have a few recommendations:

Define what digital transformation means to your business
Create a map of where your business is now, including people, process, data, and technology. Then define where your business needs to be.
The gap you identify becomes a roadmap. Defining where the gaps are in the business is the first key step in your digital transformation process.

Identify and involve the right internal stakeholders
Leaders and decision-makers might not have the insider knowledge needed to work through technical challenges. Your stakeholders should come from across the organization and its departments and sit at different leadership levels.

Align with a partner to shepherd the process
You might find your business well suited to tackle projects outside your normal scope of work, but many companies lack the internal resources to undertake larger projects. An outside partner can help keep things moving and provide direction on next steps.

The challenges of digital transformation

While digital transformation is worth it and necessary to the survival of your organization, there are challenges that come with reimagining any business process. To make your transformation truly successful, ensure that you have a good understanding of your brand and your customers. Without that, the entire digital transformation strategy you build will be misguided, and you'll end up back at square one.

Budget can also be a big hindrance for companies looking to begin their digital transformation. Often, additional resources and training are needed upfront. Although the cost savings and increased profitability are worth it in the end, the initial expense can be intimidating.

Additionally, poor data quality can become a huge challenge for organizations as they go down the digital transformation road. Poor analytics often lead companies to base important decisions on misleading data. Poor data also prevents companies from using emerging technologies like artificial intelligence and machine learning, since those technologies prove useless when fed bad data. Sound strategy and good data will ensure you start your digital transformation journey on the right foot.

Where are you in your journey?

Whether you're an established business or a startup, the perfect time for digital transformation is now. No matter how old your business is, your digital transformation will be unique to your organization. It helps to think ahead: where do you see your company in the next five years? Create a roadmap of where you want to be in terms of customer experience, technology, and data insights, and involve those team members in your planning discussions. By involving decision makers across the organization, you can ensure that you are aligning all parts of your business with the end goal. And, if you can, bring in an experienced third party to help streamline planning, add insights, and ensure that your projects run smoothly.

Digital transformation allows your organization to deliver the right customer experience to the right people and not only remain relevant in your market, but actually build your brand.

Your digital transformation partner

One thing is consistent: customers today expect a flawless, customized customer experience, and you'll need digital transformation to deliver it. When you're ready, we're here to help you execute. Need help assessing where you are on your organization's digital transformation journey? Let us know.
The financial industry has faced waves of change over the last two centuries. Emerging nations, the American gold rush, the power of the stock market, and even the Great Depression have all shaped how banking works and what consumers expect from their banks. Notably, from 2015 onward, bankers began to list technology risk among their top five concerns.[1] While these changes have increased banking access and options for the average consumer, they have also brought more tech-savvy competition and greater regulatory scrutiny as heaps of data have become digitally accessible.

Ironically, the very technological disruption that has so upended the financial industry will also be what brings new opportunities for growth and increased wallet share. This is especially true with advanced data tools such as artificial intelligence (AI) and machine learning (ML); according to one source,[2] 83% of early AI adopters have already achieved substantial (30%) or moderate (53%) economic benefits. In light of the benefits machine learning can bring, we've compiled four major areas where we've seen ML used to reduce costs, increase revenue, and mitigate risk for banks.

1. Acquire new customers

Gone are the days when marketing was limited to just a few channels; now banks must maintain an omnichannel presence to reach younger consumers who may not listen to the radio or watch TV. Acquiring new customers means reaching them where they are with messaging that's highly targeted and relevant. Yet as margins get slimmer and budgets tighten, reaching these consumers with targeted messaging without overspending can be a challenge.

How machine learning can help
Making the most of your marketing means making the most of your data. Machine learning can help you identify trends in consumer behavior and interests, which can help you deliver the right marketing messages in the right channels at the right time.

ML opportunities
Identify which existing bank customers will buy another bank product
Score your commercial leads based on risk, profitability, and probability to close

ON-DEMAND WEBINAR: Learn how to turn data into insights that drive cross-sell revenue

2. Deepen relationships with customers

Digital transformation has affected every business in profound ways, especially in reaching customers and managing the customer relationship. Today's users want a more seamless experience, more targeted messaging, and on-demand access to information, and they'll move to the bank that can meet their digital demands.

How machine learning can help
AI and ML allow you to combine your leadership's decades of experience with customer engagement data. Not only will you have a gut check on what customers want, you'll have quantifiable data to back it up, which means your sales and customer relationship initiatives will ultimately be more effective at targeting customers ready for an upsell and at cross-selling more of your products to hungry buyers.

ML opportunities
Identify high-value customers early and engage with them differently
Predict the likelihood of a customer taking their deposits elsewhere
Identify which disputed purchases are legitimate
Project a customer's lifetime value for those with a limited history with the bank
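To make the attrition opportunity concrete, here is a minimal sketch of how a deposit-churn propensity model might be trained with scikit-learn. The file name and columns are hypothetical stand-ins for whatever your core banking extract actually contains; treat this as an illustration of the pattern, not a finished model.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical extract of customer history; all column names are illustrative.
customers = pd.read_csv("customer_history.csv")

features = ["tenure_months", "num_products", "avg_monthly_balance",
            "branch_visits_90d", "mobile_logins_90d"]
X = customers[features]
y = customers["left_within_12_months"]  # 1 = deposits moved elsewhere

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# AUC measures how well the model rank-orders leavers versus stayers.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score current customers so a retention team can prioritize outreach.
customers["attrition_score"] = model.predict_proba(X)[:, 1]
```

The output of a sketch like this is simply a ranked list: the retention team works from the highest attrition scores down, which is where the early, differentiated engagement described above comes in.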
3. Reduce financial risk

Consumers are becoming both more credit-averse and less creditworthy, which puts extra pressure on banks and credit unions of all sizes. On top of that, banks face increased risk caused by data breaches, fraudulent activity, and rising costs brought on by regulatory compliance.[3] These challenges make maintaining adequate cash reserves more difficult than ever at a time of increasing market volatility.

How machine learning can help
Machine learning can give you the insights needed to reduce your overall financial risk by helping you identify fraud and financial liabilities early, so you make and keep more of your profits.

ML opportunities
Clarify the liabilities on your balance sheet and determine which are the greatest risks
Detect fraud and misuse of the company's finances
Project cash reserves to reduce excess bank cash

GET THE USE-CASE WORKBOOK: The ultimate guide to machine learning use cases for banks

4. Optimize investment offerings

The investment management arm of today's banks continues to change rapidly as industry challenges increase. Today's investment managers deal with increased market volatility, capped organic growth, and rising fees. Because of these challenges, they struggle to keep up with the shifting expectations of clients who demand better investment performance.

How machine learning can help
Machine learning can detect patterns hidden in a bank's historical investment data combined with external financial data. These patterns produce actionable insights that can increase the accuracy of key investment decisions.

ML opportunities
Match securities to investors based on trade history and market conditions
Dynamically price securities based on competitive offerings, market saturation, and risk profile

See how one institution used ML to predict its deposit customers' likely deposits on a daily basis, freeing $40,000,000 in excess cash reserves

The opportunities, both up-and-coming and existing, for financial institutions to win with ML are staggering. One source estimates that advanced data initiatives like AI and ML will boost overall business profitability by 38% and generate $14 trillion of additional revenue by 2035.[4] And while it's true that digitally savvy industry newcomers may take advantage of these trends faster than their legacy peers, legacy banks and credit unions hold something the younger competition doesn't: mountains of historical data that, when mined for insights using AI and ML, can give them a leg up in retaining customers, increasing their wallet share, and reducing their overall financial risk.

To leverage these four opportunities, banks will need to embrace the very technology disrupting the industry. Banks that view their data as one of their most important assets and embrace AI and ML to create new insights will likely see growth, whereas those that don't will struggle to keep up.

[1] Banking Banana Skins Report, 2015
[2] Deloitte, 2017
[3] Financial News, 2017
[4] Accenture, n.d.
Executive summary

The credit card industry is becoming more complex. Advanced loyalty programs, targeted offerings, unclear rate conditions, and many other factors can make it difficult for banks to identify the right customer. Ultimately, the financial services firms that succeed in this environment will engage the right customers with the right message at the right time. Market leaders will be those who can accurately forecast the revenue and risk of each prospective and existing customer.

While the credit card environment has changed, the analytics and modeling techniques have largely remained the same. These models are highly valuable, but they don't offer the flexibility to evaluate the granular and complex customer behaviors embedded in a financial services firm's data and in other public and private data sets.

Machine learning and deep learning (collectively, machine learning) change the paradigm for predictive analytics. In lieu of complex, expensive, and difficult-to-maintain traditional models, machine learning relies on statistical and artificial intelligence approaches to infer patterns in data, spanning potentially billions of available patterns. These insights, not discoverable with traditional analytics, may empower the financial industry to make higher-value, lower-risk decisions. In this brief article, we discuss three potential opportunities that Fusion expects should add high value to the financial services industry.

Advanced analytics for banking

Machine learning uncovers patterns in complex data to drive a predictive outcome. This is a natural fit for the banking industry, as firms are often working with imperfect information to determine the value of incoming customers.

How it works: Traditional models vs. machine learning

Credit scorecards represent the basis of most credit card issuance decision making. Whether a firm leverages off-the-shelf models or applies bespoke modeling, most scorecards follow the same pattern: a handful of weighted factors, such as payment history and credit utilization, roll up into a single score. In the aggregate, these models are highly valuable. But on a per-applicant basis, patterns and details are lost.

With machine learning, we can explore detailed and expansive public and private data about segmented applicants for marketing purposes in real time. For example, we can supplement our existing models with data that can be used to segment potential customers, such as:

Regional FICO trends
Educational attainment
Social media sentiment analysis
Mortgage and equity analysis
Much, much more

Machine learning can apply artificial neural networks to uncover patterns in your applicants' history across millions of data points and hundreds of statistical training generations. When detecting these patterns, machine learning models can uncover risk in approved applicants and value in subprime applications. For example, by exploring existing customers, machine learning could reveal that applicants with low FICO scores but high educational attainment in a specific city suburb have historically resulted in minimal write-offs. Conversely, an applicant with a high FICO score may have recently moved into a higher-net-worth neighborhood that demands heavy spending on a financial institution's credit lines, creating repayment risk. Ultimately, your customer data can tell a far richer story about your customers' behavior than simple payment history.
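As an illustration of the difference, here is a minimal sketch comparing a scorecard-style model built on a single traditional feature against one augmented with the kinds of alternative data discussed above. The applicant file and every column name are invented for illustration, and categorical fields are assumed to be numerically encoded already.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical applicant history; column names are illustrative only.
apps = pd.read_csv("applicant_history.csv")
y = apps["wrote_off"]  # 1 = the account was eventually written off

baseline = ["fico_score"]
augmented = baseline + ["educational_attainment", "regional_fico_trend",
                        "months_at_address", "social_sentiment_score"]

# Compare cross-validated AUC for the two feature sets.
for name, cols in [("scorecard-style", baseline), ("augmented", augmented)]:
    model = LogisticRegression(max_iter=1000)
    auc = cross_val_score(model, apps[cols], y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```

A simple logistic regression stands in here for whatever model family a firm actually uses; the point of the comparison is the lift (or lack of lift) the alternative features add, measured on held-out data rather than in-sample fit.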
Machine learning opportunities

Financial services firms can gain more insight and capitalize on the benefits of machine learning by applying their marketing dollars to customers who are more likely to fit within their desired financial portfolio.

Lifetime customer value for customers with limited credit data

Today, credit scores are determined using traditional data. Traditional data typically means data from a credit bureau, a credit application, or a lender's own files on an existing customer. One in 10 American consumers has no credit history, according to a 2015 study by the Consumer Financial Protection Bureau (Data Point: Credit Invisibles). The research found that about 26 million American adults have no history with national credit reporting agencies, such as Equifax, Experian, and TransUnion. In addition to those so-called credit invisibles, another 19 million have credit reports so limited or out-of-date that they are unscorable. In other words, 45 million American consumers do not have credit scores.

Through machine learning models and alternative data (any data that is not directly related to the consumer's credit behavior), lenders can now implement algorithms that assess whether a banking firm should market to a customer segment, assigning risk and scores even to credit invisibles (thin-file or no-file customers). Let's look at a few sources of alternative data and how useful they are for credit decisions:

Telecom/utility/rental data
Survey/questionnaire data
School transcript data
Transaction data – This is typically data on how customers use their credit or debit cards. It can be used to generate a wide range of predictive characteristics.
Clickstream data – How a customer moves through your website, where they click, and how long they spend on a page.
Social network analysis – New technology enables us to map a consumer's network in two important ways. First, it can identify all the files and accounts belonging to a single customer, even if the files have slightly different names or addresses. This gives you a better understanding of the consumer and their risk. Second, it can identify the individual's connections with others, such as people in their household. When evaluating a new credit applicant with little or no credit history, the credit ratings of the applicant's network provide useful information.

Whether a bank wants to manage current credit customers more efficiently or take a closer look at the millions of consumers considered unscorable, alternative data sources can provide a 360° view that delivers far greater value than traditional credit scoring. Alternative data sets can reveal consumer information that increases the predictive accuracy of credit scores for millions of credit prospects. This allows companies to target consumers who may not appear desirable simply because they have been invisible to lenders before, which can create a commanding competitive advantage.

ON-DEMAND WEBINAR: Learn how to turn data into insights that drive cross-sell revenue

Optimizing marketing dollars to target customers

Traditional marketing plans for credit card issuers call for onboarding as many prime customers as possible who meet the bank's risk profile. However, new customer acquisition is only one piece of the puzzle. To drive maximum profitability, banks should consider not only the volume of customers but also the overall profitability of each customer segment.
Once these high-value customer segments are identified, credit card marketers can tailor specific products to them. Machine learning can assist both in predicting total customer value and in clustering customers based on patterns and behaviors.

Identifying high-risk credit card transactions in real time

Payments are the most digitalized part of the financial industry, which makes them particularly vulnerable to digital fraud. The rise of mobile payments and the competition for the best customer experience push banks to reduce the number of verification stages, which lowers the effectiveness of rule-based approaches. The machine learning approach to fraud detection has received a lot of publicity in recent years and has shifted industry interest from rule-based fraud detection systems to machine-learning-based solutions.

There are also subtle, hidden events in user behavior that may not be evident but still signal possible fraud. Machine learning allows for creating algorithms that process large datasets with many variables and helps find these hidden correlations between user behavior and the likelihood of fraudulent actions. Another strength of machine learning systems compared to rule-based ones is faster data processing and less manual work. Machine learning can be used in a few different areas:

Data credibility assessment – Gap analytics help identify missing values in sequences of transactions. Machine learning algorithms can reconcile paper documents and system data, eliminating the human factor. This ensures data credibility by finding gaps in the data and verifying personal details via public sources and transaction history.

Duplicate transaction identification – The rule-based systems in use today routinely fail to distinguish errors or unusual transactions from real fraud. For example, a customer can accidentally push a submission button twice or simply decide to buy twice as much. The system should differentiate suspicious duplicates from human error. While duplicate testing can be implemented by conventional methods, machine learning approaches increase accuracy in distinguishing erroneous duplicates from fraud attempts (see the sketch after the summary below).

Identification of account theft and unusual transactions – As the pace of commerce grows, it's critical to have a lightning-fast way to identify fraud. Merchants want results immediately, in microseconds. We can leverage machine learning techniques to achieve that goal with the confidence level needed to approve or decline a transaction. Machine learning can evaluate vast numbers of transactions in real time, continuously analyzing and processing new data. Moreover, advanced machine learning models, such as neural networks, update themselves to reflect the latest trends, making them much more effective at detecting fraudulent transactions.

Summary

Bottom line: machine learning can leverage your data to develop patterns and predictions about your customers and applicants. These machine learning models are typically simpler to develop and deploy, and they may be more effective than traditional financial services modeling. They also enable a more detailed forecast about your customers, allowing you to reduce risk while targeting more profitable customers throughout their lifetime with your credit card services.
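Here is a minimal sketch of the duplicate-screening and anomaly-scoring ideas above, assuming a hypothetical transaction extract (the file and column names are illustrative, and the features fed to the model are assumed to be numeric). It pairs a simple time-window rule for accidental double submissions with an isolation forest for unusual transactions.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction feed; all column names are illustrative.
txns = pd.read_csv("card_transactions.csv", parse_dates=["timestamp"])
txns = txns.sort_values("timestamp")

# Flag near-duplicates: same card, merchant, and amount within 60 seconds,
# the accidental double-submit case described above.
gap = txns.groupby(["card_id", "merchant_id", "amount"])["timestamp"].diff()
txns["possible_duplicate"] = gap.dt.total_seconds() < 60

# Score transactions for anomalies against the learned behavior profile.
features = ["amount", "merchant_risk_score", "hour_of_day",
            "distance_from_home_km"]
forest = IsolationForest(contamination=0.001, random_state=42)
txns["anomaly_flag"] = forest.fit_predict(txns[features])  # -1 = suspicious
```

In production the scoring step would run per transaction against an already-fitted model rather than refitting in batch, but the division of labor is the same: a cheap deterministic rule for the obvious duplicates, a learned model for the unusual behavior a rule can't anticipate.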
Related resources
Case study: Machine learning predicts outcomes in financial services
Case study: How Donatos uses machine learning to retain customers
5 tips to keep the wealth in your company

Fusion Alliance has extensive experience in the financial services industry and serves as a preferred solutions provider for many prominent financial services institutions, including Fortune 500 firms. If you'd like to discuss your organization, let us know.
This article was originally published by the Forbes Communications Council.

Amazon has generally been considered the standard-bearer for product recommendations, and for good reason. The retail giant utilizes user data on past purchases, browsed-for items, and even what users have recommended to others to generate recommendations. Just think: recommendations likely popped up in the sidebar during your most recent Amazon binge. "People who viewed this product also viewed..." often appears as you scroll. A chatbot may even appear with ideas related to your shopping history. This is conversational marketing at work.

Still, these advancements fall short of creating a truly personal experience that can predict and assist buying behavior with a full view of who the person is — not just their recent search and purchase history. Even common segmentation methods fall short by making assumptions based on age and gender that fail to account for many outlying factors that could easily be discovered. The future of retail will be defined by immersive, conversational experiences that lead to better customer interactions and increased buyer loyalty for brands that stay ahead of the curve. While conversational marketing has taken many companies this far, pairing conventional conversational marketing techniques with machine learning can be the answer retailers are looking for to create the experience of the future.

Online retail: Blending conversational marketing with AI technologies

While conversational marketing has become the trend in business-to-business (B2B) demand generation strategy, there is a huge business-to-consumer (B2C) opportunity as well. Conversational marketing practices utilize website chat features and chatbots to initiate in-the-moment interactions with customers and build context to quickly qualify them for the appropriate next step. According to David Cancel's aptly titled book, Conversational Marketing, both baby boomers and millennials are likely to adopt the use of chatbots, with a majority in both groups citing instantaneous responses and quick answers to simple questions as potential benefits. Aside from the obvious advantage of getting answers to product questions, automated chatbots offer a number of opportunities to enhance shopping experiences when coupled with data.

Machine learning and chatbots

While many B2C companies are already leveraging chatbots to streamline the customer experience, there lies even greater opportunity in using machine learning to truly learn from and predict consumer behavior. Today's practical machine learning models enable rapid iteration and deliver quick, reliable results. Data collected from customer conversations about the products they research, buy, and use can tell a deeper story about the customers themselves over time. Instead of a static list of recommended products based on their last purchase, machine learning can help us understand the customer's lifestyle and habits in a way that helps the customer make the best purchase in the moment.

As an example, imagine an on-the-go, seasoned business professional with a love of podcasts and streaming music. Our traveling audiophile is a regular adopter of new headphone technology and is on the hunt for a new pair. While segmented data and previous purchase history might get us in the ballpark when it comes to their next tech purchase, they don't tell the whole story.
In fact, the reason for this purchase has nothing to do with a search for the latest technology; past purchases have simply missed the mark on this customer's need for multitasking and call connectivity. In this case, relying on past purchase history or even peer purchasing information won't help. However, their experience with a chatbot powered by machine learning can give us helpful predictive data that tells the retailer this customer needs a balance between audio quality and the ability to quickly and clearly connect to meetings during travel. A few quick questions allow the chatbot to suggest a new pair of headphones to fit their lifestyle, along with helpful content and reviews that match our customer's pre-purchase research habits.

Marrying predictive data to emerging technologies

As advances in artificial intelligence (AI) continue to blur the line between human and bot, and retail brands continue to experiment with augmented reality (AR) to replicate brick-and-mortar shopping experiences, it's vital that data plays a role in the next phase of online shopping. Brands should place an emphasis on the aesthetic experience that can be delivered through AR-infused apps, but they should also make room for predictive machine learning data that makes the buying process even easier for the consumer, making them more likely to return in the future. In fact, for any brand wishing to be at the forefront of the next wave of retail evolution, I believe it's vital that a data governance framework be in place and actively funnel information to teams developing emerging technology. The days of keeping customer data siloed away from our product teams need to come to an end in order to fully realize the marketplace potential.

The future of retail is filled with possibilities that can completely reshape the way we understand consumer behavior and connect with consumers to meet their needs in real time. Taking tangible steps to listen to our customers, learn from them, and act to predict their needs, while delivering a stellar shopping experience along the way, is more than a possibility — it's a reality.

At Fusion Alliance, we find our place at the intersection of advanced analytics, experience design, and technology, leveraging machine learning to gain customer insights that inform our strategies.

Learn more about our approach to machine learning solutions >>
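To ground the headphone example above, here is a toy sketch of how a chatbot's quick questions could translate into a ranked recommendation. The product catalog, attribute names, and weights are all invented; a production system would learn the weights from conversation and purchase history rather than hard-coding them.

```python
# Invented catalog: each product scored 0-1 on a few lifestyle attributes.
products = [
    {"name": "TravelPro ANC", "audio_quality": 0.80, "call_clarity": 0.90, "portability": 0.90},
    {"name": "StudioMax",     "audio_quality": 0.95, "call_clarity": 0.50, "portability": 0.30},
    {"name": "CommuterLite",  "audio_quality": 0.60, "call_clarity": 0.80, "portability": 0.95},
]

# The chatbot's answers ("I take calls on the road", "I stream music daily")
# become a needs profile, i.e., weights over the same attributes.
needs = {"audio_quality": 0.35, "call_clarity": 0.45, "portability": 0.20}

def score(product):
    # Weighted match between what the customer needs and what the product offers.
    return sum(product[attr] * weight for attr, weight in needs.items())

for p in sorted(products, key=score, reverse=True):
    print(f"{p['name']}: {score(p):.2f}")
```

The ranking flips as the needs profile changes, which is the whole point: the same catalog serves the audiophile at home and the multitasker in transit, because the conversation, not the segment, sets the weights.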
Digital natives like Uber and Lyft have transformed the face of the taxi industry and the customer experience. Many of us are huge fans of these companies, and it's no wonder: you don't have to flag anyone down or have an awkward street scuffle to get a ride. Uber skipped the web and went straight to the mobile device as its target platform for order management and fulfillment. Every customer transaction goes into Uber's database: name, address, credit card info, cell number, pickup location, drop-off location, where you travel and when — the list goes on. But it's not just the ride that is valuable to Uber. Their database is where the real value is. And guess what? Uber was recently valued at $49 billion.

Every company has data. And every company needs a data strategy to take advantage of its value. Some steps to create a winning strategy are:

1. Make data management and analytics a priority

Increasingly, business and personal transactions and interactions are going through digital channels. As they move to digital channels, they leave behind a lot of data that was not available to companies before. New sources of data exist everywhere — social media, geolocation data, etc. There's just a lot more digital content available now to help a business understand its performance, relationships, and reputation. But you need to know what to do with it.

For example, as Uber's database grows, it becomes more valuable. They can look at their customers' digital footprints through analytics and, over time, see distinct types of users emerge from their travel and interaction patterns. They can then use these analytics to expand and refine their service offering to better serve the needs of users and travelers with similar digital footprints.

2. Overcome IT challenges

There are many new choices of data technologies, and you need to figure out how to incorporate them into your company's existing technology stack. But even before that, you need to understand how to manage data as an asset and consider:

How data is governed
Who owns the data
How to manage data quality and security
How to handle demand management as new data requests come in and new data sources are identified

In addition, you need to understand how the data will be integrated into the infrastructure and environment. Beyond that, it's hard to find people right now who understand all of these technologies. There is a shortage of data scientists and Hadoop engineers, for example. Finding resources with the skills to implement and manage these new technologies can be one of your biggest constraints and barriers.

Legacy systems vs. open source

There are also challenges associated with the whole technology space. Legacy vendors, such as Oracle, Teradata, and Microsoft, want to maintain their hold. They're all fighting to remain relevant in a market where open source is creating more compelling and cost-effective solutions for businesses. Microsoft, which has a huge research component, had difficulty embracing the open source movement in the past. Today, they fully support open source projects in Azure and Visual Studio and release many of their own code bases, such as .NET. Open source is valuable in large part because it's run by people who are constantly working on issues like security gaps and lagging performance. These folks will immediately address your issues for one overriding reason: they are passionate about code — Wikipedia all over again!
3. Prepare for organizational change

We're not just dealing with our own transactional data anymore. We're dealing with data from our industry or sector, as well as external data, such as weather. Though weather might not seem like it has anything to do with your business, weather data can provide insights that enable you to positively impact the business. Many other datasets are also available, some for a fee, and organizations have discovered they must pay attention to them in addition to their own operational data. (For a simple illustration of enriching operational data with an external set, see the sketch at the end of this article.)

The technologies on today's data management agenda are new and emerging, and they are not technologies IT traditionally has the skills to support. There's a big divide between IT capability and what the business demands for integrating and managing data. As a result, roles like data scientist and data analyst, with the necessary skills, are not yet common within organizations, making organizational and change management a requirement.

4. Embrace the role of a CDO

Until recently, we've pretended that the people who are responsible for the technology (the wires, pliers, software, and ERP systems) actually care about the data, but they don't. A CIO is not the best person to manage the data. There's a new paradigm out there: the chief data officer, or CDO. Data is such a critical corporate asset that it needs to be managed strategically and at the executive level, outside of IT. Technology is an enabler, but data is an asset; many organizations currently treat them the other way around. Many organizations are now appointing a CDO, reporting to the COO or CEO, whose role is to oversee and manage the quality, integrity, and use of the organization's data assets, just as the CFO governs the organization's financial assets.

Start implementing a winning data strategy today

The elements covered here will get your business off to a strategic start toward more effective management of your data and analytics. If you want to be successful, remain open to new ideas, get help from outside, and embrace new paradigms for how your business should interact with data as it continues to evolve. Take advantage of anything that can accelerate turning the data already sitting in your systems into insights you can bring to the marketplace. Having a strategic partner who brings the required expertise and the ability to implement proven methodologies will enable your company to create successful data capabilities, and you'll be able to groom and train internal resources at the same time. That's what's going to enable you to beat your competition.
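As promised above, a minimal sketch of external-data enrichment with pandas. The files and column names are hypothetical; the pattern is simply joining an external feed (here, a purchased weather set) onto operational data and asking a first question of the result.

```python
import pandas as pd

# Hypothetical files: daily store sales plus an external weather feed.
sales = pd.read_csv("daily_sales.csv", parse_dates=["date"])
weather = pd.read_csv("weather_by_region.csv", parse_dates=["date"])

# Enrich operational data with the external set, keyed on date and region.
enriched = sales.merge(weather, on=["date", "region"], how="left")

# A first-pass question: do sales move with precipitation?
print(enriched[["units_sold", "precipitation_mm"]].corr())
```

A correlation matrix is only a starting point, but it shows why the external set earns its keep: the question could not even be asked of the operational data alone.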
Internet users produce an estimated 2.5 quintillion bytes of data each day. Yes, that's quintillion — as in a one followed by 18 zeroes. That's a mind-boggling amount of data. Yet, every day, that information is mined, analyzed, and leveraged into usable insights that businesses then use to streamline operations, assess risks, track trends, reach a specific target audience, and so much more.

Big data, the term we use to describe this vast amount of information, is a goldmine for industries seeking to increase revenue and improve operations. But without a solid strategy for how to use that data, you could scour the internet until the end of time and still not see any gains. Before you dive into the big datasphere, it's best to familiarize yourself with what a big data strategy looks like. Then, you can take measured steps to ensure your vision is properly focused and ready to deliver the value you need.

What is a big data strategy?

A big data strategy is exactly what it sounds like: a roadmap for gathering, analyzing, and using relevant industry data. Regardless of business vertical, an ideal big data strategy will be:

Targeted. You can't hit a moving target, let alone one that's too nebulous to define. Drill down to the details until stakeholders are aligned on the business objectives they want to reach through your big data strategy.

Actionable. Data can be insightful without necessarily being actionable. If your big data strategy doesn't serve up information usable by the broader team while paving the way for next steps, it likely won't be beneficial in the long run.

Measurable. As with any other business plan, a big data strategy needs to be measurable to deliver lasting success. By measuring your incremental progress, you can refine your strategy along the way to ensure you're gathering what you need and assessing it in a way that serves your goals.

What's the best way to approach a big data strategy?

Now that we've covered the basics of what a successful big data strategy entails, let's turn to how your organization might put one into practice. As we've worked with clients across industries, we've seen the following six steps deliver wins. Your big data strategy will likely require unique details, but this action plan gives you a starting point.

1. Gather a multidisciplinary team

Big data is not solely an IT project; it's a business initiative. The team should have more representatives from business departments than from the corporate technology group. Members typically include knowledgeable staff or managers from finance, business development, operations, manufacturing, distribution, marketing, and IT. The team members should be familiar with current reports from operational and business intelligence systems. A common thread? Each team member brings ideas about performance indicators, trend analysis, and data elements that would be helpful to their work but which they don't already access. More importantly, they know why having that information readily available would add value — not only for their business units, but for the organization as a whole.

2. Define the problem and the objectives

What problem should be analyzed? What do you hope to achieve through your strategy? Take three problems you'd like to have solved and formulate them into questions. Limit yourself to three to start; there will always be more questions to answer, so don't try to tackle them all at once. Write those questions as the subject lines of three emails.
Send them to all members of the multidisciplinary team. The replies will guide your efforts in narrowing (or expanding) the initial scope of study. Here are a few questions to get the ball rolling:

What do you want to know (about your audience, your processes, your revenue streams, etc.)?
Which factors are most important for increasing margin on a given service or product?
How much does social media reflect recent activity in your business?
Which outcomes do you want to predict?

Developing a 360-degree view of all customers in an enterprise may be too ambitious for an initial project. But finding the characteristics of commercial customers who have bought products from multiple lines of business in five key geographic markets might be a more manageable scope right out of the gate. With this approach, development iterations expand coverage to all lines of business or all markets in cadence with the company's business pace.

3. Identify internal data sources

Before getting into the technical weeds, you need to know what data exists internally from a functional viewpoint. Gap analysis will uncover incomplete data, and profiling will expose data quality issues (a minimal profiling sketch appears at the end of this article). Your first step is just to identify what usable data you have. If customers for one line of business are housed in an aging CRM, and customers for a newer line of business are found in a modern system, a cross-selling opportunity analysis will point out the need to integrate those data sources.

Do you have an inventory of data sources written in business language? In forming a strategy, a team will want references such as vendor contracts, customer list, prospect list, vehicle inventory, AR/AP/GL, locations, and other terms that describe the purpose or system from which the data is derived. The list can be expanded for technologists later.

Learn how to develop data as an asset >>

4. Find relevant external data sources

If you don't have enough data internally to answer your questions, external data sources can augment what you do have. Public data sites like Data.gov, the U.S. Census Bureau, and the Bureau of Labor Statistics' Consumer Price Index have a vast amount of information available to anyone who can operate a search function. Data.gov alone has over 100,000 datasets, some containing millions of rows covering years and decades. Social media is another invaluable source of data. Regardless of industry, Twitter, Facebook, and Pinterest posts may have a greater impact on your operation than you realize. Be sure that a couple of members of the team pursue data from social media sources to include in the initial study.

5. Develop an organizational system

One of the most important elements of a big data strategy is organizing the data you collect. Whether it's analytics dashboards or full-blown data fabric systems, you'll need a way to organize data in order to analyze it. Decide how and where you want the data to live, how it can be accessed, and who will have access to it. Remember that the more you democratize data, the more your team grows comfortable with reading and handling this information, and the more insight you can glean. However, this also means you'll need a strong system of management to ensure the data is secure.

6. Get experienced guidance

Engaging an experienced team that has led others through data strategy and implementation can help you jump-start your strategy. An external resource skilled in big data management can provide your company with a smooth progression through the many tasks at hand.
Your guide should have extensive knowledge of business data elements, or BDEs, which are key to creating understandable, cross-company analytical outputs, including reports, charts, graphs, indicators, and other visualizations. Seek guidance especially if your organization doesn't have a data glossary, network administration, or knowledge of new technologies, as implementing these can be highly technical and time-consuming.

Planning your big data strategy

Planning a big data strategy will require you to rethink the way you manage, operate, and analyze your business. But with the right guidance and tools, you can develop an effective strategy that positions your company for growth and success. Need a guide on the path to creating your big data strategy? We're here to help. Reach out to an expert to learn more about how you can leverage big data for your business.

Discover our strategic data management services >>
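As referenced in step 3, here is a minimal profiling and gap-analysis sketch in pandas. The CSV name and the expected BDE list are hypothetical; the point is simply to surface missing values and absent business data elements before deeper work begins.

```python
import pandas as pd

# Hypothetical export from an internal system; any CSV extract works here.
df = pd.read_csv("crm_customers.csv")

# Profiling: expose data quality issues column by column.
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "missing_pct": (df.isna().mean() * 100).round(1),
    "unique_values": df.nunique(),
})
print(profile.sort_values("missing_pct", ascending=False))

# Gap analysis: which expected business data elements are absent entirely?
expected_bdes = {"customer_id", "email", "segment", "region", "lifetime_value"}
print("Missing BDEs:", expected_bdes - set(df.columns))
```

Running this against each candidate source gives the team a comparable, business-readable inventory: which systems hold which elements, and how trustworthy each one looks before anything is integrated.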
The future looks rosy for companies that take advantage of what strategic data management can do. But the specter of needing a team of people to handle on-premises hardware, and the cost implications of doing so, continues to make organizations hesitant to move forward with a new data strategy. Here are a handful of factors to consider when weighing the costs versus the benefits of implementing a big data strategy in your organization.

1. Compare the dollars and cents

In 2012, I conducted a study that compared the cost of managing data with traditional data warehousing assets, such as Oracle, to the cost of managing that same data with an open-source software framework, such as Hadoop. At the end of the day, even including a 60% discount off list price for Oracle's hardware and software licenses, the cost to manage 1 terabyte in a 16-terabyte configuration was $26,000 per terabyte with traditional assets, compared to $400 per terabyte with an open-source framework.

2. Analyze the total cost of ownership

The reason there wasn't a mass exodus from Oracle to Hadoop in 2012 is that you have to consider the total cost of ownership. You have to ask, "Does my organization have the skills to manage this new technology environment? Is my existing Business Objects universe investment compatible with the back end?" In 2012, the answer was no. Today, you can connect your existing Business Objects universe investment to Hadoop on the back end. Then, take all that data out of Oracle, expose it through Hive tables where it can be accessed, and enable the environment to perform even faster than it does in Oracle, for pennies on the dollar. Pennies! Why wouldn't you do that?

3. Evaluate the competitive advantage

It goes something like this: "Well, if my competitor is running their data warehouse for $4 million a year on a legacy technology stack, and I can 'lift and shift' my data warehouse to a technology stack that I can run for $40,000 a year, who's going to gain a competitive advantage?"

4. Assess the value of a 360-degree view of your customer

In the TV series "How to Get Away with Murder," detectives perform a forensic analysis of a suspect's cell phone data that was backed up to his computer; the rest of the data is provided by the telecom provider. Because of the GPS service on the suspect's phone, the detectives were able to identify his entire route from one state to another, how much time he spent in motion, how long he stopped, when he started again, and how many minutes his phone was in a particular location. They were able to create a geospatial plot of his path, all using the data stream from his mobile phone as he drove his car with his phone on his person.

This brings us to another important point about data today: we're living in a world of mashups. There's opportunity to subscribe to a Twitter feed and mash it up with an email address linkage in a way that identifies my behavior and thought processes. Everything that lives in the Twitter space or in my Facebook posts can be analyzed. Mashing up these many sources of data into a mega-analytic platform has become easy to accomplish, but not if you don't have a strategy for how you're going to manage the data.

Sam Walton's objective with his fledgling Walmart stores was to always know what the customer wanted to buy and always have it on the shelves when he or she walked into the store.
Back in the 1980s, Walmart used Teradata technology to build a database that collected all of its point-of-sale data, which was then used to calculate how many units to ship to each store so they wouldn't have to carry surplus inventory. The rest is history. The database actually became far more valuable to Walmart than the inventory carrying-cost problem it was built to solve. And now Walmart is a half-trillion-dollar-a-year global company.

5. Gauge the payoff of higher-end analytics

Amazon is another huge data success story. As you know, they started as an online bookseller and didn't make much money selling books online. But what they were able to do was get consumers to go to their portal, interact, and leave data behind. They were very successful in leveraging that data, and from it, they have grown into a company with over $100 billion in sales. And now, of course, Amazon sells everything.

Amazon is using the highest-end analytics: predictive analytics. In fact, they recently filed for a patent on an analytic model that can predict what you're going to buy before you buy it. Predictive analytics tells them there's a pretty good chance you're going to purchase a product in the next 24-48 hours. They're so confident in the accuracy of their algorithm that they would ship you that product before you even buy it. Let's say something from Amazon shows up on your doorstep that you didn't order, but it's something that you wanted. Then you'll pay for it. This isn't yet a production feature of amazon.com, but keep your eye on the bouncing ball!

The future of big data strategies and strategic data management

The future belongs to companies whose data game is completely integrated into the foundation of how they do business in the marketplace. Because companies like Amazon know so much, their revenue is so diverse, and their ability to manage data is so significant, they are now even in the data hosting and data enrichment services business. They are selling their data and hosting apps in an infrastructure that exists because of their desire to manage data and their ability to do it effectively. If you look at where venture capital partners are investing their money today, you'll see that it's in companies busy creating that layer of integration between the front end and the back end, because they have determined that the benefits of a big data strategy greatly outweigh any costs.
Recently, our team was on a call with a client who was trying to consolidate dozens of transactional systems into a single model to support a more effective reporting paradigm. The envisioned solution focused on self-service visual analytics while also supporting more traditional reporting. This client's challenges were similar to what many other businesses face today. They wanted:

Quicker time to insight
Empowered end users
Less dependency on IT
Reduced reconciliation of reports, etc.

Sound familiar? The client wasn't questioning whether there was value in the project ahead. Their questions were focused on the best approach: do we pursue a big-bang approach or something more agile in nature? Upon further discussion and reflection, the objectives of the program seemed to be a perfect case for agile. Let's talk about why.

Iterative selling of value

While the client knew the value of the project, we discussed how, in reality, data projects can die on the vine when the value isn't apparent to the business funding the initiative or to the IT executives who need to demonstrate their operational ROI. As such, the ability to demonstrate value early and often becomes critical to building and keeping the momentum necessary to drive projects and programs across the finish line. Project sponsors need to constantly sell the value up to their management and across to the ultimate customer. Iterative wins become selling points that allow them to do so.

Know your team's delivery capability

Truly understanding what can be delivered (and by when) means accurately assessing how much work is in front of you and how quickly your team can deliver with quality. This project was as new as the client's team, so the most logical approach was to start doing the work to learn more about the work itself as well as the team. After a few iterations, the answers to the following questions become clearer:

Parametric estimating – How do I estimate different complexities of work or data sources? How do I define the "buckets" of work and associate an estimate with each? What values do I assign to each of these buckets?
Velocity – How quickly can my team deliver with each iteration? How much work can they reliably design, build, and test? (For a simple way to turn these two questions into a forecast, see the sketch at the end of this article.)
Throttling – What factors can I adjust to predictably affect velocity without compromising quality or adversely affecting communication?
Continuous improvement – Fail fast, learn fast, adapt. Do I understand which factors impeding progress I can influence? What are we learning about how we accomplish the work so we can improve going forward? How do we get better at estimating?
Team optimization – Do I have the right players on the team? Are they in the right roles? How does the team need to evolve as the work evolves?

Foster trust to ensure adoption

Anyone who works with data, whether in business or IT, has go-to sources they rely on. Getting an individual to embrace a new source for all of their information and reporting needs requires that the new source be intuitive to use, performant, and above all, trustworthy. As with any new solution, there will be skepticism within the user community and, whether conscious or not, an unspoken desire to find fault in the new solution, thereby justifying staying with the status quo. Data quality and reliability can be the biggest factors that adversely impact adoption of a new data solution.
Foster trust – ensure adoption

Anyone who relies on data, whether on the business side or in IT, has their trusted, go-to sources. Getting an individual to embrace a new source for all of their information and reporting needs requires that the new source be intuitive to use, performant, and above all, trustworthy. As with any new solution, there will be skepticism within the user community and, whether conscious or not, an unspoken desire to find fault in the new solution, thereby justifying the status quo. Data quality and reliability can be the biggest factor that adversely impacts adoption of a new data solution. By taking an agile, iterative development approach, you expose the new solution to a small group initially, work through any issues, then incrementally build and expose the solution to larger and larger groups. With each iteration, you build trust and buy-in to steadily drive adoption.

Generate excitement

By following an iteratively expansive rollout, you can foster genuine excitement about the new solution. As use expands, adoption becomes the result of contagious enthusiasm rather than a forced, orchestrated activity. Tableau’s mantra for many years has been “land and expand”: don’t try to deploy a solution all at once. Once people see a solution and get excited about it, word will spread, and adoption will be organic.

Eliminate the unnecessary

While there are many legitimate use cases for staging all “raw” data in a data lake, concentrating on the right data is the appropriate focus for self-service BI. The right data is important for ensuring the performance of the semantic model, and it’s important for presenting the business user with a model that remains uncluttered by unnecessary data. Agile’s focus on a prioritized set of user stories will, by definition, de-prioritize and ultimately eliminate the need to incorporate low-priority or unnecessary data. The result is the elimination of wasted migration time and effort, a reduced need for the creation and maintenance of various model perspectives, and ultimately quicker time to insight and value.

Adjust to changing requirements and priorities

Finally, it’s important to understand that data projects and programs focused on enabling enhanced or completely changed reporting paradigms take time to implement, often months. Over that period, priorities will likely change. An agile approach allows you to reprioritize with each iteration, giving you the opportunity to “adjust fire” and ensure you’re still working on the most important needs of the end users.

Ready to roll out a successful self-service business intelligence program and not sure where to start? If you’re ready to take the next step, we’re here to help.
Data science is an important field of study as a means of analyzing big data. The success stories are real: data science and machine learning provide organizations with new insights that grow customer service, productivity, and profitability by leaps and bounds. The initial steps for integrating data science into your organization need not be costly. The focus is often on finding a “data scientist” who will find ways to provide immediate insight into your data, but a more thoughtful, measured approach to incorporating data science into your organization may be more efficient and effective.

What is data science?

It’s not easy to pin down the definition of data science. Depending on whom you talk to, the meanings can be radically different. A strong definition was offered by Jeff Leek in 2016: “Data science is the process of formulating a quantitative question that can be answered with data, collecting and cleaning the data, analyzing the data, and communicating the answer to the question to the relevant audience.”

Leek’s definition is pertinent because it avoids relying on specific concepts, such as big data and machine learning, or specific tools, such as Hadoop, R, and Python. Data science can be performed using any number of tools and on many types of data, regardless of size. The classic data sets used to develop and test statistical processes are actually very small. While big data is often a wonderful resource, in reality one of the first steps will almost always be to aggregate and/or reduce the data to a smaller, more useful size. The specific statistical modeling tools and algorithms are many and varied, and new ones continue to be developed all the time. The most important consideration is not which tool or algorithm is used, but that the correct solution is applied to the problem. Many business questions can be answered through simple summaries, counts, or percentages. The trick is understanding the data well enough to decide the best approach and having the skill sets and tools available to implement it. This is heavily dependent on process, which brings us to a second reason Leek’s definition is so apt: it emphasizes that data science is a process with multiple parts that all need to work together.

Data science as a process

The process of data science can be broken down into five parts.

1. Know your use case

Knowing your use case delivers actionable information about the core needs of the organization, and it’s absolutely key to driving the entire process. The use case defines what data is required, how it will be gathered, how it will be examined, and how the results need to be reported. Data science works best when there is a question to be answered or a hypothesis to be proven.

2. Acquire and clean the data

The data you acquire can come from inside the company and/or outside the company (public domain data sets, social media feeds, etc.), but it must be driven by the needs of the use case question. Acquiring and cleaning the data is often time-consuming and resource-intensive, but it is the most important part of the process. Surveys of data and statistical analysts often state that this step consumes 80-90% of their time, leaving only 10-20% for the actual statistical analysis, but it is absolutely critical that this part of the process is done with great care. Accuracy of analysis is tightly related to the quality of the initial data sources.
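To make step 2 concrete, here is a minimal pandas sketch of the kind of cleaning that consumes so much analyst time, using an invented five-row extract. The column names and cleaning rules are illustrative assumptions, not a prescription:

```python
import pandas as pd

# Hypothetical raw extract; in practice this might come from internal
# systems, public data sets, or social media feeds.
raw = pd.DataFrame({
    "transaction_id": [101, 102, 102, 103, 104],
    "customer_id":    ["A17", "B42", "B42", None, "C09"],
    "order_date":     ["2021-03-01", "2021-03-02", "2021-03-02",
                       "2021-03-05", "not a date"],
    "amount":         [25.00, 40.00, 40.00, 15.50, -5.00],
})

# Typical cleaning steps: normalize types, drop duplicates,
# remove records with missing keys, and filter obvious errors.
df = raw.copy()
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df = df.drop_duplicates(subset="transaction_id")
df = df.dropna(subset=["customer_id", "order_date"])
df = df[df["amount"] > 0]  # negative amounts are data errors here

# A quick profile of what survived: the start of "understanding the data."
print(df.shape)
print(df.describe(include="all"))
```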
3. Understand the data

Once you have the data, you need to understand what you have. This includes:

- What it is and what it is not
- What it contains that is useful
- What it contains that might be problematic or misleading

Exploratory analysis of the data, i.e., learning the properties within the data that relate and can be applied to the use case question at hand, is important. And information about the source of the data and how it was processed is critical in assessing its usefulness. Spending time sampling and profiling your data pays great dividends in two key areas: using the data in analysis and being able to assess the validity of the results.

4. Use the data to answer the question

This is the step where the high-end skill sets of a statistical analyst are applied, and it is often the quickest and seemingly easiest part of the data science process. Once the data has been ingested into an environment by a load process, deep analysis begins. This includes using statistical modeling, machine-learning algorithms, clustering techniques, and other appropriate tools to see if the question can be answered. If the previous steps have all been done well (a clear question exists, and the data was properly cleansed and is fully understood), then selecting and implementing the analysis can be fairly straightforward for a skilled statistician.

5. Communicate the results

It is vital to make the results of this seemingly arcane and mathematically dense process understood at the business level. Interesting and actionable results are of no use if no one knows about them or can understand them.

Resourcing data science as an organization

Looking at the process outlined above, it’s clear that finding a single technologist, engineer, or mathematician who can accomplish all of the steps is unlikely. Rather, a data science team of several people who cover all of the necessary skill sets is the most viable solution. Building such a team is not difficult. Most organizations already have employees with many of the required abilities.

1. Know your use case: Business analysts and subject matter experts

The business analysts (BAs) and subject matter experts (SMEs) will hopefully already have a firm grasp of the organization’s internal data and know the current use case questions being asked by the business. The key here will be for them to expand their horizons to other data sources and wider questions. They will need to start looking beyond internal systems to externally available data sources and consider how these might be used to gain new insights into how the organization relates to the outside world. Thinking creatively about what other information may be available and how it might be used can lead to even more intriguing use case questions.

2. Getting and cleaning data: Database/data warehouse architects and ETL programmers

Like BAs and SMEs, architects and programmers will need to expand their activities to include both external and highly unstructured data. They will also need to understand the more specific requirements of how a statistical analyst needs the data formatted and delivered. Fortunately, getting and cleaning data is generally part of these architects’ and programmers’ everyday lives, and leveraging their knowledge and skills will be critical to providing the analysts with the information they need.
3. Understand the data and communicate the results: Data analysts, data stewards, report developers

Data analysts, data stewards, and report developers should already have a good handle on the organization’s internal data. Like BAs and SMEs, the analysts, stewards, and report developers will need to expand their horizons to other data sources. They will already have a history of bridging the communications gap between IT and the business, and that will help the statistical analyst understand the data and help the business understand the results.

4. Use the data to answer the question: Statistical analyst/data scientist

Unless the organization already employs statisticians, the skill set of a statistical analyst or data scientist will most likely need to be added, either by bringing in an outside resource or by developing the skill sets internally. Do not discount your existing data analysts when looking to fill this role. Their current knowledge of the data is a huge head start, and an intermediate level of statistical training will provide them with a variety of new tools. It will not make them rock-star Ph.D. statisticians, and they might not fully understand the underlying theories, but not all use case questions require deep statistics to answer, and the practical application of regression modeling and machine learning tools can go a long way.

5. Repeat step 3: Understand the data and communicate the results: Data analysts, data stewards, report developers

Conclusion

Data science can provide an organization with new and surprising insights into both internal processes and interactions with the outside world. Take time to build the correct structure and resources to implement data science so it can become an integral and productive asset to the organization.
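As a minimal illustration of the kind of “step 4” analysis an analyst with intermediate statistical training might run, here is a hedged scikit-learn sketch on synthetic data; it stands in for whatever real modeling your use case question calls for:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for cleaned, understood data (the outputs of steps
# 2 and 3): two predictors and a target with some noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("Coefficients:", model.coef_)          # should recover roughly [3.0, -1.5]
print("R^2 on held-out data:", model.score(X_test, y_test))
```

If the earlier steps were done well, this step really is this short; the hard work is the data preparation that precedes it.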
With increasing customer demands and competition a click away, access to data-driven responses in real time has become a necessity for business users and marketers. Today, the market is full of digital analytics tools for measuring your customer experience across web and mobile applications, customer relationship management (CRM) systems, and point of sale (POS). When used correctly, these digital analytics tools can provide businesses with a wealth of insights into the performance of their digital platforms. To best leverage digital analytics, you will first need to set clear business objectives and define how your organization intends to measure success on your digital platforms. An in-depth look into your current measurement strategy, if one exists, including metrics and key performance indicators (KPIs), will reveal whether digital analytics are providing the data and insights necessary to ensure business and customer needs are met.

Identifying metrics

Organizations often make the mistake of using out-of-the-box metrics like page views, sessions, bounce rates, and session duration as KPIs. These basic metrics are not representative of actual business objectives and can prove useless without the right context. For example, if a marketer wanted to understand the value of a landing page, they would want to look at the number of leads generated by the page or the long-term business impact of the customers who came to the site through that page. Instead, reports focus on the number of people who saw the page or the bounce rate for the content. While this is helpful information, it doesn’t mean anything if you can’t tie the analysis to ultimate business success. Regardless of industry, website visits and page views do not increase bottom lines, nor should they be used as KPIs. If these are the types of metrics you’re seeing in reports or using in your analysis instead of KPIs like leads, transactions, revenue, or conversions, then it may be time to re-examine your digital measurement strategy.

Developing a measurement strategy

Successfully integrating digital analytics into business processes requires a clear measurement strategy. A measurement strategy outlines business objectives, what should be tracked on the website or mobile app to inform those objectives, the types of reporting that will be available, and to whom it will be exposed once the implementation is complete. Depending on current processes, analytics tools, technology, and available resources, the process of uncovering this information can take several months, but it is a vital step that should not be overlooked or rushed: the end result is a digital measurement model that provides the framework to align digital analytics with business strategy.

The digital measurement model

A digital measurement model is a high-level, visual summary that links your core business objectives, such as increasing brand awareness, customer acquisition, or increasing sales, to the digital strategies used to achieve these objectives and their requisite goals. From there, specific KPIs and targets are identified for each digital strategy, helping business and marketing stakeholders understand whether their efforts are trending in the right direction. These elements should be captured in a matrix that can be used to inform the tracking strategy and reporting development, and ultimately to gauge the health of your digital practice.
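One lightweight way to capture that matrix is as a plain data structure that both marketers and developers can read and version. The objectives, strategies, KPIs, and targets below are illustrative placeholders, not recommendations:

```python
# Hypothetical digital measurement model: objective -> strategy -> KPI targets.
measurement_model = [
    {
        "objective": "Increase online sales",
        "strategy": "Optimize checkout funnel",
        "kpis": {"conversion_rate": 0.035, "cart_abandonment_rate": 0.60},
    },
    {
        "objective": "Grow customer acquisition",
        "strategy": "Paid search campaigns",
        "kpis": {"cost_per_lead": 40.0, "leads_per_month": 250},
    },
]

# Print the matrix so stakeholders can review objectives and targets.
for row in measurement_model:
    print(f"{row['objective']} -> {row['strategy']}: targets {row['kpis']}")
```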
Benefits of a measurement strategy

Creating a measurement strategy that aligns your business goals with the activities of the digital teams can have a significant impact on how the business operates. With clearly defined objectives and KPIs for measuring digital outcomes, digital teams can focus their efforts on producing measurable value, instead of opting for a shotgun approach that hopes some portion of their efforts will drive outcomes. A well-defined digital measurement strategy encourages an environment of accountability. With KPIs to measure the gap between real-time digital outcomes and targets, executives gain greater visibility into the progress (or lack thereof) being made toward business objectives. It also creates a baseline of expectations, helping digital team members better prioritize work to produce measurable value. Most importantly, developing a digital measurement strategy gets people talking. Shaping strategy to reflect business objectives encourages collaboration among business stakeholders and leaders across the board, from marketing analysts to the CMO. Gaining alignment on what matters most helps an organization instill confidence in its teams and helps team members better understand how their day-to-day work contributes to the overall mission of the company.

Final thoughts

All too often, digital analytics are completely overlooked within marketing teams. This could be due to a lack of expertise around robust measurement implementations, or because analytics has been under-prioritized in favor of more tactical activities. Whichever the case, overcoming these hurdles to generate actionable business insights from your digital platforms is vital to the health of your digital practice and the needs of your customers. To successfully leverage digital analytics, organizations need to take a deep look into their current measurement strategy and reframe as needed to align their implementations with their established business strategy. Ultimately, a clearly defined digital measurement strategy paves the way for lasting, meaningful insights from your digital platforms and provides a system of accountability for team members and leadership to unite around.
When digital analytics do not produce useful outcomes to inform business decisions, digital teams often point to reporting and analysis as the culprit. But an in-depth investigation of the measurement strategy and digital analytics implementation often reveals a much different truth: digital marketing teams often don’t fully understand the capabilities of the tools at their disposal. This common issue is born out of inconsistent technical implementations, a lack of analytics expertise within the team, or a general misunderstanding of the types of metrics and data the business should collect. As long as digital teams maintain that reporting and analysis are at the heart of the issue, organizations will not be able to leverage the full feature set available with analytics tools like Google Analytics or Adobe Analytics. This leaves digital leaders questioning whether they should invest in more expensive or specialized tools to get the “right” data that will create new and incremental value. The often surprising reality is that an updated implementation with more focused tracking would suffice to provide digital teams with the valuable data they seek across all digital platforms (e.g., mobile app, CRM, and point of sale). With the recent addition of tools like Google Data Studio (a lightweight BI dashboard tool) and Google Optimize (an optimization experimentation tool for the free Google Analytics suite), the vast majority of businesses don’t need a paid analytics solution. You can invest the money usually spent on expensive data management tools into analysts or other digital marketing efforts. In most cases, the free versions of tools like Google Analytics and Google Tag Manager are more than sufficient for the needs of an organization, but teams don’t have a true understanding of the tools’ capabilities or don’t implement their tracking in a way that works within the limitations of the free toolsets.

9 questions that can clarify your digital analytics capabilities

If you wonder whether your digital analytics toolset is up to par, start by asking some basic questions. The answers will divulge the true extent of your digital analytics capabilities and identify areas for improvement from both technical and expert standpoints.

1. Are we combining data that we already have about our customers with their on-site activities?

Many digital teams who use only the out-of-the-box versions of tools like Google Analytics are unaware that the tools come with powerful custom features. Custom dimensions, for example, provide valuable context to the information being collected. You might have data, including gender, zip code, customer segment, persona type, etc., about a specific user based on their digital profile. Populate these values into your analytics code as custom dimensions, alongside everything else that is tracked, in order to create meaningful user segments that provide insights as opposed to just metrics.

2. How do specific sections of the website compare to others?

Many teams stop their measurement at the page level. This is a natural inclination, considering each interaction is recorded against the page on which it occurred. However, available features such as content groups and custom dimensions allow you to combine the data from specific pages into predefined site sections or groups. You can then compare these page groupings to each other to understand how they impact conversion and acquisition (see the sketch below).
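Here is a hedged sketch of the idea behind questions 1 and 2: join the attributes you already hold about customers onto their on-site activity, then compare segments and site sections. The data and column names are invented; in a live implementation, the join key would be an ID you populate into a custom dimension:

```python
import pandas as pd

# Hypothetical CRM attributes, keyed by a user ID that is also sent to
# the analytics tool as a custom dimension.
crm = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "segment": ["loyal", "new", "loyal", "new"],
})

# Hypothetical analytics export: one row per session.
sessions = pd.DataFrame({
    "user_id":      [1, 2, 2, 3, 4, 4],
    "site_section": ["blog", "products", "blog", "products", "blog", "products"],
    "converted":    [0, 1, 0, 1, 0, 0],
})

joined = sessions.merge(crm, on="user_id")

# Conversion rate by customer segment and by site section.
print(joined.groupby("segment")["converted"].mean())
print(joined.groupby("site_section")["converted"].mean())
```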
3. How does a specific content type impact conversion?

By using the aforementioned features alongside conversions and goal tracking, analysts can point out which content types have the greatest impact on conversion, user drop-off, and other key performance indicators (KPIs) reflecting business objectives. Organizations can then use this information to better allocate resources. For instance, if you learn that zero percent of your blog traffic converts on the site, you need to shift resources toward a more conversion-friendly design, more engaging content, or traffic sources that are converting.

4. Can your analysts easily set up event tracking or conversion tracking without developers?

Teams should use tools like Google Tag Manager or Dynamic Tag Manager to implement their analytics platforms whenever possible. These platforms give marketers control over what is tracked and how that data is expressed in the analytics tool. Many companies do not use these implementation tools. Instead, they still rely on developers to add code, adding significant time to the tagging process and, in some cases, deterring analysts from tracking at all.

5. What are our most efficient sources of traffic for conversion?

Analysts should be able to relay which traffic sources generate the most conversions and which sources are the most efficient at doing so. An SEM program may generate the most conversions, while organic search might have a drastically higher conversion rate. In these instances, it’s worth exploring what an investment in the organic search channel could do versus making a similar investment in SEM or social media.

6. How are key segments of customers converting against other key segments on the website?

Analysts can create meaningful segments within the analytics tool to understand how different types of customers utilize the website. These segments can be developed using existing customer data or even basic demographic data, like age and gender. Segmentation yields the data necessary to understand how different types of users are being impacted online and to help identify areas for improvement.

7. Can we run A/B or multivariate tests on the website today?

A key part of optimizing your digital strategy should be conducting experiments with your website or app content. Many analysts don’t invest in the development of a testing program because of time constraints or simply because they aren’t aware of how easy it can be to run A/B or multivariate tests. Tools like Google Optimize are free and provide a robust feature set that integrates with Google Analytics and Google Tag Manager.

8. What are the primary drop-off points for customers prior to conversion?

Analysts should have a clear understanding of what keeps users from converting on the website, no matter the type of site or conversion. With goal funnels, or some elbow grease and expertise, analysts can identify where users drop off the site and what might be impacting their experience (a small funnel sketch appears at the end of this article). With this visibility into customer needs, you can optimize the user experience and generate more conversions.

9. What are the main reasons users come to our site?

Often teams don’t take time to understand the specific intentions of users coming to the website because the team members assume they know the reasons. However, their assumptions are often based on their own internal knowledge of the business. For example, you may see a significant increase in traffic to the website, only to find users are going to the careers page or reading a specific piece of content that doesn’t necessarily impact conversion.
By understanding the keywords, traffic sources, and landing pages that drive users to certain parts of your site, analysts can create user segments based on intent.

Understanding your digital analytics capabilities is the first step

Uncovering your digital analytics capabilities can be the difference between a measurement strategy that ensures the continual improvement of online customer experiences and an expensive data tool that produces outcomes unrepresentative of business objectives. Before deciding to invest in a data management tool, assess whether your digital team has the expertise to leverage your current analytics tools. In most cases, we find the free versions of tools like Google Analytics and Google Tag Manager are sufficient for the needs of an organization. By getting answers to the right questions, you can discover your organization’s hidden capabilities and begin to leverage digital analytics to meet customer and business needs. Want to explore your organization’s digital analytics capabilities or dive deeper? Let us know.
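As a postscript to question 8 above, here is the “elbow grease” version of drop-off analysis: counting how many sessions survive each step of a conversion path. The step names and counts are invented for illustration:

```python
import pandas as pd

# Hypothetical counts of sessions reaching each ordered funnel step.
funnel = pd.Series(
    {"product_page": 10_000, "cart": 3_200, "checkout": 1_400, "purchase": 900}
)

step_through = funnel / funnel.shift(1)   # share retained step-to-step
drop_off = 1 - step_through               # share lost at each step
print(pd.DataFrame({"sessions": funnel, "drop_off": drop_off.round(3)}))
```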
Amazon, Netflix, Airbnb, Uber, and other disruptors have raised the bar on what customers expect from a business. These online giants have figured out how to use their customer data to make personalized recommendations, predict when customers are going to buy, and present offers at just the right time. Brands that use personalization report an average growth of 20% in sales (Monetate research), and customers feel less spammed and more like they’re in control of the experience. It’s no surprise that consumers are looking for that same personalized, frictionless experience when interacting with their financial institutions, whether through mobile banking or your website, at a brick-and-mortar branch, or at one of your ATM locations. And it pays off for banks that can engage their customers. According to a 2013 Gallup study, fully engaged customers bring in an additional $402 in revenue per year to their primary bank, as compared with those who are actively disengaged. Even better, the research said 71% of fully engaged customers believe they will be customers of their primary bank for the rest of their lives. That could be your bank, but only if you can reach your customers in ways that feel natural and valuable to them.

Customers want to be engaged with the right messages at the right time

Imagine if you could understand your customers so deeply and predict their buying patterns so clearly that you could deliver targeted marketing only to those ready to invest in more products with your bank. Not only that, what if you knew what to say to them and on which channels to reach them? How would that impact your business? The trend is clear: financial institutions must adopt a customer-centric business model now to ensure success in the future. This puts banks like yours at a crossroads, and the problem is where and how to embark on that journey.

Tackle your greatest challenges

The formula seems simple: increase your engagement and you’ll increase your revenue. But meanwhile, you’re under pressure to acquire new customers, maintain your base, forecast and reduce risk, manage capital, navigate security compliance and financial regulations, and optimize the business. You may also grapple with siloed data, legacy systems, and outdated processes, all seemingly monumental challenges that may adversely affect your customer experience. For example, your customers and employees may not have access to the right data at the right time to provide an optimal experience. Or, from a marketing standpoint, different departments within your company may be targeting the same customers, resulting in too many emails. Or your customers may get untimely messages about promotions that have passed or receive communications that don’t apply to their current situation. This creates frustration and a poor user experience that may be enough to make your loyal customers turn away. Other banks have been in your shoes, facing the same challenges and fears, but they’ve made major strides in putting the focus on the customer. They’ve found success through the “magic” of machine learning (ML). ML enables you to focus your over-capacity bankers’ time and your marketing spend on opportunities that are real. ML is a modern technique that uses algorithms to analyze enormous amounts of data. Machine learning models learn on their own, identifying insights and patterns to predict future behavior.
Machine learning algorithms connect the dots far faster and deeper than people can, exposing patterns in your customers’ behavior that empower your team to take actions that will impact your business’s top and bottom lines. Unlike traditional analytics tools, ML can evaluate account holders, securities, and transactions in real time. If you want immediate decisions integrated in the moment, machine learning is the answer. And, good news: even though you may feel you are behind the curve right now, you have something that the younger fintechs you compete against don’t: a wealth of historic data that can be “mined” by ML to answer your specific business questions. Some organizations need help improving the quality of their data for effective use in a machine learning model, and that’s not an uncommon challenge. But good data will be your key to success.

Machine learning applications in finance

Banks have found many successful ways to leverage machine learning. For example, they use it to answer specific business questions across all departments, including:

- How do I increase my customer wallet share?
  - What are my best opportunities for cross-selling/remarketing to my existing customers?
  - Can I identify customers that we can convert from other banking institutions?
- Can I identify loan-default risk early enough to take action?
- Can I dynamically price securities based on investor demand and market saturation?
- Can I predict my cash and reserve activity to optimize liquidity levels?
- Can I identify account holders’ attrition activity before they disengage?
- What percentage rate and product messaging would make my ideal prospect buy?

The first step toward engaging customers with the right messages at the right time is to capture the questions your bank wants to solve. With these questions in hand, you can move to the next step: seeing how much predictive value these machine learning “use cases” will give your financial organization. Case in point: this is exactly how it started for a large institutional bank we worked with, one sitting on decades of financial transaction data. They wanted to more accurately predict member activity and drive better returns on cash reserves, and they leveraged machine learning to do it. Our machine learning model identified patterns in their transactions, which spanned hundreds of credit unions and billions in cash, to predict the deposit activity of millions of credit union members on a daily basis. The result? We freed $40 million in excess cash reserves. The insights gleaned also empowered the organization to pass on greater returns to members by selling short- and long-term securities, pursuing arbitrage, and reducing borrowing fees. Another institution, Primary Financial Corporation (PFC), found great success using machine learning to improve their sales targeting. PFC wanted to predict CD issuers’ funding needs and institutions’ desires to invest. They developed machine learning models that synthesized PFC’s financial and competitive data to price securities, identify buyers, and project trade profitability. By the time the first phase of the project was complete, PFC could predict with over 80% accuracy and 70% precision the likelihood of a particular investor to buy a given investment. The common thread in these stories is that both organizations had an abundance of historic data at their fingertips, but they hadn’t explored how ML could help them retain more deposits, sell more products, or reduce their financial risks.
The rapid predictive insights that machine learning continues to provide both companies have been game-changing, and both are now exploring other ML applications.

Get started

Machine learning is widening the gap between banks that embrace it and competitors that haven’t. If you don’t improve your banking experience, your customers will turn to another bank or even be serviced by a fintech. As you navigate how to become the customer-centric organization you want to be, explore machine learning as a way to get closer to your customers and see rapid results. Start by coming up with specific questions that your business needs to answer, and take time to learn more about what machine learning can do in your organization. Contact Fusion Alliance to discuss if ML is right for your project.

ON-DEMAND WEBINAR: Learn how to turn data into insights that drive cross-sell revenue
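To make one of the use-case questions above concrete (“Can I identify account holders’ attrition activity before they disengage?”), here is a minimal, hedged sketch of a propensity model trained on synthetic data. The features are invented; a real model would learn from your own historic account and transaction data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historic account data: two invented features
# (monthly logins, balance trend) and an attrition label.
rng = np.random.default_rng(7)
n = 2_000
logins = rng.poisson(8, n)
balance_trend = rng.normal(0, 1, n)
# Accounts with few logins and shrinking balances churn more often.
p_churn = 1 / (1 + np.exp(0.4 * logins + 1.2 * balance_trend - 1.5))
churned = rng.random(n) < p_churn

X = np.column_stack([logins, balance_trend])
X_train, X_test, y_train, y_test = train_test_split(X, churned, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Rank accounts by predicted attrition risk so bankers can act early.
risk = model.predict_proba(X_test)[:, 1]
print("Highest-risk scores:", np.sort(risk)[-5:])
```

The ranked risk scores, not the model itself, are what make the use case actionable: they tell your team which account holders to contact first.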
A massive storm is brewing in the banking, financial services, and insurance industries, and when it strikes, it will be devastating to the unprepared. That storm is the unprecedented transfer of wealth, $3.9 trillion worth, that will pass from the hands of older generations to younger ones in the next eight years or so. The rains have already started to trickle, but when they come in full force, if your organization hasn’t already connected with younger generations, you’ll see millions of dollars in wealth walk right out your door. If your bank doesn’t have a plan in place for customer retention, it’s not too late to take action. Consider that millennials (born circa 1981-1997, also called Gen Y) are now the largest generation, accounting for over 25% of the population. They are followed by Gen Z (born circa 2000-present), those born with digital devices in their hands, who comprise more than 20% of the population. The combined potential purchasing power of these generations is something that can make or break banks, wealth management firms, and insurance companies. Yet most businesses in these industries still don’t have a game plan to connect with an entire population. Will your company be different? The problem is complex, but no matter where you stand, a solution is within your reach if you create a strategy informed by data and insights with a clear road map to success. Here are five tips for building successful customer retention strategies for your bank, so you can emerge strong on the other side of the impending wealth transfer.

1. Understand the challenges of banking for millennials

Recognize that this is a whole new audience you’re dealing with. The old ways won’t work in the new economy of connected consumerism. A 360-degree view of your current customers will help you gain insights into what the older generation wants, but keep an eye toward the future consumers of your brand. They’re not like baby boomers (born 1946-1964) or Generation Xers (born 1965-1979). This newer generation sees things differently than their parents and grandparents did. Get to know this younger audience on their terms and understand why they have different belief and value systems, and why they view traditional institutions skeptically. Examine the world through their eyes. They’ve seen industry giants that their elders once perceived as invincible (e.g., Lehman Brothers) disappear. They’ve seen others, like Wells Fargo, AIG, and Countrywide, rescued by the government from the brink of bankruptcy, with taxpayers footing the bill. They’ve seen the effects of parents being laid off after years of loyal service to a corporation. They know families who lost their homes when the housing bubble burst. Can you blame them for being leery of traditional institutions? An Adroit Digital survey examining millennials’ brand loyalty reported that 77% said they use different criteria to evaluate brands than their parents do. Are you aware of what criteria they are using to evaluate your brand? If not, you need to arm yourself with answers. Research shows that younger generations frequently turn to friends, independent online research, reviews, and social media for decision making. For example, an astounding 93% of millennials read reviews before making a purchase, and 89% “believe friends’ comments more than company claims,” according to an IBM Institute for Business Value survey. Your future hinges on understanding these behaviors.
A report by Gallup on the insurance sector revealed, “Millennials are more than twice as likely (27% vs. 11%, respectively) as all other generations to purchase their [insurance] policies online rather than through an agent.” Online purchasing is far from mainstream among insurance consumers overall: “74% originally purchased with an agent vs. 14% online – but if this trend among millennials continues to grow, it could substantially change the way insurance companies interact with customers in the coming years,” the report stated. Likewise, “Banks are losing touch with an entire dominant generation,” according to Joe Kessler, president of Cassandra Global. The Cassandra Report cited that 58% of young adults said they would rather borrow money from friends or family than from a traditional institution. Two-thirds of the respondents said it is “hard to know where to learn about what financial services they might need.” In other words, when it comes to banking, millennials don’t know whom to trust. Begin the process of getting to know this younger clientele by conducting research that will help you gain insights into what they stand for, how and where they interact, and what their expectations are of your industry, your company, and your brand. By evaluating that data, you will be able to set up the process for communicating with these new young consumers and build different ways to engage them. Your interactions and communications must be seamless and easy, and they must show that you can talk on their terms. You’ll need to look at this emerging demographic with a “digital lens,” because this is how millennials engage with brands. What are their channels and preferences? What other services can you make available in a seamless, frictionless, and customized way? If you don’t take the time to get to know your audience, you won’t be able to lay the foundation for a successful strategy to engage them.

2. Make young customer retention your bank’s primary mission

Younger generations, millennials especially, are driven by a different set of values. They want a work/life balance. They like to donate money. They don’t want a lot of stuff. They like to travel. They want to experience life. They question long-standing rules that don’t make sense to them. So develop your business strategy around a purpose or a mission, one that they will connect with. Build upon the information you learned about your younger customers in tip #1, then map this customer’s journey with behavioral analytics. Evaluate the digital channels and content that your younger clients find compelling. Now you can create a strategy and road map to engage these customers.

3. Build your customer experience for different audiences

A strong customer experience (CX), one that creates loyalty, is personalized, timely, relevant, appropriate, and built on trust. The more customizable the user experience, the better. According to Janrain, 74% of online users are frustrated with brands that provide content that doesn’t reflect their personal interests. You know users want to be recognized on their terms, but you have a problem: how do you build a single CX that addresses vastly different generations with different behaviors and interests? Is there a way to reconcile their differences via a single CX? The answer is no. For the time being, you need to develop both. If someone tells you differently, beware. Think about it.
In wealth management, banking, and insurance, the older generation still holds the money and keeps the lights on for your business. The newer generation will get that money within a decade, but if you go full throttle and build a single, mobile-first CX, you’re going to alienate the people holding the purse strings. In the next few pivotal years, your bank’s customer retention will depend heavily on how well you address each audience on its own terms.

How to cater to older generations

Older customers prefer offline channels, like walking into a branch, agency, or brokerage firm. They like to do business face to face or via phone conversations with tellers, bankers, agents, and wealth advisors. Online, they like having a “control panel” style experience on a desktop, such as you might find with financial trading platforms. This is how you build trust and deliver timely, relevant, personalized experiences. Online, build a web portal that appeals to the interests, needs, and communication preferences of the older generation. The younger generation will use the web portal now and then, but that is not going to be the experience they associate with your brand, because you’ll give them their own.

How to cater to younger generations

Give the younger generation mobile apps and SMS communications. With over 87% of millennials saying they are never without their phone, this is where you should reach them. They have no interest in stepping foot in a building that feels like an institution or talking to some random agent, broker, or salesperson when they can do everything quickly and effortlessly on a mobile device. Take what you learned in tips #1 and #2 and build strong loyalty by providing timely, relevant, personalized, and appropriate experiences across digital channels. As you build a CX specifically tailored to banking for millennials, you’ll find you can gain loyalty on their terms because you’ll be able to interact in a more agile, nimble, and personalized way. The older generation will probably use the mobile app for simple tasks like checking information and balances, but they’re going to associate their comfort with your brand with the CX they use most: the desktop. Two CXs could be the right solution for today’s transitioning market, but keep in mind that there are additional channels through which you can build loyalty with these younger audiences across the digital landscape. For example, you can share educational, informative content through social media channels.

4. Transfer knowledge to the younger generation

Everyone in wealth management, insurance, and financial services already has a foot in the door with the younger generation. That connection is the strong relationship between existing older customers and their offspring. Leverage it. First, understand that the older generation wants to take care of the younger ones by leaving money to them, but they are worried that the next generation doesn’t have the knowledge or discipline to hold onto and grow that money. There are many stories of young people, like athletes or celebrities, getting rich quickly, getting bad advice about money, and then squandering it all away. What if their children make the same mistakes? Help address that fear and protect those kids by arming your older customers with educational tools on how to prevent this from happening.
For this CX, you’ll need to develop portals and educational content, manage and market that content, and bring it to life in an updated website (geared to the older generation) that features whitepapers, articles, or videos, such as “Talking to Your Children About Money 101” and the like. Educate this audience on how to talk about the benefits of insurance or long-term investment strategies, and provide them with incentives to set up meetings that include them, their offspring, and you. The younger generation isn’t interested in talking to an institution, but they will listen to the advice of the parent or grandparent giving them this money. Let the parents and grandparents have meaningful conversations that carry far more weight than your business sending a bulk email to junior that says, “Invest in an IRA.” Now, when members of the younger generation, the recipients of transferred wealth, decide to check out your company on the advice of their parents or grandparents, they will access your relevant app that speaks their language and addresses things that interest them. They’ll soon figure out that you’re not some stodgy institution and will be much more open to a discussion when their parents suggest a conversation with your company’s brokers, advisors, or agents. This is how the knowledge transfer will occur organically, growing your bank’s customer retention along the way as you build a relationship of loyalty and trust. Not only will you give the benefactors peace of mind that their offspring will be good stewards of their fortune when the time comes, but you’ll keep the money in-house because you took time to connect with and earn the trust of the young beneficiaries.

5. Make use of emerging technologies in banking to satisfy the ever-changing digital landscape

At this point, you know you could benefit from two CXs. The web platform focuses on the needs and concerns of the older generation that holds the wealth today. The mobile platform addresses the younger person who will inherit the wealth, providing guidance, teaching the basics of how to invest or buy insurance, and offering quizzes, games, personalized spreadsheets, automated tools, and more. The challenge is that when the older generations pass on, the desktop experience will become moot. You don’t want to have to rebuild all the technology infrastructure that you worked so hard to establish. The answer? Don’t build applications or tools; build platforms for the future that can be adapted as the younger generation takes over and as mobile-first interactions become predominant five years from now. Don’t overlook the fact that more cost-effective emerging technologies in banking, such as infrastructure in the cloud, will be a necessary ingredient for success. Banks and insurance companies are reluctant to move to the cloud, but if you understand that most applications are going to be in the cloud five years from now, you understand the critical nature of developing these capabilities today. The cloud enables rapid changes to meet market and customer demands. It is flexible and nimble. You pay for what you use, can pay for services or infrastructure, and simultaneously increase security and reliability. To those unfamiliar with the cloud, security can be a scary proposition. However, with major cloud providers like Microsoft and Amazon employing an army of experts to ensure security and regulatory compliance, the cloud is safer from a security standpoint than most on-premises data storage.
While 85% of companies using the cloud report they are confident their providers can deliver a secure environment, 90% of IT managers report they are not confident in their own companies’ ability to detect security problems internally. If you’re building a flexible technology platform with the right digital CXs, infrastructure that looks to the future, and cloud capabilities, then your organization will be positioned for success when the wealth transfer hits in the next decade.

Final thoughts on customer retention strategies for banks

There are more than 75 million millennials out there spending $600 billion every year, and that number is only going to increase. They are graduating from college with massive amounts of debt, face a precarious job market, and are typically naïve about financial matters and insurance. The companies that aggressively work to offer practical tools and advice on banking for millennials are the ones that will outperform their competition in the future. It’s not too late, but you cannot wait to take action. If a business does not begin building the bridge between current wealth owners and soon-to-be wealth recipients until after the wealth-transfer process has begun, it will experience a devastating economic blow and get left behind by those who embraced this shift.

The ball is in your court

Everyone has predicted that the landscape of the wealth management, banking, and insurance markets will change dramatically due to digital disruption and younger generations, but with the right strategy in place, your organization can emerge as a leader. Look at this as an opportunity to differentiate. A digital strategy will be the key to your success. Don’t look at digital as an application. Digital is the way all future generations will engage and interact. Leverage it today, and do it well, to tie the present to the future. Your formula for success is to create an actionable plan that is both informed and driven by insights and data on what people buy, how they buy, what they expect, how they feel, and whether the experience is personalized, relevant, and timely. You need to understand your audience and use those insights to feed a strategy that ties into the mission and purpose of your customers. Bring your strategy to life in a digital channel that sits on top of flexible technology. Measure your customers’ experiences and level of engagement with your brand, and then make adjustments, continually working from research and data. Follow this formula, and eight years from now, you’ll be the organization reaping the rewards because you understood how to keep millions of dollars from leaving your company. Need help improving your customer retention in banking? Let us know.
Artificial intelligence (AI) and machine learning (ML) have completely transformed mobile development. Mobile app users today are often looking for an easy and relevant user experience, one that has been customized for them. The best way to get there? Machine learning. Machine learning identifies anomalies and patterns that ultimately optimize the user experience. If your technology conversations have stalled at the brainstorming or ideation phase, consider why. If you don’t have a clear answer, you’re not alone. “Strategic decision makers across all industries are now grappling with the question of how to effectively proceed with their AI journey,” says Marianne D’Aquila, research manager, IDC Customer Insights & Analysis. Despite questions about how to proceed, organizations know they need to invest in ML for mobile before current competitors, and those waiting in the wings, figure out how to profit from it first. Considering the speed at which machine learning is being adopted, and its potential to quickly help companies on multiple fronts, the time for execution and implementation is now. Here are the top three reasons machine learning development for mobile matters right now:

1. Machine learning for mobile increases app security

“Facial recognition” ($4.7 billion, 6.0%) and “fraud detection and finance” ($3.1 billion, 3.9%) were among the top five categories of global AI investment in 2019, according to the AI Index 2019 Annual Report (an independent initiative at Stanford University’s Human-Centered Artificial Intelligence Institute). It’s not surprising. From TikTok’s recent security flaws to Target’s $18.5 million settlement, app vulnerabilities and potential data breaches are breaking news, and there are few signs of a slowdown. While the short-term financial impact can hurt, the long-term cost of losing the trust of customers and partners can be even more painful. Companies that receive users’ personal information (e.g., passwords, billing addresses, answers to security questions) for processes such as app authentication or making purchases must continually optimize how the data is used. Through machine learning and automating parts of the process, you can identify anomalies faster, allowing you to see patterns and manage potential weaknesses more quickly. Operationally, ML can detect and stanch security issues related to data inside your company, such as logistics or pricing anomalies, that could be a drain on resources. For example, if one of your products is selling faster than usual via a shopping app, it could be related to a pricing error. Do you really want that $450 device on sale for $4.50? (A short anomaly-detection sketch appears at the end of this article.) The mobile application landscape comprises a wide variety of operating system versions, devices, and software systems. This creates a much larger attack surface for attackers to target. (A first step to optimizing security is risk evaluation and awareness. Contact Fusion to hear more.)

2. Machine learning leads to increased mobile privacy

It could be argued that the recent news cycle around privacy indicates a real desire for clarity, if not outright skepticism. In more than 3,600 global news articles on ethics and AI from mid-2018 to mid-2019, the dominant topics were “framework and guidelines on the ethical use of AI, data privacy, the use of face recognition, algorithm bias, and the role of big tech.” You’ve heard about Russia’s role in the 2016 election and the use of personal information for ad targeting.
These sorts of debacles haven’t led consumers to give up on digital. Instead, they are demanding more privacy oversight and being more cautious about the apps they use. Privacy concerns are complementary to security issues. While security involves keeping personal data away from hackers, trolls, or criminals, privacy is about keeping personal data in a person’s own hands, away from any individuals or organizations that don’t need to be privy to it. For example, if you use an activity tracking app to record runs, you might appreciate a note when you hit a milestone: “You had a personal record today!” Machine learning makes it possible for the mobile app to detect this activity directly and send a congratulatory message without any human intervention. There’s no need for a stranger to know you clocked a fast 10K. Machine learning on the edge further increases privacy by eliminating the need for data to be sent to the cloud. When ML on the edge is in place, individualized data never leaves the device, keeping the user’s personal information in their own hands at all times. Amazon Alexa and Google Home employ ML on the edge, as some functions are handled on the device while others have to go to the cloud. In addition to supporting privacy, the reduced travel time for data makes these apps and devices faster.

3. Machine learning for mobile helps create personalized customer experiences

Consumers expect their demographic, behavioral, and other personal data to be secure and private, while they also want increasing levels of personalization. Delivering on these demands can be a delicate, real-time balancing act for companies, but machine learning helps make it possible to juggle data acquisition with protection, and to navigate those prickly questions around how to use the data to everyone’s advantage. But is there a clear business case for pursuing personalization? According to a 2019 Salesforce report, the answer is yes: 75% of the 8,000 consumers and business buyers surveyed expect companies to use new technologies to create better experiences. Machine learning for mobile enables you to make user-experience headway on several fronts. First, it can help you build a baseline of customer app usage. Once you have that baseline, you can see patterns in user behavior. Next, particular behaviors or deviations from the baseline can trigger delivery of a relevant coupon, a suggested product to explore, or a reminder to revisit an abandoned shopping cart. Even more sophisticated, ML can serve up the colors, screen layouts, and language that appeal most to a particular user. And with machine learning, the reactions happen in real time. The more your users engage with your mobile app, the more refined and personalized the experience becomes. Through machine learning, your brand becomes more closely aligned with the customer experience your customer desires. Getting started can feel uncomfortable at first, but at Fusion, we’ve found that organizations often have low-hanging fruit ripe to benefit from machine learning for mobile. You just need to be able to see, and then act on, those opportunities. Working alongside you on this journey should be people who understand data science and machine learning, and who can uncover the weaknesses to target. Now is the time to move forward on machine learning for mobile initiatives. Current market conditions indicate a shortage of professionals in machine learning and data science. Fusion fills this gap.
If you’re interested in hearing more about machine learning for mobile, let us connect you with one of our experts.
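Picking up the pricing-error example from the security section (“Do you really want that $450 device on sale for $4.50?”), here is a minimal sketch of one simple anomaly-detection approach, a z-score over recent prices. The numbers are invented, and production systems typically use richer models:

```python
import numpy as np

# Hypothetical recent prices for one product, with a decimal-point error.
prices = np.array([449.99, 450.00, 447.50, 452.25, 4.50, 451.00])

mean, std = prices.mean(), prices.std()
z_scores = (prices - mean) / std

# Flag anything more than 2 standard deviations from the mean.
for price, z in zip(prices, z_scores):
    flag = "  <-- anomaly" if abs(z) > 2 else ""
    print(f"${price:>7.2f}  z={z:+.2f}{flag}")
```

Run against a live pricing feed, the same idea lets the anomaly surface in real time, before the $4.50 device sells out.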
One of the most difficult aspects of search engine optimization is measuring the success of a campaign. Historically, SEO providers give their clients a list of “valuable” keywords and the position of the client’s website for each keyword. Unfortunately, this kind of ranking data provides a comparatively narrow view of search engine activity because it only measures expected results. In order to get maximum insight into an SEO campaign, key performance indicators (KPIs) need to be established to capture the complete picture.

Keeping score

Search engines weigh more than 300 data points when calculating the score, or rank, of a page. Measuring how your site, page, or campaign will perform for a specific request is a very challenging problem. A good starting point is to begin recording and measuring a variety of on-site and offsite indicators in order to develop a custom SEO solution based on the competitive landscape. You’ll need to test and develop metrics that provide relevant insight into your site’s ability to rank versus competing sites currently ranking for targeted traffic.

On-site measurements

On-site measurement begins by scoring the content of your website based on quantity, quality, and structure:

- The quantity score is the number of pages of unique content and the rate at which new pages are added to the website
- The quality score is more subjective, but relates to the relevance of the content to questions being asked by the target demographic
- The structure score looks at items like URLs and HTML tags to determine the ease with which the content can be indexed

All of these measures are then combined into a content score, which is compared to top-ranking competitors. The content scoring also identifies gaps in subject matter and opportunities for new topics. User experience is scored by measuring bounce rate, pages per visit, and time on your site. This data is evaluated on a device-type basis in order to make sure that all visitors have a similar experience. It is also important to understand your site’s performance by checking PageSpeed and YSlow scores. In addition, the actual response time of each page and its supporting assets determines a speed score.

Offsite measurements

Search engines rely extensively on outside factors to determine website relevance. Measuring several leading metrics to identify potential opportunities can expand the online reach of your site. Begin by evaluating incoming links to the website (follow and no-follow) to determine the number and quality of external websites linking to your site. During the backlink analysis, links that should be disavowed because of potential penalties related to Google Penguin must be identified for future action. Finally, check your social media activity to determine what content is being shared and/or discussed and the overall reach of your site on social media platforms.

Business factors

In order to measure the return on investment (ROI) of any online marketing activity, it is important to have well-defined goals for your website and visitors. Start by developing a value for each type of conversion. On an e-commerce site, it is very easy to define the value because the visitor has put items into a shopping cart and either completed the purchase or abandoned the cart. If your website supports a large brand or collects leads, the definition of value is more difficult. However, value always exists, and it is important to agree on a value in order to report on the business success of web activities.
Once the business goals are defined, you’ll need a method for segmenting your online marketing activities so each channel can be measured independently. SEO traffic has traditionally been defined as a website visitor who arrives from organic search results without a brand name in the search terms. However, with Google’s move to secure search, the availability of keyword data has been reduced significantly, and excluding brand-name keywords has become more difficult. To compensate for the lack of keyword data, you should now consider an SEO visitor to be any visitor who enters your site from organic search on a landing page that is not the home page (and possibly a few other pages).

This is when the fun begins. You can now measure visitor traffic from SEO and compare it to your other marketing activities. The ability to show a financial return on SEO is possibly the most important factor to business stakeholders and executives.

Measuring the success of your campaigns with KPIs

Using the metrics outlined above can provide a clear picture of where SEO effort is being applied and how it impacts your business financially. The reporting also identifies successful strategies that you can expand based on your KPIs. Remember that the measurement and execution of an SEO campaign never end: search engines test changes to their ranking algorithms daily, and new content, links, and social media activity are constantly in flux. Without continuous monitoring, even successful SEO campaigns may fail if they are discontinued.
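To make the segmentation rule described above concrete, here is a minimal sketch (the session fields and the page list are hypothetical; adapt them to your own analytics export):

```python
# Hypothetical pages whose organic entrances we do NOT credit to SEO,
# because visitors landing there often searched for the brand itself.
BRAND_LANDING_PAGES = {"/", "/about", "/contact"}

def is_seo_visit(medium, landing_page):
    """Classify a session as SEO traffic.

    `medium` is the session's traffic medium (e.g. "organic", "cpc",
    "referral"); `landing_page` is the path of the first page viewed.
    """
    return medium == "organic" and landing_page not in BRAND_LANDING_PAGES

print(is_seo_visit("organic", "/blog/seo-kpis"))  # True
print(is_seo_visit("organic", "/"))               # False: likely brand search
print(is_seo_visit("cpc", "/blog/seo-kpis"))      # False: paid, not organic
```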
The B2B and B2C markets are abuzz with the terms artificial intelligence, machine learning, and deep learning. But what do these terms mean, exactly? They’re often used loosely, and you may think they’re interchangeable. They’re not. Here’s a short overview of artificial intelligence, machine learning, and deep learning to help you cut through the static and determine which solution is right for you and your business.

Working definitions

Artificial intelligence (AI) is the broad discipline that includes anything related to developing machines that are “intelligent” through programming. This includes many items you’re familiar with, from smartphones and marketing software to chatbots and virtual assistants.

Machine learning (ML) refers to machines and systems that can learn from “experience” supplied by data and algorithms. ML is often used interchangeably with AI, but it’s not the same thing: ML is a developmental outgrowth of AI.

Deep learning (DL) is a further developmental outgrowth of ML, applied to even larger data sets. It uses multi-layered artificial neural networks to deliver high accuracy on assigned tasks.

In terms of historical development, AI came first. It is the foundational discipline from which ML evolved, and ML is the foundational discipline from which DL evolved. One way to conceptualize their relationship is as nested arenas of AI development along a timeline.

Artificial intelligence overview and how it works

In its broadest sense, AI refers to machines programmed to act according to well-defined rules and responses. The responses are confined to the set of rules provided, and the machines can’t deviate from those rules except by failing. A very basic example of AI is your clothes dryer: you set a specific time and temperature, and the machine performs the task according to the instructions given. It can’t make decisions or changes by itself. A more sophisticated example is configuring your CMS to deliver personalized website experiences. By analyzing a targeted selection of data points about your customer and writing the appropriate logic, your website can display the most relevant content. In neither case is the machine capable of being more than its programming, even if that programming makes the machine very capable at its assigned tasks.

Machine learning overview and how it works

“ML is the science of getting computers to act without being explicitly programmed.” (Stanford University)

Machine learning is a different approach to developing artificial intelligence. Instead of hand-coding a specific set of rules to accomplish a particular task, in ML the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task. Over the years, the algorithmic approaches within ML have included decision tree learning, inductive logic programming, linear and logistic regression, clustering, reinforcement learning, and Bayesian networks. Currently, there are three general models of learning used in machine learning:
- Supervised learning
- Unsupervised learning
- Reinforcement learning

Supervised learning

Right now, most machine learning is supervised, which still requires a lot of human intervention to accomplish the training. For example, a “supervisor” has to tell a spam filter what to look for in spam vs. non-spam messages (e.g., look for the words “Western Union” or links to suspicious websites) until the machine has gained enough “experience” to learn and accurately apply the distinctions. The training goes something like this: the algorithm is first trained on an input data set of millions of emails already tagged with spam/not-spam classifications, which teaches the ML system the characteristics of “spam” email and how to distinguish it from “not spam” email.
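For instance, here is a minimal sketch of a supervised spam classifier (the emails and labels are hypothetical toy data, and scikit-learn’s naive Bayes stands in for whatever a production system would use):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical labeled data set: 1 = spam, 0 = not spam.
emails = [
    "Claim your free prize via Western Union today",
    "Meeting moved to 3pm, see agenda attached",
    "Act now: suspicious-site.example offers easy money",
    "Quarterly report draft for your review",
]
labels = [1, 0, 1, 0]

# Bag-of-words features + naive Bayes: the "supervisor" is the labeled data.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["You won a prize, wire fees via Western Union"]))  # likely [1]
```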
Unsupervised learning

In general, unsupervised learning is more difficult to implement than supervised learning, but it’s very useful when your data set has no known answers and you’re searching for hidden patterns. The system has to train itself from the data set provided. Two popular types of unsupervised learning are clustering and association.

Clustering groups similar things together: it divides the elements of an existing data set into groups according to a previously unknown pattern. For example, you might define a customer demographic and then cluster those customers by education or income, factors that could tilt their purchasing decisions toward one product or another. This would allow you to target each cluster of customers more effectively.

Association involves uncovering rules that describe large portions of your data, for instance, “People who buy X also tend to buy Y.” Online book or movie recommendations are based on association rules uncovered from your previous purchases or searches. Association algorithms are also used for shopping-cart analysis: given enough carts, the association technique can help predict another item you might like to put into your cart.
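Here is a minimal sketch of the clustering idea (the customer data is hypothetical, and k-means is just one of several clustering algorithms):

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row: [annual_income_thousands, years_of_education] (hypothetical values).
customers = np.array([
    [35, 12], [42, 12], [38, 14],
    [95, 18], [110, 20], [88, 16],
])

# No labels are provided; the algorithm discovers the grouping itself.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: two income/education segments
```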
Reinforcement learning

In reinforcement learning, the system learns by trial and error through training characterized by virtual rewards and punishments. The approach comes from child-development research: instead of telling a child which piece of clothing goes in which drawer, you reward the child with a smile for a right choice and make a sad face for a wrong one. After just a few iterations, the child learns which clothes go in which drawer. In reinforcement learning, complex algorithms are designed so the machine searches for an optimal solution. Operating on the principle of reward and punishment, it moves quickly through mistakes and near misses toward the correct result by adjusting the weight of previous results against the desired outcome. This allows the machine to make a different, better decision each time until it is rewarded.

Deep learning overview and how it works

Deep learning is the newest area of ML and AI. It uses multi-layered artificial neural networks to accomplish tasks such as object detection, speech recognition, and language translation, all with an extremely high degree of accuracy. Artificial neural networks (ANNs) are inspired by the biology of the human brain, specifically the connections between neurons. The human brain analyzes incoming information and identifies it via neuron connections according to past information stored in memory, labeling and assigning information to various groups almost instantaneously.

Similarly, when a system receives an input, deep learning algorithms train the artificial neurons to identify patterns and classify information to produce the desired output. But unlike the human brain, artificial neural networks operate via discrete layers, connections, and directions of data propagation.

Despite the sophistication of its algorithms, DL is still just another method of statistical learning that extracts features or attributes from raw data sets. The major difference between deep learning and machine learning is that in the latter you need to provide the features manually, while DL algorithms extract features for classification automatically. This ability requires a huge amount of data to train the algorithms, and the accuracy of the output depends on the amount of data: deep learning requires huge data sets. Additionally, because of its sophisticated algorithms, deep learning requires very powerful computational resources, typically specially designed, often cloud-based machines with high-performance CPUs or GPUs.

There are several kinds of artificial neural networks and DL applications you may have already heard of:
- Convolutional neural networks (CNNs) are deep artificial neural networks used to classify images, cluster them by similarity, and perform object recognition. These are the algorithms that can identify faces and tumors and help navigate self-driving cars (see the sketch after this list).
- Generative adversarial networks (GANs) are composed of two neural networks: a generative network and a discriminative network. GANs are very popular in social media; fed a large enough data set of faces, a GAN can create completely new faces that are very realistic but nevertheless fake.
- Natural language processing (NLP) is the ability to analyze, understand, and generate human language, whether text or speech. Alexa, Siri, Cortana, and Google Assistant all use NLP engines.
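To ground the CNN idea, here is a minimal sketch of a small image classifier in TensorFlow/Keras (the layer sizes are illustrative, and 28×28 grayscale inputs with 10 classes are an assumption, e.g. MNIST-style digits):

```python
import tensorflow as tf

# A small convolutional network for 28x28 grayscale images, 10 classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # learn local features
    tf.keras.layers.MaxPooling2D(),                    # downsample
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The feature extraction that a “supervisor” would hand-code in classic ML is learned here by the convolutional layers themselves.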
Putting AI, ML, and DL to work for you

What most of us think of as AI is, more accurately, machine learning. But understanding the history, development, and distinctions between artificial intelligence, machine learning, and deep learning can help you determine which solution is right for your goals. The solution you choose also depends on the amount and type of data you have access to. Within the last couple of years, almost every company has begun using machine learning or deep learning (and therefore, by definition, artificial intelligence) in some capacity to move their business forward. The competitive gauntlet has been thrown down. Fortunately, tools that were previously available only to enterprise-size companies are now affordable and accessible to mid-market companies, making machine learning the most accessible playground right now. Fusion Alliance provides cloud infrastructure and other ML services that accelerate machine learning modeling, training, and testing for our banking, financial, and retail customers.

Read more about our work in AI, ML, and DL:
- 3 Fusion experts share their machine learning secrets
- Unlock customer credit insights with machine learning
- How Donatos uses machine learning to retain customers
- Conversational marketing and machine learning are shaping the future of retail

In the quest to solve its most pressing challenges, the banking industry is being transformed by its adoption of artificial intelligence (AI) and machine learning (ML). Financial institutions are under pressure to better understand their customers, drive a more personalized customer experience, acquire new business, forecast risk, prevent fraud, comply with increasing regulations, improve processes . . . the list goes on. Most banks continue to use traditional, expensive analytics tools to tackle these challenges, but they struggle to keep pace with demands, and the tools are difficult to maintain. Machine learning relies on statistical and artificial intelligence approaches to rapidly uncover patterns in complex data, patterns that can’t be discovered with traditional tools.

The impact of machine learning in banking

While adoption of machine learning in finance is still in its early stages, institutions that have leveraged this secret sauce are finding it to be a differentiator. For example, a large regional bank used ML to predict institutional customers’ likely deposits on a daily basis, freeing $40 million in excess cash reserves. Another institution, the credit union service organization Primary Financial Company (PFC), used ML to synthesize financial and competitive data to price securities, identify buyers, and project trade profitability. PFC can now ascertain, with over 80% accuracy and 70% precision, the likelihood that a particular investor will buy a given investment. For these companies, early ventures into ML have clearly moved the needle on what they can accomplish.

We spoke with three artificial intelligence and machine learning experts at Fusion Alliance to tap into their experience with banks, learn where the market is headed, and get answers to some common questions.

Q: What do you see as the 2020 trends in machine learning for banks and credit unions?

A – John Dages: 2020 is the year machine learning becomes more democratized. Historically, machine learning engagements have required substantial data science and model training investments. However, the major ML platforms are evolving to provide advanced automated machine learning and feature-analysis toolchains, lowering the barrier to entry for ML projects. Our team is also actively monitoring new “explainability” techniques that add deeper transparency to ML-based predictions and insights. Historically, the black-box nature of some ML algorithms (specifically deep neural networks) has made it difficult to relate them to business principles. Ideally, these emerging techniques will increase confidence in ML models early in their lifecycle. In the banking sector, we have seen a great deal of capital chase trading and investments, but we are also seeing ML flow into loan operations, cash management, and general risk.

A – Sajith Wanigasinghe: Machine learning applied to fraud detection is a major trend. Artificial intelligence is beneficial here because ML algorithms can analyze millions of data points to detect fraudulent transactions that would tend to go unnoticed by humans. At the same time, ML helps improve the precision of real-time approvals and reduces the number of false rejections. Another leading trend is using robo advisors for portfolio management. Robo advisors are algorithms built to calibrate a financial portfolio to the user’s goals and risk tolerance.
Chatbots and robo advisors powered by natural language processing (NLP) and ML algorithms have become powerful tools for providing a personalized, conversational, and natural experience to users across domains.

A – Patrick Carfrey: Personalized delivery of banking services is going to improve in 2020. New products are entering the marketplace that enable consumer and commercial bank customers to receive relevant account information in real time, at the level of detail and timeliness that customers want.

Q: What is your favorite machine learning use case for banks right now?

A – John Dages: Machine learning will change the way banks see credit risk. FICO and the five C’s of credit are limited in features, captive to three agencies, potentially biased, and outmoded. The models we are building will allow lenders to view a complete picture of a borrower, offering customized predictions of creditworthiness. Banks that adopt this model will see an increase in lending opportunities while better understanding the liabilities on their balance sheets.

A – Sajith Wanigasinghe: Customer lifetime value is my favorite use case: predicting how valuable a customer will be within X number of years, so the bank can establish a good relationship with the customer in the early stages.

A – Patrick Carfrey: Remarketing and cross-selling are powerful options for banks right now. Given all the customer data banks own, including deposits, transactions, and more, ML can tell whether a customer is a good target for a new product in the bank’s portfolio. This is especially relevant as customers expect more, and being able to predict customer needs meets that expectation.

Related Article: 4 ways banks can leverage the power of machine learning

Q: What is the one machine learning data tool you can’t live without?

A – John Dages: Excel. Sure, the enterprise data tools are highly capable (and the team spends a lot of time there), but the ability to quickly navigate data, perform simple transforms, and share data with a tool everyone knows is critical. I can’t remember a project where we didn’t get exemptions to install Excel in the banks’ data centers.

A – Sajith Wanigasinghe: The TensorFlow framework is one tool I can’t live without; it’s the number one framework I use every day, on 99% of our projects. TensorFlow is an open-source machine learning library that helps you develop ML models. The Google team developed it, and its flexible ecosystem of tools, libraries, and resources lets me build and deploy machine learning applications.

A – Patrick Carfrey: TensorBoard. This is TensorFlow’s visualization toolkit, and it provides a nice visual interface for tracing key metrics through the model training pipeline. Deep learning models can get complex quickly, and being able to explore a model outside the command line is nice. Clients love the graphs, too!
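For readers who haven’t tried it, here is a minimal sketch of wiring TensorBoard into a Keras training run (the tiny model and the random training data are purely illustrative):

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in data: 256 samples, 8 features, binary labels.
X_train = np.random.rand(256, 8)
y_train = np.random.randint(0, 2, size=256)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Log per-epoch loss/accuracy; then run: tensorboard --logdir logs/
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs/")
model.fit(X_train, y_train, epochs=5, callbacks=[tensorboard_cb])
```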
Q: What are the biggest machine learning myths you wish more people understood?

A – John Dages: For teams beginning to develop an AI/ML center of excellence, there is a gravitational pull toward the cutting edge (deep learning, cognitive science, and the like). While there is obviously value there, there is a multitude of “traditional” machine learning practices and algorithms that are lower complexity. A deep neural network should be the last resort, not the first option!

A – Sajith Wanigasinghe: That machine learning and AI will replace humans. In fact, machine learning and AI will help you do your job faster and better, and free you to focus on the satisfying and important human elements of your role, including creativity and strategy. Think of machine learning and AI as a tool, not a replacement for humans.

A – Patrick Carfrey: On every machine learning project I’ve delivered, our clients inevitably ask, “We love the model, but can you tell us more about how it is making its predictions?” This is a surprisingly challenging question to answer, particularly for black-box neural networks. Fusion has a variety of techniques to provide additional detail, but they aren’t necessarily directly correlated to the actual model we’ve developed. If it is insights you seek, not decisions, consider business intelligence tools and processes in lieu of machine learning. There is room for both!

Meet our panel of experts:

John Dages
With 15+ years of technology leadership experience, John brings a unique perspective to companies on their advanced analytics journey. He has led numerous machine learning initiatives for large enterprises across industries, ranging from customer acquisition and retention to securities pricing and trade analytics. John’s background in application development, analytics, systems integration, and I&O helps him formulate how businesses can use data to drive competitive advantage and engineer true intellectual property.

Sajith Wanigasinghe
Sajith is an expert in machine learning, artificial intelligence, and enterprise-wide, web-based application development. He applies his experience and insights to help enterprises identify and solve challenges across the business that are ideal for machine learning. Sajith has led teams that revolutionized the financial, insurance, food, and retail industries by introducing advanced, intelligent forecasting systems powered by machine learning and artificial intelligence. He holds a B.S. in computer science from Franklin University.

Patrick Carfrey
Patrick joined Fusion Alliance over six years ago, leading a variety of application development initiatives for a flagship Fortune 500 client. Patrick is a firm believer that software is social, choosing to spend as much time as possible in front of end users to build the best product. In that capacity, he has developed and deployed practical machine learning solutions that help clients understand and predict customer behavior to drive maximum engagement. He is the Java Competency Lead at Fusion and holds a B.S. in computer science and engineering from The Ohio State University.
Ready to talk?
Let us know how we can help you out, and one of our experts will be in touch right away.