
Top 10 Big Data Trends in 2016 for Financial Services


2015 was a groundbreaking year for banking and financial markets firms as they continued to learn how big data can help transform their processes and organizations. Now, with an eye towards 2016, we see that financial services organizations are still at various stages of their big data activity in terms of how they’re changing their environments to capture the benefits it can offer. Banks continue to make progress on drafting big data strategies, onboarding providers and executing against initial and subsequent use cases.

For banks, big data initiatives still revolve predominantly around improving customer intelligence, reducing risk, and meeting regulatory objectives. These are all activities large Tier 1 financial firms continue to tackle and will for the foreseeable future. Down-market, we see mid-tier and small-tier firms (brokerage, asset management, regional banks, advisors, etc.) able to adopt new data platforms (cloud and on-premise) more rapidly, helping them leapfrog the architectural complexities that their larger brethren must work against. This segment of the market can therefore move more quickly on growth, profitability and strategic (conceptual/experimental) projects aimed at more immediate revenue contribution, versus the longer-term, compliance- and cost-dominated priorities that larger banks are focused on.

The market for data software and services providers is moving closer to a breaking point where banks will need to adopt, on larger scales and with greater confidence, solutions to manage internal operations and client-facing activities. This is not unlike the path we have seen cloud technologies take.

Here are some predictions about how big data technologies are evolving, and how these changes will affect the financial services industry:

  1. Machine learning will accelerate, and will be increasingly applied within the fraud and risk sectors. Data-scientist demand and supply continue to move towards equilibrium. Advanced techniques will start being applied within fraud and risk that improve models and allow acceleration towards more real-time analysis and alerting (a minimal sketch of this kind of anomaly-based alerting follows this list). This acceleration will come from education and from the real-world applications of market leaders.
  2. Gaps will become more evident between the leaders and the laggards. Each year we see banks that press the gas pedal and are ready to adopt new technology, and those that remain conservative in their efforts to run/change their organization. The stories and use cases will proliferate and become more varied in 2016, and will lead to increased evidence of strong, observable and benchmarked business returns (not just cost takeout) in the broader market.
  3. Data governance, lineage, and other compliance aspects will become more deeply integrated with big data platforms. Lacking a complete and comprehensive data solution to manage compliance mandates, many banks develop or purchase point solutions, or they press existing legacy platforms that cannot handle the data surge. Fortunately, there are an increasing number of improved data governance, lineage and quality solutions for Hadoop. More importantly, these new platforms can reach beyond Hadoop and into traditional/legacy data stores to complete the picture for regulation, and they are doing so with the volume, speed, and detail needed to achieve compliance. In addition, 2016 will continue to see the push for “data lakes” that can serve as converged regulatory and risk (RDARR) hubs.
  4. Financial services organizations are struggling to understand how to leverage IoT data. This is the next wave of hype grabbing attention in big data, and questions abound in terms of financial services applications. For some industries (telco, retail, and manufacturing) it is already a reality, and these segments have driven the need for IoT data and forced the current conversation. For banks, will IoT data be used more for ATM or mobile banking? Areas worth exploring over the coming year involve multiple streams of activity in real time. For example, real-time, multi-channel activities can use IoT data to deliver the right offer and advice to retail banking customers at the right time. Or perhaps we should think about this in reverse, where financial firms could embed their services into the actual “thing” or device or other client touch points, not unlike trading co-location facilities that then report home.
  5. Integration into trade, portfolio management, and advisor applications becomes a more prominent feature for software providers. The drum roll of headlines about “gaining benefit from big data” beats louder. Ultimately, this will be judged by end users in the financial sector on the observed (or unobserved, yet measured) benefits and ease of use. Applications built off the core of big data platforms will provide that bridge, and sharpen the spear that is big data. We’ve already seen this push with the likes of market data providers, but not with other business user applications, be it CRM, OMS/EMS, etc.
  6. Risk and regulatory data management continue to be the top big data platform priorities. Growth and customer-centric activities sit atop the list of corporate strategies, and there will be firms that can link those strategies to big data. Regardless of whether your bank is an advanced data-driven firm, the move to aggregated risk and predictive analytics demanded by evolving regulations is still a ways off, yet it remains a requirement and an acknowledged benefit at the C-suite level. Unless heaven opens and regulators ease up on their requirements, risk and regulatory data management will still be the major challenges for financial institutions in 2016.
  7. The adoption of Hadoop for data storage and access will proliferate within financial services. Everyone arrives at the party at different times, and adopting technology is no different. The “long tail” of adoption is far from here, but middle-market and even small-tier banks will begin to see the benefits of Hadoop based on:
    • Providers that are bundling/integrating and delivering more complete solutions, services and platforms
    • A community of users that continues to grow and provide the reference base to jump into the pool

    Data offload is now a “classic” use of Hadoop (relatively speaking), while the cool kids move on to larger big data playgrounds, and the masses will climb on board for this application of big data.

  8. Financial services “big data killer apps” gain wider recognition in the market. These have been the FinTech incubators over the past two to three years, and they are helping to form the front-end links needed between the end user and the data platforms. Expect to see more banks running proofs of concept with these applications, which will validate the software and provide the basis for “complete solutions.” Both the front end and back end should be optimized in concert, rather than as separate projects. We see this market rapidly expanding from the service integrator end as well. This will usher in the discussion of how “big data software” vs. “legacy software” will be adopted by banks.
  9. Operations is, and always has been, the last frontier to gain traction. As more reliable “big data” platforms emerge, the idea of security masters, deeper metadata enrichment, ontologies, integrating LEI, and other standards becomes a stark reality. The traditional data approaches are valid, yet some of the thinking will need to turn on its head to gain the full leverage of new solutions, for example in dealing with schemas and data modeling. Further, with the work of big data largely taking form in the front office, marketing and risk, there are obvious and enormous overlaps of data in middle- and back-office operations that can more easily leverage existing data lake efforts. We expect risk assessment and performance-related big data activities in the middle office to increase rapidly. Further, we will see deeper dialogue on how to actually bring back-office functions (reconciliation, corporate actions, etc.) on board as well.
  10. The institutional side of the bank will start to adopt and take cues from the retail line of business on ways to improve understanding, marketing, and targeting of clients. There are certainly some pure B2B firms leveraging big data for improved client intelligence, but largely they take a back seat to the retail B2C lines of business, be it credit card, retail banking, wealth management or lending. An easy crossover is for fund complexes (large mutual fund managers) to improve data collection from wealth advisor networks and broker interactions, and to better understand product utilization. This is especially important as mutual funds are typically once removed from their retail client base, so understanding their institutional clients (advisors) is vital.
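
To make the machine learning trend (item 1 above) concrete, here is a minimal, hypothetical sketch of anomaly-based fraud alerting in Python using scikit-learn’s IsolationForest. The transaction features, fraud rate, and data are all invented for illustration; a production system would score live transaction streams against far richer models.

```python
# Minimal sketch: anomaly-based fraud alerting with scikit-learn.
# All data here is synthetic; a real deployment would score live
# transaction streams and route alerts to case-management tools.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transaction features: amount, seconds since last txn,
# and distance (km) from the account's usual location.
normal = rng.normal(loc=[50.0, 3600.0, 5.0], scale=[20.0, 1800.0, 3.0], size=(1000, 3))
suspect = rng.normal(loc=[900.0, 30.0, 400.0], scale=[200.0, 15.0, 100.0], size=(10, 3))
transactions = np.vstack([normal, suspect])

# Fit an unsupervised model; 'contamination' is the assumed fraud rate.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# -1 flags an anomaly; in production this would trigger a real-time alert.
flags = model.predict(suspect)
print(f"{(flags == -1).sum()} of {len(suspect)} injected outliers flagged")
```

The unsupervised approach matters here because labeled fraud examples are scarce; a model like this flags outliers without needing them, which is one reason these techniques lend themselves to the real-time alerting described above.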

Confidence remains key for many larger banks and other financial services firms in adopting new provider “big data” solutions. That said, looking towards 2016, there will be a greater push from management to move “big data” projects out of IT and into the hands of the business user. In order to do so, there will be a host of architectural, functional, speed, availability, and security questions to consider. As always, the need to apply traditional rigor to new architectural layouts does not change, as the cost and sprawl seen in traditional architectures will begin to surface in new Hadoop and converged big data build-outs.

Further, there will and should be strong leverage and utilization of existing staff/processes for activities such as data governance, quality, reference data management, and standards. This will require continued education of all parties, namely those outside of IT, to understand the rapid developments in the marketplace.

Lastly, there will be a growing conversation on what balance of open source and provider solutions makes sense. Not all open source projects are designed to fit the needs of the institutional user, yet open source delivers the agility required going forward. Each bank’s requirements will vary, and finding the right mix will be vital to accelerate efforts with big data, which is really all data. All told, the market in 2016 will move forward and evolve to reduce confusion, which will calm the currents in this swaying ocean of “big data.”


Is Your Data Worth Anything?


One signpost that a firm is maturing its analytic and data-driven capability is when it monetizes data. IDC’s Dan Vesset made this point to me, and I agree. Specifically, I’m talking about data housed within the firm that has not previously been used or leveraged as a source of revenue. It could be dark data (stored and largely unused) or internal data that is frequently used but not shared. That said, understanding where and how to monetize your data turns out not to be such an easy task either. Why financial firms (or firms in other industries, for that matter) aren’t finding this easy comes down to a few key factors. While not exhaustive, the big ones tend to be:

  • They lack a robust data-driven (digital, information, analytical, etc.) strategy, or are lost on what that is in the first place
  • They don’t know which of their data is valuable, or which could be
  • New cultural, behavioral and business model considerations need to be made. These don’t have to be grand, corporate-wide sweeping initiatives; they can start more focused and tailored.

What does data monetization look like? There are actually very good examples in financial services from new and old firms alike. New, digitally native business models are largely doing this as a big data play, as seen in the financial peer-to-peer and crowd-funding/sourcing segment. One good example is Estimize, a give-to-get model where crowd-sourced estimates are turning the model of analyst stock earnings forecasts upside down. On the site, anyone can contribute to the earnings consensus for a particular stock. As it turns out, the predictions are eerily more accurate than what you will get from the Street. That data then becomes something Estimize turns around and sells. Another post I’m writing will look more closely at these new data business models, but for now it’s an example of how data is being cultivated and sold. The best examples of older, more traditional financial firms winning here are credit card and payments firms. Other good old-guard examples are financial firms (intermediaries) that sit at the intersection of markets, such as State Street, which fairly recently launched Global Exchange to take advantage of the massive amounts of data it collects across markets and offer advanced risk analytics to its clients.
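
As a toy illustration of how a give-to-get model can turn contributed estimates into a sellable data product, here is a hypothetical sketch. The contributors, numbers, and accuracy weighting are invented for illustration and are not Estimize’s actual methodology.

```python
# Hypothetical sketch of a crowd-sourced earnings consensus.
# Contributor names, estimates, and the accuracy weighting are
# invented; the real Estimize methodology is its own.
estimates = [
    # (contributor, EPS estimate, historical accuracy score 0..1)
    ("analyst_a", 1.32, 0.91),
    ("analyst_b", 1.28, 0.74),
    ("analyst_c", 1.45, 0.55),
    ("analyst_d", 1.30, 0.88),
]

# Simple accuracy-weighted mean: better past forecasters count more.
total_weight = sum(w for _, _, w in estimates)
consensus = sum(eps * w for _, eps, w in estimates) / total_weight
print(f"Crowd consensus EPS: {consensus:.2f}")
```

The point is less the arithmetic than the model: each contribution improves the aggregate, and the aggregate becomes the product.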

While not in financial services, Bloomberg Businessweek’s recent article covering The Weather Channel provides a great read on how a whole company transformed itself with this approach. It offers a nice glimpse into how it could be done and the success it can wring for a firm.

There are a few particularly interesting highlights in this story. First, the business was stuck in a traditional model and was seeking a way out. Whether it is a data-driven, innovation, growth, or some other strategy being worked through, having the top ranks recognize the opportunity and move on it is always cited as a starting point, and that is exactly how The Weather Channel started. The next important element is that they worked diligently to close the ‘digital loop.’ This means being able to digitally account for or process information across a life-cycle from start to finish. In this case, mobile was not only a pillar and linchpin in their digital strategy, but also a key component in tightening their digital loop. A firm may not have every aspect of its processes collapsed into a digital workflow, as we often see with new-economy e-commerce companies, but diligently working to improve it is a major feat and accomplishment unto itself.

The last takeaway I had was that the two steps above paved a path to data monetization. The Weather Channel’s data-driven strategy is realized as it is able to apply advanced analytics to its data and recognize a major new source of revenue. And the great part is how they combined their competitively advantageous weather data with partners (Procter & Gamble) to gain insights that have begun to force them (or even us) to rethink some basic assumptions about weather, marketing and how people buy products, as noted in the article:

In the beginning, WeatherFX assumed it would have to break down its data along the traditional marketing demographics of age, gender, and race, but it quickly realized it didn’t. No matter who they are or what they look like, people in the same place mostly react to weather the same way.

Some thoughts on tackling the challenges. Work to understand what being data-driven means and how it’s achieved. Having researched and written on it more deeply over the past several months, it’s not as obvious as one might think; it can’t be an “I’ll know it when I see it” approach. While the word and concept are used often (and have been for a while), there is surprisingly little that examines what being data-driven really means and how to make it a strategic part of your business.

The second is how to go about realizing what data is of worth internally. Here are some basic criteria (with an assist from Gartner) for understanding what value your data has, or could have, ordered from strongest form to weakest:

  1. Scientific approach: firms are using advanced statistics (doing more with math and computing power) and being more experimental to test the correlations, relative values and accuracies of data, getting a firmer grip on what influences or causes something else (see the sketch after this list).
  2. Timeliness: the most valuable information tends to be fresher, closer in time to the events it describes.
  3. 1st party data: data that is unique to a firm, about its clients, products or stores, that no one else has; this is a cornerstone of data as a competitive advantage.
  4. 2nd party data: data that evolves from alliances or partnerships; again a unique set of information that is not openly available, yet not entirely your own.
  5. Open or 3rd party data: the lowest level, freely available data that is public or from 3rd parties and can be purchased by anyone.
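
As a minimal illustration of the scientific approach in criterion 1, the sketch below tests whether a candidate data field carries any real signal about a business outcome before treating it as valuable. All names and numbers here are synthetic, invented purely for illustration.

```python
# Sketch of the "scientific approach": check whether a candidate data
# field actually correlates with an outcome you care about before
# treating it as valuable. Data is synthetic for illustration.
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Hypothetical question: does branch foot traffic relate to sign-ups?
foot_traffic = rng.normal(200, 50, n)
signups = 0.05 * foot_traffic + rng.normal(0, 5, n)   # built-in signal
noise_field = rng.normal(0, 1, n)                      # no real signal

for name, field in [("foot_traffic", foot_traffic), ("noise_field", noise_field)]:
    r = np.corrcoef(field, signups)[0, 1]
    print(f"{name}: correlation with signups = {r:+.2f}")
```

Simple correlation is only a first pass (it says nothing about causation), but even this level of testing separates data with demonstrable value from data you merely assume is valuable.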

The last major challenge is changing the business or culture with leadership. Scores of books have been written on the topic, and frankly I couldn’t do it justice here. So perhaps start with Inc.’s Top 10.

PwC put out an article, The Data Gold Rush, that is worth a read as well; it further explores which business models can be considered when trying to monetize data. Articles about big data continually find that discovering new products is one of the potential benefits of mining your data. That new product may very well already be sitting on your servers.

Opportunity to not be ineffective with your data strategy in 2013



Data governance is a trend that has moved back to the forefront of the “how to manage information” discussion. A recent WS&T article highlights the use of deeper analytics for trading and points to data management as a central problem for capital markets firms. While deeper analytics is required to propel the business, one glaring issue typically found when analytical platforms are deployed is the lack of completeness, timeliness, accuracy and overall data quality at many top-tier capital markets firms. Yes – still. It seems shocking after so many years of hard work and dollars spent. Many firms still struggle to understand the impact on liquidity and the G/L or balance sheet of various trade-related functions (e.g., pre-trade scenario analysis).

Certainly, the notion of more data to improve analytics is compelling, and there is tremendous value in mining big and small data. However, to make 2013 the year when firms improve their information strategies, formalized efforts should be made to improve the data that capital markets firms already work with. The improvement comes from data quality enhancement that is defined and refined by a formalized data governance initiative.
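
As a hypothetical sketch of what codified data quality rules might look like, the Python below checks completeness, timeliness and a simple accuracy rule on trade records. The record layout, field names and thresholds are invented for illustration; in practice, the rules would come out of the governance charter agreed between business and IT.

```python
# Sketch of basic data-quality checks (completeness, timeliness,
# accuracy) that a governance program might codify. The record
# layout and thresholds are hypothetical.
from datetime import datetime, timedelta, timezone

trades = [
    {"trade_id": "T1", "notional": 1_000_000, "counterparty": "ACME",
     "booked_at": datetime.now(timezone.utc) - timedelta(minutes=5)},
    {"trade_id": "T2", "notional": -50_000, "counterparty": None,
     "booked_at": datetime.now(timezone.utc) - timedelta(days=2)},
]

def quality_issues(trade, max_age=timedelta(hours=24)):
    issues = []
    # Completeness: required fields must be populated.
    for field in ("trade_id", "notional", "counterparty", "booked_at"):
        if trade.get(field) is None:
            issues.append(f"missing {field}")
    # Timeliness: stale records distort intraday liquidity views.
    booked = trade.get("booked_at")
    if booked and datetime.now(timezone.utc) - booked > max_age:
        issues.append("stale booking timestamp")
    # Accuracy: a simple plausibility rule as a stand-in for the
    # richer rules a governance charter would define.
    notional = trade.get("notional")
    if notional is not None and notional <= 0:
        issues.append("non-positive notional")
    return issues

for t in trades:
    print(t["trade_id"], quality_issues(t) or "clean")
```

The value of writing rules down this way is that business and IT are agreeing on an executable definition of “good data,” rather than arguing about it after the analytics disappoint.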

Establishing a data governance charter, program and action group is essential to create organization, definition and agreement between business and IT and avoid mishaps that typically arise from past data quality projects. This step is critical to operationalize data successfully for analytics, and it must be done in a timely manner. Successes are required to reach the deeper analytics needed for understanding where trade and operational profit, cost and customer opportunities exist.

The impetus to formalize data governance programs is no longer just meeting regulations. IT is pressured from the business side to produce outcomes from these multiyear data transformation ‘projects’ as well; the business is tired of waiting for results. If providing reliable and actionable data to the business isn’t enough to persuade the firm to truly work through data governance, then perhaps market survival is. Some leading sell-side firms have decided to go all in and attempt to tie existing data efforts into a governance program so they can gain the competitive advantages clean data brings.

We’ve recently seen the U.S. Congress take a large and fundamentally vital component of the country’s future economic well-being and merely apply a short-term band-aid solution. In many ways, data is as important to capital markets firms as tax implications are to the country – let’s not allow firms to head over the data cliff.