Incedo Inc.

The wealth management industry is transforming rapidly as it pivots towards the fee-based advisory model. The advisory model by its nature requires a deeper relationship with clients than the commission-based model, which is more transactional in nature. At the same time, wealth managers are facing challenges from:

  • Changing client mix and expectations
  • Fee compression
  • Fintech disruption

You can read about how digital disruption is shaping the wealth management industry in our previous blog. Investment management has been commoditized and is no longer a differentiator: a robo-advisor can perform portfolio allocation much better and at a much lower cost than a human advisor. If advisors expect to charge more than robo-advisory fees, they need to offer personalized financial advice based on a holistic understanding of the client’s life stage, risk profile, investment objectives, and preferences.

Issues with Traditional Segmentation Methods

The foundation of all personalization efforts is rooted in understanding clients better and segmenting them along the dimensions of client value, potential, demography, behavior, etc. Client segmentation is informally practiced at wealth management firms but tends to suffer from some limitations:

  1. Static segmentation – Client segmentation is not a one-time exercise; it is a continuous and dynamic process. Customers move from one segment to another over time, and their investment preferences and risk profiles can change with their own life events or with market conditions. For example, in the current zero-interest-rate environment there has been more demand for riskier instruments even from conservative investor segments. Traditional, one-time segmentation tools cannot account for these drifts and can therefore produce stale results.
  2. Focused only on value – Every practice or financial advisor informally knows their most valuable clients as measured by assets under management or fees/commissions earned. But value-based segmentation only provides descriptive inputs about the advisory practice. It ignores other key parameters that can help in personalizing investment decisions, service models, etc. For example, life stage correlates directly with investment recommendations: 529 plans for mass affluents with young kids, or IRA rollover recommendations for pre-retirees.
  3. Not scalable – Informal or semi-automated segmentation methods have trouble scaling as the number of clients grows and segmentation variables multiply. Traditional segmentation models can place clients in one segment or another but tend to produce mixed results as the number of variables and data points increases. On average, an advisor has about 80-100 clients. For a large advisory practice with multiple advisors, or ensembles with a shared servicing model, it is not possible to keep track of all clients and their changing variables without automation.
  4. Limits personalization – The end objective of segmentation is not just to place clients in one bucket or another; it needs to inform the decision-making process for a personalized next best action. A static or non-automated segmentation process stays mostly at the macro level. To personalize client recommendations, micro-segments need to be created, and manual segmentation methods struggle with that objective. For example, within the retirees macro-segment, the objectives, risk profiles, and investment patterns of early retirees differ from those of late retirees. In the accumulation stage, the investment objectives of investors with kids differ from those of double-income households with no kids. Unless we create micro-segments, wealth managers will continue to provide advice that is at best generic and at worst non-contextual.
Segmentation needs to be dynamic, scalable, micro level and should inform the Next Best Action

Growing use of ML/ Data science in Client Segmentation

Although the use of data science and machine learning is growing in the wealth management space, the industry still lags other consumer-facing industries in using the full potential of data and AI/ML. Today, multiple factors are making it easier for wealth managers to use the power of machines to build segmentation engines.

  • Data sources and volumes have exploded, and far more fine-grained client data is available than ever before.
  • There is a large body of knowledge from other industries on how ML-based segmentation enables data-driven marketing.
  • Lastly, the cloud now allows for virtually unlimited compute capacity, spinning up concurrent workloads to perform complex processing and data analytics at minimal cost.


Over the last few years, various wirehouses, broker-dealers (BDs), RIAs, and technology providers have started using AI to drive their segmentation models and recommendation engines. Machine-learning-based client segmentation can surface data-driven clusters that may not be readily visible through manual segmentation. Machine learning algorithms can analyze multiple features and their correlations to create unsupervised clusters that share homogeneous characteristics and behavior patterns. Such clustering does not suffer from the unconscious bias that can creep into informal segmentation.

A scalable machine-learning-based segmentation model relies on the following data types and is able to slice customer data along multiple dimensions. Some examples below:

Segmentation Type | Based on | Segment Example | Data Required
Geographic & Demographic | Location, age, income, profession, gender | Urban vs rural; Millennials vs Baby Boomers | Client & account data
Value/Potential Value | GDC, AUM, type of revenue (fee vs commission), net worth | UHNW, HNW, mass affluent, masses; high value vs low value | Trades data, advisory billing data, positions data
Risk Profile | Goals, risk profile, return objectives, time horizon | Conservative vs aggressive investor | Suitability data
Behavioral | Trading frequency & patterns | Passive vs active investor | Trades data, positions data, CRM
Technographic | Engagement with applications | Technologically challenged vs tech-savvy clients | Portal & app analytics (number of logins, time spent)
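As a minimal sketch of how such unsupervised clustering works, a tiny k-means pass over a few client features might look like the following. The feature names, values, and segments here are hypothetical, and a production engine would use a mature library and many more dimensions:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: returns a cluster label for each point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        # Recompute each center as the mean of its assigned points.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels

# Hypothetical clients: (normalized age, normalized AUM, trades per month / 10)
clients = [
    (0.30, 0.20, 0.9), (0.32, 0.25, 0.8),   # younger, active traders
    (0.65, 0.80, 0.1), (0.70, 0.90, 0.2),   # pre-retirees, large AUM, passive
]
labels = kmeans(clients, k=2)
# Clients with similar profiles land in the same cluster (micro-segment).
```

Each resulting cluster is a candidate micro-segment that can then be profiled and named by the business.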

This data can also be supplemented with external data to provide additional insights that may not be apparent from first-party data alone. For example, first-party data such as a client’s ZIP code, when supplemented with external census data, can provide valuable information about ZIP-level affluence, education level, demographic segment, etc. Similarly, data on held-away investments and accounts can help paint a holistic financial picture of the client and determine the advisor’s wallet share.
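In its simplest form, this enrichment is a lookup join of first-party records against an external dataset keyed by ZIP code. The client records and census figures below are invented purely for illustration:

```python
# Hypothetical first-party client records keyed by ZIP code.
clients = [
    {"client_id": "C001", "zip": "10001", "aum": 250_000},
    {"client_id": "C002", "zip": "60601", "aum": 1_200_000},
]

# Hypothetical external census-style attributes per ZIP code.
census = {
    "10001": {"median_income": 96_000, "pct_college": 0.61},
    "60601": {"median_income": 88_000, "pct_college": 0.57},
}

# Left-join: attach ZIP-level affluence/education features to each client.
# Clients in an unknown ZIP simply get no extra fields.
enriched = [{**c, **census.get(c["zip"], {})} for c in clients]
```

The enriched records now carry both first-party and external features, ready to feed a segmentation model.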

Client Segmentation across Customer Journey

Let us see how client segmentation aids data driven decision making and helps in improving key metrics during the client’s journey:

Client Acquisition – RIAs can align their prospecting efforts with the client segment’s value proposition to ensure a larger prospect funnel and higher prospect-to-client conversion. As per the Schwab 2020 RIA Benchmarking Study, firms that adopted an ideal client persona and client value proposition attracted 28% more new clients and 45% more new client assets in 2019 than other firms. Therefore, the first step is to identify your target segment and align your messaging and marketing accordingly. For example:

  • A business development campaign aimed at pre-retirees and retirees needs to focus on themes of safety and capital preservation, while one focused on young professionals will focus on themes of growth and return. A segmentation engine can identify geographic areas that are likely to have prospects matching the firm’s target segments and where a particular campaign will resonate most.
  • In another example, the segmentation engine can classify leads and prospects into specific segments by matching lead characteristics with existing client segments. Segmentation engines can predict whether a lead is likely to become a high-value customer and also suggest the kind of campaign that will appeal to them.
  • Wealth managers are now combining client segmentation and advisor segmentation to predict and match which advisors will best serve a prospective client based on the prospect’s preferences, life stage, etc.
Wealth managers are now using segmentation for matchmaking between clients and advisors

Client Growth – To capture the greatest wallet share of their clients, advisors should tie investment recommendations to the client’s demographic, psychographic, and risk segmentation. We talked earlier about 529 plan recommendations for investors with young kids and rollover recommendations for pre-retirees. Some more examples of how customer segmentation engines feed next-best-action platforms to provide contextual recommendations for clients:

  • The growing popularity of ESG with younger investors, or the increased sales of life insurance to the urban middle-aged group during the pandemic, are good examples of how advisors and product companies align product recommendations with a client segment’s preferences.
  • Similarly, if clients are more focused on increasing their retirement savings, then recommendations on how they can contribute more than the defined limits using backdoor Roth contributions will be appreciated.
  • If the client is in a high tax bracket currently, the advisor should recommend tax-deductible IRAs, while if the client will be in a higher tax bracket during retirement, then Roth IRAs may be a better investment vehicle.
  • When segmenting based on the client’s browsing behavior, wealth managers can also send research reports and/or articles pertaining to the sectors and investment products that the client searches for in the portal. In addition, the portal can surface these inputs to advisors on client dashboards for their next conversation.
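The tax-bracket reasoning above can be sketched as a toy rule in a next-best-action engine. This is a deliberately simplified heuristic for illustration only (real suitability logic involves many more factors, and nothing here is tax advice):

```python
def ira_suggestion(current_rate, expected_retirement_rate):
    """Toy heuristic: deduct now if today's marginal tax rate is higher
    than the expected retirement rate; otherwise pay tax now (Roth)."""
    if current_rate > expected_retirement_rate:
        return "traditional IRA (deduct at the higher current rate)"
    return "Roth IRA (pay tax at the lower current rate)"

# Hypothetical clients: a high earner today vs. someone who expects
# to be in a higher bracket during retirement.
print(ira_suggestion(0.37, 0.24))
print(ira_suggestion(0.22, 0.32))
```

In a real engine, such rules would be one input among many, combined with the client’s micro-segment, goals, and suitability data.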

As technologies such as direct indexing mature further, clients will increasingly ask for customization based on their values, preferences, and beliefs, and wealth managers will have to offer customized portfolios at scale.

Client Servicing – Effective segmentation also helps in building a tiered service model, with differentiated services for the most valuable clients and repeatable services for all other clients. Some wealth management firms craft personalized experiences for their top clients based on their hobbies and interests. Psychographic and technographic segmentation also help in devising the right service channel for each client. For example:

  • Clients who delegate all investment responsibilities to their advisors and give them discretion over their accounts tend to prefer a light-touch servicing model.
  • Clients who want high-touch service and want to validate investment decisions are more impressed by detailed research and analysis.
  • A third category of technologically savvy clients wants all information, portfolio, and plan details available anytime, anywhere, and prefers online service channels.

Client Retention – A client mix skewed towards low-value and unprofitable clients can encumber an advisory practice’s service levels and profitability, and can put the most valuable clients at attrition risk. Many wealth managers have their bottom 50-60% of clients contributing only about 5% of revenues, while the top 20% account for more than 80% of revenue. Therefore, advisors should periodically shrink to grow better. Client segmentation can help wealth managers prioritize clients for retention, and for letting go, based on current value, potential future value, and influence potential. The ageing baby-boomer population has brought another kind of client attrition risk for advisors: as per industry studies, financial advisors are not retained 70% to 90% of the time when wealth transfers to the next generation. Client retention efforts in such cases therefore need to focus not only on the immediate clients but also on the next generation.

Segmentation is a growth as well as a Defensive Imperative

Thus, ML-based segmentation can greatly aid data-driven marketing efforts for wealth managers and lead to a higher return on marketing dollars. It leads to measurable efficiencies in client servicing, attracting more clients, and retaining high-value clients. Lastly, it lays the foundation for a personalization engine for targeted recommendations, communication, and servicing. While we have focused above on how client segmentation can turbocharge an advisory practice’s growth, it is also a defensive imperative for the wealth management industry. FAANGs have perfected customer segmentation and personalization to an art form and are eagerly eyeing the trillions of dollars of the wealth management industry. Wealth managers would do well to weaponize their data by using the power of machines and insulate themselves against the looming threat of deep-pocketed disruptors.

FAANGs have perfected customer segmentation and personalization to an art form and are eagerly eyeing trillions of dollars of the wealth management industry

To achieve the full promise of ML-based segmentation, the data infrastructure needs to support running segmentation models at scale. To paint a holistic client picture, wealth management firms also need to break data silos and ensure the availability of high-quality, harmonized, and consumable client data. Our next blog will discuss the data challenges in the wealth management industry and how modern data management techniques can help overcome them. Till then, happy segmenting.


What is Data Engineering?

Data engineering is quite popular in the field of Big Data, and it mainly refers to Data Infrastructure or Data Architecture.

The data generated by sources such as mobile phones, social media, and the internet is raw: it needs to be cleansed, transformed, profiled, and aggregated before it can serve business needs. Such unexploited raw data is called dark data until it is polished and made useful. The practice of architecting, designing, and implementing the data processing systems that convert raw data into helpful information is called data engineering.

What is modern data engineering?

Modern data engineering is the fast, secure, and high-quality implementation and deployment of new software and systems that streamline operations and reduce costs with minimal workforce interruption. It operationalizes and enables engineering practices such as big data analytics and cloud-native applications. Modern software delivery facilitates continuous integration, continuous deployment, monitoring, alerting, security compliance, and other practices that improve software quality and agility.
Modern data engineering provides:

  1. High speed, so companies can act faster to address issues or customer needs
  2. Better agility, with quick feedback loops that help evolve behavior, and
  3. Reduced workforce costs through automation and improved efficiency

What is the difference between Data Engineering and Data Science?

Data engineering and data science are complementary. In practice, data engineering ensures that data scientists can work with data reliably and consistently.

Data science projects often require a specialist team, or teams, with specific roles, functions, and areas of expertise. These teams carry out the complex processes of cleaning, processing, sorting, storing, arranging, modeling, and analyzing large data sets. Differentiating the members of a data science team based on their positions and fields of expertise has become increasingly common. Data scientists use techniques such as data mining and machine learning; popular tools for analyzing data are R, Python, and SAS.

Data engineering is a subset of the field of data science and analytics. It distinguishes the data science team members who design, construct, and maintain the big data systems used in analytics from those who build algorithms, create probability models, and analyze the results. Data engineering deals with many core elements of data science, such as the initial collection of raw data and the processes of cleansing, sorting, securing, storing, and moving that data. The analytical procedures that characterize the later stages of a data science project are less central to data engineering. Common data engineering tools include SQL and Python.

What Are The Key Data Engineering Skills and Tools?

Data engineers use specialized tools, and each tool poses its own unique challenges. They need to understand how data is formed, stored, protected, and encoded, as well as the most efficient ways of accessing and manipulating that data.

Extract, Transform, Load (ETL) tools are a category of technologies that move data between systems. They access data from different sources, apply rules to transform and cleanse the data, and make it ready for analysis. ETL products include Informatica and SAP Data Services.

Structured Query Language (SQL) is the primary language for querying relational databases. It can be used within a relational database to perform ETL activities, and it is especially useful when the source and destination of the data are the same database type. SQL is widely supported, and many people understand it.
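A minimal illustration of SQL performing an in-database transform, using Python’s built-in sqlite3 module so the example is self-contained (the table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_trades (client_id TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_trades VALUES (?, ?)",
                 [("C001", 500.0), ("C001", 1500.0), ("C002", 200.0)])

# Transform + load in a single SQL statement: aggregate raw rows
# into a summary table inside the same database.
conn.execute("""
    CREATE TABLE client_volume AS
    SELECT client_id, SUM(amount) AS total_amount
    FROM raw_trades
    GROUP BY client_id
""")
rows = conn.execute(
    "SELECT client_id, total_amount FROM client_volume ORDER BY client_id"
).fetchall()
# rows -> [('C001', 2000.0), ('C002', 200.0)]
```

Because the source and destination live in the same database, a single `CREATE TABLE ... AS SELECT` handles both the transform and the load.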

Python is a general-purpose programming language. It is a popular tool for ETL tasks due to its ease of use and extensive libraries for accessing databases and storage technologies. Python is often used for data engineering instead of a dedicated ETL tool because of its flexibility and power in performing these tasks.
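As a sketch of that flexibility, here is a tiny extract-transform-load routine using only the standard library. The field names and records are hypothetical, and an in-memory buffer stands in for real source and destination files:

```python
import csv
import io

# Extract: in a real pipeline this would read from a file, API, or database.
raw = io.StringIO("client_id,aum\nC001, 250000 \nC002,not_a_number\nC003,1200000\n")
rows = list(csv.DictReader(raw))

# Transform: cleanse whitespace and drop rows that fail validation.
clean = []
for r in rows:
    try:
        clean.append({"client_id": r["client_id"].strip(), "aum": float(r["aum"])})
    except ValueError:
        pass  # discard malformed records (e.g. non-numeric AUM)

# Load: write the cleansed records out (here, to an in-memory CSV buffer).
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["client_id", "aum"])
writer.writeheader()
writer.writerows(clean)
```

The same extract/transform/load skeleton scales up by swapping the in-memory buffers for database connections or cloud storage clients.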

Spark and Hadoop work with large data sets on clusters of computers. They make it simple to apply the combined power of many machines working together to a single job on the data. These tools are not as easy to use as Python.

HDFS and Amazon S3 are used in data engineering to store data during processing. They are specialized storage systems that can hold a virtually unlimited amount of data, making them useful for data science tasks.

The tools used in data engineering are categorized under two titles:

  1. Data tools: Apache Hadoop, Apache Spark, Apache Kafka, SQL, and NoSQL.
  2. Programming tools: Python, Java, Scala, and Julia

Why is data engineering necessary?

Without data engineering, data science would be next to impossible. There would be no usable data, which would bring machine learning and AI to a halt, because they rely on algorithms that require a lot of data to build. Data engineering provides the data transmission speed needed to keep data comprehensive and continuously updated, and the resulting increase in well-managed data volume improves forecasting. A lack of data, or of the ability to handle it, discourages many entrepreneurs.

Nevertheless, even the most significant organizations have no way to produce the data required for AI and machine learning quickly and without delay. They are therefore collaborating with data engineers to build well-organized data pipelines. Organizations that ignore the need to harness their data wealth effectively may soon be left with nothing.


What is a data warehouse?

A data warehouse is the electronic storage of an organization’s historical data for data analytics. It contains a wide variety of data that supports the decision-making process in an organization. Data warehousing is the process of collecting and managing data from varied sources to provide meaningful business insights; typically, it is used to connect and analyze business data from heterogeneous sources. As the core of the BI system, the data warehouse is built for data analysis and reporting.

What are the Data warehouse architectures?

Mainly, there are three types of data warehouse architectures:

  1. Single-tier architecture – The objective of a single layer is to minimize the amount of data stored by removing data redundancy. This architecture is not frequently used in practice.
  2. Two-tier architecture – Here, the physically available sources are separated from the data warehouse. This architecture is not expandable, does not support a large number of end users, and can face connectivity issues due to network limitations.
  3. Three-tier architecture – The most widely used architecture, consisting of a bottom, middle, and top tier:
     • Bottom tier – A relational database serves as the bottom tier of the data warehouse, where data is cleansed, transformed, and loaded.
     • Middle tier – An OLAP server that provides an abstract view of the database, serving as a mediator between the end user and the databases.
     • Top tier – A front-end client layer that channels data out of the data warehouse.


What are the tools of data warehousing?

Some of the most prominent tools for data warehousing are:

  1. MarkLogic: A useful data warehousing solution that makes data integration easier and faster using an array of enterprise features. It helps perform complex search operations and can query different types of data such as relationships, documents, and metadata.
  2. Oracle: The industry-leading database. It offers a wide range of data warehouse solutions for both cloud and on-premises deployments, and helps optimize customer experiences by maximizing operational efficiency.
  3. Amazon Redshift: An easy and cost-effective tool that uses standard SQL and existing BI tools to analyze all types of data. It also allows complex queries to be executed against petabytes of structured data using database optimization techniques.


What are the benefits of the data warehouse?

A data warehouse allows business users to easily access important data from multiple sources and collate all of it in one place. It provides consistent information on various cross-functional activities. It also supports ad-hoc reporting and querying, and helps integrate many data sources while reducing the load on production systems. Its restructuring and integration make the data easier to use for reporting and analysis.

A data warehouse helps reduce the total turnaround time for analysis and reporting. It allows users to access critical data from a number of sources in a single place, saving them the time of retrieving data from multiple sources. A data warehouse also stores a large amount of historical data, which helps users analyze different periods and trends to make future predictions.

What is the difference between a Data warehouse and a data mart?

  • A data warehouse is a vast repository of data collected from different sources, whereas a data mart is a subset of a data warehouse.
  • A data warehouse covers all departments in an organization, whereas a data mart focuses on a specific group.
  • A data warehouse takes a long time to handle data, whereas a data mart takes a short time. Unlike a data mart, the design process of a data warehouse is very complicated.
  • A data warehouse implementation takes from a month to a year, whereas a data mart takes a few months to implement.
  • A data warehouse ranges in size from 100 GB to more than 1 TB, whereas a data mart is typically less than 100 GB.


What is the difference between the Data Warehouse and the Data Lake?

  • A data warehouse stores quantitative metrics with their attributes, whereas a data lake stores all data irrespective of its source and structure.
  • A data warehouse is a blend of technologies and components that allows the strategic use of data, whereas a data lake is a storage repository for vast amounts of structured, semi-structured, and unstructured data.
  • A data warehouse defines the schema before data is stored (schema-on-write), whereas a data lake defines the schema after data is stored (schema-on-read).
  • A data warehouse uses the Extract, Transform, Load (ETL) process, while a data lake uses the Extract, Load, Transform (ELT) process.
  • A data warehouse is ideal for operational users, whereas a data lake is ideal for those who want in-depth analysis.


What is the difference between data warehouses and data mining?

  • Data warehousing is the process of pooling all relevant data together, whereas data mining is the process of extracting patterns from large data sets.
  • Data warehousing needs to occur before any data mining can take place, and business users usually do data mining with the assistance of engineers.
  • A data warehouse is a technique for collecting and managing data, whereas data mining is the process of analyzing data to discover unknown patterns.
  • A data warehouse is complicated to implement and maintain, while data mining lets users ask more complicated questions, which increases the workload.
  • A data warehouse is useful for operational business systems such as CRM systems when integrated with them. In contrast, data mining helps surface suggestive patterns of essential factors such as customers’ buying habits.


What is the difference between a data warehouse and database?

  • A data warehouse is an information system that stores historical and cumulative data collected from single or multiple sources. In contrast, a database is a collection of related data that represents some elements of the real world.
  • A data warehouse is designed to analyze data, whereas a database is designed to record data.
  • A data warehouse uses Online Analytical Processing (OLAP), while a database uses Online Transaction Processing (OLTP).
  • A data warehouse is a subject-oriented collection of data, while a database is an application-oriented collection of data.
  • Dimensional modeling techniques are used for designing a data warehouse, whereas ER modeling techniques are used for designing databases.
  • Data warehouse tables and joins are simple because they are denormalized, whereas database tables and joins are complex because they are normalized.


What is the Data warehouse on a cloud?

The cloud data warehouse market has grown in recent years as organizations reduce their physical data center footprints and take advantage of cloud economics. Cloud providers abstract the underlying infrastructure so that end users simply see a large warehouse, or repository, of data waiting and available to be processed.

Cloud data warehouses include a database or pointers to a collection of databases, where the production data is collected. Another core element of modern cloud data warehouses is an integrated query engine that enables users to search and analyze the data. This assists with data mining.

While choosing a cloud data warehouse service, organizations consider several criteria, such as:

  • Existing cloud deployments
  • Data migration ability
  • Different storage options

What is meant by Data warehouse design?

Data warehouse design builds a solution that integrates data from multiple sources to support analytical reporting and data analysis. The result is a single data repository where records from various data sources are integrated for online analytical processing (OLAP). This means that a data warehouse needs to meet the requirements of all business stages within the entire organization.

A poorly designed data warehouse can result in acquiring and using inaccurate source data that negatively affects the productivity and growth of the organization. Data warehouse design is dynamic and the design process is continuous, but it is a hugely complex, lengthy, and hence error-prone process.

The target of the design is the extraction of data from multiple sources, its transformation, and its loading (ETL) into a database organized as the data warehouse. There are two approaches: the top-down approach and the bottom-up approach.

What is a Data warehouse software?

A data warehouse serves as a gateway for composite data between analytics tools and operational data stores. It is a database built for data analysis rather than traditional transactional processing. To facilitate efficient decision-making, data collected from various sources is standardized as it is loaded into the warehouse, grouped into tables, and cleansed and transformed for consistency, with redundancy removed.

Data warehouse software acts as the central storage hub for a company’s integrated data that is used for analysis and future business decisions. The combined information within data warehouses comes from all branches of a company, including sales, finance, and marketing, among others.

Data warehouses combine data from sales force automation tools, ERP and supply chain management suites, marketing automation platforms, and others, to enable the most precise analytical reporting and intelligent decision-making. Businesses also use artificial intelligence and predictive analytics tools to pull trends and patterns found in the data.

What are Data warehouse solutions?

Data warehouse solutions fall into two categories: on-premises data warehouses and cloud data warehouses. The choice depends on the user’s requirements; both offer viable solutions to organizations.

  • On-prem data warehouse solutions – Healthcare organizations, banks, and insurance companies sometimes still prefer on-prem data warehouses because of the control they have over them. This means keeping (and funding) their own IT staff to maintain their instances of these solutions and develop new capabilities for them. Some of these companies have IT teams that iteratively introduce new technologies and bug fixes using agile approaches. This approach works well where legacy systems are still in service and where integration involves mainly low-level customizations (code, connectors, and configuration changes).
  • Cloud-based data warehouse solutions – The cloud-based option is advantageous as a managed solution, where tasks like sharing, replication, and scaling, along with many others, happen automatically in the background. It has fixed costs, with no additional outlay for hardware and no variable costs when something fails or needs an upgrade. For organizations building data infrastructure from scratch, cloud-based solutions are cost-effective, with a shallow barrier to entry.


What is Data warehouse modeling?

Data warehouse modeling is the process of designing schemas that describe the summarized and detailed data in a data warehouse. The primary purpose of a data warehouse is to support DSS (decision support system) processes, while data warehouse modeling aims to make the warehouse efficient by supporting complex queries on long-term information. Modeling is an essential stage of building a data warehouse for two main reasons. First, through the schema, data warehouse users can visualize the relationships among the warehouse data and access it more effectively. Second, a well-built schema enables the creation of an efficient data warehouse system: it helps decrease the cost of implementing the warehouse and improves the efficiency of using it.

In conclusion, data warehouses are designed for business users with general knowledge about the enterprise, while operational database systems are oriented more toward software specialists creating specific applications.

What is Data warehouse testing?

Testing is required for a data warehouse because it is a strategic enterprise resource. Organizations use data warehouse testing practices when developing, migrating, or consolidating data warehouses. The success of any on-premises or cloud data warehouse solution depends on executing valid test cases that identify issues related to data quality. The standard process used to load data from source systems to the data warehouse is Extract, Transform, and Load (ETL): data is extracted from the source, transformed to match the target schema, and then loaded into the data warehouse.

With data driving critical business decisions, testing the data warehouse's data integration process is essential. Because the source systems determine the consistency of the data, testing typically begins with data profiling and data cleansing; the history of the source data, its business rules, or its audit information may no longer be available.
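The ETL flow described above can be sketched in a few lines of Python. This is a minimal illustration, not a production loader; the source rows, target schema, and transformation rule are all hypothetical:

```python
# Minimal ETL sketch: extract rows from a "source", transform them to match
# the target schema, then load them into an in-memory "warehouse" table.

def extract():
    # Stand-in for reading from a source system (file, API, database).
    return [{"customer": "Ada", "amount_usd": "120.50"},
            {"customer": "Grace", "amount_usd": "99.00"}]

def transform(rows):
    # Match the target schema: rename fields and cast strings to numbers.
    return [{"customer_name": r["customer"], "amount": float(r["amount_usd"])}
            for r in rows]

def load(rows, warehouse):
    warehouse.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse[0])  # {'customer_name': 'Ada', 'amount': 120.5}
```

A test case for this pipeline would assert that every loaded row matches the target schema and that no rows were dropped, which is the essence of data warehouse testing.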

Data pipeline:

What is a data pipeline?

A data pipeline is an arbitrarily complex chain of processes that manipulate data, where the output of one process becomes the input to the next. It serves as a processing engine that sends data through transformative applications, filters, and APIs in real time.

A data pipeline combines data sources, applies transformation logic, and sends the data to a load destination. In a world of digital marketing and continuous technological advancement, data pipelines have become essential for the collection, conversion, migration, and visualization of complex data.

The critical elements of a data pipeline include sources, extraction, denormalization/standardization, loading, and analytics. Data pipeline management has evolved well beyond conventional batch processing.
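The chain-of-processes idea, where each stage's output feeds the next stage's input, maps naturally onto Python generators. A small sketch with hypothetical stages:

```python
# Each stage consumes the previous stage's output, mirroring a data pipeline.

def source():
    # Stand-in for an ingestion stage (e.g. reading events from a queue).
    yield from [" 42 ", "7", " 19", "oops"]

def clean(records):
    # Transformation stage: strip whitespace, drop records that fail parsing.
    for r in records:
        try:
            yield int(r.strip())
        except ValueError:
            pass  # a real pipeline would route bad records to a dead-letter sink

def sink(records):
    # Load stage: collect results at the destination.
    return list(records)

print(sink(clean(source())))  # [42, 7, 19]
```

Because generators are lazy, records flow through the chain one at a time, which is how streaming pipelines keep latency low.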

What is Data pipeline architecture?

A data pipeline architecture is a system that organizes data events to make data easier to report on, analyze, and use. It is used to gain insights by capturing, organizing, and routing data. Depending on business goals, a customized combination of software technologies and protocols automates the management, transformation, visualization, and movement of data from multiple sources.

The architecture of a data pipeline is the design and configuration of the code and systems that copy, clean, or transform source data as necessary and route it to destination systems such as data warehouses and data lakes.

Three factors determine how fast data moves through a data pipeline: throughput, reliability, and latency.

How to build a data pipeline?

A data pipeline, as discussed, is the process of moving data from one system to another, transforming it from one representation to another through a series of steps. ETL (extract, transform, and load) and "data pipeline" are often used interchangeably, but data does not have to be transformed to count as part of a data pipeline. Typically, though, the destination of a data pipeline is a data lake.

An ideal data pipeline has the properties of low event latency, scalability, interactive querying, versioning, monitoring, and testing of data.
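Two of those properties, monitoring and low event latency, can be checked with simple per-stage instrumentation. A hypothetical sketch that records the latency of each pipeline stage call:

```python
import time

def timed(stage):
    # Wrap a pipeline stage so each call records its latency (monitoring hook).
    timings = []
    def wrapper(data):
        start = time.perf_counter()
        result = stage(data)
        timings.append(time.perf_counter() - start)
        return result
    wrapper.timings = timings
    return wrapper

@timed
def normalize(values):
    # A hypothetical transformation stage.
    return [v / 100 for v in values]

out = normalize([50, 150])
print(out)                     # [0.5, 1.5]
print(len(normalize.timings))  # one latency sample recorded
```

Feeding such timings into an alerting system is one simple way to detect when a pipeline's latency drifts above its target.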


What is data as a service?

Data as a Service (DaaS) is a data management approach that uses the cloud to deliver data storage, processing, integration, and analytics services over a network connection.

DaaS is similar to Software as a Service (SaaS), a cloud computing strategy that delivers applications to end users over the network instead of running them locally on their devices. As with SaaS, the need to install and manage software locally is removed: DaaS outsources most data storage, integration, and processing operations to the cloud.

DaaS is only now seeing widespread adoption. This is partly because traditional cloud storage systems were not initially designed to manage large data workloads; they catered instead to hosting applications and storing simple data. It was also challenging to move massive data sets over the network in the earlier days of cloud computing, when bandwidth was usually limited.

How to implement data as a service?

DaaS removes much of the set-up and planning work involved in developing a data processing system on site. The essential steps for getting started with DaaS include:

  1. Choose a DaaS solution – Factors involved in selecting a DaaS offering include price, reliability, flexibility, scalability, and how easy it is to integrate the DaaS with existing workflows and ingest data into it.
  2. Sign up for and activate the DaaS platform.
  3. Migrate data into the DaaS solution – depending on how much data must be migrated and the speed of the network connection between the local infrastructure and the DaaS, data migration may take considerable time.
  4. Begin to leverage the DaaS platform to deliver faster, more reliable data integration and data insights.

What is data analytics as a service?

Data Analytics as a Service (DAaaS) is an extensible analytical framework that uses a cloud-based delivery model, in which various data analytics tools are available and can be configured by the user to process and analyze huge amounts of heterogeneous data efficiently.

The DAaaS platform is designed to be extensible so it can handle a variety of possible use cases. The series of Analytical Services is one clear example, but not the only one: the system can, for instance, support the integration of different external data sources. To keep DAaaS extensible and readily configurable, the platform includes a series of tools that support the complete lifecycle of its analytics capabilities.

Data infrastructure

What does Data Infrastructure mean?

Data infrastructure can be thought of as a digital infrastructure for promoting data consumption and sharing. A secure data infrastructure enhances the efficiency and productivity of the environment in which it is employed, increasing collaboration and interoperability. If implemented correctly, data infrastructure should reduce operational costs, boost supply chains, and serve as a baseline for a growing global economy.

Data infrastructure is a collection of data assets, the bodies that maintain them, and guides on how to use the collected data. It is the proper amalgamation of organization, technology, and processes. Privacy is a crucial aspect, and thus the data assets in a data infrastructure can be either open or shared. Data creates the most value when the data infrastructure is open; however, if the contents are sensitive, data protection is required.

What is Data infrastructure management?

Management of data infrastructure starts with the selection of a suite of data management products that help maintain control of data wherever it resides in a hybrid cloud environment. The aim is also to drive simplicity and efficiency by using software management tools designed to work together, and to gain the flexibility to choose the best way to manage data so as to increase productivity and business agility.

How to build a big data infrastructure?

Big data can bring extensive benefits to businesses of all sizes. However, as in any business project, proper preparation and planning are essential, especially when it comes to infrastructure. It used to be hard for companies to get into big data without making substantial infrastructure investments. To get started with big data and turn it into insights and business value, investments are likely to be needed in the following critical infrastructure elements: data collection, data storage, data analysis, and data visualization/output.

  • Data collection: Where data arrives at the company. This covers everything from sales reports, customer files, reviews, social media feeds, mailing lists, and e-mail archives to any data gleaned from tracking or evaluating operational aspects of the business.
  • Data storage: Where the data from those sources is stored. The main storage options comprise a traditional data warehouse, a data lake, or a distributed/cloud-based storage system.
  • Data analysis: Where the stored data is processed and analyzed. This layer is all about turning data into insights, and it is where programming languages and analytics platforms come into play.
  • Data visualization/output: Where the analyzed data is passed on to the people who need it, i.e., the decision-makers in the company. Deliberate and precise communication is essential, and this output can take the form of brief reports, charts, figures, and key recommendations.


Data governance

What is data governance?

Data governance is an enterprise's management of the availability, usability, integrity, and security of its data through a set of rules and processes. It is based on internal data standards and policies that also control data usage.

Effective data governance ensures that data is consistent, trustworthy, and does not get corrupted. It is increasingly critical as organizations face new data privacy regulations and depend more on data analytics to optimize operations and drive business decision-making. To organize efficiently, use data in the context of the company, and coordinate with other data projects, data governance programs must be treated as an ongoing, iterative process.

What is a data governance framework?

A robust data governance framework is central to the success of any data-driven organization because it ensures this asset is properly maintained, protected, and maximized.
It may be best thought of as a function that supports an organization's overarching data management strategy. To help explain what a data governance framework should cover, DAMA envisions data management as a wheel, with data governance as the hub from which the following ten data management knowledge areas radiate:

  • Data architecture
  • Data modeling and design
  • Data storage and operations
  • Data security
  • Data integration and interoperability
  • Documents and content
  • Reference and master data
  • Data warehousing and business intelligence (BI)
  • Metadata
  • Data quality

A data governance framework thus refers to the process of building a model for managing enterprise data. It sets the guidelines and rules of engagement for business and management activities, especially those that result in the creation and manipulation of data.

What are the data governance tools?

Enterprises today rely on a number of data governance tools for the smooth storage and retrieval of data. Some of the popular data governance software tools are:

  1. OvalEdge
  2. Truedat
  3. Collibra
  4. IBM Data Governance
  5. Talend
  6. Informatica
  7. Alteryx
  8. A.K.A
  9. Clearswift Information Governance Server
  10. Datattoo
  11. Cloudera Enterprise
  12. Datum


What is data governance in healthcare?

Data governance in healthcare is often called information governance. It is defined as an organization-wide framework for managing health information throughout its lifecycle, from the moment a patient's information first enters the system until the time of discharge. The lifecycle includes activities such as payment, research, treatment, outcomes improvement, and government reporting.

Having robust enterprise-wide data governance policies and practices helps facilities achieve the Institute for Healthcare Improvement's Triple Aim:

  • Enhance the patient experience of care – quality and satisfaction
  • Upgrade the health of populations
  • Decrease the per capita cost of healthcare

The practical steps to enterprise Data Governance for health information management and technology professionals include accessibility, data quality, physician burnout, privacy, and ethics.

What are the data governance principles?

The principles of data governance include:

  • Data must be recognized as a valued and strategic enterprise asset – Data is a primary driver of organizational decision making, so enterprises should ensure that their data assets are defined, controlled, and accessed in a careful, process-driven way. Management can then be confident in the accuracy and output of the data.
  • Data must have clear and defined accountability – For enterprise-level integration, data should be accessed through authorized processes only.
  • Data must be managed to follow internal and external rules – To avoid data chaos, standardized policies, for which the organization defines the rules and guidelines, should be adhered to strictly.
  • Data quality, across the data lifecycle, must be defined and managed consistently – The enterprise's data must be tested periodically against the set quality standards.

What are the best practices for data governance?

Data governance is defined as a set of processes ensuring that data meets business rules and precise standards as it enters a system. It enables businesses to exert control over the management of data assets and encompasses the people, processes, and technology required to make data fit for its intended purpose.

Data governance is essential for organizations and industries of all kinds, but especially for those facing regulatory compliance requirements. Such enterprises must have formal data management processes in place to govern their data throughout its lifecycle and achieve compliance.

Data processing

What is data processing?

Data processing is the conversion of data into a usable, desired form, i.e., the manipulation of data by a computer. It includes the transformation of raw data into a machine-readable form. This conversion is carried out using a predefined sequence of operations, either manually or, most often, automatically by computers. The processed data can be obtained in various forms, such as an image, vector file, audio, graph, table, chart, or another desired format, depending on the software or method of data processing used. Data processing refers to the processing of data required to run organizations and businesses; when it is carried out by a computer on its own, it is referred to as automatic data processing.

What are the data processing services?

Data processing involves extracting relevant data from a source, converting it into usable information, and presenting it in a readily available digital format. To transform this data into meaningful information, data processing professionals apply different conversion techniques and analyses. This holds great advantage for many organizations, as it allows a more efficient method of retrieving information while also safeguarding the data from loss or damage.

The four main stages of the data processing cycle are:

  • Data collection.
  • Data input.
  • Data processing.
  • Data output.
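The four stages above can be sketched as a tiny Python program; the input records and the processing rule are hypothetical:

```python
# The four stages of the data processing cycle, in miniature.

def collect():
    # Collection: gather raw data from its sources.
    return ["3.5", "4.0", "bad", "2.5"]

def data_input(raw):
    # Input: convert raw data to a machine-readable form, discarding junk.
    parsed = []
    for item in raw:
        try:
            parsed.append(float(item))
        except ValueError:
            pass
    return parsed

def process(values):
    # Processing: manipulate the data into usable information.
    return sum(values) / len(values)

def output(result):
    # Output: present the information in a desired format.
    return f"average = {result:.2f}"

print(output(process(data_input(collect()))))  # average = 3.33
```

Each function corresponds to one stage of the cycle, and the composition of the four is the cycle itself.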


Data ownership

What is data ownership?

In essence, data ownership is a part of data governance that details the legal ownership of enterprise-wide data. It states the owner's legal rights and complete control over a single element or a collection of elements of data, identifies the legitimate owner of data assets, and regulates the owner's collection, use, and distribution of the data.

As a data owner, an organization can create, modify, share, edit, and restrict access to the data. Data ownership also determines the owner's right to delegate, transfer, or surrender any of these rights to a third party. This definition typically applies in medium to large organizations with vast databases of centralized or distributed data elements.

If an internal or external entity illegitimately breaches this ownership, the data owner can assert possession of and copyright over the data to retain control and take legal action.

Data accelerators

What are data accelerators?

The data-accelerator repository contains everything needed to set up an end-to-end data pipeline. There are many ways to participate in the project:

  • Submission of bugs and requests
  • Reviewing code changes
  • Reviewing the documentation and updating its content.

Data Accelerator offers three levels of experiences:

  1. No code at all: rules are used to create alerts on data content.
  2. Quickly writing a Spark SQL query, with additions such as Live Query, time windowing, an in-memory accumulator, and others.
  3. Integrating custom code written in Scala or using Azure Functions.

For example, Data Accelerator for Apache Spark democratizes streaming big data using Spark by offering several key features such as a no-code experience to set up a data pipeline as well as a fast dev-test loop for creating complex logic.

Data operations

What is data operations?

Data Operations (DataOps) is a process-oriented, automated approach used by analytics and data teams to improve the quality and reduce the cycle time of data analytics. Although DataOps started as a collection of best practices, it has now matured into a modern, independent approach to data analytics.

DataOps is enterprise data management for the artificial intelligence era: it seamlessly connects your data consumers and creators so they can find and use the value in all your data rapidly. DataOps is not a product, service, or solution. It is a methodology: a technological and cultural change that improves your organization's use of data through better collaboration and automation.

It means improved data trust and protection, shorter cycle times for insight delivery, and more cost-effective data management.

What are Database operations?

Database operations are the vehicle through which users and applications access data in relational databases. Their performance can be measured in the context of a tracking application that accumulates track information in a database. Tracks are referenced by their location (spatial coordinates). The tracker runs in discrete time intervals called cycles. During each cycle, the tracker receives a set of target reports from a radar and asks the database to search for all tracks that could be associated with each target report, based on location. The tracker may then direct the database to insert new records for target reports that are not associated with any track and to delete specific tracks.

The database code receives tracker input consisting of the search, insert, and delete operations to be performed, and it produces a collection of record identifiers that are used to access the individual in-memory records. Because no actual database exists in this benchmark, the identifiers are typically random 32-bit integers. The goal is to calibrate the performance of the search, insert, and delete operations without regard to the contents of any particular record; the primary motive is to avoid generating the large amount of data a real database would require.

The effects of a database operation may be cached, either on demand or on a schedule, in one or more caching services, thus reducing the burden on back-end databases, minimizing latency, and managing network bandwidth use. Configurable coherence windows manage the coherence of the cache.
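The tracker workload described above can be mimicked with a toy in-memory "database" keyed by random 32-bit identifiers. This is a hypothetical sketch for timing search, insert, and delete operations, not a real benchmark harness:

```python
import random
import time

# Toy in-memory "database" of tracks keyed by random 32-bit identifiers,
# mimicking the tracker benchmark described in the text.
random.seed(0)
db = {}

def insert(track_location):
    track_id = random.getrandbits(32)  # stand-in for a record identifier
    db[track_id] = track_location
    return track_id

def search(location, radius=5.0):
    # Find all tracks near a reported location (the per-cycle association step).
    return [tid for tid, (x, y) in db.items()
            if (x - location[0]) ** 2 + (y - location[1]) ** 2 <= radius ** 2]

def delete(track_id):
    db.pop(track_id, None)

# One benchmark "cycle": insert reports, associate, then prune one track.
start = time.perf_counter()
ids = [insert((random.uniform(0, 100), random.uniform(0, 100)))
       for _ in range(1000)]
hits = search((50.0, 50.0))
delete(ids[0])
elapsed = time.perf_counter() - start
print(f"tracks={len(db)} elapsed_ok={elapsed > 0}")
```

Timing many such cycles, rather than one, is what calibrates the relative cost of the three operations.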

DBA Services

What are the DBA services?

Database administrators (DBAs) will use specialized software to organize and store data. The role may include capacity planning, installation, configuration, database design, migration, performance monitoring, security, troubleshooting, as well as backup and data recovery.

Database as a service (DBaaS) is defined as a cloud computing service model providing the users with some form of access to a database without the necessity for physical hardware set-up, software installation, or performance configuration. The service provider manages all the administrative tasks and maintenance, and all the customer or device owner has to do is use the database.

Types of DBA Services:

There are multiple kinds of DBAs, focusing on different activities such as logical and physical design, building systems, or maintaining and tuning systems. They are:

  • System DBA – focuses on technical issues in the system administration area
  • Database architect – involved in new design and development work
  • Database analyst – performs a role similar to that of the database architect
  • Data modeler – responsible for a subset of the data architect’s responsibilities
  • Application DBA – focuses on database design and the ongoing support and administration of databases for a specific application
  • Task-oriented DBA – for example, a backup-and-recovery DBA who devotes the entire day to ensuring the recoverability of the organization's databases
  • Performance analyst – focuses solely on the performance of database applications
  • Data warehouse administrator – requires a thorough understanding of the differences between a database that supports OLTP and a data warehouse

What are remote DBA services?

Some of the remote DBA services are as follows:

  • Installation
  • 24*7 monitoring
  • Disaster recovery
  • Upgrade & migration
  • Performance tuning
  • Database memory tuning
  • SQL tuning
  • Operating system tuning

What are the DBA consulting services?

Data analytics consulting services use an array of methods to optimize various business intelligence tasks by leveraging existing data.
Business analytics has raised decision making to a radically different level: in today's business world, informed decisions are made by slicing, dicing, and scrutinizing data. This analysis, however, has no value if the business aspect of the problem at hand is ignored. Data analytics consulting services balance business insight and hardcore analytics to deliver value-added analytical solutions.

What are the DBA managed services?

Database managed services can reduce many of the problems associated with provisioning and maintaining a database. Developers build applications on top of managed database services to drastically speed up the process of provisioning a database server. With a self-managed solution, one must provision, configure, and secure a server (on-premise or in the cloud), connect to it from a device or terminal, and then install and set up the database management software before beginning to store data.

A managed database allows you to configure only the additional provider-specific options and have a new database ready to integrate with your application. It is a cloud computing service in which the end user pays a cloud service provider for access to a database. The provisioning process varies from provider to provider, but it is similar to that of any other cloud-based service.

MDM systems

What is an MDM system?

Master Data Management (MDM) is a mechanism that produces a standardized collection of data from various IT systems regarding customers, products, suppliers, and other business entities. MDM is one of the central areas of the overall data management process, helping to enhance the consistency of data by ensuring that identifiers and other key data elements are correct and consistent across the organization.

It is the primary mechanism used to examine, centralize, organize, categorize, localize, synchronize, and enrich master data according to the business rules of your company's sales, marketing, and operational strategies.

MDM allows:

  • Focus product, service, and business efforts on sales-boosting activities.
  • Deliver highly personal service and interaction-based experiences.
  • De-prioritize unprofitable, time- and resource-draining practices.
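The consolidation MDM performs can be illustrated with a toy record-matching sketch. Here, two hypothetical systems hold the same customer under slightly different identifiers; merging on a normalized key yields a single "golden record":

```python
# Toy master data management step: merge customer records from two systems
# into one "golden record" keyed by a normalized email address.

crm_records = [{"email": "Ada@Example.com", "name": "Ada Lovelace"}]
billing_records = [{"email": "ada@example.com ", "phone": "555-0100"}]

def normalize(email):
    # Standardize the identifier so records from different systems match.
    return email.strip().lower()

golden = {}
for record in crm_records + billing_records:
    key = normalize(record["email"])
    merged = golden.setdefault(key, {"email": key})
    for field, value in record.items():
        if field != "email":
            merged[field] = value  # later sources enrich the master record

print(golden["ada@example.com"])
# {'email': 'ada@example.com', 'name': 'Ada Lovelace', 'phone': '555-0100'}
```

Real MDM systems add survivorship rules, fuzzy matching, and stewardship workflows on top of this basic idea of a consistent identifier.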


What are MDM compliance systems?

Master data management (MDM) is key to corporate compliance. MDM refers to the software, tools, and best practices used to regulate official corporate records distributed across different databases and other repositories.

MDM ensures data is generated, validated, processed, secured, and transmitted under a clear set of policies and controls.

MDM has grown into a vast field of integration for all data management technologies. Enterprise IT organizations are gradually carrying out MDM approaches covering relational databases, data warehouses, and profiling and quality tools, including data mapping and transformation engines, business intelligence, enterprise information integration (EII), extract, transform, and load (ETL), and metadata management.

Data lake

What is a data lake?

A data lake is a highly scalable repository for vast quantities and varieties of data, both structured and unstructured. It is a complete data platform on which a wide range of data can be processed, stored, and analyzed for all analytics needs, including data engineering, data science/AI/ML, and BI.

Data lakes support the full lifecycle of data science. First, data from a variety of sources is ingested and cataloged to build the lake. The data is then enriched, combined, and cleaned before analysis, which makes it easy to discover and analyze via visualization, direct queries, and machine learning. Data lakes complement traditional data warehouses by providing cost-effectiveness, flexibility, and scalability for the ingestion, transformation, storage, and analysis of data.

What is data lake analytics?

Data Lake Analytics is an on-demand, per-job analytics service that simplifies big data. Instead of deploying, configuring, and tuning hardware, you write queries to transform the data and extract valuable insights. The service instantly handles jobs of any size by setting the dial to the power required.

  • Data Lake Analytics integrates with Active Directory for user management and permissions. It comes with built-in monitoring and auditing and uses existing IT investments for identity, security, and management. This approach simplifies data governance and makes it easy to extend current data applications.
  • Data Lake Analytics is a cost-effective solution for running big data workloads. The system scales up or down automatically as jobs start and complete, with no requirement for hardware, licenses, or service-specific support agreements.
  • Data Lake Analytics works with Data Lake Storage for the highest performance, throughput, and parallelization.

What is data lake architecture?

The architecture of a business data lake has multiple levels with various functionality tiers. Its lowest levels hold data that is mostly at rest, whereas the upper levels handle real-time transactional data, and data flows through the system with little or no latency. The essential tiers in data lake architecture are:

  1. Ingestion tier: Depicts the data sources from which data is loaded into the data lake, in real time or in batches
  2. Insights tier: Represents the research side, where insights from the system are used for data analysis
  3. HDFS: A cost-effective solution for both structured and unstructured data, and the landing zone for all data at rest in the system
  4. Distillation tier: Takes data from the storage tier and converts it to structured data for easier analysis
  5. Processing tier: Runs analytical algorithms and user queries, with varying real-time and interactive batch modes, to generate structured data
  6. Unified operations tier: Governs system management and monitoring, auditing and proficiency management, data management, and workflow management

What are data lake solutions?

Data lake solutions are high performing: they bring together data from separate sources and make it easily searchable, maximizing discovery, analytics, and reporting capabilities for end users.

As a repository of enterprise-wide raw data, a data lake that combines big data and search technologies can deliver impactful benefits:

  • Data richness– storing and processing of structured and unstructured data from multiple types and sources
  • User productivity– end-users get the data they need quickly via a search engine, without SQL knowledge.
  • Cost savings and scalability– zero licensing costs with open source allow the system to scale as data proliferates.
  • Complementary to existing data warehouses– the data warehouse and data lake can work in conjunction as part of a more integrated data strategy.
  • Expandability– data lake framework can be applied to a variety of use cases, from enterprise search to advanced analytics applications across industries


What is data lake storage?

Data lake storage is suitable for storing a large variety of data coming from different sources, such as applications and devices. Users can store relational and non-relational data of virtually any size, and no schema needs to be defined before data is loaded into the store. Each storage file is sliced into blocks, and these blocks are distributed across multiple data nodes; there is no limit to the number of blocks or data nodes. Moreover, data lake storage allows users to store data regardless of its structure:

  • Unstructured data – no pre-defined data model/format for data
  • Semi-structured data – Data with self-described structures that do not support the formal structure of data models linked to a relational database or other data tables
  • Structured data – data residing in a field of a record file (for example – spreadsheets and data contained in a relational database)

Data lake storage supports analytic workloads that require large throughput, improving performance and reducing latency. To meet security standards and limit the visibility of sensitive information, data must be secured both in transit and at rest. Data lake storage provides robust security capabilities so that users can have peace of mind when storing their assets in the infrastructure.

Data quality management (DQM)

What is data quality management (DQM)?

DQM is a business discipline that combines the right people, processes, and technologies with the common goal of improving the measures of data quality.
It is a practice that integrates role creation, role assignment, policies, responsibilities, and processes for the acquisition, maintenance, disposition, and distribution of data. Good collaboration between the technology groups and the business is necessary to accomplish quality data management.

The ultimate purpose of DQM is not merely to achieve high data quality for its own sake but to attain the business outcomes that rely on high-quality data, the biggest of which is customer relationship management (CRM).

What are the data quality management tools?

These tools remove output errors, typos, redundancies, and other problems. Data quality management tools also ensure that organizations apply guidelines, automate processes, and provide reports about those processes. Used successfully, these tools reduce the inconsistencies that push up enterprise spending and annoy customers and business partners; they also increase sales and drive productivity gains. The tools mostly address four primary areas: data cleansing, data integration, master data management, and metadata management.

They identify errors using lookup tables and algorithms. These tools have become more functional and automated, and beyond validating contact information and email addresses they now cover data visualization, data consolidation, extract, transform, and load (ETL), data validation reconciliation, sample testing, data analytics, and big data handling.
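The lookup-table approach mentioned above can be sketched in a few lines. The reference values and typo corrections here are hypothetical:

```python
# Data cleansing sketch: validate a field against a lookup table and
# auto-correct known typos/aliases before accepting a record.

VALID_COUNTRIES = {"US", "DE", "IN"}       # lookup table of allowed codes
CORRECTIONS = {"USA": "US", "GER": "DE"}   # known typo/alias fixes

def cleanse(record):
    code = record["country"].strip().upper()
    code = CORRECTIONS.get(code, code)     # apply alias correction if known
    if code not in VALID_COUNTRIES:
        return None                        # reject: cannot be repaired
    return {**record, "country": code}

rows = [{"country": "usa"}, {"country": "DE "}, {"country": "XX"}]
cleaned = [c for c in map(cleanse, rows) if c is not None]
print(cleaned)  # [{'country': 'US'}, {'country': 'DE'}]
```

Commercial tools apply the same pattern at scale, with much larger reference tables and fuzzy-matching algorithms in place of the exact lookups here.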

What are the best practices of data quality management?

Effective data quality management has two sides: strategies for achieving data quality, and the implementation of data quality techniques.

Companies have adopted many policies for efficient data quality management. A focused approach to data governance and data management can have far-reaching benefits. The best practices of effective Data Quality Management are:

  • Let business drive data quality – Instead of letting IT hold the reins of data quality, the business units, as the prime users of the data, are better equipped to define data quality parameters
  • Appoint data stewards – These leaders, who safeguard data integrity in the system, are selected from within the business units because they understand how the data translates into specific business needs
  • Formulate a data governance board – The board ensures that consistent approaches and policies on data quality are adopted company-wide
  • Build a data quality firewall – An intelligent virtual firewall detects and blocks bad data as it enters the system. Corrupt data is automatically sent back to the source for rectification, or adjusted, before being allowed into the current environment.
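The "data quality firewall" idea above can be sketched as a small validation gate. Below is a minimal, hypothetical example: the field names and rules are illustrative assumptions, not a reference to any particular product.

```python
import re

# Illustrative validation rules; a real firewall would load these from a
# governed rule repository maintained by the business units.
RULES = {
    "email": lambda v: bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v or "")),
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
    "country": lambda v: v in {"US", "GB", "IN", "DE"},
}

def firewall(record):
    """Return (passed, errors): block the record if any rule fails."""
    errors = [field for field, rule in RULES.items() if not rule(record.get(field))]
    return (len(errors) == 0, errors)

# A clean record passes into the current environment; a corrupt one is
# blocked and can be sent back to the source with the failing fields listed.
passed, _ = firewall({"email": "a.user@example.com", "age": 42, "country": "US"})
blocked, errs = firewall({"email": "not-an-email", "age": 42, "country": "US"})
```

In practice the rejected records, with their failing fields, would be routed back to the source system for rectification, as described above.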

Data quality management is a cyclical process implemented in logical, step-by-step phases. Quantifiable steps help standardize solid data management practices, deploying incremental cycles that integrate high levels of data quality into the enterprise architecture. The best practices for implementing data quality techniques fall into the successive phases listed below:

  1. Data Quality Assessment – This is a guided impact analysis of data on the business. The business-criticality of data is an essential parameter in defining the scope and priority of the data to be assessed.
  2. Data Quality Measurement – Choosing the characteristics and dimensions used to evaluate data quality, specifying the units of measurement, and setting appropriate standards for these measures form the basis for implementing change. This also helps push data controls into the functions that acquire or modify data within the data lifecycle.
  3. Incorporating Data Quality into tasks and processes – During application development or system upgrades, building functionality often takes precedence over data quality. This phase integrates data quality goals into the development life cycle as required criteria for each implementation step.
  4. Improvement of data quality in operational systems – Data exchanged between data providers and consumers should be governed by contractual agreements that establish acceptable quality rates. Data quality measurements based on output SLAs can be written into these contracts.
  5. Inspecting cases where data quality standards are not met and taking remedial action – If data falls below the expected standards, the remedial activities should go through data quality control mechanisms similar to the defect-tracking systems used in software development.
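Step 2 above, data quality measurement, can be made concrete with two of the most common dimensions: completeness and validity. This is a minimal sketch; the field, rule, and sample data are illustrative assumptions.

```python
def completeness(rows, field):
    """Fraction of rows where the field is present and non-empty."""
    filled = sum(1 for r in rows if r.get(field) not in (None, ""))
    return filled / len(rows)

def validity(rows, field, rule):
    """Fraction of present values that satisfy the validity rule."""
    present = [r[field] for r in rows if r.get(field) not in (None, "")]
    return sum(1 for v in present if rule(v)) / len(present) if present else 0.0

rows = [
    {"zip": "94105"},
    {"zip": ""},       # missing -> hurts completeness
    {"zip": "ABCDE"},  # present but invalid -> hurts validity
    {"zip": "10001"},
]

zip_completeness = completeness(rows, "zip")       # 3 of 4 rows filled
zip_validity = validity(rows, "zip", str.isdigit)  # 2 of 3 present values valid
```

Scores like these, tracked per field against agreed standards, are what feed the SLAs and controls described in steps 4 and 5.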

Covid-19 and the aftermath: Impact on the industry

We are in the middle of one of the worst health crises the world has experienced in decades. COVID-19 has caused socio-economic disruption and impacted nearly all sectors and geographies across the globe. Wealth management is among the more vulnerable sectors because its revenues are highly correlated with capital market performance. Despite the recovery in capital markets in recent weeks, especially in the US, many wealth managers have not seen their assets return to pre-COVID levels, as many European and emerging markets remain well below them. This has accentuated the pressure on revenues and calls for cost optimization and prudence in middle- and back-office functions.

Wealth management operations perform some of the most critical tasks including client onboarding checks, account setup, trading, asset transfers, etc. The immediate impact on operations was managing extremely high trade volumes and ensuring that critical processes continued to run smoothly. Most firms did not have business continuity and operations readiness plans for an event of this nature. Firms must therefore realize that this adversity presents an opportunity to resolve immediate priorities (BCP, automate critical and high effort tasks, etc.) and redefine longer term strategy to align with the paradigm shifts for an optimized operations framework.

Even before COVID-19, a paradigm shift was already underway in wealth management operations; the pandemic merely exposed and amplified the need for next-generation operations transformation. The primary drivers of the shift were client expectations for personalized portfolios and changing priorities, growing regulation and the need for real-time compliance reporting, and increased competition from FinTech.

As an example, trade operations teams have always had pain points: manual reconciliation leading to delays in trading, lack of straight-through processing, lower accuracy and increasing processing times. Firms with higher operations maturity have relied on automation investments, for example automated settlement and reconciliation, to minimize the impact of COVID-driven volatility. Firms with lower maturity have had to rely on shuffling teams to manage trades and on staff spending longer hours to complete daily trading and settlement.

What will it take to win in a post Covid world?

In today’s world, operations must be seen not just as ‘support’ but as a mission-critical function. Client acquisition costs generate a compelling ROI only over the following 3-5 years, as wallet share deepens. Deepening wallet share requires delivering a superior client experience, which can happen only when wealth management firms exceed advisor and client expectations.

For immediate resolution, firms should adopt a ‘next-generation operations transformation’ strategy spanning automation capabilities, process mining and outsourcing to drive maturity and bring efficiencies.


The framework should be built out in modular fashion for reusability.

Incedo believes that for firms to emerge as winners in the long term, they must consider three key shifts in the way operations are managed and run. A ‘next-generation operations transformation’ strategy can help wealth management firms with automation capabilities, process mining and outsourcing to drive maturity and bring efficiencies.

Next generation operations transformation

  • Change the objective from cost efficiency to customer experience. An optimal and holistic client experience involves minimal manual touchpoints, less documentation and faster turnaround times for onboarding, account maintenance requests, asset transfers, etc.
  • Aim to “re-imagine processes” rather than focus on ‘process standardization’. This goes beyond standardization: mining process data, deriving insights and determining the best actions to digitize and automate sub-standard processes
  • Focus on outcome-driven KPIs rather than traditional transaction SLAs, and derive success metrics for the end-to-end process rather than measuring siloed steps. For client onboarding, for example, the key outcome is when the account is funded and ready for trading, not the individual process steps for submission, set-up, etc.
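The outcome-driven KPI in the last bullet can be sketched as measuring the end-to-end elapsed time from submission to the funded outcome, ignoring intermediate steps. Event names and dates here are illustrative assumptions.

```python
from datetime import datetime

def end_to_end_days(events, start="submitted", outcome="funded"):
    """Days from the start event to the business outcome for one client."""
    ts = {e["event"]: datetime.fromisoformat(e["at"]) for e in events}
    return (ts[outcome] - ts[start]).days

journey = [
    {"event": "submitted", "at": "2020-06-01"},
    {"event": "account_setup", "at": "2020-06-03"},  # intermediate step: not
    {"event": "funded", "at": "2020-06-08"},         # the success metric itself
]

onboarding_days = end_to_end_days(journey)  # the outcome that matters
```

The same end-to-end measure, aggregated across clients, replaces a scatter of per-step SLAs with one number tied to the business outcome.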

Over the last few years, firms have invested in automation and process improvement initiatives but have not been able to achieve maturity in their operations transformation journey.

We believe that these initiatives are not realizing their expected outcomes because:

  1. Automation solutions are deployed in silos instead of reviewing the overall customer journey
  2. Firms are automating the current underlying processes as-is, which may be inefficient, and hence not achieving higher returns on investment
  3. The focus is on automating product features rather than the customer journey
  4. Data is not collected and analyzed sufficiently to support AI-enabled, data-driven decision making

To learn more about how to rethink your next generation operations transformation initiatives and how Incedo can partner with you in your journey, mail us at

The COVID-19 pandemic has significantly changed the way financial advisors manage their practices, clients and home-office communication. Along with data-driven client servicing platforms, a smooth transition and good compensation, advisors are closely evaluating their firm’s digital quotient for service and support in times of crisis; if unsatisfied, they may look at switching affiliation during or after the crisis.

The pandemic itself is not the trigger; rather, it has given advisors additional reasons to keep looking for a firm that better fits their pursuit of growth and better client service.

The 2018 Fidelity Advisor Movement Study reports that 56% of advisors have either switched or considered switching firms over the last 5 years. Other research finds that one fifth of advisors are aged 65 or above and that, in total, around 40% of advisors may retire over the next decade.


A Cerulli report anticipates a transition of almost $70 trillion from baby boomers to Gen X, Gen Y and charity over the next 25 years. Soon, the shrinking advisor workforce will create a large advice gap that wealth management firms will have to bridge by acquiring and retaining the right set of advisors.


We are observing a changing landscape of advisor and client populations, mounting cost pressure from zero-commission trading and the need for scalable operations. COVID-19 has further accentuated the need for firms to understand the causal factors behind changes in advisor affiliation, so they can optimize the resources deployed across the advisor life cycle. Wealth management firms are increasingly realizing that a one-size-fits-all solution may not yield optimal returns.

Data and analytics can help firms segment their advisors better and drive better results throughout the advisor life cycle. Advisor personalization, using specific data attributes, can significantly improve results by dynamically curating contextual, personalized engagements across the advisor life cycle.

A good data-driven advisor engagement framework defines and measures key KPIs for each stage of the advisor lifecycle; it not only provides insights on key business metrics but also addresses the ‘So what?’ question about those insights. As wealth management firms collect and aggregate data from multiple sources, they are also increasingly using AI/ML models to further refine advisor servicing.

Let us look at the key goals and business metrics for each stage of the advisor life cycle, and at how a data and analytics driven approach helps at each stage.


Prospecting & Acquisition

To attract and convert more high-producing advisors, recruitment teams should track key parameters through the advisor prospecting journey so they can identify:

  • Where most of their prospective advisors come from: RIAs, wirehouses, other broker-dealers
  • Which competitors are consistently attracting high-producing advisors
  • What percentage of advisors drop from one funnel stage to the next and finally affiliate with the firm
  • What common patterns and characteristics the recruited advisors share
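The funnel questions above reduce to simple stage-to-stage conversion arithmetic. A sketch, with hypothetical stage names and counts:

```python
def stage_conversion(funnel):
    """Fraction of each stage's prospects that reach the next stage."""
    stages = list(funnel)
    return {
        stages[i]: funnel[stages[i + 1]] / funnel[stages[i]]
        for i in range(len(stages) - 1)
    }

# Hypothetical recruitment funnel counts for one period.
funnel = {"contacted": 400, "meeting": 120, "offer": 60, "affiliated": 30}

rates = stage_conversion(funnel)
overall = funnel["affiliated"] / funnel["contacted"]  # end-to-end conversion
```

Tracking these rates over time shows where prospective advisors drop out and which stages deserve the recruiting effort.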

A data-driven advisor recruitment process with a feedback loop helps identify likely converts early, balancing the effort spent on recruited versus lost advisors. It also improves the amount and quality of the recruited assets.

For example, analysis of one year of recruitment data for a large wealth management firm revealed that prospects dealing in variable insurance tended not to join because of the firm’s restricted approved-product list. Another insight was that prospects with a higher proportion of fee revenue versus brokerage revenue grew their GDC and AUM at a much faster rate after one year of affiliation. Our machine learning lead-scoring model used multiple such parameters to score a recruit’s joining probability and one-year relationship value, helping the firm precision-target high-value advisors. These insights allowed the firm to narrow down its target segment and improve conversion of high-value advisors.
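A lead-scoring model of the kind described can be sketched as a logistic score over a few recruit attributes. The features, weights and bias below are illustrative assumptions, not the actual model.

```python
import math

# Hypothetical weights: fee-heavy revenue helps, variable-insurance focus hurts.
WEIGHTS = {"fee_revenue_share": 2.0, "sells_variable_insurance": -1.5}
BIAS = -0.5

def joining_score(prospect):
    """Logistic score in (0, 1); higher means more likely to affiliate."""
    z = BIAS + sum(w * prospect[f] for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

fee_heavy = {"fee_revenue_share": 0.8, "sells_variable_insurance": 0}
insurance_heavy = {"fee_revenue_share": 0.2, "sells_variable_insurance": 1}
# fee_heavy scores higher, mirroring the insight in the example above.
```

In a real model the weights would be fit on historical recruitment outcomes rather than chosen by hand.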

Growth & Expansion

During the growth phase of the advisor lifecycle, much of the focus is on tracking business metrics such as TTM GDC, AUM growth and commission-versus-fee splits. These metrics have now become table stakes; advisors expect their firms to provide more meaningful insights and recommendations to improve their practices. Some of the ways firms are using data to enhance advisor practices:

  • Using data from data aggregators and providing insights on advisor’s wallet share and potential investment opportunities
  • Providing peer performance comparisons to the advisors
  • Providing next best action recommendations based on the advisor and client activities

For example, our recommendation engine analyzed advisor portfolios and trading patterns and determined that most high-performing advisors showed similar patterns in investment distribution, asset concentration and churn percentage. This enabled the engine to provide targeted investment recommendations to other advisors based on their current investment basket and client risk profiles. Wealth management firms are also building advisor segmentation and personalization models based on clients, investment patterns, performance, digital engagement and content preference, then sending personalized marketing and research content matched to each advisor’s persona, driving better engagement.
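The pattern-matching idea behind such an engine can be sketched as comparing an advisor's asset mix to a reference mix drawn from high performers and flagging the most under-weighted class. The asset classes and mixes are illustrative assumptions.

```python
import math

# Hypothetical reference mix derived from high-performing advisors.
HIGH_PERFORMER_MIX = {"equity": 0.5, "fixed_income": 0.3, "alternatives": 0.2}

def cosine(a, b):
    """Cosine similarity between two mixes sharing the same keys."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def recommend(advisor_mix):
    """Similarity to the reference mix, plus the most under-weighted class."""
    gap = {k: HIGH_PERFORMER_MIX[k] - advisor_mix.get(k, 0.0)
           for k in HIGH_PERFORMER_MIX}
    return cosine(advisor_mix, HIGH_PERFORMER_MIX), max(gap, key=gap.get)

mix = {"equity": 0.8, "fixed_income": 0.2, "alternatives": 0.0}
similarity, underweight = recommend(mix)  # flags the class to consider adding
```

A production engine would also filter the suggestion against the client risk profile before surfacing it, as the text notes.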

Maturity and Retention

It is always more difficult and costly to acquire new advisors than to grow with the existing advisor base. Firms pay extra attention to ensuring their top producers’ needs are always met; yet despite their best efforts, large offices leave for greener pastures or higher pay-outs. Firms run periodic NPS surveys that indicate overall advisor satisfaction levels, but these do not generate insights for proactive attrition prevention. Data and analytics can help identify patterns that predict advisor disengagement and enable targeted, proactive interventions.

For example, our attrition analysis study for a leading wealth manager indicated that a large portion of advisors over the age of 60 were leaving the firm and selling out their business. This enabled the firm to proactively target succession planning programs at this age demographic of advisors. Our analysis also indicated a clear pattern of decreased engagement with the firm’s digital properties and decreasing mail open rate, for the advisors leaving the firm. Based on factors such as age, length of association with the firm, digital engagement trends, outlier detection, our ML based Attrition Propensity model created attrition risk scores for advisors and enabled retention teams to proactively engage more with at-risk advisors and improve retention.
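A rule-based sketch of the attrition-risk idea above, with illustrative features, weights and thresholds (the production model described used ML, outlier detection and many more features):

```python
def engagement_trend(monthly_logins):
    """Last-quarter average minus first-quarter average; negative = declining."""
    return sum(monthly_logins[-3:]) / 3 - sum(monthly_logins[:3]) / 3

def attrition_risk(advisor):
    """Heuristic risk score in [0, 1] from a few illustrative signals."""
    score = 0.0
    if advisor["age"] >= 60:
        score += 0.4  # succession-risk demographic seen in the study
    if advisor["tenure_years"] < 3:
        score += 0.2  # weak attachment to the firm
    if engagement_trend(advisor["monthly_logins"]) < 0:
        score += 0.4  # declining digital engagement signal
    return score

at_risk = {"age": 62, "tenure_years": 10,
           "monthly_logins": [20, 18, 19, 9, 6, 4]}
engaged = {"age": 45, "tenure_years": 12,
           "monthly_logins": [10, 11, 12, 13, 14, 15]}
# at_risk would be routed to proactive retention outreach.
```

Scores like these let retention teams rank advisors and spend their outreach time where it matters.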

A JD Power study notes that wealth management firms have been making huge investments in new advisor workstation technologies designed to aggregate market data, client information, account servicing tools and AI-powered analytics into a single interface. Yet while firms invest heavily, only 48% of advisors find the technology their firm currently uses valuable. Only 9% of advisors use AI tools, but advisor satisfaction is 95 points higher (on a 1,000-point scale) when they do. Advisors see a disconnect between the technology and the value derived from it.

This further underscores the need for personalized advisor solutions and an AI-driven advisor personalization platform that provides curated insights to firms. Such a platform enables targeted, personalized service and support across the advisor lifecycle, optimal utilization of the firm’s resources and significant growth potential.

The firms that understand the potential of data-driven decision making for advisor engagement, and that adopt such tools early, will thrive in these uncertain times and emerge as winners once the dust settles.

The magnitude of the COVID-19 pandemic has forced the world to a virtual halt, with a sharp negative impact on economies worldwide. The last few weeks have seen one of the most brutal global equity collapses, a spike in unemployment and negative GDP forecasts. With the crisis posing major systemic financial risk, effective credit risk management is now the key imperative for banks, fintechs and lending institutions.

Expected spike in delinquencies and credit losses post COVID-19

The creditworthiness of banking customers in both retail and commercial portfolios has decreased drastically with the sudden negative impact on employment and income. If the pandemic continues for a longer period, defaults and credit losses for banks could be much higher than those observed in the global financial crisis of 2008.

Need for an up-to-date, agile and analytics driven credit decisioning framework:

The existing models that banks rely upon simply did not account for such a ‘black swan’ event. A credit decisioning framework based on existing risk models and business criteria would be suboptimal in assessing customer risk, putting the reliability of these models in doubt. Banks urgently need to adopt a new credit lending framework to quickly and effectively identify risks and adjust their credit policies.

Incedo’s risk management framework for the post COVID-19 world

To address the challenges thrown up by the COVID-19, it is important to assess the short, medium and long-term impact on bank’s credit portfolio risk and define a clear roadmap as a strategic response focusing on changes to risk management methodologies, credit risk models and existing policies.

We propose a six-step framework for banks and lending institutions, comprising the following approaches.


  1. COVID Risk Assessment & Early Monitoring Systems

Banks and lending institutions should focus control-room efforts on a rapid re-assessment of customer and portfolio risk, based on COVID situational-risk distress indicators and anomalies observed in customer behaviour post COVID-19. For example, a sudden spike in utilization, reduced or no salary credits in a payroll account, or use of the cash advance facility by a ‘transactor’ persona could all signal rising situational risk for a given customer. In the absence of real delinquencies (due to moratorium or payment holiday facilities), such triggers should help banks understand customers’ changing profiles and create automated alerts around them.
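The distress indicators described above amount to simple rules over recent account behaviour. A sketch, with hypothetical field names and thresholds:

```python
def situational_alerts(account):
    """Return the list of COVID situational-risk triggers that fire."""
    alerts = []
    if account["utilization"] > 1.5 * account["avg_utilization_6m"]:
        alerts.append("utilization_spike")
    if account["payroll_credits_last_60d"] == 0:
        alerts.append("payroll_stopped")
    if account["persona"] == "transactor" and account["cash_advances_30d"] > 0:
        alerts.append("transactor_cash_advance")
    return alerts

account = {"utilization": 0.9, "avg_utilization_6m": 0.4,
           "payroll_credits_last_60d": 0, "persona": "transactor",
           "cash_advances_30d": 2}

alerts = situational_alerts(account)  # all three distress triggers fire here
```

In a monitoring system these rules would run on each refresh of account data and feed the automated alerts the text describes.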


  2. Credit risk tightening measures

Whether you are a chief risk officer or a credit risk practitioner, by now you will have heard many times that your previous credit risk models and scorecards no longer hold and validate. While that is true, it has also been observed that directionally most of these models still rank-order, with only a few exceptions. These exceptions, or business overrides, can be captured through early monitoring signals and overlaid on top of existing risk scores as a very short-term plan. Customers with a low risk score but situational risk deterioration flagged by early monitoring triggers are the segments where credit policy needs to be tightened. As delinquencies start getting captured, banks should rebuild these models and identify the optimal cutoffs for credit decisioning.
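Overlaying such triggers on an existing score can be sketched as a penalty adjustment before the approval cutoff is applied. The FICO-like scale, penalty size and cutoff are illustrative assumptions.

```python
def credit_decision(pre_covid_score, n_triggers, approve_cutoff=650):
    """Score is FICO-like (higher = safer); each trigger applies a penalty."""
    adjusted = pre_covid_score - 40 * n_triggers
    decision = "approve" if adjusted >= approve_cutoff else "decline"
    return decision, adjusted

# A historically low-risk customer with two situational triggers: the
# existing score still rank-orders, but the overlay tightens the decision.
tightened = credit_decision(700, 2)  # situational deterioration
steady = credit_decision(700, 0)     # no triggers
```

Once real delinquencies are observed, the hand-set penalty and cutoff would be replaced with re-estimated model parameters, as the text recommends.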


  3. Personalized Credit Interventions

There are still customers with superior creditworthiness waiting to borrow for their financial needs. It is very important for banks to discern such customers from those with a low ability to pay back. Data-driven personalized interventions let banks reduce risk exposure while preserving an optimal customer experience: helping customers facing a liquidity crunch through government relief programs, loan re-negotiation and settlement offers, while building a better portfolio by extending credit to ‘good’ customers in the current low-rate environment.

  4. Models Re-design and Re-Calibration

A wait-and-watch approach over the next 2-3 months, to understand shifts in customer profiles and behavior, is a precursor to re-designing the existing models. It will let banks better understand the effect of the crisis on customer profiles and build intelligent scenarios around future delinquency trends. Existing models will need to be re-calibrated or re-designed, and new models periodically re-monitored, given the expected economic volatility over at least the next 6-12 months.

  5. Model Risk Management through Risk Governance and Rapid Model Monitoring

Banks urgently need to identify and quantify the risks of continuing to use historical credit risk models and scorecards, through model monitoring. While the risk associated with credit products has increased, delinquencies have not yet started appearing in banks’ databases because of the payment holiday facilities introduced by governments in most countries. In such a situation, it is critical to design risk governance rules for new models whose dependent variables (e.g. delinquency) may not be captured accurately.

  6. Portfolio Stress Tests aligned with dynamic macroeconomic scenarios

Banks and lending institutions should build on their stress testing practice by running dynamic macroeconomic scenarios on a periodic basis. Stress testing has already helped US banks improve their capital provisioning; the COVID crisis should push banks across geographies to use stress tests to guide their roadmaps, depending on how their financials would fare under different scenarios, and to take remedial action.
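A portfolio stress test of this kind boils down to recomputing expected loss, EL = PD × LGD × EAD, under scenarios that shock default probabilities. The toy portfolio and scenario multipliers below are illustrative assumptions.

```python
# Hypothetical two-loan portfolio: exposure at default (EAD), probability of
# default (PD) and loss given default (LGD).
PORTFOLIO = [
    {"ead": 100_000, "pd": 0.02, "lgd": 0.45},
    {"ead": 250_000, "pd": 0.05, "lgd": 0.40},
]

# Macro scenarios expressed as PD multipliers.
SCENARIOS = {"baseline": 1.0, "adverse": 2.0, "severely_adverse": 3.5}

def expected_loss(pd_multiplier):
    """Portfolio expected loss with shocked PDs (capped at 1.0)."""
    return sum(min(1.0, loan["pd"] * pd_multiplier) * loan["lgd"] * loan["ead"]
               for loan in PORTFOLIO)

losses = {name: expected_loss(m) for name, m in SCENARIOS.items()}
```

Comparing losses across scenarios against capital provisions is what indicates whether remedial action is needed.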

The execution of the above framework should ensure that banks and fintechs can respond to immediate priorities and protect the downside, while emerging stronger as we enter the new normal of the credit lending marketplace.

Incedo is at the forefront of helping organizations transform the risk management post COVID-19 through advanced analytics, while supporting broader efforts to maximize risk adjusted returns.

Our team of credit risk experts and data scientists has, over the last few weeks, set up post-COVID early monitoring systems, heuristic post-COVID risk scores, and COVID command centers for a couple of mid-tier US-based banks.

Learn more about how Incedo can help with credit risk management.

While the reckless overextension of credit lines by lenders and banks was the root cause of the 2007-09 financial crisis, with the US as its epicenter, this time the crisis has been caused by a virus with rapidly evolving geographical centers covering almost the entire world. Banks are in a catch-22 situation: they need to support the government’s lending and loan-relief measures while also maintaining low credit loss rates and sufficient capital provisioning on their balance sheets. Effective risk management and credit policy decisioning has never been as challenging for banks as it is now, in the post-COVID-19 world.

COVID-19 implications and challenges for banks and lending institutions

Sudden shift in the risk profile of retail and commercial customers – The surge in unemployment, deteriorated cash flows for businesses and similar shocks have led to a sudden shift in the credit profiles of customers. The data banks leveraged before COVID may no longer provide an accurate picture of a consumer’s risk profile.

Narrow window of opportunity to re-define credit policies – Banks’ credit policies for origination, existing customer management, collections, etc. have been designed over years with considerable rigor: market tests, and the design and application of credit risk models and scorecards. The coronavirus has caught bankers and chief risk officers by surprise, leaving a narrow window to change existing models and risk strategies. While many banks had built a stress testing practice for unfavorable macroeconomic scenarios, the pace and impact of the coronavirus have been unprecedented, requiring an immediate response to mitigate the expected risks.

Government relief programs like payment moratoriums – Payment holidays and moratorium programs effectively take some burden off consumers, but they prevent banks from identifying high-risk customers because no measure of delinquency can be captured from existing data.

Four-point action plan and strategy to navigate through the COVID-19 crisis

Banks will need to go back to the drawing board, re-imagine their credit strategy and put in accelerated war-room efforts to leverage data and create personalized risk decisioning policies. Based on Incedo’s experience supporting some mid-tier US banks with post-COVID risk management, we believe the following can help banks and lenders make a fast shift to enhanced credit policies and mitigate portfolio risk:

  1. Covid situational risk assessment – As a starting point, Risk managers should identify the distress indicators that capture the situational risk posed post Covid-19. These indicators could be a firsthand source of customer’s situational risk (e.g. drop in payroll income) or surrogate variables like higher utilization or use of cash advance facility on credit card etc. Banks would need to leverage a combination of internal and external parameters, such as industry, geography, employment type, customer payment behavior, etc. to quantify COVID based situational risk for a given customer.

  2. Early warning alerts & heuristic risk scores based on recent behavioral shifts in a customer’s risk profile – Sudden changes in financial distress signals should be captured to create automated alerts at the customer level. Combined with the customer’s historical (pre-COVID) risk, these should feed as key input variables into the overall risk decisioning process. The early warning system should issue alerts flagging abnormal fluctuations and potentially stress-prone behavior for a given account.

  3. Executive Command Centre for COVID Risk Monitoring – The re-defined heuristic customer risk scores should be leveraged to quantify the overall risk exposure for the bank post COVID. Banks need to monitor the rapidly changing credit behavior of customers on a periodic basis and identify key opportunities. The rapid risk monitoring based command center should focus on risk across the customer lifecycle and various risk strategies and help provide answers to some of the following questions of the bank’s management team
    • What is the overall current risk exposure, and what is the forecasted exposure over the short term?
    • How has the overall credit quality of the existing customer base changed, and are there patterns across different credit product portfolios?
    • What types of customers are using payment moratoriums, and what is the expected default risk of those segments?
    • How large is the estimated drop in income at the overall portfolio level, and how could it affect other credit interventions?
    • Which models are showing significant performance deterioration and may need re-calibration as high-priority models?
  4. Personalized credit interventions strategy (whom to Defend vs Grow vs Economize vs Exit) – To manage credit risk while optimizing customer experience, banks should use a data-driven personalized interventions framework: Defend, Grow, Economize and Exit. Using the customer’s historical risk, post-COVID risk and potential future value, an optimal credit intervention strategy can be carved out. This framework should enable banks to help customers with a short-term liquidity crunch through government relief programs, loan re-negotiation and settlement offers, while building a better portfolio by extending credit to creditworthy customers in the current low-interest-rate environment.
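The Defend / Grow / Economize / Exit framework can be sketched as a two-by-two over post-COVID risk and potential future value. The normalized scales and cutoffs are illustrative assumptions.

```python
def intervention(post_covid_risk, future_value,
                 risk_cutoff=0.5, value_cutoff=0.5):
    """Map a customer (risk and value normalized to [0, 1]) to a strategy."""
    if post_covid_risk < risk_cutoff:
        # Low risk: grow valuable relationships, economize on the rest.
        return "Grow" if future_value >= value_cutoff else "Economize"
    # Elevated risk: defend valuable relationships (relief programs,
    # re-negotiation), exit the rest.
    return "Defend" if future_value >= value_cutoff else "Exit"

segments = {
    "low risk, high value": intervention(0.2, 0.9),
    "high risk, high value": intervention(0.8, 0.9),
    "low risk, low value": intervention(0.2, 0.1),
    "high risk, low value": intervention(0.8, 0.1),
}
```

In practice the risk axis would come from the heuristic post-COVID scores described earlier, and the cutoffs would be tuned as real delinquency data arrives.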


The execution of the above-mentioned action plan should help banks to not only mitigate the expected surge in credit risk but also enable a competitive advantage as we move towards the new-normal. The rapid credit decisioning should be backed with more informed decision making and on an ongoing basis, the framework should be fine-tuned to reflect the real pattern of delinquencies.

Incedo, with its team of credit risk experts and data scientists, has over the last few weeks set up post-COVID early monitoring systems, heuristic post-COVID risk scores and COVID command centers for a couple of mid-tier US-based banks.

Learn more about how Incedo can help you with credit risk management.

Digital transformation was one of the most important business trends in wealth management circles before the unprecedented global disruption shifted all focus towards ensuring business continuity. Recognizing changing digital behavior, leading RIA custodians, broker-dealers, TAMPs and RIAs had either embarked on or were kickstarting digital transformation journeys. The disruption caused by COVID-19 has laid bare how nascent the digital evolution of many wealth management players still is. Customer service centers are overwhelmed by increased call volumes and reduced capacity, and financial advisors must field multiple long calls from anxious clients uncertain about their investments. Low adoption of the digital assets provided by broker-dealers and RIA firms may reflect sub-optimal client experience or gaps in information availability.

We may all be in a long period of disruption, and firms that cannot drive digital adoption, or that remain person-dependent, will struggle with client servicing, let alone operational scaling. Digitalization should be seen as an essential part of a wealth manager’s business continuity efforts, since it ensures information availability and provides online self-service capabilities. ‘Digital insulation’ is a complementary idea: to protect their businesses from personnel-related disruptions, organizations will need to invest in digitalization and thus ensure business continuity.

Drivers of Digital Transformation

Digitalization in wealth management has been driven primarily by the following:

  1. Changing business model – The business model has been steadily shifting from a product-focused brokerage model to a relationship-focused advisory model. One study found that consolidated commission revenues for the top 50 independent broker-dealers fell over the last 5 years, while advisory fees grew by more than 50% in the same period. The shifting base of advisory clients expects engagement across multiple channels, and customer experience becomes paramount.
  2. Revenue compression – Zero commissions are already a reality and were a seminal event for the industry. The revenue impact for players will range anywhere from 10% to 20%. RIA custodians are also likely to levy additional fees on participants to cover the lost revenues. With Fed rates likely to remain low for the foreseeable future, revenue from sweep accounts will shrink substantially as well, further accentuating revenue pressures.
  3. Changing age mix of clients and advisors – As wealth transfers from baby boomers to millennials, millennials will make up an increasingly valuable client segment. Similarly, as the ageing advisor population retires, new advisors will be primarily dependent on technology, which will largely influence their business decisions.
  4. FinTech disruption– Advisor fintech tools, also known as advisor tech, have not only invaded the usual favorite domains such as CRM, financial planning, and portfolio management but have also created new advisor tech segments such as mind mapping, account aggregation, forms management, and social media archiving. The 2020 T3 advisor software survey covered almost 500 different tools across almost 30 sub-segments, highlighting the plethora of tools available to clients and advisors.

The above drivers are creating two main needs for the wealth management players:

Need to Scale Servicing– The first two drivers (changing business model and revenue compression) are forcing wealth management players to digitalize and gain operational scale so they can service more clients. One industry study cited servicing clients as the most important digital driver for wealth management firms. The ongoing disruption will further fuel demand for straight-through client onboarding, e-account opening, digital signatures, and workflow-based proposal generation solutions. Organizations that still depend on back-office processors to open accounts and onboard clients will face increasing pressure to transition. Similarly, advisors and clients need to be provided with tools to move to a more self-service model.

Need to Scale Knowledge– The last two drivers (changing age mix and FinTech disruption) have resulted in rising client and advisor expectations. An increasing number of clients no longer simply delegate their investment decisions to advisors; they seek to collaborate on and validate those decisions. They look for real-time knowledge about their current investments and for investment insights. Given the prevailing uncertainty, many clients will also start demanding real-time information about the risk exposure of their portfolios and about how they can quickly pivot, either to protect their investments or to take advantage of profitable bargains. Clients will naturally drift towards financial advisors who provide full-service client portals to access and monitor their investments. Similarly, advisors will drift towards firms that provide digital practice management tools and advisor self-service capabilities.

The third form of scale, which will become very relevant in the current disruption, is Scaling Digital Collaboration. With social distancing becoming the norm, in-person client meetings may not be possible for some time. While advisors and clients can still talk and make video calls, current tools do not allow for collaborative discussions or presentations. Going forward, organizations will need to invest in tools that enable online client engagement and advice delivery as a complementary engagement channel. Software providers can study the evolution of telemedicine systems, which provide a full suite of features including video conferencing, document sharing, appointment scheduling, note taking, and client history. Once client portals or CRM systems are enhanced for tele-advice, this alternate engagement channel is likely to grow in popularity with both clients and advisors, allowing remote collaboration and engagement.

To sum up, digitalization is the best antidote to any such future disruption. Wealth management firms should modernize advice delivery and accelerate their digitalization efforts not only to transform their businesses but also to insulate them. Digitalization can in fact become a vital cog in business continuity efforts by enabling self-service, information disintermediation, and collaboration.

The wealth management industry has gone through major changes in the past few years.

The amount of investable wealth among U.S. households has increased tremendously over the past few years and will be changing hands as wealth passes to millennials. Over the next 25 years, Cerulli has estimated that $31 trillion will be passed on to Generation X households, while $22 trillion will be passed on to millennials [1]. Two diverse trends are at play: growing wealth in the hands of a younger demographic, and aging investors (primarily baby boomers) with complex financial goals ranging from retirement planning to long-term medical care. Together, these trends have driven a simultaneous growth in demand for financial advisors. Independent financial advisors have continued to grow in terms of revenue and assets under management. The trend should continue, as advisors enjoy the flexibility and the opportunity for higher income with fewer cuts going to wirehouses and broker dealers. The major drawback of going independent is the lack of support for back-office and administrative operations. This is where turnkey asset management providers (TAMPs) come into the picture to provide advisors the necessary support.

TAMPs have been helping advisors focus their attention on client needs while taking over back office and administrative support activities, including client onboarding, asset transfers, trade execution, portfolio management, trust accounting, proposal generation, and performance reporting.

Trends impacting advisors and TAMPs:

  1. Financial advisors are experiencing an increase in demand from young professionals. This is the HENRY (high earner, not rich yet) segment: clients with lower asset levels who would like to start investing and are looking for financial guidance to keep them on the right track towards their long-term goals. [5]
  2. “One of the biggest challenges facing investment advisory firms today is disintermediation. People can invest by themselves rather than hiring an investment professional to manage their money”. Advisors need to provide clients with an experience that is customized to their needs, demonstrates value, and helps them invest strategically.
  3. Technology is redefining the advisor-client experience in multiple ways. Clients now want instant access to their portfolios and performance, which means advisors need to fulfill on-demand requests with low turnaround times.
  4. Clients expect personalized services tailored to their individual risk profiles and future goals. While tech-savvy investors look for sophisticated digital systems, they also see value in the attention and financial experience of advisors who can help them build a smart investment portfolio. As a result, expectations of advisors increasingly center on technology and better investment management.
  5. ‘Holistic financial planning,’ which goes beyond client setup and onboarding to adjusting investment strategy based on new life events and addressing multiple life goals, is essential for advisor success. Advisors are therefore looking for digital platforms that enable them to service these needs. For example, a portfolio simulation that helps clients design different investment scenarios and view the impact of those changes on their goals can be hugely beneficial for advisors.

Strategy to address market trends:

  1. Reimagining the client experience: To meet client expectations of a personal and customized investment strategy, TAMPs need to provide advisors with digital solutions enabling them to walk clients through risk analysis, goal setup, and investment strategy definition in a simple yet effective manner. Two technology offerings are key to optimizing the client experience: an investor portal and a smart portfolio generation platform. Clients value access to their portfolios and look for information beyond quarterly performance reports. An investor portal providing a 360-degree view of client accounts, progress towards goals, investment strategies, and performance has become a basic requirement for many clients and, therefore, for advisors. A sophisticated portfolio selection tool that recommends investment strategies based on the client’s stage of life, goals, major events (such as receiving an inheritance, retirement, or marriage), and attitude towards risk and market changes will enable advisors to offer a hybrid model combining a smart platform with a human touch.
  2. Optimize advisor performance: The key success metric for TAMPs is growth in AUM, which depends on the success of advisors and their ability to acquire new clients and retain existing ones. Advisor performance analytics is therefore gaining traction and becoming increasingly relevant. Firms must leverage data analytics to derive insights from the best-performing advisors and surface the next best action to help advisors collaborate better with clients. To retain and attract advisors, TAMPs should review advisor experience metrics, assess CSAT with respect to technology and operations services, and continue to improve the experience through simplified back-office processes and technology solutions.
  3. Drive profitability through efficient operations: While technology platforms enable advisors to grow, efficient back-office support is necessary to help independent advisors survive. Adding services to the operations portfolio provides immense value for advisors. While billing, trade management, and statement generation are core activities, additional services such as sleeve-level reporting, white labelling, custom proposal generation, trust accounting, tax-loss harvesting, automated rebalancing, and account aggregation will help acquire more advisors. A key focus area for TAMPs should be minimizing operations and compliance risk, as meeting compliance requirements is a top priority for advisors. Using automation to improve the speed and accuracy of transactional processes reduces both costs and errors.


  1. Cerulli Associates, Federal Reserve, U.S. Census Bureau, Internal Revenue Service, Bureau of Labor Statistics, and the Social Security Administration
  2. A Year of Tremendous Growth for RIAs

Over the past two months, COVID-19 has not only created a global health crisis but has also led to socio-economic disruption affecting major industry sectors, including healthcare, banking, insurance, and capital markets.

Wealth management is one of the more vulnerable sectors, with revenues highly correlated to capital market performance, and has already started experiencing losses in revenue and growth. The stock market response to the COVID-19 pandemic has been panic-driven and volatile, and could continue to be so until the spread of the virus is contained. With economic data likely to worsen in the coming months, stock markets could experience another round of correction.

As a result, firms initially struggled and are now implementing plans to reduce costs and reassess spending, while continuing to tackle extremely high trade volumes and keep critical processes running. Most firms have now dealt with the initial priorities of ensuring large-scale business continuity and setting up the majority of the workforce to work remotely. These firms are now working to identify data and information security risks and to reprioritize organizational strategies and projects.

A few firms that are reaping the benefits of prior investments in digital transformation, automation, and information security are slightly ahead on the digital maturity curve, while others are just starting to plan and strategize their digital journey for the near future.

From our experience, we believe there are four key themes shaping up during this crisis which will help wealth management firms stay resilient:

  1. Focus on cost reduction and rationalization: To tackle market volatility, there is an increased focus on optimizing costs and improving operational efficiency. With a growing volume of business transactions, deploying tactical automation solutions to automate trade processing and compliance reporting will embed much-needed flexibility and improve productivity. Outsourcing additional processes for the short to medium term will also help address the increased workload without large cost investments. On the technology front, leveraging cloud solutions would be a quick win to immediately reduce fixed costs.
  2. Prioritize risk and data security: With millions of employees working remotely, companies will have to revisit cybersecurity best practices and enhance or upgrade systems to protect against unauthorized access, phishing scams, etc. Given the unsecured channels and networks used by remote employees, wealth management firms will also need to reassess application access based on criticality, in light of the increased cybersecurity threat. Adopting multi-factor authentication and enhancing security incident management protocols will be vital to maintaining data security.
  3. Continue to focus on digital transformation: Firms need to double down on their digital transformation practice to defend their core business and emerge as winners in this new normal. Digital analytics is critical for companies to refine their portfolio strategy, automate critical processes based on usage patterns, and strengthen market research and insights to communicate better with advisors, broker dealers, and investors. The significance of omnichannel and well-designed advisor and investor portals has never been higher. Simple and intuitive portals help communicate account and portfolio performance, let stakeholders submit data and transaction requests faster, and show them how they are being impacted in real time. It is critical to harness data across web, mobile, branches, and CRM to provide the best possible experience to clients and advisors.
  4. Enhance IT resiliency: Most firms were unprepared for a crisis of this magnitude, given its unprecedented nature. While businesses have managed to get their workforces set up remotely, it is critical that they continue to assess the impact of network traffic and volumes on their infrastructure. They should also proactively prepare and update plans to address security breaches, network breakdowns, and critical resource unavailability.

In spite of the downturn, every crisis helps businesses realize their underlying strengths and define their strategy roadmap for the journey ahead. We strongly believe that investments in operational efficiency, digital transformation, and customer experience optimization, alongside continued work on data security and business continuity planning, will be the key pillars of running a resilient business during this crisis. They will remain just as important in the ‘new normal’ that emerges after the pandemic.