HCP engagement in the new era

The engagement strategies that pharma representatives use to connect with HCPs (Health Care Professionals) were already in a state of transformation, and Covid-19 has only accelerated the process. With fewer HCPs now preferring in-person meetings and with the advent of new technologies, there has been a steady rise in the use of digital channels such as email, social media, and virtual connects. A mix of engagement approaches has emerged, and the pandemic drove a sharp increase in online and digital methods, especially video calls and emails. According to 2022 research, these channels made up only 3% and 14% of medical contacts with HCPs before the pandemic, but by 2022 they made up 22% (a 7-fold increase) and 25%, respectively. All of these developments have compelled drug companies to reimagine their engagement strategies, maintain healthy relationships with HCPs, and use the right channels to make the required impact.

Personalization – The need of the hour:

With different technologies at their disposal and HCP preferences varying widely, pharma companies have realized that they must adapt their marketing and engagement tactics to the needs of each doctor. Each HCP expects something different from an interaction, and the education each one requires is largely dictated by the patient cohort the HCP serves. The same research highlights that a significant 61% of physicians pinpoint greater personalization as the key factor that differentiates and enhances the value of medical engagement. HCPs differ in their interest areas, in how they respond to channels, and in how they respond to the various incentives pharma companies offer. For example, in a recent analysis executed on the Incedo LighthouseTM platform, we found that Pediatricians (PD) respond to nutritional rebates much more strongly than their non-PD counterparts.

Personalization, and sometimes hyper-personalization, is therefore the central theme of customer engagement across domains. HCPs now prefer to be reached on a digital platform of their choice, e.g. mobile, email, social, or call activity. This behavior may differ across HCP segments, e.g. by therapeutic area, affiliation, years of experience, and geography, apart from the patient cohorts they serve.

With response data now available to drug companies, it is possible to derive insights on HCP preferences by various cuts such as segment, sub-segment, and geography. These preference and behavior patterns shed light on how receptive HCPs are to digital engagement. Organizations leverage this analysis to evaluate the context and content of each digital interaction and to derive the Next Best Action by answering critical questions about message and channel strategy.

  1. https://www.iqvia.com/-/media/iqvia/pdfs/library/white-papers/iqvia-medical-affairs-next-frontier-unlocking-omnichannel-engagement_dec2022.pdf
  2. https://www.iqvia.com/-/media/iqvia/pdfs/library/white-papers/iqvia-medical-affairs-next-frontier-unlocking-omnichannel-engagement_dec2022.pdf
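As a simple illustration of this kind of analysis (not Incedo LighthouseTM code), the sketch below computes channel response rates by HCP segment from a toy interaction log and picks the best-responding channel per segment; all column names and data are illustrative assumptions.

```python
import pandas as pd

# Illustrative HCP engagement log: one row per outreach attempt.
interactions = pd.DataFrame({
    "hcp_segment": ["Pediatrics", "Pediatrics", "Oncology", "Oncology", "Oncology", "Pediatrics"],
    "channel":     ["email",      "rep_visit",  "email",    "webinar",  "webinar",  "email"],
    "responded":   [1,            0,            0,          1,          1,          1],
})

# Response rate for every segment x channel combination.
response_rates = (
    interactions.groupby(["hcp_segment", "channel"])["responded"]
    .mean()
    .rename("response_rate")
    .reset_index()
)

# "Next best channel" per segment = channel with the highest observed response rate.
best_channel = (
    response_rates.sort_values("response_rate", ascending=False)
    .groupby("hcp_segment")
    .head(1)
)

print(response_rates)
print(best_channel)
```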

By harnessing the machine learning capabilities of the Incedo LighthouseTM platform alongside Amazon Personalize, users can be automatically categorized into HCP segments based on their preferences. This intelligent segmentation not only enhances engagement with marketing campaigns but also boosts retention through personalized messaging. Such precision in targeting improves the return on marketing expenditure and helps pharmaceutical companies identify high-impact channels using Incedo LighthouseTM with the power of cloud capabilities.

In one recent deployment of Incedo LighthouseTM at a pharmaceutical company, the client wanted to understand the most impactful channels for engaging HCPs. This was also driven by the CMO agenda of identifying profitable marketing channels. Using the segmentation models powered by the Incedo LighthouseTM platform to understand HCP segments and sub-segments, analyses were run for different therapeutic areas. Marketing investment was translated into channel-specific input variables, and its impact on overall sales was measured using regression models built and operationalized in Amazon SageMaker integrated with the Incedo LighthouseTM platform. The insights from these cloud-based models were used to derive the contribution of each channel to both baseline and promotional sales, and from this the ROI of each channel was determined. In this case, at a broad level, every marketing dollar spent returned an extra 40 cents. It also emerged that digital channels were underinvested relative to the other channels, even though HCPs responded best to them.
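A minimal sketch of the underlying technique follows: an ordinary least-squares regression of sales on channel-level spend, from which channel contributions and a blended return per dollar can be read off. The data, column names, and coefficients are made up for illustration; they are not the client's actual SageMaker models or results.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Illustrative weekly data: spend by channel and observed sales.
rng = np.random.default_rng(0)
weeks = 104
spend = pd.DataFrame({
    "email_spend": rng.uniform(10, 50, weeks),
    "rep_visits":  rng.uniform(20, 80, weeks),
    "digital_ads": rng.uniform(5, 40, weeks),
})
baseline = 200.0
sales = baseline + 0.6 * spend["email_spend"] + 0.3 * spend["rep_visits"] \
        + 0.9 * spend["digital_ads"] + rng.normal(0, 10, weeks)

# Fit a simple linear response model: sales ~ channel spend.
model = LinearRegression().fit(spend, sales)

# Coefficients approximate incremental sales per extra dollar in each channel.
roi_per_dollar = pd.Series(model.coef_, index=spend.columns)
print("Estimated baseline (intercept):", round(model.intercept_, 1))
print("Incremental sales per dollar of spend:\n", roi_per_dollar.round(2))

# Blended view: total incremental sales per total dollar spent across channels.
incremental = (spend * roi_per_dollar).to_numpy().sum()
print("Blended return per dollar:", round(incremental / spend.to_numpy().sum(), 2))
```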

The Incedo LighthouseTM platform helps you better connect with HCPs using tools like the KPI Tree and Cohort Analyzer. You can identify HCPs who did not get enough attention but responded well, and those who got too much attention without responding. By digging deeper into affiliations and hospitals, you get practical insights to create specific and effective strategies for better engagement with HCPs. What sets these models apart is their development, training, validation, and deployment via Amazon SageMaker, a cloud platform that amplifies the power of machine learning. This cloud integration not only ensures the robustness of the analytical tools but also underscores the commitment to harnessing advanced technology for optimizing HCP engagement strategies.

Incedo LighthouseTM goes a step further with advanced visualization capabilities by leveraging the cloud, allowing the generation of response curves for each channel and providing additional drill-down options. These features are crucial for simulating the performance of HCP cohorts and channels, helping to identify the break-even dollar spend. The integration of Amazon QuickSight into the platform transforms complex data into easily understandable visual insights, contributing to better decision-making and boosting operational efficiency in the pharmaceutical industry. By applying optimization algorithms from Incedo LighthouseTM pre-built accelerators and leveraging Amazon QuickSight, organizations can craft a channel strategy that optimizes HCP engagement across different mediums while minimizing investment.
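To illustrate the response-curve and break-even idea, here is a small sketch that assumes a diminishing-returns curve for one channel and solves for the spend level at which the next dollar returns exactly one dollar. The functional form, parameters, and margin are illustrative assumptions, not outputs of the platform.

```python
import numpy as np

# Assumed diminishing-returns response curve for one channel:
# incremental_sales(spend) = a * (1 - exp(-b * spend))
a, b = 500.0, 0.02           # illustrative saturation level and shape parameter
margin_per_unit = 1.0        # illustrative profit per unit of incremental sales

def incremental_sales(spend):
    return a * (1.0 - np.exp(-b * spend))

def marginal_return(spend):
    # Derivative of the curve times margin: extra profit from the next dollar spent.
    return margin_per_unit * a * b * np.exp(-b * spend)

# Break-even spend: the point where the next dollar returns exactly one dollar.
# Solving margin * a * b * exp(-b * s) = 1 gives s = ln(margin * a * b) / b.
breakeven_spend = np.log(margin_per_unit * a * b) / b

for s in np.linspace(0, 400, 9):
    print(f"spend={s:6.0f}  cumulative sales={incremental_sales(s):7.1f}  marginal return={marginal_return(s):4.2f}")
print(f"Break-even spend (marginal return = 1.0): {breakeven_spend:.0f}")
```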

A Complementary Partnership

“Data is the new currency” has gained immense popularity in recent years because data is now a highly valuable and sought-after resource. Over time, data continues to accumulate and is becoming increasingly abundant. The focus has now shifted from acquiring data to effectively managing and protecting it. As a result, the design and structure of data systems have become a crucial area of interest, and research into the most effective methods for unlocking their potential is ongoing.

While innovation and new approaches keep coming to the fore, the best current ideas consist of two distinct approaches: data mesh and data fabric. Although both aim to address the challenge of managing data in a decentralized and scalable manner, they differ in their philosophy, implementation, focus, and benefits.

Data Mesh

Data mesh is an architectural pattern introduced by Zhamak Dehghani for data management platforms that emphasize decentralized data ownership, discovery, and governance. It is designed to help organizations achieve data autonomy by empowering teams to take ownership of their data and providing them with the tools to manage it effectively. Data mesh enables organizations to create and discover data faster through data autonomy. This contrasts with the more prevalent monolithic and centralized approach, where data creation, discovery, and governance are the responsibility of just one or a few domain-agnostic teams. The goal of data mesh is to promote data-driven decision-making, increase transparency, break down data silos, and create a more agile and efficient data landscape while reducing the risk of data duplication.
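To make the "data as a product" idea concrete, the hypothetical sketch below shows how a domain team might declare a data product contract, i.e. the owner, published schema, output port, and freshness SLA that consumers can rely on. The fields and values are illustrative, not a formal standard.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A hypothetical data-product contract owned and published by one domain team."""
    name: str
    domain: str                 # owning domain, e.g. "claims" or "prescriptions"
    owner_team: str             # team accountable for quality and availability
    output_port: str            # where consumers read it (table, topic, API)
    schema: dict                # column name -> type, the published interface
    freshness_sla_hours: int    # how stale the data is allowed to get
    pii: bool = False           # flag used by federated governance policies
    tags: list = field(default_factory=list)

claims_product = DataProduct(
    name="adjudicated_claims",
    domain="claims",
    owner_team="claims-data-squad",
    output_port="s3://claims-domain/published/adjudicated_claims/",
    schema={"claim_id": "string", "member_id": "string", "paid_amount": "decimal(12,2)"},
    freshness_sla_hours=24,
    pii=True,
    tags=["gold", "finance-approved"],
)
print(claims_product.name, "owned by", claims_product.owner_team)
```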

Building Blocks of Data Mesh

[Figure: Building blocks of data mesh]

Data Mesh Architecture

Since data mesh involves a decentralized form of architecture and is heavily dependent on the various domains and stakeholders, the architecture is often customized and driven as per organizational needs. The technical design of a data mesh thus becomes specific to an organization’s team structure and its technology stack. The diagram below depicts a possible data mesh architecture.

It is crucial that every organization designs its own roadmap to data mesh with conscious and collective involvement of all the teams, departments, and lines of business (LoBs), each with a clear understanding of its own set of responsibilities in maintaining the data mesh.

Data mesh is primarily an organizational approach, and that's why you can't buy a data mesh from a vendor.

Data Fabric

Data Fabric is not an application or software package; it’s an architectural pattern that brings together diverse data sources and systems, regardless of location, for enabling data discovery and consumption for a variety of purposes while enforcing data governance. A data fabric does not require a change to the ownership structure of the diverse data sets like in a data mesh. It strives to increase data velocity by overlaying an intelligent semantic fabric of discoverability, consumption, and governance on a diverse set of data sources. Data sources can include on-prem or cloud databases, warehouses, and data lakes. The common denominator in all data fabric applications is the use of a unified information architecture, which provides a holistic view of operational and analytical data for better decision-making. As a unifying management layer, data fabric provides a flexible, secure, and intelligent solution for integrating and managing disparate data sources. The goal of a data fabric is to establish a unified data layer that hides the technical intricacies and variety of the data sources it encompasses.  

Data Fabric Architecture

It is an architectural approach that simplifies data access in an organization and facilitates self-service data consumption. Ultimately, this architecture facilitates the automation of data discovery, governance, and consumption through integrated end-to-end data management capabilities. Irrespective of the target audience and mission statement, a data fabric delivers the data needed for better decision-making.

Principles of Data Fabric

A comparison of the two approaches across key parameters:

  • Data Ownership: Data Mesh is decentralized; Data Fabric is agnostic to ownership.
  • Focus: Data Mesh emphasizes high data quality and ownership based on expertise; Data Fabric emphasizes accessibility and integration of data sources.
  • Architecture: Data Mesh is domain-centric and customized as per organizational needs and structure; Data Fabric is agnostic to internal design, overlaying an intelligent semantic layer on top of existing diverse data sources.
  • Scalability: Data Mesh is designed to scale horizontally, with each team having its own scalable data product stack; Data Fabric supports a unified layer across the enterprise, with the scalability of the managed semantic layer abstracted away in the implementation.

Both data mesh and data fabric aim to address the challenge of managing data in a decentralized and scalable manner, and both are worth considering as potential solutions. The choice between the two will depend on the specific needs of the organization, such as the level of data ownership, the focus on governance or accessibility, and the desired architecture.

Enhancing Data Management: The Synergy of Data Mesh and Data Fabric

A common misunderstanding is that data mesh and data fabric are mutually exclusive, i.e., that only one of the two can exist. Fortunately, that is not the case. Data mesh and data fabric can be architected to complement each other so that the strengths of both approaches are brought to the fore to the advantage of the organization.

Organizations can implement data fabric as a semantic overlay to access data from diverse data sources while using data mesh principles to manage and govern distributed data creation at a more granular level. Thus, data mesh can be the architecture for developing data products and act as the data source, while data fabric can be the architecture for the data platform that seamlessly integrates the different data products from the mesh and makes them easily accessible within the organization. The combination of a data mesh and a data fabric can provide a flexible and scalable data management solution that balances accessibility and governance, enabling organizations to unlock the full potential of their data.

Data mesh and data fabric can complement each other by addressing different aspects of data management and working together to provide a comprehensive and effective data management solution.

In conclusion, both data mesh and data fabric have their own strengths but are complementary and thus can coexist synergistically. The choice between the two depends on the specific needs and goals of the organization. It’s important to carefully evaluate the trade-offs and consider the impact on the culture and operations of the organization before making a decision.

What is Contract Pull Through?

The pharma sales team engages in contracts with brands, hospitals, clinics, infusion centers, doctor offices, IDNs, ONA, GPOs, and other networks. These networks are often referred to as pharma accounts, and contracts are lined up to improve overall sales, market share, and profitability. Contracts with these accounts are based on various factors such as rebate percentage, formulary tiers, and performance-based fees.

Pharma’s gross contracted sales represent a large, multi-billion-dollar opportunity that is growing at a rapid pace. This is a big opportunity for the commercial team to boost sales with these accounts. Pharma companies analyze data on contracts, rebates, terms, and tiers to see how accounts perform. This enables them to identify accounts that are doing poorly and the ones that are doing exceptionally well.

We may define Contract Pull Through as the analysis of –

  1. How much an account has purchased (sales) by contract program and by brand;
  2. How much they’ve received in discounts (rebates, chargebacks);
  3. How they are doing against their baselines; and
  4. Where are the opportunities to buy more and save more?

Why does pharma need to focus on Contract Pull Through?

For large and mid-size pharma companies, Contract Pull Through from these accounts is a top-of-mind problem, as even a 2-5% increase in effectiveness would mean savings to the tune of millions of dollars.

Based on our experience working with a client’s market access team, we realized that the organization had delegated the task of supporting field pull-through entirely to its payer account managers. These executives reported spending 75% of their time creating and pulling reports, time that could have been better spent with customers or in more strategic dialogue with field members.

The key stakeholders for Contract Pull Through are the field team members i.e., Business Engagement Managers (BEMs) and Healthcare Market Directors (HDs), who need to focus on Contract Pull Through for –

  1. Generating contract awareness and pull through for major providers in their ecosystems
  2. Creating awareness around the contracts/terms offered by pharma firms for the products
  3. Engaging with customers to show their historical performance and current performance

Some specific Contract Pull Through use cases the pharma account team focuses on include:

  • A portfolio purchasing summary of the account, which enables an understanding of how much volume the account has bought and the savings received from pharma products
  • Product contribution at an account level, which enables the understanding of how much volume is coming from each pharma product
  • Contract eligibility of an account, to understand which contracts are available at a certain account

Other business insights that Contract Pull Through data may help pharma companies are around –

  • How is the account performing as compared to others, in their ecosystem/region?
  • Which is the account’s dominant payer, and how does that payer work at a national level?
  • How much can this account purchase to reach the next tier?

Use case deep dive: Portfolio purchasing summary of an account

A portfolio purchasing summary of the account is one of the critical use cases handled through Contract Pull Through data. For a particular account within a regional ecosystem, across all contract periods or for a particular period, the Contract Pull Through team is looking to understand –

  • What have gross sales been? Have they gone up or down compared to the previous quarter?
  • What part of total gross sales is contracted vs. non-contracted? What part of sales is attributed to specialty pharmacy? What percentage of account savings is attributed to contracting?

Another key insight for the portfolio purchasing summary is the product mix/segment mix (GPO, 340B, NCCN, etc.) across the portfolio: which product or segment contribution has gone up or down over previous periods, and how it fares against its anticipated baseline numbers.

These insights help field teams understand account purchase volume, savings from contracted products, account performance compared to expectations, and identify opportunities for cost-saving purchases.
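A minimal sketch of how such a summary could be computed from account-level sales lines is shown below; the column names and figures are illustrative assumptions, not client data.

```python
import pandas as pd

# Illustrative account-level sales lines; column names are assumptions.
sales = pd.DataFrame({
    "account_id":   ["A1", "A1", "A1", "A1"],
    "quarter":      ["2023Q3", "2023Q3", "2023Q4", "2023Q4"],
    "gross_sales":  [120_000, 80_000, 150_000, 60_000],
    "contracted":   [True, False, True, False],
    "rebates_paid": [12_000, 0, 18_000, 0],
})

# Zero out non-contracted lines so contracted sales can be summed directly.
sales["contracted_sales"] = sales["gross_sales"].where(sales["contracted"], 0)

summary = sales.groupby("quarter").agg(
    gross_sales=("gross_sales", "sum"),
    contracted_sales=("contracted_sales", "sum"),
    savings=("rebates_paid", "sum"),
)
summary["contracted_share"] = summary["contracted_sales"] / summary["gross_sales"]
summary["qoq_gross_change"] = summary["gross_sales"].pct_change()
print(summary)
```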

How to generate the pull through business insights from data?

To arrive at these insights, the key data elements that must be looked into are contracts, chargebacks, 867 Sales, non-contracted sales, rebates, terms and tiers, account hierarchy, and zip-to-territory mapping data.

These data sets coming from different source systems are ingested, assimilated, and presented as output for improved decision-making –

  • Firstly, the data is ingested into cloud-based or on-premises databases using RPA tools like UiPath.
  • The ingested data then goes through a set of data quality checks to ensure it is of the expected quality (a simplified sketch of this step follows below).
  • The clean dataset is then transformed through the ETL process, where complex calculations based on business-defined rules are applied, and
  • Presented through BI reports providing visual and tabular data on gross sales, savings, performance, opportunities for an account, and other pull-through insights.
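A simplified sketch of the quality-check step referenced in the list above might look like the following; the rules and column names are illustrative assumptions.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list:
    """Return a list of human-readable data quality issues; rules are illustrative."""
    issues = []
    # Completeness: key identifiers must never be missing.
    for col in ("account_id", "contract_id"):
        missing = df[col].isna().sum()
        if missing:
            issues.append(f"{missing} rows missing {col}")
    # Validity: sales amounts should not be negative.
    if (df["gross_sales"] < 0).any():
        issues.append("negative gross_sales values found")
    # Uniqueness: one row per account / contract / period.
    dupes = df.duplicated(subset=["account_id", "contract_id", "period"]).sum()
    if dupes:
        issues.append(f"{dupes} duplicate account/contract/period rows")
    return issues

# Illustrative ingested extract.
raw = pd.DataFrame({
    "account_id":  ["A1", "A2", None],
    "contract_id": ["C9", "C9", "C7"],
    "period":      ["2024Q1", "2024Q1", "2024Q1"],
    "gross_sales": [50_000, -10, 7_500],
})
for issue in run_quality_checks(raw):
    print("DQ issue:", issue)
```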

Unlocking Insights with Incedo’s Data and analytics services on AWS environment

With broad and relevant expertise in data and analytics solutions on the AWS cloud, Incedo offers data and analytics services spanning database transformation with data ingestion, data preparation, modernization, archival, integration with data warehouses, formation of data lakes in AWS, real-time and operational analytics, business analytics, visualization, and data governance. These services provide a holistic view of specific accounts in regional ecosystems, a breakdown of sales components, and product/segment analysis. This empowers field teams to optimize their strategies and enhance Contract Pull Through effectiveness in the pharmaceutical industry.

Amazon SageMaker provides an effective foundation for building an efficient data processing pipeline. Data is collected from various sources including contracts, chargebacks, 867 Sales, non-contracted sales, rebates, terms and tiers, account hierarchy, and zip-to-territory mapping data. Once the data is ingested, SageMaker supports a set of data quality checks to verify that the data meets the expected quality standards, guaranteeing data integrity. After the data’s accuracy is ensured, a transformation process applies complex calculations based on predefined rules.
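As a sketch of how such checks could be run as a managed job, the snippet below uses the SageMaker Python SDK's scikit-learn processor; the IAM role, S3 paths, container version, and script name are placeholders, and the actual pipeline may be organized differently.

```python
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor

# Assumed IAM role and S3 locations; replace with real values.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

processor = SKLearnProcessor(
    framework_version="1.2-1",      # assumed scikit-learn container version
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

# quality_checks.py would contain the pandas validation/transformation logic.
processor.run(
    code="quality_checks.py",
    inputs=[ProcessingInput(source="s3://example-bucket/raw/",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(source="/opt/ml/processing/output",
                              destination="s3://example-bucket/clean/")],
)
```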

Amazon QuickSight offers a powerful solution to present results through visually appealing reports and dashboards, employing Business Intelligence tools. Incedo’s strategic approach leveraged these capabilities to empower stakeholders in the pharmaceutical industry. This enables them to make informed, data-driven decisions and optimize their Contract Pull Through strategies effectively. With Amazon QuickSight, complex data is translated into comprehensible visual insights, facilitating better decision-making and ultimately enhancing the pharmaceutical industry’s operational efficiency.

Conclusion:

BEMs/ HDs pay significant attention to generating Contract Pull Through insights. Thus, large and mid-sized pharmaceutical companies should invest in a robust system to understand account performance, optimize rebates, and potentially save millions of dollars. This also aids in focusing on top accounts and renegotiating terms for underperforming ones.

Incedo Lighthouse™ with Self-Serve AI is a cloud-based solution that is creating significant business impact in commercial effectiveness for clients in the pharmaceutical industry. Self-serve means empowering business users with actionable intelligence for their business needs by leveraging the low-code AI paradigm. This reduces dependency on data scientists and engineers and lets business users iterate faster on actionable decisions and monitor their outcomes.

As internal and external enterprise data continues to grow in size, frequency, and variety, the classical challenges such as sharing information across business units, lack of a single source of truth, accountability, and quality issues (missing data, stale data, etc.) increase.

For IT teams owning diverse data sources, provisioning enterprise-scale data in the requisite format, quality, and frequency becomes an added workload. It also impedes meeting the ever-growing analytics needs of the various BU teams, each of which treats its own request as the priority. Think of the many dashboards floating around organizations, created at the behest of various BU teams: even if they are kept updated with great effort, it is still tough to extract the insights that help take direct action on critical issues and measure their impact on the ground. Different teams have different interaction patterns, workflows, and output requirements, making it very hard for IT to provide canned solutions in a dynamic business environment.

Self-service intelligence is therefore imperative for organizations to enable business users to make their critical decisions faster every day leveraging the true power of data.

Enablers of self-service AI platform – Incedo LighthouseTM

Our AWS cloud-native platform Incedo LighthouseTM, a next-generation, AI-powered Decision Automation platform, arms business executives and decision-makers with actionable insight generation and its assimilation into daily workflows. It is developed as a cloud-native solution leveraging several AWS services and tools that make the journey of executive decision-making highly efficient at scale. Key features of the platform include:

  • Customized workflow for each user role: Incedo LighthouseTM caters to different types of enterprise users based on their role and addresses their specific needs:
    • Business Analysts: Define the KPIs as business logic from the raw data, and define the inherent relationships present within various KPIs as a tree structure for identifying interconnected issues at a granular level.
    • Data Scientists: Develop, train, test, implement, monitor, and retrain the ML models specific to the enterprise use cases on the platform in an end-to-end model management workflow
    • Data Engineers: Identify data quality issues and define remediation, feature extraction, and serving using online analytical processing as a connected process on the platform
    • Business Executives: Consume the actionable insights (anomalies, root causes) auto-generated by the platform, define action recommendations, test the actions via controlled experiments, and push confirmed actions into implementation
  • Autonomous data and model pipelines: A common pain point for business users is the slow speed of data-to-insight delivery and action recommendation, which can take weeks even for simple questions asked by a CXO. To address this, the process of generating insights from raw big data and moving on to action recommendations via controlled experimentation has been made autonomous in Incedo LighthouseTM using combined data and model pipelines that are configurable in the hands of business users.
  • Integrable with external systems: Incedo LighthouseTM can be easily integrated with multiple Systems of Record (e.g. various DBs and cloud sources) and Systems of Execution (e.g. SFDC), based on client data source mapping.
  • Functional UX: The design of Incedo LighthouseTM is intuitive and easy to use. The workflows are structured and designed in a way that makes it commonsensical for users to click and navigate to the right features to supply inputs (e.g. drafting a KPI tree, publishing the trees, training the models, etc.) and consume the outputs (e.g. anomalies, customer cohorts, experimentation results, etc.). Visualization platforms such as Tableau and PowerBI are natively integrated with Incedo LighthouseTM thereby making it a one-stop shop for insights and actions.

Incedo LighthouseTM in Action: Pharmaceutical CRO use case:

In a recent deployment of Incedo LighthouseTM, the key users were the Commercial and Business Development teams of a leading pharma CRO whose customers are drug manufacturers. Their pain point revolved around low conversion rates, leading to lost revenue and added inefficiencies in the targeting process. A key reason was the wrong prioritization of leads from a conversion propensity and total lifetime value perspective, driven mainly by manual, human-judgment-driven, ad-hoc, static, rule-based identification of leads for the Business Development Associates (BDAs) to work on.

Specific challenges that came in the way of the application of data science for lead generation and targeting were:

  • The raw data related to prospects, which was the foundation for predictive lead generation modeling, sat in silos inside the client’s tech infrastructure. In the absence of a common platform to bring the data and models together, high-accuracy predictive lead generation models could not be developed.
  • Even in the few exceptional cases where the data was stitched together by hand and predictive models were built, the team found it difficult to keep the models updated in the absence of integrated data and model pipelines working in tandem.

To overcome these challenges, the Incedo LighthouseTM platform was deployed. The deployment of Incedo LighthouseTM in the AWS cloud environment not only brought real improvements in target conversions but also helped transform the BDAs’ workflow. By harnessing the power of data and AI, as well as essential AWS native services, the team achieved efficient deployments and sustained service improvements. Specifically, the platform was used to:

  • Combine the information from all data sources for a 360-degree customer view, enabling the BDAs to look at the bigger picture effortlessly. To do so effectively, Incedo LighthouseTM leveraged AWS Glue which provided a cost-effective, user-friendly data integration service. It helped in seamlessly connecting to various data sources, organizing data in a central catalog, and easily managing data pipeline tasks for loading data into a data lake.
  • Develop and deploy AI/ML predictive models for conversion propensity using the Data Science Workbench, which is part of the Incedo LighthouseTM platform, after developing the data engineering pipelines that create a ‘single version of the truth’ every time raw data is refreshed. This was done by leveraging the pre-built model accelerators, helping the BDAs sort prospects in descending order of conversion propensity, thereby maximizing the return on the time invested in developing them. The Data Science Workbench also helps operationalize the various ML models built in the process, while connecting model outputs to various KPI Trees and powering other custom visualizations. Using Amazon SageMaker Canvas, Incedo LighthouseTM enables machine learning model creation for non-technical users, offering access to pre-built models and enabling self-service insights, all while streamlining the delivery of compelling results without extensive technical expertise.
  • Deliver key insights in a targeted and attention-driving manner to enable BDAs to make the most of the information in a short span of time. Incedo LighthouseTM leverages Amazon QuickSight, a key element in delivering targeted insights, which provides well-designed dashboards, KPI Trees, and intuitive drill-downs to help BDAs and other users make the most of the information quickly. These tools allow leads to be ranked based on model-reported conversion propensity, time-based priority, and various custom filters such as geographies and areas of expertise. BDAs can drill into individual targets to understand deviations from actuals, review comments from previous BDAs, and decide on the next best actions. QuickSight offers cost-effective, scalable BI, interactive dashboards, and natural language queries for a comprehensive and efficient user experience. This resulted in an increased prospect conversion rate, with data-driven, AI-powered decisions disseminated to BDAs in a highly action-oriented way.

In the ever-evolving landscape of cloud computing, organizations strive to enhance operational efficiency, optimize costs, and deliver exceptional performance. One such standout player in the industry is “Incedo,” a pioneering force in the cloud domain. In this article, we delve into the comprehensive Cloud Operations capabilities particularly on AWS platform offered by Incedo and explore the diverse use cases that make them a frontrunner in the industry.

Understanding Cloud Operations

Cloud Operations is a crucial aspect of managing and maintaining cloud-based services, ensuring seamless performance, scalability, and reliability. Incedo, with its specialization in the AWS cloud computing platform, goes beyond conventional practices to provide a robust suite of services designed to streamline processes, enhance security, and drive innovation.

Key Cloud Operations Capabilities at Incedo

  1. Automated Infrastructure Management

    Incedo leverages advanced automation tools to manage and orchestrate infrastructure, minimizing manual interventions and optimizing resource utilization. Through automated scaling, provisioning, and configuration management, Incedo ensures a resilient and agile infrastructure.

    In combination with Auto Scaling Groups, Incedo leverages AWS CloudFormation to automate the provisioning and management of infrastructure. Through Infrastructure as Code (IaC), Incedo ensures the consistent deployment of resources, reducing the risk of manual errors and enhancing scalability. Templates define AWS resources, and changes are tracked and versioned, ensuring reproducibility and traceability.

  2. Cloud Resource Management

    Manually managing hundreds of thousands of compute instances across environments is a tremendous challenge. Incedo set out to resolve this problem for its customers by building a solution on AWS Systems Manager.

    Incedo utilizes OpsCenter from the AWS Systems Manager suite as a central location where operations engineers and IT professionals can view, investigate, and resolve operational issues related to any AWS resource. AWS Incident Manager helps operations teams prepare for incidents with automated response plans, whereas AWS Change Manager provides a central location for operators and engineers to request operational changes (patch management and system upgrades) to their IT infrastructure and configuration.

  3. Continuous Monitoring and Performance Optimization

    Incedo’s state-of-the-art monitoring solutions provide real-time insights into the performance of cloud resources. By utilizing predictive analytics-based cloud solutions, Incedo identifies potential bottlenecks and proactively optimizes workloads for peak efficiency.

    Amazon CloudWatch provides real-time monitoring of AWS resources. Alarms and events are configured to trigger automated responses, ensuring optimal performance and availability. With CloudWatch metrics, Incedo gains insight into resource utilization, enabling proactive optimization for improved efficiency.

  4. Security and Compliance

    Security is a top priority for Incedo. Their Cloud Operations team implements robust security measures, including encryption of data at rest as well as in transit, identity management, and access controls. Incedo ensures adherence to industry-specific compliance standards, instilling confidence in clients regarding the safety of their data.

    Incedo places a strong emphasis on security, utilizing AWS IAM to manage user access and permissions. IAM roles and policies are meticulously configured, ensuring the principle of least privilege. Incedo helps clients achieve and maintain compliance with industry standards by implementing security best practices within the AWS environment.

  5. Disaster Recovery and Business Continuity

    Incedo’s Cloud Operations extend to comprehensive disaster recovery and business continuity planning. With geographically distributed data centres and failover mechanisms, Incedo ensures minimal downtime in the face of unforeseen events.

    Incedo’s disaster recovery strategy involves leveraging AWS Backup for centralized backup management and AWS Elastic Disaster Recovery (CloudEndure) for seamless replication and failover. This combination ensures business continuity by minimizing downtime and data loss in the event of disruptions.

  6. Cost Optimization

    The Cloud Operations team at Incedo excels in cost management and optimization. Through effective budgeting, utilization tracking, and rightsizing of resources, Incedo helps clients achieve cost efficiencies without compromising on performance. 

    Incedo’s technical approach to cost optimization involves using AWS Cost Explorer to visualize, understand, and manage costs effectively (a minimal programmatic sketch follows after this list). AWS Trusted Advisor is employed to analyse an organization’s AWS environment and provide recommendations for cost optimization, performance improvement, security, and fault tolerance.

    This is another area where Incedo is getting ahead of the field by developing its in-house FinOps tools and solutions.
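As an illustration of the Cost Explorer piece referenced in the cost optimization item above, the sketch below pulls one month of unblended cost grouped by service with boto3; the dates are placeholders and Cost Explorer must be enabled in the account.

```python
import boto3

# Cost Explorer must be enabled in the account; dates below are illustrative.
ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print monthly unblended cost per AWS service, highest first.
for period in response["ResultsByTime"]:
    rows = [
        (g["Keys"][0], float(g["Metrics"]["UnblendedCost"]["Amount"]))
        for g in period["Groups"]
    ]
    for service, amount in sorted(rows, key=lambda r: r[1], reverse=True):
        print(f"{service:40s} ${amount:,.2f}")
```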

Use Cases Handled by Incedo

  1. Financial Services Scalability

    Incedo’s Cloud Operations have empowered numerous Financial Services businesses to scale effortlessly during peak seasons. Automated scaling ensures that resources align with fluctuating demand, providing a seamless experience for end customers.

    Incedo employs AWS Lambda for serverless computing to enhance scalability. By decoupling functions and executing code in response to events, Lambda allows Incedo to scale effortlessly during peak demand, ensuring a responsive and cost-effective solution.

  2. Data Analytics for Wealth Management Customers

    Incedo’s capabilities shine in handling complex Big Data workloads. By optimizing data storage, processing, and analytics, Incedo enables organizations to derive valuable insights from massive datasets efficiently.

    Incedo harnesses the power of Amazon Redshift for efficient Big Data analytics. With its fully managed, petabyte-scale data warehouse, Redshift enables Incedo to analyse vast datasets and derive actionable insights, empowering organizations to make data-driven decisions.

  3. DevOps Acceleration

    Incedo’s Cloud Operations facilitate DevOps practices, enabling organizations to achieve faster development cycles, continuous integration, and seamless delivery. Automation of deployment pipelines ensures rapid and reliable application releases while maintaining fine-grained access control and security using cross-account CI/CD pipelines.

    Incedo accelerates DevOps practices using AWS CodePipeline for continuous integration and delivery. Automated build, test, and deployment pipelines using AWS CodeCommit, CodeBuild and CodeDeploy streamline development workflows, enabling organizations to achieve faster release cycles and maintain application reliability.

  4. Global Content Delivery for a wealth management client
    Incedo leverages Amazon CloudFront, AWS’s content delivery network (CDN), for low-latency global content delivery. By caching content at edge locations, Incedo ensures reduced latency and enhanced user experiences, catering to a diverse, worldwide audience.
  5. Zero Ops based State of the Art operation centre
    Incedo’s capability of designing and deploying advanced serverless solutions in combination with global AWS services and containers demonstrates a proven, state-of-the-art framework designed with ZeroOps capabilities in mind.

Conclusion: Incedo Setting Industry Standards

In conclusion, Incedo stands as a beacon of excellence in Cloud Operations, offering a suite of capabilities that address the dynamic needs of modern businesses. With a focus on automation, security, and performance optimization, Incedo empowers organizations to navigate the complexities of the cloud landscape with confidence. As the cloud computing industry continues to evolve, Incedo sets the standard for operational excellence, making them a trusted partner for businesses embarking on their cloud journey.

In the fast-changing landscape of cloud computing, the efficient management of costs and resources has emerged as a paramount concern for businesses of all sizes, and the concern is shared by the majority of enterprise IT leaders. According to a 2020 survey of 750 senior business and IT professionals at large enterprises across 11 industries and 17 countries, only 37% of respondents say they are achieving the full value expected from their cloud investments[i]. Moreover, this is becoming a rising board-level issue: according to CloudZero’s State of Cloud Cost Intelligence 2022 report, for 73% of respondents cloud costs are a concern of the board or C-suite[ii].

As organizations expand their cloud presence, there is a growing need for strategies and practices that can help optimize financial operations in the cloud. This is precisely where Cloud FinOps, or Cloud Financial Operations, plays a pivotal role. Organizations that use FinOps effectively can reduce cloud costs by as much as 20 to 30 percent[iii].

Cloud FinOps encompasses a range of practices and principles aimed at optimizing and overseeing the financial aspects of cloud computing within an organization. Cloud FinOps is not merely about reducing costs; it is about achieving a delicate balance between controlling cloud expenses and maximizing the value that the cloud can deliver. Its primary focus is on cost control, ensuring cost-effectiveness, and aligning cloud expenditures with the organization’s broader business objectives.

One of its key attributes is the collaborative approach it fosters, uniting teams from finance, IT, and operations in the endeavour to collectively manage cloud expenses. This collaboration goes beyond cost management, ensuring that cloud expenditures are in harmony with the overarching business goals. In this blog, we talk about why Cloud FinOps matters and share the simple steps we took to set it up internally and help others do the same, breaking down how it can make cloud management easier and more efficient for everyone.

Why is Cloud FinOps needed?

By uniting diverse perspectives and skill sets, Cloud FinOps cultivates a synergistic environment that empowers organizations to confidently and efficiently navigate the financial complexities in the ever-changing landscape of cloud computing. Cloud FinOps is your reliable guide for a bunch of good reasons:

  1. Cost Control and Optimization: While cloud technology offers remarkable flexibility and scalability, it can pose a financial challenge if not handled with precision. Cloud FinOps empowers organizations with the strategies and tools needed to regain control over their cloud expenses, ensuring that resources are used efficiently and budgetary constraints are avoided. In essence, it is a methodical approach to enhance financial discipline and resource optimization in the cloud environment.
  2. Cost Visibility: Gaining a comprehensive understanding of cloud expenses can be a formidable challenge in the realm of cloud management. Cloud FinOps practices provide organizations with the tools and methods to meticulously track and analyze their cloud spending, offering a detailed, granular view of where financial resources are allocated. It is similar to having a precise financial roadmap for your cloud expenditures.
  3. Efficiency: Cloud FinOps focuses on enhancing the efficient use of cloud resources by optimizing the size of instances, capitalizing on reserved instances for cost savings, and exploring cost-effective pricing models. It is like fine-tuning the performance of your machinery to maximize productivity and minimize costs.
  4. Business Alignment: Ensuring that cloud expenditures directly contribute to the achievement of business objectives is of paramount importance. Cloud FinOps practices are instrumental in aligning cloud spending with the delivery of tangible value to the organization. In essence, it is about ensuring that every cloud investment is a purposeful step toward fulfilling your business goals, making financial decisions a strategic asset for your organization.
  5. Accountability: Cloud FinOps uses strategies like cost allocation and tagging to ensure that teams and individuals are responsible for how much cloud resources they use. This encourages a culture of financial prudence and careful spending.

The setup essentials for a FinOps practice

Setting up a Cloud FinOps practice means taking specific actions to make sure we spend our cloud budget wisely, manage costs, and make sure our cloud resources match our business goals. Below is a comprehensive guide that outlines the initial steps to get started:

  1. Objectives and Goals: Start by defining your organization’s financial objectives regarding cloud usage. Are you aiming to reduce expenses, enhance cost transparency, allocate costs to specific teams or projects, or pursue other goals? Your FinOps practice’s actions will be tailored to these objectives, so ensure they are clearly defined.
  2. Team Formation: Build a cross-functional team comprising members from finance and IT Operations. This team will oversee the implementation and management of the Cloud FinOps practice, analyzing spending trends and offering insights into optimizing costs. The selection of the right individuals for this team is critical.
  3.  Cloud Cost Visibility: Deploy tools and methodologies to gain visibility into your cloud expenditures. Utilize cloud cost management tools such as AWS Cost Explorer, Azure Cost Management, or Google Cloud Cost Management. AWS Trusted Advisor is especially valuable for rightsizing recommendations and other cost-related insights.
  4.  Tagging and Labelling: Develop a systematic tagging and labelling strategy to track resources and allocate costs to specific departments, projects, environments, or teams. Tags and labels are vital for precise cost attribution, so ensure you have an effective tagging mechanism in place.
  5. Budgeting and Forecasting: Establish cloud budgets and forecasts based on historical usage data. This allows you to set cost expectations and monitor your spending against these predefined targets.
  6.  Cost Allocation: Implement cost allocation methodologies that accurately distribute cloud costs to different departments or projects. This may involve creating custom scripts or employing third-party tools to streamline the process.
  7.  Cost Optimization: Identify opportunities for cost optimization, such as rightsizing instances, utilizing reserved instances, or leveraging serverless computing. Regularly assess and adjust your resources to maximize efficiency and minimize unnecessary expenses.
  8. Cost Monitoring and Alerts: Ensure vigilant cost monitoring by setting up alerts that notify you when expenses surpass predefined limits (a minimal sketch of such an alert appears after this list). This quick-response system helps address unexpected cost spikes promptly.
  9.  Education and Training: One of the key requirements to establish a robust Cloud FinOps practice is investing in the education and training of your employees. By providing targeted training, you empower your team to navigate the cloud landscape with financial acumen. Equipping your workforce with the knowledge and skills needed to make informed decisions contributes significantly to the success of your Cloud FinOps practice, fostering a culture of financial responsibility and efficiency.
  10. Monthly Reporting: Generate regular financial reports outlining cloud costs, allocations, and savings. These reports serve as a crucial tool for informed decision-making within your Cloud FinOps practice. Share these insights with relevant stakeholders to enhance transparency and foster strategic choices aligned with your organizational goals.
  11.  Continuous Improvement: It is imperative to consistently refine your Cloud FinOps approach. Stay vigilant for pricing changes from cloud providers and keep abreast of evolving technology trends. This commitment to continuous improvement ensures the ongoing optimization of your cloud financial operations, aligning them with the dynamic landscape of both technology and pricing structures.
  12.  Governance and Policies: Enforce governance policies to align cloud resource provisioning with organizational standards. This alignment not only fosters a structured and compliant approach but also lays the foundation for effective cost management within the Cloud FinOps framework.
  13.  Cost Accountability: Cultivate accountability by associating cloud spending with specific teams or individuals. This not only encourages a sense of ownership but also empowers teams to actively manage and optimize their cloud usage, fostering a more cost-conscious and efficient Cloud FinOps practice.
  14. External Assistance: In instances where internal assistance in Cloud FinOps is limited, consider external expertise, such as engaging  with a Cloud FinOps consulting firm. Their specialized knowledge can bridge the gap, offering invaluable insights, best practices, and hands-on guidance. This external collaboration ensures a smoother implementation of Cloud FinOps, even if your in-house proficiency is currently lacking.
  15. Feedback Loop: Establish a culture of continuous improvement. Gather feedback from teams and stakeholders to refine the Cloud FinOps practice. Remember, establishing a Cloud FinOps practice is an ongoing commitment. Regular monitoring, adaptation to organizational needs and dedication are key. It is a crucial element of cloud management, ensuring cost-effectiveness and alignment with business goals.
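As one concrete example of the alerting step mentioned in item 8 above, the sketch below creates a CloudWatch billing alarm on the EstimatedCharges metric; the threshold and SNS topic ARN are placeholders, billing metrics live only in us-east-1, and "Receive Billing Alerts" must be enabled first.

```python
import boto3

# Billing metrics are published only in us-east-1 and require
# "Receive Billing Alerts" to be enabled; the SNS topic ARN is a placeholder.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-estimated-charges-over-10k-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=6 * 60 * 60,                 # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=10_000.0,                 # alert when estimated charges exceed $10k
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:finops-alerts"],
)
```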

Incedo’s Cloud FinOps Success with AWS Optimization:

Incedo’s Cloud FinOps practice empowers clients to uncover hidden cost-saving opportunities in their cloud expenditures. Our innovative approach combines a swift 5-day Diagnostics process with the cutting-edge CloudXpert platform and powerful AWS tools like Cost Explorer, Trusted Advisor, and Performance Manager to guide clients through seamless cloud expense optimization.

In a recent success story, Incedo achieved a remarkable 20% cost reduction for a client by seamlessly transitioning to a serverless data ingestion architecture. This achievement shows our commitment to delivering real results and helping organizations get the most value from their cloud investments.

Conclusion: In today’s rapidly evolving cloud computing landscape, efficient cost and resource management are paramount for businesses. Cloud FinOps, or Cloud Financial Operations, is instrumental in optimizing cloud expenses and aligning them with overarching business objectives. It thrives on collaboration among finance, IT, and operations teams, ensuring seamless financial navigation.

To make Cloud FinOps work effectively in your organization, you need to establish clear objectives, assemble the right team, gain cost visibility, implement resource tagging and labeling, set budgets and forecasts, employ cost allocation strategies, and continuously optimize costs. These steps are essential for ensuring that Cloud FinOps becomes a valuable and impactful practice within your operations.

Source:

[i] – https://newsroom.accenture.com/news/most-companies-continue-to-struggle-to-realize-full-business-value-from-their-cloud-initiatives-accenture-report-finds.htm
[ii] – https://www.cloudzero.com/state-of-cloud-cost-intelligence/
[iii] – https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-finops-way-how-to-avoid-the-pitfalls-to-realizing-clouds-value

Complexity of decision making in the VUCA world

In today’s VUCA (Volatile, Uncertain, Complex and Ambiguous) business environment, the decision makers are increasingly required to make decisions at speed, in a dynamic and ever evolving uncertain environment. Contextual knowledge including cognizance of dynamic external factors is critical, and the decisions need to be made in an iterative manner employing ‘test & learn’ mindset. This can be effectively achieved through Decision Automation Solutions that leverage AI and ML to augment the expert human driven decision-making process.

Incedo LighthouseTM for Automated Decision Making

Incedo LighthouseTM, an AWS cloud-native platform, has been designed and developed from the ground up to automate the entire process of decision making. It has been developed with the following objectives:

  1. Distill signal from noise: The right problem areas to focus on are identified by organizing KPIs into a hierarchy from lagging to leading metrics. Autonomous Monitoring and Issue Detection algorithms are then applied to identify anomalies that need to be addressed in a targeted manner, effectively pinpointing the crucial problem areas the business should focus its energy on, using voluminous datasets that are updated at frequent intervals (typically daily).
  2. Leverage context: Intelligent Root Cause Analysis algorithms are applied to identify the underlying behavioral factors through specific micro-cohorts. This enables action recommendations that are tailored to specific cohorts as opposed to generic actions on broad segments.
  3. Impact feedback loop: Alternate actions are evaluated with controlled experiments to determine the most effective actions – and use that learning to iteratively improve outcomes from the decisions.

Incedo LighthouseTM is developed as a cloud-native solution leveraging several AWS services and tools that make the process of executive decision-making highly efficient and scalable.

Incedo LighthouseTM implements a powerful structure and workflow to make the data work for you via a virtuous problem-solving cycle, aiming to deliver consistent business improvements through the automation of a 6-step functional journey from Problem Structuring & Discovery through Performance Improvement to Impact Monitoring.

[Figure: The 6-step functional journey, from Problem Structuring & Discovery to Impact Monitoring]

Step 1: Problem Structuring – What is the Problem?

In this step, the overall business objective is converted into specific problem statement(s) based on Key Performance Indicators (KPIs) that are tracked at the CXO level. The KPI Tree construct is leveraged to systematically represent problem disaggregation. This enhances the decision-making process by enabling a deeper understanding of the issue and its associated variables. Incedo LighthouseTM provides features that aid the KPI decomposition step, such as a KPI repository and self-serve functionality for defining the structure of KPI trees and publishing them automatically with the latest raw data.
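A hypothetical, minimal way to represent such a KPI tree in code is sketched below; the node names, fields, and source mappings are illustrative and are not Incedo LighthouseTM's internal format.

```python
# A hypothetical, minimal KPI tree: each node has a name and either children
# (intermediate KPIs) or a raw-data source (leaf metrics). A "sign" of -1 marks
# KPIs that subtract from their parent (e.g. rebates reducing net revenue).
kpi_tree = {
    "name": "Net Revenue",
    "children": [
        {
            "name": "Gross Sales",
            "children": [
                {"name": "Contracted Sales", "source": "sales.contracted_amount"},
                {"name": "Non-Contracted Sales", "source": "sales.non_contracted_amount"},
            ],
        },
        {"name": "Rebates & Chargebacks", "source": "rebates.total_paid", "sign": -1},
    ],
}

def walk(node, depth=0):
    """Print the tree so a lagging KPI at the top can be traced down to leading metrics."""
    print("  " * depth + node["name"])
    for child in node.get("children", []):
        walk(child, depth + 1)

walk(kpi_tree)
```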

Step 2: Problem Discovery – Where is the problem?

Here the objective is to attribute anomalies observed in performance, i.e., significant deviations from the performance trend, to a set of customers / accounts / subscribers. Incedo LighthouseTM provides features that combine rule-based and anomaly detection algorithms to identify the most critical problem areas in the KPI trees, such as Time Series Anomaly Detection, Non-Time Series Anomaly Detection, Cohort Analyzer, and Automated Insights.
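One simple anomaly detection rule of the kind described above is a rolling z-score on a KPI time series, sketched below with synthetic data; the platform's actual algorithms are more sophisticated, so treat this only as an illustration of the idea.

```python
import numpy as np
import pandas as pd

# Illustrative daily KPI series with one injected spike.
rng = np.random.default_rng(42)
kpi = pd.Series(100 + rng.normal(0, 3, 90), index=pd.date_range("2024-01-01", periods=90))
kpi.iloc[60] = 140  # anomaly to be detected

# Rolling z-score: compare each point with the trailing 14-day mean and std.
window = 14
rolling_mean = kpi.rolling(window).mean().shift(1)
rolling_std = kpi.rolling(window).std().shift(1)
z_score = (kpi - rolling_mean) / rolling_std

anomalies = kpi[z_score.abs() > 3]
print(anomalies)
```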

Step 3: Root Cause Analysis – Why is there a problem?

Once the problem is discovered at the required level of granularity, identifying the root causes that drive business performance becomes critical. To automate root cause identification for every new or updated data set, the Root Cause Analysis must be packaged into a set of pre-defined and pre-coded model sets that are configurable and can be fine-tuned for specific use-case scenarios. Incedo LighthouseTM enables this using pre-packaged configurable model sets, whose output is presented in a format that is conducive to the next step, action recommendations. These model sets include Clustering, Segmentation, and Key Driver Analyzer.
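As an illustration of the clustering step, the sketch below forms micro-cohorts with k-means over a few behavioral features; the features and data are synthetic, and the platform's pre-packaged model sets may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative customer-level behavioral features (usage drop %, support tickets, tenure).
rng = np.random.default_rng(7)
features = np.column_stack([
    rng.uniform(0, 1, 300),    # usage_drop_pct
    rng.poisson(2, 300),       # support_tickets
    rng.uniform(1, 60, 300),   # tenure_months
])

# Standardize so each feature contributes comparably, then form micro-cohorts.
scaled = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)

# Cohort profiles (mean feature values) hint at the behavior driving each group.
for cohort in range(4):
    profile = features[kmeans.labels_ == cohort].mean(axis=0)
    print(f"cohort {cohort}: usage_drop={profile[0]:.2f}, tickets={profile[1]:.1f}, tenure={profile[2]:.0f} months")
```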

Step 4: Recommended Actions

However sophisticated the algorithms are, if the workflow stops at delivering insights from anomaly detection, root cause analysis, and so on, it would still be a lost cause. Why? Because the executives would not be supported with recommendations to take corrective, preventive, or corroborative actions based on the insights delivered. Incedo LighthouseTM incorporates the Action Recommendation module, which enables actions to be created at each cohort (customer micro-segment) level for a targeted corrective or improvement treatment based on its individual nuance. For each cohort, the module helps define and answer: what the action is, who the target of the action should be, when the action should be implemented, and what the goal of the action is in terms of a KPI improvement target.

Step 5: Experimentation

Experimentation means testing various actions on a smaller scale and selecting the optimal action variant that is likely to produce the highest impact when implemented at full scale. Incedo LighthouseTM has a Statistical Experimentation engine that supports business executives in making informed decisions on the actions to be undertaken. Key features of the module include: choice of the experiment type from options such as A/B testing and pre vs. post; finalization of the target population; and identification of the success metrics and their targets.
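For example, an A/B comparison of a candidate action against control can be evaluated with a two-sample t-test, as sketched below on synthetic data; the metric, sample sizes, and significance threshold are illustrative choices.

```python
import numpy as np
from scipy import stats

# Illustrative experiment: conversion value per account under control vs. new action.
rng = np.random.default_rng(1)
control = rng.normal(loc=100, scale=20, size=500)    # existing action
variant = rng.normal(loc=105, scale=20, size=500)    # candidate action

# Welch's two-sample t-test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)

lift = variant.mean() - control.mean()
print(f"observed lift: {lift:.2f}, t={t_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference is significant at the 5% level; consider full-scale rollout.")
else:
    print("No significant difference detected; keep experimenting.")
```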

Step 6: Impact Monitoring

After full-scale implementation of actions, through their seamless integration into the organization’s operating workflows, tracking their progress on an ongoing basis is critical for timely interventions. Our platform ensures that the actions are not merely implemented but are continuously monitored for their impact on key performance indicators and business outcomes.

A two-way handshake is required between Incedo LighthouseTM and the System of Execution (SOE) that is used as an operations management system to continually monitor the impact of the actions on ground. Incedo LighthouseTM covers the following activities in this step – Push Experiments/Actions, Monitor KPIs, and Experiment Summary.

Incedo LighthouseTM in AWS environment

The infrastructure hosting the Incedo LighthouseTM platform plays an important role in the overall impact the platform creates on business improvements through better, automated decision making. Where clients are already leveraging the AWS cloud, the Incedo LighthouseTM implementation takes advantage of the following AWS native services, which provide significant efficiencies for successive deployments and ongoing service to business users. A few of the AWS services prominently used by Incedo LighthouseTM are:

AWS Compute: AWS provides scalable and flexible compute resources for running applications and workloads in the cloud. AWS compute services allow companies to provision virtual servers, containers, and serverless functions based on the application’s requirements and to pay only for what they use, making them a cost-effective and scalable solution. Key compute services used in Incedo LighthouseTM are Amazon EC2 (Elastic Compute Cloud), AWS Lambda, Amazon ECS (Elastic Container Service), and Amazon EKS (Elastic Kubernetes Service).

Amazon SageMaker: Various ML models are the brains behind the modules in Incedo LighthouseTM, such as Anomaly Detection, Cohort Analyzer, Action Recommendation and Experimentation. All of these models are developed, trained, validated and deployed via Amazon SageMaker.
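A rough sketch of the SageMaker train-and-deploy flow with the SageMaker Python SDK is shown below, assuming a scikit-learn training script. The script name, S3 path, role ARN and instance types are placeholders, and this is not the platform's actual training pipeline.

```python
# Sketch of a SageMaker Python SDK train-and-deploy flow for a scikit-learn model.
# Script name, S3 path, role ARN and instance types are hypothetical placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

estimator = SKLearn(
    entry_point="train_anomaly_detector.py",   # hypothetical training script
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Train against data staged in S3 (placeholder path).
estimator.fit({"train": "s3://example-bucket/anomaly/train/"})

# Deploy the trained model behind a real-time endpoint that the platform can call.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```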

AWS Glue: The large volumes of frequently updated data in various source systems needed by the ML models are brought into the common analytical storage (data mart, data warehouse or data lake) using AWS Glue jobs that implement ETL or ELT logic along with value-add processes such as data quality checks and remediation.
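The skeleton below shows what such a Glue PySpark job can look like: read from the Glue Data Catalog, apply a basic data-quality filter, and write curated parquet to the analytical store. The database, table, column and S3 path names are hypothetical.

```python
# Skeleton of an AWS Glue PySpark job (illustrative; database, table, column and
# S3 path names are hypothetical placeholders).
import sys
from awsglue.transforms import Filter
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read the raw source table registered in the Glue Data Catalog.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_sales_db", table_name="hcp_engagements"
)

# Transform: a simple data-quality check dropping rows without an HCP identifier.
clean = Filter.apply(frame=raw, f=lambda row: row["hcp_id"] is not None)

# Load: write curated parquet into the common analytical storage.
glue_context.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/hcp_engagements/"},
    format="parquet",
)
job.commit()
```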

Incedo LighthouseTM boosts effectiveness and efficiency of executive decision making with the power of AI. As a horizontal cloud-native platform powered by AWS, it is the key to achieving consistent business improvements across domains and use cases.

Migration to the cloud has paved the way for heavily automating the deployment process. Teams rely on deployment automation not just for deploying regular updates to their applications, but for the underlying cloud infrastructure as well. There are various deployment tools available in the market to set up pipelines for almost everything we can think of. Faster delivery, less manual effort, and easier rollbacks are now driving the agenda for Zero Touch Deployments.

What does Zero Touch in Cloud mean?

Ideally, we want a cloud environment where workload AWS accounts, especially the production account, require no console login to design, implement and operate infrastructure and application resources. The team may have read access to view resources, but that is as far as it goes. This helps avoid human errors, such as forgetting to check a resource ARN before modifying or deleting the resource with an AWS CLI command, a mistake many developers make. Resolving such issues is the idea behind Zero Touch, and with pipelines and IaC (Infrastructure as Code) tools it becomes practical to apply.

[Picture (a): zero-touch cloud deployment – cross-account deployment roles]

In picture (a), the IAM role “Shared-Deployment-Role” in the “Shared Deployment” account assumes IAM roles in the workload accounts to deploy resources. The workload accounts could have additional roles that allow users to assume them and log in to a specific account. Users may have read-only access in the Prod account to view services and resources. The “Deployment-Role” in each workload account is created along with the initial infrastructure layer using an IaC tool (AWS CloudFormation/ Terraform/ AWS CDK) and pipelines (CodePipeline/ GitLab/ Jenkins/ BitBucket). AWS CodePipeline is configured in the Shared Deployment account and IaC templates are stored in an AWS CodeCommit repository for version control.
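The cross-account hop described above can be sketched as follows: the pipeline in the Shared Deployment account assumes the workload account’s “Deployment-Role” via STS and uses the temporary credentials for its deployment calls. The account ID, role name and session name are placeholders.

```python
# Sketch of the cross-account deployment hop: assume the workload account's
# "Deployment-Role" from the Shared Deployment account. Account ID, role name
# and session name are hypothetical placeholders.
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/Deployment-Role",  # workload (e.g. Prod) account
    RoleSessionName="shared-deployment-pipeline",
)["Credentials"]

# Subsequent deployment calls (CloudFormation here) use the temporary credentials.
cfn = boto3.client(
    "cloudformation",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(cfn.list_stacks()["StackSummaries"][:1])
```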

[Picture (b): zero-touch cloud deployment – application and infrastructure pipelines]

Picture (b) gives a high-level view of how application deployment and infrastructure deployment pipelines would look in the AWS Cloud.

Infrastructure Layer:

Using CloudFormation templates, CodeBuild and CodePipeline, we deploy resources such as (but not limited to) IAM roles for deployment, VPCs, subnets, Transit Gateways and attachments, and Route 53 hosted zone(s). These services and resources are necessary to deploy and launch the application. The resource ID/ARN values are stored in Parameter Store for consumption by the application’s IaC templates. Parameter Store helps in developing reusable IaC templates. How? By creating Parameter Store keys with the same names across all workload accounts and letting the infrastructure templates update the values dynamically (see the sketch below). Deployment of the infrastructure layer is generally managed by the organization’s IT team with approved AWS services and the organization’s cloud best practices.
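The sketch below shows both sides of that Parameter Store pattern with boto3: the infrastructure pipeline publishes a resource ID under a key name that is identical in every workload account, and any downstream template or script reads it back. The key name and VPC ID are hypothetical.

```python
# Sketch of the Parameter Store pattern: publish resource IDs under the same key
# names in every workload account, then read them back from application templates.
# The parameter name and VPC ID are hypothetical placeholders.
import boto3

ssm = boto3.client("ssm")

# Infrastructure pipeline: store the VPC ID created by the infra stack.
ssm.put_parameter(
    Name="/network/vpc-id",
    Value="vpc-0abc1234def567890",
    Type="String",
    Overwrite=True,
)

# Application side (in any workload account): look up the same key, keeping the
# application templates reusable across accounts.
vpc_id = ssm.get_parameter(Name="/network/vpc-id")["Parameter"]["Value"]
print(vpc_id)
```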

Application Layer:

Every application in an organization can differ in the services required to host it in the cloud. Application developers or DevOps teams can choose any one, or a combination, of approved CI/CD and IaC tools to design and host the application in workload accounts. Teams can leverage CodePipeline, CodeBuild and CodeDeploy in the Shared Deployment account to build and deploy applications in workload accounts by assuming the respective “Deployment” roles. Remember that the IT team has already created parameters holding the resource ID(s)/ARN(s) that application templates can consume, as shown in the sketch after this paragraph. Adopting an Agile model for developing, testing, and deploying application templates is encouraged, ensuring that only clean, tested code and templates go into Production.
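As one possible way for an application template to consume those shared parameters, the snippet below is a minimal AWS CDK (Python) stack that resolves the VPC ID from Parameter Store instead of hard-coding it. Construct names, the parameter key and the availability zone are hypothetical.

```python
# Minimal AWS CDK (Python) sketch: an application stack that resolves the shared
# "/network/vpc-id" parameter instead of hard-coding the VPC. Construct names,
# the parameter key and the availability zone are hypothetical placeholders.
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_ssm as ssm
from constructs import Construct

class AppStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Resolve the VPC ID published by the infrastructure pipeline.
        vpc_id = ssm.StringParameter.value_for_string_parameter(self, "/network/vpc-id")
        vpc = ec2.Vpc.from_vpc_attributes(
            self, "SharedVpc", vpc_id=vpc_id, availability_zones=["us-east-1a"]
        )

        # Application resources (e.g. a security group) attach to the shared VPC.
        ec2.SecurityGroup(self, "AppSg", vpc=vpc)

app = App()
AppStack(app, "AppStack")
app.synth()
```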

Conclusion:

There is no single “best” way of designing infrastructure and application deployments. Size, complexity, cost, and time determine what is optimal, and a Zero Touch Cloud Deployment strategy can comprise various permutations and combinations of infra and application components. Whatever the combination, the motive behind the approach is to minimize human errors and many sleepless nights.

DevOps is not a new term in the software world. However, it is certainly the magic wand that has really sped up digital transformation; in a sense, the entire SaaS products story is written with the help of DevOps. In today’s VUCA world, digital services aren’t simply nice to have but are a basic expectation of consumers and enterprise customers alike. In the whole digital transformation journey, DevOps clearly aligns with business goals, ensuring that the experiences delivered form a seamless and customer-delighting part of the entire journey.

Continuous delivery and continuous integration, supported by excellent tooling, have allowed companies to build entire products as individual chunks. These individual chunks of functionality, captured as user stories, can be developed and deployed into production in a day or two, not in weeks or months. That has really changed the game for product development.

The Product Led Approach (PLA) driven by DevOps has created a culture in which the final goal goes beyond delivering a fixed set of requirements on time and on budget. Scripts that can set up the entire deployment infrastructure, including software-defined networking, are managed just like the source code of the services running on them. Business-centric services that can evolve quickly and independently, combined with frequent and reliable releases, finally put the old dream of reusable and re-combinable components within companies’ reach.

How can DevOps help in Digital Transformation?

  • Maturity Model: DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity. This means evolving and improving products at a faster pace than organizations using conventional software processes. Enterprises are moving from large, monolithic applications to smaller, loosely coupled microservices. This enables them to act faster, better adapt to changing markets, and grow more effectively to achieve their business goals. Companies use DevOps continuous delivery practices that help teams take ownership of these services and release updates faster.
  • Break Organizational Silos to Collaborate: DevOps drives a collaborative thought process and a change in mindset. It helps organizations achieve digital transformation by changing the cultural mindset, breaking down silos, and paving the way for continuous innovation and agile experimentation. With a DevOps model, development and operations teams are no longer “isolated”. Instead, DevOps encourages better communication between the two teams and creates delivery channels that enable continuous integration, so software problems are identified and resolved, and fixes are deployed, faster.
  • Organize Processes around Customers: The increased speed allows companies to serve their customers better and compete more effectively in the marketplace. Processes can be seamlessly designed and finalized based on customers’ business needs, helping them achieve higher-value growth. When combined with rich digital telemetry from modern monitoring and observability tools, we end up with deep knowledge of our systems that helps reduce mean time to recovery (MTTR), allowing teams to truly take ownership of production services.
  • Build an Experimental Mindset: Experimentation is fundamental to success with today’s rapidly changing technology stack. DevOps creates the speed of experimentation at which the business can reliably implement ideas, launch them into the market, and start learning again.
  • DevOps and Cloud: Cloud is part of almost every digital transformation journey, and DevOps and cloud are completely synergetic. This powerful combination has empowered developers to respond to business needs in near real time; the latency of software development has become a thing of the past. The partnership of DevOps with cloud has given rise to a new term, commonly called ‘CloudOps’. The overall advancement of CloudOps has lowered the total cost of ownership for organizations, making a direct impact not only on top-line revenue and market share but also on innovation capabilities and response times. Cloud was created largely to tackle availability, scalability and elasticity goals based on dynamic demand; CloudOps uses the DevOps principles of CI/CD to realize best practices for high availability by refining and optimizing business processes.

About the Customer

BSI Financial is an industry leader providing mortgage solutions to consumers. In addition, it provides digital platforms to independent mortgage servicing companies to drive operational excellence and intelligent customer experience. BSI decided to pivot the enterprise to a “Cloud-first” and “AI-first” strategy, focus resources on customer engagement and innovation, and enable greater agility.

Platform modernization and operations transformation were identified as priorities to create capacity for growth, lower operating costs, and allow for scalability. This encompassed loan servicing, MSR portfolio due diligence, internal and external reporting, and mortgage operations.

What Incedo Did

BSI chose Incedo as a partner to re-imagine and engineer cloud-native SaaS platforms on the AWS cloud. This encompassed:

Impact

  • Promotion of brand and messaging for clients to end customers was simplified
  • Enabling a pay-per-use subscription model for customers
  • Seamless integration of third-party offerings and services
  • Monetization of internal IP and capabilities created over time
  • Increased self-service on the platform and reduced cost to serve for end customers
  • Enhanced customer experience from simpler onboarding and engagement for end customers
  • Increased cross-sell and upsell opportunities for end customers
