A Forrester report suggests that by 2030, banking will be invisible, connected, insights-driven, and purposeful. Trust will be key to building the industry of the future.

But how do banks and FinTechs enable an excellent customer experience (CX) that translates into "trust" when the onboarding experience itself is time-consuming and prone to error? The disengagement is clear from industry reports: 85% of corporates complained that the KYC experience was poor. Worse, 12% of corporate customers changed banks because of that poor experience.

Losing a customer is disastrous because the investment and effort that go into the process are immense. Both KYC and Customer Lifecycle Management (CLM) are expensive and time-consuming. For a high-risk client, a bank could employ hundreds of staff to procure, analyze, and validate documents. Thomson Reuters reports that, on average, banks use 307 employees for KYC and spend $40 million to onboard new clients. When a customer defects due to poor engagement, it is a double whammy for the bank: it loses a client and has to work harder to recover the cost of the investment made. Industry reports indicate that acquiring a new customer is five times as costly as retaining an existing one.

The same scenario applies to financial companies, which must be very careful about who they take on as clients. As a result, FinTechs struggle with a greater demand for customer-centricity while fending off competition from challengers. By investing in digital transformation initiatives like digital KYC, many challenger banks and FinTechs deliver exceptional CX outcomes and gain a foothold.

Today, commercial banks and FinTechs cannot afford to overlook regulatory measures, anti-terrorism and anti-money laundering (AML) standards, and legislation, violations of which incur hefty fines and lead to reputational damage. The essence of KYC is to create a robust, transparent, and up-to-date profile of the customer. Banks and FinTechs investigate the source of their wealth, ownership of accounts, and how they manage their assets. Scandals like Wirecard have a domino effect, so banks must flag inconsistencies in real time. As a result, banks and FinTechs have teamed up with digital transformation partners and are using emerging technologies such as AI, ML, and NLP to make their operations frictionless and customer-centric.

Decoding existing pain points and examining the need for a comprehensive data extraction tool to facilitate seamless KYC

Long time-to-revenue results in poor CX

Customer disengagement in the financial sector is common. Every year, financial companies lose revenue due to poor CX. The prime culprit for customer dissatisfaction is the prolonged time-to-revenue: high-risk clients average 90-120 days for KYC and onboarding.

The two pain points are poor data management and traditional, predominantly manual methods of extracting data from documents. Banking C-suite executives concede that poor data management, arising from silos and centralized architecture, is responsible for high time-to-revenue.

The rise of exhaust data 

Traditionally, KYC involved checks on data sources such as ownership documents, stakeholder documents, and the social security/identity checks of every corporate employee. But today, the KYC investigation is incomplete without verification of exhaust data. In the evolving business landscape, it is imperative that FinTechs and banks take exhaust data into account.

Emerging technologies like AI, ML, and NLP make onboarding and Client Lifecycle Management (CLM) transparent and robust. With an end-to-end CLM solution, banks and FinTechs can benefit from an API-first ecosystem that supports a managed-by-exception approach, which is ideal for medium- to low-risk clients. Data management tools that can extract data from complex documents and read them as humans do elevate the CX and save banks precious time and money.

Sheer volume of paperwork prolongs onboarding

The amount of paperwork accompanying the onboarding and KYC process is humongous. For business or institutional accounts, banks must verify the identity of every person on the payroll. Apart from social security and identity checks and screening for ultimate beneficial owners (UBOs) and politically exposed persons (PEPs), banks have to cross-examine documents related to the organization's structure. Verifying the ownership of the organization and checking the beneficiaries add to the complexity. After that comes corroborating the data with media checks and undertaking corporate analysis to develop a risk profile. With this kind of paperwork involved, KYC can take days.

However, as this is a low-complexity task, it is profitable to invest in AI. Instead of employing teams to extract and verify data, banks and FinTechs can use data extraction and comprehension tools, powered by AI and machine learning, to accelerate paperwork processing. These tools digitize documents and extract data from structured and unstructured sources, and as they evolve, they detect and learn from document patterns. That is the advantage ML and NLP have over legacy systems: learning from iterations.

Walking the tightrope between compliance and quick time-to-revenue

Over the years, the regulatory framework that America has adopted to mitigate financial crimes has become highly complex. There are multiple checks at multiple levels, and enterprise-wide compliance is expected. Running KYC engages both back- and front-office operations. With changing regulations, banks and FinTechs must ensure that KYC policies and processes are up to date. Ensuring that customers meet their KYC obligations across jurisdictions is time-consuming and prolonged if done manually. Hence, an AI-enabled tool is needed to speed up processes, provide a 360-degree view, and assess risk exposure.

In 2001, the Patriot Act came into existence to counter terrorist and money laundering activities. KYC became mandatory. In 2018, the U.S. Financial Crimes Enforcement Network (FinCEN) incorporated a new requirement for banks. They had to verify the “identity of natural persons of legal entity customers who own, control, and profit from companies when those organizations open accounts.” Hefty fines are levied if banks fail to execute due diligence as mandated.

If they rely on manual effort alone, banks and FinTechs will find it challenging to ensure good CX and quick time-to-revenue while adhering to regulations. To accelerate the pace of operations, they need tools that can parse data with greater accuracy and reliability than the human brain, and that can also learn from the process.

No time for perpetual KYC as banks struggle with basic KYC

For most low- and medium-risk customers, straight-through processing (STP) of data would be ideal: it reduces errors and time-to-revenue. Client Lifecycle Management is essential in today's business environment, as it involves ensuring customers remain compliant through all stages and events in their lifecycle with their financial institution. That includes combing through exhaust data and traditional data from time to time to identify gaps.

A powerful document extraction and comprehension tool is therefore no longer an option but a prime requirement.  

Document extraction and comprehension tool: how it works 

Document digitization: Intelligent Document Processing (IDP) begins with document digitization. Documents that are not in digital format are scanned.

OCR: The next step is to read the text, which is the job of OCR. Many organizations use multiple OCR engines for accuracy.

NLP: Recognition of the text follows the reading of the text. With NLP, words, sentences, and paragraphs are given meaning. NLP applies techniques such as sentiment analysis and part-of-speech tagging, making it easier to draw relationships.

Classification of documents: Manual categorization of documents is another lengthy process, tackled by IDP's classification engine. Here, machine learning (ML) models are employed to recognize the kinds of documents and feed them to the system.

Extraction: The penultimate step in IDP is data extraction. It consists of labeling all expected information within a document and extracting specific data elements like dates, names, numbers, etc.

Data validation: Once the data has been extracted, it is consolidated, and pre-defined, AI-based validation rules check for accuracy and flag errors, improving the quality of the extracted data.

Integration/Release: Once the data has been validated, the documents and images are exported to business processes or workflows.
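
To make the stages above concrete, here is a minimal sketch of how they might be chained together in Python. It assumes pytesseract for OCR and spaCy for NLP purely as illustrations, and the classification and validation rules are made-up placeholders; an enterprise IDP tool would replace each step with far more robust, trained components.

```python
# Minimal sketch of an IDP-style pipeline: OCR -> NLP -> classify -> extract -> validate.
import re
import pytesseract            # OCR wrapper around Tesseract (illustrative choice)
import spacy                  # NLP: tokenization, POS tagging, named entities
from PIL import Image

nlp = spacy.load("en_core_web_sm")

def digitize(path: str) -> str:
    """Steps 1-2: read a scanned document image and extract its raw text with OCR."""
    return pytesseract.image_to_string(Image.open(path))

def classify(text: str) -> str:
    """Step 4: naive keyword-based classification; trained ML models would replace this."""
    return "tax_filing" if "Form 1040" in text else "identity_document"

def extract(text: str) -> dict:
    """Step 5: pull out specific data elements such as dates, names, and numbers."""
    doc = nlp(text)
    return {
        "dates": [ent.text for ent in doc.ents if ent.label_ == "DATE"],
        "names": [ent.text for ent in doc.ents if ent.label_ == "PERSON"],
        "ssn":   re.findall(r"\d{3}-\d{2}-\d{4}", text),
    }

def validate(record: dict) -> list:
    """Step 6: pre-defined validation rules flag errors before release."""
    errors = []
    if not record["names"]:
        errors.append("no customer name found")
    if len(record["ssn"]) > 1:
        errors.append("multiple SSN-like patterns detected")
    return errors

if __name__ == "__main__":
    text = digitize("kyc_document.png")        # hypothetical scanned KYC document
    record = extract(text)
    record["doc_type"] = classify(text)
    print(record, validate(record))            # step 7: hand off to downstream workflows
```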

The future is automation!

An enriched customer experience begins with automation. To win customer trust, commercial banks and FinTechs must ensure regulatory compliance, improve CX, reduce costs by incorporating AI and ML, and ensure a swifter onboarding process. In the future, banks and FinTechs that advance their digital transformation initiatives and enable faster, smoother onboarding and customer lifecycle management will facilitate deeper customer engagement and gain an edge. Others will struggle in an unrelenting business landscape.

True, there is no single standard for KYC in the banking and FinTech industry; the industry is as varied as the number of players, with challengers and start-ups coexisting alongside decades-old financial institutions. However, there is no question that data-driven KYC powered by AI and ML brings greater efficiency and drives customer satisfaction.

A tool like Magic DeepSight™ is a one-stop solution for comprehensive data extraction, transformation, and delivery from a wide range of unstructured data sources. Going beyond data extraction, Magic DeepSight™ leverages AI, ML, and NLP technologies to drive exceptional results for banks and FinTechs. It is a complete solution, integrating with other technologies such as APIs, RPA, and smart contracts to ensure frictionless KYC and onboarding. That is what modern banks and FinTechs need.

Burdened by silos and big and bulky infrastructure, the financial services sector seeks a change that brings agility and competitiveness. Even smaller financial firms are dictated by a need to cut costs and stand out. 

“The widespread, sudden disruptions caused by the COVID situation have highlighted the value of having as agile and adaptable a cloud infrastructure as you can — especially as we see companies around the world expedite investments in the cloud to enable faster change in moments of uncertainty and disruption like we faced in 2020.” Daniel Newman 

Embracing cloud in 2021

The pandemic has been the meanest disrupter of the decade. Many banks went into crisis mode and were forced to rethink their options and scale up to achieve greater levels of digital transformation. How quickly they were able to scale up to meet customers' demands became a critical differentiator in the new normal.

With technology stacks evolving at lightning speed and application architectures being replaced by private, public, hybrid, or multi-cloud deployments, the financial services sector can no longer resist the lure of the cloud. Cloud has become synonymous with efficiency, customer-centricity, and scalability.

Moreover, most financial institutions have realized that the ROI on cloud investment is phenomenal, with enormous returns possible within five years. As a result, financial firms' investment in the cloud market is expected to grow at a CAGR of 24.4% to $29.47 billion by 2021. The critical levers for this growth are business agility, market focus, and customer management.

Unfortunately, while cloud adoption seems inevitable, many financial industry businesses are still grappling with the idea and wondering how to go about it efficiently. The smaller firms are relative newcomers to cloud adoption. The industry has been so heavily regulated that privacy concerns and fear of data leaks almost prevented financial institutions from moving to the cloud. The most significant need is trust and reliability, as migration to the cloud involves transferring highly secure and protected data. Therefore, firms need a partner with expertise in the financial services industry to plan and execute a transition to the cloud in the most secure and seamless manner possible.

Identifying your organization’s cloud maturity level     

The first step towards an efficient move to the cloud is identifying your organization's cloud maturity level. A maturity and adoption assessment is essential, as there are benefits and risks with short- and long-term impacts. Rushing headlong into uncharted waters will not serve the purpose. Establishing the cloud maturity stage accelerates the firm's cloud journey by dramatically reducing the risks of the migration process and sets the right expectations to align organizational goals accordingly.

Presented below are the maturity levels, progressing from provisional to cloud optimized. Magic FinServ uses these stages to assess a firm's existing cloud state and then outlines a comprehensive roadmap that is entirely in sync with the firm's overall business strategy.

STAGE 1: PROVISIONAL

Provisional is the beginner stage. At this stage, the organization relies mainly on big and bulky infrastructure hosted internally. There is little or no flexibility and agility. At most, the organization has two or three data centers spread across a country or spanning a few continents. The lines of business (LOBs) are hard hit, as there is no flexibility or interoperability. A siloed culture is also a significant deterrent in the decision-making process.

The process for application development ranges from waterfall to basic forms of agile. The monolithic/three-tier architecture hinders flexibility in the applications themselves. The hardware platforms are typically a mix of proprietary and open UNIX variants (HP-UX, Solaris, Linux, etc.) and Windows.

There is a great deal of chaos in the provisional stage. Here the critical requirement is assessing and analyzing the business environment to develop an outline first. The need is to ensure that the organization gains confidence and realizes what it needs for fruitful cloud implementation. There should be a strong sense of ownership and direction to lead the organization into the cloud, away from the siloed culture. The enterprise should also develop insights on how they will further their cloud journey.

STAGE 2: VIRTUALIZATION 

In this next stage of the cloud maturity model, server virtualization is heavily deployed across the board. Though here again, the infrastructure is hosted internally, there is increasing reliance on the public cloud. 

The primary challenges that organizations face in this stage of cloud readiness are related to proprietary virtualization costs. LOBs may consider accelerating movement to Linux-based virtualization running on commodity servers to stay cost-competitive. However, despite the best efforts, system administration skills and costs associated with migration remain a significant bottleneck.

STAGE 3: CLOUD READY 

At this significant cloud adoption stage, applications are prepared for a cloud environment, public or private, as part of a portfolio rationalization exercise.

The cloud migration approaches are primarily of three types:

  • Rehosting: This is the most straightforward approach to cloud migration and, as the name implies, consists of lifting and shifting applications and virtual machines from the existing environment to the public cloud. When a lift-and-shift approach is employed, businesses are assured of minimum disruption, lower upfront cost, and quick turnaround time (it is the fastest cloud migration approach). But there are drawbacks as well: the organization gains no experience building for the cloud, and performance is not enhanced because the code does not change; it is only moved from the data center to the cloud.
  • Replatforming: This optimizes the lift and shift, or moves workloads from one cloud to another. Apart from what is done in a standard lift-and-shift, it involves optimization of the operating system (OS), changes to APIs, and middleware upgrades.
  • Refactoring/Replacing: Here, the primary need is to make the product better, so developers re-architect legacy systems and build cloud-native systems from scratch.

The typical concerns at this cloud adoption stage are quantitative, such as the economics of infrastructure costs, developer/admin training, and interoperability costs. Firms also want to know the ROI and when the investment will finally break even.

At this stage, an analysis of the organization’s risk appetite is carried out. With the help of a clear-cut strategy, firms can stay ahead of the competition as well. 

STAGE 4: CLOUD OPTIMIZED

Enterprises in this stage of cloud adoption realize that cloud-based delivery of IT services (applications, servers, storage, or application stacks for developers) will be their end objective. They have the advantage of rapidly maturing cloud-based delivery models (IaaS and SaaS) and are increasingly deploying cloud-native architecture strategies and designs across critical technical domains.

In firms at this level of maturity, cloud-native ways of developing applications are the de facto standard. As cloud-native applications need to be architected, designed, developed, packaged, delivered, and managed based on a deep understanding of cloud computing frameworks, the need is for optimization throughout the ecosystem. Applications are designed for scalability, resiliency, and incremental enhancement from the get-go. Depending on the application, supporting tenets include IaaS deployment and management and container orchestration alongside cloud DevOps.

Conclusion

Cloud adoption has brought the immense benefits of reduced capex, lower complexity in IT management, and improved security and agility across firms. The financial services sector has also increasingly adopted the cloud. Despite initial apprehensions about security and data breaches, an overwhelming 92% of banks are either already making effective use of the cloud or planning to make further investments in 2021/22, as evident from the Culture of Innovation Index report recently published by ACI Worldwide and Ovum.

While cloud adoption is the new norm, doing it effectively starts with identifying where the firm is currently and how long the journey is to be ‘cloud-native.’ 

Magic FinServ’s view of Cloud Adoption for Financial Firms

Magic FinServ understands the importance of a practical cloud roadmap. It strategizes and enables firms to understand what they need. We are committed to finding the right fit for each financial firm's business.

In recent times, the preference has been for a multi-vendor hybrid cloud strategy. With our cloud assessment and remediation services tailored specifically for financial institutions, we bring a thorough understanding of the specialized needs of the capital market. Our team comprises cloud architects with capital-market domain expertise who assess, design, build, and migrate cloud solutions tailored for capital market players, in full compliance with the industry's complex regulatory requirements.

At Magic FinServ, the journey begins with assessing maturity in terms of technical and non-technical capabilities. Magic has developed a comprehensive 128-point assessment that measures the critical aspects of your organization's cloud and organizational readiness. We understand the operational, security, and confidentiality demands of the buy-side industry and advise your firm on the best course of action.

Magic FinServ helps demystify the cloud migration journey for firms and then continually improves the environment's stability with an advanced cloud DevOps offering, including SecDevOps. Our highly lauded 24/7 production support is unique in that it is based on adhering to SLAs at each stage of the journey. The SLAs are met across the solution, not just one area, and proper reporting is done to prevent compliance-related issues. To explore how your organization can realize optimum cloud benefits across the various stages of the cloud adoption journey, reach out to us at mail@magicfinserv.com or Contact Us.

Ingesting Unstructured Data into Other Platforms

Industry-specific products and platforms, such as ERPs for specific functions and processes, have contributed immensely to enhancing efficiency and productivity. SI partners and end users have focused on integrating these platforms with existing workflows through a combination of customizing/configuring the platforms and re-engineering existing workflows. Data onboarding is a critical activity; however, it has been restricted to integrating the platforms with the existing ecosystem. A key element that is very often ignored is integrating unstructured data sources into the data onboarding process.

Most enterprise-grade products and platforms require a comprehensive utility that can extract and process a wide set of unstructured documents and data sources and ingest the output into a defined set of fields spread across several internal and third-party applications on behalf of their clients. You are likely extracting and ingesting this data manually today, but an automated utility could be a key differentiator that reduces the time, effort, and errors in this extraction process.

Customers have often equated the use of OCR technologies with a solution to these problems; however, OCR suffers from quality and efficiency issues and therefore still requires manual effort. More importantly, OCR extracts the entire document rather than just the relevant data elements, adding significant noise to the process. And finally, the task of ingesting this data into the relevant fields in the applications/platforms is still manual.

When it comes to widely used and “customizable” case management platforms for Fincrime applications, CRM platforms, or client on-boarding/KYC platforms, there is a vast universe of unstructured data that requires processing outside of the platform in order for the workflow to be useful. Automating manual extraction of critical data elements from unstructured sources with the help of an intelligent data ingestion utility enables users to repurpose critical resources tasked with repetitive offline data processing.

Your data ingestion utility can be a "bolt-on" or a simple API that is exposed to your platform. While the documents and data sets may vary, as long as there is a well-defined list of applications and fields to be populated, there is a tremendous opportunity to accelerate every facet of client lifecycle management. There are benefits both to a point solution that automates extraction of a well-defined document type/format and to a more complex, machine-learning-based utility for a widely varying format of the same document type.
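
As an illustration of the "simple API" option, the sketch below exposes a hypothetical extraction utility behind a single endpoint that accepts a document and returns only the critical data elements, mapped to target platform fields. Flask, the field mapping, and the helper function are assumptions made for the sketch, not a reference implementation.

```python
# Sketch of a "bolt-on" ingestion API: accept an unstructured document,
# return only the critical data elements mapped to the target platform's fields.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical mapping of extracted elements to fields in a case-management platform.
FIELD_MAP = {"legal_name": "party.name", "incorporation_date": "party.inc_date"}

def extract_critical_elements(raw_bytes: bytes) -> dict:
    """Placeholder for the OCR/NLP extraction engine discussed above."""
    return {"legal_name": "ACME Holdings Ltd", "incorporation_date": "2001-05-14"}

@app.route("/ingest", methods=["POST"])
def ingest():
    document = request.files["document"].read()
    elements = extract_critical_elements(document)
    # Return only the mapped fields so the platform ingests structured data, not noise.
    payload = {FIELD_MAP[k]: v for k, v in elements.items() if k in FIELD_MAP}
    return jsonify(payload)

if __name__ == "__main__":
    app.run(port=8080)
```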

Implementing Data Ingestion

An intelligent pre- and post-processing data ingestion utility can be implemented in four stages, each stage increasing in complexity and in the value extracted from your enterprise platform:

Stage 1 
  • Automate the extraction of standard templatized documents. This is beneficial for KYC and AML teams that are handling large volumes of standard identification documents or tax filings which do not vary significantly. 
Stage 2 
  • Manual identification and automated extraction of data elements. In this stage, end users of an enterprise platform can highlight and annotate critical data elements which an intelligent data extraction utility should be able to extract for ingestion into a target application or specified output format. 
Stage 3
  • Automated identification and extraction as a point solution for specific document types and formats.
Stage 4
  • Using stage 1-3 as a foundation, your platform may benefit from a generic automated utility which uses machine learning to fully automate extraction and increase flexibility of handling changing document formats. 

You may choose to trifurcate your unstructured document inputs into “simple, medium, and complex” tiers as you develop a cost-benefit analysis to test the outcomes of an automated extraction utility at each of the aforementioned stages. 

Key considerations for an effective Data Ingestion Utility:

  • Your partner should have the domain expertise to help identify the critical data elements that would be helpful to your business and end users 
  • Flexibility to handle new document types, add or subtract critical data elements and support your desired output formats in a cloud or on-premise environment of your choice
  • Scalability & Speed
  • Intelligent upfront classification of required documents that contain the critical data elements your end users are seeking
  • Thought leadership that supports you to consider the upstream and downstream connectivity of your business process

This blog is the third in our series on DLT infrastructure testing.

In the first blog, we covered all aspects of infrastructure testing for decentralized applications built on blockchain or distributed ledger platforms, along with the Magic FinServ approach. In the second blog, we addressed why customers must make infrastructure testing an integral part of the QA process.

In this third blog of the series, we address another issue of critical importance – automation. Automation is an essential requirement in any organization today when disruptive forces are sweeping across domains. And as a McKinsey report indicates – “Automation can transform testing and quality control because the increased capacity it provides allows a company to move from spot checks to 100 percent quality control, which reduces the error rate to nearly zero.” 

Infrastructure testing - a critical requirement

While the importance of infrastructure testing cannot be denied, four attributes make it extremely complicated from the tester's perspective: peer-to-peer (P2P) networking, consensus algorithms, role-based nodes along with the permissions for each node (only for private networks), and, lastly, state and transactional data consistency under high load along with the resiliency of nodes.

To know more about these in detail, you can check the links provided below, which lead to the first and second blogs in the series:

Infrastructure Testing for Decentralized Applications built on Blockchain or Distributed Ledger Platform

Why is Infrastructure Testing important for Decentralized Applications built on any Blockchain or DLT

From these blogs, it becomes evident that although infrastructure testing is an essential requirement for any decentralized application, it is also a time-consuming task. Most of the supported features of such applications require different configurations/arrangements of nodes, meaning a different network topology for each feature. It is quite possible that a feature was originally tested with a certain number of nodes, yet properly testing a fix or enhancement requires a different number of nodes from what was designed earlier.

Developing a comprehensive test strategy

As far as test strategies are concerned, the most often deployed one utilizes Docker-based containers to replicate different network topologies with minimal changes. However, defining Docker-based containers (a.k.a. Docker services) in varying numbers is also a highly time-consuming activity. Depending on the number of nodes, adding even a single new container to create a different network topology usually takes a couple of hours. It is not only tedious but also complicated.

One must also take the cloud into account. Most organizations now require infrastructure testing to be carried out on cloud platforms to mimic, as closely as possible, the environment they would be using in production. However, setting up one node on any existing cloud service can easily take two to three hours, even with automated ways to spin off machines. Therefore, to ensure quicker results, the option at hand is automation.
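
As a rough illustration of what such automation could look like, the sketch below uses the Docker SDK for Python to spin up an N-node topology programmatically instead of hand-editing service definitions. The image name, node roles, and environment variables are placeholders for whichever DLT node image a team actually runs.

```python
# Sketch: programmatically create a network topology of N node containers
# instead of hand-editing Docker service definitions for every variation.
import docker

NODE_IMAGE = "example/dlt-node:latest"   # placeholder image name

def create_topology(name: str, consensus_nodes: int, observer_nodes: int):
    client = docker.from_env()
    network = client.networks.create(f"{name}-net", driver="bridge")
    containers = []
    for i in range(consensus_nodes + observer_nodes):
        role = "consensus" if i < consensus_nodes else "observer"
        containers.append(client.containers.run(
            NODE_IMAGE,
            name=f"{name}-node-{i}",
            network=network.name,
            environment={"NODE_ROLE": role, "NODE_INDEX": str(i)},
            detach=True,
        ))
    return network, containers

def destroy_topology(network, containers):
    for c in containers:
        c.remove(force=True)   # stop and delete each node container
    network.remove()

if __name__ == "__main__":
    net, nodes = create_topology("qa-topology-1", consensus_nodes=4, observer_nodes=2)
    # ... run functional / non-functional suites against the topology here ...
    destroy_topology(net, nodes)
```

Parameterizing the node counts in this way is what makes it practical to stand up a fresh topology per feature or per fix rather than maintaining hand-written service files.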

Automating the untested – how to get started

Today, almost every organization/enterprise uses Agile methodology for product development and an automated way (with CI/CD) to create builds daily. Functional testing can be automated and integrated within CI/CD easily, but this is not so for non-functional testing such as infrastructure, performance, security, resiliency, and load testing. These are not easily integrated with CI/CD, and even when they are, they do not provide the kind of results organizations desire.

Manual non-functional testing, meanwhile, is tedious. Since frequent builds have to be tested (for non-functional areas like infrastructure), manually setting up a different network topology each time is not viable: it takes a lot of time and is highly error-prone. Non-functional testing of blockchain (other than infrastructure) operates at the node level rather than the network level; therefore, tests related to performance, security, and resiliency (all of which come under non-functional testing) are performed on standard network topologies. This indicates that infrastructure testing relates directly to network topologies, whereas the other non-functional testing processes mentioned earlier are affected only on a case-to-case basis.

For infrastructure testing, organizations must carry out the following activities to define the network topology:

  • Impact analysis of all changes related to the four significant factors listed earlier
  • Definition of network topologies for each scenario, if any of the four factors is impacted
  • Set up of nodes for all probable network topologies
  • Creation of network for each network topology
  • Execution of functional/non-functional testing on each network topology to ensure that all network topologies are working as per the acceptance criteria

Impact analysis of changes 

To define the required number of network topologies, organizations must first identify what changes are to be made and whether those changes impact the peer-to-peer (P2P) networking logic, consensus algorithm logic, permissioning handler logic, or data/transaction consistency logic. If there is an apparent impact, the organization must define the corresponding network topology. This is the most time-consuming task of all, as one has to understand all the changes.

Another critical task for organizations is to perform impact analysis for all changes and find out whether the four major factors have been impacted. The easiest way to handle this is to have developers register this information with meaningful keywords so that impact analysis can be automated. With proper automation in place, organizations can use impact analysis to determine whether existing network topologies can be reused or a new one has to be created.
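
A minimal sketch of how such keyword-based registration could drive automated impact analysis is shown below; the keyword-to-factor mapping and the change-log format are assumptions made for illustration.

```python
# Sketch: developers tag each change with keywords; a script maps keywords to the
# four infrastructure factors to decide whether new network topologies are needed.
IMPACT_KEYWORDS = {
    "p2p": "peer-to-peer networking",
    "gossip": "peer-to-peer networking",
    "consensus": "consensus algorithm",
    "raft": "consensus algorithm",
    "permission": "permissioning handler",
    "acl": "permissioning handler",
    "state": "data/transaction consistency",
    "ledger": "data/transaction consistency",
}

def impacted_factors(change_keywords) -> set:
    """Return the infrastructure factors touched by a change's keywords."""
    return {IMPACT_KEYWORDS[k] for k in change_keywords if k in IMPACT_KEYWORDS}

# Example change-log entries registered by developers (hypothetical).
changes = [
    {"id": "CHG-101", "keywords": ["consensus", "timeout"]},
    {"id": "CHG-102", "keywords": ["ui", "logging"]},
]

for change in changes:
    factors = impacted_factors(change["keywords"])
    verdict = "topology work required" if factors else "reuse existing topologies"
    print(change["id"], factors or "-", verdict)
```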

Defining network topologies: 

Once impact analysis is done and it is decided that new network topologies must be created to account for changes, then the next requirement is to define all network topologies.

For instance, suppose an organization reports an issue related to the functioning of nodes: whenever there is an even number of consensus nodes within the network, consensus seems to get stuck or takes longer than usual. To resolve the problem, developers work out the logic. If the QA network does not already have an even number of consensus nodes, the need is to either convert an existing node into a consensus node or add a new one to the network. Either way, the network topology will differ from the one that exists.

With proper automation in place, it is possible to keep a registry of all existing QA network topologies. Once the required network topology is fed in, the registry should indicate whether new nodes have to be created or whether an existing network can be utilized after modifying a number of nodes. Performing this task manually could take hours, and sometimes even days, if the organization has a long list of network topologies in its QA environments.
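
The sketch below shows one way such a registry could be queried to decide between reusing, modifying, or creating a topology; the registry structure is deliberately simplified.

```python
# Sketch: look up a required topology in a registry of existing QA networks
# and decide whether to reuse, modify, or build a new network.
EXISTING_TOPOLOGIES = [
    {"name": "qa-net-a", "consensus_nodes": 4, "observer_nodes": 2, "in_use": False},
    {"name": "qa-net-b", "consensus_nodes": 5, "observer_nodes": 1, "in_use": True},
]

def plan_for(required: dict) -> str:
    free = [t for t in EXISTING_TOPOLOGIES if not t["in_use"]]
    for topo in free:
        if (topo["consensus_nodes"], topo["observer_nodes"]) == (
                required["consensus_nodes"], required["observer_nodes"]):
            return f"reuse {topo['name']} as-is"
    if free:
        return f"modify {free[0]['name']} (add or convert nodes to match the requirement)"
    return "create a new network topology"

print(plan_for({"consensus_nodes": 6, "observer_nodes": 2}))
```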

Setting up nodes for required network topology: 

There are two possibilities here: either modify an existing node or create a new node to form the new network topology. Without automation, nodes will have to be set up manually in either case. This takes time and requires the engagement of someone who understands QA environments from an administrative perspective, which increases the time taken and creates a dependency on other groups to coordinate node setup.

Creation of Network Topology: 

After setting up the required number of nodes, a network is created based on the network initialization process. If multiple network topologies have to be tested with several scenarios, then for each network topology, the following activities have to be performed:

  • Cleaning all involved nodes, if existing network nodes are reused
  • Initialization of network
  • Allowing for stabilization of the nodes for all components/services
  • Execution of functional scenarios
  • Destroying the network to free the nodes

All the above activities have to be completed for each network topology, so without automation this consumes a lot of time and makes testing highly error-prone. Most of the time, network topologies use nodes that overlap with other network topologies; missing any of the activities outlined earlier will therefore result in inconsistency on the other running network. Experience suggests that cleaning the nodes is a highly error-prone activity within a shared environment of various network topologies. It becomes tough to determine why errors are occurring: whether they are actual bugs to be reported, or whether some nodes are now being used by two or more networks because clean-up has not been done correctly. Without proper automation, all these activities take significant time and raise false alarms for issues that have popped up due to human error.
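
One way to guarantee that none of these per-topology activities is skipped is to wrap them in an automated harness, as in the sketch below; the helper functions are placeholders for whatever node-management tooling a team actually uses.

```python
# Sketch: run the per-topology cycle (clean -> init -> stabilize -> test -> destroy)
# inside a try/finally harness so that cleanup and teardown are never skipped.
import time

def clean_nodes(nodes):
    """Wipe state left over from previous networks (placeholder)."""

def init_network(topology):
    """Run the platform's network initialization process (placeholder)."""

def nodes_stable(topology):
    """Poll all components/services for readiness (placeholder)."""
    return True

def run_functional_suite(topology):
    """Execute the functional scenarios against this topology (placeholder)."""

def destroy_network(topology):
    """Tear the network down and free its nodes (placeholder)."""

def test_topology(topology):
    clean_nodes(topology["nodes"])
    init_network(topology)
    while not nodes_stable(topology):        # allow components/services to stabilize
        time.sleep(10)
    try:
        run_functional_suite(topology)
    finally:
        destroy_network(topology)            # always free the nodes, even on failure
        clean_nodes(topology["nodes"])       # avoid inconsistencies on shared nodes

for topo in [{"name": "even-consensus-check", "nodes": ["n1", "n2", "n3", "n4"]}]:
    test_topology(topo)
```

The try/finally structure mirrors the point above: even if a scenario fails, the network is destroyed and its nodes cleaned, so they cannot silently pollute another topology that shares them.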

Execution of functional and non-functional tests: 

Functional tests must be executed without fail, whereas non-functional tests are always subject to the changes being made. Non-functional tests become essential if there is a performance improvement or a fix is required for a security vulnerability, and even for an exceptional fix that hurts performance.

Functional tests are implicitly covered in the network creation phase, and almost every organization focuses on (or gives priority to) automating functional testing. Non-functional testing has always been the lowest priority for most; however, it becomes very tedious if it has to be performed on multiple network topologies. It is rare to run non-functional testing for all network topologies, as it has very little dependency on them; most of the time, non-functional testing operates at the node level rather than depending on a particular network topology.

Conclusion

In its Hype Cycle for Blockchain Business report for 2019, Gartner predicts that within five to ten years, blockchain will have a transformational impact across industries. According to David Furlonger, distinguished research vice president at Gartner, permissioned ledgers in several key areas of banking and investment services will see increased focus. In light of the uptick in interest from banking and investment services CIOs seeking to improve decades-old operations and processes, automation is desirable for driving ROI and efficiency in blockchain adoption.

Automated testing enables the developers to easily and quickly check new apps and updates for errors, defects, and other weaknesses. Infrastructure testing is one such area that organizations must automate as soon as possible if they desire to build robust decentralized applications. 

Magic FinServ’s automated test methodology is unique, and we have the relevant expertise to drive automation for testing Blockchain Infrastructure. We have had success with several clients who built financial products on blockchain platforms. 

To explore automated testing for blockchain infrastructure, write to us at mail@magicfinserv.com 

According to a recent forecast by Gartner, “by 2025, the business value added by blockchain will grow to slightly more than $176 billion, then surge to exceed $3.1 trillion by 2030.” Right from the voting process to the transfer of data for mission-critical projects, blockchain-based technology would be an integral part of the social, economic, and political setup the world over. 

There are many exciting components and features that make it possible for blockchain platforms to provide a secure, decentralized architecture for activities ranging from processing transactions to storing immutable data. We have briefly discussed these in our earlier blog, which identified how the various services and components make infrastructure testing a matter of utmost significance, while at the same time testing the developer's or application team's core competence.

Considering its immense impact on all aspects of human life in the days to come, it has become essential that clients investing in blockchain ensure that the nature of transactions is inviolable.

To ensure this inviolability, the infrastructure of the blockchain must work seamlessly. Hence the need for infrastructure-testing of blockchain to verify if all the constituent elements are operating as desired. 

What comprises infrastructure testing for Blockchain/Distributed ledger platform 

In simple terms, infrastructure-testing of blockchain networks translates into verifying whether the end-to-end blockchain core network and its constituent elements are operating as desired. It is critical as it determines the reliability of a product, which depends entirely on nodes spread across the globe.

For decentralized applications built on blockchain or distributed ledger platforms, where by the nature of operations each constituent element is highly reliant on or linked with the others, any shortcoming or failure could jeopardize operations. Hence, to ensure continuity, reliability, and stability of services, infrastructure testing should be carried out with high focus.

Defining the constituent elements of a Blockchain
  • All distributed ledger platforms, including blockchain, have a dedicated service responsible for establishing communication between the nodes utilizing peer-to-peer networking or any other networking algorithm. 
  • There is also a component or service that makes the network of such applications fault-tolerant using consensus algorithms. 
  • Another critical aspect of blockchain platforms is making consensus on the state and transactional data to process, followed by persisting of the manipulated data. 
  • When it comes to private networks, also known as consortium networks, there are many ways to achieve permission for each node to provide a secure and isolated medium among the participants. 

For an application built over these platforms to be confirmed for production use, infrastructure testing carries the same importance as any other supported functionality. Without verifying functionality, no application can be deployed to production; similarly, decentralized applications built over these platforms can be deployed to production only after the reliability of the infrastructure has been verified with all probable numbers of nodes.

What makes the entire exercise demanding are the following factors: 

  • Peer-to-peer (P2P) networking
  • Consensus algorithms
  • Role-based nodes along with permission for each node (meant only for private networks)
  • State and transactional data consistency under high loads along with resilience test of nodes

Another vital characteristic to consider is the number of nodes itself. Considering that such applications' functionality depends on the number of nodes, this is a key requirement. The number of nodes can vary depending upon:

  • Which service or component is to be tested 
  • How all the factors mentioned above impact the service or component
Importance of testing various components of Blockchain Infrastructure
Reliability testing

Reliability of infrastructure is by far the most challenging phase for any blockchain developer or application team. Here, the question of whether an application can run on the targeted infrastructure is explored. Defining application reliability across multiple machines (a.k.a. nodes, servers, participants, etc.) increases complexity exponentially due to the permutations and combinations of failures.

Hence, wherever multiple machines are involved, it is the natural course of action for developers and application teams to measure application reliability on the infrastructure on which such applications will run. All the factors listed earlier attest that infrastructure testing is of prime consequence for decentralized applications built on all available platforms.

Peer-to-Peer networking

If there is any flaw in peer-to-peer networking, then nodes will not communicate with each other. If nodes cannot establish connections with each other, then nodes will not be able to process the transaction with the same state. If nodes are not in the same state, then there will not be any new data manipulated and created to persist. In the case of blockchain, there will not be any new blocks. For the distributed ledger, there will not be any new data appended to the ledger. This may lead to chain forking or a messy state of data across the nodes that will eventually result in the network reaching a dead-end or getting stuck. 

Improper peer-to-peer network implementation can also expose data to unintended nodes that do not have permission to see it. To mitigate this risk of unintentional data exposure, proper testing must be performed to ensure that the expected number of participants, as well as the expected number of new participants, can take part in the network, with appropriate communication established between nodes based on each node's role and permissions.

Consensus algorithms: 

Consensus algorithms have two critical functions: 

  • Drive consensus by ensuring that a majority of nodes are processing new data with the same state
  • Provide fault tolerance for network

Consensus algorithms must be verified with all possible types of nodes and all probable permissions that can be defined for each node. To verify the consensus algorithms, multiple network topologies are needed. Improper verification will result in the network getting stuck. It would also result in sharing of data with nodes that were not supposed to get the data. 

Any flaw in consensus will result in a "stuck network" and cause the forking of data. Worse, data can be manipulated by fraudulent nodes. Depending upon which consensus algorithm is used, the network topology can be created and verified against all the features that are claimed to work.

Role-based nodes, along with their permission 

Each platform supports different roles for each node to ensure that nodes get only the intended information based on the defined permissions. Depending upon the different kinds of roles and their respective permissions, various network topologies are created to perform all the required verifications. If any verification is missed, sensitive data may be exposed to unintended nodes. The way data is shared between nodes is governed by consensus algorithms based on the defined permissions.

Any flaw in the permission-control mechanism can lead to sensitive data leakage, which is catastrophic, all the more so for private networks. The importance of accuracy cannot be overemphasized in this case, and it can only be achieved by ensuring a proper testing mechanism is in place.

State and transactional data consistency 

As there can be any number of nodes in real-time, it is highly critical to verify that each node has the same state and transactional data. All complicated transactions must be performed with an adequately defined load to ensure that all nodes have the same state and transactional data. 

Resiliency-based verification must be performed so that all nodes can get to the same state and transactional data, even when a fault is intentionally introduced to randomly selected nodes with a running network. 

Conclusion

To conclude, infrastructure testing should not be substituted with any traditional functional testing process. Furthermore, as this is a niche area, infrastructure testing must be entrusted to a partner with industry-wide experience and capable resources that have a sound understanding of all the factors underlined above. Real-world experience in establishing testing processes for such platforms is a highly desirable prerequisite. Without infrastructure testing, it is perilous to launch a product in the market.

Magic FinServ has delivered multiple frameworks designed for all the above factors. With an in-depth knowledge of multiple blockchain platforms, we are in an enviable position to provide exactly what the client needs while ensuring the highest level of accuracy and running all frameworks following industry standards and timelines. As each customer has their own specific way of developing such platforms and choosing different algorithms for each factor, choosing an experienced team is undoubtedly the best option to establish an infrastructure testing process and automate end-to-end infrastructure testing.

To explore infrastructure testing for your Blockchain/DLT applications, write to us at mail@magicfinserv.com

Introduction

Investment research and analysis is beginning to look very different from what it did five years ago. Where the data deluge might once have confounded asset management leaders, they now have a choice in how things can be done differently, thanks to AI and advanced analytics. Advanced analytics helps create value by eliminating biased decisions, enabling automatic processing of big data, and using alternative data sources to generate alpha.

With multiple sources of data and emerging AI applications heralding a paradigm shift in the industry, portfolio managers and analysts who earlier used to manually sift through large volumes of unstructured data for investment research can now leverage the power of AI tools such as natural language processing and abstraction to simplify their task. Gathering insights from press releases, filing reports, financial statements, pitches and presentations, CSR disclosures, etc., is a herculean effort and consumes a significant amount of time. However, with AI-powered data extraction tools such as Magic DeepSight™, quick processing of large-scale data is possible and practical.

A tool like Magic DeepSight™ extracts relevant insights from existing data in a fraction of the time, and at a fraction of the cost, of manual processing. However, the real value it delivers is in supplementing human intelligence with powerful insights, allowing analysts to direct their efforts towards high-value engagements.

Processing Unstructured Data Is Tough

There are multiple sources of information that front-office analysts process daily which are critical to developing an informed investment recommendation. Drawing insights from these sources of structured and unstructured data is challenging and complex. They include 10-K reports, the relatively new ESG reports, investor reports, and various other company documents such as internal presentations and PDFs. The SEC EDGAR database makes it easy to access some of this data, but extracting it from SEC EDGAR and identifying and then compiling relevant insights is still a tedious task. Unearthing insights from other unstructured documents also takes stupendous manual effort due to the lack of automation.

10-K Analysis using AI

More detailed than a company's annual report, the 10-K is a veritable powerhouse of information, and accurate analysis of a 10-K report leads to a sounder understanding of the company. There are five clear-cut sections of a 10-K report: business, risk factors, selected financial data, management's discussion and analysis (MD&A), and financial statements and supplementary data, all of which are packed with value for analysts and investors alike. Due to the breadth and scope of this information, handling it is inevitably time-consuming. However, two sections usually require more attention than the others because of their complexity and possible hidden anomalies: "Risk Factors" and the MD&A. The "Risk Factors" section outlines all current and potential risks posed to the company, usually in order of importance. The "Management's Discussion and Analysis of Financial Condition and Results of Operations" (MD&A) section, in contrast, is the management's perspective on the previous fiscal year's performance and on future business plans.

As front-office analysts sift through multiple 10-K reports and other documents in a day, inconsistencies in analysis can inadvertently creep in. 

They can miss important information, especially in the MD&A and Risk Factors sections, as they have many areas to study and more reports in the queue. Even after extracting key insights, it takes time to compare the metrics in the disclosures to a company's previous filings and against industry benchmarks.

Second, there is the risk of human bias and error, where relevant information may be overlooked. Invariably, even the best fund managers succumb to the emotional and cognitive biases inherent in all of us, whether confirmation bias, the bandwagon effect, loss aversion, or the various other biases that behavioral psychologists have formally defined. Failure to consider these issues will lead to suboptimal asset-allocation decisions, and often does.

Using AI to analyze the textual information in the disclosures made within 10-Ks can considerably cut through this lengthy process. Data extraction tools can parse these chunks of text to retrieve relevant insights, and a tool or platform custom-built for your enterprise and trained in the scope of your domain can deliver this information directly to your business applications. More documents can be processed in a shorter time frame, and, armed with new insights, analysts can use their time to take a more in-depth look into the company in question. Implementing an automated, AI-based system removes human error, allowing investment strategies to be chosen that are significantly more objective in both their formulation and execution.
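
As a simplified illustration, the sketch below isolates the "Risk Factors" and MD&A sections from a locally saved, plain-text 10-K using regular expressions; real filings vary enough in formatting that a production tool would rely on trained models rather than patterns like these.

```python
# Sketch: isolate the "Item 1A. Risk Factors" and "Item 7. MD&A" sections
# from a plain-text 10-K so downstream NLP models only see the relevant text.
import re

def extract_section(text: str, start_pattern: str, end_pattern: str) -> str:
    match = re.search(f"{start_pattern}(.*?){end_pattern}", text,
                      flags=re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else ""

with open("example_10k.txt") as f:          # hypothetical filing saved locally
    filing = f.read()

risk_factors = extract_section(filing, r"item\s+1a\.\s*risk factors", r"item\s+1b\.")
mdna = extract_section(filing, r"item\s+7\.\s*management'?s discussion", r"item\s+7a\.")

# Downstream: feed each section to NLP models for summarization, sentiment scoring,
# and year-over-year comparison against previous filings and industry benchmarks.
print(len(risk_factors.split()), "words of risk factors")
print(len(mdna.split()), "words of MD&A")
```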

Analysing ESG Reports

Most public and some private companies today are rated on their environmental, social, and governance (ESG) performance. Companies usually communicate their key ESG initiatives yearly on their websites as a PDF document, and stakeholders study these ESG reports to assess a company's ESG conduct. Investment decisions and brand perception can hinge on these ratings, so care has to be taken to process the information carefully. In general, higher ESG ratings are positively correlated with valuation and profitability and negatively correlated with volatility. An increased preference for socially responsible investments is most prevalent in the Gen Z and Millennial demographics. As they are set to make up 72% of the global workforce by 2029, they are also exhibiting greater concern about organizations' and employers' stance on environmental and social issues. This is bringing under scrutiny a company's value creation with respect to the ethical obligations that impact the society it operates in.

Although ESG reports are significant in a company's evaluation by asset managers, investors, and analysts, these reports and ratings are made available by third-party providers, so, unlike SEC filings, there is little to no uniformity among them. Providers tend to have their own methodology for determining the ratings, and the format of an ESG report varies from provider to provider, making the process of interpreting and analyzing these reports complicated. For example, Bloomberg, a leading ESG data provider, covers 120 ESG indicators, from carbon emissions and climate change effects to executive compensation and shareholder rights. Analysts spend research hours reading reports and managing complex analysis rubrics to evaluate these metrics before making informed investment decisions.

However, AI can make the entire process of extracting relevant insights easy. AI-powered data cleansing and Natural Language Processing (NLP) tools can extract concise information, such as key ESG initiatives, from PDF documents and greatly reduce the text to learn from. NLP can also help consolidate reports into well-defined bits of information that can then be plugged into analytical models, including market risk assessments, as well as other information fields.
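
A minimal sketch of that first pass is shown below: it uses pdfminer.six to read the PDF and a simple keyword filter to surface candidate ESG sentences. The keyword list stands in for a trained NLP classifier and is purely illustrative.

```python
# Sketch: reduce an ESG report PDF to candidate sentences about key initiatives
# before handing them to heavier NLP models or analytical rubrics.
from pdfminer.high_level import extract_text

ESG_KEYWORDS = ("carbon", "emissions", "diversity", "board independence",
                "renewable", "human rights", "executive compensation")

def candidate_sentences(pdf_path: str) -> list:
    text = extract_text(pdf_path)
    sentences = [s.strip() for s in text.replace("\n", " ").split(". ")]
    return [s for s in sentences if any(k in s.lower() for k in ESG_KEYWORDS)]

for sentence in candidate_sentences("acme_esg_report.pdf"):   # hypothetical report
    print("-", sentence)
```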

How Technology Aids The Process

A data extraction tool like Magic DeepSight™ can quickly process large-scale data and also parse unstructured content and alternative data sources like web search trends, social media data, and website traffic. Magic DeepSight™ deploys cognitive technologies like NLP, NLG, and machine learning for this. Another advantage is its ability to plug the extracted information into relevant business applications without human intervention.

About NLP and NLG

Natural Language Processing (NLP) understands and contextualizes unstructured text into structured data, and Natural Language Generation (NLG) analyzes this structured data and transforms it into legible and accessible text. Both processes are powered by machine learning and allow computers to generate text reports in natural human language. The result is comprehensive, machine-generated text with insights that were previously invisible. But how reliable are they?

The machine learning approach, which includes deep learning, builds intelligence from a vast number of corrective iterations. It is based on a self-correcting algorithm, a continuous learning loop that gets more relevant and accurate the more it is used. NLP and AI-driven tools, when trained in the language of a specific business ecosystem like asset management, can deliver valuable insights for every stakeholder across multiple software environments and in the appropriate fields.

Benefits of Using Magic DeepSight™ for Investment Research

  1. Reduced personnel effort

Magic DeepSight™ extracts, processes, and delivers relevant data directly into your business applications, saving analysts’ time and enterprises’ capital.

  2. Better decision-making

By freeing up to 70% of the time invested in data extraction, tagging, and management, Magic DeepSight™ recasts the analysis process. It also supplements decision-making processes with ready insights.

  3. Improved data accuracy

Magic DeepSight™ validates the data at source. In doing so, it prevents errors and inefficiencies from creeping downstream to other systems.

  4. More revenue opportunities

With reduced manual workload and emergence of new insights, teams can focus on revenue generation and use the knowledge generated to build efficient and strategic frameworks. 

In Conclusion

Applying AI to the assiduous task of investment research can help analysts and portfolio managers assess metrics quickly, save time, energy, and money, and make better-informed decisions. The time consumed by manual investment research, especially 10-K analysis, is a legacy problem for financial institutions. Coupled with emerging alternative data sources, such as ESG reports, investment research is more complicated today. After completing research, analysts are left with only a small percentage of their time for actual analysis and decision-making.

A tool like Magic DeepSight™ facilitates the research process and improves predictions, investment decision-making, and creativity. It can effectively save about 46 hours of effort and speed up data extraction, tagging, and management by 70%. In doing so, it brings unique business value and supports better-informed investment decisions. However, despite AI's transformative potential, relatively few investment professionals currently use AI/big-data techniques in their investment processes. While portfolio managers continue to rely on Excel and other market data tools, the ability to harness AI's untapped potential might just be the biggest differentiator for enterprises in the coming decade.

To explore Magic DeepSight™ for your organization, write to us at mail@magicfinserv.com or request a demo.

Background: Ethereum is the first programmable blockchain platform that lets the developer community build business logic in the form of Smart Contracts, which in turn power decentralized applications for any business use case. Once a Smart Contract is developed, it is registered on and deployed to the blockchain. On deployment, the contract is assigned an address, through which the contract’s methods can be executed via an abstraction layer built over the ABI. Web3 is the most popular module for interacting with a local or remote node participating in the underlying Ethereum network.
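
As a minimal sketch of that interaction, the snippet below uses the web3.py library to connect to a node and call a read-only contract method through the abstraction layer built over the ABI. The node URL, contract address, ABI, and the `greet` method are all placeholders for illustration, not a specific deployed contract.

from web3 import Web3  # web3.py: popular module for talking to an Ethereum node

# Connect to a local or remote node participating in the network (placeholder URL)
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# Address assigned at deployment and the ABI produced at compilation (placeholders)
contract_address = "0x0000000000000000000000000000000000000000"
contract_abi = [
    {
        "name": "greet",
        "type": "function",
        "inputs": [],
        "outputs": [{"name": "", "type": "string"}],
        "stateMutability": "view",
    }
]

# The abstraction layer over the ABI: contract methods become callable attributes
contract = w3.eth.contract(address=contract_address, abi=contract_abi)

# Read-only call (no transaction is sent); assumes the contract exposes `greet`
print(contract.functions.greet().call())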

Define Decentralized Application Architecture for Testing: Needless to say, testing any decentralized application built over a blockchain platform is not only highly complex but also requires a specialized skill set and the analytical mind of a white-box tester. At Magic FinServ, we have rich, hands-on experience with some of the most complicated aspects of testing blockchain-based decentralized applications. Based on this experience, our strategy divides a blockchain-based decentralized application into three isolated layers from a testing perspective –

1. Lowest Layer – the blockchain platform on which smart contracts are executed.

a. Ethereum Virtual Machine

b. Encryption (Hashing & Digital Signature by using cryptography)

c. P2P Networking

d. Consensus Algorithm

e.  Blockchain Data & State of the network (Key-Value storage)

2. Middle Layer – the business layer (Smart Contract), where business logic is built for the business use cases

a. Smart Contract development – Smart Contract Compilation, Deployment & Execution in Test Network

b. Smart Contract Audit

3. Upper Layer – the API layer for Contracts, Nodes, Blocks, Messages, Accounts, Key Management & miscellaneous endpoints, providing an interface to execute business logic and get updates on the state of the network at any given point in time. These interfaces can also be integrated with upstream & downstream systems.

Based on these defined components of blockchain, we build an encompassing, generic testing strategy for each layer across two broad categories –

1. Functional: As the name suggests, this category ensures that all components belonging to each layer function as per the acceptance criteria defined by the business user, technical analyst, or business analyst. We include System/Integration testing under this category to ensure not only that all components within each layer work as defined, but also that the complete system accomplishes the overall business use case.

2. Non-Functional: This category covers all testing other than functional testing – Performance, Security, Usability, Volatility & Resiliency – not only at the node level but also at the container level if Docker is used to host any service.

In defining the generic testing strategy for these two broad categories, we recognize that the infrastructure needs to be set up first and that it will not stay the same throughout. Before moving ahead, we need to answer a few questions –

Question1: Why is setting up the infrastructure the most critical and essential activity when strategizing blockchain-based application testing?

Question2: What potential challenges do testers face while setting up the infrastructure?

Question3: What solutions do testers have to overcome the infrastructure setup challenges?

To answer the first question:

We need to step back and consider how software applications are tested in the traditional approach. An environment has to be set up before testing can start, but that is essentially a one-time activity unless the development team makes a significant change to the underlying platforms or technology, which happens very rarely. So testers can continue testing without worrying much about infrastructure.

The two core concepts of Blockchain technology are P2P networking & consensus algorithms. Testing these two components is heavily dependent on infrastructure setup, meaning how many nodes we need to test one feature or the other.

For P2P, we need a different number of connected nodes with an automated way to kill nodes randomly & then observe the behavior in each condition.

For Consensus, it depends on which consensus algorithm is being used; based on its nature, different types of nodes, each in different numbers, will be needed.

Another differentiating factor, not applicable to public blockchains but with a significant impact on private blockchains, is the assignment of different permissions to different nodes.

There is a frequent requirement to keep changing network topology for verifying most of the features of decentralized applications.

By now, we know how important it is to change the network topology for each feature; otherwise, testing would not be as effective. A blockchain network is a collection of many machines (a.k.a. nodes) connected through peer-to-peer networking, so automating the infrastructure required to mimic the network topology needed for testing is always a priority.
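
One possible way to automate such a topology is sketched below in Python, assuming the Docker SDK for Python is installed and a hypothetical node image named "blockchain-node:latest" exists. It spins up a small peer-to-peer network on one host and randomly kills a node to observe volatility behaviour; the image, network name, and node count are all assumptions to adapt to your stack.

import random
import docker  # Docker SDK for Python

client = docker.from_env()

# Create an isolated bridge network to mimic the desired topology
net = client.networks.create("test-topology", driver="bridge")

# Spin up five node containers (the image name is a placeholder for your node software)
nodes = [
    client.containers.run(
        "blockchain-node:latest",
        name=f"node-{i}",
        network="test-topology",
        detach=True,
    )
    for i in range(5)
]

# Volatility check: kill a random node and observe how the remaining peers behave
victim = random.choice(nodes)
victim.kill()
print(f"Killed {victim.name}; observe how the remaining peers respond")

# Tear down the topology once the test run is complete
for node in nodes:
    node.remove(force=True)
net.remove()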

To answer the second question:

1. If we manually spin up instances – let’s assume five – with the required software and user setup, we spend almost 2–3 hours per instance.

2. Manually setting up machines is highly error-prone and mundane. Even simple automation does not help unless the automation framework is intelligent enough to understand the need for different network topologies.

3. Due to agile methodology adoption, spending so much time setting up just infrastructure is not acceptable as the testing team usually does not have that length of time to complete testing for any given sprint.

4. All testers have to understand infrastructure setup as all need to change network topology to test most of the features. Finding a tester with good infrastructure knowledge is also challenging. 

5. Invalid network topology, most of the time, does not show an immediate failure due to the blockchain concept’s inherent nature. Eventually, an incorrect network topology leads to time and effort spent on non-productive testing without finding any potential bugs.

6. A high defect rejection ratio by the development team, either due to incorrect network topology or incorrect peering among nodes.

To answer the third question:

There are four ways to set up a network of nodes –

1. Physical machines with LAN

2. Virtual Machines

3. Docker containers on the same machine (each container can be considered an isolated machine)

4. Cloud Platform

We use Docker containers and cloud platforms to set up the infrastructure for testing blockchain-based applications, as setting up physical or virtual machines is not viable from a maintenance perspective.

Physical machines with LAN: Setting up a blockchain network with physical devices is tough, and scalability is challenging since we need additional devices to achieve the desired testing quality. During infrastructure testing, we need to make machines (a.k.a. nodes) go up and down to verify volatility, which is a cumbersome process for the tester. Setting up the network with physical devices also requires physical space and regular maintenance of the machines. We usually do not recommend this option; however, if a customer requires the testing to be done in such a manner, we can define a detailed strategy to execute it.

Virtual Machines: Compared to a network of physical machines, virtual machines have many advantages. However, increasing the number of VMs on a single underlying device complicates matters, since maintaining VMs is not user-friendly and resource limits have to be hard-coded beforehand. Combining options 1 and 2 (multiple physical machines, each running multiple VMs) is a better choice, although it still requires a lot of maintenance and carries overheads that act as a time sink for the tester. As reducing time-to-test is a critical aspect of quality delivery, we focus on saving as much time as possible so that it can be invested in higher-value elements of testing.

The advantage of using a cloud platform lies in the ability to spin up as many machines as needed without the overheads of maintenance or any other physical activity. However, maintaining such a network with multiple machines on the cloud alone is still an uphill task. Eventually, we concluded that by combining option 3 (Docker containers) with option 4 (cloud platform), we could create a very solid strategy for infrastructure testing that overcomes these problems.

Having tried the various options individually and in combination in real-world engagements, our recommended approach is as follows.

Always perform sanity/smoke testing for any build with Docker containers. Once all sanity/smoke tests finish without any failure, switch to replicating the required network topology for functional testing of new enhancements and features.

The advantages of our approach are

1. Build failure issues can be found in less time and reported back to the development team without the delay that cloud infrastructure introduces. Before taking this approach, we had to spend 2-3 hours to report any build failure bug, whereas the same can now be caught in 5-10 minutes because we always run a selective set of test cases under sanity/smoke.

2. Cloud infrastructure costs are saved when a build fails, as no cloud uptime is incurred for a failed build.

3. The testing team saves a lot of time on infrastructure setup.

4. The development team gets more time to fix issues found in sanity/smoke testing, as they are reported within just a few minutes.

5. Significant reduction in bugs rejected by the development team.

6. The percentage of builds delivered on time, without any major bug, has increased significantly.

A deeper dive into the testing strategy using Docker containers with a cloud platform will be covered in an upcoming blog, followed by our automation framework for infrastructure setup testing. We will also answer the questions customers ask most frequently –

Question1: Why should customers be concerned about Infrastructure testing for decentralized applications?

Question2: Why should customers look for an automated way of Infrastructure Setup testing for blockchain-based decentralized applications?

Stay tuned for the upcoming blogs and follow us on LinkedIn for timely updates.  

What are Smart Contracts?

Smart contracts are translations of an agreement, including its terms and conditions, into computational code. Blockchain follows the “no central authority” concept, and its primary purpose is to maintain transaction records without a central authority. Smart Contracts were conceived to automate this process of recording transactions. They carry several beneficial traits, including automation, immutability, and a self-executing mode.

What is a DAML Smart Contract?

DAML is the open-source language from Digital Asset created to support DLT, or distributed ledger technology. It allows multiple parties to transact in a secure and permissioned way. Using DAML as the coding language enables developers to focus on building the smart contract’s business logic rather than fretting about the nitty-gritty of the underlying technology. DAML Smart Contracts run on various blockchain platforms and regular SQL databases while providing the same logical privacy model and immutability guarantees.

A personal perspective

As a technology leader with years of experience, one line I remember from my early days is Sun Microsystems’ “Write Once, Run Anywhere” slogan, coined to highlight the Java language’s cross-platform benefits.

I believe, in the coming years, DAML is the language that will enjoy similar popularity as Java due to its cross-platform benefits, ease of use, and versatility. DAML can revolutionize how business processes are managed by simplifying the contracts and making them ‘smarter.’

Comparing DAML

DAML V/S General Purpose Languages

Today, a few popular general-purpose languages are used for creating multi-party blockchain applications, e.g., Java, Go, and Kotlin.

All of these can also be used to create smart contracts, but the challenge lies in the sheer complexity of the task at hand. The code that needs to be produced to build an effective smart contract in these languages is daunting. DAML can achieve the same result with 5-7 times less code, and in a much simpler manner.

DAML’s basic data types are contract-oriented (parties, observers, signatories, and others), in direct contrast to those of general-purpose languages (int/float/double). The very essence of a smart contract language such as DAML is one thing and one thing only – contracts – which makes it a superior choice for writing Smart Contracts.

Comparison with Existing Smart Contract Languages

The domain of Smart Contracts is better handled by languages that have been purpose-built for them, such as Solidity and DAML. Among these smart contract languages, DAML is the only one that is both open-source and Write Once, Run Anywhere (WORA). The DAML contract model is also private in nature.

DAML for Private and Public Ledgers

The two types of ledgers, private and public, serve different purposes and should be used accordingly. The underlying concept is that information on the ledgers is immutable once created.

Public Ledgers: Open to all – anybody can join the system, and every member can access, read, and write transactions. Examples: Bitcoin, Ethereum, and others.

Private Ledgers: Also known as permissioned networks or permissioned blockchains, these restrict participation and offer higher security with limited permissions. Examples: Hyperledger Fabric and Corda. Some private ledgers offer different privacy and data-sharing settings; Hyperledger Sawtooth, for instance, although permissioned, allows all nodes to receive a copy of all transactions.

DAML, as an open-source language, allows the involved parties to transact in a secure and permissioned way, enabling developers to focus on the business logic rather than spending precious time fine-tuning the underlying persistence technology.

At a logical level, DAML has a strict policy for permissioned access. However, at the persistence level, different ledgers might implement privacy in different ways. 

Sample of a Smart Contract: 

Reporting a trade transaction between two counterparties to a regulator or reviewing authority using Smart Contracts.

module Finance where

template Finance
  with
    exampleParty : Party
    exampleParty2 : Party
    exampleParty3 : Party
    regulator : Party
    exampleParameter : Text
    -- more parameters here
  where
    signatory exampleParty, exampleParty2
    observer regulator
    controller exampleParty can
      UpdateExampleParameter : ContractId Finance
        with
          newexampleParameter : Text
        do
          create this with
            exampleParameter = newexampleParameter

template keyword – declares the template name and is followed by the names of the parameters and their types.

where keyword – introduces the template body, which can include:

  • template-local definitions (let keyword) – lets you make definitions that have access to the contract arguments and are available in the rest of the template definition.
  • signatories (signatory keyword, required) – the parties (see the Party type) who must consent to the creation of an instance of this contract. You won’t be able to create an instance of this contract until all of these parties have authorized it.
  • observers (observer keyword, optional) – parties that aren’t signatories but whom you still want to be able to see this contract. For example, the SEC wants to know about every contract created, so the SEC should be added as an observer.
  • agreement (optional) – text that describes the agreement that this contract represents.

Explanation of the code snippet

DAML is whitespace-aware and uses layout to structure blocks. Everything that is below the first line is indented and thus part of the template’s body.

The signatory keyword specifies the signatories of a contract instance. These are the parties whose authority is required to create the contract or archive it again – just like a real contract. Every contract must have at least one signatory.

Here the contract is created between two parties (exampleParty and exampleParty2), and the regulator is the observer. Every transaction is visible to the observer – the regulator, playing the role of the SEC in this case – so the SEC can monitor every transaction. This is the space in which smart contracts can be created.

The DAML disclosure policy ensures that exampleParty3 cannot view the transactions, as it is neither a signatory, an observer, nor a controller; it is merely a party named in the contract.

Here is a link to the repository provided by Digital Asset, which contains examples of several such use cases modeled in DAML.

  1. How to write smart contracts using DAML, with various use cases:

https://github.com/digital-asset/ex-models

  2. A ledger implementation enabling DAML applications to run on Hyperledger Fabric 2.x:

https://github.com/digital-asset/daml-on-fabric

Compilation and Deployment of DAML

DAML comprises both the language and the runtime environment (in the form of libraries known as the DAML SDK). Developers can focus on writing smart contracts using the language features provided by the DAML SDK, without worrying about the underlying persistence layer. The SDK also supports common data structures (List/Map/Tuple) and provides the functionality to create new ones.

Other notable features of DAML 

  • A .dar file is the result of compilation done through the DAML Assistant; .dar files are eventually uploaded to the ledger so that contracts can be created from the templates in the file. A .dar is made up of multiple .dalf files. A .dalf file is the output of a compiled DAML package or library, and its underlying format is DAML-LF.
  • Sandbox is a lightweight, in-memory ledger implementation available only in the dev environment.
  • Navigator is a tool for exploring what is on the ledger; it shows which contracts can be seen by different parties and lets you submit commands on behalf of those parties.
  • DAML gives you the ability to deploy your smart contracts on the local (in-memory) ledger so that various scenarios can be easily tested.

Testing DAML Smart Contracts

1) DAML has a built-in mechanism for testing templates called ‘scenarios’. Scenarios emulate the ledger. One can specify a linear sequence of actions that various parties take, and subsequently these are evaluated with the same consistency, authorization, and privacy rules as they would be on the sandbox ledger or ledger server. DAML Studio shows you the resulting transaction graph.

2) Magic FinServ has launched its test automation suite, the Intelligent Scenario Robot, or IsRobo™, an AI-driven scenario generator that helps developers test smart contracts written in DAML. It generates unit test scenarios (both positive and negative test cases) for a given smart contract without any human intervention, purely based on AI.

Usage in Capital Markets

Smart contracts, in general, have excellent applications across the capital markets industry. I shall cover some use cases in subsequent blogs outlining how multi-party workflows within enterprises can benefit by minimizing reconciliations of data between them, and allow mutualization of the business process. Some popular applications currently being explored by Magic FinServ are: 

  • Onboarding KYC
  • Reference data management
  • Settlement and clearing for trades
  • Regulatory reporting
  • Option writing contracts (Derivatives industry)

Recent Noteworthy Implementations of DAML Smart Contracts are: 

  • The International Swaps and Derivatives Association (ISDA) is running a pilot of its Common Domain Model (CDM) for clearing interest rate derivatives using a distributed ledger.
  • The Australian Securities Exchange (ASX) and the Swiss investment bank UBS are continually providing inputs to validate the CDM’s additional functionality alongside ISDA and Digital Asset.

DAML Certification process

To get hands-on experience with DAML, free access to docs.daml.com is available, where developers can study the material, download the runtime, and build sample programs. However, to reinforce the learning and add it as a valuable skill, it is better to become a DAML-certified engineer. Certification is worth pursuing, as the fee is reasonable and the benefits are manifold. DAML developers are not plentiful in the market, so it is a rather sought-after skill as well.

Conclusion

DAML is ripe for revolutionizing the way business processes are managed and transactions are conducted. 

The smart contracts that are developed on open-source DAML can run on multiple DLTs / blockchains and databases without requiring any changes (write once, run anywhere). 

With its varied applications and relative ease of learning, DAML is surely emerging as a significant skill to add to your repertoire if you are a technologist in the capital markets domain.

To explore the DAML applications with Magic FinServ, read more here

To schedule a demo, write to us at mail@magicfinserv.com.

The accessibility, accuracy, and wealth of data on the Securities and Exchange Commission’s EDGAR filing system make it an invaluable resource for investors, asset managers, and analysts alike. Cognitive technologies are changing the way financial institutions and individuals use data reservoirs like the SEC EDGAR. In a world that is increasingly powered by data, artificial intelligence-based technologies for analytics and front-office processes are barely optional anymore. Technology solutions are getting smarter, cheaper, and more accurate, implying that your team’s efforts can be directed towards high-value engagements and strategic implementations.

DeepSight™ by Magic FinServ is a tailor-made solution for the unstructured-data challenges of the financial services industry. It uses cutting-edge technology to help you gain more accurate insights from unstructured and structured data, such as datasets from the EDGAR website, emails, contracts, and documents, saving over 70% of the existing costs.

AI-based solutions significantly enhance the ability to extract information from the massive data deluge and turn it into knowledge, providing critical input for decision-making. This often translates into higher competitiveness and, therefore, higher revenue.

What are the challenges of SEC’s EDGAR?

The SEC’s EDGAR presents vast amounts of data from public companies’ filed corporate documents, including quarterly and annual reports. While the reports are comprehensive and more accessible on public portals than before, filings such as daily filings and forms require much more diligent effort to peruse, and the work is tedious. There is also an increased margin of human error and bias when manually combing through data in such volumes. The quick availability of this public data also means that market competitors track and process it fast, in real time.

The numerous utilization possibilities of this data come with challenges in analysis and application. The issue of external data integration into fund management operations has been a legacy problem. The manual front-office processing of massive datasets is tedious and fragmented today but changing fast. Analysis of such large amounts of data is time-consuming and expensive; therefore, most analysts only utilize a handful of data points to guide their investment decisions, leaving untapped potential trapped in the other data points.  

After a lukewarm 1.1 percent organic net flow in the US every year between 2013 and 2018, cognitive technologies have now brought about a long-due intervention in the form of digital reinvention. Previously limited to applications in the IT industry, these technologies have been transforming capital management for a short while, but with remarkable impact. While their appearance in finance is novel, they present unique use cases to extract and manage data. 

How can technology help with the processing of EDGAR data used in the industry?

Data from EDGAR is being used across various business applications. Intelligent reporting, zero redundancies, and timely updates ultimately drive the quality of investment decisions. As investment decisions can be highly time-sensitive, especially during volatile economic conditions, extracting and tracking relevant information in real-time is crucial. 

Magic DeepSight™ is trained to extract relevant and precise information from SEC EDGAR, organize this data as per your requirements, and deliver it in a spreadsheet or via APIs, or, even better, ingest it directly into your business applications. Since Magic DeepSight™ is built from the ground up with AI technology, it has a built-in feedback loop that trains the system automatically with every use.

This focused information retrieval and precision analysis hastens and enhances the investment assessment process of a fund or an asset manager, a process that is fraught with tedious data analysis, complicated calculations, and bias when done solely manually.

Investment advice collaterals that are accurate, informative, and intelligible are part of the value derived through Magic DeepSight™. NLP and AI-driven tools, especially those trained in the language of your business ecosystem, can help you derive insights across multiple software environments in their appropriate fields. And all of it can be customized for the stakeholder in question. 

Meanwhile, tighter regulations on the market have also increased the costs of compliance. Technology offsets these costs with unerring and timely fulfillment of regulatory requirements. The SEC has had company filings under the magnifying glass in recent exams, and hefty fines are being imposed on firms for not meeting the filing norms. Apart from the pecuniary implications, fulfilling these requirements pertains to your firm’s health and the value perceived by your investors.

What’s wrong with doing it manually?

Most front-office processes continue to be manual today, forcing front-office analysts to slog through large chunks of information to gain valuable insights. The information on EDGAR is structured uniformly, but the lengthy retrieval process negates the benefits of this organization of data. For example, if you wish to know acquisition-related information about a public company, you can access its Form S-4 and 8-K filings easily on the SEC EDGAR website, but going through all the text to precisely find what is needed takes time. With Magic DeepSight™, you can automate this extraction process so analysts can focus on the next steps.
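
For context on what that retrieval step involves, here is a minimal Python sketch, independent of Magic DeepSight™, that lists a company’s recent 8-K filings via the SEC’s public submissions endpoint. The CIK shown is a placeholder, and the sketch assumes the endpoint keeps its documented JSON shape and that you identify your application in the User-Agent header as the SEC requests.

import requests

# EDGAR's submissions endpoint returns a company's filing history as JSON;
# the CIK below is a placeholder and must be zero-padded to ten digits.
CIK = "0000320193"
url = f"https://data.sec.gov/submissions/CIK{CIK}.json"
headers = {"User-Agent": "Sample App sample@example.com"}  # identify your client

data = requests.get(url, headers=headers, timeout=30).json()

# Recent filings come back as parallel lists; filter down to 8-K forms
recent = data["filings"]["recent"]
for form, date, accession in zip(
    recent["form"], recent["filingDate"], recent["accessionNumber"]
):
    if form == "8-K":
        print(date, accession)

Even with this kind of automation, an analyst is still left reading the filings themselves; that reading and extraction step is what a tool like Magic DeepSight™ is built to take over.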

And when a team of analysts is going through multiple datasets quickly, relevant insights from data that falls outside the few main parameters being considered are likely to be overlooked. If such a hurdle arises even with organized data, processing unstructured documents with large blocks of text, press releases, company websites, and PowerPoint presentations unquestionably takes much longer and is equally problematic. With Magic DeepSight™, you can overcome this blind spot. It can quickly process all values in a given dataset, and, using NLP, it efficiently extracts meaningful information from unstructured data across multiple sources. Using this information, Magic DeepSight™ can surface new patterns and insights to complement your research team.

How does Magic DeepSight™ transform these processes?

While most data management solutions available in the market are industry-agnostic, Magic DeepSight™ is purpose-built for financial domain enterprises. AI models such as Magic DeepSight™’s, trained on financial markets’ datasets, can comprehend and extract the right data points. Built with an advanced, domain-trained NLP engine, it analyzes data from an industry perspective and customizes the output to your needs. Magic DeepSight™ is available on all cloud environments, and on-premises if needed. Moreover, it integrates with your existing business applications without causing any disruption to your current workflow.

DeepSight™ is built on a reliable stack of open-source libraries, complemented by custom code wherever needed and trained to perfection by our team. This versatility is also what makes it easily scalable. Magic DeepSight™ can handle a wide range of information formats and select the most appropriate library for any dataset. With Magic DeepSight™, the search, download, and extraction of relevant information from the SEC EDGAR database becomes easy and efficient. Information on forms, such as the disclosures on a 10-K covering risk assessment, governance, conflicts of interest, and more, is accurately summarized in a fraction of the time taken previously, freeing up space for faster and better-informed decision-making.

But it is more than just a data extraction tool. DeepSight™ also integrates with other technologies such as RPA, smart contracts, and workflow automation, making it an end-to-end solution that adds value to each step of your business processes.

Our team can also customize DeepSight™ to your enterprise’s requirements, delivering automated, standardized, and optimized information-driven processes across front-to-back offices.

What business value does Magic DeepSight™ provide?

  • It completely automates the process of wading through vast amounts of data to extract meaningful insights, saving personnel time and effort and reducing costs by up to 70%.
  • It becomes an asset to the research processes by employing NLP to extract meaningful information from an assortment of unstructured document types and formats, improving your firm’s overall data reservoir quality.
  • The breadth of insights made possible with AI offers a richer perspective that was previously hidden, helping you drive higher revenues with better-informed investment decisions.

Magic DeepSight™ digitally transforms your overall operations. Firms that adopt AI, data, and analytics will be better suited to optimize their business applications. 

To explore Magic DeepSight™ for your organization, write to us at mail@magicfinserv.com

Until recently, your enterprise may have considered smart contracts as a tool to bridge silos from one organization to another – that is, to establish external connectivity over Blockchain. But what if the same concept could be applied within a firm to address enterprise-wide data reconciliation and system integration/consolidation challenges, expediting time to market and streamlining reporting (internal, regulatory, FP&A, supplier risk)?

After all, about 70-80% of reconciliation activity takes place within the enterprise. The best part? A firm can do this with minimal disruption to its current application suite, operating system, and tech stack. We will look at traditional approaches and explain why smart contracts are the way to get started on one of those journeys from which one never looks back.

To set the stage, let’s cover the self-evident truths. Reconciliation tools are expensive, and third-party tool implementations typically require multi-year (and multi-million-dollar) investments. Over 70% of reconciliation requirements are within the enterprise, amongst internal systems. Most reconciliation resolutions start with an unstructured data input (PDF/email/spreadsheet) that requires manual review and scrubbing before it can be ingested easily. For mission-critical processes, this “readiness of data” lag can result in delays and lost business, backlogs, unjustifiable cost, and, worst of all, regulatory penalties.

Magic Finserv proposes a three-fold approach to take on this challenge. 

  1. Data readiness: Tackle the unstructured data problem with AI and ML utilities that can access data sources and ingest them into a structured format. Reconciliation is often necessary because of incorrect or incomplete data; ML can anticipate what is wrong or missing based on past transactions and remediate it. This is auto-reconciliation (a simplified sketch follows this list).
  2. Given that unstructured data elements may reside in fragmented platforms or organizational silos, the firm must have an intelligent way of integrating and mutualizing them with minimal intervention. An ETL job or data feed may look appealing initially; however, these are error-prone and do not eliminate the manual reconciliation tasks needed for exception management. Alternatively, a smart contract-based approach can streamline your rule-based processes to create a single data source.
  3. Seamless integration to minimize the disconnect between applications. The goal is to create an environment where, ideally, reconciliation is no longer required.
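
To make the first step concrete, here is a simplified Python sketch (not Magic FinServ’s production logic) of how structured records from two internal systems can be matched and the breaks flagged for exception management using pandas. The system names, column names, and values are illustrative only.

import pandas as pd

# Illustrative extracts from two internal systems after the data-readiness step
ledger = pd.DataFrame(
    {"trade_id": ["T1", "T2", "T3"], "amount": [100.0, 250.0, 75.0]}
)
settlement = pd.DataFrame(
    {"trade_id": ["T1", "T2", "T4"], "amount": [100.0, 260.0, 40.0]}
)

# Outer join keeps records that exist on only one side (missing data)
recon = ledger.merge(
    settlement,
    on="trade_id",
    how="outer",
    suffixes=("_ledger", "_settlement"),
    indicator=True,
)

# Flag breaks: records missing from one system or with mismatched amounts
recon["break"] = (recon["_merge"] != "both") | (
    recon["amount_ledger"] != recon["amount_settlement"]
)

# Only the exceptions go to a human; matched records pass straight through
print(recon[recon["break"]])

A rule-based engine or an ML model trained on past breaks can then suggest the likely fix for each exception, which is the auto-reconciliation idea described above.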

We have partnered with Digital Asset to outline a solution that brings together an intelligent data extraction tool, DAML smart contracts, and a capital markets-focused integration partner to reduce manual end-to-end reconciliation challenges for the enterprise.

Problem statement & Traditional Approach

Given that most enterprise business processes run through multiple disparate applications, each with its own database, a monolithic application approach has proven close to impossible and is not recommended given the well-known issues with monolithic architectures. Traditionally, this challenge has been addressed with integration tools such as an Enterprise Service Bus (ESB) or SOA, where the business gets consumed in a cycle of data aggregation, cleansing, and reconciliation. Each database becomes a virtual pipeline of a business process, and an additional staging layer is created to deliver internal/external analytics. In addition, these integration tools are not intelligent: they only capture workflows with adapters (ad hoc business logic) and do not offer privacy restrictions from the outset.

Solution

Digital Asset’s DAML on X initiative extends the concept of the Smart Contract onto multiple platforms, including databases. DAML on X interfaces with the underlying databases using standard protocols, while the Smart Contract mutualizes the data validation rules as well as the access privileges. Once you create a DAML smart contract, the integrity of the process is built into the code itself, and the DAML runtime makes communication across disparate systems seamless. It is in DAML’s DNA to function as a platform-independent programming language built specifically for multi-party applications.

Without replacing your current architecture, such as the ESB or your institutional vendor management tool of choice, you can use the DAML runtime to make application communication seamless and have your ESB invoke the necessary elements of your smart contract via exposed APIs.

Handling Privacy, Entitlements & Identity Management

Every party in the smart contract has a “party ID” that plugs directly into the identity management solution you use institutionally. You can even embed “trustless authentication”.

The idea is that entitlements, rights, and obligations are baked directly into the language itself, as opposed to a typical business process management tool, where you build out your business process and then marry in the entitlements during phase 3 of the project, only to realize that the workflow needs to change.

DAML handles this upfront: all of the authentication is taken care of by the persistence layer/IDM that you decide on. The smart contract template represents a data schema in a database, and the signatories/controllers in our example represent role-level permissioning of who can do what, and when, and who can see what, and when.

 The image below shows how the golden source of data is generated.


It is a purpose-built product with automatic disclosures and privacy parameters out of the box. You don’t need to keep checking your code to see whether the party exercising a command is actually allowed to see the data; all of this is within the scope of the DAML runtime.

Already kickstarted your enterprise blockchain strategy?

First, amazing! Second, since DAML Smart Contracts can run on databases or distributed ledgers of your choice (Fabric, Corda, etc.), it is a unique solution that gives you the flexibility to start building the application and even change the underlying ledger at any point. You can also integrate between multiple instances, i.e., if you are running one DAML app on Fabric and another on Corda, the two apps can talk to one another.

The key takeaway here is that most enterprises are held up determining which ledger meets their needs. With DAML’s intuitive, business-workflow-focused approach, developing your applications while you select your ledger can expedite revenue capture, enable consistent enterprise reporting, and reduce the burden of reconciliation – everything from the smart contract through to the integration layer is completely portable.
