Ingesting Unstructured Data into Other Platforms

Industry-specific products and platforms, such as ERPs built for specific functions and processes, have contributed immensely to enhancing efficiency and productivity. SI partners and end users have focused on integrating these platforms with existing workflows through a combination of customizing/configuring the platforms and re-engineering existing workflows. Data onboarding is a critical activity; however, it has been restricted to integrating the platforms with the existing ecosystem. A key element that is very often ignored is bringing unstructured data sources into the data onboarding process.

Most enterprise-grade products and platforms require a comprehensive utility that can extract and process a wide set of unstructured documents and data sources, and ingest the output into a defined set of fields spread across several internal and third-party applications on behalf of their clients. You are likely extracting and ingesting this data manually today, but an automated utility could be a key differentiator that removes time, effort, and errors from the extraction process.

Customers have often equated the use of OCR technologies with a solution to these problems; however, OCR suffers from quality and efficiency issues and therefore still requires manual effort. More importantly, OCR extracts the entire document rather than just the relevant data elements, adding significant noise to the process. And finally, the task of ingesting this data into the relevant fields of the applications/platforms is still manual.

When it comes to widely used and “customizable” case management platforms for Fincrime applications, CRM platforms, or client on-boarding/KYC platforms, there is a vast universe of unstructured data that requires processing outside of the platform in order for the workflow to be useful. Automating manual extraction of critical data elements from unstructured sources with the help of an intelligent data ingestion utility enables users to repurpose critical resources tasked with repetitive offline data processing.

Your data ingestion utility can be a “bolt on” or a simple API that is exposed to your platform. While the document and data sets may vary, as long as there is a well-defined list of applications and fields that need to be populated, there is a tremendous opportunity to accelerate every facet of client lifecycle management. There are benefits to both a “point solution” that automates extraction of a well-defined document type/format and a more complex, machine learning-based utility for a widely defined format of the same document type.
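
As an illustration only (not a description of any specific vendor implementation), a minimal Python/Flask sketch of such an exposed extraction API could look like the following, where extract_critical_elements and the field mapping are hypothetical placeholders:

# Minimal illustrative sketch of a "bolt-on" extraction API; all names are hypothetical.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical mapping of extracted data elements to fields in target applications.
FIELD_MAP = {"counterparty_name": "CRM.Account.Name", "lei": "KYC.Entity.LEI"}

def extract_critical_elements(document_bytes: bytes) -> dict:
    """Placeholder for the real extraction logic (OCR, NLP models, etc.)."""
    return {"counterparty_name": "ACME Corp", "lei": "PLACEHOLDER-LEI"}

@app.route("/extract", methods=["POST"])
def extract():
    document = request.files["document"].read()
    elements = extract_critical_elements(document)
    # Map each extracted element to the field expected by the downstream application.
    payload = {FIELD_MAP[k]: v for k, v in elements.items() if k in FIELD_MAP}
    return jsonify(payload)

if __name__ == "__main__":
    app.run(port=8080)  # expose the utility as a simple HTTP service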

Implementing Data Ingestion

An intelligent data ingestion utility, with pre- and post-processing, can be implemented in four stages, each increasing in complexity and in the value extracted from your enterprise platform:

Stage 1 
  • Automate the extraction of standard templatized documents. This is beneficial for KYC and AML teams that are handling large volumes of standard identification documents or tax filings which do not vary significantly. 
Stage 2 
  • Manual identification and automated extraction of data elements. In this stage, end users of an enterprise platform can highlight and annotate critical data elements which an intelligent data extraction utility should be able to extract for ingestion into a target application or specified output format. 
Stage 3
  • Automated identification and extraction as a point solution for specific document types and formats.
Stage 4
  • Using stage 1-3 as a foundation, your platform may benefit from a generic automated utility which uses machine learning to fully automate extraction and increase flexibility of handling changing document formats. 

You may choose to trifurcate your unstructured document inputs into “simple, medium, and complex” tiers as you develop a cost-benefit analysis to test the outcomes of an automated extraction utility at each of the aforementioned stages. 

Key considerations for an effective Data Ingestion Utility:

  • Your partner should have the domain expertise to help identify the critical data elements that would be helpful to your business and end users 
  • Flexibility to handle new document types, add or subtract critical data elements and support your desired output formats in a cloud or on-premise environment of your choice
  • Scalability & Speed
  • Intelligent upfront classification of required documents that contain the critical data elements your end users are seeking
  • Thought leadership that helps you consider the upstream and downstream connectivity of your business process

This blog is the third in our series on DLT infrastructure testing.

In the first blog, we covered all aspects of infrastructure testing for decentralized applications built on blockchain or distributed ledger platforms, along with the Magic FinServ approach. In the second blog, we addressed why customers must make infrastructure testing an integral part of the QA process.

In this third blog of the series, we address another issue of critical importance – automation. Automation is an essential requirement in any organization today when disruptive forces are sweeping across domains. And as a McKinsey report indicates – “Automation can transform testing and quality control because the increased capacity it provides allows a company to move from spot checks to 100 percent quality control, which reduces the error rate to nearly zero.” 

Infrastructure testing - a critical requirement

While the importance of infrastructure testing cannot be denied, four attributes make it extremely complicated from the tester’s perspective. These are peer-to-peer (P2P) networking, consensus algorithms, role-based nodes along with the permissions for each node (only for private networks), and lastly, state and transactional data consistency under high load along with the resiliency of nodes.

To know more about these in detail, you can check the links below, which lead to the first and second blogs in the series:

Infrastructure Testing for Decentralized Applications built on Blockchain or Distributed Ledger Platform

Why is Infrastructure Testing important for Decentralized Applications built on any Blockchain or DLT

From these blogs, it becomes evident that although infrastructure testing is an essential requirement for any decentralized application, it is also a time-consuming task. Most of the supported features of such applications require different configurations/arrangements of nodes, meaning a different network topology for each feature. A feature may initially be tested with a certain number of nodes; however, properly testing a fix or an enhancement of any sort may require a different number of nodes from what was designed earlier.

Developing a comprehensive test strategy

As far as test strategies are concerned, the most commonly deployed one utilizes Docker-based containers to replicate different network topologies with minimal changes. However, defining Docker-based containers (a.k.a. Docker services) in different numbers is also a highly time-consuming activity. Setting up Docker-based containers to create different network topologies usually takes a couple of hours for the addition of even a single new container, depending upon the number of nodes. It is not only tedious but also complicated.
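
As a rough sketch of how this setup can be scripted rather than assembled by hand, the snippet below uses the Python Docker SDK (docker-py) to stand up an N-node topology on a dedicated bridge network; the image name and node counts are placeholders:

# Sketch: spin up an N-node test topology with the Python Docker SDK (docker-py).
import docker

client = docker.from_env()

def create_topology(name: str, node_count: int, image: str = "my-dlt-node:latest"):
    """Create a bridge network and start node_count containers attached to it."""
    network = client.networks.create(f"{name}-net", driver="bridge")
    nodes = []
    for i in range(node_count):
        container = client.containers.run(
            image,                      # placeholder image for the ledger node
            name=f"{name}-node-{i}",
            network=network.name,
            detach=True,
        )
        nodes.append(container)
    return network, nodes

def destroy_topology(network, nodes):
    """Tear everything down so the hosts can be reused for the next topology."""
    for container in nodes:
        container.remove(force=True)
    network.remove()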

One also must take the cloud into account. Most organizations now require infrastructure testing to be carried out on cloud platforms to mimic, as closely as possible, the environment they would be using in production. However, setting up one node on any existing cloud service can easily take two to three hours, even with automated ways to spin up machines. Therefore, to ensure quicker results, the option at hand is automation.

Automating the untested – how to get started

Today almost every organization/enterprise uses Agile methodology for product development and an automated way (with CI/CD) to create builds daily. Functional testing can be automated and integrated within CI/CD easily, but this is not so with non-functional testing such as infrastructure, performance, security, resiliency, and load testing. These are not easily integrated with CI/CD, and even when they are, non-functional testing does not provide the kind of results organizations desire.

Manual non-functional testing, meanwhile, is rather tedious. Since frequent builds have to be tested (for non-functional areas like infrastructure), manually setting up a different network topology each time is not viable: it takes a lot of time and is highly error-prone. Non-functional testing of blockchain (other than infrastructure) operates at the node level rather than the network level; tests related to performance, security, and resiliency are therefore performed on standard network topologies. This indicates that infrastructure testing relates directly to network topologies, whereas the other non-functional testing processes mentioned earlier are impacted only on a case-to-case basis.

For infrastructure testing, organizations must carry out the following activities to define the network topology:

  • Impact analysis of all changes related to the four significant factors listed earlier
  • Definition of network topologies for each scenario, if any of the four factors is impacted
  • Set up of nodes for all probable network topologies
  • Creation of network for each network topology
  • Execution of functional/non-functional testing on each network topology to ensure that all network topologies are working as per the acceptance criteria

Impact analysis of changes 

To define the required number of network topologies, organizations must first identify what changes are to be made and whether those changes impact the peer-to-peer (P2P) networking logic, consensus algorithm logic, permissioning handler logic, or data/transaction consistency logic. If an impact is apparent, then the organization must define the corresponding network topology. This activity is the most time-consuming task of all, as one has to understand all the changes.

Another critical task for organizations is to perform impact analysis for all changes and find out whether the four major factors have been impacted or not. The easiest way to approach this task would be to have developers register this information with meaningful keywords so that impact analysis can be automated. With proper automation in place, organizations can use impact analysis to determine whether existing network topologies can be reused or a new one has to be created.
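
To make the idea concrete, a minimal sketch of such keyword-driven impact analysis might look like the following Python snippet, assuming developers tag their change descriptions or commit messages with agreed keywords (the keyword lists here are purely illustrative):

# Sketch: keyword-based impact analysis over change descriptions / commit messages.
IMPACT_KEYWORDS = {
    "p2p": ["p2p", "peer", "gossip", "discovery"],
    "consensus": ["consensus", "raft", "pbft", "leader election"],
    "permissioning": ["permission", "role", "acl"],
    "data_consistency": ["state", "transaction", "ledger data", "persistence"],
}

def analyze_impact(change_notes):
    """Return the set of impacted factors based on registered keywords."""
    impacted = set()
    for note in change_notes:
        text = note.lower()
        for factor, keywords in IMPACT_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                impacted.add(factor)
    return impacted

# Example: these notes would flag the consensus and permissioning factors,
# signalling that new network topologies may be needed.
print(analyze_impact(["Tune PBFT leader election timeout", "Add read-only role ACL"]))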

Defining network topologies: 

Once impact analysis is done and it is decided that new network topologies must be created to account for changes, then the next requirement is to define all network topologies.

For instance, suppose an organization reports an issue related to the functioning of nodes: whenever there is an even number of consensus nodes within the network, consensus seems to get stuck or takes longer than usual. To resolve the problem, developers work out the logic. If the network does not have an even number of consensus nodes, the need is to either convert one existing node into a consensus node or add a new one to the network. Either way, the network topology will change from the one that exists.

With proper automation in place, it is possible to keep a registry of all existing QA network topologies. Once the required network topology is fed in, the registry should indicate whether a new node has to be created or an existing network can be utilized after modifying its number of nodes. Performing this task manually could take hours, and sometimes even days, if the organization has a long list of network topologies in its QA environments.
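
A simple registry lookup of this kind could be sketched in Python as follows; the topology attributes and entries are purely illustrative:

# Sketch: registry of existing QA network topologies and a reuse-or-create decision.
from dataclasses import dataclass

@dataclass
class Topology:
    name: str
    consensus_nodes: int
    observer_nodes: int
    in_use: bool

REGISTRY = [
    Topology("qa-small", consensus_nodes=4, observer_nodes=1, in_use=False),
    Topology("qa-large", consensus_nodes=7, observer_nodes=3, in_use=True),
]

def find_reusable(required_consensus: int, required_observers: int):
    """Return a free topology matching the requirement, or None if one must be created."""
    for topo in REGISTRY:
        if (not topo.in_use
                and topo.consensus_nodes == required_consensus
                and topo.observer_nodes == required_observers):
            return topo
    return None

match = find_reusable(required_consensus=4, required_observers=1)
print("Reuse existing topology" if match else "Provision a new topology")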

Setting up nodes for required network topology: 

There are two possibilities here: either modify an existing node or create a new node to form the new network topology. In either case, without automation the nodes have to be set up manually. This takes time and requires the engagement of someone who understands the QA environments from an administrative perspective, which increases turnaround time and creates a dependency on additional groups to coordinate node setup.

Creation of Network Topology: 

After setting up the required number of nodes, a network is created based on the network initialization process. If multiple network topologies have to be tested with several scenarios, then for each network topology, the following activities have to be performed:

  • Cleaning all involved nodes if existing network nodes are used
  • Initialization of network
  • Allowing for stabilization of the nodes for all components/services
  • Execution of functional scenarios
  • Destroying the network to free the nodes

As all the above activities have to be completed for each network topology, doing this without automation consumes a lot of time and makes testing highly error-prone. Most of the time, network topologies use nodes that overlap with other network topologies; missing any of the activities outlined earlier will therefore result in inconsistency on the other running networks. Experience suggests that cleaning the nodes is a highly error-prone activity within a shared environment of various network topologies. It becomes tough to determine why errors are occurring: whether they are actual bugs to be reported, or whether some nodes are now being used by two or more networks because clean-up has not been done correctly. Without proper automation, all these activities take significant time and raise false alarms for issues that have popped up due to human error.
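
Under automation, the per-topology activities listed above reduce to an orchestrated loop. The Python sketch below illustrates the shape of such a loop; every helper is a trivial placeholder standing in for the real provisioning, initialization, and test-execution logic:

# Sketch: automated per-topology test run; the helpers are placeholders for real logic.
import time

def clean_nodes(nodes):
    print(f"cleaning {nodes}")                 # wipe leftover state from earlier networks

def initialize_network(topology):
    print(f"initializing {topology['name']}")  # run the platform's network initialization

def execute_functional_tests(topology):
    return {"passed": True}                    # run the scenarios for this topology

def destroy_network(topology):
    print(f"destroying {topology['name']}")    # free the nodes for the next topology

def run_topology_suite(topologies):
    for topology in topologies:
        clean_nodes(topology["nodes"])
        initialize_network(topology)
        time.sleep(topology.get("stabilize_seconds", 1))  # allow components/services to stabilize
        results = execute_functional_tests(topology)
        destroy_network(topology)
        print(topology["name"], results)

run_topology_suite([{"name": "qa-small", "nodes": ["node-0", "node-1"], "stabilize_seconds": 1}])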

Execution of functional and non-functional tests: 

Functional tests must be executed without fail, whereas non-functional tests are always subject to the changes being made. Non-functional tests become essential if there is a performance improvement or a fix required for a security vulnerability, or in the exceptional case of a fix that hurts performance.

Functional tests are implicitly covered in the network creation phase, and almost every organization focuses on (or gives priority to) automating functional testing. Non-functional testing has always been the lowest priority for most; however, it becomes very tedious if it has to be performed on multiple network topologies. It is rare to run non-functional testing for all network topologies, as it has very little dependency on different network topologies: most of the time, non-functional testing operates at the node level rather than depending upon a particular network topology.

Conclusion

In its Hype Cycle report for Blockchain Business for 2019, Gartner predicts that within five to 10 years, blockchain will have a transformational impact across industries. According to David Furlonger, distinguished research vice-president at Gartner, permissioned ledgers in several key areas of banking and investment services will see increased focus. In light of the uptick in interest from banking and investment services CIOs seeking to improve decades-old operations and processes, automation is desirable for driving ROI and efficiency as blockchain is incorporated into those operations.

Automated testing enables the developers to easily and quickly check new apps and updates for errors, defects, and other weaknesses. Infrastructure testing is one such area that organizations must automate as soon as possible if they desire to build robust decentralized applications. 

Magic FinServ’s automated test methodology is unique, and we have the relevant expertise to drive automation for testing Blockchain Infrastructure. We have had success with several clients who built financial products on blockchain platforms. 

To explore automated testing for blockchain infrastructure, write to us at mail@magicfinserv.com 

According to a recent forecast by Gartner, “by 2025, the business value added by blockchain will grow to slightly more than $176 billion, then surge to exceed $3.1 trillion by 2030.” Right from the voting process to the transfer of data for mission-critical projects, blockchain-based technology would be an integral part of the social, economic, and political setup the world over. 

There are many exciting components/features that make it possible for blockchain platforms to provide a secure decentralized architecture for activities ranging from processing transactions to storing immutable data. We have briefly discussed these in our earlier blog, which identified how various services and components make infrastructure testing a matter of utmost significance, while at the same time testing the developer’s or application team’s core competence.

Considering the immense impact blockchain will have on all aspects of human life in the days to come, it has become essential that clients investing in blockchain ensure that the nature of transactions is inviolable.

To ensure this inviolability, the infrastructure of the blockchain must work seamlessly. Hence the need for infrastructure-testing of blockchain to verify if all the constituent elements are operating as desired. 

What comprises infrastructure testing for Blockchain/Distributed ledger platform 

In simple terms, infrastructure-testing of blockchain networks translates into verifying whether the end-to-end blockchain core network and its constituent elements are operating as desired. It is critical as it determines the reliability of a product, which depends entirely on nodes spread across the globe.

For decentralized applications built on blockchain or distributed ledger platforms, where each constituent element is highly reliant on, or linked with, the others, any shortcoming or failure could jeopardize operations. Hence, to ensure continuity, reliability, and stability of services, infrastructure testing should be carried out with high focus.

Defining the constituent elements of a Blockchain
  • All distributed ledger platforms, including blockchain, have a dedicated service responsible for establishing communication between the nodes utilizing peer-to-peer networking or any other networking algorithm. 
  • There is also a component or service that makes the network of such applications fault-tolerant using consensus algorithms. 
  • Another critical aspect of blockchain platforms is reaching consensus on the state and transactional data to be processed, followed by persisting the manipulated data. 
  • When it comes to private networks, also known as consortium networks, there are many ways to assign permissions to each node to provide a secure and isolated medium among the participants. 

For confirming production readiness of applications built over these platforms, infrastructure testing is as important as any other supported functionality. Without verifying functionality, none of the applications can be deployed to production. Similarly, decentralized applications built over various platforms can be deployed to production only after the reliability of the infrastructure has been verified across all probable numbers of nodes.

What makes the entire exercise demanding are the following factors: 

  • Peer-to-peer (P2P) networking 
  • Consensus algorithms
  • Role-based nodes along with permission for each node (meant only for private networks)
  • State and transactional data consistency under high loads along with resilience test of nodes

Another vital characteristic to be considered is the number of nodes itself. Since such applications’ functionalities depend on the number of nodes, this is a key requirement. The number of nodes can vary depending upon:

  • Which service or component is to be tested 
  • How all the factors mentioned above impact the service or component

Importance of testing various components of Blockchain Infrastructure

Reliability testing

Reliability of the infrastructure is by far the most challenging phase for any blockchain developer or application team. This phase explores whether an application can run on the targeted infrastructure or not. Defining application reliability across multiple machines (a.k.a. nodes, servers, participants, etc.) increases complexity exponentially due to the permutations and combinations of failures.

Hence, wherever multiple machines are involved, it is the natural course of action for developers and application teams to measure application reliability on the infrastructure on which such applications will run. All the factors enlisted earlier attest that infrastructure testing is of prime consequence for decentralized applications built on all available platforms. 

Peer-to-Peer networking

If there is any flaw in peer-to-peer networking, then nodes will not communicate with each other. If nodes cannot establish connections with each other, then nodes will not be able to process the transaction with the same state. If nodes are not in the same state, then there will not be any new data manipulated and created to persist. In the case of blockchain, there will not be any new blocks. For the distributed ledger, there will not be any new data appended to the ledger. This may lead to chain forking or a messy state of data across the nodes that will eventually result in the network reaching a dead-end or getting stuck. 

Improper peer-to-peer network implementation can also lead to data exposure to unintended nodes that do not have permission to see the data. To overcome this risk of unintentional data exposure, proper testing must be performed, ensuring that the expected number of existing and new participants can participate within the network and that appropriate communication is established between nodes based on each node’s role and permissions.
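
For an Ethereum-style network that exposes JSON-RPC, a first-pass check of this kind can simply assert that every node sees the expected number of peers. The web3.py sketch below is illustrative; the endpoints and expected counts are placeholders for the designed topology:

# Sketch: verify each node has established the expected peer connections (web3.py).
from web3 import Web3

# Illustrative RPC endpoints and expected peer counts per the designed topology.
EXPECTED_PEERS = {
    "http://node-0:8545": 3,
    "http://node-1:8545": 3,
    "http://node-2:8545": 2,
}

def check_peering():
    failures = []
    for endpoint, expected in EXPECTED_PEERS.items():
        w3 = Web3(Web3.HTTPProvider(endpoint))
        actual = w3.net.peer_count  # JSON-RPC net_peerCount
        if actual != expected:
            failures.append((endpoint, expected, actual))
    return failures

if __name__ == "__main__":
    print(check_peering() or "peering matches the designed topology")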

Consensus algorithms: 

Consensus algorithms have two critical functions: 

  • Drive consensus by ensuring that a majority of nodes are processing new data with the same state
  • Provide fault tolerance for network

Consensus algorithms must be verified with all possible types of nodes and all probable permissions that can be defined for each node. To verify the consensus algorithms, multiple network topologies are needed. Improper verification will result in the network getting stuck. It would also result in sharing of data with nodes that were not supposed to get the data. 

Any flaw in consensus will result in a “stuck network” and cause forking of data. Worse, data can be manipulated by fraudulent nodes. Depending upon which consensus algorithm is used, the network topology can be created and verified against all the features claimed to be working.

Role-based nodes, along with their permission 

Each platform supports different roles for each node to ensure that nodes get only the intended information based on the defined permissions. Depending upon the different kinds of roles and their respective permissions, various network topologies are created to perform all required verifications. In case there is any missing verification, sensitive data is exposed to unintended nodes. The way data is shared between nodes is governed by consensus algorithms based on defined permission. 

Any flaw in the permission control mechanism can lead to sensitive data leakage. Data leakage is catastrophic, more so for private networks. The importance of accuracy cannot be emphasized enough in this case, and it can only be achieved by ensuring a proper testing mechanism is being utilized.

State and transactional data consistency 

As there can be any number of nodes in real-time, it is highly critical to verify that each node has the same state and transactional data. All complicated transactions must be performed with an adequately defined load to ensure that all nodes have the same state and transactional data. 

Resiliency-based verification must be performed so that all nodes can get to the same state and transactional data, even when a fault is intentionally introduced to randomly selected nodes with a running network. 
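
As an illustration for an Ethereum-style network, a simple consistency probe can compare the latest block reported by every node. The web3.py sketch below uses placeholder endpoints; a real check would also tolerate nodes that momentarily lag by a block or two:

# Sketch: check that all nodes converge on the same latest block (web3.py).
from web3 import Web3

NODE_ENDPOINTS = ["http://node-0:8545", "http://node-1:8545", "http://node-2:8545"]

def latest_blocks():
    """Return {endpoint: (block_number, block_hash)} for every node."""
    snapshot = {}
    for endpoint in NODE_ENDPOINTS:
        w3 = Web3(Web3.HTTPProvider(endpoint))
        block = w3.eth.get_block("latest")
        snapshot[endpoint] = (block.number, block.hash.hex())
    return snapshot

def is_consistent(snapshot):
    # Once the network settles, all nodes should report the same (number, hash) pair.
    return len(set(snapshot.values())) == 1

if __name__ == "__main__":
    snap = latest_blocks()
    print(snap, "consistent" if is_consistent(snap) else "nodes have diverged")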

Conclusion

To conclude, infrastructure testing should not be substituted with any traditional functional testing process. Furthermore, as this is a niche area, infrastructure testing must be entrusted to a partner with industry-wide experience and capable resources having a sound understanding of all the factors underlined above. Real-world experience in establishing testing processes for such platforms is a highly desired prerequisite. Without infrastructure testing, it is perilous to launch a product in the market. 

Magic FinServ has delivered multiple frameworks designed for all the above factors. With an in-depth knowledge of multiple blockchain platforms, we are in an enviable position to provide exactly what the client needs while ensuring the highest level of accuracy and running all frameworks following industry standards and timelines. As each customer has their own specific way of developing such platforms and choosing different algorithms for each factor, choosing an experienced team is undoubtedly the best option to establish an infrastructure testing process and automate end-to-end infrastructure testing.

To explore infrastructure testing for your Blockchain/DLT applications, write to us at mail@magicfinserv.com

Introduction

Investment research and analysis is beginning to look very different from what it did five years ago. While five years ago, the data deluge could have confounded asset management leaders, they now have a choice on how things could be done differently, thanks to AI and advanced analytics. Advanced analytics helps create value by eliminating biased decisions, enabling automatic processing of big data, and using alternative data sources to generate alpha. 

With multiple sources of data and emerging AI applications heralding a paradigm shift in the industry, portfolio managers and analysts who earlier used to manually sift through large volumes of unstructured data for investment research can now leverage the power of AI tools such as natural language processing and abstraction to simplify their task. Gathering insights from press releases, filing reports, financial statements, pitches and presentations, CSR disclosures, etc., is a herculean effort and consumes a significant amount of time. However, with AI-powered data extraction tools such as Magic DeepSight™, quick processing of large-scale data is possible and practical.

A tool like Magic DeepSight™ extracts relevant insights from existing data in a fraction of the time and cost of manual processing. However, the real value it delivers is in supplementing human intelligence with powerful insights, allowing analysts to direct their efforts towards high-value engagements.

Processing Unstructured Data Is Tough

There are multiple sources of information that front-office analysts process daily, which are critical to developing an informed investment recommendation. Drawing insights from these sources of structured and unstructured data is challenging and complex. They include 10-K reports, the relatively new ESG reports, investor reports, and various other company documents such as internal presentations and PDFs. The SEC EDGAR database makes it easy to access some of this data, but extracting the data from SEC EDGAR and identifying and then compiling relevant insights is still a tedious task. Unearthing insights from other unstructured documents also takes stupendous manual effort due to the lack of any automation. 

10-K Analysis using AI

More detailed than a company’s annual report, the 10-K is a veritable powerhouse of information. Therefore, accurate analysis of a 10-K report leads to a sounder understanding of the company. The five clear-cut sections of a 10-K report are business, risk factors, selected financial data, management discussion and analysis (MD&A), and financial statements and supplementary data, all of which are packed with value for analysts and investors alike. Due to the breadth and scope of this information, handling it is inevitably time-consuming. However, two sections usually require more attention than the others to analyze, due to their complexity and the existence of possible hidden anomalies: the “Risk Factors” and the “MD&A”. The “Risk Factors” section outlines all current and potential risks posed to the company, usually in order of importance. In contrast, the “Management’s Discussion and Analysis of Financial Condition and Results of Operations” (MD&A) section presents the company management’s perspective on the performance of the previous fiscal year and its future business plans.

As front-office analysts sift through multiple 10-K reports and other documents in a day, inconsistencies in analysis can inadvertently creep in. 

They can miss important information, especially in the MD&A and Risk Factors sections, as they have many areas to study and more reports waiting in the queue. Even after extracting key insights, it takes time to compare the metrics in the disclosures against a company’s previous filings and against industry benchmarks. 

Second, there is the risk of human bias and error, where relevant information may be overlooked. Invariably, even the best fund managers succumb to the emotional and cognitive biases inherent in all of us, whether confirmation bias, the bandwagon effect, loss aversion, or the various other biases that behavioral psychologists have formally defined. Failure to consider these issues can lead, and often does lead, to suboptimal asset-allocation decisions. 

Using AI to analyze the textual information in the disclosures made within 10-Ks can considerably cut through this lengthy process. Data extraction tools can parse these chunks of text to retrieve relevant insights, and a tool or platform custom-built for your enterprise and trained in the scope of your domain can deliver this information to your business applications directly. More documents can be processed in a shorter time frame, and, armed with new insights, analysts can use their time to take a more in-depth look into the company in question. Implementing an automated AI-based system also removes human error, allowing investment strategies to be chosen that are significantly more objective, in both their formulation and execution. 
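
As a toy illustration of the parsing step (not a description of how Magic DeepSight™ works internally), the Python snippet below slices the “Risk Factors” and MD&A sections out of a 10-K’s plain text with regular expressions; production-grade parsing of real filings needs far more robust handling:

# Toy sketch: slice the Risk Factors and MD&A sections out of a 10-K's plain text.
import re

SECTIONS = {
    # Item 1A ends where Item 1B begins; Item 7 ends where Item 7A begins.
    "risk_factors": r"Item\s+1A\.?\s+Risk\s+Factors(.*?)Item\s+1B\.?",
    "mdna": r"Item\s+7\.?\s+Management.?s\s+Discussion\s+and\s+Analysis(.*?)Item\s+7A\.?",
}

def extract_sections(filing_text: str) -> dict:
    results = {}
    for name, pattern in SECTIONS.items():
        match = re.search(pattern, filing_text, re.IGNORECASE | re.DOTALL)
        results[name] = match.group(1).strip() if match else None
    return results

# Usage: pass the full text of a 10-K (e.g. downloaded from EDGAR) to extract_sections().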

Analysing ESG Reports

Most public and some private companies today are rated on their environmental, social, and governance (ESG) performance. Companies usually communicate their key ESG initiatives yearly on their websites as a PDF document, and stakeholders study these ESG reports to assess a company’s ESG conduct. Investment decisions and brand perception can hinge on these ratings, and hence care has to be taken to process the information carefully. In general, higher ESG ratings are positively correlated with valuation and profitability and negatively correlated with volatility. An increased preference for socially responsible investments is most prevalent in the Gen Z and Millennial demographics. As they are set to make up 72% of the global workforce by 2029, they are also exhibiting greater concern about organizations’ and employers’ stance on environmental and social issues. This is bringing under scrutiny a company’s value creation with respect to the ethical obligations that impact the society it operates in.

Although ESG reports are significant when it comes to a company’s evaluation by asset managers, investors, and analysts, these reports and ratings are made available by third-party providers, so unlike SEC filings there is little to no uniformity among them. Providers tend to have their own methodology to determine the ratings, and the format of an ESG report varies from provider to provider, making the process of interpreting and analyzing these reports complicated. For example, Bloomberg, a leading ESG data provider, covers 120 ESG indicators, from carbon emissions and climate change effects to executive compensation and shareholder rights. Analysts spend research hours reading reports and managing complex analysis rubrics to evaluate these metrics before making informed investment decisions.

However, AI can make the entire process of extracting relevant insights easy. AI-powered data cleansing and Natural Language Processing (NLP) tools can extract concise information, such as key ESG initiatives, from PDF documents and greatly reduce the text to learn from. NLP can also help consolidate reports into well-defined bits of information that can then be plugged into analytical models, including market risk assessments, as well as other information fields. 

How Technology Aids The Process

A data extraction tool like Magic DeepSight™ can quickly process large-scale data and also parse unstructured content and alternative data sources like web search trends, social media data, and website traffic. Magic DeepSight™ deploys cognitive technologies like NLP, NLG, and machine learning for this. Another advantage is its ability to plug the extracted information into relevant business applications without human intervention. 

About NLP and NLG

Natural Language Processing (NLP) understands and contextualises unstructured text into structured data. Natural Language Generation (NLG) analyses this structured data and transforms it into legible and accessible text. Both processes are powered by machine learning and allow computers to generate text reports in natural human language. The result is comprehensive, machine-generated text with insights that were previously invisible. But how reliable are they?

The machine learning approach, which includes deep learning, builds intelligence from a vast number of corrective iterations. It is based on a self-correcting algorithm: a continuous learning loop that gets more relevant and accurate the more it is used. NLP and AI-driven tools, when trained in the language of a specific business ecosystem, like asset management, can deliver valuable insights for every stakeholder across multiple software environments and in the appropriate fields.

Benefits of Using Magic DeepSight™ for Investment Research

  1. Reduced personnel effort

Magic DeepSight™ extracts, processes, and delivers relevant data directly into your business applications, saving analysts’ time and enterprises’ capital.

  2. Better decision-making

By freeing up as much as 70% of the time invested in data extraction, tagging, and management, Magic DeepSight™ recasts the analysis process. It also supplements decision-making processes with ready insights. 

  3. Improved data-accuracy

Magic DeepSight™ validates the data at source. In doing so, it prevents errors and inefficiencies from creeping downstream to other systems. 

  4. More revenue opportunities

With reduced manual workload and emergence of new insights, teams can focus on revenue generation and use the knowledge generated to build efficient and strategic frameworks. 

In Conclusion

Application of AI to the assiduous task of investment research can help analysts and portfolio managers assess metrics quickly, save time, energy and money and make better-informed decisions in due course. The time consumed by manual investment research, especially 10-K analysis, is a legacy problem for financial institutions. Coupled with emerging alternative data sources, such as ESG reports, investment research is more complicated today. After completing research, analysts are left with only a small percentage of their time for actual analysis and decision-making. 

A tool like Magic DeepSight™ facilitates the research process, improves predictions, investment decision-making, and creativity. It could effectively save about 46 hours of effort and speed up data extraction, tagging, and management by 70%. In doing so, it brings unique business value and supports better-informed investment decisions. However, despite AI’s transformative potential, relatively few investment professionals are currently using AI/big data techniques in their investment processes. While portfolio managers continue to rely on Excel and other necessary market data tools, the ability to harness AI’s untapped potential might just be the biggest differentiator for enterprises in the coming decade. 

To explore Magic DeepSight™ for your organization, write to us at mail@magicfinserv.com or Request a Demo

Background: Ethereum is the first programmable blockchain platform that lets the developer community build business logic in the form of smart contracts, which in turn enable decentralized applications for any business use case. Once smart contracts are developed, they need to be registered on the blockchain and then deployed to it. After deployment, the contract is assigned an address through which its methods can be executed via an abstraction layer built over the ABI. Web3 is the most popular module for interacting with a local or remote node participating in the underlying blockchain network built over Ethereum.
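
For illustration, a minimal web3.py sketch of this interaction is shown below; the contract address and ABI fragment are placeholders, and the getValue method is hypothetical:

# Minimal web3.py sketch: connect to a node and call a deployed contract via its ABI.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # local or remote node endpoint

CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder address
CONTRACT_ABI = [  # placeholder ABI fragment for a read-only getter
    {"name": "getValue", "inputs": [], "outputs": [{"type": "uint256", "name": ""}],
     "stateMutability": "view", "type": "function"}
]

contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)
print("latest block:", w3.eth.block_number)                    # basic connectivity check
print("getValue() ->", contract.functions.getValue().call())   # execute a contract method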

Define Decentralized Application Architecture for Testing: Needless to say, testing any decentralized application built over a blockchain platform is not only highly complex but also requires a specialized skill set and the analytical mindset of a white-box tester. At Magic FinServ, we possess rich, real-world experience with some of the most complicated aspects of testing blockchain-based decentralized applications. Based on this experience, our strategy divides a blockchain-based decentralized application into three isolated layers from a testing perspective –

1. Lowest-Layer – Blockchain platform to provide a platform on which smart contracts can be executed. 

a. Ethereum Virtual Machine

b. Encryption (Hashing & Digital Signature by using cryptography)

c. P2P Networking

d. Consensus Algorithm

e.  Blockchain Data & State of the network (Key-Value storage)

2.  Middle-Layer – Business Layer (Smart Contract) to build business logic for business use cases

a. Smart Contract development – Smart Contract Compilation, Deployment & Execution in Test Network

b. Smart Contract Audit

3. Upper-Layer – API Layer for Contracts, Nodes, Blocks, Messages, Accounts, Key Management & Miscellaneous endpoints to provide an interface to execute business logic and get updates on the state of the network at any given point in time. These interfaces can be embedded between upstream & downstream as well.

Based on these defined components of blockchain, we build an encompassing generic testing strategy for each layer in 2 broad categories –

1. Functional: As the category name suggests, this category ensures that all components belonging to each layer function as per the acceptance criteria defined by the business user/technical analyst/business analyst. We prefer to include system/integration testing under this category to ensure not only that all components within each layer work as defined, but also that, as a complete system, they accomplish the overall business use case. 

2. Non-Functional: This category covers all kinds of testing other than functional testing like Performance, Security, Usability, Volatility & Resiliency testing not only at Node level but container level as well if Docker is being used to host any service.

In defining the generic testing strategy for these two broad categories, we recognize that the infrastructure needs to be set up first and that it will not be the same every time. Before moving ahead, we need to answer a few other questions –

Question 1: Why is setting up infrastructure the most critical and essential activity when strategizing blockchain-based application testing?  

Question 2: What potential challenges do testers face while setting up infrastructure?

Question 3: What solutions do testers have to overcome the infrastructure setup challenges?

To answer the first question:

We need to take a few steps backward to understand what we do for testing any software application in the traditional approach. For starting any software application testing, an environment has to be set up, but that is always a one-time activity until the development team does any significant change to the underlying platforms or technology. However, that happens very rarely. So testers can continue with testing without much worry about infrastructure.

The two core concepts of Blockchain technology are P2P networking & consensus algorithms. Testing these two components is heavily dependent on infrastructure setup, meaning how many nodes we need to test one feature or the other.

For P2P, we need a different number of connected nodes with an automated way to kill nodes randomly & then observe the behavior in each condition.

For consensus, it depends on what kind of consensus algorithm is being used; based on the nature of that consensus, different types of nodes, each in different numbers, will be needed.

Another differentiating factor that is not applicable for public blockchain but has a significant impact on a private blockchain is different types of permission to different nodes.

There is a frequent requirement to keep changing network topology for verifying most of the features of decentralized applications.

By now, we know how important it is to change the network topology for each feature; otherwise, testing would not be as effective. A blockchain network is a collection of many machines (a.k.a. nodes) connected through peer-to-peer networking, so it is always a priority to automate the infrastructure required to mimic the network topology needed for testing.
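
A rough sketch of that automation is shown below, using the Python Docker SDK to randomly stop and restart node containers while P2P behaviour is observed; the container name prefix is illustrative:

# Sketch: randomly kill and restart node containers to observe P2P behaviour under churn.
import random
import time
import docker

client = docker.from_env()

def chaos_round(name_prefix: str = "dlt-node", kill_count: int = 1, downtime: int = 30):
    nodes = [c for c in client.containers.list() if c.name.startswith(name_prefix)]
    victims = random.sample(nodes, min(kill_count, len(nodes)))
    for container in victims:
        container.stop()                 # simulate an abrupt node failure
    time.sleep(downtime)                 # window in which network behaviour is observed
    for container in victims:
        container.start()                # bring the nodes back and let them re-peer
    return [c.name for c in victims]

# Usage: call chaos_round() between assertions that the network keeps processing transactions.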

To answer the second question:

1. If we manually spin up instances (let’s assume five instances) with the required software and user setup, we need to spend almost 2–3 hours per instance.

2. Manually setting up machines is highly error-prone & mundane. Even simple automation does not help until the automation framework is intelligent enough to understand the need for different network topologies.

3. Due to agile methodology adoption, spending so much time setting up just infrastructure is not acceptable as the testing team usually does not have that length of time to complete testing for any given sprint.

4. All testers have to understand infrastructure setup as all need to change network topology to test most of the features. Finding a tester with good infrastructure knowledge is also challenging. 

5. Invalid network topology, most of the time, does not show an immediate failure due to the blockchain concept’s inherent nature. Eventually, an incorrect network topology leads to time and effort spent on non-productive testing without finding any potential bugs.

6. High defect rejection ratio by Dev, either due to incorrect network topology or due to incorrect peering among nodes

To answer the third question:

There are four ways to set up a network of nodes –

1. Physical machines with LAN

2. Virtual Machines

3. Docker containers on the same machine, each of which can be considered an isolated machine

4. Cloud Platform

We use Docker containers & cloud platforms to set up the infrastructure for testing blockchain-based applications as setting up physical machines, or virtual machines is not viable from a maintenance perspective. 

Physical machines with LAN: Setting up a blockchain network with physical devices is tough, and scalability is challenging since we need additional devices to achieve the desired testing quality. During infrastructure testing, we need to make machines (a.k.a. nodes) go up and down to verify volatility, which is a cumbersome process for the tester as well. Setting up the network with physical devices requires physical space and needs maintenance of these machines at regular intervals. We usually do not recommend this option; however, if a customer requires the testing to be done in such a manner, we can define a detailed strategy to execute it. 

Virtual Machines: Compared to a network of physical machines, virtual machines have a lot of advantages. However, increasing the number of VMs on an underlying device complicates matters, since maintaining VMs is not user friendly. Another disadvantage is that we need to hardcode the limits of all the resources beforehand. Combining Option 1 and Option 2 (multiple physical machines, each with multiple VMs) seems to be a better choice, although it still requires lots of maintenance and carries overheads that act as a time sink for the tester. As reducing time to test is a critical aspect of quality delivery, we focus on saving as much time as possible to invest in other higher-value elements of testing.

The advantage of using a cloud platform lies in the ability to spin up as many machines as needed without the overheads of maintenance or any other physical activity. However, it is still an uphill task to maintain such a network with multiple machines on the cloud too. Eventually, we considered Docker containers (option 3) together with a cloud platform (option 4) and concluded that by combining the two, we could create a very solid strategy for performing infrastructure testing while overcoming the various problems listed above. 

Based on our real-world experience of trying the various options individually and in combination, our recommended approach is the following process. 

Always perform Sanity/Smoke testing for any build with docker containers. Once all sanity/smoke tests finish without any failure, then switch to replicate the required network topology for functional testing of new enhancements & features.

The advantages of our approach are

1. Build failure issues can be found in less time & can be reported back to dev without any delay that cloud infrastructure introduces. Before taking this approach, we had to spend 2-3 hours to report any build failure bug, whereas the same can be caught in 5-10 minutes as we always run selective test cases under sanity/smoke. 

2. Saved the cost of cloud infrastructure in case of failure in the build, as there is no uptime in case of build failure. 

3. Saved a lot of time for the testing team to spend in infrastructure setup. 

4. Dev team gets more time to fix issues found in sanity/smoke testing as it gets reported in just a few minutes. 

5. Significant reduction in rejection of bugs by the development team. 

6. Timely delivery percentage for the build, without any major bug, has increased significantly. 

A deeper dive into the testing strategy using Docker containers with a cloud platform will be covered in an upcoming blog, followed by our automation framework for infrastructure setup testing. We will also try to answer the questions customers ask most frequently –

Question 1: Why should customers be concerned about infrastructure testing for decentralized applications?

Question 2: Why should customers look for an automated way of infrastructure setup testing for blockchain-based decentralized applications?

Stay tuned for the upcoming blogs and follow us on LinkedIn for timely updates.  

What are Smart Contracts?

Smart contracts are translations of an agreement, including terms and conditions, into a computational code. Blockchain follows the “No Central Authority” concept, and its primary purpose is to maintain transaction records without a central authority. To automate this process of recording transactions, Smart Contracts were constituted. Smart Contracts carry several beneficial traits, including automation, immutability, and a self-executing mode.

What is a DAML Smart Contract?

DAML is the open-source language from Digital Asset created to support DLT, or distributed ledger technology. It allows multiple parties to transact in a secure and permissioned way. Using DAML as the coding language enables developers to focus only on building the smart contracts’ business logic rather than fretting about the nitty-gritty of the underlying technology. DAML smart contracts run on various blockchain platforms and regular SQL databases while providing the same logical privacy model and immutability guarantees.

A personal perspective

As a technology leader with years of experience, the one line I remember from my early days is the “Write Once, Run Anywhere” slogan used by Sun Microsystems to highlight the Java language’s cross-platform benefits.

I believe, in the coming years, DAML is the language that will enjoy similar popularity as Java due to its cross-platform benefits, ease of use, and versatility. DAML can revolutionize how business processes are managed by simplifying the contracts and making them ‘smarter.’

Comparing DAML

DAML V/S General Purpose Languages

Today, a few popular general-purpose languages are in use for creating multi-party blockchain applications, e.g., Java, Go, and Kotlin.

All of these can also be used to create smart contracts. But the challenge lies in the sheer complexity of the task at hand.  The code that needs to be produced to build an effective smart contract, using these languages, is daunting. DAML can achieve the same result by writing 5-7 times less code in a much simpler manner. 

Smart Contract basic data types are contract-oriented (parties, observers, signatories/others), which is in direct contrast to the general-purpose languages (int/float/double). So, the very essence of smart contract languages such as DAML is one and only one – contracts, making it a superior choice for writing Smart Contracts. 

Comparison with Existing Smart Contract Languages

The domain of smart contracts is better handled by languages that have been purpose-built for them, like Solidity, DAML, AML, and others. Among these smart contract languages, DAML is the only one that is open source and Write Once, Run Anywhere (WORA). The DAML contract type is also private in nature. At a logical level, DAML enforces a strict privacy policy; however, at the persistence level, different ledgers might implement privacy in different ways.

DAML for Private and Public Ledgers

The two types of ledgers, private and public, serve different purposes and should be used accordingly. The underlying concept is that information on the ledgers is immutable once created.

Public Ledgers: Open to all, and anybody can join the system, and each member has access/read/write transactions. Examples: Bitcoin/ Ethereum/ others

Private Ledgers: Also known as permissioned networks or permissioned blockchains, these have limitations on participation, higher security, and limited permissions. Examples: Hyperledger Fabric and Corda. Some private ledgers offer different privacy and data-sharing settings; Hyperledger Sawtooth, for example, although permissioned, allows all nodes to receive a copy of all transactions. 

DAML, as an open-source language, allows the involved parties to transact in a secure and permissioned way, enabling developers to focus on the business logic rather than spending precious time fine-tuning the underlying persistence technology.

At a logical level, DAML has a strict policy for permissioned access. However, at the persistence level, different ledgers might implement privacy in different ways. 

Sample of a Smart Contract: 

Reporting a trade transaction between two counterparties to a regulator or reviewing authority using smart contracts.

module Finance where

template Finance
  with
    exampleParty : Party
    exampleParty2 : Party
    exampleParty3 : Party
    regulator : Party
    exampleParameter : Text
    -- more parameters here
  where
    signatory exampleParty, exampleParty2
    observer regulator
    controller exampleParty can
      UpdateExampleParameter : ContractId Finance
        with
          newexampleParameter : Text
        do
          create this with
            exampleParameter = newexampleParameter

template name: the template keyword, followed by the names of the parameters and their types.

template body: introduced by the where keyword, it can include:

template-local definitions: the let keyword lets you make definitions that have access to the contract arguments and are available in the rest of the template definition.

signatories: the signatory keyword (required). The parties (see the Party type) who must consent to the creation of an instance of this contract. You won’t be able to create an instance of this contract until all of these parties have authorized it.

observers: the observer keyword (optional). Parties that aren’t signatories but whom you still want to be able to see this contract. For example, the SEC wants to know about every contract created, so the SEC should be made aware of this contract.

agreement (optional): text that describes the agreement that this contract represents.

Explanation of the code snippet

DAML is whitespace-aware and uses layout to structure blocks. Everything that is below the first line is indented and thus part of the template’s body.

The signatory keyword specifies the signatories of a contract instance. These are the parties whose authority is required to create the contract or archive it again – just like a real contract. Every contract must have at least one signatory.

Here the contract is created between the two signatory parties, exampleParty and exampleParty2, and the regulator is the observer. Every transaction is visible to the observer, i.e., the regulator (playing the role of the SEC in this case), so the SEC can look at every transaction. This is how smart contracts can be created in this space.

The DAML disclosure policy ensures that exampleParty3 will not be able to view the transactions, as it is neither a signatory, an observer, nor a controller; it is just a party referenced in the contract.

Here is a link to the repository provided by DAML which contains examples for several such use cases modeled in DAML.

  1. How to write smart contracts using DAML and various use-cases

https://github.com/digital-asset/ex-models

  2. Ledger implementation enabling DAML applications to run on Hyperledger Fabric 2.x

https://github.com/digital-asset/daml-on-fabric and for learning

Compilation and Deployment of DAML

DAML provides both the language and the runtime environment (in the form of libraries known as the DAML SDK). Developers need to focus only on writing smart contracts (using the language features provided by the DAML SDK) without bothering about the underlying persistence layer. It supports existing data structures (List/Map/Tuple) and also provides the functionality to create new data structures. 

Other notable features of DAML 

  • A .dar file is the result of compilation done through the DAML Assistant; eventually, .dar files are uploaded onto the ledger so that contracts can be created from the templates in the file. A .dar file is made up of multiple .dalf files. A .dalf file is the output of a compiled DAML package or library, and its underlying format is DAML-LF.
  • Sandbox is a lightweight, in-memory ledger implementation available only in the dev environment. 
  • Navigator is a tool for exploring what is on the ledger; it shows which contracts can be seen by different parties and lets you submit commands on behalf of those parties.
  • DAML gives you the ability to deploy your smart contracts on the local (in-memory) ledger so that various scenarios can be easily tested. 

Testing DAML Smart Contracts

1) DAML has a built-in mechanism for testing templates called ‘scenarios’. Scenarios emulate the ledger. One can specify a linear sequence of actions that various parties take, and subsequently these are evaluated with the same consistency, authorization, and privacy rules as they would be on the sandbox ledger or ledger server. DAML Studio shows you the resulting transaction graph.

2) Magic FinServ launched its Test Automation suite called Intelligent Scenario Robot, or IsRobo™, an AI-driven Scenario Generator that helps developers test smart contracts written in DAML. It generates the unit test cases (negative and positive test cases) scenarios for the given smart-contract without any human intervention, purely based on AI.

Usage in Capital Markets

Smart contracts, in general, have excellent applications across the capital markets industry. I shall cover some use cases in subsequent blogs, outlining how multi-party workflows within enterprises can benefit by minimizing data reconciliation between parties and allowing mutualization of business processes. Some popular applications currently being explored by Magic FinServ are: 

  • Onboarding KYC
  • Reference data management
  • Settlement and clearing for trades
  • Regulatory reporting
  • Option writing contracts (Derivatives industry)

Recent Noteworthy Implementations of DAML Smart Contracts are: 

  • The International Swaps and Derivatives Association (ISDA) is running a pilot of its Common Domain Model (CDM) for clearing interest rate derivatives using a distributed ledger.
  • The Australian Securities Exchange (ASX) and Swiss investment bank UBS continue to provide input to validate the CDM's additional functionality, alongside ISDA and Digital Asset.

DAML Certification process

To get hands-on experience with DAML, developers have free access to docs.daml.com, where they can work through the study material, download the runtime, and build sample programs. To reinforce that learning and turn it into a valuable skill, it is worth becoming a DAML-certified engineer: the fee is reasonable and the benefits are manifold. DAML developers are still scarce in the market, which makes it a rather sought-after skill as well.

Conclusion

DAML is poised to revolutionize the way business processes are managed and transactions are conducted.

The smart contracts that are developed on open-source DAML can run on multiple DLTs / blockchains and databases without requiring any changes (write once, run anywhere). 

With the varied applications and relative ease of learning, DAML is surely emerging as a significant skill to add to your bouquet if you are a technologist in the capital markets domain. 

To explore DAML applications with Magic FinServ, read more here.

To schedule a demo, write to us at mail@magicfinserv.com.

The accessibility, accuracy, and wealth of data on the Securities and Exchange Commission's EDGAR filing system make it an invaluable resource for investors, asset managers, and analysts alike. Cognitive technologies are changing the way financial institutions and individuals use data reservoirs like the SEC EDGAR. In a world increasingly powered by data, artificial intelligence-based technologies for analytics and front-office processes are barely optional anymore. Technology solutions are getting smarter, cheaper, and more accurate, meaning your team's efforts can be directed towards high-value engagements and strategic implementations.

DeepSight™ by Magic FinServ is a tailor-made solution for the unstructured data-related challenges of the financial services industry. It uses cutting-edge technology to help you gain more accurate insights from unstructured and structured data, such as datasets from the EDGAR website, emails, contracts, and documents, saving over 70% of existing costs.

AI-based solutions significantly enhance the ability to extract information from the massive data deluge and turn it into knowledge, providing critical inputs for decision-making. This often translates into higher competitiveness and, therefore, higher revenue.

What are the challenges of SEC’s EDGAR?

The SEC's EDGAR holds vast amounts of data from public companies' corporate filings, including quarterly and annual reports. While these reports are comprehensive and more accessible on public portals than before, perusing daily filings and forms still requires tedious, diligent effort. There is also an increased margin of human error and bias when manually combing through data at such volumes. The quick availability of this public data also means that market competitors can track and process it in near real time.

The numerous utilization possibilities of this data come with challenges in analysis and application. Integrating external data into fund management operations has been a legacy problem. Manual front-office processing of massive datasets is tedious and fragmented today, though that is changing fast. Analyzing such large amounts of data is time-consuming and expensive; as a result, most analysts use only a handful of data points to guide their investment decisions, leaving untapped potential trapped in the rest.

After lukewarm organic net flows of 1.1 percent per year in the US between 2013 and 2018, cognitive technologies have now brought about a long-due intervention in the form of digital reinvention. Previously limited to applications in the IT industry, these technologies have been transforming capital management for only a short while, but with remarkable impact. While their appearance in finance is novel, they present unique ways to extract and manage data.

How can technology help with the processing of EDGAR data used in the industry?

Data from EDGAR is being used across various business applications. Intelligent reporting, zero redundancies, and timely updates ultimately drive the quality of investment decisions. As investment decisions can be highly time-sensitive, especially during volatile economic conditions, extracting and tracking relevant information in real-time is crucial. 

Magic DeepSight™ is trained to extract relevant, precise information from the SEC's EDGAR, organize that data to your requirements, and deliver it in a spreadsheet, via APIs, or, even better, ingest it directly into your business applications. Because Magic DeepSight™ is built from the ground up with AI, it has a built-in feedback loop that lets the system be trained further with every use.

This focused information retrieval and precision analysis speeds up and enhances the investment assessment process of a fund or asset manager, a process that is fraught with tedious data analysis, complicated calculations, and bias when done solely manually.

Investment advice collaterals that are accurate, informative, and intelligible are part of the value derived through Magic DeepSight™. NLP and AI-driven tools, especially those trained in the language of your business ecosystem, can help you derive insights across multiple software environments and deliver them into the appropriate fields. And all of it can be customized for the stakeholder in question.

Meanwhile, tighter regulations on the market have also increased the costs of compliance. Technology offsets these costs with unerring and timely fulfillment of regulatory requirements. The SEC has had company filings under the magnifying glass in recent exams, and hefty fines are being imposed on firms that do not meet the filing norms. Beyond the pecuniary implications, fulfilling these requirements pertains to your firm's health and the value perceived by your investors.

What’s wrong with doing it manually?

Most front-office processes continue to be manual today, forcing front-office analysts to slog through large chunks of information to gain valuable insights. The information on EDGAR is structured uniformly, but the lengthy retrieval process negates the benefits of that organization. For example, if you wish to know acquisition-related information about a public company, you can access its Form S-4 and 8-K filings easily on the SEC EDGAR website. But going through all the text to find precisely what is needed takes time. With Magic DeepSight™, you can automate this extraction so analysts can focus on the next steps.

And when a team of analysts is working through multiple datasets quickly, relevant insights from data that falls outside the few main parameters under consideration are likely to be overlooked. If such a hurdle arises even with organized data, processing unstructured documents with large blocks of text, press releases, company websites, and PowerPoint presentations unquestionably takes much longer and is equally problematic. With Magic DeepSight™, you can overcome this blind spot. It can quickly process all values in a given dataset, and, using NLP, it efficiently extracts meaningful information from unstructured data across multiple sources. Using this information, Magic DeepSight™ can surface new patterns and insights to complement your research team.

How does Magic DeepSight™ transform these processes?

While most data management solutions available in the market are industry-agnostic, Magic DeepSight™ is purpose-built for financial domain enterprises. AI models such as Magic DeepSight™'s, trained on financial markets datasets, can comprehend and extract the right data points. Built with an advanced, domain-trained NLP engine, it analyzes data from an industry perspective, customized to your needs. Magic DeepSight™ is available on all cloud environments and on-premises if needed. Moreover, it integrates with your existing business applications without disrupting your current workflow.

DeepSight™ is built on a reliable stack of open-source libraries, complemented by custom code wherever needed and trained to perfection by our team. This versatility is also what makes it easily scalable. Magic DeepSight™ can handle a wide range of information formats and select the most appropriate library for any dataset. With Magic DeepSight™, searching, downloading, and extracting relevant information from the SEC EDGAR database becomes easy and efficient. Information on forms such as the disclosures in a 10-K, including risk assessment, governance, conflicts of interest, etc., is accurately summarized in a fraction of the time previously taken, freeing up space for faster and better-informed decision-making.

But it is more than just a data extraction tool. DeepSight™ also integrates with other technologies such as RPA, smart contracts, and workflow automation, making it an end-to-end solution that adds value to each step of your business processes.

Our team can also customize DeepSight™ to your enterprise's requirements, delivering automated, standardized, and optimized information-driven processes across front-to-back offices.

What business value does Magic DeepSight™ provide?

  • It automates the process of wading through vast amounts of data to extract meaningful insights, saving personnel time and effort and reducing costs by up to 70%.
  • It becomes an asset to the research process by employing NLP to extract meaningful information from an assortment of unstructured document types and formats, improving the overall quality of your firm's data reservoir.
  • The broader band of insights made possible by AI offers a richer perspective that was previously hidden, helping you drive higher revenues through better-informed investment decisions.

Magic DeepSight™ digitally transforms your overall operations. Firms that adopt AI, data, and analytics will be better suited to optimize their business applications. 

To explore Magic DeepSight™ for your organization, write to us at mail@magicfinserv.com.

Until recently, your enterprise may have considered smart contracts a tool to bridge silos from one organization to another, that is, to establish external connectivity over blockchain. However, what if the same concept were applied within the firm to address enterprise-wide data reconciliation and system integration/consolidation challenges, expediting time to market and streamlining reporting (e.g., internal, regulatory, FP&A, and supplier risk reporting)?

After all, about 70-80% of reconciliation activity takes place within the enterprise. The best part? A firm can do this with minimal disruption to its current application suite, operating system, and tech stack. We will look at traditional approaches and explain how smart contracts are the way to get started on a journey from which you never look back.

To set the stage, let's cover the self-evident truths. Reconciliation tools are expensive, and third-party tool implementations typically require multi-year (and multi-million-dollar) investments. Over 70% of reconciliation requirements are within the enterprise, among internal systems. Most reconciliation resolutions start with an unstructured data input (PDF, email, spreadsheet) that requires manual review and scrubbing before it can be ingested easily. For mission-critical processes, this "readiness of data" lag can result in delays and lost business, backlogs, unjustifiable cost and, worst of all, regulatory penalties.

Magic FinServ proposes a three-fold approach to take on this challenge.

  1. Data readiness: tackle the unstructured data challenge using AI and ML utilities that can access data sources and ingest them into a structured format. Reconciliation is often necessary because of incorrect or incomplete data; ML can anticipate what is wrong or missing based on past transactions and remediate it. This is auto-reconciliation.
  2. Given that unstructured data elements may reside in fragmented platforms or organizational silos, the firm must have an intelligent way of integrating and mutualizing them with minimal intervention. An ETL pipeline or data feed may look appealing initially; however, these are error-prone and do not remove the manual reconciliation tasks of exception management. Alternatively, a smart contract-based approach can streamline your rule-based processes to create a single data source.
  3. Seamless integration to minimize the disconnect between applications. The goal, ideally, is to create an environment where reconciliation is no longer required.

We have partnered with Digital Asset to outline a solution that brings together an intelligent data extraction tool, DAML smart contracts, and a capital markets-focused integration partner to reduce end-to-end manual reconciliation challenges for the enterprise.

Problem Statement & Traditional Approach

Given that most enterprise business processes run through multiple disparate applications, each with its own database, a monolithic application approach has proven close to impossible, and it is not recommended given the well-known drawbacks of monolithic architectures. Traditionally, this challenge has been addressed with integration tools such as an Enterprise Service Bus (ESB) or SOA, where the business gets consumed in a cycle of data aggregation, cleansing, and reconciliation. Each database becomes a virtual pipeline of a business process, and an additional staging layer is created to deliver internal and external analytics. In addition, these integration tools are not intelligent: they only capture workflows through adapters (ad hoc business logic) and do not offer privacy restrictions from the outset.

Solution

Digital Asset's DAML on X initiative extends the concept of the smart contract onto multiple platforms, including databases. DAML on X interfaces with the underlying databases using standard protocols, and the smart contract mutualizes the data validation rules as well as the access privileges. Once you create a DAML smart contract, the integrity of the process is built into the code itself, and the DAML runtime makes communication between disparate systems seamless. It is in DAML's DNA to function as a platform-independent programming language built specifically for multi-party applications.
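As a hedged sketch of what such a mutualized record might look like, a single DAML template can carry both the validation rules and the access privileges for a shared golden-source record; the template, party, and field names below are illustrative assumptions, not part of the partnered solution described here.

module GoldenSource where

-- Illustrative "golden source" record shared across internal systems.
template GoldenTradeRecord
  with
    operations : Party      -- owner of the system of record
    finance    : Party      -- downstream consumer, e.g. FP&A reporting
    riskDesk   : Party      -- downstream consumer, e.g. supplier risk
    tradeId    : Text
    notional   : Decimal
  where
    signatory operations
    -- Access privileges are part of the model rather than bolted on later.
    observer finance, riskDesk
    -- Validation rules are mutualized in the contract itself:
    -- the ledger rejects any record that violates them.
    ensure notional > 0.0 && tradeId /= ""

    -- Only operations may correct the record; the DAML runtime rejects
    -- an Amend submitted by any other party.
    choice Amend : ContractId GoldenTradeRecord
      with
        newNotional : Decimal
      controller operations
      do create this with notional = newNotional

The same combination of ensure clauses, signatories, observers, and choice controllers can encode whichever validation and entitlement rules a given reconciliation workflow requires.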

Without replacing your current architecture, such as the ESB or your institutional vendor management tool of choice, you can use the DAML runtime to make application communication seamless and have your ESB invoke the necessary elements of your smart contract via exposed APIs.

Handling Privacy, Entitlements & Identity Management

Every party in the smart contract has a "party ID" that plugs directly into the identity management solution you use institutionally. You can even embed "trustless authentication".

The idea is that entitlements, rights, and obligations are baked directly into the language itself, as opposed to a typical business process management tool, where you build out your business process and only marry in the entitlements in phase 3 of the project, by which time you may realize the workflow needs to change.

DAML handles this upfront: authentication is taken care of by the persistence layer/IDM you decide on. The smart contract template corresponds to a data schema in a database, and the signatories and controllers in our example represent role-level permissioning of who can do what, when, and who can see what, when.

 The image below shows how the golden source of data is generated.


It is a purpose-built product with automatic disclosure and privacy parameters out of the box. You do not need to keep checking in your application code whether the party exercising a command is actually allowed to see the data; all of this is within the scope of the DAML runtime.

Already kickstarted your enterprise blockchain strategy?

First, amazing! Second, since DAML smart contracts can run on databases or distributed ledgers of your choice (Fabric, Corda, etc.), it is a uniquely flexible solution: you can get started with application building and change the underlying ledger at any point. You can also integrate between multiple instances; for example, if you are running one DAML app on Fabric and another on Corda, the two apps can talk to one another.

The key takeaway here is that most enterprises are held up deciding which ledger meets their needs. With DAML's intuitive, business-workflow-focused approach, you can develop your DAML applications while you select your ledger technology, expediting revenue capture, enabling consistent enterprise reporting, and reducing the burden of reconciliation; everything from the smart contract through to the integration layer is completely portable.

COVID-19 and the associated social distancing and quarantine restrictions have dictated new business standards, forcing companies into a major overhaul of the way they work. Remote working is just one key part of this change, and it affects workplaces and the global workforce significantly.

This cause-effect relationship is now at the forefront, fundamentally transforming existing business models, business practices, business processes, and supporting structures and technology. According to Gartner, “CIOs can play a key role in this process since digital technologies and capabilities influence every aspect of business models.”

Business process management (BPM) was the primary means for investment banks and hedge funds to make internal workflows efficient. In investment banking, BPM focused on the automation of operations management by identifying, modeling, analyzing, and subsequently improving business processes.

Most investment firms have some form of BPM for various processes. For instance, compliance processes appear to have some form of software automation in their workflows at most investment banks and hedge funds. This is because banking functions such as compliance, fraud, and risk management exert pressure to develop cost-effective processes. Wherever automation was not possible, manual, labor-intensive functions were outsourced through KPOs to comparatively cheaper South-East Asian destinations, thereby reducing costs. With COVID-19's social distancing norms in place, this traditional KPO model for handling front-, middle-, and back-office processes is cracking, as it relies on several people working together. There is an urgent need to rethink these processes with a fresh vision and build intelligent, remotely accessible systems for handling processes such as KYC, AML, document digitization, data extraction from unstructured documents, contract management, trade reconciliation, invoice processing, corporate actions, etc.

Now more than ever, organizations need to embrace agility, flexibility, and transformation. As per KPMG, the modern enterprise must become agile and resilient to master disruption and maintain momentum. Optimizing the operations process can transform the business to support lean initiatives that lead to innovation—an aspect that can no longer be ignored. With the help of cross-functional domain experts, organizations can discover and subsequently eliminate inefficiencies in the operations and business processes by identifying the inconsistencies, redundancies, and gaps that can be streamlined.  Intelligent Workflow initiatives and goals align business improvement with business objectives and visibly reduce the probability of negative ROI and impact on projects and initiatives.

Using new technologies like AI and Machine Learning, organizations can quickly adapt and improve with precision and gain the multi-layered visibility needed to drive change and reach strategic goals across an enterprise. The proper use of Artificial Intelligence can solve business case problems and relieve enterprises from various technology or data chokes. AI techniques can help traditional software perform tasks better over time, thus empowering people to focus their time on complex and highly strategic tasks.

Best-Practices for Adoption of AI-Based BPM Solutions

Before moving into AI-based process automation, investment banking business leaders must realize that they need to shift their perspective on emerging technology opportunities. Many AI projects will be deployed well before they return the desired result 100% of the time.

AI ventures require ample algorithmic tuning, so it can take several months to reach a state of high precision and confidence. This is important because banks, in their core business processes, cannot jump into large AI projects and expect seamless functions across the board straightaway. Any large project would result in a temporary impediment to the specific business process or push it into a downtime before the AI project is complete. 

So bankers need to develop a try-test-learn-improve mentality while considering AI, to gain confidence in data science projects. It is also advisable to choose an AI service provider with extensive experience and knowledge of the domain to achieve the desired results. An investment firm should expect a prototype solution in the first iteration, which it then improves by incorporating user feedback and correcting minor issues until it reaches MVP status. Smaller, shorter projects that focus on improving a particular sub-process within the overall workflow are better suited to investment firms. This approach allows small AI teams to develop and deploy projects much faster. Such projects are advisable because they bring a significant positive business impact while not hindering the current workflow and process.

Such attitudinal changes are decisive shifts from the conventional approach to technology that investment banking leaders have taken. This is presumably not something firms can change overnight and requires careful preparation, planning, and a strategy to help the workforce have an incremental improvement approach to business processes. These fundamental shifts demand that leaders prepare, motivate, and equip their workforce to make a change. But leaders must first be prepared themselves before inculcating this approach in their organizations.

Our interactions with CXOs in the investment banking industry indicate that process optimization applications of AI can bring a disproportionate benefit in terms of operational efficiency, sorely needed in these challenging times.

Magic FinServ offers focused process optimization solutions for the financial services industry, leveraging new-generation technologies such as AI and ML across hedge funds, asset managers, and fintechs. This allows financial services institutions to translate business strategy and operational objectives into successful enterprise-level changes, positively impacting revenues and bottom-line growth. With relevant domain knowledge of the capital markets and technological prowess, our agile team builds customized turnkey solutions that can be deployed quickly and demonstrate returns as early as two weeks from the first deployment. Discover the transformation possibilities with our experts on AI solutions for hedge funds and asset managers.

Write to us at mail@magicfinserv.com to book a consultation.

First, a sincere wish for the safety and wellbeing of all, my deepest sympathies for those who fought valiantly, and prayers for those who continue to fight. As our communities fight for lives and livelihoods, we as business leaders shoulder the responsibility of helping our organizations and the world emerge strong and resurgent.

Magic FinServ is one such company: we could, overnight, move our operations into a remote working model with all security and confidentiality norms intact. This was only possible because we are a cloud-first company, effectively running our business on the cloud while supporting numerous clients across geographies. Amid efforts to minimize disruptions to our daily business operations, we were also highly cognizant of the increased security vulnerabilities arising from this paradigm shift. We made some hard and expensive choices to keep our global teams functioning well during severe lockdowns. We improvised, made possible actions we would never have dreamt of, and will continue to make difficult choices in the months to come. There is no "going back to work" as we knew it: several aspects we took for granted will no longer be required, while repeated lockdowns and disruptions become the norm.

The Rising popularity of Cloud

As per a survey conducted by Forbes in early 2020, as many as 50% of financial services leaders had placed cloud BI as their top priority this year. And in a post-COVID world, the cloud is definitely going to be at the center of all technology. The cloud has thus moved quickly from being an IT cost center for a hedge fund to an essential component for running a nimble, agile, and highly scalable organization that operates on a fully variable cost model and, most importantly, is securely accessible to all stakeholders. Smart managers will seize this opportunity to design a whole new organization from a brand new set of principles, as virtual is now our new reality.

Cloud for Hedge Funds

As COVID-19's unprecedented business implications unfolded, a key question emerged that begs an answer: why are only some companies thriving and handling this disruption well? From a technical viewpoint, the companies that are handling it well are either SaaS companies or those that have set themselves up to operate predominantly on the cloud.

For hedge funds, asset managers, and other capital markets entities, the cloud has the capabilities to support front-, middle-, and back-office functions. This includes everything from business applications and client relationship management systems to data management solutions and accounting systems. The cloud emerged as the path of choice, but its considerations for capital markets differ from those of other businesses, owing to industry regulations, complex reporting, the sensitivity of data, and the compliance requisites of the industry.

As a provider of Digital Technology (AI / ML / Blockchain / Cloud) Services, Magic FinServ has a unique proposition that makes deploying and maintaining a capital markets cloud initiative time-bound, cost-effective, and highly secure. Our deep understanding of the vertical enables us to be a strategic partner as our customers design their organization to take on the new challenges and opportunities. 

Getting Started With Cloud: Time for a Health Check

A highly recommended first step towards the cloud for any hedge fund or asset manager is a comprehensive assessment of your organization's cloud readiness and maturity. Assessing your IT infrastructure and operations for business continuity, reliability, scalability, and accessibility, while maintaining the same levels of security and confidentiality as physically secure operations centers, is imperative so you can plan for and weather disruptions and emerge stronger and leaner. "Well begun is half done" holds true for the cloud as well.

At Magic FinServ, we have developed a 128-point assessment offering that measures your organization on these critical aspects. We understand the operational, security, and confidentiality demands of the buy-side industry, and we assess your ability to meet these exacting demands. Increasingly, your customers, investors, and other counterparties will also assess you on these parameters, so a comprehensive assessment study will help you respond to these queries with confidence.

The assessment need not be a time-consuming, expensive affair, since we have customized and optimized our assessment for the buy-side industry. A typical small to midsize operation would need about 2-3 months. It is a relatively small time investment that identifies the gaps and makes recommendations to bridge them, so that your onward cloud journey is smooth, in line with your business objectives, and free of expensive mistakes later.

Migration and Deployment to Cloud

According to ValueWalk, almost 90% of hedge funds will move to the cloud in the next five years. Migration and deployment to the cloud was often seen as an IT cost initiative; however, as firms move from a CapEx to an OpEx preference, it is increasingly becoming a key element of a whole new way of operating.

Most organizations in the financial services industry take a phased approach of moving to the cloud, with multi-year plans. They start with setting the framework and testing the waters with an initial few applications, usually business applications like Email, File Sharing, OMS, Risk, and CRM, moving them to a hosted model. The benefits of adopting this hosted model include gaining a highly available infrastructure of the cloud providers. This is typically followed by migrating data to the cloud and finally moving the bulk of the workload in a lift and shift mode. 

Somewhere in this journey, security is addressed. What is often missing is the aspect of transformation, especially when there is the burden of legacy, monolithic applications in dire need of modernization. A properly planned and orchestrated migration to the cloud is an ideal opportunity to address this long-pending initiative.

Magic FinServ, with its focus on the capital markets vertical, has developed an integrated, incremental, and scalable method of incorporating the cloud into a customer's ecosystem. An integrated approach to applications, infrastructure, and security helps us build a robust and holistic plan. The approach uses as many native services of the cloud provider as possible, making it easily adaptable to the cloud environment and bringing in cost efficiencies. A segmented, incremental approach to applications (microservices), infrastructure, and security (DevSecOps, micro-segmentation) moves prioritized workloads to the cloud in increments, making it possible to use multiple cloud environments, leverage the best of each provider, and integrate well into the hedge fund's specific environment. Implementing infrastructure-as-code makes the cloud environment highly manageable, scalable, simple, and secure.

This systemized, incremental approach has helped entities achieve rapid time to market and a highly optimized cost of deployment while bringing incremental benefits very early in the deployment life cycle. Our objective remains to make this transition as self-funded and sustainable as possible, thereby delivering a high ROI.

Managing the Cloud Environment Effectively

The cloud is democratizing the consumption of IT services and driving innovation. However, if not governed effectively, this sudden freedom and access can send your cloud's running and management costs spiraling while leaving it susceptible to security risk. This democratization has been made possible by public cloud providers offering out-of-the-box, native cloud capabilities. However, these additional capabilities come at the cost of additional spend as well as some loss of flexibility. Optimal management of such capabilities is necessary to maintain a balance between time to market on one hand and cost, flexibility, and security on the other.

Magic FinServ has developed an integrated operations and IT monitoring support capability that provides customers with a SaaS-style model, with the smooth and uninterrupted running of business operations built into the architecture itself. Automated release and deployment, coupled with automated infrastructure testing, make change and configuration management easy and fast. Since uptime is crucial to operational efficiency and profitability, the high-touch support model across L1, L2, and L3 ensures quick resolution of any issues and congruence across functions.

Handling Enterprise Data

A key element of the buy-side industry is the management of enterprise data. This not only affects upfront costs but can also affect business outcomes. Magic FinServ, as a member of the EDM Council, ensures that an enterprise data architect is part of our cloud center of excellence as a best practice. We have been supporting enterprise data initiatives for several buy-side organizations over the years and are therefore abreast of the inconsistencies that can be caused by customizing underlying data models to suit specific organizational needs. Our industry-driven, high-touch support services help manage these inconsistencies, especially as we help move data to the cloud or manage the constant upstream and downstream flows in hybrid cloud systems.

Conclusion

As asset managers and hedge funds make this move to the cloud in a new paradigm, they should ideally make it with trusted, industry-oriented managed service providers, since this is a tectonic shift in their operating model. Ultimately, the move to the cloud is not just a technology choice; it is a business decision.
